\section{Introduction}
\label{sec:intro}
Quantum computation has shown advantages over classical computation in solving some intractable computational problems based on the unique properties of quantum mechanics. In recent years, tremendous advances have been made in quantum technologies, in both theory and experiment, and hundreds of quantum algorithms have been proposed with rigorous mathematical proofs of speedup over the best possible or best known classical counterparts \cite{quantumalgorithm}. When these algorithms are realized in quantum circuits consisting of 1-qubit and 2-qubit gates, however, qubit connectivity often comes as a constraint. Some leading implementation schemes such as superconducting qubits~\cite{IBMQ,ye2019propagation,arute2019quantum,gong2021quantum}, quantum dots \cite{ciorga2000addition,elzerman2003few,petta2004manipulation,schroer2007electrostatically,zajac2016scalable}, and cold atoms \cite{bloch2008quantum,buluta2011natural,bernien2017probing,graham2019rydberg},
can only apply 2-qubit gates on certain pairs of qubits, while other schemes such as trapped ions \cite{leibfried2003quantum,blatt2012quantum,schindler2013quantum,pogorelov2021compact} and photonic quantum computers \cite{wang201818,zhong2020quantum,madsen2022quantum}
may not be subject to the same constraints. While this connectivity constraint is typically viewed as a considerable disadvantage, its extent has yet to be systematically studied.
This paper aims to address the following central question:
\begin{quote}
{\it How does qubit connectivity affect quantum circuit complexity?}
\end{quote}
We will study this question in terms of circuit depth and size.
Let us start with a motivating example. Some early-stage superconducting quantum systems have qubits arranged in a 1D chain and only allow nearest neighbor interactions, which we refer to as being under \textit{path graph constraint} (see \S \ref{sec:preliminaries} for concrete examples).
The 1D chain has very poor connectivity by almost all graph-theoretic measures, such as diameter, average degree, number of edges, and vertex or edge expansion. Compiling a quantum circuit designed for all-to-all qubit connectivity into one compatible with 1D chain connectivity usually results in a blowup of depth by a factor of $O(n^2)$ and of size by a factor of $\Theta(n)$. Indeed, each layer generally contains $\Theta(n)$ two-qubit gates, many of which act on pairs of qubits that are $\Theta(n)$ apart on the chain, so routing each such gate with nearest-neighbor SWAPs costs $\Theta(n)$ extra gates. In this light, it is even appealing to conjecture that these overheads in depth and size are unavoidable for generic quantum circuits. However, this intuition turns out to be wrong, as the following result shows.
\begin{theorem}\label{thm:US_noancilla_path_intro}
Any $n$-qubit unitary operation can be implemented by a quantum circuit of depth $O(4^n/n)$ and size $O(4^n)$ under path graph constraint.
\end{theorem}
Note that these bounds are tight: it is known that even \textit{without any connectivity restrictions}, almost all $n$-qubit unitaries require circuits of depth $\Omega(4^n/n)$ and size $\Omega(4^n)$ \cite{sun2021asymptotically}. Therefore, the above theorem implies that the qubit connectivity constraint does not increase the depth or size complexity (by more than a constant factor) for almost all $n$-qubit unitaries.
\medskip
This somewhat counter-intuitive example calls for more systematic studies of the central question in specific settings. In this paper, we investigate three aspects of this topic:
\begin{enumerate}
\item Graphs: Which constraint graphs affect circuit complexity, and by how much? Is there a simple graph property such as diameter, vertex degree, or expansion constant that characterizes the impact the graph has on circuit depth and size?
\item Space: Recent studies show that ancillary qubits can be used to reduce quantum circuit depth. How does the qubit connectivity constraint affect this?
\item Unitaries: What can we say about specific sets of unitary operations, in terms of worst-case and average-case complexity?
\end{enumerate}
Our main results are described in the following subsections. The results involving ancillary qubits are easiest to state and will be used subsequently, so we begin with those.
\subsection{Ancillary qubits and depth-space trade-offs}
Recently, a number of depth-space trade-offs have been discovered, showing that one can reduce circuit depth by utilizing ancillary qubits \cite{low2018trading,wu2019optimization,sun2021asymptotically,rosenthal2021query,yuan2022optimal}. In particular, it is known that -- assuming all-to-all connectivity -- any $n$-qubit unitary operation $U$ can be implemented by a quantum circuit of depth $O\left(n2^{n/2}+\frac{n^{1/2}2^{3n/2}}{m^{1/2}}\right)$ when $m$ ancillary qubits are available \cite{yuan2022optimal}. It is an open question whether this can be improved to $\mathrm{poly}(n)$ for sufficiently large $m$; indeed, the best known lower bound is $\Omega(n+4^n/(n+m))$ \cite{sun2021asymptotically}.
When qubit connectivity constraints are taken into consideration, for example, when all $n+m$ qubits are arranged in a 1D chain, can we still trade ancilla for depth\footnote{Technically speaking, one should specify where the $n$ non-ancilla qubits are located in the chain. For example, if they are located at the two ends, with the $m$ ancilla in a contiguous block in the middle, then depth $\Omega(n+m)$ is required just for the two groups to ``reach'' each other. Here we consider the case where the ancilla and non-ancilla qubits form two contiguous blocks, a scenario more natural for downstream applications.}? We show:
\begin{theorem}
Any unitary operation on the first $n$ qubits can be implemented by a quantum circuit of depth $O(4^n/(n+m))$ and size $O(4^n)$ under the $(n+m)$-vertex path constraint, for any $m \le O(2^{n/2})$. These bounds are tight. In particular, when no ancillary qubits are available, the required circuit depth is $O(4^n/n)$, the same as for unrestricted circuits.
\end{theorem}
That is, when at most $O(2^{n/2})$ ancilla are available, 1D chain connectivity affects neither the worst-case nor the generic circuit depth or size. On the other hand, we show that when more than $\Theta(2^{n/2})$ ancilla are available, the circuit depth upper and lower bounds are $O\left(2^{3n/2} + \frac{4^n}{n+m}\right)$ and $\Omega\left(2^{n} + \frac{4^n}{n+m}\right)$, respectively. Compare this with the depth upper bound of $O\left(n2^{n/2}+\frac{n^{1/2}2^{3n/2}}{m^{1/2}}\right)$ in the unrestricted case. The effect of connectivity on circuit complexity is therefore sensitive to the number of ancilla available.
\subsection{The effect of constraint graph structure on circuit complexity}
Qubit connectivity can be modelled by an undirected, connected \emph{constraint graph} $G=(V,E)$, with vertices $v\in V$ corresponding to qubits, and edges $(u,v)\in E$ corresponding to pairs of qubits on which one can apply 2-qubit gates. The case where $G=K_n$, i.e., the complete graph on $n$ vertices, describes an $n$-qubit circuit with all-to-all connectivity (or, equivalently, no connectivity constraints).
Current superconducting quantum processors have qubit connectivity corresponding to a range of constraint graphs. In addition to the simple 1D chain, bilinear chains~\cite{IBMQ,ye2019propagation}, 2D grids~\cite{arute2019quantum,gong2021quantum}, brick-wall graphs~\cite{IBMQ} and trees~\cite{IBMQ} have also been realized, and 3D grids may potentially be utilized by multi-layer chips in the future.
We study three families of graphs: (i) $d$-dimensional grids, (ii) $d$-ary trees, and (iii) expanders. Our main results are summarized below.
The first result concerns grid graphs $[n^{1/d}]\times \cdots \times [n^{1/d}]$, where one has access to $m$ ancillary qubits.
\begin{theorem}
Any unitary on the $n$ qubits $[n^{1/d}]\times \cdots \times [n^{1/d}]$ can be implemented by a quantum circuit of $O(4^n/(n+m))$ depth and $O(4^n)$ size under the $[(n+m)^{1/d}]\times \cdots \times [(n+m)^{1/d}]$ graph constraint for any $m\le O(2^{\frac{dn}{d+1}}/d)$, and these bounds are tight. When no ancilla are used, the required circuit depth is $O(4^n/n)$, the same as for unrestricted circuits.
\end{theorem}
We make several remarks. First, although, as stated, this result applies to unitaries acting on the ``corner'' $n$ qubits in $[n^{1/d}]\times \cdots \times [n^{1/d}]$, the location of these $n$ qubits does not really matter as long as they are connected together in the grid graph.
Second, in later sections, we will give circuit constructions for $d$-dimensional grids of general sizes $[n_1]\times \cdots \times [n_d]$, which include bilinear chains as a special case.
Of particular importance are the cases $d=2$ and $d=3$, which correspond to practical implementations of superconducting processors. Third, some graphs, such as the brick-wall graph found in some IBM processors, do not fall into this family, but we show how our circuit construction for the 2D grid can be used to construct a circuit for the brick-wall graph with a similar (and tight) bound. Fourth, for $m$ larger than $O(2^{dn/(d+1)}/d)$, we will also show upper and lower bounds.
The second family of graphs we study are the complete $d$-ary trees. We find:
\begin{theorem}
Any unitary defined on the top $n$ qubits in a complete $d$-ary tree of $n+m$ vertices can be realized by a quantum circuit of depth $\Tilde{O}\left(dn2^n + \frac{(n+d)4^n}{n+m}\right)$ and size $O(4^n)$. In particular, when no ancilla are available, the required circuit depth is $O(4^n)$, and this is optimal up to a factor of $O(n/d)$.
\end{theorem}
As qubit connectivity in real devices can vary greatly (see \cite{IBMQ} for a few examples), we also study circuit size under general graph constraints. We show:
\begin{theorem}\label{thm:US_size_upper_intro}
Any $n$-qubit unitary matrix can be implemented by a quantum circuit of size $O(4^n)$ under arbitrary connected graph constraints.
\end{theorem}
This result is tight, as the circuit size lower bound is $\Omega(4^n)$ even assuming all-to-all connectivity \cite{shende2004minimal}.
This implies that for almost all unitary operations, arbitrary graph constraints do not impact the required circuit size.
The results above, along with others summarized in Table \ref{tab:US_graph}, relate to the challenge of General Unitary Synthesis (GUS), i.e., the implementation of general $n$-qubit unitary operations. As with the size bounds, our circuit constructions apply in the worst case (i.e., they work for all unitary operations), while our lower bounds hold for generic (i.e., almost all) unitaries; this makes our results stronger.
\begin{table}[]
\centering
\resizebox{\textwidth}{!}{\begin{tabular}{c|cc|c}
\hline
Graph & Depth upper bounds / $O(\cdot)$ & Depth lower bounds / $\Omega(\cdot)$ &Optimal range of $m$ \\
\hline
\hline
\multirow{2}*{$(n+m)$-Path} & $4^{3n/4}+\frac{4^n}{n+m}$ & $4^{n/2}+\frac{4^n}{n+m}$ & \multirow{2}*{$0\le m\le O(2^{n/2})$} \\
& [Cor. \ref{coro:US_path_grid} ]& [Cor. \ref{coro:lower_bound_path}] &
\\ \hline
\multirow{2}*{$(n_1,n_2)$-Grid} & $4^{2n/3}+\frac{4^{3n/4}}{(n_2)^{1/2}}+\frac{4^n}{n+m}$ & $\max\left\{4^{n/3}, \frac{4^{n/2}}{(n_2)^{1/2}},\frac{4^n}{n+m}\right\}$ & \multirow{2}*{$0\le m\le O\Big(\frac{2^n}{2^{n/3}+\frac{2^{n/2}}{(n_2)^{1/2}}}\Big)$} \\
& [Cor. \ref{coro:US_path_grid} ] &[Lem. \ref{lem:lower_bound_grid_k_US}] & \\
\hline
\multirow{2}*{$(n_1,\ldots,n_d)$-Grid} & $n^22^n+d4^{\frac{(d+2)n}{2(d+1)}}+\max\limits_{j\in\{2,\ldots,d\}}\Big\{\frac{d4^{(j+1)n/(2j)}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}+\frac{4^n}{n+m}$ & $ n+4^{\frac{n}{d+1}}+\max\limits_{j\in [d]}\Big\{\frac{4^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}$& \multirow{2}*{$0\le m\le O\Big(\frac{2^n}{n^2+d2^{\frac{n}{d+1}}+\max\limits_{j\in\{2,\ldots,d\}}\big\{\frac{d2^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\big\}}\Big)$} \\
&[Thm. \ref{thm:US_grid}] & [Lem. \ref{lem:lower_bound_grid_k_US}] & \\
\hline
\multirow{2}*{Binary Tree} & $n^2\log^2(n)2^n+\frac{\log(n)4^n}{n+m}$& $\max\left\{n,\frac{4^n}{n+m}\right\}$ & \multirow{2}*{optimal up to $\log (n)$ when $m \le O(2^n/n^2\log n)$}\\
& [Thm. \ref{thm:US_binarytree}] &[Cor. \ref{coro:lower_bound_binary}] & \\
\hline
\multirow{2}*{$d$-ary Tree} & $n2^nd\log_d (n+m)\log_d(n+d)+\frac{(n+d)\log_d(n+d) 4^{n}}{n+m}$& $\max\left\{n,\frac{d4^n}{n+m}\right\}$& \multirow{2}*{optimal up to $n\log (n)$ when $m \le O(2^n/d\log (n))$} \\
& [Thm. \ref{thm:US_darytree}] & [Lem. \ref{lem:depth_lower_dary}] &\\
\hline
\multirow{2}*{$(n+m)$-Star} & $4^n$ & $4^n$ & \multirow{2}*{$m\ge 0$} \\
&[Cor. \ref{coro:US_star}] &[Cor. \ref{coro:depth_lower_star}] &\\
\hline
\multirow{2}*{$(n+m)$-Expander} & $n^22^n+\frac{\log(m)4^n}{n+m}$ & $\max\left\{n,\frac{4^n}{n+m}\right\}$ & \multirow{2}*{optimal up to $n$ when $m \le O(2^n/n)$}\\
&[Thm. \ref{thm:US_expander_graph}] &[Prop. \ref{prop:depth_lowerbound_graph}] &\\
\hline
\end{tabular}}
\caption{Upper and lower bounds on circuit depth required to implement $n$-qubit general unitary synthesis (GUS) under graph constraints, using $m$ ancillary qubits. The $(n+m)$-Path denotes a path consisting of $(n+m)$ vertices. The $(n+m)$-Expander denotes an expander graph consisting of $(n+m)$ vertices. The $(n_1,\ldots,n_d)$-Grid is a $d$-dimensional grid of size $n_1\times n_2\times \cdots \times n_d$ with $n_1\ge n_2\ge\cdots \ge n_d\ge 1, n\ge 1$ and $m+n=\prod_{i=1}^dn_i$. In the (complete) $d$-ary tree, every non-leaf node has exactly $d$ children. The parameter $d$ can be as small as $2$ (a binary tree), and as large as $n+m-1$ (an $(n+m)$-Star). The last column gives ranges of $m$ where our upper and lower bounds match.}
\label{tab:US_graph}
\end{table}
\subsection{Circuit complexity for special families of unitaries}
While the above results for GUS tell us what we can hope for in a generic solution for all $n$-qubit unitaries, special families of unitary operations warrant further study. Firstly, by utilizing the structure of particular unitaries, one may design better constructions (in particular, we desire $\mathrm{poly}(n)$-depth circuits where possible). Secondly, by focusing on special tasks, one may derive tighter circuit complexity bounds, which may elucidate the effects of connectivity constraints. We study three special families of unitary operations:
\begin{enumerate}
\item Diagonal unitaries.
\item 2-by-2 block diagonal unitaries.
\item Quantum state preparation (QSP) unitaries, i.e., parameterized circuits $U(\theta)$ that can generate any $n$-qubit state $\ket{\psi}$ from the initial state $\ket{0^n}$\footnote{That is, $\left\{U(\theta)\ket{0}^{\otimes n}: \theta\in \mathbb{R}^D\right\}$ contains the set of all $n$-qubit states, where $D$ is the dimension of the parameter vector $\theta$. When $m$ ancillary qubits are available, the requirement becomes $\left\{U(\theta)\ket{0^{n+m}}: \theta\in \mathbb{R}^D\right\} \supseteq \{\ket{\psi}\ket{0^m}: \ket{\psi} \text{ is an $n$-qubit quantum state}\}$.}.
\end{enumerate}
These three families are closely related, and have all been extensively studied in quantum circuit theory.
For brevity, here we discuss QSP only (for diagonal or 2-by-2 block diagonal unitaries, refer to \cite{bergholm2005quantum,mottonen2005decompositions,plesch2011quantum}). QSP is an important subroutine in many quantum machine learning algorithms \cite{lloyd2014quantum,kerenidis2017quantum,rebentrost2018quantum,harrow2009quantum,wossnig2018quantum,kerenidis2019q,kerenidis2020quantum,rebentrost2014quantum} and Hamiltonian simulation algorithms \cite{berry2015simulating,low2017optimal,low2019hamiltonian,berry2015hamiltonian}, and has been the subject of increasing attention \cite{araujo2020divide,zhang2021low, zhang2021parallel,sun2021asymptotically,yuan2022optimal,rosenthal2021query,johri2021nearest}.
For QSP, we can again consider circuit size under general graph constraints, and circuit depth for grids and complete $d$-ary tree graphs. We have the following results.
\begin{theorem}
QSP unitaries can be implemented by a quantum circuit of size $O(2^n)$ under any connected graph constraint.
\end{theorem}
This bound is tight, as QSP requires size $\Omega(2^n)$ even without any connectivity constraints \cite{plesch2011quantum}; hence the presence of constraints does not increase the required circuit size.
For $d$-dimensional grids, we prove asymptotically optimal circuit depth requirements for any constant $d$, and almost optimal results for larger $d$:
\begin{theorem}
QSP unitaries can be implemented by a quantum circuit of depth $O\left(2^{n/2} + \frac{2^n}{n+m}\right)$ under 1D chain constraint, depth $O\left(2^{n/3} + \frac{2^n}{n+m}\right)$ under 2D grid constraint, and depth $O\left(n^3+d2^{\frac{n}{d+1}}+\frac{2^n}{n+m}\right)$ under $d$-dimensional grid $[(n+m)^{1/d}]\times \cdots \times [(n+m)^{1/d}]$ constraint. These bounds are tight for any constant $d$, and off by at most a factor of $d$ for $d(n)=\omega(1)$.
\end{theorem}
For trees, we give circuit constructions whose depth is optimal if $m$ is not too large.
\begin{theorem}
QSP unitaries can be implemented by a quantum circuit of depth $\tilde O\left(n^2 2^{n} + 4^n/(n+m)\right)$ under complete binary tree constraint, depth $\tilde O\left(d n 2^n + (n+d)4^{n}/(n+m)\right)$ on complete $d$-ary tree constraint, and depth $O\left(4^n\right)$ under star graph constraint. The bound for the star graph is tight, and the bound for general complete $d$-ary trees is tight for $m = O(2^n/n^2 d)$.
\end{theorem}
\begin{table}[]
\centering
\resizebox{\textwidth}{!}{\begin{tabular}{c|c c |c}
\hline
Graph & Depth upper bounds / $O(\cdot)$ & Depth lower bounds/ $\Omega(\cdot)$ & Optimal range of $m$ \\
\hline
\hline
\multirow{2}*{$(n+m)$-Path} & $2^{n/2}+\frac{2^n}{n+m}$ & $2^{n/2}+\frac{2^n}{n+m}$ & \multirow{2}*{$m\ge 0$} \\
&[Cor. \ref{coro:US_path_grid} ] &[Cor. \ref{coro:lower_bound_path}] & \\
\hline
\multirow{2}*{$(n_1,n_2)$-Grid} & $2^{n/3}+\frac{2^{n/2}}{(n_2)^{1/2}}+\frac{2^n}{n+m}$ & $\max\big\{2^{n/3}, \frac{2^{n/2}}{(n_2)^{1/2}},\frac{2^n}{n+m}\big\}$ & \multirow{2}*{$m\ge 0$} \\
&[Cor. \ref{coro:US_path_grid} ] &[Lem. \ref{lem:lower_bound_grid_k}] & \\
\hline
\multirow{2}*{$(n_1,\ldots,n_d)$-Grid } & $n^3+d2^{\frac{n}{d+1}}+\max\limits_{j\in\{2,\ldots,d\}}\Big\{\frac{d2^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}+\frac{2^n}{n+m}$ & \multirow{2}*{$ n+2^{\frac{n}{d+1}}+\max\limits_{j\in [d]}\big\{\frac{2^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\big\}$ } & if $d$ is a constant, $m\ge 0$; \\
&[Thm. \ref{thm:QSP_grid}] &[Lem. \ref{lem:lower_bound_grid_k}] & otherwise, $0\le m\le O\Big(\frac{2^n}{n^3+d2^{\frac{n}{d+1}}+\max\limits_{j\in\{2,\ldots,d\}}\big\{\frac{d2^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\big\}}\Big)$\\
\hline
\multirow{3}*{Binary Tree} & $n^2\log^2(n)+\frac{\log(n)2^n}{n+m}$, if $m\le o(2^n)$ & $\max\left\{n,\frac{2^n}{n+m}\right\}$ & \multirow{3}*{optimal up to $\log (n)$ when $m \le O(2^n/n^2\log n)$} \\
& $n^2\log(n)$, if $m\ge\Omega(2^n)$ & \\
&[Thm. \ref{thm:QSP_binarytree_improvement}] & [Cor. \ref{coro:lower_bound_binary}] & \\
\hline
\multirow{2}*{$d$-ary Tree} & $n^2d\log_d (n+m)\log_d(n+d)+\frac{(n+d)\log_d(n+d) 2^{n}}{n+m}$ & $\max\left\{n,\frac{d2^n}{n+m}\right\}$ & optimal up to $n\log (n)$ when $m \le O(2^n/nd\log n)$ \\
&[Thm. \ref{thm:QSP_darytree}] &[Lem. \ref{lem:depth_lower_dary}] & \\
\hline
\multirow{2}*{$(n+m)$-Star} & $2^n$ & $2^n$& \multirow{2}*{$m\ge 0$} \\
& [Cor. \ref{coro:QSP_star}] & [Cor. \ref{coro:depth_lower_star}] & \\
\hline
\multirow{2}*{$(n+m)$-Expander} & $n^3+\frac{\log(m)2^n}{n+m}$ & $\max\left\{n,\frac{2^n}{n+m}\right\}$ & \multirow{2}*{optimal up to $n$ when $m \le O(2^n/n^2)$} \\
&[Thm. \ref{thm:QSP_expander}] &[Prop. \ref{prop:depth_lowerbound_graph}] &\\
\hline
\end{tabular}}
\caption{Upper and lower bounds on circuit depth required to implement $n$-qubit quantum state preparation (QSP) under graph constraints using $m$ ancillary qubits.
}
\label{tab:QSP_graph}
\end{table}
Our results for QSP are summarized in Table \ref{tab:QSP_graph}. Compared to the results for general unitary synthesis (Theorems 1--5), the bounds we obtain for QSP are tighter, which allows us to examine the effect of connectivity constraints more closely.
First, connectivity constraints make it harder to trade space for depth. Without connectivity constraints, tight bounds for QSP are known for any number $m$ of ancillary qubits~\cite{yuan2022optimal}: The optimal circuit depth is $O(n+2^n/(n+m))$ and the optimal size is $O(2^n)$. In particular, QSP circuit depth is polynomial (in fact, linear) in $n$ when sufficiently many ancilla are available. However, both constant-dimensional grid and $d$-ary tree constraints cause the required circuit depth to become exponential in $n$, regardless of the number of ancillary qubits.
Second, more connectivity generally implies smaller depth, with the quantitative characterization depending on the graph. In both $d$-dimensional grids and $d$-ary trees, as $d$ grows (with the number of vertices roughly fixed), the diameter decreases while the degree and expansion increase; intuitively, the graph becomes more connected. For grids, our results show that the circuit depth decreases with $d$, consistent with the intuition that greater connectivity enables shallower circuits. However, for $d$-ary trees, the required circuit depth increases slightly with $d$, reaching its maximum when $d$ takes the largest possible value (i.e., a star graph). This is because the size of a maximum matching also plays an important role in circuit depth: if the constraint graph does not contain a large matching, it limits how many two-qubit gates can be applied in parallel. Thus, it seems difficult to characterize the effect of a graph on quantum circuit complexity by a single simple measure of connectivity.
\subsection{Related work.}
\begin{table}\small
\centering
\begin{tabular}{c|cc|ccc}
\hline
Problem & \multicolumn{2}{c|}{Circuit Size} &\multicolumn{3}{c}{Circuit Depth}\\
\hline
\hline
\multirow{4}*{GUS} & $O(n^34^n)$ &\cite{barenco1995elementary} & $O\left(n2^n+\frac{4^n}{n+m}\right)$& $~m\ge 0$ & \cite{sun2021asymptotically}\\
& $O(n4^n)$ &\cite{barenco1995elementary,knill1995approximation} & $O(n2^{n/2})$&$m=\Theta(n4^n)$& \cite{rosenthal2021query}\\
& $O(4^n)$ &\cite{vartiainen2004efficient,mottonen2005decompositions}& $O\left(n2^{n/2}+\frac{n^{1/2}2^{3n/2}}{m^{1/2}}\right)$&$\Omega(2^n)\le m\le O(\frac{4^n}{n})$ &\cite{yuan2022optimal}\\
& $\Omega(4^n)$ &\cite{shende2004minimal}& $\Omega\left(n+\frac{4^n}{n+m}\right)$ &$m\ge 0$ &\cite{sun2021asymptotically}\\
\hline
\multirow{2}*{QSP} & $O(2^n)$ & \cite{nielsen2002quantum,plesch2011quantum,bergholm2005quantum} &$O\left(n+\frac{2^n}{n+m}\right)$& $m\ge 0$ &\cite{sun2021asymptotically,yuan2022optimal}\\
& $\Omega(2^n)$ & \cite{plesch2011quantum} &$\Omega\left(n+\frac{2^n}{n+m}\right)$& $m\ge 0$ &\cite{sun2021asymptotically}\\
\hline
\end{tabular}
\caption{Previous results on circuit complexity for GUS and QSP without qubit connectivity constraints. Here $n$ is the number of qubits of the GUS or QSP instance, and $m$ is the number of ancillary qubits.}
\label{tab:related-work}
\end{table}
Circuit complexity for GUS and QSP in the absence of graph constraints has been widely investigated (see Tab.~\ref{tab:related-work}).
There are also some related studies which prepare encodings other than the standard quantum state $\sum_{k=0}^{2^n-1}v_k |k\rangle$. In \cite{johri2021nearest}, the authors utilized $d2^{n/d}$ qubits $\ket{k_1,k_2,\ldots,k_d}$ to encode $\ket{k}$ in a quantum state; the circuit depth required to prepare the corresponding $d2^{n/d}$-qubit quantum state is $O\left(\frac{n}{d} 2^{n-n/d}\right)$. Araujo \textit{et al.} \cite{araujo2020divide} prepared a quantum state $\sum_{x\in \{0,1\}^n}v_x |x\rangle |\text{garbage}_x\rangle$ in depth $O(n^2)$, in which $\ket{x}\ket{\text{garbage}_x}$ is a basis state on $O(2^n)$ qubits.
Some circuit constructions for QSP and for the synthesis of specific unitaries are known under the path constraint. In \cite{mottonen2005decompositions}, it is shown that any $n$-qubit uniformly controlled gate (UCG) can be implemented with circuit size $O(2^n)$ under the path constraint; building on these UCG circuits, the same paper obtains QSP circuits of size $O(2^n)$ under the path constraint.
Ref.~\cite{rosenbaum2013optimal} showed that the depth and size required for a general $n$-qubit-controlled 1-qubit gate are $\Theta(n^{1/k})$ and $\Theta(n)$, respectively, under the $n^{1/k}\times \cdots \times n^{1/k}$ grid constraint; the same bounds hold for the Fanout operation with $n$ target qubits. Ref.~\cite{herbert2018depth} showed that there exist $n$-qubit circuits for which a multiplicative depth overhead of $\Omega(\log(n))$ is necessary under certain constant-degree graph constraints, and that there exist constant-degree graphs $G$ for which a logarithmic depth overhead is sufficient for any circuit on $G$.
\subsection{Organization.}
The rest of this paper is organized as follows. In \S \ref{sec:preliminaries}, we introduce notation and review some previous results.
In \S \ref{sec:diag_without_ancilla} and \S \ref{sec:diag_with_ancilla} we give circuit constructions for diagonal unitary matrices under various graph constraints, which will be used in the two sections that follow. We prove circuit depth and size upper bounds for quantum state preparation and general unitary synthesis under various graph constraints in \S \ref{sec:QSP_US_graph}, and prove corresponding lower bounds in \S \ref{sec:QSP_US_lowerbound}. We conclude in \S \ref{sec:conlusion} and discuss a few open questions for future study.
\section{Preliminaries}
\label{sec:preliminaries}
\subsection{Notation}
Let $[n]$ denote the set $\{1,2,\cdots,n\}$. All logarithms $\log(\cdot)$ are base 2 in this paper. Let $\mathbb{I}_n\in\mathbb{R}^{2^n\times 2^n}$ be the $n$-qubit identity operator. For any $x=x_1\cdots x_s\in\{0,1\}^s$, $y=y_1\cdots y_t\in\{0,1\}^t$, $xy$ denotes the $(s+t)$-bit string $x_1\cdots x_s y_1\cdots y_t\in\{0,1\}^{s+t}$. For $x=x_1\cdots x_n$, $ y=y_1y_2\cdots y_n$, the inner product of $x,y$ is $\langle x,y\rangle:=\oplus_{i=1}^n x_i\cdot y_i$, where addition $\oplus$ and multiplication $\cdot$ are over the field $\mathbb{F}_2$. We also use $x\oplus y$ to denote the bit-wise XOR of the two strings $x$ and $y$. Let $\epsilon$ denote the empty string. Define the set $\mbox{$\{0,1\}$}^{\le k}:=\bigcup_{i=0}^{k}\mbox{$\{0,1\}$}^{i}$, in which $\mbox{$\{0,1\}$}^0:=\{\epsilon\}$. For any quantum state $\ket{\psi}$ and qubit set $S$, $\ket{\psi}_S$ denotes the reduced quantum state corresponding to the qubits in $S$. If $S=\{i\}$, we simply write $\ket{\psi}_{i}$ for $\ket{\psi}_{\{i\}}$. For sets $S$ and $T$, define $S-T:=\{x: x\in S \text{~and~} x\notin T\}$. For a set $S\subseteq
V$, set $\overline{S}:=V-S$.
An $n$-qubit quantum circuit implements a $2^n\times 2^n$ unitary transformation by a sequence of gates. The set of all single qubit gates and the 2-qubit CNOT gate can be used to implement any unitary transformation, and is therefore said to be universal for quantum computation. We will refer to quantum circuits consisting of only these gates as \textit{standard quantum circuits}, and, unless otherwise stated, all circuits in this paper are standard quantum circuits.
\subsection{Graph constraints}
Some implementation schemes of real quantum computers have a notion of \textit{connectivity}. That is, two-qubit gates may only be implementable between certain pairs of qubits. This can be modelled by a graph $G = (V,E)$ with vertex and edge sets $V$ and $E$, respectively, where a two-qubit gate can be applied to qubits $(i,j)$ if and only if $(i,j)\in E$. We refer to $G$ as the \textit{constraint graph} of the circuit, and the corresponding circuit is said to be \textit{under $G$ constraint}. If $G$ is arbitrary, we say that the circuit is under \textit{arbitrary graph constraint}. For any graph $G$, $d_G(u,v)$ denotes the distance between vertices $u$ and $v$ in $G$, i.e., the number of edges on a shortest path from $u$ to $v$. The subscript $G$ is dropped when no confusion arises. The diameter of $G$ is defined to be $\diam(G)\defeq \max\limits_{u,v\in V}d(u,v)$. The degree $\deg(v)$ of a vertex $v\in V$ is the number of edges incident to it in $G$.
In this paper we study circuit constructions for general constraint graphs, as well as certain specific families of graphs. A central question is: \textit{What properties of the constraint graph influence quantum circuit complexity the most?} To study this, we investigate three families of graphs: (i) grids, (ii) trees, and (iii) expanders. We will also consider the general case with $m$ ancillary qubits available.
\paragraph{$d$-dimensional grids.} These are graphs with the following vertex and edge sets (see Fig.~\ref{fig:d-grids}):
\begin{align*}
V&=\left\{v_{i_1,i_2,\ldots,i_d}:i_k\in[n_k] \text{~for~all~} k\in[d]\right\}, \\
E&=\left\{(v_{i_1,i_2,\ldots,i_d},v_{i_1+1,i_2,\ldots,i_d}), (v_{i_1,i_2,\ldots,i_d},v_{i_1,i_2+1,\ldots,i_d}),\ldots,(v_{i_1,i_2,\ldots,i_d},v_{i_1,i_2,\ldots,i_d+1}): i_k\in[n_k-1] \text{~for~all~} k\in[d]\right\}.
\end{align*}
\begin{definition}
A quantum circuit on $n$ qubits will be said to be under $\Grid^{n_1,n_2, \ldots, n_d}_{n}$ constraint if the constraint graph is a $d$-dimensional grid with $\prod_{k=1}^d n_k = n$. Without loss of generality we assume that $n_1 \ge n_2 \ge \cdots\ge n_d$. We will refer to the case $d=1$ as $\Path_{n}$ (see Fig.~\ref{fig:n-path}). \end{definition}
\begin{figure}[!ht]
\begin{subfigure}{0.5\textwidth}
\centering
\begin{tikzpicture}
\draw (0,0) -- (3.2,0)
(3.8,0) -- (5,0);
\draw [fill=black] (0,0) circle (0.05)
(1,0) circle (0.05)
(2,0) circle (0.05)
(3,0) circle (0.05)
(4,0) circle (0.05)
(5,0) circle (0.05);
\draw (3.2,0) node[anchor=west]{\scriptsize $\cdots$};
\draw (0,-0.3) node{\scriptsize $v_1$}
(1,-0.3) node{\scriptsize $v_2$}
(2,-0.3) node{\scriptsize $v_3$}
(3,-0.3) node{\scriptsize $v_4$}
(4,-0.3) node{\scriptsize $v_{n-1}$}
(5,-0.3) node{\scriptsize $v_{n}$};
\end{tikzpicture}
\caption{The $1$-dimensional path $\Path_{n}$.}
\label{fig:n-path}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\begin{tikzpicture}[scale=0.8]
\node (a) at (-0.2,3.2) {};
\node (b) at (5.2,3.2) {};
\draw[decorate,decoration={brace,raise=5pt}] (a) -- (b);
\node (a) at (-0.2,-0.2) {};
\node (b) at (-0.2,3.2) {};
\draw[decorate,decoration={brace,raise=5pt}] (a) -- (b);
\draw (0,0) -- (3.2,0) (3.8,0) -- (5,0)
(0,1) -- (3.2,1) (3.8,1) -- (5,1)
(0,2) -- (3.2,2) (3.8,2) -- (5,2)
(0,3) -- (3.2,3) (3.8,3) -- (5,3);
\draw [fill=black] (0,0) circle (0.05) (1,0) circle (0.05) (2,0) circle (0.05) (3,0) circle (0.05) (4,0) circle (0.05) (5,0) circle (0.05);
\draw [fill=black] (0,1) circle (0.05) (1,1) circle (0.05) (2,1) circle (0.05) (3,1) circle (0.05) (4,1) circle (0.05) (5,1) circle (0.05);
\draw [fill=black] (0,2) circle (0.05) (1,2) circle (0.05) (2,2) circle (0.05) (3,2) circle (0.05) (4,2) circle (0.05) (5,2) circle (0.05);
\draw [fill=black] (0,3) circle (0.05) (1,3) circle (0.05) (2,3) circle (0.05) (3,3) circle (0.05) (4,3) circle (0.05) (5,3) circle (0.05);
\draw (3.2,0) node[anchor=west]{\scriptsize $\cdots$} (3.2,1) node[anchor=west]{\scriptsize $\cdots$} (3.2,2) node[anchor=west]{\scriptsize $\cdots$} (3.2,3) node[anchor=west]{\scriptsize $\cdots$};
\draw (0,0) -- (0,1.2) (0,1.8) -- (0,3) (1,0) -- (1,1.2) (1,1.8) -- (1,3) (2,0) -- (2,1.2) (2,1.8) -- (2,3) (3,0) -- (3,1.2) (3,1.8) -- (3,3) (4,0) -- (4,1.2) (4,1.8) -- (4,3) (5,0) -- (5,1.2) (5,1.8) -- (5,3);
\draw (0,2) node[anchor=north]{\scriptsize $\vdots$} (1,2) node[anchor=north]{\scriptsize $\vdots$} (2,2) node[anchor=north]{\scriptsize $\vdots$} (3,2) node[anchor=north]{\scriptsize $\vdots$} (4,2) node[anchor=north]{\scriptsize $\vdots$} (5,2) node[anchor=north]{\scriptsize $\vdots$};
\draw (2.5,3.7) node{\scriptsize $n_2$ vertices} (-1.3,1.5) node{\scriptsize $n_1$ vertices~~};
\end{tikzpicture}
\caption{The 2-dimensional grid $\Grid_{n}^{n_1,n_2}$.}
\label{fig:m1-m_2-grid}
\end{subfigure}
\caption{Examples of $d$-dimensional grids with $d=1$ and $2$. }
\label{fig:d-grids}
\end{figure}
\paragraph{$d$-ary trees.} The complete $d$-ary tree is a tree in which every non-leaf node has exactly $d$ children (see Fig.~\ref{fig:tree}). {The \textit{depth} of a tree is the distance between the root and the furthest leaf. Note that a depth-$h$ tree has $h+1$ layers of nodes.}
\begin{definition}
A quantum circuit on $n$ qubits will be said to be under $\Tree_{n}(d)$ constraint if the constraint graph is a complete $d$-ary tree of depth $h$ with $n=\sum_{i=0}^h d^i$ vertices. $\Tree_{n}(2)$ corresponds to a binary tree, and the case $d = n -1$ will be denoted $\Star_{n}$.
\end{definition}
\begin{figure}[!hbt]
\begin{subfigure}{0.5\textwidth}
\centering
\begin{tikzpicture}
\draw [fill=black] (0,0) circle (0.05) (-1,-1) circle (0.05) (0,-1) circle (0.05) (1,-1) circle (0.05);
\draw [fill=black] (-1.3,-2) circle (0.05) (-1,-2) circle (0.05) (-0.7,-2) circle (0.05) (-0.3,-2) circle (0.05) (0,-2) circle (0.05) (0.3,-2) circle (0.05) (0.7,-2) circle (0.05) (1,-2) circle (0.05) (1.3,-2) circle (0.05);
\draw [fill=black] (-1.6,-3) circle (0.05) (-1.3,-3) circle (0.05) (-1,-3) circle (0.05) (-0.3,-3) circle (0.05) (0,-3) circle (0.05) (0.3,-3) circle (0.05) (1.6,-3) circle (0.05) (1.3,-3) circle (0.05) (1,-3) circle (0.05);
\draw (0,0)--(-1,-1) (0,0)--(0,-1) (0,0)--(1,-1);
\draw (-1,-1)--(-1.3,-2) (-1,-1)--(-1,-2) (-1,-1)--(-0.7,-2);
\draw (0,-1)--(-0.3,-2) (0,-1)--(0,-2) (0,-1)--(0.3,-2);
\draw (1,-1)--(0.7,-2) (1,-1)--(1,-2) (1,-1)--(1.3,-2);
\draw (-1.3,-2)--(-1.6,-3) (-1.3,-2)--(-1.3,-3) (-1.3,-2)--(-1,-3);
\draw (0,-2)--(-0.3,-3) (0,-2)--(0,-3) (0,-2)--(0.3,-3);
\draw (1.3,-2)--(1.6,-3) (1.3,-2)--(1.3,-3) (1.3,-2)--(1,-3);
\draw (-1,-2)--(-1.1,-2.3) (-1,-2)--(-1,-2.3) (-1,-2)--(-0.9,-2.3) ;
\draw (-0.7,-2)--(-0.8,-2.3) (-0.7,-2)--(-0.7,-2.3) (-0.7,-2)--(-0.6,-2.3) ;
\draw (-0.3,-2)--(-0.4,-2.3) (-0.3,-2)--(-0.3,-2.3) (-0.3,-2)--(-0.2,-2.3);
\draw (0.3,-2)--(0.2,-2.3) (0.3,-2)--(0.3,-2.3) (0.3,-2)--(0.4,-2.3);
\draw (0.7,-2)--(0.6,-2.3) (0.7,-2)--(0.7,-2.3) (0.7,-2)--(0.8,-2.3);
\draw (1,-2)--(0.9,-2.3) (1,-2)--(1,-2.3) (1,-2)--(1.1,-2.3);
\draw (-0.7,-2.7) node{\scriptsize $\cdots$} (0.7,-2.7) node{\scriptsize $\cdots$};
\draw (-0.3,-0.3) arc(-135:-25:0.35);
\draw (-0.15,-1.3) arc(-150:-50:0.2);
\draw (-1.15,-1.3) arc(-150:-50:0.2);
\draw (0.85,-1.3) arc(-150:-50:0.2);
\draw (-1.45,-2.3) arc(-150:-50:0.2);
\draw (1.15,-2.3) arc(-150:-50:0.2);
\draw (-0.15,-2.3) arc(-150:-50:0.2);
\draw (1,0) node{\scriptsize $d$ children};
\draw (2,-1) node{\scriptsize $d$ children};
\draw (2.2,-2) node{\scriptsize $d$ children};
\end{tikzpicture}
\caption{The general $d$-ary tree $\Tree_{n}(d)$.}
\label{fig:darytree}
\end{subfigure}
\begin{subfigure}{0.5\textwidth}
\centering
\begin{tikzpicture}
\draw (-1,0) -- (1,0) (0,-1) -- (0,1) (0.75, 0.75) -- (-0.75, -0.75) (-0.75, 0.75) -- (0.75, -0.75);
\draw [fill=black] (0,0) circle (0.05) (-1,0) circle (0.05) (1,0) circle (0.05) (0,-1) circle (0.05) (0,1) circle (0.05) (0.75, 0.75) circle (0.05) (-0.75, -0.75) circle (0.05) (-0.75, 0.75) circle (0.05) (0.75, -0.75) circle (0.05);
\draw (0.3,-0.1) node{\scriptsize $v_{n}$};
\draw (-1,0) [anchor=east] node{\scriptsize $v_1$};
\draw (-0.75, -0.75) [anchor=east] node{\scriptsize $v_{n-1}$};
\draw (-0.75, 0.75) [anchor=east] node{\scriptsize $v_{2}$};
\draw (0,1) [anchor=south] node{\scriptsize $v_{3}$};
\draw (1,0) [anchor=west] node{\scriptsize $v_5$};
\draw (0.75, 0.75) [anchor=west] node{\scriptsize $v_{4}$};
\draw (0.75, -0.75) [anchor=west] node{\scriptsize $v_{n-3}$};
\draw (0,-1) [anchor=north] node{\scriptsize $v_{n-2}$};
\draw (0.7, -0.3) [anchor=west] node{\scriptsize $\vdots$};
\end{tikzpicture}
\caption{The $n$-vertex star graph $\Star_{n}$.}
\label{fig:star_n}
\end{subfigure}
\caption{Examples of $d$-ary trees.}
\label{fig:tree}
\end{figure}
\medskip
\paragraph{Expander graphs.}
\begin{definition}[Vertex expansion]\label{def:expansion}
Let $G=(V,E)$ be a connected graph. The vertex expansion of $G$ is defined as
\begin{equation*}
h_{out}(G):=\min_{\substack{S\subseteq V,\\ 0<|S|<|V|/2}}\frac{|\partial_{out}(S)|}{|S|},
\end{equation*}
where $\partial_{out}(S):=\{v\in V-S : \exists u\in S, \text{~s.t.~} (u,v)\in E\}$.
\end{definition}
\begin{definition}[Expanders]
Let $\mathcal{G} = (G_n)_{n\in\mathbb{N}}$ be a sequence of graphs, where $G_n$ has $n$ vertices. If, for all $n$,
\begin{equation*}
h_{out}(G_n) \ge c
\end{equation*}
for some constant $c > 0$, then we say that $\mathcal{G}$ is a family of expanders. We call any $G=(V,E)$ an expander if it belongs to a family of expanders.
\end{definition}
\begin{definition}
A quantum circuit on $n$ qubits will be said to be under $\Expander_{n}$ constraint if the constraint graph is an $n$-vertex expander.
\end{definition}
For explicit constructions of families of expanders, see e.g., \cite{hoory2006expander}. The following two lemmas will prove useful later.
\begin{lemma}[\cite{hoory2006expander}]\label{lem:distance}
Let $G=(V,E)$ be an expander. Then, the distance between any two vertices in $G$ is $O(\log(|V|))$.
\end{lemma}
\begin{lemma}\label{lem:graph_property}
Let $G=(V,E)$ be a graph with vertex expansion $h_{out}(G) = h$. Let $S\subset V$ have size at most $|V|/2$.
Define a bipartite graph $B=(S\cup\partial_{out}(S),E')$, where $E'=\{(u,v)\in E: u\in S,v\in\partial_{out}(S)\}$. Then, the size of any maximal matching for $B$ is at least $\frac{h}{h+2} |S|$. In particular, if $G$ is an expander, then the size of any maximal matching in $B$ is $\Omega(|S|)$.
\end{lemma}
\begin{proof}
Let $M:=\left\{(u_i,w_i): u_i\in S, w_i\in\partial_{out}(S),~i\in[k]\right\}$ be a maximal matching in $B$, i.e., $M$ is not a proper subset of any other matching in $B$. Let $U = \{u_i: i\in [k]\}$ and $W = \{w_i: i\in [k]\}$. If $U = S$, then the matching $M$ has size $|S|$, and we have proved the claim. Now consider the case that $S - U$ is not empty. Since $M$ is maximal, there are no edges between $S-U$ and $\partial_{out}(S)-W$.
The neighbors of $S-U$ must therefore lie in the set $U\cup W$. This implies that $|\partial_{out}(S-U)|$ is at most $2k$. Since $0<|S-U|<|V|/2$, by the definition of vertex expansion, we have
\[h \le \frac{|\partial_{out}(S-U)|}{|S-U|}\le \frac{2k}{|S|-k}.\]
Rearranging the terms gives $k\ge \frac{h}{h+2}|S|$, as claimed.
\end{proof}
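As a concrete sanity check, the following brute-force Python sketch (ours, purely illustrative and exponential-time) computes the vertex expansion of a small graph and verifies the $\frac{h}{h+2}|S|$ bound of Lemma~\ref{lem:graph_property} on an $8$-cycle; note that any greedy matching is maximal, which is all the lemma requires.
\begin{verbatim}
# Toy verification of Lemma [graph_property] (illustrative sketch only).
from itertools import combinations

def boundary(S, adj):                        # outer boundary of S
    return {v for u in S for v in adj[u]} - S

def vertex_expansion(adj):                   # h_out(G), brute force
    V = list(adj)
    return min(len(boundary(set(S), adj)) / len(S)
               for k in range(1, (len(V) + 1) // 2)   # 0 < |S| < |V|/2
               for S in combinations(V, k))

def greedy_maximal_matching(S, adj):         # greedy => maximal matching
    used, M = set(), []
    for u in sorted(S):
        for v in sorted(adj[u] - S):         # neighbours in the boundary
            if v not in used:
                M.append((u, v)); used.add(v); break
    return M

adj = {i: {(i - 1) % 8, (i + 1) % 8} for i in range(8)}   # 8-cycle
h = vertex_expansion(adj)                    # equals 2/3 for this graph
for k in range(1, 5):                        # all S with |S| <= |V|/2
    for S in combinations(range(8), k):
        M = greedy_maximal_matching(set(S), adj)
        assert len(M) >= h / (h + 2) * len(S)
\end{verbatim}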
\paragraph{Examples of constraint graphs.}
Connectivity for a number of superconducting processors can be expressed in terms of these graphs:
\begin{itemize}
\item $\Path_n$: IBM's Falcon r5.11L chip \cite{IBMQ}, 9-qubit chip \cite{kelly2015state}.
\item $\Grid^{2, n}_{2n}$ (i.e., bilinear chain) : IBM's Melbourne chip \cite{IBMQ}, USTC's 24-qubit chip \cite{ye2019propagation}.
\item $\Grid_{n_1n_2}^{n_1, n_2}$: Google's Sycamore chip \cite{arute2019quantum,acharya2022suppressing}
, USTC's Zuchongzhi chip \cite{gong2021quantum}.
\item $\Tree_n(2)$ : IBM's Falcon r5.11H chips \cite{IBMQ}.
\end{itemize}
In addition to these, several other constraint graphs are also encountered in practice:
\begin{itemize}
\item Brick-wall: IBM's Falcon r8/Falcon r4/Falcon r5.10/Falcon r5.11/
Hummingbird r3/
Eagle r1 chips \cite{IBMQ}.
\item T-shape: IBM's Falcon r4T chips.
\item Butterfly: IBM's Yorktown chip.
\end{itemize}
Of particular note is the brick-wall structure, which we briefly describe below.
\paragraph{Brick-walls.} For integers $n_1,n_2\ge 1$, $b_1\ge 2$, $b_2\ge 3$ and $b_2$ odd, the $(n_1,n_2,b_1,b_2)$-brick-wall graph is divided into $n_1$ layers, with each layer containing $n_2$ `bricks', and each brick a rectangle containing $b_1$ vertices on `vertical' edges and $b_2$ vertices on `horizontal' edges (see Fig.~\ref{fig:brickwall}). In IBM's brick-wall chips, $b_1=3$ and $b_2=5$.
\begin{figure}[!hbt]
\centering
\begin{tikzpicture}
\filldraw[fill=green!20] (4,0)--(4,0.5)--(6,0.5)--(6,0)--cycle;
\draw [fill=black] (0,0) circle (0.05) (0.5,0) circle (0.05) (1,0) circle (0.05) (1.5,0) circle (0.05) (2,0) circle (0.05) (2.5,0) circle (0.05) (3,0) circle (0.05) (3.5,0) circle (0.05) (4,0) circle (0.05) (4.5,0) circle (0.05) (5,0) circle (0.05) (5.5,0) circle (0.05) (6,0) circle (0.05);
\draw [fill=black] (0,0.25) circle (0.05) (2,0.25) circle (0.05) (4,0.25) circle (0.05) (6,0.25) circle (0.05);
\draw [fill=black] (0,0.5) circle (0.05) (0.5,0.5) circle (0.05) (1,0.5) circle (0.05) (1.5,0.5) circle (0.05) (2,0.5) circle (0.05) (2.5,0.5) circle (0.05) (3,0.5) circle (0.05) (3.5,0.5) circle (0.05) (4,0.5) circle (0.05) (4.5,0.5) circle (0.05) (5,0.5) circle (0.05) (5.5,0.5) circle (0.05) (6,0.5) circle (0.05);
\draw [fill=black] (0,1) circle (0.05) (0.5,1) circle (0.05) (1,1) circle (0.05) (1.5,1) circle (0.05) (2,1) circle (0.05) (2.5,1) circle (0.05) (3,1) circle (0.05) (3.5,1) circle (0.05) (4,1) circle (0.05) (4.5,1) circle (0.05) (5,1) circle (0.05) (5.5,1) circle (0.05) (6,1) circle (0.05);
\draw [fill=black] (1,0.75) circle (0.05) (3,0.75) circle (0.05) (5,0.75) circle (0.05);
\draw [fill=black] (0,1.25) circle (0.05) (2,1.25) circle (0.05) (4,1.25) circle (0.05) (6,1.25) circle (0.05);
\draw [fill=black] (0,1.5) circle (0.05) (0.5,1.5) circle (0.05) (1,1.5) circle (0.05) (1.5,1.5) circle (0.05) (2,1.5) circle (0.05) (2.5,1.5) circle (0.05) (3,1.5) circle (0.05) (3.5,1.5) circle (0.05) (4,1.5) circle (0.05) (4.5,1.5) circle (0.05) (5,1.5) circle (0.05) (5.5,1.5) circle (0.05) (6,1.5) circle (0.05);
\draw [fill=black] (1,1.75) circle (0.05) (3,1.75) circle (0.05) (5,1.75) circle (0.05);
\draw [fill=black] (0,2) circle (0.05) (0.5,2) circle (0.05) (1,2) circle (0.05) (1.5,2) circle (0.05) (2,2) circle (0.05) (2.5,2) circle (0.05) (3,2) circle (0.05) (3.5,2) circle (0.05) (4,2) circle (0.05) (4.5,2) circle (0.05) (5,2) circle (0.05) (5.5,2) circle (0.05) (6,2) circle (0.05);
\draw [fill=black] (0,2.25) circle (0.05) (2,2.25) circle (0.05) (4,2.25) circle (0.05) (6,2.25) circle (0.05);
\draw [fill=black] (0,2.5) circle (0.05) (0.5,2.5) circle (0.05) (1,2.5) circle (0.05) (1.5,2.5) circle (0.05) (2,2.5) circle (0.05) (2.5,2.5) circle (0.05) (3,2.5) circle (0.05) (3.5,2.5) circle (0.05) (4,2.5) circle (0.05) (4.5,2.5) circle (0.05) (5,2.5) circle (0.05) (5.5,2.5) circle (0.05) (6,2.5) circle (0.05);
\draw (0,0)--(6,0) (0,0.5)--(6,0.5) (0,1)--(6,1) (0,1.5)--(6,1.5) (0,2)--(6,2) (0,2.5)--(6,2.5);
\draw (0,0)--(0,0.5) (0,1)--(0,1.5) (0,2)--(0,2.5) (2,0)--(2,0.5) (2,1)--(2,1.5) (2,2)--(2,2.5) (4,0)--(4,0.5) (4,1)--(4,1.5) (4,2)--(4,2.5) (6,0)--(6,0.5) (6,1)--(6,1.5) (6,2)--(6,2.5);
\draw (1,0.5)--(1,1) (1,1.5)--(1,2) (3,0.5)--(3,1) (3,1.5)--(3,2) (5,0.5)--(5,1) (5,1.5)--(5,2);
\node (a) at (-0.2,2.5) {};
\node (b) at (6.2,2.5) {};
\draw[decorate,decoration={brace,raise=5pt}] (a) -- (b);
\draw (3,3) node{\scriptsize $n_2$ bricks in each layer};
\node (a) at (0,-0.2) {};
\node (b) at (0,2.7) {};
\draw[decorate,decoration={brace,raise=5pt}] (a) -- (b);
\draw (-1.5,1.25) node{\scriptsize $n_1$ layers of bricks};
\node (a) at (3.8,-0.05) {};
\node (b) at (6.2,-0.05) {};
\draw[decorate,decoration={brace,raise=5pt,mirror}] (a) -- (b);
\draw (5,-0.5) node{\scriptsize $b_2$ vertices};
\node (a) at (6,-0.2) {};
\node (b) at (6,0.7) {};
\draw[decorate,decoration={brace,raise=5pt,mirror}] (a) -- (b);
\draw (7,0.25) node{\scriptsize $b_1$ vertices};
\end{tikzpicture}
\caption{The brick-wall graph $\brickwall_{n}^{n_1,n_2,b_1,b_2}$.}
\label{fig:brickwall}
\end{figure}
\begin{definition}A quantum circuit on $n$ qubits will be said to be under $\brickwall^{n_1, n_2, b_1, b_2}_{n}$ constraint if the constraint graph is an $(n_1, n_2, b_1, b_2)$-brick-wall.
\end{definition}
While brick-walls lie outside the families of graphs we consider, in \S~\ref{sec:circuit_transformation} we show that our results for the $2$-dimensional grid can be used to construct a circuit for brick-wall graphs with similar bounds.
\medskip
Finally, we introduce the concept of \textit{disjoint constraint graphs}. Suppose $C_1$ and $C_2$ are two quantum circuits under $G=(V,E)$ constraint, with gates acting on subsets $V_1,V_2\subseteq V$ of qubits, respectively. If there exist two connected subgraphs $G'=(V',E')$ and $G''=(V'',E'')$ of $G$ such that $V_1\subseteq V'$, $V_2\subseteq V''$, and {$V' \cap V'' = \emptyset$}, then we say that circuits $C_1$ and $C_2$ act on disjoint constraint graphs. Circuits acting on disjoint constraint graphs can be implemented in parallel.
\subsection{Gray codes}\label{subsec:gray}
An $n$-bit Gray code is an ordering of all $2^n$ $n$-bit strings such that any two successive strings differ in exactly one bit, as do the first and the last strings. That is, an $n$-bit Gray code corresponds to a Hamiltonian cycle in the Boolean hypercube $G=(V,E)$ where $V=\{0,1\}^n$ and $E=\{(x,y): |x\oplus y|=1\}.$ An explicit construction is as follows.
Define the ruler function $\zeta(n)=\max\left\{k: 2^{k-1}|n \right\}$.
It is not hard to verify that for all $k\in[n]$, there are $2^{n-k}$ elements $i \in[2^n-1]$ such that $\zeta(i)=k$. For all $i\in[n]$ and $j\in[2^n]$, we define $h_{ij}$ as
\begin{equation}\label{eq:index}
h_{ij} = (\zeta(j-1)+i-2 \mod n )+1, \text{ with } \zeta(0)\defeq 0.
\end{equation}
It is straightforward to show that
\begin{equation}\label{eq:h1j}
h_{i1} = \begin{cases} n & \text{if } i=1, \\ i-1 & \text{if } i\in \{2,3,\ldots, n\}, \end{cases} \quad \text{and} \quad
h_{1j} = \begin{cases} n & \text{if } j=1, \\ \zeta(j-1) & \text{if } j\in \{2,3,\ldots, 2^n\}. \end{cases}
\end{equation}
For each $i\in [n]$, one can make use of $h_{ij}$ to construct an $n$-bit Gray code, as follows.
\begin{lemma}[\cite{frank1953pulse,savage1997survey,gilbert1958gray}]\label{lem:GrayCode}
For any $i\in[n]$, construct $n$-bit strings $c^i_1,c^i_2,\cdots,c^i_{2^{n}-1}, c^i_{2^{n}}$ as follows: Let $c_1^{i}=0^{n}$, and for each $j = 2, 3, \ldots, 2^{n}$, string $c_j^i$ is obtained by flipping the $h_{ij}$-th bit of $c_{j-1}^i$. Then the following properties hold.
\begin{enumerate}
\item The above sequence of strings $c^i_1,c^i_2,\cdots,c^i_{2^{n}-1}, c^i_{2^{n}}$ are all distinct, and in that order, they form an $n$-bit Gray code. Each $c^i_j$ differs from $c^i_{j-1}$ in the $h_{ij}$-th bit for each $j\ge 2$, and $c_1^i$ and $c_{2^{n}}^i$ differ in the $h_{i1}$-th bit.
\item For each $k\in [n]$, there are $2^{n-k}$ elements $j\in \{2,3,\ldots,2^{n}\}$ such that $h_{ij} = (k+i-2 \mod n)+1$. In particular, there are $2^{n-k}$ elements $j\in \{2,3,\ldots,2^{n}\}$ such that $h_{1j} = k$.
\end{enumerate}
\end{lemma}
We refer to this ordered sequence $c_1^i,c_2^i,\cdots, c_{2^{n}}^i$ as an $(n,i)$-Gray code, or simply an $i$-Gray code if $n$ is clear from context.
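For concreteness, the following Python sketch (ours, illustrative only) implements $\zeta$, $h_{ij}$, and the $(n,i)$-Gray code exactly as defined above, and checks the properties of Lemma~\ref{lem:GrayCode} on a small instance.
\begin{verbatim}
# Sketch of the (n,i)-Gray code; bit positions are 1-indexed as in the text.
def zeta(n):                       # ruler function: max k with 2^(k-1) | n
    k = 1
    while n % (2 ** k) == 0:
        k += 1
    return k

def h(i, j, n):                    # h_ij from Eq. (eq:index), zeta(0) := 0
    z = zeta(j - 1) if j >= 2 else 0
    return (z + i - 2) % n + 1

def gray_code(n, i):
    c = [0] * n                    # c_1^i = 0^n
    code = [tuple(c)]
    for j in range(2, 2 ** n + 1):
        c[h(i, j, n) - 1] ^= 1     # flip the h_ij-th bit
        code.append(tuple(c))
    return code

for i in range(1, 5):
    code = gray_code(4, i)
    assert len(set(code)) == 2 ** 4            # all 2^n strings, each once
    for a, b in zip(code, code[1:] + code[:1]):
        assert sum(x != y for x, y in zip(a, b)) == 1   # Gray property
\end{verbatim}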
\subsection{Quantum gates and circuits}
\label{subsec:qgates}
For arbitrary $ \theta\in\mathbb{R}$, single-qubit gates $R_x(\theta)$, $R_z(\theta)$, $R(\theta)$ are defined as
\begin{equation}\label{eq:rotation}
R_x(\theta)=\left(\begin{array}{cc}
\cos(\theta/2) & -i\sin(\theta/2) \\
-i\sin(\theta/2) & \cos(\theta/2)
\end{array}\right), \quad R_z(\theta)=\left(\begin{array}{cc}
e^{-i(\theta/2)} & \\
& e^{i(\theta/2)}
\end{array}\right), \quad
R(\theta)=\left(\begin{array}{cc}
1 & 0 \\
0 & e^{i\theta}
\end{array}\right).
\end{equation}
Two special cases that will be later used are the phase gate $S$ and the Hadamard gate $H$,
\[~S=\left(\begin{array}{cc}
1 & \\
& i
\end{array}\right), \quad \quad
~H=\frac{1}{\sqrt{2}}\left(\begin{array}{cc}
1 & 1 \\
1 & -1
\end{array}\right).
\]
A CNOT gate on qubits $i$ and $j$, denoted $\Cnot^i_j$, effects the transformation $\Cnot^i_j\ket{x}_i\ket{y}_j=\ket{x}_i\ket{x\oplus y}_j$, $\forall x,y\in \mbox{$\{0,1\}$}$. Here, $i$ is referred to as the \textit{control qubit} and $j$ the \textit{target qubit}. The SWAP gate $\textsf{SWAP}^{i}_{j}$ implements $\textsf{SWAP}^{i}_{j}\ket{x}_{i}\ket{y}_{j}= \ket{y}_{i}\ket{x}_{j}$ for any $x,y\in \mbox{$\{0,1\}$}$, and can be realized by three CNOT gates, viz., $\textsf{SWAP}^{i}_{j}=\textsf{CNOT}^{i}_{j}\textsf{CNOT}^{j}_{i}\textsf{CNOT}^{i}_{j}$. We shall call a quantum circuit consisting of only CNOT gates a \emph{CNOT circuit}. As the following lemma shows, an $n$-qubit invertible linear transformation over $\mathbb{F}_2$ can be implemented by an efficient $n$-qubit CNOT circuit.
\begin{lemma}[\cite{wu2019optimization}]\label{lem:cnot_circuit}
Let $G_\delta$ be a connected graph with $n$ vertices and minimum degree $\delta$. Any $n$-qubit invertible linear transformation can be implemented in circuit depth and size $O(n^2/\log (\delta))$ under $G_\delta$ constraint.
\end{lemma}
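The linear-algebraic view behind this lemma can be made concrete: on basis states, a CNOT circuit acts as an invertible matrix over $\mathbb{F}_2$, where $\Cnot^c_t$ adds row $c$ to row $t$. The following sketch (ours, for illustration) builds this matrix and confirms that three CNOT gates realize a SWAP, i.e., a transposition matrix.
\begin{verbatim}
# CNOT circuits as invertible linear maps over F_2 (illustrative sketch).
import numpy as np

def cnot_circuit_matrix(gates, n):
    # gates: list of (control, target); returns the n x n matrix M
    # such that the circuit maps basis state |x> to |Mx mod 2>.
    M = np.eye(n, dtype=np.uint8)
    for c, t in gates:
        M[t, :] ^= M[c, :]         # row t += row c over F_2
    return M

# SWAP = CNOT^0_1 CNOT^1_0 CNOT^0_1 gives the 2 x 2 transposition matrix.
M = cnot_circuit_matrix([(0, 1), (1, 0), (0, 1)], 2)
assert (M == np.array([[0, 1], [1, 0]])).all()
\end{verbatim}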
Two natural extensions of the CNOT gate are the Toffoli (multi-controlled not) gate and the multi-target CNOT gate. For qubit set $S$ and string $y\in \mbox{$\{0,1\}$}^{|S|}$, the $(|S|+1)$-qubit Toffoli gate $\textsf{Tof}^S_i(y)$ is defined as
\[\ket{x}_S\ket{b}_i\to\ket{x}_S\ket{1_{[x=y]}\oplus b}_i, \forall x\in \mbox{$\{0,1\}$}^{|S|},\forall b\in \mbox{$\{0,1\}$},\]
where $1_{[x=y]} = 1$ if $x=y$, and 0 otherwise. Here $S$ is the control qubit set and $i$ is the target qubit. This generalizes the standard Toffoli gate, which corresponds to $y = 1\cdots 1$.
\begin{lemma}[\cite{multi-controlled-gate}]\label{lem:tof}
An $n$-qubit Toffoli gate ${\sf Tof}^S_i(y)$ can be implemented by a quantum circuit of size and depth $O(n)$.
\end{lemma}
The $(n+1)$-qubit multi-target CNOT gate $\textsf{CNOT}^i_{j_1, \ldots, j_n}$ effects the transformation
\begin{align*}
\textsf{CNOT}^i_{j_1, \ldots, j_n} \ket{x}_i \ket{y_1}_{j_1}\ldots\ket{y_n}_{j_n} &\rightarrow \ket{x}_i \ket{x\oplus y_1}_{j_1}\ldots\ket{x\oplus y_n}_{j_n},
\end{align*}
and can be implemented efficiently by a CNOT circuit.
\begin{lemma}\label{lem:multicontrolcnot}
The $(n+1)$-qubit multi-target $\Cnot^i_{j_1, \ldots, j_n}$ gate can be implemented by a CNOT circuit consisting of $2n-1$ $\mathsf{CNOT}$ gates under $\Path_{n+1}$ constraint (see Fig.~\ref{fig:add-circuit}).
\end{lemma}
\begin{figure}[!hbt]
\centerline
{
\Qcircuit @C=0.6em @R=0.7em {
\lstick{\ket{x}}&\qw & \qw &\qw & \qw & \ctrl{1} & \qw & \qw & \qw & \qw & \qw &\rstick{\ket{x}} \\
\lstick{\ket{y_1}} &\qw &\qw &\qw & \ctrl{1} & \targ & \ctrl{1} &\qw & \qw & \qw & \qw &\rstick{\ket{x\oplus y_1}}\\
\lstick{\ket{y_2}} &\qw &\qw & \ctrl{1} & \targ & \qw & \targ & \ctrl{1} & \qw &\qw & \qw &\rstick{\ket{x\oplus y_2}}\\
\lstick{\vdots}&\qw & \ctrl{1} &\targ & \qw & \qw & \qw & \targ & \ctrl{1} & \qw &\qw &\rstick{\vdots}\\
\lstick{\ket{y_{n}}}&\qw & \targ &\qw & \qw & \qw & \qw & \qw & \targ & \qw &\qw &\rstick{\ket{x\oplus y_{n}}
}
}\caption{CNOT circuit for implementing the $n+1$ qubit multi-target $\mathsf{CNOT}$ gate.}\label{fig:add-circuit}
\end{figure}
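Since the circuit in Fig.~\ref{fig:add-circuit} contains only CNOT gates, its correctness can be checked by simulating it on bit strings: the first cascade replaces each $y_{k+1}$ by $y_k\oplus y_{k+1}$, the middle gate injects $x$ into $y_1$, and the second cascade propagates $x$ to every target while cancelling the intermediate differences. A minimal Python sketch (ours):
\begin{verbatim}
# Bit-level check of the 2n-1 CNOT construction for the multi-target CNOT.
import random

def multi_target_cnot_gates(n):    # qubit 0 holds x; qubits 1..n hold y_1..y_n
    up = [(k, k + 1) for k in range(n - 1, 0, -1)]    # first cascade
    down = [(k, k + 1) for k in range(1, n)]          # second cascade
    return up + [(0, 1)] + down                       # 2n-1 CNOTs, all on a path

def apply(gates, bits):
    bits = bits[:]
    for c, t in gates:
        bits[t] ^= bits[c]
    return bits

n = 5
x, ys = random.randint(0, 1), [random.randint(0, 1) for _ in range(n)]
out = apply(multi_target_cnot_gates(n), [x] + ys)
assert out == [x] + [x ^ y for y in ys]    # every target gets x XORed in
\end{verbatim}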
\subsection{Quantum state preparation and general unitary synthesis}
Two key problems addressed in this paper are the quantum state preparation problem, and the general unitary synthesis problem.
\paragraph{The quantum state preparation (QSP) problem.} Given a vector $v=(v_x)_{x\in\{0,1\}^n}\in\mathbb{C}^{2^n}$ where $\sqrt{\sum_{x\in\{0,1\}^n}|v_x|^2}=1$, prepare the corresponding $n$-qubit quantum state
\[\ket{\psi_v}=\sum_{x\in\{0,1\}^n}v_x\ket{x},\]
by a standard quantum circuit, starting from initial quantum state $\ket{0^n}$. We shall refer to such a circuit as a \textit{QSP circuit}.
\paragraph{The general unitary synthesis (GUS) problem.} Given an $n$-qubit unitary matrix $U=[u_{xy}]_{x,y\in\{0,1\}^n}\in\mathbb{C}^{2^n\times 2^n}$, construct a standard quantum circuit for $U$. We call such a circuit a \textit{GUS circuit}.
\medskip
QSP circuits and GUS circuits may make use of ancillary qubits. In this case, we say that:
\begin{enumerate}
\item A circuit $C_{\rm QSP}$ with $m$ ancillary qubits solves the QSP problem if
\[C_{\rm QSP}\ket{0^n}\ket{0^m}=\ket{\psi_v}\ket{0^m}.\]
\item A circuit $C_{\rm GUS}$ with $m$ ancillary qubits solves the GUS problem if
\[C_{\rm GUS}\ket{x}\ket{0^m}=(U\ket{x})\ket{0^m}, ~\forall x\in\mbox{$\{0,1\}^n$}.\]
\end{enumerate}
Some remarks:
\begin{enumerate}
\item In both cases, we restore the ancillary qubits to $\ket{0^m}$ after the computation. We require this because in many applications of QSP and GUS, there are downstream operations which may need all-zero ancillary qubits. Moreover, some circuits in \cite{biamonte2017quantum,quantumalgorithm} and in this paper use recursive constructions, for which restoring the ancillary qubits to $\ket{0^m}$ is naturally required.
\item While our constructions restore the ancillary qubits, our matching lower bounds hold even if this ancilla restoration is not required.
\item Parametrized circuits are especially desirable as the architecture is fixed and all that is required is to set the parameters for a given input, either by computation or by tuning. The circuits constructed in this paper are all parametrized circuits.
\end{enumerate}
\subsection{Uniformly controlled gates $V_n$ and diagonal unitary matrices $\Lambda_n$} \label{sec:ucg}
Given 1-qubit unitary matrices $U_1, U_2, \ldots,U_{2^{n-1}-1}, U_{2^{n-1}} \in\mathbb{C}^{2\times 2}$, an $n$-qubit \textit{uniformly controlled gate} (UCG) $V_n$ is a block diagonal matrix as follows
\begin{equation}\label{eq:UCG}
V_n=\left(\begin{array}{cccc}
U_1 & & &\\
& U_2& &\\
& & \ddots &\\
& & & U_{2^{n-1}}
\end{array}\right)\in\mathbb{C}^{2^n\times 2^n}.
\end{equation}
That is, conditioned on the state of the first $n-1$ qubits, $V_n$ applies the corresponding $U_i$ operation to the $n$-th qubit.
Any $n$-qubit diagonal unitary matrix $\Lambda_n$ can be expressed as
\begin{equation*}
\Lambda_n=\left(\begin{array}{cccc}
1& & &\\
& e^{i\theta_1}& &\\
& & \ddots &\\
& & & e^{i\theta_{2^n-1}}
\end{array}\right)\in\mathbb{C}^{2^n\times 2^n},
\end{equation*}
where $\theta_1,\ldots, \theta_{2^n-1} \in\mathbb{R}$. As quantum states that differ only by a global phase are indistinguishable, without loss of generality we may set the first entry to 1.
\begin{lemma}[\cite{mottonen2005decompositions}]\label{lem:diag_size}
Any $n$-qubit diagonal unitary matrix $\Lambda_n$ can be implemented by a quantum circuit of size $O(2^n)$.
\end{lemma}
{The task of implementing UCGs can be reduced to that of implementing diagonal unitary matrices:
\begin{lemma}[\cite{sun2021asymptotically}]\label{lem:UCG_decomposition}
Any $n$-qubit UCG $V_n$ can be decomposed into three $n$-qubit diagonal unitary matrices and four single-qubit gates,
\[V_n=\Lambda_n'''(\mathbb{I}_{n-1}\otimes (S H))\Lambda_n'' (\mathbb{I}_{n-1}\otimes (HS^\dagger))\Lambda_n',\]
where $\Lambda_n',\Lambda_n'',\Lambda_n'''\in\mathbb{C}^{2^n\times 2^n}$ are $n$-qubit diagonal unitary matrices (see Fig.~\ref{fig:V_n}).
\end{lemma}}
\begin{figure}[!hbt]
\centerline{
\Qcircuit @C=1.2em @R=0.6em {
& \multigate{4}{\scriptstyle V_n} & \qw & & & \multigate{4}{\scriptstyle \Lambda'_n} & \qw & \qw & \multigate{4}{\scriptstyle \Lambda''_n} & \qw & \qw & \multigate{4}{\scriptstyle \Lambda'''_n} & \qw\\
& \ghost{\scriptstyle V_n} & \qw & & & \ghost{\scriptstyle \Lambda'_n} & \qw &\qw & \ghost{\scriptstyle \Lambda''_n} &\qw & \qw & \ghost{\scriptstyle \Lambda'''_n} &\qw\\
& \ghost{\scriptstyle V_n} & \qw & = & & \ghost{\scriptstyle \Lambda'_n} & \qw &\qw & \ghost{\scriptstyle \Lambda''_n} &\qw & \qw & \ghost{\scriptstyle \Lambda'''_n} &\qw\\
& \ghost{\scriptstyle V_n} & \qw & & & \ghost{\scriptstyle \Lambda'_n} & \qw &\qw &\ghost{\scriptstyle \Lambda''_n} & \qw & \qw & \ghost{\scriptstyle \Lambda'''_n} &\qw \\
& \ghost{\scriptstyle V_n} & \qw & & & \ghost{\scriptstyle \Lambda'_n} & \gate{\scriptstyle S^\dagger} &\gate{\scriptstyle H} &\ghost{\scriptstyle \Lambda''_n} & \gate{\scriptstyle H} &\gate{\scriptstyle S} & \ghost{\scriptstyle \Lambda'''_n} &\qw \inputgroupv{1}{5}{1.5em}{3.3em}{ \scriptstyle n \text{~qubits}~~~~~}\\
}
}
\caption{Decomposition of UCG $V_n$ into 3 diagonal unitary matrices and $4$ single-qubit gates.}
\label{fig:V_n}
\end{figure}
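Roughly speaking, the decomposition works because conjugating a $z$-rotation by $SH$ turns it into a $y$-rotation: the middle diagonal layer $\Lambda_n''$ thus supplies the $R_y$ factor of a ZYZ decomposition of each block $U_i$, while the outer diagonal layers supply the $R_z$ factors and phases. The following numerical check (ours, not part of the cited construction) verifies the underlying single-qubit identity $SHR_z(\theta)HS^\dagger = R_y(\theta)$, where $R_y(\theta):=e^{-i\theta Y/2}$.
\begin{verbatim}
# Numeric check of S H Rz(t) H S^dag = Ry(t) (illustrative sketch).
import numpy as np

def Rz(t): return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])
def Ry(t): return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                            [np.sin(t / 2),  np.cos(t / 2)]])

S = np.diag([1, 1j])
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

t = 0.731                                   # arbitrary test angle
assert np.allclose(S @ H @ Rz(t) @ H @ S.conj().T, Ry(t))
\end{verbatim}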
To implement $\Lambda_n$, which can be represented as
\begin{equation}\label{eq:diag}
\ket{x}\to e^{i\theta(x)}\ket{x},~\forall x\in \mbox{$\{0,1\}^n$}-\{0^n\},
\end{equation}
in a quantum circuit, it suffices to accomplish the following two tasks.
\begin{enumerate}
\item For every $s\in \{0,1\}^n-\{0^n\}$, effect a phase shift of $\alpha_s$ on each basis state $\ket{x}$ with $\langle s,x\rangle = 1$, i.e.
\begin{equation}\label{eq:task1}
\ket{x} \to e^{i\alpha_s\langle s,x\rangle } \ket{x}.
\end{equation}
\item Find $\{\alpha_s:s\in \mbox{$\{0,1\}^n$}-\{0^n\}\}$ s.t. \begin{equation}\label{eq:alpha}
\sum_{s\in \{0,1\}^n-\{0^n\}}\alpha_s\langle x,s\rangle = \theta(x), \quad \forall x\in \{0,1\}^n-\{0^n\}.
\end{equation}
\end{enumerate}
Combining the two gives
\[\ket{x} \to \prod_{s\in \{0,1\}^n-\{0^n\}} e^{i\alpha_s\langle s,x\rangle } \ket{x} = e^{i\sum_s\alpha_s \langle s,x\rangle} \ket{x} = e^{i\theta(x) } \ket{x},\]
as required. For notational convenience, define $\alpha_{0^n}=0$.
For any $x\in\mbox{$\{0,1\}^n$}$, if we generate a state $\ket{\langle s,x\rangle}$ on a qubit, apply $R(\alpha_s)$ on it, and restore this qubit, then Task 1 is implemented. We call the process of generating $\ket{\langle s,x\rangle}$ \textit{generating $s$}. Given $\{\theta(x):x\in \mbox{$\{0,1\}^n$}-\{0^n\}\}$, the values $\{\alpha_s:s\in \mbox{$\{0,1\}^n$}-\{0^n\}\}$ in Task 2 can be efficiently found by the Walsh-Hadamard transform \cite{sun2021asymptotically}. Once we have generated every $s\in\mbox{$\{0,1\}^n$}-\{0^n\}$, applied $R(\alpha_s)$ on $\ket{\langle s,x\rangle}$ and restored the qubits, we have implemented $\Lambda_n$ (by Eq. \eqref{eq:alpha}).
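As a small illustration of Task 2, take $n=2$. Since $\langle 11,11\rangle=0$ over $\mathbb{F}_2$, Eq. \eqref{eq:alpha} reads
\[\theta(01)=\alpha_{01}+\alpha_{11},\qquad \theta(10)=\alpha_{10}+\alpha_{11},\qquad \theta(11)=\alpha_{01}+\alpha_{10},\]
which is solved by
\[\alpha_{01}=\tfrac{1}{2}\big(\theta(01)-\theta(10)+\theta(11)\big),\quad \alpha_{10}=\tfrac{1}{2}\big(-\theta(01)+\theta(10)+\theta(11)\big),\quad \alpha_{11}=\tfrac{1}{2}\big(\theta(01)+\theta(10)-\theta(11)\big).\]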
\section{Circuit constructions for diagonal unitary matrices without ancillary qubits under qubit connectivity constraints}
\label{sec:diag_without_ancilla}
In this section, we present a circuit construction for diagonal unitary matrices $\Lambda_n$ under graph constraints without ancillary qubits, which will be used in \S \ref{sec:QSP_US_graph} to construct QSP and GUS circuits.
\subsection{Circuit framework}
\label{sec:diag_without_ancilla_framework}
Our circuit construction is based on the framework of~\cite{sun2021asymptotically}, modified to minimize additional overhead costs when graph constraints are imposed.
\paragraph{Old method.}
In~\cite{sun2021asymptotically}, an $n$-bit string $s$ is divided into two parts: an $r_c$-bit prefix and an $r_t$-bit suffix, where $r_c=\lceil n/2\rceil$ and $r_t=\lfloor n/2\rfloor$. The suffix set $\mbox{$\{0,1\}$}^{r_t}-\{0^{r_t}\}$ is itself divided into $\ell~ \le \frac{2^{r_t+2}}{r_t+1}-1$ sets $T^{(1)}, \ldots, T^{(\ell)}$, each of size $r_t$. The process of generating all $s\in\mbox{$\{0,1\}^n$}-\{0^n\}$ consists of $\ell$ phases, where the $i$-th phase generates all $n$-bit strings with suffixes in $T^{(i)}$. For bit strings ending with the $j$-th suffix in $T^{(i)}$, prefixes are enumerated in the order of a $j$-Gray code. The prefixes are implemented by CNOT gates where the control qubit lies in the first $r_c$ qubits and the target qubit lies in the last $r_t$ qubits. Bit strings with suffix $0^{r_t}$ need special treatment for technical reasons, and are handled by recursive generation.
Unfortunately, this framework is inefficient under qubit connectivity constraints. More specifically, when we generate $r_t$ prefixes simultaneously by Gray code, we apply $r_t$ CNOT gates which cannot be implemented in parallel, imposing an overhead of $O(n^2)$ on the circuit depth (see \S \ref{sec:intro}). To resolve this issue, we choose different lengths of prefixes (and suffixes) under different constraint graphs, and rearrange the positions of the control and target qubits to minimize the number of controlled operations that involve distant qubits on the graph.
\paragraph{New method.}
Our circuit framework for $\Lambda_n$ is shown in Fig. \ref{fig:diag_without_ancilla_framwork}. The $n$ input qubits of $\Lambda_n$ are labelled $1,2,\cdots,n$, and are divided into two registers: control register $\textsf{C}$ and target register $\textsf{T}$, with sizes $r_c$ and $r_t:=n-r_c$, respectively, where $[n]=\textsf{C}\cup \textsf{T}$. Compared to~\cite{sun2021asymptotically}, our construction differs in two main ways:
\begin{enumerate}
\item The design of registers $\textsf{C}$ and $\textsf{T}$. In~\cite{sun2021asymptotically}, $\textsf{C}$ and $\textsf{T}$ are specified as the first $\lceil n/2\rceil$ and the last $\lfloor n/2\rfloor$ qubits, respectively. In this work, $\textsf{C}$ and $\textsf{T}$ depend on the constraint graph (details are specified in the following sections): they do not always have sizes $r_c = r_t = n/2$, and their positions are determined by a transformation $\Pi$ which permutes the first $r_c$ and the last $r_t$ qubits\footnote{Technically, we do not really need to permute the qubits. Since we aim to apply a general $n$-qubit diagonal unitary matrix, relabeling qubits would achieve the same effect. Here we move qubits because it is easier to specify the circuit, and the moving cost turns out to be a small factor that can be absorbed into other terms.}.
\item The choice of Gray codes. The implementation of the $C_k$ operators involves choosing $r_t$ Gray codes, specified by integers $j_1, \ldots, j_{r_t}$. The choice of Gray codes determines the sequence of qubits which act as the controls for the CNOT operations required to implement $C_k$. If two or more integers $j_i$ are the same, this corresponds to the same control qubit being used for CNOT operations acting on different target qubits. In~\cite{sun2021asymptotically} the Gray codes used correspond to choosing $j_i=i$. Here, by carefully choosing the $j_i$, accounting for the graph connectivity and the choice of $\sf C$ and $\sf T$, we can achieve a reduction in circuit depth.
\end{enumerate}
\begin{figure}[!ht]
\centerline{
\Qcircuit @C=1.3em @R=0.7em {
\lstick{\ket{x_1}} &\multigate{5}{\Pi} &\multigate{5}{C_1} &\multigate{5}{C_2} &\qw & {\cdots~~~~} & \multigate{5}{C_\ell} & \multigate{5}{\Pi^{\dagger}} & \multigate{2}{\Lambda_{r_c}} &\qw \\
\lstick{\vdots~~~} & \ghost{\Pi} &\ghost{C_1} &\ghost{C_2} &\qw & {\cdots~~~~} & \ghost{C_\ell} & \ghost{\Pi^{\dagger}} & \ghost{\Lambda_{r_c}} &\qw \\
\lstick{\ket{x_{r_c}}} & \ghost{\Pi} &\ghost{C_1} &\ghost{C_2} &\qw & {\cdots~~~~} & \ghost{C_\ell} & \ghost{\Pi^{\dagger}} & \ghost{\Lambda_{r_c}} &\qw\\
\lstick{\ket{x_{r_c+1}}} & \ghost{\Pi}&\ghost{C_1} &\ghost{C_2}&\qw & {\cdots~~~~} & \ghost{C_\ell} & \ghost{\Pi^{\dagger}} & \multigate{2}{\mathcal{R}} &\qw \\
\lstick{\vdots~~~} & \ghost{\Pi} &\ghost{C_1} &\ghost{C_2}&\qw & {\cdots~~~~} & \ghost{C_\ell} & \ghost{\Pi^{\dagger}} & \ghost{\mathcal{R}} &\qw \\
\lstick{\ket{x_n}} & \ghost{\Pi} &\ghost{C_1} &\ghost{C_2} &\qw & {\cdots~~~~} & \ghost{C_\ell} & \ghost{\Pi^{ \dagger}} & \ghost{\mathcal{R}} &\qw \\
}
}
\caption{Circuit framework for implementing diagonal unitaries $\Lambda_n$ without ancillary qubits under graph constraints. Control and target register sizes, $r_c$ and $r_t$, respectively, depend on the graph constraint. Operations $\Pi$ and $\Pi^\dagger$ are added to move control and target qubits close together. Our implementation of each $C_i$ differs from that in \cite{sun2021asymptotically}, and needs to be adapted to different constraint graphs. $\ell\le \frac{2^{r_t+2}}{r_t+1}-1$.}
\label{fig:diag_without_ancilla_framwork}
\end{figure}
To describe the operators $\Pi$, $C_1,\cdots, C_\ell$, $\Lambda_{r_c}$ and $\mathcal{R}$ in Fig. \ref{fig:diag_without_ancilla_framwork}, recall the following result:
\begin{lemma}[\cite{sun2021asymptotically}]\label{lem:partition}
There exist sets $T^{(1)},T^{(2)},\cdots,T^{(\ell)} \subseteq \{0,1\}^{r_t}-\{0^{r_t}\}$, for some integer $\ell \le \frac{2^{r_t+2}}{r_t+1}-1$, such that:
\begin{enumerate}
\item For any $i\in[\ell]$, $|T^{(i)}|=r_t$;
\item For any $i\in[\ell]$, the Boolean vectors in $T^{(i)}=\{{t^{(i)}_1},{t^{(i)}_2},\cdots,{t^{(i)}_{r_t}}\}$ are linearly independent over $\mathbb{F}_2$;
\item $\bigcup_{i\in[\ell]} T^{(i)}= \{0,1\}^{r_t} - \{0^{r_t}\} $.
\end{enumerate}
\end{lemma}
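For example, for $r_t=2$ we may take $\ell=2$ with $T^{(1)}=\{01,10\}$ and $T^{(2)}=\{10,11\}$: each set has size $2$, each is linearly independent over $\mathbb{F}_2$, and their union is $\{0,1\}^{2}-\{00\}$.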
Now, for each $k\in[\ell]\cup\{0\}$, define an $r_t$-qubit state in register $\textsf{T}$:
\begin{equation} \label{eq:yk}
\ket {y^{(k)}}_\textsf{T} = \ket{y_1^{(k)}y_2^{(k)}\cdots y_{r_t}^{(k)}}_\textsf{T}, \quad \text{ where } \quad
y_j^{(k)} =\left\{\begin{array}{ll}
x_{r_c+j} & \text{if~} k=0, \\
\langle {0^{r_c}t_j^{(k)}},x\rangle & \text{if~} k\in[\ell].
\end{array}\right.
\end{equation}
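Continuing the example above with $n=3$, $r_c=1$ and $r_t=2$: $\ket{y^{(0)}}_{\sf T}=\ket{x_2\,x_3}$; for $T^{(1)}=\{01,10\}$, $\ket{y^{(1)}}_{\sf T}=\ket{\langle 001,x\rangle\,\langle 010,x\rangle}=\ket{x_3\,x_2}$; and for $T^{(2)}=\{10,11\}$, $\ket{y^{(2)}}_{\sf T}=\ket{x_2\,(x_2\oplus x_3)}$.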
Next, define disjoint sets $F_1,\ldots,F_\ell$ from $T^{(1)},\ldots, T^{(\ell)}$ by removing duplicates:
\begin{equation}\label{eq:F_k}
\left\{\begin{array}{ll}
F_1=\left\{ct:\ t\in T^{(1)},c\in\{0,1\}^{r_c}\right\}, & \\
F_k=\left\{ct:\ t\in T^{(k)},c\in\{0,1\}^{r_c}\right\}-\bigcup_{d\in[k-1]}F_{d}, & 2\le k\le \ell.
\end{array}\right.
\end{equation}
These satisfy
$F_i\cap F_j =\emptyset$, for all $i\neq j \in[\ell]$, and
\begin{equation}\label{eq:set_eq}
\bigcup_{k\in [\ell]} F_k = \mbox{$\{0,1\}$}^{r_c}\times \cup_{k\in [\ell]} T^{(k)} = \mbox{$\{0,1\}$}^{r_c}\times (\mbox{$\{0,1\}$}^{r_t} - \{0^{r_t}\}) = \mbox{$\{0,1\}^n$}-\{c0^{r_t}:\ c\in\{0,1\}^{r_c}\}.
\end{equation}
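In the running example ($r_c=1$, $T^{(1)}=\{01,10\}$, $T^{(2)}=\{10,11\}$), Eq. \eqref{eq:F_k} gives $F_1=\{001,101,010,110\}$ and $F_2=\{011,111\}$ (the strings $010,110$ already appear in $F_1$), and indeed $F_1\cup F_2=\{0,1\}^3-\{000,100\}$.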
We are now in a position to define the unitary operators $\Pi$, $C_k$, $\mathcal{R}$ and $\Lambda_{r_c}$.
\begin{enumerate}
\item $\Pi$ is an $n$-qubit unitary defined by
\begin{equation}\label{eq:pi}
\Pi\ket{x_1x_2\cdots x_n}_{[n]}=\ket{x_1x_2\cdots x_{r_c}}_{\textsf{C}}\ket{x_{r_c+1}x_{r_c+2}\cdots x_n}_{\textsf{T}}\defeq\ket{x_{control}}_{\textsf{C}}\ket{x_{target}}_{\textsf{T}}.
\end{equation}
That is, $\Pi$ moves the content of the first $r_c$ qubits to register $\textsf{C}$ and the remaining qubits to register $\textsf{T}$. Note that $\Pi$ can be implemented by a sequence of SWAPs, and is thus an invertible linear transformation over $\mathbb{F}_2$.
\item For $k\in[\ell]$,
\begin{equation}\label{eq:Ck}
C_k\ket{x_{control}}_{\textsf{C}}\ket{y^{(k-1)}}_{\textsf{T}}=e^{i\sum_{s \in F_k}\langle s,x\rangle \alpha_s } \ket{x_{control}}_{\textsf{C}}\ket{y^{(k)}}_{\textsf{T}},
\end{equation}
where $\alpha_s$ is determined by Eq. \eqref{eq:alpha}, i.e., $C_k$ introduces a phase and updates stage $k-1$ to stage $k$.
\item $\mathcal{R}$ acts on qubit set $[n]-[r_c]$ and resets the suffix state as follows
\begin{equation}\label{eq:reset}
\mathcal{R}\ket{y^{(\ell)}}_{[n]-[r_c]}=\ket{y^{(0)}}_{[n]-[r_c]}.
\end{equation}
$\mathcal R$ is an invertible linear transformation over $\mathbb{F}_2$.
\item $\Lambda_{r_c}$ is an $r_c$-qubit diagonal matrix acting on qubit set $[r_c]$, which satisfies
\begin{equation}\label{eq:Lambda_rc}
\Lambda_{r_c}\ket{x_{control}}_{[r_c]}= e^{i\sum_{c\in \{0,1\}^{r_c}-\{0^{r_c}\}}\langle c0^{r_t},x\rangle\alpha_{c0^{r_t}}}\ket{x_{control}}_{[r_c]}.
\end{equation}
\end{enumerate}
We now present circuit constructions for $C_k$, $\mathcal{R}$ and $\Lambda_{r_c}$ under general graph constraints.
\paragraph{Circuit construction for $C_k$.}
Let $G=(V,E)$ denote a connected graph with vertex set $V={\sf C}\uplus {\sf T}$.
For all $k\in [\ell]$, $C_k$ is constructed in two stages:
\begin{align}
\ket{x_{control}}_{\textsf{C}}\ket{y^{(k-1)}}_{\textsf{T}}&\xrightarrow{U_{Gen}^{(k)}}\ket{x_{control}}_{\textsf{C}}\ket{y^{(k)}}_{\textsf{T}}, \label{eq:Ugen_Graph}\\
&\xrightarrow{U_{Gray}^{(k)}}e^{i\sum_{s \in F_k}\langle s,x\rangle \alpha_s}\ket{x_{control}}_{\textsf{C}}\ket{y^{(k)}}_{\textsf{T}}. \label{eq:Ugray_Graph}
\end{align}
$U^{(k)}_{Gen}$ is a linear transformation (over $\mathbb{F}_2$) on register $\textsf{T}$, and updates $\ket{y^{(k-1)}}_{\sf T}\rightarrow \ket{y^{(k)}}_{\sf T}$.
$U^{(k)}_{Gray}$ is parameterized by $r_t$ integers $j_1, j_2, \ldots, j_{r_t}\in[r_c]$, each of which specifies an $(r_c,j_i)$-Gray code. These Gray codes are used to update each qubit in the target register in a sequence of steps. More precisely, $U^{(k)}_{Gray}$ is carried out in $2^{r_c}+1$ phases, with each phase $p$ implementing a unitary $U_p$:
\begin{enumerate}
\item Phase 1. For all $i\in[r_t]$, $U_1$ applies $R(\alpha_{0^{r_c}t_i^{(k)}})$ (see Eqs.~\eqref{eq:rotation} and~\eqref{eq:alpha}) to the $i$-th qubit in $\sf T$ if $0^{r_c}t_i^{(k)} \in F_k$.
\item Phases $2\le p\le 2^{r_c}$. $U_{p}$ consists of two steps:\begin{enumerate}
\item Step $p.1$: Apply a unitary transformation $C_{p.1}$ satisfying, $\forall x\in\mbox{$\{0,1\}^n$}$,
\begin{align}\label{eq:step_p1}
&\ket{x_{control}}_{\sf C}\ket{\langle c_{p-1}^{j_1 }t_1^{(k)},x\rangle,\langle c_{p-1}^{j_2 }t_2^{(k)},x\rangle,\cdots,\langle c_{p-1}^{j_{r_t} }t_{r_t}^{(k)},x\rangle}_{\sf T}\nonumber\\ \xrightarrow{C_{p.1}} &\ket{x_{control}}_{\sf C}\ket{\langle c_{p}^{j_1 }t_1^{(k)},x\rangle,\langle c_{p}^{j_2 }t_2^{(k)},x\rangle,\cdots,\langle c_{p}^{j_{r_t} }t_{r_t}^{(k)},x\rangle}_{\sf T} \nonumber \\
=&\ket{x_{control}}_{\sf C}\ket{\langle c_{p-1}^{j_1 }t_1^{(k)},x \rangle \oplus x_{h_{j_1 p}},\langle c_{p-1}^{j_2 }t_2^{(k)},x\rangle \oplus x_{h_{j_2 p}},\cdots,\langle c_{p-1}^{j_{r_t} }t_{r_t}^{(k)},x\rangle \oplus x_{h_{j_{r_t}p}}}_{\sf T}.
\end{align}
Note that each update $\langle c^{j_i}_{p-1} t_i^{(k)},x \rangle\rightarrow \langle c^{j_i}_{p} t_i^{(k)},x \rangle$ changes the prefix from $c^{j_i}_{p-1}$ to $c^{j_i}_{p}$, and can be implemented by a $\mathsf{CNOT}$ with control qubit $\ket{x_{h_{j_i p}}}$ and target being the $i$-th qubit in ${\sf T}$.
\item Step $p.2$: For all $i\in[r_t]$, apply $R(\alpha_{c^{j_i }_pt_{i}^{(k)}})$ to the $i$-th qubit in $\sf T$ if $c^{j_i}_pt_{i}^{(k)}\in F_k$, where $\alpha_{c_p^{j_i }t_i^{(k)}}$ is defined in Eq. \eqref{eq:alpha}.
\end{enumerate}
\item Phase $2^{r_c}+1$. $U_{2^{r_c}+1}$ carries out a transformation satisfying, $\forall x\in\mbox{$\{0,1\}^n$}$,
\begin{align}\label{eq:phase_2rc+1}
&\ket{x_{control}}_{\sf C}\ket{\langle c_{2^{r_c}}^{j_1 }t_1^{(k)},x\rangle,\langle c_{2^{r_c}}^{j_2 }t_2^{(k)},x\rangle,\cdots,\langle c_{2^{r_c}}^{j_{r_t} }t_{r_t}^{(k)},x\rangle}_{\sf T}\nonumber\\
\xrightarrow{U_{2^{r_c}+1}} &\ket{x_{control}}_{\sf C}\ket{\langle c_{1}^{j_1 }t_1^{(k)},x\rangle,\langle c_{1}^{j_2 }t_2^{(k)},x\rangle,\cdots,\langle c_{1}^{j_{r_t} }t_{r_t}^{(k)},x\rangle}_{\sf T} \nonumber\\
=&\ket{x_{control}}_{\sf C}\ket{\langle c_{2^{r_c}}^{j_1 }t_1^{(k)},x \rangle \oplus x_{h_{j_1 1}},\langle c_{2^{r_c}}^{j_2 }t_2^{(k)},x\rangle \oplus x_{h_{j_2 1}},\cdots,\langle c_{2^{r_c}}^{j_{r_t} }t_{r_t}^{(k)},x\rangle\oplus x_{h_{j_{r_t} 1}}}_{\sf T}.
\end{align}
Each update $\langle c_{2^{r_c}}^{j_i }t_i^{(k)},x\rangle\rightarrow \langle c_{1}^{j_i }t_i^{(k)},x\rangle$ changes the last prefix $c_{2^{r_c}}^{j_i }$ to the first one $c_{1}^{j_i }$, which can be implemented by a CNOT with the $i$-th qubit in ${\sf T}$ as the target, controlled by $\ket{x_{h_{j_i 1}}}$.
\end{enumerate}
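The Gray-code bookkeeping above is entirely classical and can be tabulated in advance. As a minimal illustrative sketch (not part of the construction itself), the following Python snippet computes the flip positions $h_{jp}$, assuming the formula $h_{jp}=((\zeta(p-1)+j-2)\bmod r_c)+1$ used in \S\ref{sec:diag_without_ancilla_path} below, where $\zeta(m)$ denotes the ruler function, i.e., one plus the number of trailing zeros of $m$ (cf. Eq.~\eqref{eq:h1j}); the final assertion checks the fact, used repeatedly in our depth analyses, that each flip position $k$ occurs exactly $2^{r_c-k}$ times.
\begin{verbatim}
from collections import Counter

def zeta(m: int) -> int:
    # ruler function: 1 + number of trailing zeros of m; the bit
    # flipped by the standard reflected Gray code at step m
    return (m & -m).bit_length()

def flip_positions(rc: int, j: int) -> list:
    # h_{jp}: the bit in which c_{p-1}^j and c_p^j differ,
    # for p = 2, 3, ..., 2^rc
    return [((zeta(p - 1) + j - 2) % rc) + 1
            for p in range(2, 2 ** rc + 1)]

rc = 5
counts = Counter(flip_positions(rc, 1))
assert all(counts[k] == 2 ** (rc - k) for k in range(1, rc + 1))
\end{verbatim}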
Let $\mathcal{D}(C_{p.1})$ and $\mathcal{S}(C_{p.1})$ denote the circuit depth and size, respectively, required to implement $C_{p.1}$ (Eq. \eqref{eq:step_p1}) under arbitrary graph constraint.
\begin{lemma}\label{lem:Ck}
For all $k\in[\ell]$, the circuit $C_k$ in Eq.\eqref{eq:Ck} can be implemented by a quantum circuit of depth $O\left(n^2+2^{r_c}+\sum_{p=2}^{2^{r_c}}\mathcal{D}(C_{p.1})\right)$ and size $O\left(n^2+r_t2^{r_c}+\sum_{p=2}^{2^{r_c}}\mathcal{S}(C_{p.1})\right)$ under arbitrary graph constraint.
\end{lemma}
\begin{proof}
The circuit implements precisely the description above; we give a proof of its correctness in Appendix \ref{append:Ck_correctness} for completeness.
We analyze the complexity as follows. By Lemma \ref{lem:cnot_circuit}, $U_{Gen}^{(k)}$ (Eq. \eqref{eq:Ugen_Graph}) can be implemented by a CNOT circuit of depth and size $O(n^2)$.
Now we discuss the circuit depth and size of $U^{(k)}_{Gray}$.
Phase 1 and each Step $p.2$ (for $p\in\{2,3,\ldots,2^{r_c}\}$) consist of $R(\theta)$ gates applied to distinct qubits of the target register, and can therefore be implemented in depth $1$ and size $O(r_t)$. The depth and size of Step $p.1$ are $\mathcal{D}(C_{p.1})$ and $\mathcal{S}(C_{p.1})$ by definition.
For any $i\in[r_t]$, $c_{2^{r_c}}^{j_i}$ and $c_1^{j_i}$ differ in the $h_{j_i 1}$-th bit, so $U_{2^{r_c}+1}$ amounts to adding $x_{h_{j_i 1}}$ to the $i$-th qubit of target register $\sf T$ for each $i$; that is, $U_{2^{r_c}+1}$ is a CNOT circuit.
Again by Lemma \ref{lem:cnot_circuit}, Phase $2^{r_c}+1$ can be implemented by a circuit of depth and size $O(n^2)$.
Circuit $C_k$ thus has total depth $O(n^2)+O(1)+\sum_{p=2}^{2^{r_c}}(1+\mathcal{D}(C_{p.1}))+O(n^2)=O(n^2+2^{r_c}+\sum_{p=2}^{2^{r_c}}\mathcal{D}(C_{p.1}))$, and total size $O(n^2)+O(r_t)+\sum_{p=2}^{2^{r_c}}(r_t+\mathcal{S}(C_{p.1}))+O(n^2)=O(n^2+r_t2^{r_c}+\sum_{p=2}^{2^{r_c}}\mathcal{S}(C_{p.1}))$.
\end{proof}
\paragraph{Circuit construction for $\mathcal{R}$.}
\begin{lemma}\label{lem:reset}
Unitary transformation $\mathcal{R}$ (Eq. \eqref{eq:reset}) can be implemented by a quantum circuit of depth and size $O(n^2)$ under arbitrary graph constraint.
\end{lemma}
\begin{proof}
$\mathcal{R}$ is an invertible linear transformation over $\mathbb{F}_2$ acting on qubits $[n]-[r_c]$. The result follows from Lemma~\ref{lem:cnot_circuit}.
\end{proof}
\paragraph{Circuit construction for $\Lambda_{r_c}$.}
\begin{lemma}\label{lem:cnot_path_constraint}
CNOT gate ${\sf CNOT}_v^u$ can be implemented by a CNOT circuit of depth and size $O(d(u,v))$ under arbitrary graph constraint, where $d(u,v)$ is the distance (i.e., the length of a shortest path) between vertices $u$ and $v$ in $G$.
\end{lemma}
\begin{proof}
Let the nodes along the shortest path in $G$ from $u$ to $v$ be $u_0, u_1, \cdots, u_{d}$ where $d=d(u,v)$, $u=u_0$ and $v=u_d$. $\Cnot_v^u$ can be implemented by the circuit in Fig. \ref{fig:cnot_path}, which has depth and size $O(d)$, and consists only of CNOT gates acting on adjacent qubits.
\begin{figure}[!hbt]
\centerline
{
\Qcircuit @C=0.6em @R=1em {
\lstick{\scriptstyle u=u_0 ~~\ket{x}}&\ctrl{5}&\qw &\push{\scriptstyle \ket{x}}&&&\push{\scriptstyle \ket{x}}& \qw & \qw &\qw & \qw & \ctrl{1} & \qw & \qw & \qw & \qw & \qw &\qw &\qw &\qw &\ctrl{1} &\qw &\qw &\qw &\qw &\qw &\qw &\rstick{\scriptstyle \ket{x}}\\
\lstick{\scriptstyle u_1~~\ket{y_1}} &\qw &\qw &\push{\scriptstyle\ket{y_1}}&&&\push{\scriptstyle \ket{y_1}}&\qw &\qw &\qw & \ctrl{1} & \targ & \ctrl{1} &\qw & \qw & \qw & \qw & \qw& \qw &\ctrl{1} &\targ&\ctrl{1} &\qw &\qw &\qw &\qw &\qw &\rstick{\scriptstyle \ket{y_1}}\\
\lstick{\scriptstyle u_2~~\ket{y_2}} &\qw &\qw &\push{\scriptstyle\ket{y_2}}&&&\push{\scriptstyle \ket{y_2}}&\qw &\qw & \ctrl{1} & \targ & \qw & \targ & \ctrl{1} & \qw &\qw & \qw &\qw &\ctrl{1} &\targ &\qw &\targ& \qw &\ctrl{1}&\qw &\qw &\qw &\rstick{\scriptstyle \ket{y_2}}\\
\lstick{\vdots~~}&\qw &\qw &\vdots&=&&\push{\vdots~~}&\qw & \ctrl{1} &\targ & \qw & \qw & \qw & \targ & \ctrl{1} & \qw &\qw &\ctrl{1}&\targ &\qw &\qw &\qw &\qw &\targ&\qw&\ctrl{1}&\qw &\rstick{\vdots}\\
\lstick{\scriptstyle u_{d-1}~~\ket{y_{d-1}}}&\qw &\qw &\push{\scriptstyle\ket{y_{d-1}}}&&&\push{\scriptstyle \ket{y_{d-1}}}&\ctrl{1} & \targ &\qw & \qw & \qw & \qw & \qw & \targ & \ctrl{1} &\qw &\targ &\qw &\qw &\qw &\qw &\qw &\qw& \qw &\targ &\qw &\rstick{\scriptstyle \ket{y_{d-1}}}\\
\lstick{\scriptstyle v=u_{d}~~\ket{y_d}}&\targ&\qw &\push{\scriptstyle\ket{x\oplus y_d}}&&&\push{\scriptstyle \ket{y_d}}&\targ & \qw &\qw & \qw & \qw & \qw & \qw & \qw &\targ &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\qw &\rstick{\scriptstyle \ket{x\oplus y_d}}\\
}
}
\caption{Implementation of a $\Cnot_v^u$ gate under path constraint by $O(d)$ CNOT gates acting on adjacent qubits.}\label{fig:cnot_path}
\end{figure}
\end{proof}
In \cite{sun2021asymptotically}, the unitary $\Lambda_{r_c}$ is implemented recursively in depth $O(2^{r_c}/r_c)$. Under a constraint graph, however, the $r_c$ qubits of $\Lambda_{r_c}$ are not necessarily connected, and we therefore cannot implement $\Lambda_{r_c}$ recursively as before.
Fortunately, we can still realize $\Lambda_{r_c}$ with only a modest $O(n)$-factor overhead.
\begin{lemma}\label{lem:Lambda_rc}
The $r_c$-qubit diagonal unitary matrix $\Lambda_{r_c}$ (Eq.\eqref{eq:Lambda_rc}) can be implemented by a quantum circuit of depth and size $O(n2^{r_c})$ under arbitrary graph constraint.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:diag_size}, in the absence of any graph constraint, $\Lambda_{r_c}$ can be implemented by a quantum circuit of size (and thus also depth) $O(2^{r_c})$.
Under arbitrary graph constraint, the distance between control and target qubits of any CNOT gate is at most $O(n)$, which can be realized by a circuit of size $O(n)$ (Lemma \ref{lem:cnot_path_constraint}). Therefore, the required circuit size and depth for $\Lambda_{r_c}$ is $O(n)\cdot O(2^{r_c})=O(n2^{r_c})$.
\end{proof}
\begin{lemma}\label{lem:diag_without_ancilla_correctness}
Any diagonal unitary matrix $\Lambda_n$ can be realized by the quantum circuit
\begin{equation}\label{eq:framework_withoutancilla}
(\Lambda_{r_c}\otimes\mathcal{R})\Pi^{\dagger} C_{\ell}C_{\ell-1}\cdots C_{1}\Pi
\end{equation}
shown as in Fig. \ref{fig:diag_without_ancilla_framwork}, under arbitrary graph constraint.
\end{lemma}
\begin{proof}
See Appendix~\ref{append:diag_without_ancilla_correctness}.
\end{proof}
\subsection{Efficient circuits: general framework}
\label{sec:general_framework_noancilla}
In~\cite{sun2021asymptotically}, there is an $O(2^n/n)$-depth and $O(2^n)$-size circuit construction for general $n$-qubit diagonal unitary $\Lambda_n$ under no graph constraints, using no ancillary qubits. From this, it is straightforward, via Lemma~\ref{lem:cnot_path_constraint}, to obtain upper bounds on the circuit depth required under various graph constraints (see Table~\ref{tab:lambda-bounds}). These bounds lead to an increase in circuit depth by a factor of $n\cdot \diam(G)$, which may seem unavoidable. However, we show that this is not the case, and savings can be had by the constructions we give in the remainder of this section. Note that for $\Path_n$, $\Tree_n(2)$ and $\Expander_n$, our constructed circuits have depth either $O(2^n/n)$ or $O(\log (n)\cdot 2^n/n)$, which are almost tight as a lower bound of $\Omega(2^n/n)$ is known for QSP (or diagonal unitary operations) even without graph constraints. For general graphs, our constructed circuit has depth $O(2^n)$, which is tight as the $\Star_n$ graph requires depth $\Omega(2^n)$ (Corollary \ref{coro:depth_lower_star}).
\begin{table}[h!]
\centering
\begin{tabular}{c|cccc}
& $\Path_n$ & $\Tree_n(2)$ & $\Expander_n$ & General $G$\\ \hline
$\diam(G)$ & $n$ & $\log n$ & $\log n$ & $n$\\ \hline
Depth (ub, trivial) & $n2^n$ & $\log(n)2^n$ & $\log(n)2^n$ &$n2^n$ \\
Depth (ub, new) & $2^n/n$ [Lem.~\ref{lem:diag_path_withoutancilla}] & $\log(n) 2^n/n$ [Lem.~\ref{lem:diag_bianrytree_withoutancilla}]& $\log(n)2^n/n$ [Lem.~\ref{lem:diag_expander_withoutancilla}] & $2^n$ [Lem. \ref{lem:diag_graph_withoutancilla}]
\end{tabular}
\caption{Circuit depth upper bounds (ub) required to implement $\Lambda_n$ in circuits under various graph constraints. The trivial bounds are based on the unconstrained construction from~\cite{sun2021asymptotically} and Lemma~\ref{lem:cnot_path_constraint}, which implies that, under constraint graph $G$, the required circuit depth is $O(n\cdot \diam(G)\cdot 2^n/n)$. Big O notation is implied.
}
\label{tab:lambda-bounds}
\end{table}
To achieve the more efficient constructions summarized in the `new' upper-bound row of Table~\ref{tab:lambda-bounds}, we make two design choices for each constraint graph type:
\begin{enumerate}
\item The choice of control and target registers $\sf C$ and $\sf T$.
\item The choice of Gray codes, as specified by the integers $j_1, j_2, \ldots, j_{r_t}$ used to implement the $C_k$ operators.
\end{enumerate}
We adopt the following general strategy. By Lemma~\ref{lem:cnot_path_constraint}, a constraint graph $G$ incurs an overhead of $d(u,v)$ when implementing a CNOT gate between vertices $u,v\in V$. To minimize the increase in circuit depth, we aim to choose $\sf C$ and $\sf T$
such that the control and target qubits are close for as many CNOT gates as possible. Ideally, one would like all CNOT gates to have control and target qubits $O(1)$-distance apart. As this does not appear to be possible, we instead design $\sf C$ and $\sf T$ and the circuits $C_i$ in such a way that the number of CNOT gates acting across distance $d$ decays exponentially with $d$, leading to only a small ($O(\log n)$) or even constant overall overhead.
As mentioned in \S\ref{sec:diag_without_ancilla_framework}, we also adapt the choice of Gray codes to account for the graph constraints, and the choices of control and target registers (see Tab.~\ref{tab:choice-of-gray}).
In what follows, we give further details on these design choices for path and grid (\S \ref{sec:diag_without_ancilla_path}), complete binary tree (\S \ref{sec:diag_without_ancilla_binarytree}), expander graph (\S \ref{sec:diag_without_ancilla_expander}) and general graph (\S \ref{sec:diag_without_ancilla_graph}) constraints, and analyze the corresponding circuit complexities.
\begin{table}[h!]
\centering
\begin{tabular}{c|c|cccc}
& $K_n$~\cite{sun2021asymptotically} & $\Path_n$ & $\Tree_n(2)$ & $\Expander_n$ & General $G$\\ \hline
$j_i$ & $i$ &$ i$ & $1+A(i)$ & $1$ & $1$
\end{tabular}
\caption{Choice of integers $j_i$ ($i=1, \ldots, r_t$) which specify the $r_t$ Gray codes used in the implementation of $C_k$ operators, for various graph constraints. Here $A(i) = (i-1)(2^{a+1}-2)$, with $ a=\lceil \log(2\log n)\rceil$. $K_n$ is the complete graph on $n$ vertices, and corresponds to no connectivity constraints.}
\label{tab:choice-of-gray}
\end{table}
\subsection{Efficient circuits under $\Path_n$ and $\Grid_{n}^{n_1,n_2,\ldots,n_d}$ constraints}
\label{sec:diag_without_ancilla_path}
\paragraph{Choice of $\sf C$ and $\sf T$.}
If $n-2\lceil \log n\rceil$ is even, let $\tau=2\lceil \log n\rceil$; if $n-2\lceil \log n\rceil$ is odd, let $\tau=2\lceil \log n\rceil+1$. Then $2\lceil \log n\rceil\le \tau\le 2\lceil \log n\rceil+1$ and $n-\tau$ is even.
The control and target registers are taken to be
\[{\sf C}=\left\{2i-1:\forall i\in[r_t]\right\}\cup \left\{2r_t+j:\forall j\in[\tau]\right\} \quad \text{and} \quad {\sf T}=\left\{2i:\forall i\in[r_t]\right\},\]
respectively (see Fig. \ref{fig:pi_n^k}, lower part), where $r_c=\frac{n+\tau}{2}$ and $r_t=\frac{n-\tau}{2}$.
\paragraph{Implementation of $\Pi$.}
In this subsection, the unitary $\Pi$ of Eq. \eqref{eq:pi} is denoted $\Pi^{path}$, and implemented in the following way.
\begin{lemma}[]\label{lem:pi_path}
The transformation $\Pi^{path}$, defined by
\begin{equation*}
\bigotimes_{i=1}^n\ket{x_i}_{i}\xrightarrow{\Pi^{path}} \bigotimes_{i=1}^{r_t}\ket{x_i}_{2i-1}
\bigotimes_{i=1}^{\tau}\ket{x_{r_t+i}}_{n-\tau+i}
\bigotimes_{i=1}^{r_t}\ket{x_{i+r_c}}_{2i} \defeq\ket{x_{control}}_{\sf C}\ket{x_{target}}_{\sf T},
\end{equation*}
i.e., which moves the last $r_t$ qubit states to the first $r_t$ even positions, can be implemented by a CNOT circuit of depth $O(n)$ and size $O(n^2)$ under $\Path_n$ constraint.
\end{lemma}
\begin{proof}
The effect of $\Pi^{path}$ is shown in Fig. \ref{fig:pi_n^k}, and the transformation can be implemented by a sequence of $O(n^2)$ SWAP operations: each qubit $i\in \{ (n+\tau)/2+1, (n+\tau)/2+2,\ldots, n\}$ can be moved from its original position to its final position using $O(n)$ SWAP operations between adjacent qubits, and the SWAPs for different qubits can be implemented in parallel in a pipelined fashion, giving depth $O(n)$.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}
\draw (-2,0) -- (1.2,0) (1.8,0) -- (5.2,0) (5.8,0) -- (7,0);
\draw [fill=black] (-2,0) circle (0.05)
(-1,0) circle (0.05)
(0,0) circle (0.05)
(1,0) circle (0.05)
(2,0) circle (0.05)
(3,0) circle (0.05);
\draw [fill=black] (4,0) circle (0.05)
(5,0) circle (0.05)
(6,0) circle (0.05)
(7,0) circle (0.05);
\draw (-3,0) node{qubit};
\draw (-3,-0.8) node{state};
\draw (-3,-2) node{qubit};
\draw (-3,-2.8) node{state};
\draw (1.2,0) node[anchor=west]{\scriptsize $\cdots$} (5.2,0) node[anchor=west]{\scriptsize $\cdots$};
\draw (-2,-0.3) node{\scriptsize $1$} (-2,-0.8) node{\scriptsize $\ket{x_1}$};
\draw (-1,-0.3) node{\scriptsize $2$} (-1,-0.8) node{\scriptsize $\ket{x_2}$};
\draw (0,-0.3) node{\scriptsize $3$} (0,-0.8) node{\scriptsize $\ket{x_3}$}
(1,-0.3) node{\scriptsize $4$} (1,-0.8) node{\scriptsize $\ket{x_4}$}
(2,-0.3) node{\scriptsize ${r_c-1}$} (2,-0.8) node{\scriptsize $\ket{x_{r_c-1}}$}
(3,-0.3) node{\scriptsize ${r_c}$} (3,-0.8) node{\scriptsize $\ket{x_{r_c}}$}
(4,-0.3) node{\scriptsize ${r_c+1}$} (4,-0.8) node{\scriptsize \color{purple} $\ket{x_{r_c+1}}$}
(5,-0.3) node{\scriptsize ${r_c+2}$} (5,-0.8) node{\scriptsize \color{purple} $\ket{x_{r_c+2}}$}
(6,-0.3) node{\scriptsize ${n-1}$} (6,-0.8) node{\scriptsize \color{purple} $\ket{x_{n-1}}$}
(7,-0.3) node{\scriptsize ${n}$} (7,-0.8) node{\scriptsize \color{purple} $\ket{x_n}$};
\draw (1.2,-2) node[anchor=west]{\scriptsize $\cdots$} (5.2,-2) node[anchor=west]{\scriptsize $\cdots$};
\draw (-2,-2) -- (1.2, -2) (1.8,-2) -- (5.2,-2) (5.8,-2) -- (7,-2);
\draw [fill=black] (-2,-2) circle (0.05) (0,-2) circle (0.05)
(2,-2) circle (0.05) (4,-2) circle (0.05) (5,-2) circle (0.05) (6,-2) circle (0.05) (7,-2) circle (0.05);
\draw [fill=purple, draw=purple] (-1,-2) circle (0.05) (1,-2) circle (0.05) (3,-2) circle (0.05);
\draw (-2,-2.3) node{\scriptsize $1$} (-2,-2.8) node{\scriptsize $\ket{x_1}$};
\draw (-1,-2.3) node{\scriptsize $2$} (-1,-2.8) node{\scriptsize \color{purple} $\ket{x_{r_c+1}}$};
\draw (0,-2.3) node{\scriptsize $3$} (0,-2.8) node{\scriptsize $\ket{x_2}$}
(1,-2.3) node{\scriptsize $4$} (1,-2.8) node{\scriptsize \color{purple} $\ket{x_{r_c+2}}$}
(2,-2.3) node{\scriptsize ${n-\tau-1}$} (2,-2.8) node{\scriptsize $\ket{x_{r_t}}$}
(3,-2.3) node{\scriptsize ${n-\tau}$} (3,-2.8) node{\scriptsize \color{purple} $\ket{x_{n}}$}
(4,-2.3) node{\scriptsize ${n-\tau+1}$} (4,-2.8) node{\scriptsize $\ket{x_{r_t+1}}$}
(5,-2.3) node{\scriptsize ${n-\tau+2}$} (5,-2.8) node{\scriptsize $\ket{x_{r_t+2}}$}
(6,-2.3) node{\scriptsize ${n-1}$} (6,-2.8) node{\scriptsize $\ket{x_{r_c-1}}$}
(7,-2.3) node{\scriptsize ${n}$} (7,-2.8) node{\scriptsize $\ket{x_{r_c}}$};
\draw (2.5,-1.5) node{$\downarrow~\Pi^{path}$};
\end{tikzpicture}
\caption{$\Pi^{path}$. In the lower figure, the qubits in red form register $\sf T$, and those in black form register $\sf C$.}
\label{fig:pi_n^k}
\end{figure}
\end{proof}
\paragraph{Implementation of $C_k$.}
We first state and prove the following lemma, which will be used to prove the circuit complexity required to implement $C_k$ under $\Path_n$ constraint (Lemma~\ref{lem:Ck_path}).
\begin{lemma}\label{lem:U(k)_without_ancilla}
Let $x=x_1\cdots x_{r_c}\in \mbox{$\{0,1\}$}^{r_c}$ and $y=y_1\cdots y_{r_t}\in\mbox{$\{0,1\}$}^{r_t}$. For all $k\le r_c$, define the unitary $U^{(k)}$ (see Fig.~\ref{fig:U(k)_without_ancilla}) by
\begin{align}\label{eq:U(k)_without_ancilla}
&\bigotimes_{i=1}^{r_t}\ket{x_iy_i}_{\{2i-1,2i\}}\bigotimes_{i=r_t+1}^{r_c}\ket{x_{i}}_{r_t+i}\xrightarrow{U^{(k)}}\nonumber\\
&\left\{
\begin{array}{ll}
\bigotimes_{i=1}^{r_t}\ket{x_i(y_i\oplus x_{i+k-1})}_{\{2i-1,2i\}}\bigotimes_{i=r_t+1}^{r_c}\ket{x_{i}}_{r_t+i}, &\text{if~}k\le \tau+1, \\
\bigotimes_{i=1}^{r_c-k+1}\ket{x_i(y_i\oplus x_{i+k-1})}_{\{2i-1,2i\}}\bigotimes_{i=r_c-k+2}^{r_t}\ket{x_i(y_i
\oplus x_{i-r_c+k-1})}_{\{2i-1,2i\}}\bigotimes_{i=r_t+1}^{r_c}\ket{x_{i}}_{r_t+i},&\text{if~}k\ge \tau+2. \\
\end{array}\right.
\end{align}
Under $\Path_n$ constraint, $U^{(k)}$ can be implemented by a circuit of depth $O(k)$ and size $O(r_t k)$ for $k\le \tau+1$, and of depth and size $O(r_tk)$ for $k\ge \tau+2$.
\end{lemma}
\begin{figure}[!hbt]
\centering
\begin{tikzpicture}
\draw[purple] (0.5,2) parabola bend (1.25,2.25) (2,2) (2.5,2) parabola bend (3.25,2.25) (4,2) (4.5,2) parabola bend (5.25,2.25) (6,2) (6.5,2) parabola bend (7.25,2.25) (8,2) (1.5,2) parabola bend (2.25,2.25) (3,2) (3.5,2) parabola bend (4.25,2.25) (5,2) (5.5,2) parabola bend (6.25,2.25) (7,2) (7.5,2) parabola bend (8,2.25) (8.5,2);
\draw[purple] (0.5,1) parabola bend (2.75,1.4) (5,1) (1.5,1) parabola bend (3.75,1.4) (6,1) (2.5,1) parabola bend (4.75,1.4) (7,1) (3.5,1) parabola bend (5.75,1.4) (8,1) (4.5,1) parabola bend (6.5,1.4) (8.5,1) (5.5,1) parabola bend (7.25,1.4) (9,1) (0,1) parabola bend (3.25,0.3) (6.5,1) (1,1) parabola bend (4.25,0.3) (7.5,1);
\draw (0,1) -- (9,1);
\draw (0,2) -- (9,2);
\draw [fill=black] (0,1) circle (0.05) (1,1) circle (0.05) (2,1) circle (0.05) (3,1) circle (0.05) (4,1) circle (0.05) (5,1) circle (0.05) (6,1) circle (0.05) (7,1) circle (0.05);
\draw [fill=black] (0,2) circle (0.05) (1,2) circle (0.05) (2,2) circle (0.05) (3,2) circle (0.05) (4,2) circle (0.05) (5,2) circle (0.05) (6,2) circle (0.05) (7,2) circle (0.05);
\draw [fill=white] (0.5,1) circle (0.05) (1.5,1) circle (0.05) (2.5,1) circle (0.05) (3.5,1) circle (0.05) (4.5,1) circle (0.05) (5.5,1) circle (0.05) (6.5,1) circle (0.05) (7.5,1) circle (0.05);
\draw [fill=white] (0.5,2) circle (0.05) (1.5,2) circle (0.05) (2.5,2) circle (0.05) (3.5,2) circle (0.05) (4.5,2) circle (0.05) (5.5,2) circle (0.05) (6.5,2) circle (0.05) (7.5,2) circle (0.05);
\draw [fill=black,draw=black] (8,1) circle (0.05) (8.5,1) circle (0.05) (9,1) circle (0.05) (8,2) circle (0.05) (8.5,2) circle (0.05) (9,2) circle (0.05);
\draw [fill=black] (0,1) node[anchor=north]{\scriptsize $x_1$} (1,1) node[anchor=north]{\scriptsize $x_2$} (2,1) node[anchor=north]{\scriptsize $x_3$} (3,1) node[anchor=north]{\scriptsize $x_4$} (4,1) node[anchor=north]{\scriptsize $x_5$} (5,1) node[anchor=north]{\scriptsize $x_{r_t-2}$} (6,1) node[anchor=north]{\scriptsize $x_{r_t-1}$} (7,1) node[anchor=north]{\scriptsize $x_{r_t}$};
\draw [fill=black] (0,2) node[anchor=north]{\scriptsize $x_1$} (1,2) node[anchor=north]{\scriptsize $x_2$} (2,2) node[anchor=north]{\scriptsize $x_3$} (3,2) node[anchor=north]{\scriptsize $x_4$} (4,2) node[anchor=north]{\scriptsize $\cdots$} (5,2) node[anchor=north]{\scriptsize $x_{r_t-2}$} (6,2) node[anchor=north]{\scriptsize $x_{r_t-1}$} (7,2) node[anchor=north]{\scriptsize $x_{r_t}$};
\draw [fill=black] (0.5,1) node[anchor=north]{\scriptsize $y_1$} (1.5,1) node[anchor=north]{\scriptsize $y_2$} (2.5,1) node[anchor=north]{\scriptsize $y_3$} (3.5,1) node[anchor=north]{\scriptsize $y_4$} (4.5,1) node[anchor=north]{\scriptsize $\cdots$} (5.5,1) node[anchor=north]{\scriptsize $y_{r_t-2}$} (6.5,1) node[anchor=north]{\scriptsize $y_{r_t-1}$} (7.5,1) node[anchor=north]{\scriptsize $y_{r_t}$};
\draw [fill=black] (0.5,2) node[anchor=north]{\scriptsize $y_1$} (1.5,2) node[anchor=north]{\scriptsize $y_2$} (2.5,2) node[anchor=north]{\scriptsize $y_3$} (3.5,2) node[anchor=north]{\scriptsize $y_4$} (4.5,2) node[anchor=north]{\scriptsize $\cdots$} (5.5,2) node[anchor=north]{\scriptsize $y_{r_t-2}$} (6.5,2) node[anchor=north]{\scriptsize $y_{r_t-1}$} (7.5,2) node[anchor=north]{\scriptsize $y_{r_t}$};
\draw [fill=black] (8,1) node[anchor=north]{\scriptsize $x_{r_t+1}$} (8.5,1)node[anchor=north]{\scriptsize $\cdots$} (9,1) node[anchor=north]{\scriptsize $x_{r_c}$} (8,2) node[anchor=north]{\scriptsize $x_{r_t+1}$} (8.5,2) node[anchor=north]{\scriptsize $\cdots $} (9,2) node[anchor=north]{\scriptsize $x_{r_c}$};
\draw (-1.5,1) node{\scriptsize Case 2: $k\ge \tau+2$} (-1.5,2) node{\scriptsize Case 1: $k\le \tau+1$};
\end{tikzpicture}
\caption{The effect of $U^{(k)}$. Red curves indicate that the value of a black qubit has been added (xor-ed) to the corresponding white qubit.}
\label{fig:U(k)_without_ancilla}
\end{figure}
\begin{proof}
\textbf{Case 1 ($k\le \tau+1$).} If $k=1$, apply the CNOT circuit $\prod_{j=1}^{r_t}\textsf{CNOT}^{2j-1}_{2j}$ of depth $1$ and size $O(r_t)$.
For $2\le k\le \tau+1$, let us first show how to implement $y_1\oplus x_k$: (i) use a sequence of $2k-4$ SWAPs to move $y_1$ and $x_k$ adjacent to each other; (ii) apply a CNOT gate to change $\ket{y_1}$ to $\ket{y_1\oplus x_k}$; (iii) undo the first sequence of SWAPs to return the qubits to their original positions. The overall cost is $O(k)$ SWAP gates and one CNOT gate. Since each SWAP can be implemented by three CNOT gates, the cost to implement $y_1\oplus x_k$ is $O(k)$ CNOT gates. We can effect $y_2\oplus x_{k+1}$, $\ldots$, $y_{r_t}\oplus x_{r_t+k-1}$ similarly, and these can be implemented in parallel. The overall depth and size are $O(k)$ and $O(k r_t)$, respectively, as claimed.
\textbf{Case 2 ($k\ge \tau+2$).} $U^{(k)}$ can be implemented by a CNOT circuit containing two parts: the first part adds $x_{i+k-1}$ to $y_i$ (for each $i=1,\ldots, r_c-k+1$), where the two qubits involved are $O(k)$ apart. The second part adds $x_{i-r_c+k-1}$ to $y_i$ (for each $i=r_c-k+2,\ldots, r_t$), where the two qubits involved are $O(r_c-k)$ apart. Since $k\ge \tau + 2$, we have $r_c-k \le r_c-\tau -2 \le r_t - 2$.
Therefore, this CNOT circuit can be implemented in depth and size
\begin{align*}
& \ (r_c-k+1)\cdot O(k)+(r_t-(r_c-k+2)+1)\cdot O(r_c-k) & (\text{by Lemma \ref{lem:cnot_path_constraint}}) \\
= & \ r_t \cdot O(k) + k \cdot O(r_t) & (r_t < r_c \text{, and } r_c-k \le r_t-2) \\
= & \ O(r_t k).
\end{align*}
\end{proof}
\begin{lemma}\label{lem:Ck_path}
For all $k\in[\ell]$, $C_k$ (Eq.~\eqref{eq:Ck}) can be implemented by a quantum circuit of depth $O(2^{r_c})$ and size $O(n2^{r_c})$ under $\Path_n$ constraint.
\end{lemma}
\begin{proof}
First, we construct quantum circuits for $C_{p.1}$ (Eq. \eqref{eq:step_p1}) for all $p\in\{2,3,\ldots,2^{r_c}\}$. For every $i\in[r_t]$, choose $j_i=i$. The strings $c_{p-1}^i$ and $c_{p}^i$ in the $(r_c,i)$-Gray code differ in the $h_{ip}$-th bit for all $p\in\{2,3,\ldots,2^{r_c}\}$.
Recall that $C_{p.1}$ transforms prefix $c_{p-1}^{j_i}$ to $c_p^{j_i}$ in the $i$-th qubit of target register $\sf T$, i.e.,
\begin{align*}
&\bigotimes_{i=1}^{r_t}\ket{x_i,\langle c^i_{p-1}t_i^{(k)},x\rangle}_{\{2i-1,2i\}}\bigotimes_{i=r_t+1}^{r_c}\ket{x_{i}}_{r_t+i}
\xrightarrow{C_{p.1}} \bigotimes_{i=1}^{r_t}\ket{x_i,\langle c^i_{p}t_i^{(k)},x\rangle}_{\{2i-1,2i\}}\bigotimes_{i=r_t+1}^{r_c}\ket{x_{i}}_{r_t+i}\\
=&\bigotimes_{i=1}^{r_t}\ket{x_i,\langle c^i_{p-1}t_i^{(k)},x\rangle\oplus x_{h_{ip}}}_{\{2i-1,2i\}}\bigotimes_{i=r_t+1}^{r_c}\ket{x_{i}}_{r_t+i},\forall x=x_1x_2\cdots x_n\in \mbox{$\{0,1\}^n$}.
\end{align*}
We now show that $C_{p.1}$ is equivalent to $U^{(h_{1p})}$, where $U^{(\cdot)}$ is defined in Eq.~\eqref{eq:U(k)_without_ancilla}. Define
\begin{equation*}
k' \defeq h_{1p} = \zeta(p-1) \quad (\text{see Eq.}~\eqref{eq:h1j}).
\end{equation*}
We consider two cases:
\begin{enumerate}
\item $k'\le \tau+1$: For all $i\in [r_t]$, we have
\[0 \le \zeta(p-1)+i-2 = k' + i -2
\le \tau + i -1
= r_c - r_t + i -1
\le r_c -1 \]
where the second equality is because of $r_c = r_t + \tau$. This implies that
\[h_{ip} = ((\zeta(p-1)+i-2)\bmod r_c)+1 = \zeta(p-1) + i -1 = k'+i-1.
\]
Thus, the qubits $\ket{x_{k'}}$, $\ket{x_{k'+1}}$, $\ldots$, $\ket{x_{r_t+k'-1}}$ are exactly $\ket{x_{h_{1p}}}$, $\ket{x_{h_{2p}}}$, $\ldots$, $\ket{x_{h_{r_t p}}}$, which implies $U^{(h_{1p})}$ corresponds to the first case of Eq. \eqref{eq:U(k)_without_ancilla}.
%
\item $k'\ge \tau+2$: In this case,
\begin{align*}
0 \le \zeta(p-1)+i-2 = k' + i -2 \begin{cases}
\le r_c -1, &\quad \text{ if } i \le r_c - k' + 1,\\
\ge r_c, &\quad \text{ if } i \ge r_c - k' +2.
\end{cases}
\end{align*}
It follows that
\begin{align*}
h_{ip} &= ((\zeta(p-1)+i-2)\bmod r_c)+1\\
&= \begin{cases}
\zeta(p-1) + i -1 = k' + i -1, &\quad \text{ if } i \le r_c - k' + 1,\\
\left(\zeta(p-1) + i -2\right) -r_c +1 = k' +i -r_c -1, &\quad \text{ if } i \ge r_c - k' +2.
\end{cases}
\end{align*}
where the last line is due to the fact that $i\le r_t$ and $\zeta(p-1)\le r_c$.
Therefore, the qubits $\ket{x_{k'}}$, $\ket{x_{k'+1}}$, $\ldots$, $\ket{x_{r_c}}$, $\ket{x_1}$, $\ldots$, $\ket{x_{k'-\tau-1}}$ are exactly $\ket{x_{h_{1p}}}$, $\ket{x_{h_{2p}}}$, $\ldots$, $\ket{x_{h_{r_t p}}}$. Namely, $U^{(h_{1p})}$ is the same as defined in the second case of Eq. \eqref{eq:U(k)_without_ancilla}.
\end{enumerate}
We now analyze the circuit depth of $C_k$.
By Lemma \ref{lem:U(k)_without_ancilla}, the depth and size of $U^{(k')}$ are $O(k')$ and $O(r_tk')$, respectively, if $k'\le \tau+1$; the depth and size are both $O(r_tk')$ if $\tau+2\le k'\le r_c$.
Recall that for every $k''\in[r_c]$, there are $2^{r_c-k''}$ many $p\in \{2,3,\ldots,2^{r_c}\}$ satisfying $h_{1p} = k''$ (Lemma \ref{lem:GrayCode}). Thus, $U^{(k')}$ appears $2^{r_c-k'}$ times in Step $p$.1 ($C_{p.1}$) when we run all iterations $p\in\{2,3,\ldots,2^{r_c}\}$. With $\mathcal{D}(C_{p.1})$ and $\mathcal{S}(C_{p.1})$ denoting the circuit depth and size for $C_{p.1}$,
by Lemma \ref{lem:Ck}, $C_k$ has circuit depth
\begin{equation}\label{eq:ck-depth}
O(n^2+2^{r_c}+\sum_{p=2}^{2^{r_c}}\mathcal{D}(C_{p.1}))=O(n^2+2^{r_c})+\sum_{k'=1}^{\tau+1} O(k')2^{r_c-k'} +\sum_{k'=\tau+2}^{r_c}O(r_tk')2^{r_c-k'}= O(2^{r_c}),
\end{equation}
and circuit size
\[O(n^2+r_t2^{r_c}+\sum_{p=2}^{2^{r_c}}\mathcal{S}(C_{p.1}))=O(n^2+2^{r_c})+\sum_{k'=1}^{r_c}2^{r_c-k'}O(r_tk')= O(r_t2^{r_c}),\]
where we use the fact that $2\lceil\log(n)\rceil\le \tau\le 2\lceil\log(n)\rceil+1$.
\end{proof}
\paragraph{Remark.} The reason we choose $\tau \approx 2\lceil \log n\rceil$ is the following. The sum $\sum_{j=1}^n j \cdot 2^{-j}$ is at most $2$, and the first $2\log(n)$ terms contribute all but an $O(1/n)$ fraction of it; that is, if $\tau = 2\lceil\log n\rceil$ then
\begin{equation*}
\sum_{j=\tau}^n j\cdot 2^{-j} = O(1/n).
\end{equation*}
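Indeed, $\sum_{j=\tau}^{\infty} j\cdot 2^{-j} = (\tau+1)\cdot 2^{-\tau+1}$ and $2^{-\tau}\le n^{-2}$ for this choice of $\tau$, so the tail is $O(\log(n)/n^2)\subseteq O(1/n)$.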
In Eq.~\eqref{eq:ck-depth}, the circuit depth contains contributions from the terms \begin{equation*}
2^{r_c}\sum_{k'=1}^{\tau+1}O(k')2^{-k'} + 2^{r_c}r_t\sum_{k'=\tau+2}^{r_c}O(k')2^{-k'},
\end{equation*}
which, for each $k'$, can be understood roughly as $2^{r_c-k'}$ CNOT circuits in which each CNOT gate acts on qubits separated by distance $k'$. Noting that $\tau = 2\lceil\log n\rceil$, $r_t = (n-\tau)/2 \approx n/2$ and $r_c= (n+\tau)/2\approx n/2$, the second term has its factor of $r_t$ cancelled by the $O(1/n)$ factor coming from the tail of the series. In other words, the number of CNOT gates acting on qubits separated by a distance greater than $2\log n$ is exponentially small, and the cost of implementing those gates is suppressed accordingly. We take a similar approach with other graph constraints.
\paragraph{Implementation of $\Lambda_n$.}
Now putting everything together, we can obtain the complexity for general diagonal unitary matrices.
\begin{lemma}\label{lem:diag_path_withoutancilla}
Any $n$-qubit diagonal unitary matrix $\Lambda_n$ can be realized by a quantum circuit of depth $O(2^n/n)$ and size $O(2^n)$, under $\Path_n$ constraint without ancillary qubits.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:diag_without_ancilla_correctness}, $\Lambda_n$ can be implemented by the circuit in Fig.~\ref{fig:diag_without_ancilla_framwork}. Recall that $r_c=\frac{n+\tau}{2}$, $r_t=n-r_c$, $2\lceil\log(n)\rceil\le \tau\le 2\lceil\log(n)\rceil+1$ and $\ell\le \frac{2^{r_t+2}}{r_t+1}-1$. Combining Lemmas \ref{lem:pi_path}, \ref{lem:Ck_path}, \ref{lem:reset} and \ref{lem:Lambda_rc}, the total depth and size of $\Lambda_n$ are
\begin{align*}
&\text{depth:}~2O(n)+\ell\cdot O(2^{r_c})+O(n^2)+O(n2^{r_c})=O(2^n/n),\\
&\text{size:}~2O(n^2)+\ell \cdot O(r_t2^{r_c})+O(n^2)+O(n2^{r_c})=O(2^n),
\end{align*}
under $\Path_n$ constraint.
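Here $\ell\cdot O(2^{r_c})=O\big(\tfrac{2^{r_t+2}}{r_t+1}\cdot 2^{r_c}\big)=O(2^n/r_t)=O(2^n/n)$ since $r_t=\Theta(n)$, while the terms $O(n2^{r_c})=O(n^2 2^{n/2})$ and $O(n^2)$ are negligible in comparison.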
\end{proof}
This result can be extended to graph constraints admitting a Hamiltonian path, with the same upper bounds.
\begin{theorem}\label{thm:diag_hamiltonian_withoutancilla}
Let $G=(V,E)$ be a connected graph on $n$ vertices admitting a Hamiltonian path. Any $n$-qubit diagonal unitary matrix $\Lambda_n$ can be realized by a quantum circuit under $G$ constraint. The circuit uses no ancillary qubit, and has depth $O(2^n/n)$ and size $O(2^n)$.
\end{theorem}
\begin{proof}
By assumption, $G$ contains a Hamiltonian path $\Path_n$; restricting all two-qubit gates to this path, the result follows from Lemma \ref{lem:diag_path_withoutancilla}.
\end{proof}
\begin{corollary}\label{coro:diag_grid_withoutancilla}
For $\prod_{i=1}^d n_i=n$, any $n$-qubit diagonal unitary matrix $\Lambda_n$ can be realized by a quantum circuit under $\Grid_{n}^{n_1,n_2,\ldots,n_d}$ constraint. The circuit uses no ancillary qubit, and has depth $O(2^n/n)$ and size $O(2^n)$.
\end{corollary}
\begin{proof}
Any $d$-dimensional grid contains a Hamiltonian path, so the result follows from Theorem \ref{thm:diag_hamiltonian_withoutancilla}.
\end{proof}
\paragraph{Remark.}
Under path and $d$-dimensional grid constraints, we prove in a later section (Corollary \ref{coro:lower_bound_path}) that the depth and size lower bounds for $\Lambda_n$ are $\Omega(2^n/n)$ and $\Omega(2^n)$, respectively, using no ancillary qubits.
The upper and lower bounds match for both depth and size, and it is somewhat surprising that path and grid constraints do not asymptotically increase the circuit depth or size required to implement general $n$-qubit diagonal unitary matrices.
\subsection{Circuit implementation under $\Tree_n(2)$ constraints}
\label{sec:diag_without_ancilla_binarytree}
In this section we give $\Lambda_n$ circuit constructions under binary tree ($d=2$) constraint, and defer details of constructions for general $d$ -- which are similar but somewhat cumbersome -- to Appendix \ref{append:d-arytree}.
\paragraph{Choice of $\sf C$ and $\sf T$.}
Label the qubits in the input register $[n]$ as follows.
For the binary tree $\Tree_n(2)$, label the root node with the empty string $\epsilon$. For a node with label $z$, label its left and right children $z0$ and $z1$, respectively (see Fig.~\ref{fig:label_binarytree}).
\begin{figure}[!hbt]
\centering
\begin{tikzpicture}
\draw[fill=black] (0,0) circle (0.05) (-0.6,-0.5) circle (0.05) (0.6,-0.5) circle (0.05) (-1,-1) circle (0.05) (-0.2,-1) circle (0.05) (1,-1) circle (0.05) (0.2,-1) circle (0.05);
\draw (0,0)--(-0.6,-0.5)--(-1,-1) (-0.6,-0.5)--(-0.2,-1) (0,0)--(0.6,-0.5)--(1,-1) (0.6,-0.5)--(0.2,-1);
\draw (0,0.2) node{\scriptsize $\epsilon$} (-0.4,-0.5) node{\scriptsize $0$} (0.8,-0.5) node{\scriptsize $1$} (-1,-1.2) node{\scriptsize $00$} (-0.2,-1.2) node{\scriptsize $01$} (1,-1.2) node{\scriptsize $11$} (0.2,-1.2) node{\scriptsize $10$} ;
\end{tikzpicture}
\caption{The labels of qubits in a depth-2 binary tree.}
\label{fig:label_binarytree}
\end{figure}
Let $\kappa=\left\lceil\log(\frac{n+1}{2})\right\rceil$, $a=\left\lceil\log(2\log n)\right\rceil$. Let $\Tree_z^j=\{zy:y\in\{0,1\}^{\le j}\}$ denote the binary tree with root $z$ and depth $j$. A $\Tree_z^j$ consists of $j+1$ layers of qubits. The $n$ input qubits of $\Lambda_n$ are stored in a binary tree of depth $\kappa$, i.e. $\Tree_{\epsilon}^\kappa$. We divide these $n$ qubits into $O\left(\frac{n}{\log(n)}\right)$ binary subtrees, each of which has depth $a$ and $2^{a+1}-1=O(\log(n))$ vertices, except the `top' subtree, which may have fewer vertices and lower depth (see Fig.~\ref{fig:register_binarytree}).
The target register $\sf T$ and control register $\sf C$ are defined as
\[\textsf{T}:=\bigcup_{j=1}^s\mbox{$\{0,1\}$}^{\kappa-j(a+1)+1},\quad\textsf{C}:=\Tree_\epsilon^\kappa -\textsf{T} =\Big(\bigcup_{z\in \textsf{T}}(\Tree_z^a-\{z\})\Big)\cup \Tree_{\epsilon}^{\kappa-s(a+1)},\]
where $s+1$ is the total number of layers of binary subtrees, with $s=\left\lfloor\frac{\kappa+1}{a+1}\right\rfloor$.
In words, the target register consists of the root nodes of the binary subtrees (except the top subtree), while the control register consists of all other nodes. $\sf T$ and $\sf C$ have sizes $r_t=\sum_{j=1}^s2^{\kappa-j(a+1)+1}=O\left(\frac{n}{\log n}\right)$ and $r_c=n-r_t=n-O\left(\frac{n}{\log n}\right)=\Theta(n)$, respectively.
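To see the bound on $r_t$: the sum is geometric with ratio $2^{-(a+1)}\le 1/2$, so $r_t \le 2^{\kappa-a}\sum_{j\ge 0}2^{-j(a+1)}\le 2^{\kappa-a+1}\le \frac{n+1}{\log n}$, using $2^{\kappa}\le n+1$ and $2^{a}\ge 2\log n$.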
\begin{figure}[!hbt]
\centering
\begin{tikzpicture}
\draw[dotted] (0,0)--(-4.7,0) (0,-1)--(-3,-1) (0,-2)--(-3,-2) (-1,-2.5)--(-3,-2.5) (-1.5,-3.5)--(-4.7,-3.5);
\draw[->] (-4.5,-2)--(-4.5,-3.5);
\draw[->] (-4.5,-2)--(-4.5,0);
\draw[->] (-1.6,0) --(-1.6,-1);
\draw[<-] (-1.6,0) --(-1.6,-1);
\draw[->] (-1.6,-1) --(-1.6,-2);
\draw[<-] (-1.6,-1) --(-1.6,-2);
\draw[->] (-1.6,-2.5) --(-1.6,-3.5);
\draw[<-] (-1.6,-2.5) --(-1.6,-3.5);
\draw (0,0)--(0.5,-1)--(-0.5,-1)--cycle;
\draw (-0.5,-1)--(-0.9,-2)--(-0.1,-2)--cycle (0.5,-1)--(0.1,-2)--(0.9,-2)--cycle;
\draw (-1,-2.5)--(-1.4,-3.5)--(-0.6,-3.5)--cycle (1,-2.5)--(1.4,-3.5)--(0.6,-3.5)--cycle (0,-2.5)-- (-0.4,-3.5)--(0.4,-3.5)--cycle;
\draw [fill=purple,draw=purple] (-0.5,-1.15) circle (0.05) (0.5,-1.15) circle (0.05) (-1,-2.65) circle (0.05) (0,-2.65) circle (0.05) (1,-2.65) circle (0.05);
\draw [fill=black] (0,-0.15) circle (0.05) (-0.15,-0.45) circle (0.05) (0.15,-0.45) circle (0.05) (-0.15,-0.9) circle (0.05) (0.15,-0.9) circle (0.05)(-0.4,-0.9) circle (0.05) (0.4,-0.9) circle (0.05);
\draw [fill=black] (-0.6,-1.45) circle (0.05) (-0.4,-1.45) circle (0.05) (0.6,-1.45) circle (0.05) (0.4,-1.45) circle (0.05) (-0.6,-1.9) circle (0.05) (0.6,-1.9) circle (0.05)(-0.8,-1.9) circle (0.05) (0.8,-1.9) circle (0.05) (0.4,-1.9) circle (0.05) (0.2,-1.9) circle (0.05) (-0.4,-1.9) circle (0.05) (-0.2,-1.9) circle (0.05);
\draw[fill=black] (-1.1,-2.9) circle (0.05) (-0.9,-2.9) circle (0.05) (-0.1,-2.9) circle (0.05) (0.1,-2.9) circle (0.05) (1.1,-2.9) circle (0.05) (0.9,-2.9) circle (0.05);
\draw[fill=black] (-1.3,-3.4) circle (0.05) (-1.1,-3.4) circle (0.05) (-0.9,-3.4) circle (0.05) (-0.7,-3.4) circle (0.05) (-0.3,-3.4) circle (0.05)(-0.1,-3.4) circle (0.05) (0.1,-3.4) circle (0.05) (0.3,-3.4) circle (0.05) (1.1,-3.4) circle (0.05) (0.9,-3.4) circle (0.05) (1.3,-3.4) circle (0.05) (0.7,-3.4) circle (0.05);
\draw (0,-2.3) node{\scriptsize $\cdots$} (0,-1.5) node{\scriptsize $\cdots$} (-0.5,-3) node{\scriptsize $\cdots$} (0.5,-3) node{\scriptsize $\cdots$};
\draw (-5.3,-1.75) node{\scriptsize $\kappa+1$ layers} (-3,-0.5) node{\scriptsize $\kappa-s(a+1)+1$ layers} (-2.5,-1.5) node{\scriptsize $a+1$ layers} (-2.5,-3) node{\scriptsize $a+1$ layers};
\end{tikzpicture}
\caption{Control ($\textsf{C}$) and target ($\textsf{T}$) registers for $\Tree_n(2)$. The tree is partitioned into $O\left(\frac{n}{\log (n)}\right)$ binary subtrees, each of size $O(\log(n))$ and depth $a$ ($a+1$ layers of qubits). $\textsf{T}$ consists of the root nodes of all subtrees except for the `top' subtree (red vertices), while $\textsf{C}$ consists of all other (black) vertices. }
\label{fig:register_binarytree}
\end{figure}
\paragraph{Implementation of $\Pi$.}
In this subsection, $\Pi$ (Eq.\eqref{eq:pi}) is denoted $\Pi^{binarytree}$. We wish to permute the qubit states in a way that groups consecutive qubit states together in binary subtrees. More precisely, we define $A(i):=(i-1)(2^{a+1}-2)$ and, for all $i\in[r_t]$, permute the $2^{a+1}-2$ states $x_{1+A(i)}, \ldots, x_{A(i+1)}$ to the binary subtree with root given by the $i$-th qubit in the target register (see Fig.~\ref{fig:subtree_states}).
\begin{figure}[!hbt]
\centering
\begin{tikzpicture}
\draw[fill=black] (-0.6,-0.5) circle (0.05) (0.6,-0.5) circle (0.05) (-1,-1) circle (0.05) (-0.2,-1) circle (0.05) (1,-1) circle (0.05) (0.2,-1) circle (0.05) (-1.2,-1.5) circle (0.05) (-0.6,-1.5) circle (0.05) (1.2,-1.5) circle (0.05) (0.6,-1.5) circle (0.05) (-0.8,-2) circle (0.05) (-0.4,-2) circle (0.05) (-1.4,-2) circle (0.05) (-1,-2) circle (0.05) (0.8,-2) circle (0.05) (0.4,-2) circle (0.05) (1.4,-2) circle (0.05) (1,-2) circle (0.05);
\draw (0,0)--(-0.6,-0.5)--(-1,-1) (-0.6,-0.5)--(-0.2,-1) (0,0)--(0.6,-0.5)--(1,-1) (0.6,-0.5)--(0.2,-1) (-1.2,-1.5)--(-1.4,-2) (-1.2,-1.5)--(-1,-2) (-0.6,-1.5)--(-0.8,-2) (-0.6,-1.5)--(-0.4,-2) (1.2,-1.5)--(1.4,-2) (1.2,-1.5)--(1,-2) (0.6,-1.5)--(0.8,-2) (0.6,-1.5)--(0.4,-2);
\draw (0,0.2) node{\tiny $x_{i+r_c}$} (-1.1,-0.5) node{\tiny $x_{1+A(i)}$} (1.1,-0.5) node{\tiny $x_{2+A(i)}$} (-1.4,-1) node{\tiny $x_{3+A(i)}$} (-0.6,-1) node{\tiny $x_{4+A(i)}$} (1.4,-1) node{\tiny $x_{6+A(i)}$} (0.6,-1) node{\tiny $x_{5+A(i)}$} ;
\draw (0,-1.5) node{\tiny$\cdots$} (0,-2) node{\tiny$\cdots$} (0,-2.2) node{\tiny$\cdots$} (-1.6,-2.2) node{\tiny $x_{2^a-1+A(i)}$} (1.6,-2.2) node{\tiny $x_{A(i+1)}$} (1.8,-1.5) node{\tiny $x_{2^{a}-2+A(i)}$}(-1.8,-1.5) node{\tiny $x_{2^{a-1}-1+A(i)}$};
\draw[fill=red,draw=red] (0,0) circle (0.05);
\end{tikzpicture}
\caption{The states of qubits in binary subtree $\Tree_{z_i}^a$ after applying $\Pi^{binarytree}$ (Lemma~\ref{lem:pi_binarytree}). $\Tree_{z_i}^a$ has $2^{a+1}-1$ vertices, with the root corresponding to the $i$-th qubit in the target register ${\sf T}$.}
\label{fig:subtree_states}
\end{figure}
\begin{lemma}\label{lem:pi_binarytree}
Let $z_i$ denote the $i$-th element in target register $\sf T$ and $A(i):=(i-1)(2^{a+1}-2)$.
Unitary transformation $\Pi^{binarytree}$ is defined as
\begin{multline}\label{eq:pi_binarytree}
\ket{x_1x_2\cdots x_n}_{[n]}\xrightarrow{\Pi^{binarytree}} \ket{x_1x_2\cdots x_{r_c}}_{\sf{C}}\ket{x_{r_c+1}\cdots x_{n}}_{\sf{T}}\\
=\bigotimes_{z_i\in{\sf T}}\left(\ket{x_{r_c+i}}_{z_i}\ket{x_{1+A(i)} x_{2+A(i)} \cdots x_{A(i+1)}}_{\Tree_{z_i}^a-\{z_i\}}\right)\otimes\ket{x_{A(r_t+1)+1}\cdots x_{r_c}}_{\Tree_\epsilon^{\kappa-s(a+1)}},\forall x=x_1x_2\cdots x_n\in\mbox{$\{0,1\}^n$}.
\end{multline}
It can be implemented by a CNOT circuit of depth and size $O(n\log(n))$ under $\Tree_n(2)$ constraint.
\end{lemma}
\begin{proof}
$\Pi^{binarytree}$ permutes the last $r_t$ qubit states $x_{r_c+1},x_{r_c+2},\cdots,x_n$ to the target register, i.e., the root nodes of the binary subtrees, and the first $r_c$ states to the control register $\textsf{C}$. In the absence of any graph constraints, $\Pi^{binarytree}$ can be implemented by at most $n$ SWAP gates. The result follows from Lemma \ref{lem:cnot_path_constraint}, noting that the distance between the control and target qubits of any CNOT gate in a binary tree on $n$ vertices is at most $O(\log(n))$, and every SWAP gate can be implemented by 3 CNOT gates.
\end{proof}
\paragraph{Implementation of $C_k$.}
\begin{lemma}\label{lem:Ck_binarytree}
For all $k\in[\ell]$, operator $C_k$ (Eq.\eqref{eq:Ck}) can be implemented by a quantum circuit of depth $O(2^{r_c})$ under $\Tree_n(2)$ constraint.
\end{lemma}
\begin{proof}
First, we construct quantum circuits for $C_{p.1}$ (Eq.~\eqref{eq:step_p1}) for all $p\in\{2,3,\ldots,2^{r_c}\}$. For every $i\in[r_t]$, choose $j_i= 1+A(i)$, where $A(i)=(i-1)(2^{a+1}-2)$ is the index of the last state placed in the $(i-1)$-th subtree (Fig. \ref{fig:subtree_states}). The strings $c_{p-1}^{1+A(i)}$ and $c_{p}^{1+A(i)}$ in the $(r_c,1+A(i))$-Gray code differ in the $h_{1+A(i),p}$-th bit.
Let $z_i$ denote the $i$-th element in $\textsf{T}$. $C_{p.1}$ effects the transformation
\begin{align*}
&\bigotimes_{z_i\in\textsf{T}}\left(\ket{\langle c_{p-1}^{1+A(i)}t_i^{(k)},x\rangle}_{z_i}\ket{x_{1+A(i)} x_{2+A(i)} \cdots x_{A(i+1)}}_{\Tree_{z_i}^a-\{z_i\}}\right) \otimes\ket{x_{A(r_t+1)+1}x_{A(r_t+1)+2}\cdots x_{r_c}}_{\Tree_\epsilon^{\kappa-s(a+1)}}\\
\xrightarrow{C_{p.1}}&\bigotimes_{z_i\in\textsf{T}}\left(\ket{\langle c_{p-1}^{1+A(i)}t_i^{(k)},x\rangle \oplus x_{h_{1+A(i),p}}}_{z_i}\ket{x_{1+A(i)} x_{2+A(i)} \cdots x_{A(i+1)}}_{\Tree_{z_i}^a-\{z_i\}}\right) \otimes\ket{x_{A(r_t+1)+1}x_{A(r_t+1)+2}\cdots x_{r_c}}_{\Tree_\epsilon^{\kappa-s(a+1)}}.
\end{align*}
The key operation is thus the mapping $\ket{\langle c_{p-1}^{1+A(i)}t_i^{(k)},x\rangle }_{z_i}\rightarrow \ket{\langle c_{p-1}^{1+A(i)}t_i^{(k)},x\rangle \oplus x_{h_{1+A(i),p}}}_{z_i}$ for all $z_i\in {\sf T}$. To implement this, for each $i\in[r_t]$, we apply a CNOT gate with target qubit $z_i$ and control qubit $\ket{x_{h_{1+A(i),p}}}$. By construction, $\ket{x_{h_{1+A(i),p}}}$ lies in subtree $\Tree_{z_i}^a-\{z_i\}$ if $h_{1+A(i),p}\in\{1+A(i),2+A(i),\ldots,A(i+1)\}$, and otherwise lies in $\Tree_\epsilon^{\kappa}-\Tree_{z_i}^a$.
We now analyze the depth of $C_k$.
\begin{enumerate}
\item If $h_{1+A(i),p}=k'+A(i)\in\{1+A(i),2+A(i),\ldots,A(i+1)\}$ for all $z_i\in{\sf T}$, where $k'\in[2^{a+1}-2]$, all CNOT gates in Step $p.1$ ($C_{p.1}$) can be implemented simultaneously because they act in disjoint binary subtrees $\Tree_{z_i}^a$. Since the distance between the control and target qubits of each CNOT gate in Step $p.1$ is $O(\log(h_{1+A(i),p}-A(i)))=O(\log(k'))$, by Lemma \ref{lem:cnot_path_constraint}, $C_{p.1}$ can be realized in depth $O(\log(k'))$.
\item If $h_{1+A(i),p}\notin\{1+A(i),2+A(i)\ldots,A(i+1)\}$, Step $p.1$ is an $n$-qubit CNOT circuit under $\Tree_n(2)$ constraint. By Lemma \ref{lem:cnot_circuit} it can be implemented in depth $O(n^2)$.
\end{enumerate}
By Lemma \ref{lem:GrayCode}, for every $k'\in[r_c]$, there are $2^{r_c-k'}$ many $p\in \{2,3,\ldots,2^{r_c}\}$ satisfying
\begin{equation*}
h_{1+A(i),p} = \begin{cases}
k'+A(i), & \text{if }k'\le r_c-A(i)+1, \\
k'+A(i)-r_c, & \text{if }k'\ge r_c-A(i)+2.
\end{cases}
\end{equation*}
Thus, for each $k'\in[2^{a+1}-2]$, there are $2^{r_c-k'}$ values of $p\in\{2,3,\ldots,2^{r_c}\}$ such that $C_{p.1}$ has depth $\mathcal{D}(C_{p.1})=O(\log(k'))$. The remaining $2^{r_c}-\sum_{k'=1}^{2^{a+1}-2}2^{r_c-k'}-1$ values of $p$ (those with $k'\ge 2^{a+1}-1$) have corresponding circuits $C_{p.1}$ that can be realized in depth $\mathcal{D}(C_{p.1})=O(n^2)$.
By Lemma \ref{lem:Ck}, $C_k$ has circuit depth
\[O(n^2+2^{r_c}+\sum_{p=2}^{2^{r_c}}\mathcal{D}(C_{p.1}))=O(n^2+2^{r_c})+\sum_{k'=1}^{2^{a+1}-2} O(\log(k'))2^{r_c-k'} +O(n^2)\cdot (2^{r_c}-\sum_{k'=1}^{2^{a+1}-2}2^{r_c-k'}-1)= O(2^{r_c}),\]
where we use the fact that $a=\lceil\log(2\log n)\rceil$.
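To verify the last equality: the first sum is $2^{r_c}\sum_{k'\ge1}O(\log(k'))2^{-k'}=O(2^{r_c})$ since $\sum_{k'\ge 1}\log(k')2^{-k'}$ converges, and the remaining count is $2^{r_c}-\sum_{k'=1}^{2^{a+1}-2}2^{r_c-k'}-1=2^{r_c-2^{a+1}+2}-1\le \frac{4\cdot 2^{r_c}}{n^4}$ because $2^{a+1}\ge 4\log n$, so the $O(n^2)$ term contributes only $O(2^{r_c}/n^2)$.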
\end{proof}
\paragraph{Implementation of $\Lambda_n$.}
\begin{lemma}\label{lem:diag_bianrytree_withoutancilla}
Any $n$-qubit diagonal unitary matrix $\Lambda_n$ can be realized by a quantum circuit of depth $O(\log(n)2^n/n)$ under $\Tree_n(2)$ constraint without ancillary qubits.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:diag_without_ancilla_correctness}, $\Lambda_n$ can be implemented by the circuit in Fig.~\ref{fig:diag_without_ancilla_framwork}. Recall that $r_t=O(n/\log(n))$, $r_c=n-r_t$, and $\ell\le \frac{2^{r_t+2}}{r_t+1}-1$. Combining Lemmas \ref{lem:pi_binarytree}, \ref{lem:Ck_binarytree}, \ref{lem:reset} and \ref{lem:Lambda_rc}, the total depth for $\Lambda_n$ is
\[O(n\log(n))+\ell\cdot O(2^{r_c})+O(n^2)+O(n2^{r_c})=O(\log(n)2^n/n),\]
under $\Tree_n(2)$ constraint.
\end{proof}
\subsection{Circuit implementation under $\Expander_n$ constraints}
\label{sec:diag_without_ancilla_expander}
In this section, we study circuit complexity under $\Expander_n$ constraints. Consider a graph $G$ with vertex expansion $h_{out}(G) = c$ for some constant $c$.
\paragraph{Choice of $\sf C$ and $\sf T$. }
Let $c' = \frac{c}{c+2}$.
By Lemma \ref{lem:graph_property}, for any $S\subset V$ of size at most $n/2$, there exists a matching of size $\lfloor c'|S|\rfloor$ between $S$ and $V-S$; fix one such matching
\[M_S=\{(u_j^S,w_j^S)\in E: u_j^S\in S, w_j^S\in\partial_{out}(S), j\in[\lfloor c'|S|\rfloor]\}.\]
Define the corresponding vertex set of size $\lfloor c'|S|\rfloor$:
\[\Gamma(S):=\left\{w_j^S\in\partial_{out}(S): (u_j^S,w_j^S)\in M_S, u_j^S\in S, j\in[\lfloor c'|S|\rfloor]\right\}.\]
Let $d=\Big\lfloor\frac{\log(n)-1-\log(\lceil 1/c'\rceil+1)}{\log(1+c')}\Big\rfloor+2 = O(\log n)$ and define the sequence of vertex sets $S_1,\ldots,S_d$, where
\begin{align}
S_1 \subseteq V, &\qquad\abs{S_1} = \lceil 1/c'\rceil+1, \text{ with the vertices of } S_1 \text{ chosen arbitrarily},\label{eq:s1}\\
S_{i+1}=S_i\cup \Gamma(S_i), &\qquad \forall i\in[d-1] \label{eq:siplus1}
\end{align}
Based on Eq. \eqref{eq:siplus1} and the definition of $\Gamma(S_i)$, $|S_{i+1}|=|S_i|+\lfloor c'|S_i|\rfloor$, which satisfies $(1+c')|S_i|-1\le |S_{i+1}|\le (1+c')|S_i|$. By induction, we obtain $(|S_1|-\frac{1}{c'})(1+c')^{i-1}\le |S_i|\le |S_1|(1+c')^{i-1}$ for all $i\in [d]$. We also have
\[\frac{n}{2(\lceil 1/c'\rceil+1)(1+c')}\le (|S_1|-\frac{1}{c'})(1+c')^{d-2}\le|S_{d-1}|\le |S_1|(1+c')^{d-2} \le n/2.\]
The control and target register are now defined as
\[{\sf T}:=S_{d-1}\quad \text{and} \quad{\sf C}:=V-S_{d-1}.\]
By construction, $\sf T$ and $\sf C$ have sizes $r_t=|S_{d-1}|\in\big[\frac{n}{2(\lceil 1/c'\rceil+1)(1+c')}, n/2\big]$ and $r_c=|V-S_{d-1}|\in \big[n/2, n-\frac{n}{2(\lceil 1/c'\rceil+1)(1+c')}\big]$, respectively.
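The iterative construction of $S_1\subseteq S_2\subseteq\cdots\subseteq S_{d-1}$ is illustrated by the Python sketch below (function name ours; vertices are assumed comparable, e.g.\ integers). It uses a greedy matching between $S_i$ and its outer boundary for illustration only; Lemma \ref{lem:graph_property} guarantees that a matching of size $\lfloor c'|S_i|\rfloor$ always exists, which a greedy procedure need not find in general.
\begin{verbatim}
import math

def choose_target_register(adj, c, seed):
    """adj: dict vertex -> set of neighbours; c: vertex expansion;
    seed: the ceil(1/c') + 1 initial vertices of S_1."""
    n = len(adj)
    cp = c / (c + 2.0)                    # c' = c/(c+2)
    S = set(seed)                         # S_1
    while (1 + cp) * len(S) <= n / 2:     # keep |S_{d-1}| <= n/2
        used, gamma = set(), []
        for u in sorted(S):               # greedy matching S_i -> boundary
            for w in adj[u]:
                if w not in S and w not in used:
                    used.add(w)
                    gamma.append(w)
                    break
        gamma = gamma[:math.floor(cp * len(S))]
        if not gamma:                     # greedy found no matching edge
            break
        S |= set(gamma)                   # S_{i+1} = S_i union Gamma(S_i)
    return S                              # T := S_{d-1}, C := V - T
\end{verbatim}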
\paragraph{Implementation of $\Pi$.}
In this subsection, unitary transformation $\Pi$ (Eq. \eqref{eq:pi}) is denoted $\Pi^{expander}$.
\begin{lemma}[$\Pi^{expander}$]\label{lem:pi_expander}
The unitary transformation $\Pi^{expander}$, defined by
\begin{equation*}
\ket{x_1x_2\cdots x_n}_{V}\xrightarrow{\Pi^{expander}}\ket{x_{control}}_{V-S_{d-1}}\ket{x_{target}}_{S_{d-1}}\defeq \ket{x_{control}}_{\sf{C}}\ket{x_{target}}_{\sf{T}},
\end{equation*}
can be realized by a CNOT circuit of depth and size $O(n\log(n))$ under $\Expander_n$ constraint.
\end{lemma}
\begin{proof}
$\Pi^{expander}$ permutes $\ket{x_i}$ to a qubit in $\sf C$ for every $i\le r_c$ and to a qubit in $\sf T$ for every $i\ge r_c+1$. In the absence of any graph constraint, $\Pi^{expander}$ can be realized by $O(n)$ swap gates, each of which can be implemented by 3 CNOT gates. The distance between any two qubits in an expander is $O(\log(n))$. Thus, by Lemma \ref{lem:cnot_path_constraint}, $\Pi^{expander}$ can be implemented in depth and size $O(n)\cdot O(\log(n))=O(n\log(n))$.
\end{proof}
\begin{lemma}\label{lem:Ck_expander}
For all $k\in[\ell]$, unitary transformation $C_k$ (Eq.\eqref{eq:Ck}) can be implemented by a quantum circuit of depth $O(\log(n)2^{r_c})$ under $\Expander_n$ constraint.
\end{lemma}
\begin{proof}
We first construct a quantum circuit for $C_{p.1}$ (Eq.~\eqref{eq:step_p1}) for all $p\in\{2,3,\ldots,2^{r_c}\}$. For all $i\in [r_t]$, choose integers $j_i = 1$. The strings $c_{p-1}^{1}$ and $c_{p}^{1}$ in the $(r_c,1)$-Gray code differ in the $h_{1p}$-th bit. $C_{p.1}$ effects the transformation
\begin{align*}
& \ket{x_1x_2\ldots x_{r_c}}_{V-S_{d-1}}\ket{\langle c_{p-1}^1t_1^{(k)},x\rangle,\ldots,\langle c_{p-1}^1t_{r_t}^{(k)},x\rangle}_{S_{d-1}}\\
\to & \ket{x_1x_2\ldots x_{r_c}}_{V-S_{d-1}}\ket{\langle c_{p}^1t_1^{(k)},x\rangle,\ldots,\langle c_{p}^1t_{r_t}^{(k)},x\rangle}_{S_{d-1}}
\\
= & \ket{x_1x_2\ldots x_{r_c}}_{V-S_{d-1}}\ket{\langle c_{p-1}^1t_1^{(k)},x\rangle\oplus x_{h_{1p}},\ldots,\langle c_{p-1}^1t_{r_t}^{(k)},x\rangle\oplus x_{h_{1p}}}_{S_{d-1}},\forall x\in\mbox{$\{0,1\}^n$}.
\end{align*}
That is, it is equivalent to a multi-target CNOT gate (see Sec.~\ref{subsec:qgates}), with control $\ket{x_{h_{1p}}}$ and targets being all qubits in $\sf T$. This multi-target CNOT gate can be implemented as follows.
For each set $S_i$ used in the construction of $\sf C$ and $\sf T$, there is an associated matching
\begin{equation}\label{eq:matching}
M_{S_i}=\left\{(u_j^{S_i},w_j^{S_i})\in E: u_j^{S_i}\in {S_i}, w_j^{S_i}\in\partial_{out}({S_i}), j\in[\lfloor c'|{S_i}|\rfloor]\right\}.
\end{equation}
\begin{figure}[]
\centering
\begin{tikzpicture}
\draw (0,0) ellipse (0.5 and 0.3);
\draw (0,-0.2) ellipse (1 and 0.6);
\draw (0,-0.5) ellipse (1.3 and 1);
\draw (0,-0.8) ellipse (1.6 and 1.4);
\draw (0,-1.2) ellipse (2 and 1.9);
\draw (0,-1.6) ellipse (2.3 and 2.4);
\draw [draw=yellow] (0,1)--(0,0);
\draw [draw=teal] (0,0)--(0,-0.5);
\draw [draw=violet] (0,-0.5)--(0,-1.15) (0,0)--(0.4,-1.15);
\draw [draw=red] (0,-1.15)--(-0.2,-1.9) (0,-0.5)--(0.3,-1.9) (0.4,-1.15)--(0.6,-1.9);
\draw [draw=blue] (-0.2,-1.9)--(-0.3,-2.7) (0,-1.15)--(-0.1,-2.7) (0.3,-1.9)--(0.1,-2.7) (0.6,-1.9)--(0.7,-2.7);
\draw [draw=orange] (-0.3,-2.7)--(-0.4,-3.5) (-0.1,-2.7)--(-0.2,-3.5) (0.1,-2.7)--(0,-3.5) (0.6,-1.9)--(0.6,-3.5) (0.7,-2.7)--(0.8,-3.5) ;
\draw (0.1,-1.9) node{\scriptsize $\cdots$} (0.4,-2.7) node{\scriptsize $\cdots$} (0.3,-3.5) node{\scriptsize $\cdots$};
\draw (-0.4,-0.5) node{\tiny $\Gamma(S_1)$} (-0.4,-1.15) node{\tiny $\Gamma(S_2)$} (-0.6,-1.85) node{\tiny $\Gamma(S_{d-4})$} (-0.8,-2.7) node{\tiny $\Gamma(S_{d-3})$} (-0.85,-3.5) node{\tiny $\Gamma(S_{d-2})$};
\draw (-0.5,0) node{\tiny $S_1$} (-1,-0.2) node{\tiny $S_2$} (-1.3,-0.4) node{\tiny $S_3$} (-1.6,-0.7) node{\tiny $S_{d-3}$} (-1.9,-1) node{\tiny $S_{d-2}$} (-2.25,-1.4) node{\tiny $S_{d-1}$} (0.5,1) node{\tiny $\ket{x_{h_{1p}}}$};
\draw [->] (4.3,-3)--(4.3,-2.7);
\draw [->] (4.3,-2.2)--(4.3,-1.9);
\draw [->] (4.3,-1.4)--(4.3,-1.15);
\draw [->] (4.3,-0.68)-- (4.3,-0.5);
\draw [->] (4.3,0)--(4.3,0.2);
\draw [<-] (8.5,0)--(8.5,0.2);
\draw [<-] (8.5,-3)--(8.5,-2.7);
\draw [<-] (8.5,-2.2)--(8.5,-1.9);
\draw [<-] (8.5,-1.4)--(8.5,-1.15);
\draw [<-] (8.5,-0.68)-- (8.5,-0.5);
\draw (4.3,-3.25) node{\tiny \fbox{Step 1: Apply CNOT gates on {\color{orange} $M_{S_{d-2}}$}~~}};
\draw (4.3,-2.45) node{\tiny \fbox{Step 2: Apply CNOT gates on {\color{blue} $M_{S_{d-3}}$}}};
\draw (4.3,-1.65) node{\tiny \fbox{Step 3: Apply CNOT gates on {\color{red} $M_{S_{d-4}}$}}};
\draw (4.3,-0.9) node{\tiny \fbox{Step $d-3$: Apply CNOT gates on {\color{violet} $M_{S_{2}}$}}};
\draw (4.3,-0.25) node{\tiny \fbox{Step $d-2$: Apply CNOT gates on {\color{teal} $M_{S_{1}}$}}};
\draw (4.5,-1.3) node{\tiny $\cdots$} (8.7,-1.3) node{\tiny $\cdots$};
\draw (6.4,0.5) node{\tiny \fbox{Step $d-1$: Apply $|S_1|$ CNOT gates, where the controls are $\ket{x_{h_{1p}}}$ and targets are in $S_1$.}};
\draw (8.5,-3.25) node{\tiny \fbox{Step $2d-3$: Apply CNOT gates on {\color{orange} $M_{S_{d-2}}$}}};
\draw (8.5,-2.45) node{\tiny \fbox{Step $2d-4$: Apply CNOT gates on {\color{blue} $M_{S_{d-3}}$}}};
\draw (8.5,-1.65) node{\tiny \fbox{Step $2d-5$: Apply CNOT gates on {\color{red} $M_{S_{d-4}}$}}};
\draw (8.5,-0.9) node{\tiny \fbox{Step $d+1$: Apply CNOT gates on {\color{violet} $M_{S_{2}}$}}};
\draw (8.5,-0.25) node{\tiny \fbox{Step $d$: Apply CNOT gates on {\color{teal} $M_{S_{1}}$}}};
\draw (1.2,-3.25)node{\tiny {\color{orange} $M_{S_{d-2}}$}} (1.1,-2.4)node{\tiny {\color{blue} $M_{S_{d-3}}$}} (0.85,-1.65)node{\tiny {\color{red} $M_{S_{d-4}}$}} (0.6,-1)node{\tiny {\color{violet} $M_{S_{2}}$}} (0.5,-0.4)node{\tiny {\color{teal} $M_{S_{1}}$}};
\draw [fill=black] (0,1) circle (0.05);
\draw [fill=black] (0,0) circle (0.05);
\draw [fill=black] (0,-0.5) circle (0.05);
\draw [fill=black] (0,-1.15) circle (0.05) (0.4,-1.15) circle (0.05);
\draw [fill=black] (-0.2,-1.9) circle (0.05) (0.3,-1.9) circle (0.05) (0.6,-1.9) circle (0.05);
\draw [fill=black] (-0.3,-2.7) circle (0.05) (-0.1,-2.7) circle (0.05) (0.1,-2.7) circle (0.05) (0.7,-2.7) circle (0.05);
\draw [fill=black] (-0.4,-3.5) circle (0.05) (-0.2,-3.5) circle (0.05) (0,-3.5) circle (0.05) (0.6,-3.5) circle (0.05) (0.8,-3.5) circle (0.05);
\end{tikzpicture}
\caption{Circuit implementation of $C_{p.1}$ under $\Expander_n$ constraint. For all $i\in[d-2]$, applying CNOT gates on matching $M_{S_i}$ means applying CNOT gates on all edges in $M_{S_i}$, where the controls are in set $S_i$. }\label{fig:expander_cp1}
\end{figure}
$C_{p.1}$ aims to XOR the bit carried by qubit $\ket{x_{h_{1p}}}$ into all qubits in $S_{d-1}$, and is implemented in a way similar to the circuit in Fig.~\ref{fig:add-circuit}. More precisely, the construction proceeds in $2d-3$ steps (see Fig.~\ref{fig:expander_cp1}).
\begin{itemize}
\item Step $i$ for $i=1, 2, \ldots, d-2$: Apply CNOT gates to $M_{S_{d-i-1}}$;
\item Step $d-1$: Apply $\lceil 1/c'\rceil+1$ CNOT gates, with each CNOT gate having a separate qubit in $S_1$ as target, and control qubit $\ket{x_{h_{1p}}}$;
\item Step $j$ for $j=d, d+1, \ldots, 2d-3$: Apply CNOT gates to $M_{S_{j-(d-1)}}$;
\end{itemize}
Above, when we say ``apply CNOT gates to $M_{S_i}$'', we mean to apply CNOT gates to all qubit pairs $(u,v)$ corresponding to edges in the matching, with control qubits in set $S_i$. The correctness of this circuit can be seen by comparing Fig.~\ref{fig:expander_cp1} with the circuit in Fig.~\ref{fig:add-circuit}.
We now analyze the circuit depth of $C_{p.1}$. For each $i\in[d-2]$, all CNOT gates acting on $M_{S_i}$ can be implemented in depth 1 since $M_{S_i}$ is a matching. By Lemma~\ref{lem:distance}, the distance between $\ket{x_{h_{1p}}}$ and any qubit in $S_1$ is at most $O(\log(n))$ and therefore, by Lemma \ref{lem:cnot_path_constraint}, Step $d-1$ can be implemented in depth $O(\log(n))\cdot (\lceil 1/c'\rceil+1)=O(\log(n))$. The total depth of $C_{p.1}$ is thus $\mathcal{D}(C_{p.1})=2(d-2)+O(\log(n))=O(\log(n))$.
By Lemma \ref{lem:Ck}, the total depth of $C_k$ is
\[O\Big(n^2+2^{r_c}+\sum_{p=2}^{2^{r_c}}\mathcal{D}(C_{p.1})\Big)=O\big(n^2+2^{r_c}+(2^{r_c}-1)\cdot O(\log(n))\big)=O(\log(n)2^{r_c}).\]
\end{proof}
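The correctness of the $2d-3$ step construction in Fig.~\ref{fig:expander_cp1} can also be checked classically: since the circuit consists only of CNOT gates, qubit values may be modelled as classical bits. The following sketch (function name ours) simulates the three phases and verifies, on a toy instance, that the control bit is XORed into every vertex of $S_{d-1}$.
\begin{verbatim}
def xor_fanout(bits, ctrl, S1, matchings):
    """Simulate the 2d-3 steps of the C_{p.1} circuit on classical bits.
    matchings[i] = M_{S_{i+1}}: list of (u, w), u in S_{i+1} (control),
    w in Gamma(S_{i+1}) (target).  XORs bits[ctrl] into all of S_{d-1}."""
    for M in reversed(matchings):   # steps 1..d-2: M_{S_{d-2}}, ..., M_{S_1}
        for u, w in M:
            bits[w] ^= bits[u]
    for v in S1:                    # step d-1: control fans out into S_1
        bits[v] ^= bits[ctrl]
    for M in matchings:             # steps d..2d-3: M_{S_1}, ..., M_{S_{d-2}}
        for u, w in M:
            bits[w] ^= bits[u]

# toy check: S_1 = {0}, Gamma(S_1) = {1}, so S_2 = {0, 1}
b = {'c': 1, 0: 1, 1: 0}
xor_fanout(b, 'c', [0], [[(0, 1)]])
assert (b[0], b[1]) == (1 ^ 1, 0 ^ 1)   # both vertices of S_2 got ^= 1
\end{verbatim}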
\paragraph{Implementation of $\Lambda_n$.}
\begin{lemma}\label{lem:diag_expander_withoutancilla}
Any $n$-qubit diagonal unitary matrix $\Lambda_n$ can be realized by a quantum circuit of depth $O\Big(\frac{\log(n)2^n}{n}\Big)$ under $\Expander_n$ constraint, using no ancillary qubits.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:diag_without_ancilla_correctness}, $\Lambda_n$ can be implemented by the circuit in Fig.~\ref{fig:diag_without_ancilla_framwork}. Recall that $r_t$ and $r_c=n-r_t$ are both $\Omega(n)$ and at most $n-\Omega(n)$, and that $\ell\le \frac{2^{r_t+2}}{r_t+1}-1$. By Lemmas \ref{lem:pi_expander}, \ref{lem:Ck_expander}, \ref{lem:reset} and \ref{lem:Lambda_rc}, the total depth of $\Lambda_n$ is
\begin{equation*}
O(n\log(n))+\ell \cdot O(\log(n)2^{r_c})+O(n^2)+O(n2^{r_c})=O(\log(n)2^n/n).
\end{equation*}
\end{proof}
\subsection{Circuit implementation under arbitrary graph constraints}
\label{sec:diag_without_ancilla_graph}
In this section, we show an optimal size upper bound of $O(2^n)$ for $\Lambda_n$ by constructing quantum circuits for operators $\Pi$ and $C_{p.1}$ under general connected constraint graph $G=(V,E)$.
These operators are determined by the definitions of control register $\textsf{C}$ and target register $\textsf{T}$.
\paragraph{Choice of $\sf C$ and $\sf T$. }
Let $T$ be a spanning tree of the connected graph $G=(V,E)$, with $\abs{V}=n$. We label all vertices as follows: we traverse $T$ by depth-first search (DFS), starting from the root, and label the vertices along the way in reverse order $n, n-1, \ldots, 2, 1$.
Let $r_c=\lceil n/2\rceil$, $r_t=n-r_c$, and set $\textsf{C}=[r_c]$ and $\textsf{T}=[n]-\textsf{C}$. That is, $\textsf{T}$ contains the first $r_t=\lfloor n/2\rfloor$ vertices traversed in the DFS. By DFS, the vertices in register $\textsf{T}$ span a connected subgraph of graph $G=(V,E)$.
\begin{lemma}\label{lem:dfs_distance}
Let $d(i)$ denote the distance between qubits $i$ and $i+1$ (as labelled by the DFS procedure above) in the spanning tree $T$. Then, $\sum_{i=1}^{n-1}d(i)=O(n)$.
\end{lemma}
\begin{proof}
Note that when we traverse $T$ in DFS, we visit the qubits in the order $n,n-1,\ldots,1$. Since $d(i) = dist_T(i,i+1)$ is the length of the shortest path from $i$ to $i+1$ in $T$, it is at most the distance walked along the DFS traversal path from $i$ to $i+1$. Summing over all $i\in [n-1]$, we see that $\sum_{i=1}^{n-1} d(i) = \sum_{i=1}^{n-1} dist_T(i,i+1)$ is at most the total distance travelled in a DFS traversal, which is at most $2(n-1)$, as each edge is traversed at most twice in DFS.
\end{proof}
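The labelling and the distance bound can be reproduced with the following Python sketch (function names ours; BFS is used to compute tree distances, purely for illustration), which labels the vertices of a spanning tree in reverse DFS order and verifies $\sum_i d(i)\le 2(n-1)$ on a small example.
\begin{verbatim}
from collections import deque

def reverse_dfs_labels(tree_adj, root):
    order, stack = [], [(root, None)]
    while stack:                          # iterative preorder DFS
        u, parent = stack.pop()
        order.append(u)
        for v in tree_adj[u]:
            if v != parent:
                stack.append((v, u))
    n = len(order)
    return {u: n - i for i, u in enumerate(order)}   # labels n, ..., 1

def tree_dist(tree_adj, a, b):            # BFS distance in the tree
    seen, q = {a}, deque([(a, 0)])
    while q:
        u, d = q.popleft()
        if u == b:
            return d
        for v in tree_adj[u]:
            if v not in seen:
                seen.add(v)
                q.append((v, d + 1))

tree = {1: [2, 3], 2: [1, 4, 5], 3: [1], 4: [2], 5: [2]}
label = reverse_dfs_labels(tree, 1)
vertex = {l: u for u, l in label.items()}
total = sum(tree_dist(tree, vertex[i], vertex[i + 1]) for i in range(1, 5))
assert total <= 2 * (len(tree) - 1)       # Lemma: sum of d(i) <= 2(n-1)
\end{verbatim}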
\paragraph{Implementation of $\Pi$.}
In this subsection, unitary transformation $\Pi$ (Eq.\eqref{eq:pi}) is denoted by $\Pi^{graph}$.
\begin{lemma}\label{lem:pi_graph}
Unitary transformation $\Pi^{graph}$ is defined as
\begin{equation*}
\ket{x_1x_2\cdots x_n}_{[n]}\xrightarrow{\Pi^{graph}}\bigotimes_{i=1}^n\ket{x_i}_i\defeq \ket{x_{control}}_{\sf{C}}\ket{x_{target}}_{\sf{T}}.
\end{equation*}
It can be realized by a CNOT circuit of depth and size $O(n^2)$ under arbitrary graph constraint.
\end{lemma}
\begin{proof}
For all $i\in[n]$, $\Pi^{graph}$ permutes $\ket{x_i}$ to qubit $i$; this permutation can be implemented by SWAP gates, each of which requires 3 CNOT gates. The result follows from Lemma \ref{lem:cnot_circuit}.
\end{proof}
\paragraph{Implementation of $C_k$.}
\begin{lemma}\label{lem:Ck_graph}
For all $k\in[\ell]$, unitary transformation $C_k$ (Eq.\eqref{eq:Ck}) can be implemented by a standard quantum circuit of size $O(n2^{r_c})$ under arbitrary graph constraint.
\end{lemma}
\begin{proof}
First, we construct quantum circuits for $C_{p.1}$ (Eq. \eqref{eq:step_p1}) for all $p\in\{2,3,\ldots,2^{r_c}\}$.
For every $i\in[r_t]$, choose integers $j_i=1$. Strings $c_{p-1}^{1}$ and $c_{p}^{1}$ in the $(r_c,1)$-Gray code differ in the $h_{1p}$-th bit.
For all $x\in \mbox{$\{0,1\}^n$}$, $C_{p.1}$ effects the transformation
\begin{align*}\label{eq:implement_Cp1}
\bigotimes_{j=1}^{r_c}\ket{x_j}_{j}\bigotimes_{i=1}^{r_t}\ket{\langle c_{p-1}^1t_i^{(k)},x\rangle}_{r_c+i} \to& \bigotimes_{j=1}^{r_c}\ket{x_j}_{j}\bigotimes_{i=1}^{r_t}\ket{\langle c_{p}^1t_i^{(k)},x\rangle}_{r_c+i} \\ =&\bigotimes_{j=1}^{r_c}\ket{x_j}_j\bigotimes_{i=1}^{r_t}\ket{\langle c_{p-1}^1t_i^{(k)},x\rangle\oplus x_{h_{1p}}}_{r_c+i},
\end{align*}
and corresponds to a multi-target $\mathsf{CNOT}$ gate (see \S~\ref{subsec:qgates}), with control being $\ket{x_{h_{1p}}}$ and targets being all qubits in $\sf T$. This can be implemented by the circuit in Fig.~\ref{fig:multi-cnot-general-g}, which is simply a relabelled version of Fig.~\ref{fig:add-circuit}.
\begin{figure}[!hbt]
\centerline
{
\Qcircuit @C=0.6em @R=0.7em {
\lstick{\ket{x_{h_{1p}}}}&\qw & \qw &\qw & \qw & \ctrl{1} & \qw & \qw & \qw & \qw & \qw &\rstick{\ket{x_{h_{1p}}}} \\
\lstick{\ket{x_{r_c+1}}} &\qw &\qw &\qw & \ctrl{1} & \targ & \ctrl{1} &\qw & \qw & \qw & \qw &\rstick{\ket{x_{h_{1p}}\oplus x_{r_c+1}}}\\
\lstick{\ket{x_{r_c+2}}} &\qw &\qw & \ctrl{1} & \targ & \qw & \targ & \ctrl{1} & \qw &\qw & \qw &\rstick{\ket{x_{h_{1p}}\oplus x_{r_c+2}}}\\
\lstick{\vdots~~~}&\qw & \ctrl{1} &\targ & \qw & \qw & \qw & \targ & \ctrl{1} & \qw &\qw &\rstick{~~~\vdots}\\
\lstick{\ket{x_{n}}}&\qw & \targ &\qw & \qw & \qw & \qw & \qw & \targ & \qw &\qw &\rstick{\ket{x_{h_{1p}}\oplus x_{n}}
}
}\caption{CNOT circuit construction of multi-target $\mathsf{CNOT}$ gate used to implement $C_{p.1}$.}\label{fig:multi-cnot-general-g}
\end{figure}
Let $d(i)$ denote the distance between qubits $i$ and $i+1$ in $G$. By Lemma~\ref{lem:dfs_distance}, $\sum_{i=r_c+1}^{n-1} d(i)=O(n)$ and $\sum_{i=h_{1p}}^{r_c} d(i) = O(n)$. By Lemma~\ref{lem:multicontrolcnot}, $C_{p.1}$ can be implemented in circuit size $\mathcal{S}(C_{p.1})=O(n)$.
By Lemma \ref{lem:Ck}, the total size of $C_k$ is
$O(n^2+r_t2^{r_c}+\sum_{p=2}^{2^{r_c}}\mathcal{S}(C_{p.1}))=O(n^2+r_t2^{r_c}+(2^{r_c}-1)\cdot O(n))=O(n2^{r_c})$.
\end{proof}
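The ladder structure of Fig.~\ref{fig:multi-cnot-general-g} can be verified classically; the sketch below (function name ours) simulates the ascending pass, the single CNOT from the control, and the descending pass on classical bits, and checks that every target ends up XORed with the control.
\begin{verbatim}
def ladder_fanout(bits):
    """bits[0] is the control; bits[1:] are the targets.
    Simulates the multi-target CNOT ladder: ascending pass,
    one CNOT from the control, then descending pass."""
    n = len(bits)
    for i in range(n - 2, 0, -1):   # CNOT(i -> i+1) for i = n-2, ..., 1
        bits[i + 1] ^= bits[i]
    bits[1] ^= bits[0]              # CNOT(control -> first target)
    for i in range(1, n - 1):       # CNOT(i -> i+1) for i = 1, ..., n-2
        bits[i + 1] ^= bits[i]
    return bits

b = [1, 0, 1, 1, 0]
assert ladder_fanout(b[:]) == [1, 0 ^ 1, 1 ^ 1, 1 ^ 1, 0 ^ 1]
\end{verbatim}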
\begin{lemma}\label{lem:diag_graph_withoutancilla}
Any $n$-qubit diagonal unitary matrix $\Lambda_n$ can be realized by a standard quantum circuit of size $O(2^n)$ under arbitrary graph constraint without ancillary qubits.
\end{lemma}
\begin{proof}
By Lemma \ref{lem:diag_without_ancilla_correctness}, $\Lambda_n$ can be implemented by the circuit in Fig.~\ref{fig:diag_without_ancilla_framwork}. Recall that $r_c=\lceil n/2\rceil$, $r_t=n-r_c$ and $\ell\le \frac{2^{r_t+2}}{r_t+1}-1$. By Lemmas \ref{lem:pi_graph}, \ref{lem:Ck_graph}, \ref{lem:reset} and \ref{lem:Lambda_rc}, the total size required to implement $\Lambda_n$ is
\begin{equation*}
O(n^2)+\ell\cdot O(n2^{r_c})+O(n^2)+O(n2^{r_c})=O(2^n).
\end{equation*}
\end{proof}
Though this is not as good as the $\tilde O(2^n/n)$ upper bound obtained in the constructions for grids, trees and expanders, in Section \ref{sec:QSP_US_lowerbound} we will see that this extra factor of $O(n)$ is unavoidable for general graphs.
\section{Circuit constructions for diagonal unitary matrices with ancillary qubits under qubit connectivity constraints}
\label{sec:diag_with_ancilla}
In this section, we present quantum circuits for $\Lambda_n$ under graph constraints using $m$ ancillary qubits, which will be used to construct QSP and GUS circuits in \S~\ref{sec:QSP_US_graph}. Note that the constructions of \S~\ref{sec:diag_without_ancilla}, which do not make use of ancilla, are not simply special cases (i.e., corresponding to $m=0$) of the constructions in this section. Indeed, the constructions here are fundamentally different, and require $m\ge 3n$.
\subsection{Circuit framework}
\label{sec:diag_with_ancilla_framework}
Our circuit framework for $\Lambda_n$ using $m$ ancilla is shown in Fig.~\ref{fig:diag_with_ancilla_framwork}, which generalizes the ancilla-based framework of~\cite{sun2021asymptotically}.
In that original framework, input $x\in\{0,1\}^n$ is divided into a prefix $x_{pre}$ and suffix $x_{suf}$ of lengths $n-\log(m/2)$ and $\log(m/2)$, respectively, and similarly for $s$ with the same cutoff point. The ancillary qubits are divided into an $m/2$-qubit copy register and an $m/2$-qubit target register, the former used for storing copies of $x_{pre}$ and $x_{suf}$, to increase the degree to which cycling through Gray codes can be done in parallel.
Each of the $m/2$ qubits in the target register is responsible for enumerating a suffix of $s$, and different layers of the circuit enumerate all prefixes. The procedure then consists of five stages, which are similar to the five stages we use in our new procedure. Let us therefore describe our new approach, and then comment further on the differences between it and the original framework of~\cite{sun2021asymptotically}.
In our approach here, the $n+m$ qubits are divided into four registers:
\begin{itemize}
\item ${\sf R}_{\rm inp}$: an $n$-qubit input register used to hold the input state $\ket{x}$, with $x\in\{0,1\}^n$ divided into an $(n-p)$-bit prefix $x_{pre}=x_1x_2\ldots x_{n-p}$ and a $p$-bit suffix $x_{suf}=x_{n-p+1}\ldots x_n$. The first $\tau$ bits of $x_{pre}$ (with $\tau$ dependent on the constraint graph) are referred to as $x_{aux}$, i.e., $x_{aux}=x_1x_2\ldots x_{\tau}$; these bits hold frequently used content, to be copied close to the target qubits in order to reduce the circuit depth of the Gray Cycle stage.
\item The $m$ ancillary qubits are divided into three registers:
\begin{itemize}
\item ${\sf R}_{\rm copy}$: the copy register of size $\lambda_{copy} \ge n$
\item ${\sf R}_{\rm targ}$: the target register of size $\lambda_{targ} = 2^p \ge n$
\item ${\sf R}_{\rm aux}$: the auxiliary register of size $\lambda_{aux} \ge n$
\end{itemize}
\end{itemize}
A few remarks on why each of these registers must have size at least $n$. As our approach requires creating at least one copy of each of $\ket{x_{pre}}$ and $\ket{x_{suf}}$ (for a total of $n$ qubits), we require at least $n$ ancillary qubits for the copy register. If the size of the target register is $o(n)$, the circuit depths achievable by the methods of this section will be larger than the circuit depths in \S \ref{sec:diag_without_ancilla_path} and \S\ref{sec:diag_without_ancilla_binarytree}. We therefore must also allow $n$ ancillary qubits for the target register. While the auxiliary register may be smaller than $n$, for simplicity we also allow $n$ qubits here, and therefore in total we assume that $m\ge 3n$.
The circuit itself consists of $5$ stages.
\begin{enumerate}
\item Suffix Copy: makes $O\left(\lambda_{copy}/p\right)$ copies of $\ket{x_{suf}}$ in ${\sf R}_{\rm copy}$.
\item Gray Initial: prepares the state $\ket{\langle c_1^{\ell_1}t_1, x\rangle}\otimes\cdots\otimes\ket{\langle c_1^{\ell_{2^p}}t_{2^p},x\rangle}=\ket{\langle t_1, x_{suf}\rangle}\otimes\cdots\otimes\ket{\langle t_{2^p},x_{suf}\rangle}$ in ${\sf R}_{\rm targ}$, where $\ell_k$ (for $k\in[2^p]$) are integers specifying $2^p$ $(n-p, \ell_k)$-Gray codes $\{c^{\ell_k}_1,c^{\ell_k}_2,\ldots c^{\ell_k}_{2^{n-p}}\}$, $\{t_1, \ldots, t_{2^p}\} = \{0,1\}^{p}$, and $c_1^i=0^{n-p}$ and $t_i$ are the prefix and suffix of $s$ (see Eq.~\eqref{eq:task1}).
\item Prefix Copy: makes $O\left(\lambda_{aux}/\tau\right)$ copies of $\ket{x_{aux}}$ in ${\sf R}_{\rm aux}$, and replaces the copies of $\ket{x_{suf}}$ in ${\sf R}_{\rm copy}$ with $O\left(\lambda_{copy}/(n-p)\right)$ copies of $\ket{x_{pre}}$.
\item Gray Cycle:
This stage enumerates all $2^{n-p}$ prefixes of $s$ by going along a Gray code---the $k$-th target qubit uses the $(n-p,\ell_k)$-Gray code, which consists of $2^{n-p}$ steps, with each step $j$ responsible for (i) updating the prefix, and (ii) implementing a phase shift (see further details below).
\item Inverse: restores all ancillary qubits to zero.
\end{enumerate}
More precisely, if we define\footnote{There exist some qubits which are not utilized to store copies of suffixes and prefixes. We omit these qubits for simplicity.}
\begin{equation*}
\ket{x_{SufCopy}}:=\underbrace{\ket{x_{suf}\cdots x_{suf}}}_{O(\frac{\lambda_{copy}}{p})\text{~copies~of~}x_{suf}},\quad
\ket{x_{PreCopy}}:= \underbrace{\ket{x_{pre}\cdots x_{pre}}}_{O(\frac{\lambda_{copy}}{n-p})~{\rm copies~of}~x_{pre}},\quad
\ket{x_{AuxCopy}}:= \underbrace{\ket{x_{aux}\cdots x_{aux}}}_{O(\frac{\lambda_{aux}}{\tau})~{\rm copies~of}~x_{aux}},
\end{equation*}
as well as, for all $j\in[2^{n-p}]$ and all $k\in [2^p]$
\begin{equation}\label{eq:s,f}
s(j,k):= c_j^{\ell_k} t_k, \qquad f_{j,k}:= \langle s(j,k), x\rangle, \qquad \ket{f_j}_{{\sf R}_{\rm targ}}:=\bigotimes_{k\in[2^p]}\ket{f_{j,k}}_{{\sf R}_{\rm targ,k}},
\end{equation}
where ${\sf R}_{{\rm targ},k}$ is the $k$-th qubit in ${\sf R}_{\rm targ}$, then unitary operators corresponding to each of the above 5 stages can be expressed as:
\begin{align}
U_{SufCopy}\ket{x}_{{\sf R}_{\rm inp}}\ket{0^{\lambda_{copy}}}_{{\sf R}_{\rm copy}}&=\ket{x}_{{\sf R}_{\rm inp}}\ket{x_{SufCopy}}_{{\sf R}_{\rm copy}}, \label{eq:sufcopy_graph} \\
U_{GrayInit}\ket{x_{SufCopy}}_{{\sf R}_{\rm copy}}\ket{0^{\lambda_{targ}}}_{{\sf R}_{\rm targ}}&=\ket{x_{SufCopy}}_{{\sf R}_{\rm copy}}\ket{f_1}_{{\sf R}_{\rm targ}},\label{eq:gray_initial_graph}\\
U_{PreCopy}\ket{x}_{{\sf R}_{\rm inp}}\ket{x_{SufCopy}}_{\sf{R}_{\rm copy}}\ket{0^{\lambda_{aux}}}_{{\sf R}_{\rm aux}}&=\ket{x}_{{\sf R}_{\rm inp}}\ket{x_{PreCopy}}_{\sf{R}_{\rm copy}}\ket{x_{AuxCopy}}_{{\sf R}_{\rm aux}},\label{eq:precopy_graph}\\
U_{GrayCycle}\ket{x_{PreCopy}}_{{\sf R}_{\rm copy}} \ket{f_1}_{{\sf R}_{\rm targ}}\ket{x_{AuxCopy}}_{{\sf R}_{{\rm aux}}} &=e^{i\theta(x)}\ket{x_{PreCopy}}_{{\sf R}_{\rm copy}} \ket{f_{1}}_{{\sf R}_{\rm targ}}\ket{x_{AuxCopy}}_{{\sf R}_{{\rm aux}}},\label{eq:gray_cycle_graph}\\
U_{Inverse}\ket{x}_{{\sf R}_{\rm inp}}\ket{x_{PreCopy}}_{{\sf R}_{\rm copy}} \ket{f_1}_{{\sf R}_{\rm targ}}\ket{x_{AuxCopy}}_{{\sf R}_{\rm aux}} &=\ket{x}_{{\sf R}_{\rm inp}}\ket{0^{\lambda_{copy}}}_{{\sf R}_{\rm copy}}\ket{0^{\lambda_{targ}}}_{{\sf R}_{\rm targ}}\ket{0^{\lambda_{aux}}}_{{\sf R}_{\rm aux}}.\label{eq:inverse_graph}
\end{align}
It is straightforward to verify that the sequential application of these unitary operators implements
$\Lambda_n$, i.e. $\ket{x}\to e^{i\theta(x)}\ket{x}$ for all $x\in\mbox{$\{0,1\}^n$}$, as in Eq. \eqref{eq:diag}. Note that $c_1^{\ell_i}:=0^{n-p}$ for all $i\in[2^p]$ and thus $\ket{f_1}= \ket{\langle t_1, x_{suf}\rangle}\otimes \cdots \otimes \ket{\langle t_{2^p}, x_{suf}\rangle}$.
\begin{figure}[hbt]
\centerline{
\begin{tabular}{c}
\begin{pgfpicture}{0em}{0em}{0em}{0em}
\color{llgray}
\pgfrect[fill]{\pgfpoint{4.5em}{0.5 em}}{\pgfpoint{31em}{-12em}}
\color{llblue}
\pgfrect[fill]{\pgfpoint{4.5em}{-12 em}}{\pgfpoint{31em}{-4.5em}}
\color{llgreen}
\pgfrect[fill]{\pgfpoint{4.5em}{-16.9 em}}{\pgfpoint{31em}{-4.6em}}
\color{llyellow}
\pgfrect[fill]{\pgfpoint{4.5em}{-22.0 em}}{\pgfpoint{31em}{-4.5em}}
\color{lgray}
\pgfrect[fill]{\pgfpoint{31em}{0.5 em}}{\pgfpoint{9em}{-12em}}
\color{lblue}
\pgfrect[fill]{\pgfpoint{31em}{-12 em}}{\pgfpoint{9em}{-4.5em}}
\color{lgreen}
\pgfrect[fill]{\pgfpoint{31em}{-16.9 em}}{\pgfpoint{9em}{-4.6em}}
\color{lyellow}
\pgfrect[fill]{\pgfpoint{31em}{-22.0 em}}{\pgfpoint{9em}{-4.5em}}
\end{pgfpicture}
\Qcircuit @C=0.6em @R=0.5em {
&\lstick{\scriptstyle\ket{x_1}} & \qw & \qw & \qw & \qw & \multigate{5}{\scriptstyle U_{PreCopy}}& \qw &\qw & \ustick{e^{i\theta(x)}}\qw & \multigate{17}{\scriptstyle U_{Inverse}} & \qw &\rstick{\scriptstyle \ket{x_1}} \\
&\lstick{\scriptstyle\vdots~~} & \qw & \qw &\qw & \qw & \ghost{\scriptstyle U_{PreCopy}}&\qw & \qw &\qw & \ghost{\scriptstyle U_{Inverse}} & \qw & \rstick{\scriptstyle ~~\vdots} \inputgroupv{2}{2}{2.5 em}{0em}{\scriptstyle\ket{x_{aux}}~~~~~~}\\
&\lstick{\scriptstyle\ket{x_\tau}} &\qw & \qw & \qw & \qw & \ghost{\scriptstyle U_{PreCopy}}&\qw & \qw &\qw & \ghost{\scriptstyle U_{Inverse}} & \qw & \rstick{\scriptstyle \ket{x_\tau}} \\
&\lstick{\scriptstyle\ket{x_{\tau+1}}} & \qw &\qw & \qw & \qw & \ghost{\scriptstyle U_{PreCopy}}&\qw & \qw &\qw & \ghost{\scriptstyle U_{Inverse}} & \qw & \rstick{\scriptstyle \ket{x_{\tau+1}}}\\
& \lstick{\scriptstyle\vdots~~} &\qw &\qw &\qw &\qw & \ghost{\scriptstyle U_{PreCopy}}&\qw &\qw &\qw & \ghost{\scriptstyle U_{Inverse}} & \qw & \rstick{{\scriptstyle ~~\vdots}~~~~{\sf R}_{\rm inp}} \\
&\lstick{\scriptstyle\ket{x_{n-p}}} &\qw & \qw &\qw & \qw & \ghost{\scriptstyle U_{PreCopy}}&\qw & \qw &\qw & \ghost{\scriptstyle U_{Inverse}} & \qw & \rstick{\scriptstyle\ket{x_{n-p}}} \inputgroupv{3}{3}{7.5 em}{0em}{\scriptstyle \ket{x_{pre}}~~~~~~~~~~~~~~~~~~~~~~~~~~~}\\
&\lstick{\scriptstyle\ket{x_{n-p+1}}} & \multigate{5}{\scriptstyle U_{SufCopy}} & \qw &\qw &\qw & \qw & \qw &\qw &\qw & \ghost{\scriptstyle U_{Inverse}} & \qw & \rstick{\scriptstyle\ket{x_{n-p+1}}}\\
&\lstick{\scriptstyle\vdots~~} & \ghost{\scriptstyle U_{SufCopy}} & \qw &\qw & \qw & \qw \qwx[-2]\qwx[2] &\qw &\qw &\qw & \ghost{\scriptstyle U_{Inverse}} & \qw & \rstick{\scriptstyle ~~\vdots} \inputgroupv{8}{8}{4.6 em}{0em}{\scriptstyle \ket{x_{suf}}~~~~~~~~~~~~~}\\
&\lstick{\scriptstyle\ket{x_n}} & \ghost{\scriptstyle U_{SufCopy}} & \qw &\qw &\qw & \qw & \qw &\qw &\qw & \ghost{\scriptstyle U_{Inverse}} & \qw & \rstick{\scriptstyle\ket{x_n}} \\
&\lstick{\scriptstyle\ket{0}} & \ghost{\scriptstyle U_{SufCopy}} & \push{\scriptstyle\ket{x_{suf}}}\qw & \multigate{5}{\scriptstyle U_{GrayInit}}&\qw & \multigate{2}{\scriptstyle U_{PreCopy}}& \push{\scriptstyle\ket{x_{pre}}}\qw & \multigate{8}{\scriptstyle U_{GrayCycle}} &\qw & \ghost{\scriptstyle U_{Inverse}} & \qw & \rstick{\scriptstyle\ket{0}}\\
&\lstick{\scriptstyle\vdots~~} & \ghost{\scriptstyle U_{SufCopy}} & \push{\scriptstyle\vdots}\qw &\ghost{\scriptstyle U_{GrayInit}}& \qw & \ghost{\scriptstyle U_{PreCopy}}& \push{\scriptstyle\vdots}\qw & \ghost{\scriptstyle U_{GrayCycle}} & \qw & \ghost{\scriptstyle U_{Inverse}} & \qw &\rstick{{\scriptstyle ~~\vdots}~~~~{\sf R}_{\rm copy}}\\
&\lstick{\scriptstyle\ket{0}} & \ghost{\scriptstyle U_{SufCopy}} & \push{\scriptstyle\ket{x_{suf}}} \qw &\ghost{\scriptstyle U_{GrayInit}}&\qw & \ghost{\scriptstyle U_{PreCopy}}&\push{\scriptstyle\ket{x_{pre}}} \qw &\ghost{\scriptstyle U_{GrayCycle}} &\qw & \ghost{\scriptstyle U_{Inverse}} & \qw &\rstick{\scriptstyle\ket{0}} \inputgroupv{11}{11}{3 em}{0em}{\scriptstyle \lambda_{copy}~\text{qubits}~~~~~~~~~~~~~~~~}\\
&\lstick{\scriptstyle\ket{0}} & \qw &\qw & \ghost{\scriptstyle U_{GrayInit}}& \push{\scriptstyle\ket{\langle c_1^{\ell_1}t_1,x\rangle}}\qw & \qw &\qw & \ghost{\scriptstyle U_{GrayCycle}} & \push{\scriptstyle\ket{\langle c_1^{\ell_1}t_1,x\rangle}}\qw & \ghost{\scriptstyle U_{Inverse}} &\qw & \rstick{\scriptstyle\ket{0}}\\
&\lstick{\scriptstyle\vdots~~} &\qw &\qw &\ghost{\scriptstyle U_{GrayInit}}&\push{\scriptstyle\vdots }\qw & \qw \qwx[-2] \qwx[2] &\qw & \ghost{\scriptstyle U_{GrayCycle}} &\push{\scriptstyle \vdots}\qw & \ghost{\scriptstyle U_{Inverse}} &\qw &\rstick{{\scriptstyle ~~\vdots}~~~~{\sf R}_{\rm targ}}\\
&\lstick{\scriptstyle\ket{0}} &\qw &\qw & \ghost{\scriptstyle U_{GrayInit}}& \push{\scriptstyle\ket{\langle c_1^{\ell_{2^p}}t_{2^p},x\rangle}}\qw & \qw &\qw & \ghost{\scriptstyle U_{GrayCycle}} &\push{\scriptstyle\ket{\langle c_1^{\ell_{2^p}}t_{2^p},x\rangle}}\qw & \ghost{\scriptstyle U_{Inverse}} &\qw &\rstick{\scriptstyle\ket{0}} \inputgroupv{14}{14}{3 em}{0em}{\scriptstyle \lambda_{targ}=2^p~\text{qubits}~~~~~~~~~~~~~~~~~~~}\\
&\lstick{\scriptstyle\ket{0}} &\qw & \qw &\qw & \qw & \multigate{2}{\scriptstyle U_{PreCopy}}& \push{\scriptstyle\ket{x_{aux}}}\qw &\ghost{\scriptstyle U_{GrayCycle}} &\qw & \ghost{\scriptstyle U_{Inverse}} & \qw &\rstick{\scriptstyle\ket{0}}\\
&\lstick{\scriptstyle\vdots~~} &\qw &\qw &\qw &\qw & \ghost{\scriptstyle U_{PreCopy}}& \push{\scriptstyle\vdots}\qw &\ghost{\scriptstyle U_{GrayCycle}} &\qw & \ghost{\scriptstyle U_{Inverse}} &\qw &\rstick{{\scriptstyle ~~\vdots}~~~~{\sf R}_{\rm aux}}\\
&\lstick{\scriptstyle\ket{0}} &\qw & \qw &\qw &\qw & \ghost{\scriptstyle U_{PreCopy}}& \push{\scriptstyle\ket{x_{aux}}}\qw &\ghost{\scriptstyle U_{GrayCycle}} &\qw & \ghost{\scriptstyle U_{Inverse}} & \qw &\rstick{\scriptstyle\ket{0}} \inputgroupv{17}{17}{3 em}{0em}{\scriptstyle \lambda_{aux}~\text{qubits}~~~~~~~~~~~~~~~~}\\
}
\end{tabular}
}
\caption{Circuit framework for implementing diagonal unitaries $\Lambda_n$ with $m$ ancillary qubits under graph constraints. The framework consists of 5 stages: suffix copy, Gray initial, prefix copy, Gray cycle and inverse. The input register $\textsf{R}_{\rm inp}$ (grey) corresponds to the $n$ input qubits of $\Lambda_n$. The $m$ ancillary qubits are partitioned into 3 registers, ${\sf R}_{\rm copy}$ (blue), ${\sf R}_{\rm targ}$ (green) and ${\sf R}_{\rm aux}$ (yellow). Darker shading indicates that the phase shift $e^{i\theta(x)}$ has been effected. Note that $c_1^{\ell_i}:=0^{n-p}$ for all $i\in[2^p]$, and thus $\langle c_1^{\ell_i}t_i, x\rangle = \langle t_i, x_{suf}\rangle$.
}
\label{fig:diag_with_ancilla_framwork}
\end{figure}
\paragraph{Remark.}
Compared to~\cite{sun2021asymptotically}, which did not consider connectivity constraints, our construction differs in:
\begin{enumerate}
\item The design of the registers. In \cite{sun2021asymptotically}, the value of $p$ (which specifies the division of $x$ into $x_{pre}$ and $x_{suf}$) is fixed at $p =\log(m/2)$, and the ancillary qubits are divided into 2 registers only, with the first $m/2$ qubits forming ${\sf R}_{\rm copy}$ and the second $m/2$ qubits forming ${\sf R}_{\rm targ}$. In this work, $p$ is chosen dependent on the constraint graph, and we add a new register ${\sf R}_{\rm aux}$. The positions and sizes of the ancillary registers now also depend on the constraint graph.
\item The prefix copy stage. In \cite{sun2021asymptotically}, prefix copy is responsible only for making copies of $\ket{x_{pre}}$ in the copy register. Here, it also makes copies of $\ket{x_{aux}}$ in the auxiliary register. More specifically, when we generate prefixes simultaneously via Gray codes, we apply CNOT gates whose control qubits lie in a copy of $\ket{x_{pre}}$ or $\ket{x_{aux}}$. If all controls were taken from copies of $\ket{x_{pre}}$, this would impose an overhead of $O(n^2)$ on the circuit depth, since the distance between control and target qubits can be as large as $O(n)$. To resolve this issue, we make copies of $\ket{x_{aux}}$ and arrange them close to the qubits in the target register; whenever the distance between a control qubit in a copy of $\ket{x_{pre}}$ and its target qubit is too large, we use a qubit in a copy of $\ket{x_{aux}}$ as the control instead.
\item The choice of Gray codes. The Gray Cycle stage involves choosing $2^{p}$ Gray codes, specified by the integers $\ell_k$. In \cite{sun2021asymptotically}, these are chosen as $\ell_k = (k-1)\mod (n-p)+1$ for every $k\in[2^p]$. Here we choose $\ell_k$ dependent on the constraint graph (see Table~\ref{tab:choice-of-graycycle}).
\end{enumerate}
These changes were made to address the fact that the framework of~\cite{sun2021asymptotically} does not perform well under connectivity constraints.
In particular, the generation of the $2^p$ prefixes by Gray codes (during the Gray Initial and Gray Cycle stages) involves $2^p$ CNOT gates which may not be implementable in parallel under connectivity constraints, and may impose an overhead of $O(m^2)$ to the circuit depth.
\begin{table}[h!]\small
\centering
\begin{tabular}{c|c|cccc}\footnotesize
& $K_{n+m}$~\cite{sun2021asymptotically} & $\Path_{n+m}$ & $\Grid^{n_1,\ldots,n_d}_{n+m}$& $\Tree_{n+m}(2)$ & $\Expander_{n+m}$ \\ \hline
$\ell_k$ & $(k-1)\mod (n-p)+1$ & $(k-1)\mod (n-p)+1$ & $(k-1)\mod (n-p)+1$ & $1$ & $1$
\end{tabular}
\caption{Choice of integers $\ell_k$ ($k=1, \ldots, 2^p$) which specify the $2^{p}$ Gray codes used in the Gray Cycle stage, for various graph constraints. $K_{n+m}$ is the complete graph on $n+m$ vertices, and corresponds to no connectivity constraints.
}
\label{tab:choice-of-graycycle}
\end{table}
Next, we show circuit depth bounds for several of the stages under general graph constraint. In what follows, we use $\mathcal{D}(U)$ to denote the circuit depth required to implement operator $U$.
\paragraph{Prefix Copy.} It will be convenient to define the following two operators $U''_{PreCopy}$, $U'''_{PreCopy}$
\begin{align}
\ket{x}_{{\sf R}_{\rm inp}}\ket{0^{\lambda_{copy}}}_{\sf{R}_{\rm copy}}\ket{0^{\lambda_{aux}}}_{{\sf R}_{\rm aux}} &\xrightarrow{U''_{PreCopy}}\ket{x}_{{\sf R}_{\rm inp}}\ket{x_{PreCopy}}_{\sf{R}_{\rm copy}}\ket{0^{\lambda_{aux}}}_{{\sf R}_{\rm aux}},\label{eq:precopy2_graph} \\
\ket{x}_{{\sf R}_{\rm inp}}\ket{x_{PreCopy}}_{\sf{R}_{\rm copy}}\ket{0^{\lambda_{aux}}}_{{\sf R}_{\rm aux}}& \xrightarrow{U'''_{PreCopy}}\ket{x}_{{\sf R}_{\rm inp}}\ket{x_{PreCopy}}_{\sf{R}_{\rm copy}}\ket{x_{AuxCopy}}_{{\sf R}_{\rm aux}}.\label{eq:precopy3_graph}
\end{align}
\begin{lemma}[]\label{lem:precopy_graph}
$U_{PreCopy}$ can be implemented in circuit depth
\begin{equation*}
\mathcal{D}(U_{PreCopy})\le \mathcal{D}(U_{SufCopy})+\mathcal{D}(U''_{PreCopy})+\mathcal{D}(U'''_{PreCopy}).
\end{equation*}
\end{lemma}
\begin{proof}
$U_{PreCopy}$ can be implemented in the following way:
\begin{align*}
&\ket{x}_{{\sf R}_{\rm inp}}\ket{x_{SufCopy}}_{\sf{R}_{\rm copy}}\ket{0^{\lambda_{aux}}}_{{\sf R}_{\rm aux}}\\
\xrightarrow{U^\dagger_{SufCopy}}&\ket{x}_{{\sf R}_{\rm inp}}\ket{0^{\lambda_{copy}}}_{\sf{R}_{\rm copy}}\ket{0^{\lambda_{aux}}}_{{\sf R}_{\rm aux}},\\%\label{eq:precopy1_graph}\\
\xrightarrow{U''_{PreCopy}}&\ket{x}_{{\sf R}_{\rm inp}}\ket{x_{PreCopy}}_{\sf{R}_{\rm copy}}\ket{0^{\lambda_{aux}}}_{{\sf R}_{\rm aux}},\\%\label{eq:precopy2_graph}\\
\xrightarrow{U'''_{PreCopy}}&\ket{x}_{{\sf R}_{\rm inp}}\ket{x_{PreCopy}}_{\sf{R}_{\rm copy}}\ket{x_{AuxCopy}}_{{\sf R}_{\rm aux}}.
\end{align*}
\end{proof}
\paragraph{Gray Cycle.} Let $s(j,k)$ and $\ket{f_j}$ be as in Eq.~\eqref{eq:s,f}. $U_{GrayCycle}$ consists of $2^{n-p}$ phases.
For $j\le 2^{n-p}-1$, the $j$-th phase consists of two parts, $U^{(j)}_{Gen}$ and $R_j$:
\begin{align}
&\ket{x_{PreCopy}}_{{\sf R}_{\rm copy}} \ket{f_j}_{{\sf R}_{\rm targ}}\ket{x_{AuxCopy}}_{{\sf R}_{\rm aux}}\nonumber\\
\xrightarrow{U_{Gen}^{(j)}}&\ket{x_{PreCopy}}_{{\sf R}_{\rm copy}} \ket{f_{j+1}}_{{\sf R}_{\rm targ}}\ket{x_{AuxCopy}}_{{\sf R}_{\rm aux}}, \label{eq:Ugenj_graph}\\
\xrightarrow{R_j}& e^{i\sum_{k=1}^{2^p} f_{j+1,k}\alpha_{s(j+1,k)}}\ket{x_{PreCopy}}_{{\sf R}_{\rm copy}} \ket{f_{j+1}}_{{\sf R}_{\rm targ}}\ket{x_{AuxCopy}}_{{\sf R}_{\rm aux}}.\label{eq:rotationj_graph}
\end{align}
Note that $R_j$ consists of $2^{p}$ single-qubit gates acting on the target register, i.e., $R_j:=\bigotimes_{k=1}^{2^p}R(\alpha_{s(j+1,k)})$, and has depth 1.
The $2^{n-p}$-th phase is
\begin{align}
&\ket{x_{PreCopy}}_{{\sf R}_{\rm copy}} \ket{f_{2^{n-p}}}_{{\sf R}_{\rm targ}}\ket{x_{AuxCopy}}_{{\sf R}_{\rm aux}}\nonumber\\
\xrightarrow{U_{Gen}^{(2^{n-p})}}&\ket{x_{PreCopy}}_{{\sf R}_{\rm copy}} \ket{f_{1}}_{{\sf R}_{\rm targ}}\ket{x_{AuxCopy}}_{{\sf R}_{\rm aux}}, \label{eq:Ugen2n-p_graph}\\
\xrightarrow{R_{2^{n-p}}}& e^{i\sum_{k=1}^{2^p} f_{1,k}\alpha_{s(1,k)}}\ket{x_{PreCopy}}_{{\sf R}_{\rm copy}} \ket{f_{1}}_{{\sf R}_{\rm targ}}\ket{x_{AuxCopy}}_{{\sf R}_{\rm aux}}.\label{eq:rotation2n-p_graph}
\end{align}
$R_{2^{n-p}}$ consists of $2^{p}$ single-qubit gates acting on the target register, i.e., $R_{2^{n-p}}:=\bigotimes\limits_{k\in[2^p]}R(\alpha_{s(1,k)})$, and has depth 1.
By applying these $2^{n-p}$ phases, the following transformation is implemented:
\[\ket{x_{PreCopy}}_{{\sf R}_{\rm copy}} \ket{f_1}_{{\sf R}_{\rm targ}}\ket{x_{AuxCopy}}_{{\sf R}_{{\rm aux}}} \to e^{i\left(\sum\limits_{j=1}^{2^{n-p}}\sum\limits_{k=1}^{2^p}f_{j,k}\alpha_{s(j,k)}\right)}\ket{x_{PreCopy}}_{{\sf R}_{\rm copy}} \ket{f_{1}}_{{\sf R}_{\rm targ}}\ket{x_{AuxCopy}}_{{\sf R}_{{\rm aux}}}.\]
From Eq.~\eqref{eq:alpha}, $\sum_{j=1}^{2^{n-p}}\sum_{k=1}^{2^p}f_{j,k}\alpha_{s(j,k)}=\theta(x)$ for all $x\in\mbox{$\{0,1\}^n$}$, and the above procedure implements the desired $U_{GrayCycle}$ transformation of Eq.~\eqref{eq:gray_cycle_graph}.
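The phase accumulation above relies on the decomposition $\theta(x)=\sum_{s\ne 0^n}\alpha_s\langle s,x\rangle$. The Python sketch below (function name ours) recovers the $\alpha_s$ numerically by solving the corresponding linear system---the paper's Eq.~\eqref{eq:alpha} gives a closed form, which we do not reproduce here---assuming the global-phase convention $\theta(0^n)=0$, and then checks the identity on all inputs for $n=3$.
\begin{verbatim}
import itertools
import numpy as np

def solve_alpha(theta, n):
    """Solve theta(x) = sum_{s != 0} alpha_s * <s,x> (mod-2 inner product)."""
    pts = list(itertools.product([0, 1], repeat=n))[1:]   # nonzero strings
    ip = lambda s, x: sum(a * b for a, b in zip(s, x)) % 2
    A = np.array([[ip(s, x) for s in pts] for x in pts], dtype=float)
    b = np.array([theta(x) for x in pts])
    return dict(zip(pts, np.linalg.solve(A, b)))

rng = np.random.default_rng(0)
n = 3
table = {x: rng.uniform(0, 2 * np.pi)
         for x in itertools.product([0, 1], repeat=n)}
table[(0,) * n] = 0.0                                     # global phase
alpha = solve_alpha(lambda x: table[x], n)
ip = lambda s, x: sum(a * b for a, b in zip(s, x)) % 2
for x in table:
    assert abs(sum(a * ip(s, x) for s, a in alpha.items()) - table[x]) < 1e-9
\end{verbatim}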
\begin{lemma}[]\label{lem:graycycle_graph}
$U_{GrayCycle}$ can be implemented in circuit depth
\[\mathcal{D}(U_{GrayCycle})\le \sum_{j=1}^{2^{n-p}}\mathcal{D}(U_{Gen}^{(j)})+2^{n-p}.\]
\end{lemma}
\begin{proof}
For $j\in[2^{n-p}]$, the depth of the $j$-th phase is $\mathcal{D}(U_{Gen}^{(j)})+1$. The total depth is therefore $\sum_{j=1}^{2^{n-p}}(\mathcal{D}(U_{Gen}^{(j)})+1)=\sum_{j=1}^{2^{n-p}}\mathcal{D}(U_{Gen}^{(j)})+2^{n-p}.$
\end{proof}
\paragraph{Inverse.}
The inverse stage can be implemented as follows.
\begin{align}
&\ket{x}_{{\sf R}_{\rm inp}}\ket{x_{PreCopy}}_{{\sf R}_{\rm copy}} \ket{f_1}_{{\sf R}_{\rm targ}}\ket{x_{AuxCopy}}_{{\sf R}_{\rm aux}} \nonumber\\
\xrightarrow{U^\dagger_{PreCopy}} &\ket{x}_{{\sf R}_{\rm inp}}\ket{x_{SufCopy}}_{{\sf R}_{\rm copy}} \ket{f_1}_{{\sf R}_{\rm targ}}\ket{0^{\lambda_{aux}}}_{{\sf R}_{\rm aux}}, \label{eq:inverse1_graph}\\
\xrightarrow{U^\dagger_{GrayInit}} & \ket{x}_{{\sf R}_{\rm inp}}\ket{x_{SufCopy}}_{{\sf R}_{\rm copy}} \ket{0^{\lambda_{targ}}}_{{\sf R}_{\rm targ}}\ket{0^{\lambda_{aux}}}_{{\sf R}_{\rm aux}}, \label{eq:inverse2_graph}\\
\xrightarrow{U_{SufCopy}^\dagger} & \ket{x}_{{\sf R}_{\rm inp}}\ket{0^{\lambda_{copy}}}_{{\sf R}_{\rm copy}} \ket{0^{\lambda_{targ}}}_{{\sf R}_{\rm targ}}\ket{0^{\lambda_{aux}}}_{{\sf R}_{\rm aux}}.\label{eq:inverse3_graph}
\end{align}
It follows from Lemma~\ref{lem:precopy_graph} that:
\begin{lemma} []\label{lem:inverse_graph}
$U_{Inverse}$ can be implemented in depth
\[\mathcal{D}(U_{Inverse})\le 2\mathcal{D}(U_{SufCopy})+\mathcal{D}(U_{PreCopy}'')+\mathcal{D}(U_{PreCopy}''')+\mathcal{D}(U_{GrayInit}).\]
\end{lemma}
\subsection{Efficient circuits: general framework}
We shall use the framework of Fig.~\ref{fig:diag_with_ancilla_framwork} for implementing $\Lambda_n$ under path (\S \ref{sec:diag_with_ancilla_path}), grid (\S \ref{sec:diag_with_ancilla_grid_d}) and complete binary tree (\S \ref{sec:diag_with_ancilla_binarytree}) constraints. The case for expander graph constraints differs slightly, see \S \ref{sec:diag_with_ancilla_expander}.
The constructions of~\cite{sun2021asymptotically} give $O\left(n+\frac{2^n}{n+m}\right)$-depth and $O(2^n)$-size upper bounds for implementing $\Lambda_n$ under no graph constraints, using $m$ ancillary qubits (see Table~\ref{tab:lambda-bounds_ancilla}). Similar to the trivial upper bounds of \S~\ref{sec:general_framework_noancilla}, a trivial depth upper bound for $\Lambda_n$ of $O\left((n+m)\cdot \diam(G)\cdot (n+\frac{2^n}{n+m})\right)$ can be given under graph $G$ constraints.
\begin{table}[h!]\scriptsize
\centering
\begin{tabular}{c|cccc}
& $\Path_{n+m}$ & $\Grid^{n_1,\ldots,n_d}_{n+m}$ & $\Tree_{n+m}(2)$ & $\Expander_{n+m}$ \\ \hline
$\diam(G)$ & $n+m$ & $\sum_{j=1}^d n_j$ & $\log (n+m)$ & $\log(n+m)$\\ \hline
Depth (ub, trivial) & $(n+m)(n(n+m)+2^n)$ & $(\sum_{j=1}^d n_j)(n(n+m)+2^n)$ & $\log(n+m)(n(n+m)+2^n)$ & $\log(n+m)(n(n+m)+2^n)$ \\
\multirow{2}{*}{Depth (ub)} & $2^{n/2}+\frac{2^n}{n+m}$ & $n^2+d2^{\frac{n}{d+1}}+\max\limits_{j\in\{2,\ldots,d\}}\left\{\frac{d2^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\right\}+\frac{2^n}{n+m}$ & $n^2\log(n)+\frac{\log(n) 2^n}{n+m}$ & $n^2+\frac{\log(m) 2^n}{n+m}$ \\
& [Lem.~\ref{lem:diag_path_ancillary}] &[Lem.~\ref{lem:diag_grid_ancillary}] &[Lem.~\ref{lem:diag_binarytree_withancilla}]& [Lem.~\ref{lem:diag_expander_ancilla}]\\ \hline
\multirow{2}{*}{Depth (lb)} &$2^{n/2}+\frac{2^n}{n+m}$ & $n+2^{\frac{n}{d+1}}+\max\limits_{j\in [d]}\big\{\frac{2^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\big\}$ &$n+\frac{2^n}{n+m}$ & $n+\frac{2^n}{n+m}$ \\
&[Cor. \ref{coro:lower_bound_path}]&[Lem. \ref{lem:lower_bound_grid_k}]&[Cor. \ref{coro:lower_bound_binary}]&[Cor. \ref{coro:lower_bound_expander}]
\end{tabular}
\caption{Circuit depth upper (ub) and lower bounds (lb) required to implement $\Lambda_n$ in circuits under various graph constraints, using $m$ ancillary qubits.
The trivial bounds are based on the unconstrained construction from~\cite{sun2021asymptotically} and Lemma~\ref{lem:cnot_path_constraint}, which implies that, under constraint graph $G$, the required circuit depth is $O((n+m)\cdot \diam(G)\cdot (n+\frac{2^n}{n+m}))$. Big O and $\Omega$ notation is suppressed.
}
\label{tab:lambda-bounds_ancilla}
\end{table}
To achieve the more efficient constructions summarized in the second last row of Table~\ref{tab:lambda-bounds_ancilla}, for each constraint graph type we must carefully choose:
\begin{enumerate}
\item the size and locations for ${\sf R}_{\rm inp}$, ${\sf R}_{\rm copy}$, ${\sf R}_{\rm targ}$ and ${\sf R}_{\rm aux}$, and
\item the particular Gray codes used, i.e., the integers $\ell_1, \ell_2, \ldots, \ell_{2^{p}}$ used to implement the Gray cycle stage.
\end{enumerate}
From the previous section (Lemmas~\ref{lem:precopy_graph}, \ref{lem:graycycle_graph}, \ref{lem:inverse_graph}), to bound the $\Lambda_n$ circuit depth complexity for each graph constraint type, it is sufficient to analyze the circuits implementing
$U_{SufCopy}$ (Eq.~\eqref{eq:sufcopy_graph}),
$U''_{PreCopy}$ (Eq.~\eqref{eq:precopy2_graph}),
$U'''_{PreCopy}$ (Eq.~\eqref{eq:precopy3_graph}),
$U_{GrayInit}$ (Eq.~\eqref{eq:gray_initial_graph}), and
$U_{Gen}^{(j)}$ (Eqs.~\eqref{eq:Ugenj_graph} and \eqref{eq:Ugen2n-p_graph}).
As in \S~\ref{sec:diag_without_ancilla}, we aim to minimize circuit depth by arranging qubit registers and Gray codes such that control and target qubits for required CNOT gates are close, and constraint paths for different CNOT gates are disjoint (and hence implementable in parallel) where possible.
\subsection{Circuit implementation under $\Path_{n+m}$ constraints}
\label{sec:diag_with_ancilla_path}
We assume that $m\ge 3n$ and $\frac{m}{3}$ is an integer. Without loss of generality, we also assume that $m\le 3\cdot 2^n$;
if $m> 3\cdot 2^n$, we only use $3\cdot 2^n$ ancillary qubits. We take $p=\big\lfloor \log (\frac{m}{3}) \big\rfloor$, $\tau=2\lceil\log (n-p)\rceil$, $\lambda_{copy}=\lambda_{targ}=2^p$, and $\lambda_{aux}=r\tau$ where $r=\frac{2^p}{n-p}$ \footnote{Here we assume $2^p$ is a multiple of $(n-p)$ for convenience. In the general case where this assumption does not hold, we can define $r=\big\lceil\frac{2^p}{n-p}\big\rceil$ with the last register $R_r$ holding the leftover qubits. The details are tedious and technically uninteresting, thus omitted here.}.
\paragraph{Choice of registers.}
We assign qubits to ${\sf R}_{\rm inp}$, ${\sf R}_{\rm copy}$, ${\sf R}_{\rm targ}$ and ${\sf R}_{\rm aux}$ as in Fig.~\ref{fig:register_in_path}.
\begin{itemize}
\item ${\sf R}_{\rm inp}$ consists of the first $n$ qubits.
\item The $2\cdot 2^p+r\tau$ ancillary qubits are divided into $r$ registers $\textsf{R}_1,\ldots,\textsf{R}_r$.
\item Each $\textsf{R}_k$ for $k\in[r]$ contains $2(n-p) +\tau$ qubits, with the first $2(n-p)$ qubits alternately assigned to ${\sf R}_{\rm copy}$ and ${\sf R}_{\rm targ}$, and the final $\tau$ qubits assigned to ${\sf R}_{\rm aux}$.
\end{itemize}
\begin{figure}[!hbt]
\centering
\begin{tikzpicture}
\draw (-1.4,0)--(-1.2,0) (-0.6,0)--(1.1,0) (1.5,0)--(2.3,0) (2.7,0)--(4.1,0) (4.5,0)--(5.3,0) (5.7,0)--(6.2,0) (6.6,0)--(8.1,0) (8.6,0)--(9.3,0) (9.65,0)--(9.8,0);
\draw [fill=blue,draw=blue] (-0.2,0) circle (0.05) (-0.4,0) circle (0.05) (-0.6,0) circle (0.05) (-1.2,0) circle (0.05) (-1.4,0) circle (0.05);
\draw [fill=black] (0,0) circle (0.05) (0.4,0) circle (0.05) (0.8,0) circle (0.05) (1.6,0) circle (0.05) (3,0) circle (0.05) (3.4,0) circle (0.05) (3.8,0) circle (0.05) (4.6,0) circle (0.05) (7,0) circle (0.05) (7.4,0) circle (0.05) (7.8,0) circle (0.05) (8.6,0) circle (0.05) ;
\draw [fill=white] (0.2,0) circle (0.05) (0.6,0) circle (0.05) (1.0,0) circle (0.05) (1.8,0) circle (0.05) (3.2,0) circle (0.05) (3.6,0) circle (0.05) (4.0,0) circle (0.05) (4.8,0) circle (0.05) (7.2,0) circle (0.05) (7.6,0) circle (0.05) (8.0,0) circle (0.05) (8.8,0) circle (0.05);
\draw [draw=red, fill=red] (2,0) circle (0.05) (2.2,0) circle (0.05) (2.8,0) circle (0.05) (5,0) circle (0.05) (5.2,0) circle (0.05) (5.8,0) circle (0.05) (9,0) circle (0.05) (9.2,0) circle (0.05) (9.8,0) circle (0.05);
\draw (1.3,0) node{\scriptsize $\cdots$} (2.5,0) node{\scriptsize $\cdots$} (4.3,0) node{\scriptsize $\cdots$} (5.5,0) node{\scriptsize $\cdots$} (8.3,0) node{\scriptsize $\cdots$} (9.5,0) node{\scriptsize $\cdots$} (6.4,0) node{\scriptsize $\cdots$} (-0.9,0) node{\scriptsize $\cdots$};
\node (a) at (-0.2,-0.2) {};
\node (b) at (3,-0.2) {};
\draw [decorate,decoration={brace,mirror}] (a)--(b);
\node (c) at (2.8,-0.2) {};
\node (d) at (6,-0.2) {};
\draw [decorate,decoration={brace,mirror}] (c)--(d);
\node (e) at (6.8,-0.2) {};
\node (f) at (10,-0.2) {};
\draw [decorate,decoration={brace,mirror}] (e)--(f);
\node (g) at (1.8,0.2) {};
\node (h) at (3,0.2) {};
\draw [decorate,decoration={brace}] (g)--(h);
\node (g) at (4.8,0.2) {};
\node (h) at (6,0.2) {};
\draw [decorate,decoration={brace}] (g)--(h);
\node (g) at (8.8,0.2) {};
\node (h) at (10,0.2) {};
\draw [decorate,decoration={brace}] (g)--(h);
\node (g) at (-0.2,0.2) {};
\node (h) at (2,0.2) {};
\draw [decorate,decoration={brace}] (g)--(h);
\node (g) at (2.8,0.2) {};
\node (h) at (5,0.2) {};
\draw [decorate,decoration={brace}] (g)--(h);
\node (g) at (6.8,0.2) {};
\node (h) at (9,0.2) {};
\draw [decorate,decoration={brace}] (g)--(h);
\node (a) at (-1.6,-0.2) {};
\node (b) at (0,-0.2) {};
\draw [decorate,decoration={brace,mirror}] (a)--(b);
\node (a) at (-1.6,0.2) {};
\node (b) at (0,0.2) {};
\draw [decorate,decoration={brace}] (a)--(b);
\draw (1.4,-0.6) node{\scriptsize $\textsf{R}_{1}$} (4.4,-0.6) node{\scriptsize $\textsf{R}_{2}$} (8.4,-0.6) node{\scriptsize $\textsf{R}_{r}$} (-0.8,-0.6) node{\scriptsize $\textsf{R}_{\rm inp}$};
\draw (0.9,0.6) node {\scriptsize $2(n-p)$} (3.9,0.6) node {\scriptsize $2(n-p)$} (7.9,0.6) node {\scriptsize
$2(n-p)$};
\draw (2.4,0.6) node{\color{red} \scriptsize $\tau$} (5.4,0.6) node{\color{red} \scriptsize $\tau$} (9.4,0.6) node{\color{red}\scriptsize $\tau$} (-0.8,0.6) node{\color{blue}\scriptsize $n$};
\draw (6.4,-0.6) node{\scriptsize $\cdots$};
\end{tikzpicture}
\caption{Assignment of ${\sf R}_{\rm inp}$, ${\sf R}_{\rm copy}$, ${\sf R}_{\rm targ}$ and ${\sf R}_{\rm aux}$ for quantum circuits under $\Path_{n+m}$ constraint. Colors correspond to input (blue), copy (black), target (white) and auxiliary (red) register qubits. The ancillary qubits are grouped into registers labelled $\textsf{R}_1,\textsf{R}_2,\cdots,\textsf{R}_r$. }
\label{fig:register_in_path}
\end{figure}
It is easily verified that our construction uses $2\cdot 2^p+r\tau\le m $ of the total $m$ ancillary qubits available. For the integers specifying the Gray codes, we take $\ell_k=(k-1)\mod (n-p)+1$ for all $k\in[2^p]$.
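For concreteness, the parameter choices above can be packaged as follows (a Python sketch with a function name of our own choosing, assuming, as in the text, that $m\ge 3n$ and that $2^p$ is a multiple of $n-p$):
\begin{verbatim}
import math

def path_parameters(n, m):
    """Parameter choices for the path-constrained construction."""
    p = int(math.floor(math.log2(m / 3)))
    tau = 2 * math.ceil(math.log2(n - p))
    r = 2**p // (n - p)                  # number of blocks R_1, ..., R_r
    lam_copy = lam_targ = 2**p
    lam_aux = r * tau                    # text verifies 2*2^p + r*tau <= m
    ell = [(k - 1) % (n - p) + 1 for k in range(1, 2**p + 1)]  # Gray codes
    return dict(p=p, tau=tau, r=r, lam_copy=lam_copy,
                lam_targ=lam_targ, lam_aux=lam_aux, ell=ell)
\end{verbatim}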
\paragraph{Implementation of the Suffix Copy and Prefix Copy stages.}
\begin{lemma}\label{lem:copy_path}
The unitary transformation $U_{copy}^{path}$ making $t$ copies of an $n$-bit string $x$, defined by
\begin{equation}
\ket{x}\ket{0^{nt}}\xrightarrow{U^{path}_{copy}}\ket{x}\ket{\underbrace{xx\cdots xx}_{t {\rm~copies~of ~}x}},
\end{equation}
where the two registers are connected in the path graph, can be implemented by a circuit of depth $O(n^2+nt)$ and size $O(n^2t)$ under $\Path_{n(t+1)}$ constraint.
\end{lemma}
\begin{proof}
An explicit circuit, in the absence of any connectivity constraints, is given in Fig.~\ref{fig:my_copypath}; it consists of $t$ CNOT circuits arranged in a pipeline. The total circuit depth is $n + t-1$ and the size is $nt$.
Now consider the path constraint. By Lemma~\ref{lem:cnot_path_constraint}, each CNOT gate in Fig.~\ref{fig:my_copypath} can be implemented in depth and size $O(n)$ under the path constraint, since the distance between any pair of control and target qubits is $O(n)$. Moreover, different CNOT gates in the same layer act on disjoint regions of the path graph, and can thus be implemented in parallel. The result follows.
\end{proof}
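The pipelined schedule can be made explicit: in the sketch below (function name ours), the CNOT writing bit $i$ into copy $c$ (both $0$-indexed, bit $0$ topmost) fires in layer $(n-1-i)+c$, which immediately gives depth $n+t-1$ and size $nt$; CNOT gates within one layer touch pairwise disjoint qubit pairs.
\begin{verbatim}
def pipeline_layers(n, t):
    """Layer assignment for the t-fold copy pipeline."""
    layers = {}
    for c in range(t):          # copy index
        for i in range(n):      # bit index, 0 = topmost
            layers.setdefault((n - 1 - i) + c, []).append((c, i))
    depth = max(layers) + 1
    size = sum(len(g) for g in layers.values())
    assert depth == n + t - 1 and size == n * t
    # each copy index appears at most once per layer -> gates are disjoint
    assert all(len({c for c, _ in g}) == len(g) for g in layers.values())
    return layers

pipeline_layers(4, 3)
\end{verbatim}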
\begin{figure}[htb!]
\centerline{
\Qcircuit @C=0.6em @R=0.7em {
\lstick{\scriptstyle\ket{x_1}} &\qw &\qw & \ctrl{3} &\qw & \qw &\\
\lstick{\scriptstyle\vdots~~} & \qw & \ctrl{3} &\qw &\qw &\qw &\\
\lstick{\scriptstyle\ket{x_n}} &\ctrl{3} &\qw&\qw &\qw &\qw &\\
\lstick{\scriptstyle\ket{0}}& \qw & \qw &\targ & \ctrl{3} & \qw &\\
\lstick{\scriptstyle\vdots~~} & \qw &\targ & \ctrl{3} &\qw &\qw &\\
\lstick{\scriptstyle\ket{0}}&\targ&\ctrl{3} & \qw &\qw & \qw &\\
\lstick{\scriptstyle\ket{0}} & \qw & \qw &\qw &\targ &\qw &\\
\lstick{\scriptstyle\vdots~~} & \qw &\qw & \targ &\qw &\qw &\\
\lstick{\scriptstyle\ket{0}} &\qw&\targ & \qw &\qw &\qw &\\
& &\vdots & \vdots&\vdots &\vdots\\
& & & & &\\
\lstick{\scriptstyle\ket{0}} & \qw & \qw &\qw &\qw & \ctrl{3} &\\
\lstick{\scriptstyle\vdots~~} & \qw &\qw & \qw &\ctrl{3} & \qw &\\
\lstick{\scriptstyle\ket{0}}&\qw&\qw & \ctrl{3} &\qw & \qw & \\
\lstick{\scriptstyle\ket{0}} & \qw & \qw &\qw &\qw & \targ&\\
\lstick{\scriptstyle\vdots~~} & \qw &\qw & \qw &\targ & \qw &\\
\lstick{\scriptstyle\ket{0}} &\qw&\qw & \targ &\qw & \qw & \\
}
}
\caption{Implementation of $U_{copy}^{path}$ (Lemma \ref{lem:copy_path}) to create $t$ copies of $\ket{x_1 x_2 \cdots x_n}$. Under path constraint, each CNOT gate can be implemented in depth and size $O(n)$. }
\label{fig:my_copypath}
\end{figure}
\begin{lemma}[]\label{lem:sufcopy_path}
$U_{SufCopy}$ and $U''_{PreCopy}$ can each be implemented by a quantum circuit of depth $O(m)$ under $\Path_{n+m}$ constraint.
\end{lemma}
\begin{proof}
$U_{SufCopy}$ creates $\gamma:=\big\lceil\lambda_{targ}/p\big\rceil$ copies of the $p$-qubit state $\ket{x_{suf}}$ in the copy register, i.e.,
\begin{equation*}
\ket{x_{suf}}\ket{0^{p\gamma}}_{{\sf R}_{\rm copy}}\xrightarrow{U_{SufCopy}} \ket{x_{suf}}\ket{x_{suf}}^{\otimes \gamma}_{{\sf R}_{\rm copy}}.
\end{equation*}
As the $p$ qubits that comprise $\ket{x_{suf}}$ are located in a contiguous block in ${\sf R}_{\rm inp}$ that borders ${\sf R}_{\rm 1}$ (see Fig.~\ref{fig:register_in_path}), if the qubits in ${\sf R}_{\rm copy}$ were also located in a contiguous block bordering ${\sf R}_{\rm inp}$ then, by Lemma~\ref{lem:copy_path}, $U_{SufCopy}$ could be implemented by a CNOT circuit of depth $O(p^2+p\gamma)$, where each CNOT gate acts only on nearest neighbours in the path. However, in each layer of the circuit in Fig.~\ref{fig:my_copypath}, each CNOT gate has its control and target qubits separated by either (i) $2p$ black and white qubits (in Fig.~\ref{fig:register_in_path}), or (ii) $2p+\tau$ black, white and red qubits. These two cases require depth $O(p)$ and $O(p+\tau)$, respectively. Putting the $p+\gamma-1$ layers in Fig.~\ref{fig:my_copypath} together,
the total depth required to implement $U_{SufCopy}$ is $(p+\gamma-1)\cdot O(p+\tau) = O(m)$.
The proof for $U_{PreCopy}''$ is similar, except in this case $r$ copies of the $(n-p)$-qubit state $\ket{x_{pre}}$ are made in the copy register.
\end{proof}
\begin{lemma}[]\label{lem:precopy3_path}
$U_{PreCopy}'''$ (Eq.~\eqref{eq:precopy3_graph}) can be implemented by a quantum circuit of depth $O(n-p)$ under $\Path_{n+m}$ constraint.
\end{lemma}
\begin{proof}
$U_{PreCopy}'''$ makes $r=2^p/(n-p)$
copies of the $\tau$-qubit state $\ket{x_{aux}}$ in ${\sf R}_{\rm aux}$. From Fig. \ref{fig:register_in_path}, the ancillary qubits are grouped into registers ${\sf R}_1,\ldots,{\sf R}_r$. Each ${\sf R}_{i}$ contains copy, target and auxiliary register qubits, and after $U''_{PreCopy}$ we already have a copy of $\ket{x_{aux}}$ in the black qubits inside ${\sf R}_i$ (namely, the first $\tau$ bits of the copy of $x_{pre}$). Thus, within each ${\sf R}_{i}$ we can make a copy of $\ket{x_{aux}}$ from the black qubits
to the red qubits.
This can be implemented in depth $O(\tau) + O(n-p)=O(n-p)$ for each ${\sf R}_i$, by Lemma \ref{lem:cnot_path_constraint} and noting that the $\tau$ bits can be copied in a pipeline. Since paths in distinct ${\sf R}_i$ are disjoint, the $r$ copies of $\ket{x_{aux}}$ can be made in parallel. The result follows.
\end{proof}
\begin{lemma}[]\label{lem:precopy_path}
$U_{PreCopy}$ can be implemented by a quantum circuit of depth $O(m)$ under $\Path_{n+m}$ constraint.
\end{lemma}
\begin{proof}
Follows from Lemmas \ref{lem:precopy_graph}, \ref{lem:sufcopy_path} and \ref{lem:precopy3_path}.
\end{proof}
\paragraph{Implementation of the Gray Initial and Gray Cycle stages.}
\begin{lemma}[] \label{lem:grayinitial_path}
$U_{GrayInit}$ (Eq. \eqref{eq:gray_initial_graph}) can be implemented by a CNOT circuit of depth $O(p^2)$ under $\Path_{n+m}$ constraint.
\end{lemma}
\begin{proof}
Recall that the Gray Initial stage aims to generate state $\ket{\langle t_1, x_{suf}\rangle}\otimes \cdots \otimes \ket{\langle t_{2^p}, x_{suf}\rangle}$ in ${\sf R}_{{\rm targ}}$.
We consider 2 cases: $n-p\ge p$ and $n-p<p$.

Case 1: $n-p\ge p$. Consider the first block of $2p$ qubits in ${\sf R}_1$ in Fig.~\ref{fig:register_in_path}: the $p$ black qubits contain exactly $x_{suf}$, and the $p$ white qubits are to hold the state $\ket{\langle t_1, x_{suf}\rangle}\otimes \cdots \otimes \ket{\langle t_{p}, x_{suf}\rangle}$. By Lemma~\ref{lem:cnot_circuit}, this can be implemented by a $(2p)$-qubit CNOT circuit of depth and size $O(p^2)$ under $\Path_{2p}$ constraint. At the same time, we can also generate the state $\ket{\langle t_{p+1}, x_{suf}\rangle}\otimes \cdots \otimes \ket{\langle t_{2p}, x_{suf}\rangle}$ in the second block of $2p$ qubits in ${\sf R}_1$, and similarly for all the remaining blocks of $2p$ qubits in all ${\sf R}_i$'s. All these blocks occupy connected and disjoint regions of the path graph and can thus be handled in parallel. The only possible exceptions are at the end of each ${\sf R}_i$, where the leftover qubits may not form a complete $(2p)$-qubit block. But for each of these ``incomplete blocks'', there is still a copy of $x_{suf}$ at distance only $2p$ away, and thus these incomplete blocks can also be handled in depth $O(p^2)$. Putting everything together, by handling all complete blocks first, and then handling all incomplete blocks afterwards, we achieve the desired unitary with an overall depth of $2\cdot O(p^2) = O(p^2)$.

Case 2: $n-p<p$. We again divide the qubits into blocks, where each block has $p$ black qubits and $p$ white qubits. Unlike in Case 1, there are now red qubits in each block. But note from Fig. \ref{fig:register_in_path} that every $\tau = 2\lceil\log(n-p)\rceil$ red qubits appear after $2(n-p)$ black/white qubits, so the total number of red qubits in each block is no more than the number of black/white ones. Therefore, the length of each block is still $O(p)$, and any CNOT circuit on one block still has depth and size $O(p^2)$ (and circuits on different blocks can be parallelized), as in the previous case. Thus the overall depth is $O(p^2)$ as claimed.
\end{proof}
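What the Gray Initial stage computes is easy to state classically: after this stage, the $k$-th target qubit holds $\langle t_k, x_{suf}\rangle$, since every Gray code starts at the all-zero prefix. A short Python sketch (function name ours):
\begin{verbatim}
from itertools import product

def gray_init(x_suf, suffixes):
    """Target-register contents after Gray Initial: <t_k, x_suf> mod 2."""
    return [sum(ti * xi for ti, xi in zip(t, x_suf)) % 2 for t in suffixes]

# p = 2, x_suf = 10: inner products with 00, 01, 10, 11 are 0, 0, 1, 1
assert gray_init((1, 0), list(product((0, 1), repeat=2))) == [0, 0, 1, 1]
\end{verbatim}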
The operator $U^{(k)}$, defined in the following lemma, is an important tool in the Gray Cycle stage. In the lemma, the $\ket{x_i}$ and $\ket{x_j}$ are black qubits, the $\ket{y_i}$ and $\ket{y_j}$ are white qubits, and the $\ket{x_\ell}$ are red qubits. This lemma is where we use ${\sf R}_{\rm aux}$ to reduce the distance between the control and target qubits of the CNOT gates.
\begin{lemma}\label{lem:U(k)}
Let $x,y\in\mbox{$\{0,1\}$}^{n-p}$. For all $k\in[n-p]$, we seek a unitary transformation $U^{(k)}$ satisfying
\begin{align*}
&\bigotimes_{i=1}^{n-p-k+1}\ket{x_i}_{2i-1}\ket{y_i}_{2i}\bigotimes_{j=n-p-k+2}^{n-p}\ket{x_j}_{2j-1}\ket{y_j}_{2j}\bigotimes_{\ell=1}^{2\lceil\log (n-p)\rceil}\ket{x_\ell}_{2(n-p)+\ell}\\
\xrightarrow{U^{(k)}}& \bigotimes_{i=1}^{n-p-k+1}\ket{x_i}_{2i-1}\ket{x_{i+k-1}\oplus y_i}_{2i}\bigotimes_{j=n-p-k+2}^{n-p}\ket{x_j}_{2j-1}\ket{x_{j-(n-p)+k-1}\oplus y_j}_{2j}\bigotimes_{\ell=1}^{2\lceil\log (n-p)\rceil}\ket{x_\ell}_{2(n-p)+\ell} &\forall x,y\in\mbox{$\{0,1\}$}^{n-p}.
\end{align*}
Under $\Path_{2(n-p)+2\lceil\log (n-p)\rceil}$ constraint, a $U^{(k)}$ can be implemented by a circuit of depth $O(k)$ and size $O(nk)$ if $k\in[2\lceil\log (n-p)\rceil+1]$; otherwise, $U^{(k)}$ can be implemented by a circuit of depth and size $O((n-p)k)$.
\end{lemma}
\begin{proof}
Case 1: $k\le 2\lceil\log(n-p)\rceil+1$. We use Lemma~\ref{lem:U(k)_without_ancilla} with parameters $r_c,r_t,\tau$ there set as $r_t=n-p$, $r_c=n-p+2\lceil \log(n-p) \rceil$, $\tau=2\lceil \log(n-p) \rceil$, and variables $x_{r_t+1},x_{r_t+2},\ldots,x_{r_c}$ there set as $x_{1},x_{2},\ldots,x_{2\lceil \log(n-p) \rceil}$ here. It is easily verified that $U^{(k)}$ in Lemma \ref{lem:U(k)_without_ancilla} satisfies the unitary requirement in this lemma. The result thus follows from Lemma \ref{lem:U(k)_without_ancilla}.
Case 2: $k\ge 2\lceil\log(n-p)\rceil+2$. $U^{(k)}$ can be implemented in two parts: The first part adds $x_{i+k-1}$ to $y_i$, which are $O(k)$ apart, for each $i=1,2,\ldots,n-p-k+1$. The second part adds $x_{i-(n-p)+k-1}$ to $y_i$, which are $O(n-p-k)$ apart, for each $i=n-p-k+2,\cdots,n-p$. Therefore, by Lemma \ref{lem:cnot_path_constraint}, this circuit can be implemented in depth and size
\[(n-p-k+1)\cdot O(k)+(n-p-(n-p-k+2)+1)\cdot O(n-p-k)=O((n-p)k).\]
\end{proof}
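On computational basis states, $U^{(k)}$ acts classically: white bit $y_i$ picks up the prefix bit whose index is shifted by $k-1$, cyclically over $[n-p]$. The following Python sketch (illustrative code of ours, not part of the construction itself) simulates this action and can be used to sanity-check the index arithmetic of the lemma.
\begin{verbatim}
# Classical action of U^(k) on a basis state: y_i <- y_i XOR x_{i+k-1},
# with the index i+k-1 wrapping around cyclically over n-p positions.
def u_k(x, y, k):
    w = len(x)                        # w plays the role of n - p
    out = []
    for i in range(1, w + 1):         # 1-indexed, as in the lemma
        src = i + k - 1 if i <= w - k + 1 else i - w + k - 1
        out.append(y[i - 1] ^ x[src - 1])
    return out

# Example with n - p = 4 and k = 2: y absorbs x shifted by one position.
print(u_k([1, 0, 1, 1], [0, 0, 0, 0], 2))  # [0, 1, 1, 1]
\end{verbatim}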
\begin{lemma}\label{lem:graycycle_path} $U_{GrayCycle}$ (Eq. \eqref{eq:gray_cycle_graph}) can be implemented by a quantum circuit of depth $O(2^{n-p})$ under $\Path_{n+m}$ constraint.
\end{lemma}
\begin{proof}
{Recall that each of the $2^p$ qubits in the target register corresponds to a suffix of $s$, and $U_{GrayCycle}$ enumerates all prefixes of $s$ in the order given by a Gray code; more precisely, qubit $k$ uses the $(n-p,\ell_k)$-Gray code where $\ell_{k} = (k-1)\bmod (n-p)+1$. $U_{GrayCycle}$ is given in Eqs. \eqref{eq:Ugenj_graph} to \eqref{eq:rotation2n-p_graph}, where the phase steps in Eqs. \eqref{eq:rotationj_graph} and \eqref{eq:rotation2n-p_graph} are straightforward, so let us} consider quantum circuits for $U_{Gen}^{(j)}$ for all $j\in[2^{n-p}]$ (Eqs. \eqref{eq:Ugenj_graph} and \eqref{eq:Ugen2n-p_graph}). Recall
the decomposition of qubits in Fig.~\ref{fig:register_in_path} into $r$ registers ${\sf R_1}, {\sf R_2},\ldots, {\sf R_r}$. For every $\ell\in[r]$ and $i\in[n-p]$, if $k=i+(\ell-1)(n-p)$, then
\[\ell_k = (k-1)\bmod (n-p)+1 = (i+(\ell-1)(n-p)-1)\bmod (n-p)+1 = i-1+1 = i.\]
For each register ${\sf R}_q$ where $q\in [r]$, since we have already copied the prefix by $U_{PreCopy}$, the white qubits are in state $\ket{x_1 x_2 \cdots x_{n-p}}$, and the red qubits are in state $\ket{x_1 x_2 \cdots x_{\tau}}$. For $j\in [2^{n-p}-1]$, before $U^{(j)}_{Gen}$, the black qubits are in state $\ket{f_{j,1+(q-1)(n-p)},f_{j,2+(q-1)(n-p)},\cdots, f_{j,q(n-p)}}$. Thus $U^{(j)}_{Gen}$ (Eq.\eqref{eq:Ugenj_graph}) can be represented as
\begin{align*}
&\ket{x_1,f_{j,1+(q-1)(n-p)},x_2,f_{j,2+(q-1)(n-p)},\cdots, x_{n-p},f_{j,q(n-p)}, x_1x_2\cdots x_{\tau}}_{{\sf R}_q}\\
\to &\ket{x_1,f_{j+1,1+(q-1)(n-p)},x_2,f_{j+1,2+(q-1)(n-p)},\cdots, x_{n-p},f_{j+1,q(n-p)}, x_1x_2\cdots x_{\tau}}_{{\sf R}_q}\\
= &\ket{x_1,f_{j,1+(q-1)(n-p)}\oplus x_{h_{1,j+1}},x_2,f_{j,2+(q-1)(n-p)}\oplus x_{h_{2,j+1}},\cdots, x_{n-p},f_{j,q(n-p)}\oplus x_{h_{n-p,j+1}}, x_1x_2\cdots x_{\tau}}_{{\sf R}_q},
\end{align*}
where $f_{j+1,k}=\langle c_{j+1}^{\ell_k}t_k,x\rangle=\langle c_{j}^{\ell_k}t_k,x\rangle\oplus x_{h_{\ell_k,j+1}}$ with $h_{ij}$ defined in Eq.~\eqref{eq:index}.
{Recall that $h_{1,j+1} = \zeta(j)$ (Eq.~\eqref{eq:index})}, and therefore
\begin{align*}
h_{i,j+1} &= ((\zeta(j)+i-2)\bmod (n-p))+1\\
&=(( h_{1,j+1}+i-2)\bmod (n-p))+1\\
& = \begin{cases}
h_{1,j+1}+i-1, &\quad \text{if } 1\le i\le n-p-{\zeta(j)}+1.\\
h_{1,j+1}+i-1-(n-p), &\quad \text{if } n-p-\zeta(j)+2\le i\le n-p.
\end{cases}
\end{align*}
Therefore, the integers $h_{1,j+1},h_{2,j+1},\ldots,h_{n-p,j+1}$ are equal to $h_{1,j+1}, h_{1,j+1}+1,\ldots, n-p, 1,2,\ldots, h_{1,j+1}-1$.
By Lemma~\ref{lem:U(k)} (with $k\leftarrow h_{1,j+1}$ and $n\leftarrow n-p$), the above transformation can be implemented by $U^{(h_{1,j+1})}$ acting on register ${\sf R}_q$, for every $q\in[r]$. Each $U^{(h_{1,j+1})}$ can be implemented in depth
\begin{equation*}
\mathcal{D}(U^{(h_{1,j+1})})= \begin{cases}
O(h_{1,j+1}), &\quad \text{if } h_{1,j+1}\le \tau+1, \\
O((n-p)h_{1,j+1}), & \quad \text{otherwise.}
\end{cases}
\end{equation*}
Moreover, since paths in distinct ${\sf R}_q$ are disjoint, the operators $U^{(h_{1,j+1})}$ on all $r$ registers can be implemented in parallel.
{In the final iteration, the unitary} $U_{Gen}^{(2^{n-p})}$ (Eq. \eqref{eq:Ugen2n-p_graph}) {moves from the last prefix back to the first one}, and it can be implemented in the same way {as $U^{(j)}_{Gen}$ for $j\le 2^{n-p}-1$}, with the same depth upper bound.
By Lemma~\ref{lem:GrayCode}, there are $2^{n-p-i}$ values of $j\in[2^{n-p}-1]$ such that $h_{1,j+1}=i$.
By Lemma~\ref{lem:graycycle_graph}, the circuit depth required to implement $U_{GrayCycle}$ is
\begin{equation}\label{eq:graycycle-depth}
\sum_{j=1}^{2^{n-p}}\mathcal{D}(U_{Gen}^{(j)})+2^{n-p}=\sum_{i=1}^{\tau+1}O(i)\cdot O(2^{n-p-i})+\sum_{i=\tau+2}^{n-p}O((n-p)i)O(2^{n-p-i})+O((n-p)^2)+2^{n-p}=O(2^{n-p}),
\end{equation}
where $\tau=2\lceil\log (n-p)\rceil$.
\end{proof}
\paragraph{Remark.} {Note that, as in the proof of Lemma \ref{lem:Ck_path}, the first term in Eq.~\eqref{eq:graycycle-depth} accounts for the highly numerous CNOT operations, for which we use ${\sf R}_{\rm aux}$ to shrink the distance and hence the cost. The second term in Eq.~\eqref{eq:graycycle-depth} incurs more cost per operation, but the number of such operations is small. In general, the number of operations decays exponentially with the distance, so we choose the cutoff point $\tau = 2\lceil\log(n-p)\rceil$ to make the overall cost small.}
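As a numeric illustration of this remark, the sketch below evaluates the sum in Eq.~\eqref{eq:graycycle-depth} with all hidden constants set to $1$ (an assumption made purely for illustration); the ratio to $2^{n-p}$ stays bounded, reflecting the exponential decay described above.
\begin{verbatim}
import math

def gray_cycle_depth(w):              # w plays the role of n - p
    tau = 2 * math.ceil(math.log2(w))
    s  = sum(i * 2 ** (w - i) for i in range(1, min(tau + 1, w) + 1))
    s += sum(w * i * 2 ** (w - i) for i in range(tau + 2, w + 1))
    return s + w ** 2 + 2 ** w

for w in (8, 12, 16, 20):
    print(w, gray_cycle_depth(w) / 2 ** w)   # ratios stay O(1)
\end{verbatim}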
\paragraph{Implementation of Inverse Stage.}
\begin{lemma}\label{lem:inverse_path}
$U_{Inverse}$
can be implemented by a CNOT circuit of depth $O(m)$ under $\Path_{n+m}$ constraint.
\end{lemma}
\begin{proof}
Follows from Lemmas \ref{lem:inverse_graph}, \ref{lem:sufcopy_path}, \ref{lem:precopy3_path} and \ref{lem:grayinitial_path}.
\end{proof}
\paragraph{Implementation of $\Lambda_n$.}
\begin{lemma}
\label{lem:diag_path_ancillary}
Any $n$-qubit diagonal unitary matrix $\Lambda_n$ can be realized by a quantum circuit of depth {$O\left(2^{n/2}+\frac{2^n}{m}\right)$}
and size $O(2^n)$ under $\Path_{n+m}$ constraint, using {$m \ge 3n$}
ancillary qubits. In particular, we can achieve circuit depth $O(2^{n/2})$ by using $m=\Theta(2^{n/2})$ ancillary qubits.
\end{lemma}
\begin{proof}
If $m\le 3\cdot 2^{n/2}$, the total depth of the circuit for $\Lambda_n$ is
\[O(m)+O(m)+O(2^{n-p})+O(p^2)+O(m) = O(m+2^n/m) =O(2^n/m),\]
by Lemmas \ref{lem:sufcopy_path}, \ref{lem:precopy_path}, \ref{lem:grayinitial_path}, \ref{lem:graycycle_path} and \ref{lem:inverse_path}, where the last equality uses $m\le 3\cdot 2^{n/2}$. Since each layer contains at most $n+m$ gates, the total size is $O(2^n/m)\cdot (n+m)=O(2^n)$.
If $m\ge 3\cdot 2^{n/2}$, we only use $3\cdot 2^{n/2}$ ancillary qubits. In this case, the total depth and size are $O(2^{n/2})$ and $O(2^n)$. Putting the two cases together gives the claimed result.
\end{proof}
As we will see from \S \ref{sec:QSP_US_lowerbound} (Corollary \ref{coro:lower_bound_path}), this bound is optimal.
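The tradeoff in Lemma~\ref{lem:diag_path_ancillary} can be tabulated directly; in the sketch below (constants suppressed, function names ours) the useful number of ancillae is capped at $3\cdot 2^{n/2}$ exactly as in the proof, which reproduces the saturation plotted in Fig.~\ref{fig:depth_diag_path}.
\begin{verbatim}
def diag_depth_path(n, m):
    """Theta(2^(n/2) + 2^n/m) with constants suppressed."""
    m_eff = min(m, 3 * 2 ** (n / 2))   # extra ancillae are left unused
    return 2 ** n / m_eff

n = 20
for m in (3 * n, 2 ** 8, 2 ** 10, 2 ** 12, 2 ** 16):
    print(m, diag_depth_path(n, m))    # no gain once m > 3 * 2^(n/2)
\end{verbatim}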
\begin{figure}
\centering
\begin{tikzpicture}[scale=0.7]
\draw[->] (0,0) -- (8,0);
\draw (5.5,0) node[anchor=north] {\small the number of ancillary qubits
$m$};
\draw (0,0) node[anchor=east] {$O$};
\draw (0,6) node[anchor=east] {\scriptsize $O(2^{n}/n)$};
\draw (0,36/21.4) node[anchor=east] {\scriptsize $O(2^{n/2})$};
\draw (5,2.2) node[anchor=east] {\scriptsize $\Theta(2^{n/2}+\frac{2^n}{n+m})$};
\draw (2,-0.3) node[anchor=east]{\scriptsize $O(2^{n/2})$};
\draw[->] (0,0) -- (0,7) node[anchor=east] {\small circuit depth};
\draw[thick] plot[smooth, domain = 0:1.4] (\x, {(36/11)/(\x + (6/11))});
\draw[thick] (1.4,36/21.4) -- (7.5,36/21.4);
\draw[dotted, gray] (0,36/21.4) -- (1.4, 36/21.4) -- (1.4,0) ;
\draw (0,6) node[fill,black,draw=black,circle,scale = 0.2]{};
\draw (1.4, 36/21.4) node[fill,black,draw=black,circle,scale = 0.2]{};
\draw (0,36/21.4) node[fill,black,draw=black,circle,scale = 0.2]{};
\draw (1.4,0) node[fill,black,draw=black,circle,scale = 0.2]{};
\end{tikzpicture}
\caption{Circuit depth for $n$-qubit diagonal unitary matrix $\Lambda_n$ under $\Path_{n+m}$ constraint (Lemmas \ref{lem:diag_path_withoutancilla} and \ref{lem:diag_path_ancillary}).}
\label{fig:depth_diag_path}
\end{figure}
\subsection{Circuit implementation under $\Grid_{n+m}^{n_1,n_2,\ldots, n_d}$ constraints}
\label{sec:diag_with_ancilla_grid_d}
In this section, we realize the suffix copy and prefix copy stages under the $\Grid_{n+m}^{n_1,n_2,\ldots,n_d}$ constraint, where recall that we assume $n_1 \ge n_2 \ge \cdots\ge n_d$ without loss of generality. The Gray initial and Gray cycle stages are implemented by the circuits of \S \ref{sec:diag_with_ancilla_path}, applied along a Hamiltonian path of $\Grid_{n+m}^{n_1,n_2,\ldots,n_d}$. We take the $n$ input qubits to be arranged in a corner of the $d$-dimensional grid; they can be permuted to any other locations in the grid without increasing the order of the circuit depth required.
We assume that $m\ge 36n$. If $m< 36n$, diagonal unitary matrices can be implemented as in \S \ref{sec:diag_without_ancilla_path}. We take $p= \log (\frac{m}{18})$, $\tau=2\lceil\log (n-p)\rceil$, $\lambda_{copy}=\lambda_{targ}=2^p$, and $\lambda_{aux}=r\tau$ where $r=\frac{2^p}{n-p}$. For the integers specifying the Gray codes, we take $\ell_k=(k-1)\bmod (n-p)+1$ for all $k\in[2^p]$.
\paragraph{Choice of registers.} We assign qubits to ${\sf R}_{\rm inp}$, ${\sf R}_{\rm copy}$, ${\sf R}_{\rm targ}$ and ${\sf R}_{\rm aux}$ as follows (see Fig.~\ref{fig:register_in_grid}). We divide $\Grid_{n+m}^{n_1,n_2,\ldots, n_d}$ into two grids: $\Grid_{n_1\cdots n_{d-1} \lfloor n_d/2\rfloor}^{n_1,n_2,\ldots, n_{d-1}, \lfloor n_d/2\rfloor}$ and $\Grid_{n_1\cdots n_{d-1}\lceil n_d/2\rceil}^{n_1,n_2,\ldots, n_{d-1}, \lceil n_d/2\rceil}$. We can verify that the sizes of these two grids are at least $m/3~(\ge 12n) $ and $m/2~(\ge 18 n)$ respectively.
The input register is in $\Grid_{n_1\cdots n_{d-1} \lfloor n_d/2\rfloor}^{n_1,n_2,\ldots, n_{d-1}, \lfloor n_d/2\rfloor}$ and qubits in $\Grid_{n_1\cdots n_{d-1}\lceil n_d/2\rceil}^{n_1,n_2,\ldots, n_{d-1}, \lceil n_d/2\rceil}$ are utilized as ancillary qubits.
\begin{itemize}
\item We choose the lowest possible dimensional grid to store the input register. More specifically, let $k$ be the minimum integer satisfying $n_1\cdots n_k \ge n$, and $n_k'$ be the minimum integer satisfying $n_1\cdots n_{k-1} n_k' \ge n$. (When $k=1$, $n_1n_2\cdots n_{k-1}$ is defined to be $1$.)
${\sf R}_{\rm inp}$ consists of the first $n$ qubits of sub-grid $\Grid^{n_1,n_2,\ldots,n_{k-1},n'_k,1,1,\ldots,1}_{n_1n_2\cdots n_{k-1}n'_k}$ in $\Grid_{n_1\cdots n_{d-1} \lfloor n_d/2\rfloor}^{n_1,n_2,\ldots, n_{d-1}, \lfloor n_d/2\rfloor}$.
\item We choose $2\cdot 2^p+r\tau$ ancillary qubits from $\Grid_{n_1\cdots n_{d-1}\lceil n_{d}/2 \rceil}^{n_1,\cdots, n_{d-1},\lceil n_d/2\rceil}$,
and utilize them to construct $r$ registers ${\sf R}_1,{\sf R}_2,\cdots, {\sf R}_r$. The size of each ${\sf R}_k$ is $2(n-p)+\tau$.
Now we construct register ${\sf R}_k$. Let $j$ be the minimum integer satisfying $n_1\cdots n_j \ge 2(n-p)+\tau$, and $n_j'$ be the minimum integer satisfying $n_1\cdots n_{j-1} n_j' \ge 2(n-p)+\tau$. (When $j=1$, $n_1\cdots n_{j-1}$ is defined to be $1$.) We divide $\Grid_{n_1\cdots n_{d-1}\lceil n_{d}/2 \rceil}^{n_1,\ldots, n_{d-1},\lceil n_d/2\rceil}$ into sub-grids isomorphic to $\Grid^{n_1,n_2,\ldots,n_{j-1},n'_{j},1,1\ldots,1}_{n_1n_2\cdots n_{j-1} n'_{j}}$. Each sub-grid stores exactly one register ${\sf R}_k$. Note that ${\sf R}_k$ occupies at least half of its sub-grid, so the number of qubits in the sub-grid not in ${\sf R}_k$ is at most $2(n-p)+\tau$; hence at most $r (2(n-p)+\tau)\le \frac{m}{6}$ qubits are wasted (not used). We choose a Hamiltonian path $P$ in each sub-grid and assign qubits to the ${\sf R}_{\rm copy}$, ${\sf R}_{\rm targ}$ and ${\sf R}_{\rm aux}$ registers in the same way as our assignment for the path graph in Fig.~\ref{fig:register_in_path}. We choose the same Hamiltonian path for all ${\sf R}_k$ (i.e., the same for each sub-grid).
\end{itemize}
An example showing the registers for $\Grid_{n+m}^{n_1,n_2}$ ($n_1\ge n$ and $n_2\ge 2$) is shown in Fig.~\ref{fig:register_in_grid}.
Next we analyze the cost of the prefix and suffix copy stages. Recall that the prefix consists of $n-p$ bits and the suffix of $p$ bits. In the following lemma, we use a uniform parameter $n'$ to cover both cases.
\begin{figure}[]
\centering
\begin{tikzpicture}
\draw [draw=Goldenrod,thick] (9.8,-1.4)--(0,-1.4)--(0,-1)--(9.8,-1)--(9.8,-0.4)--(0,-0.4)--(0,0)--(9.8,0);
\draw [fill=black] (3,0) circle (0.05) (3.4,0) circle (0.05) (3.8,0) circle (0.05) (4.6,0) circle (0.05) (7,0) circle (0.05) (7.4,0) circle (0.05) (7.8,0) circle (0.05) (8.6,0) circle (0.05) ;
\draw [fill=white] (0.2,0) circle (0.05) (0.6,0) circle (0.05) (1.0,0) circle (0.05) (1.8,0) circle (0.05) (3.2,0) circle (0.05) (3.6,0) circle (0.05) (4.0,0) circle (0.05) (4.8,0) circle (0.05) (7.2,0) circle (0.05) (7.6,0) circle (0.05) (8.0,0) circle (0.05) (8.8,0) circle (0.05);
\draw [draw=red, fill=red] (2,0) circle (0.05) (2.2,0) circle (0.05) (2.8,0) circle (0.05) (5,0) circle (0.05) (5.2,0) circle (0.05) (5.8,0) circle (0.05) (9,0) circle (0.05) (9.2,0) circle (0.05) (9.8,0) circle (0.05);
\draw (1.3,0) node{\scriptsize $\cdots$} (2.5,0) node{\scriptsize $\cdots$} (4.3,0) node{\scriptsize $\cdots$} (5.5,0) node{\scriptsize $\cdots$} (8.3,0) node{\scriptsize $\cdots$} (9.5,0) node{\scriptsize $\cdots$} (6.4,0) node{\scriptsize $\cdots$};
\draw [fill=black] (0,-0.4) circle (0.05) (0.4,-0.4) circle (0.05) (0.8,-0.4) circle (0.05) (1.6,-0.4) circle (0.05) (3,-0.4) circle (0.05) (3.4,-0.4) circle (0.05) (3.8,-0.4) circle (0.05) (4.6,-0.4) circle (0.05) (7,-0.4) circle (0.05) (7.4,-0.4) circle (0.05) (7.8,-0.4) circle (0.05) (8.6,-0.4) circle (0.05);
\draw [fill=white] (0.2,-0.4) circle (0.05) (0.6,-0.4) circle (0.05) (1.0,-0.4) circle (0.05) (1.8,-0.4) circle (0.05) (3.2,-0.4) circle (0.05) (3.6,-0.4) circle (0.05) (4.0,-0.4) circle (0.05) (4.8,-0.4) circle (0.05) (7.2,-0.4) circle (0.05) (7.6,-0.4) circle (0.05) (8.0,-0.4) circle (0.05) (8.8,-0.4) circle (0.05);
\draw [draw=red, fill=red] (2,-0.4) circle (0.05) (2.2,-0.4) circle (0.05) (2.8,-0.4) circle (0.05) (5,-0.4) circle (0.05) (5.2,-0.4) circle (0.05) (5.8,-0.4) circle (0.05) (9,-0.4) circle (0.05) (9.2,-0.4) circle (0.05) (9.8,-0.4) circle (0.05);
\draw (1.3,-0.4) node{\scriptsize $\cdots$} (2.5,-0.4) node{\scriptsize $\cdots$} (4.3,-0.4) node{\scriptsize $\cdots$} (5.5,-0.4) node{\scriptsize $\cdots$} (8.3,-0.4) node{\scriptsize $\cdots$} (9.5,-0.4) node{\scriptsize $\cdots$} (6.4,-0.4) node{\scriptsize $\cdots$};
\draw [fill=black] (0,0) circle (0.05) (0.4,0) circle (0.05) (0.8,0) circle (0.05) (1.6,0) circle (0.05) ;
\draw [fill=black] (0,-1) circle (0.05) (0.4,-1) circle (0.05) (0.8,-1) circle (0.05) (1.6,-0.4) circle (0.05) (3,-1) circle (0.05) (3.4,-1) circle (0.05) (3.8,-1) circle (0.05) (4.6,-1) circle (0.05) (7,-1) circle (0.05) (7.4,-1) circle (0.05) (7.8,-1) circle (0.05) (8.6,-1) circle (0.05)(1.6,-1) circle (0.05) ;
\draw [fill=white] (0.2,-1) circle (0.05) (0.6,-1) circle (0.05) (1.0,-1) circle (0.05) (1.8,-1) circle (0.05) (3.2,-1) circle (0.05) (3.6,-1) circle (0.05) (4.0,-1) circle (0.05) (4.8,-1) circle (0.05) (7.2,-1) circle (0.05) (7.6,-1) circle (0.05) (8.0,-1) circle (0.05) (8.8,-1) circle (0.05);
\draw [draw=red, fill=red] (2,-1) circle (0.05) (2.2,-1) circle (0.05) (2.8,-1) circle (0.05) (5,-1) circle (0.05) (5.2,-1) circle (0.05) (5.8,-1) circle (0.05) (9,-1) circle (0.05) (9.2,-1) circle (0.05) (9.8,-1) circle (0.05);
\draw (1.3,-1) node{\scriptsize $\cdots$} (2.5,-1) node{\scriptsize $\cdots$} (4.3,-1) node{\scriptsize $\cdots$} (5.5,-1) node{\scriptsize $\cdots$} (8.3,-1) node{\scriptsize $\cdots$} (9.5,-1) node{\scriptsize $\cdots$} (6.4,-1) node{\scriptsize $\cdots$};
\draw [fill=blue,draw=blue] (0,-1.4) circle (0.05) (0.4,-1.4) circle (0.05) (0.8,-1.4) circle (0.05) (1.6,-1.4) circle (0.05) (3,-1.4) circle (0.05) (3.4,-1.4) circle (0.05) (0.2,-1.4) circle (0.05) (0.6,-1.4) circle (0.05) (1.0,-1.4) circle (0.05) (1.8,-1.4) circle (0.05) (2,-1.4) circle (0.05) (2.2,-1.4) circle (0.05) (2.8,-1.4) circle (0.05) (3.8,-1.4) circle (0.05) (3.2,-1.4) circle (0.05) (3.6,-1.4) circle (0.05);
\draw [fill=lgray,draw=lgray](8.6,-1.4) circle (0.05) (7.8,-1.4) circle (0.05) (7,-1.4) circle (0.05) (7.4,-1.4) circle (0.05) (4.0,-1.4) circle (0.05) (4.8,-1.4) circle (0.05) (7.2,-1.4) circle (0.05) (7.6,-1.4) circle (0.05) (8.0,-1.4) circle (0.05) (8.8,-1.4) circle (0.05) (4.6,-1.4) circle (0.05);
\draw [fill=lgray,draw=lgray] (5,-1.4) circle (0.05) (5.2,-1.4) circle (0.05) (5.8,-1.4) circle (0.05) (9,-1.4) circle (0.05) (9.2,-1.4) circle (0.05) (9.8,-1.4) circle (0.05);
\draw (1.3,-1.4) node{\scriptsize $\cdots$} (2.5,-1.4) node{\scriptsize $\cdots$} (4.3,-1.4) node{\scriptsize $\cdots$} (5.5,-1.4) node{\scriptsize $\cdots$} (8.3,-1.4) node{\scriptsize $\cdots$} (9.5,-1.4) node{\scriptsize $\cdots$} (6.4,-1.4) node{\scriptsize $\cdots$};
\draw[fill=lgray,draw=lgray] (10,0) circle (0.05) (10.2,0) circle (0.05) (10.4,0) circle (0.05) (11,0) circle (0.05);
\draw (10.7,0) node{\scriptsize $\cdots$};
\draw[fill=lgray,draw=lgray] (10,-0.4) circle (0.05) (10.2,-0.4) circle (0.05) (10.4,-0.4) circle (0.05) (11,-0.4) circle (0.05);
\draw (10.7,-0.4) node{\scriptsize $\cdots$};
\draw[fill=lgray,draw=lgray] (10,-1) circle (0.05) (10.2,-1) circle (0.05) (10.4,-1) circle (0.05) (11,-1) circle (0.05);
\draw (10.7,-1) node{\scriptsize $\cdots$};
\draw[fill=lgray,draw=lgray] (10,-1.4) circle (0.05) (10.2,-1.4) circle (0.05) (10.4,-1.4) circle (0.05) (11,-1.4) circle (0.05);
\draw (10.7,-1.4) node{\scriptsize $\cdots$};
\draw[fill=lgray,draw=lgray] (10,-1.8) circle (0.05) (10.2,-1.8) circle (0.05) (10.4,-1.8) circle (0.05) (11,-1.8) circle (0.05);
\draw (10.7,-1.8) node{\scriptsize $\cdots$};
\draw [fill=lgray,draw=lgray] (0,-1.8) circle (0.05) (0.4,-1.8) circle (0.05) (0.8,-1.8) circle (0.05) (1.6,-1.8) circle (0.05) (3,-1.8) circle (0.05) (3.4,-1.8) circle (0.05) (0.2,-1.8) circle (0.05) (0.6,-1.8) circle (0.05) (1.0,-1.8) circle (0.05) (1.8,-1.8) circle (0.05) (2,-1.8) circle (0.05) (2.2,-1.8) circle (0.05) (2.8,-1.8) circle (0.05) (3.8,-1.8) circle (0.05) (3.2,-1.8) circle (0.05) (3.6,-1.8) circle (0.05);
\draw [fill=lgray,draw=lgray](8.6,-1.8) circle (0.05) (7.8,-1.8) circle (0.05) (7,-1.8) circle (0.05) (7.4,-1.8) circle (0.05) (4.0,-1.8) circle (0.05) (4.8,-1.8) circle (0.05) (7.2,-1.8) circle (0.05) (7.6,-1.8) circle (0.05) (8.0,-1.8) circle (0.05) (8.8,-1.8) circle (0.05) (4.6,-1.8) circle (0.05);
\draw [fill=lgray,draw=lgray] (5,-1.8) circle (0.05) (5.2,-1.8) circle (0.05) (5.8,-1.8) circle (0.05) (9,-1.8) circle (0.05) (9.2,-1.8) circle (0.05) (9.8,-1.8) circle (0.05);
\draw (1.3,-1.8) node{\scriptsize $\cdots$} (2.5,-1.8) node{\scriptsize $\cdots$} (4.3,-1.8) node{\scriptsize $\cdots$} (5.5,-1.8) node{\scriptsize $\cdots$} (8.3,-1.8) node{\scriptsize $\cdots$} (9.5,-1.8) node{\scriptsize $\cdots$} (6.4,-1.8) node{\scriptsize $\cdots$};
\draw[fill=lgray,draw=lgray] (10,-2.4) circle (0.05) (10.2,-2.4) circle (0.05) (10.4,-2.4) circle (0.05) (11,-2.4) circle (0.05);
\draw (10.7,-2.4) node{\scriptsize $\cdots$};
\draw [fill=lgray,draw=lgray] (0,-2.4) circle (0.05) (0.4,-2.4) circle (0.05) (0.8,-2.4) circle (0.05) (1.6,-2.4) circle (0.05) (3,-2.4) circle (0.05) (3.4,-2.4) circle (0.05) (0.2,-2.4) circle (0.05) (0.6,-2.4) circle (0.05) (1.0,-2.4) circle (0.05) (1.8,-2.4) circle (0.05) (2,-2.4) circle (0.05) (2.2,-2.4) circle (0.05) (2.8,-2.4) circle (0.05) (3.8,-2.4) circle (0.05) (3.2,-2.4) circle (0.05) (3.6,-2.4) circle (0.05);
\draw [fill=lgray,draw=lgray](8.6,-2.4) circle (0.05) (7.8,-2.4) circle (0.05) (7,-2.4) circle (0.05) (7.4,-2.4) circle (0.05) (4.0,-2.4) circle (0.05) (4.8,-2.4) circle (0.05) (7.2,-2.4) circle (0.05) (7.6,-2.4) circle (0.05) (8.0,-2.4) circle (0.05) (8.8,-2.4) circle (0.05) (4.6,-2.4) circle (0.05);
\draw [fill=lgray,draw=lgray] (5,-2.4) circle (0.05) (5.2,-2.4) circle (0.05) (5.8,-2.4) circle (0.05) (9,-2.4) circle (0.05) (9.2,-2.4) circle (0.05) (9.8,-2.4) circle (0.05);
\draw (1.3,-2.4) node{\scriptsize $\cdots$} (2.5,-2.4) node{\scriptsize $\cdots$} (4.3,-2.4) node{\scriptsize $\cdots$} (5.5,-2.4) node{\scriptsize $\cdots$} (8.3,-2.4) node{\scriptsize $\cdots$} (9.5,-2.4) node{\scriptsize $\cdots$} (6.4,-2.4) node{\scriptsize $\cdots$};
\draw (1.3,-0.5) node{\scriptsize $\vdots$} (2.5,-0.5) node{\scriptsize $\vdots$} (5.5,-0.5) node{\scriptsize $\vdots$} (8.3,-0.5) node{\scriptsize $\vdots$} (9.5,-0.5) node{\scriptsize $\vdots$} (6.4,-0.5) node{\scriptsize $\vdots$};
\draw [draw=teal] (2.9,-0.5)--(5.9,-0.5)--(5.9,-0.3)--(2.9,-0.3)--cycle;
\draw [draw=teal] (-0.1,-0.5)--(2.9,-0.5)--(2.9,-0.3)--(-0.1,-0.3)--cycle;
\draw [draw=teal] (6.9,-0.5)--(9.9,-0.5)--(9.9,-0.3)--(6.9,-0.3)--cycle;
\draw [draw=teal] (2.9,-0.1)--(5.9,-0.1)--(5.9,0.1)--(2.9,0.1)--cycle;
\draw [draw=teal] (-0.1,-0.1)--(2.9,-0.1)--(2.9,0.1)--(-0.1,0.1)--cycle;
\draw [draw=teal](6.9,-0.1)--(9.9,-0.1)--(9.9,0.1)--(6.9,0.1)--cycle;
\draw [draw=teal] (2.9,-0.9)--(5.9,-0.9)--(5.9,-1.1)--(2.9,-1.1)--cycle;
\draw [draw=teal] (-0.1,-0.9)--(2.9,-0.9)--(2.9,-1.1)--(-0.1,-1.1)--cycle;
\draw [draw=teal](6.9,-0.9)--(9.9,-0.9)--(9.9,-1.1)--(6.9,-1.1)--cycle;
\draw (4.4,-0.7) node{\tiny \color{teal} ${\sf R}_k$};
\draw (1.4,-1.2) node{\tiny\color{teal} ${\sf R}_1$};
\draw (4.4,-1.2) node{\tiny \color{teal} ${\sf R}_2$};
\draw (8.5,-1.3) node{\tiny \color{teal} ${\sf R}_{\lfloor\frac{n_1}{2(n-p)+\tau}\rfloor}$};
\draw (8.5,-0.2) node{\tiny \color{teal} ${\sf R}_{r}$};
\node (g) at (1.8,0.2) {};
\node (h) at (3,0.2) {};
\draw [decorate,decoration={brace}] (g)--(h);
\node (g) at (4.8,0.2) {};
\node (h) at (6,0.2) {};
\draw [decorate,decoration={brace}] (g)--(h);
\node (g) at (8.8,0.2) {};
\node (h) at (10,0.2) {};
\draw [decorate,decoration={brace}] (g)--(h);
\node (g) at (-0.2,0.2) {};
\node (h) at (2,0.2) {};
\draw [decorate,decoration={brace}] (g)--(h);
\node (g) at (2.8,0.2) {};
\node (h) at (5,0.2) {};
\draw [decorate,decoration={brace}] (g)--(h);
\node (g) at (6.8,0.2) {};
\node (h) at (9,0.2) {};
\draw [decorate,decoration={brace}] (g)--(h);
\node (g) at (-0.2,-1.2) {};
\node (h) at (-0.2,0.2) {};
\draw [decorate,decoration={brace}] (g)--(h);
\node (g) at (-0.2,-2.6) {};
\node (h) at (-0.2,-1.2) {};
\draw [decorate,decoration={brace}] (g)--(h);
\node (g) at (11.2,-2.6) {};
\node (h) at (-0.2,-2.6) {};
\draw [decorate,decoration={brace}] (g)--(h);
\draw (5.5,-3) node{ \scriptsize $n_1$} (-0.8,-0.5) node{ \scriptsize $\lceil n_2/2\rceil$} (-0.8,-1.9) node{ \scriptsize $\lfloor n_2/2\rfloor$} (1.9,-1.6) node{ \color{blue} \scriptsize $n$};
\draw (0.9,0.6) node {\scriptsize $2(n-p)$} (3.9,0.6) node {\scriptsize $2(n-p)$} (7.9,0.6) node {\scriptsize $2(n-p)$};
\draw (2.4,0.6) node{\color{red} \scriptsize $\tau$} (5.4,0.6) node{\color{red} \scriptsize $\tau$} (9.4,0.6) node{\color{red}\scriptsize $\tau$};
\end{tikzpicture}
\caption{${\sf R}_{\rm inp}$, ${\sf R}_{\rm copy}$, ${\sf R}_{\rm targ}$ and ${\sf R}_{\rm aux}$ for $\Grid_{n+m}^{n_1,n_2}$ constraint for $n_1\ge n$ and $n_2\ge 2$. Colors correspond to input (blue), copy (black), target (white) and auxiliary (red) register qubits. The grey qubits are not utilized in the circuit construction.
The ancillary qubits are grouped into registers labelled $\textsf{R}_1,\textsf{R}_2,\ldots,\textsf{R}_r$. }
\label{fig:register_in_grid}
\end{figure}
\begin{lemma}\label{lem:copy_grid}
For any $n'\ge 1$, let $s\le d$ denote the minimum integer satisfying $n_1\cdots n_s\ge n'$, and $n_s'$ be the minimum integer satisfying $n_1\cdots n_{s-1}n'_s\ge n'$. For a general $y\in\{0,1\}^{n'}$, suppose the state $\ket{y}$ is input in the first $n'$ qubits of sub-grid $\Grid^{n_1,n_2,\ldots,n_{s-1},n'_s,1,1,\ldots,1}_{n_1n_2\cdots n_{s-1}n'_s}$.
Then a unitary transformation $U_{copy}^{grid_d}$ satisfying
\begin{equation}
\ket{y}\ket{0^{n't}}\xrightarrow{U^{grid_d}_{copy}}\ket{y}\underbrace{\ket{yy\cdots yy}}_{t:=O(\prod_{i=1}^dn_i/n') {\rm~copies~of ~}y}
\end{equation}
can be implemented by a circuit of depth $O((n')^2+\sum_{i=1}^d n_i)$ under $\Grid^{n_1,n_2,\ldots,n_d}_{n_1n_2\cdots n_d}$ constraint.
\end{lemma}
\begin{proof}
Label qubits in the grid by their integer coordinates $(i_1,i_2,\ldots,i_d)$ where $i_k\in[n_k]$, for all $k\in[d]$.
For $y\in \mbox{$\{0,1\}$}^{n'}$, $\ket{y}$ is stored in the first $n'$ qubits of sub-grid $\Grid^{n_1,n_2,\ldots,n_{s-1},n'_s,1,1,\ldots,1}_{n_1n_2\cdots n_{s-1}n'_s}$. It can be verified that the size of $\Grid^{n_1,n_2,\ldots,n_{s-1},n'_s,1,1,\ldots,1}_{n_1n_2\cdots n_{s-1}n'_s}$ is less than $2n'$. For simplicity of presentation, assume that $n_s$ is a multiple of $n'_s$.
\begin{enumerate}
\item First, we make $\frac{n_s}{n_s'}-1$ copies of qubits of $\Grid^{n_1,n_2,\ldots,n_{s-1},n'_s,1,1,\ldots,1}_{n_1n_2\cdots n_{s-1}n'_s}$ in $\Grid^{n_1,n_2,\ldots,n_{s-1},n_s,1,1,\ldots,1}_{n_1n_2\cdots n_{s-1}n_s}$.
For every $(i_1,\cdots,i_{s-1})\in[n_1]\times [n_2]\times \ldots \times [n_{s-1}]$, define path $P_{(i_1,i_2,\cdots,i_{s-1})}$ of length $n_s$:
\begin{align*}
P_{(i_1,i_2,\cdots,i_{s-1})} = \{(i_1,i_2,\ldots,i_{s-1},v_s,1,\ldots,1): v_s\in [n_s]\}.
\end{align*}
For every $(i_1,\ldots,i_{s-1})\in[n_1]\times\cdots\times[n_{s-1}]$, we make $\frac{n_s}{n_s'}-1$ copies of the first $n'_s$ qubits of $P_{(i_1,i_2,\cdots,i_{s-1})}$ along this path, under path constraint.
By Lemma~\ref{lem:copy_path}, this requires depth $O((n'_s)^2+n'_s(n_s/n'_s-1))=O((n'_s)^2+n_s)$.
\item Second, we make $n_{s+1}n_{s+2}\cdots n_d$ copies of the qubits of $\Grid^{n_1,n_2,\ldots,n_{s-1},n_s,1,1,\ldots,1}_{n_1n_2\cdots n_s}$ in $\Grid^{n_1,n_2,\ldots,n_{d}}_{n_1\cdots n_d}$. This can be implemented in $d-s$ steps. For every $k\in[d-s]$, in the $k$-th step, we make $n_{s+k}-1$ copies of the qubits of $\Grid_{n_1n_2\cdots n_{s+k-1}}^{n_1,n_2,\cdots,n_{s+k-1},1,\cdots,1}$ in $\Grid_{n_1n_2\cdots n_{s+k}}^{n_1,n_2,\cdots,n_{s+k},1,\cdots,1}$. Similar to the discussion above, the $k$-th step requires depth $O(1^2+n_{s+k})=O(n_{s+k})$. The total depth of these $d-s$ steps is $\sum_{k=1}^{d-s}O(n_{s+k})=O(\sum_{i=s+1}^dn_i)$.
\end{enumerate}
Then we have made $\frac{n_s}{n'_s}n_{s+1}n_{s+2}\cdots n_d=O((\prod_{i=1}^d n_i)/n')$ copies of $y$ in total, since $n'\le (\prod_{i=1}^{s-1}n_i)n'_s<2n'$. The total depth is
\[O \big((n'_s)^2 + n_s+\sum_{i=s+1}^d n_i\big)\le O \big((n')^2 + \sum_{i=1}^d n_i\big).\]
\end{proof}
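The two phases of this proof admit a simple depth bookkeeping, sketched below in Python (the function \texttt{grid\_copy\_depth} and its constants are illustrative, not part of the formal construction): phase 1 copies along axis $s$ at cost $O((n'_s)^2+n_s)$, and phase 2 performs one fan-out sweep per remaining axis.
\begin{verbatim}
def grid_copy_depth(dims, s, ns_prime):
    """dims = [n_1, ..., n_d]; s and ns_prime as in the lemma."""
    phase1 = ns_prime ** 2 + dims[s - 1]                 # along axis s
    phase2 = sum(dims[k] for k in range(s, len(dims)))   # axes s+1..d
    return phase1 + phase2

# Example: 8 x 8 x 4 grid, y stored along axis s = 1 with n'_s = 6.
print(grid_copy_depth([8, 8, 4], s=1, ns_prime=6))  # O((n')^2 + sum n_i)
\end{verbatim}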
\begin{lemma}\label{lem:diag_grid_ancillary}
Any $n$-qubit diagonal unitary matrix $\Lambda_n$ can be realized by a quantum circuit of depth \[O\Big(n^2+d2^{\frac{n}{d+1}}+\max_{k\in\{2,\ldots,d\}}\Big\{\frac{d2^{n/k}}{(\Pi_{i=k}^d n_i)^{1/k}}\Big\}+\frac{2^n}{n+m}\Big)\]
under $\Grid^{n_1,n_2,\ldots,n_d}_{n+m}$ constraint, using $m \ge {36n}$ ancillary qubits. If $n_1=n_2=\cdots=n_d$, the circuit depth is $O\left(n^2+d2^{\frac{n}{d+1}}+\frac{2^n}{n+m}\right)$.
\end{lemma}
\begin{proof}
From Lemma~\ref{lem:copy_grid}, it follows by an argument similar to that in the proof of Lemma~\ref{lem:sufcopy_path} that $U_{SufCopy}$ and $U_{PreCopy}''$ can be realized by circuits of depth $O(n^2+\sum_{j=1}^d n_j)$. By Lemma \ref{lem:precopy3_path}, $U_{PreCopy}'''$ can be realized in depth $O(n-p)$. By Lemmas \ref{lem:precopy_graph}, \ref{lem:grayinitial_path}, \ref{lem:graycycle_path} and \ref{lem:inverse_graph}, the depths of the suffix copy, prefix copy, Gray initial, Gray cycle and inverse stages are $O\big(n^2+\sum_{j=1}^d n_j\big)$, $O\big(n^2+\sum_{j=1}^d n_j\big)$, $O(p^2)$, $O(2^{n-p})$ and $O\big(n^2+\sum_{j=1}^d n_j\big)$, respectively.
The total circuit depth for $\Lambda_{n}$ is thus $O(n^2+\sum_{j=1}^d n_j+\frac{2^n}{n+m})$ under $\Grid_{n+m}^{n_1,n_2,\ldots,n_d}$ constraint. This bound is good when all the $n_j$'s are of similar magnitude: indeed, when $n_1 = n_2 = \cdots = n_d$, the bound becomes $O(n^2+ dm^{1/d}+\frac{2^n}{n+m})$. If $m \le O(2^{\frac{d}{d+1}n}/d)$, then the third term dominates and the bound is $O(2^n/m)$. If $m \ge \Omega(2^{\frac{d}{d+1}n}/d)$, we choose to use only $2^{\frac{d}{d+1}n}/d$ ancillary qubits, yielding a depth bound of $O(d2^{\frac{n}{d+1}})$. This completes the proof for the special case of $n_1 = n_2 = \cdots = n_d$.
In the general case, where some $n_j$'s are much larger than others, we need some further treatment.
Actually, we can use only a sub-grid $\Grid_{n+m'}^{n_1',n_2',\ldots,n_d'}$ of $\Grid_{n+m}^{n_1,n_2,\ldots,n_d}$ for the construction of $\Lambda_n$, where $n_i'\le n_i$ for all $i\in[d]$, and $m'=\prod_{i=1}^dn'_i-n \ge 36n$.
We consider $d+1$ cases:
\begin{itemize}
\item Case 1: $n_d\ge 2^{\frac{n}{d+1}}$. In this case we choose $n_i'=2^{\frac{n}{d+1}}$ for all $i\in[d]$, which gives $m'=\prod_{i=1}^dn'_i-n = 2^{\frac{dn}{d+1}}-n=\omega(n)$. The total depth is
\[O\Big(n^2+\sum_{j=1}^d n'_j+\frac{2^n}{n+m'}\Big) = O\left(n^2+d2^{\frac{n}{d+1}}+\frac{2^n}{2^{dn/(d+1)}}\right) = O\left(n^2+d2^{\frac{n}{d+1}}\right).
\]
\item Case $j$ ($2\le j\le d$): $n_d,n_{d-1},\ldots,n_{d-j+1}$ satisfy
\begin{equation}\label{eq:range_casej}
n_d < 2^{n/(d+1)},\quad n_{d-i}<\frac{2^{\frac{n}{d-i+1}}}{(n_{d-i+1}\cdots n_{d})^{\frac{1}{d-i+1}}}~~\forall i\in[j-2], \quad n_{d-j+1}\ge \frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_{d})^{\frac{1}{d-j+2}}}.
\end{equation}
We set
\begin{equation}\label{eq:ni'}
n_i' =\begin{cases}
\frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_d)^{\frac{1}{d-j+2}}} &\quad i\in[d-j+1]\\
n_i &\quad i\in\{d-j+2,d-j+3,\ldots,d\}
\end{cases}
\end{equation}
The number of ancillary qubits satisfies
\begin{align}\label{eq:m'}
m'&= \prod_{i=1}^{d}n'_i-n
=\Big( \frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_d)^{\frac{1}{d-j+2}}} \Big)^{d-j+1}(n_{d-j+2}\cdots n_d)-n= 2^{\frac{(d-j+1)n}{d-j+2}}(n_{d-j+2}\cdots n_d)^{\frac{1}{d-j+2}}-n\\
\ge& 2^{\frac{(d-j+1)n}{d-j+2}}-n\ge 2^{\frac{n}{2}}-n=\omega(n).\nonumber
\end{align}
Now we have the following, where the first inequality holds because $n_{k-1}\le \frac{2^{n/k}}{(n_k\cdots n_d)^{1/k}}$ (Eq. \eqref{eq:range_casej}), and the second inequality holds because $n_{d-j+2}\ge n_{d-j+3}\ge \cdots \ge n_{k-1}$ for $ k\in \{d-j+3,\ldots, d\}$.
\begin{align*}
&\frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_d)^{\frac{1}{d-j+2}}}/ \frac{2^{\frac{n}{k}}}{(n_k\cdots n_d)^{\frac{1}{k}}}= \frac{2^{\frac{n(k-(d-j+2))}{(d-j+2)k}}}{(n_{d-j+2}\cdots n_{k-1})^{\frac{1}{d-j+2}}(n_{k}\cdots n_d)^{\frac{k-(d-j+2)}{(d-j+2)k}}}\\
\ge& \frac{(n_{k-1})^{\frac{k-(d-j+2)}{d-j+2}}}{(n_{d-j+2}\cdots n_{k-1})^{\frac{1}{d-j+2}}}\ge \frac{( n_{k-1})^{\frac{k-(d-j+2)}{d-j+2}}}{(n_{d-j+2})^{\frac{k-(d-j+2)}{d-j+2}}}\ge 1,\quad \forall k\in \{d-j+3,\ldots, d\}.
\end{align*}
Therefore, we have
\begin{equation}\label{eq:casej_ineq}
\frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_d)^{\frac{1}{d-j+2}}}\ge \frac{2^{\frac{n}{k}}}{(n_k\cdots n_d)^{\frac{1}{k}}}\quad \forall k\in \{d-j+3,\ldots, d\}.
\end{equation}
The total circuit depth in this case is
\begin{align*}
& O\Big(n^2+\sum_{i=1}^d n'_i+\frac{2^n}{n+m'}\Big)\\
=& O\Big(n^2+\frac{(d-j+1)2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_d)^{\frac{1}{d-j+2}}}+\sum_{k=d-j+2}^d n_k+\frac{2^n}{n+m'}\Big) & (\text{by~Eq. \eqref{eq:ni'}})\\
\le &O\Big(n^2+\frac{(d-j+1)2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_d)^{\frac{1}{d-j+2}}}+\sum_{k=d-j+2}^{d-1} \frac{2^{\frac{n}{k+1}}}{(n_{k+1}\cdots n_d)^{\frac{1}{k+1}}}+2^{\frac{n}{d+1}}+\frac{2^n}{n+m'}\Big) & (\text{by~Eq.~}\eqref{eq:range_casej})\\
\le & O\Big(n^2+\frac{(d-j+1)2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_d)^{\frac{1}{d-j+2}}}+\sum_{k=d-j+2}^{d-1} \frac{2^{\frac{n}{k+1}}}{(n_{k+1}\cdots n_d)^{\frac{1}{k+1}}}+2^{\frac{n}{d+1}}+\frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_d)^{\frac{1}{d-j+2}}}\Big) &(\text{by~Eq.~}\eqref{eq:m'})\\
\le & O\Big(n^2+2^{\frac{n}{d+1}}+\frac{d2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_d)^{\frac{1}{d-j+2}}}\Big)& (\text{by~Eq.~}\eqref{eq:casej_ineq})\\
\le & O\Big(n^2+2^{\frac{n}{d+1}}+\max_{k\in\{2,\ldots,d\}}\Big\{\frac{d2^{n/k}}{(\Pi_{i=k}^d n_i)^{1/k}}\Big\}+\frac{2^n}{n+m}\Big).
\end{align*}
\item Case $d+1$: $n_d,n_{d-1},\ldots,n_1$ satisfy
\begin{equation*}
n_d < 2^{n/(d+1)},\quad n_{d-i}<\frac{2^{\frac{n}{d-i+1}}}{(n_{d-i+1}\cdots n_{d})^{\frac{1}{d-i+1}}},\forall i\in[d-1].
\end{equation*}
In this case we set $n'_i=n_i$ for all $i\in[d]$. Thus, $m'=\prod_{i=1}^{d}n'_i-n=\prod_{i=1}^{d}n_i-n=m\ge 36n$. The total depth is
\[O\Big(n^2+\sum_{j=1}^d n_j+\frac{2^n}{n+m}\Big)\le O\Big(n^2+2^{\frac{n}{d+1}}+\sum_{j=2}^d\frac{2^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}+\frac{2^n}{n+m}\Big)\le O\Big(n^2+2^{\frac{n}{d+1}}+\max_{k\in\{2,\ldots,d\}}\Big\{\frac{d2^{n/k}}{(\Pi_{i=k}^d n_i)^{1/k}}\Big\}+\frac{2^n}{n+m}\Big).\]
\end{itemize}
Combining the above $d+1$ cases, the depth upper bound can be summarized as
\[O\Big(n^2+d2^{\frac{n}{d+1}}+\max_{k\in\{2,\ldots,d\}}\Big\{\frac{d2^{n/k}}{(\Pi_{i=k}^d n_i)^{1/k}}\Big\}+\frac{2^n}{n+m}\Big).\]
\end{proof}
\paragraph{Remark.}
Under path and $d$-dimensional grid constraints, we prove in a later section (Lemma \ref{lem:lower_bound_grid_k_Lambda}) that the depth lower bound for $\Lambda_n$ is $\Omega\left(\max\limits_{j\in [d]}\left\{n,2^{\frac{n}{d+1}},\frac{2^{n/j}}{(\prod_{i=j}^d n_i)^{1/j}}\right\}\right)$, using $m$ ancillary qubits.
If $d$ is a constant, the depth upper and lower bounds match. If the number of ancillary qubits $m\le O\Big(\frac{2^n}{n^2+d2^{\frac{n}{d+1}}+\max\limits_{j\in\{2,\ldots,d\}}\frac{d2^{n/j}}{(\prod_{i=j}^d n_i)^{1/j}}}\Big)$, the depth upper bound is $O\left(\frac{2^n}{n+m}\right)$, which matches the corresponding lower bound $\Omega(\frac{2^n}{n+m})$, and both upper and lower bounds equal those under no graph constraints (\cite{sun2021asymptotically}). For example, if $m\le O(2^{n/2})$, the depth upper bound is $O\left(\frac{2^n}{n+m}\right)$ under path constraint.
It is somewhat surprising that the path and grid constraints do not asymptotically increase the circuit depth of diagonal unitary matrix $\Lambda_n$ if the size of grid (or the number of ancillary qubits) is not too large.
Moreover, our circuit depth is optimal if $d=O(1)$.
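For concrete grid shapes, the upper bound of Lemma~\ref{lem:diag_grid_ancillary} can be evaluated numerically to see which term dominates. The sketch below is our own illustration and sets all hidden constants to $1$.
\begin{verbatim}
import math

def grid_depth_bound(n, dims, m):
    d = len(dims)                       # dims = [n_1 >= ... >= n_d]
    balance = max(
        d * 2 ** (n / k) / math.prod(dims[k - 1:]) ** (1 / k)
        for k in range(2, d + 1)
    ) if d >= 2 else 0.0
    return n ** 2 + d * 2 ** (n / (d + 1)) + balance + 2 ** n / (n + m)

dims = [2 ** 10, 2 ** 6, 2 ** 4]
print(grid_depth_bound(n=40, dims=dims, m=math.prod(dims) - 40))
\end{verbatim}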
\subsection{Circuit implementation under $\Tree_{n+m}(2)$ constraints}
\label{sec:diag_with_ancilla_binarytree}
Without loss of generality, we assume that $m\le O(2^n)$; if $m\ge \omega(2^n)$, we only use $O(2^n)$ ancillary qubits. Our choice of Gray codes is given by setting $\ell_k = 1$, for all $k\in[2^p]$.
\paragraph{Choice of registers.} The labeling of qubits in a binary tree is shown in Fig.~\ref{fig:label_binarytree} in \S \ref{sec:diag_without_ancilla_binarytree}. Recall that $\Tree_z^k:=\{zy:y\in\mbox{$\{0,1\}$}^k\}$ denotes a binary tree where the root node is labelled as $z$ and the depth is $k$ (with $k+1$ layers of nodes/qubits). The allocation of qubits to registers is shown schematically in Fig.~\ref{fig:register_ancillar_binary}, where the parameters $d$, $\kappa$ and $b$ are taken to be
\begin{equation*}
d=\left\lceil\log \left(n+m+1\right)\right\rceil-1, \qquad \kappa=\left\lceil\log \left(n+1\right)\right\rceil-1, \qquad b=\left\lceil\log (2\log n)\right\rceil.
\end{equation*}
\begin{enumerate}
\item The $n+m$ qubits/nodes are in a depth-$d$ complete-binary tree.
\item The input register is located in the top sub-tree of $\kappa+1$ layers of nodes (the green part), namely
${\sf R}_{\rm inp}\defeq \Tree_\epsilon^{\kappa}.$
The input corresponds to the first $n$ qubits of ${\sf R}_{\rm inp}$.
\item The remaining $(d+1)-(\kappa+1)=d-\kappa$ layers of nodes are divided into $\lfloor\frac{d-\kappa}{\kappa+b+1}\rfloor$ layers of subtrees, each of $\kappa+b+1$ layers of nodes. The roots of these subtrees collectively form the root register ${\sf R}_{\rm roots}$ (the black vertices in Fig.~\ref{fig:register_ancillar_binary}). The number of these subtrees, i.e. the size of ${\sf R}_{\rm roots}$, is
\[|{\sf R}_{\rm roots}|=\sum_{j=1}^{\lfloor\frac{d-\kappa}{\kappa+b+1}\rfloor}2^{d-j(\kappa+b+1)+1} = \Theta\left(\frac{m}{n\log(n)}\right).\]
\item For each $z\in {\sf R}_{\rm roots}$, the first $\kappa+1$ layers (including the root $z$ itself) in subtree $\Tree_z^{\kappa+b}$ are assigned to the copy register ${\sf R}_{\rm copy}$ (blue part in Fig.~\ref{fig:register_ancillar_binary}), the next one layer is assigned to the target register ${\sf R}_{\rm targ}$ (red part in Fig. \ref{fig:register_ancillar_binary}) and the last $b-1$ layers are assigned to the auxiliary register ${\sf R}_{\rm aux}$ (white part in Fig. \ref{fig:register_ancillar_binary}). The sizes of these three parts in each subtree are
\begin{equation}\label{eq:size_threepart}
\sum_{i=0}^\kappa 2^i = \left(2^{\kappa+1}-1\right) = \Theta(n),\quad 2^{\kappa+1} = \Theta(n),\quad \text{and}\quad \sum_{i=\kappa+2}^{\kappa+b} 2^i = 2^{\kappa+1}(2^{b}-2) = \Theta(n\log(n)),
\end{equation}
respectively. Putting all subtrees together, we multiply these sizes by $|{\sf R}_{\rm roots}|$, and get the sizes of the registers ${\sf R}_{\rm copy}$, ${\sf R}_{\rm targ}$, and ${\sf R}_{\rm aux}$
\begin{align*}
\lambda_{copy} = \Theta\left(\frac{m}{\log (n)}\right),\quad \lambda_{targ} = \Theta\left(\frac{m}{\log (n)}\right), \quad
\lambda_{aux} = \Theta\left(m\right),
\end{align*}
respectively.
\end{enumerate}
\begin{figure}[]
\centering
\begin{tikzpicture}
\filldraw[fill=green!20] (0,0)--(0.3,-0.75)--(-0.3,-0.75)--cycle;
\draw[dotted] (-3.6,0)--(0,0) (-2,-0.75)--(0,-0.75) (-3.6,-3.5)--(0,-3.5) (-2,-1)--(-0.5,-1) (-2,-2)--(-0.5,-2) (-2,-2.5)--(0,-2.5);
\draw (-4.3, -1.9)node{\scriptsize $d+1$ layers} (-2.5,-0.4) node{\scriptsize $\kappa+1$ layers} (-2.5,-1.5) node{\scriptsize $\kappa+b+1$ layers} (-2.5,-3) node{\scriptsize $\kappa+b+1$ layers} (-1.5,-2.1) node{\scriptsize $\vdots$};
\draw[->] (-3.5, -1.75)--(-3.5, 0);
\draw [->] (-3.5, -1.75)--(-3.5, -3.5);
\draw [->](-1.5,-0.375)--(-1.5,0);
\draw [->] (-1.5,-0.375)--(-1.5,-0.75);
\draw [->](-1.5,-1.375)--(-1.5,-1);
\draw [->] (-1.5,-1.375)--(-1.5,-2);
\draw [->](-1.5,-2.875)--(-1.5,-2.5);
\draw [->] (-1.5,-2.875)--(-1.5,-3.5);
\filldraw[fill=red!20] (-0.7,-1.5)--(-0.3,-1.5)--(-0.2,-1.75)--(-0.8,-1.75) -- cycle (0.7,-1.5)--(0.3,-1.5)--(0.2,-1.75)--(0.8,-1.75) -- cycle;
\filldraw[fill=blue!20] (-0.5,-1)--(-0.75,-1.625)--(-0.25,-1.625)--cycle (0.5,-1)--(0.25,-1.625)--(0.75,-1.625)--cycle;
\draw (-0.5,-1)--(-0.9,-2)--(-0.1,-2)--cycle (0.5,-1)--(0.1,-2)--(0.9,-2)--cycle;
\filldraw[fill=red!20] (-1.2,-3)--(-0.8,-3)--(-0.7,-3.25)--(-1.3,-3.25)--cycle (1.2,-3)--(0.8,-3)--(0.7,-3.25)--(1.3,-3.25)--cycle (-0.2,-3)--(0.2,-3) --(0.3,-3.25)--(-0.3,-3.25) --cycle;
\filldraw[fill=blue!20] (-1,-2.5)--(-1.25,-3.125)--(-0.75,-3.125)--cycle (1,-2.5)--(1.25,-3.125)--(0.75,-3.125)--cycle (0,-2.5)-- (-0.25,-3.125)--(0.25,-3.125)--cycle;
\draw (-1,-2.5)--(-1.4,-3.5)--(-0.6,-3.5)--cycle (1,-2.5)--(1.4,-3.5)--(0.6,-3.5)--cycle (0,-2.5)-- (-0.4,-3.5)--(0.4,-3.5)--cycle;
\draw (0,-2.3) node{\scriptsize $\cdots$} (0,-1.5) node{\scriptsize $\cdots$} (-0.5,-3) node{\scriptsize $\cdots$} (0.5,-3) node{\scriptsize $\cdots$};
\draw [->] (1,-2.75)--(1.9,-2.75);
\draw [->] (1,-3.17)--(1.9,-3.17);
\draw [->] (1,-3.45)--(1.9,-3.45);
\draw[fill=green!20] (1.5,0)--(1.5,-0.3)--(1.8,-0.3)--(1.8,-0)--cycle;
\draw[fill=blue!20] (1.5,-0.5)--(1.5,-0.8)--(1.8,-0.8)--(1.8,-0.5)--cycle;
\draw[fill=red!20] (1.5,-1)--(1.5,-1.3)--(1.8,-1.3)--(1.8,-1)--cycle;
\draw (1.5,-1.5)--(1.5,-1.8)--(1.8,-1.8)--(1.8,-1.5)--cycle;
\draw (4,-2.75) node{\scriptsize $2^{\kappa+1}-1=O(n)$ ~qubits, $\kappa+1$ layers} (3.5,-3.14) node{\scriptsize $2^{\kappa+1}=O(n)$ ~qubits, $1$ layer} (4.6,-3.5)node{\scriptsize $2^{\kappa+1}(2^b-2)=O(n\log(n))$ ~qubits, $b-1$ layers} (2.7,-0.15) node{\scriptsize input register} (2.7,-0.65) node{\scriptsize copy register} (2.75,-1.15) node{\scriptsize target register}(2.95,-1.65) node{\scriptsize auxiliary register};
\draw (1.3,0.2)--(4,0.2)--(4,-2)--(1.3,-2)--cycle;
\draw [fill=black,draw=black] (-0.5,-1.13) circle (0.03) (0.5,-1.13) circle (0.03) (-1,-2.63) circle (0.03) (0,-2.63) circle (0.03) (1,-2.63) circle (0.03);
\end{tikzpicture}
\caption{Input, copy, target and auxiliary registers in a binary tree with $d+1$ layers of qubits. The $n$ input qubits are assigned to a sub-tree with $\kappa+1$ layers of qubits at the top of the tree (green). The $m$ ancillary qubits are divided into $O\left(\frac{m}{n\log(n)}\right)$ binary sub-trees with $\kappa+b+1$ layers of qubits, each further divided into three parts: (i) the first $\kappa+1$ layers are the copy register (blue), and have size $O(n)$, (ii) a single layer of target register (red) qubits, of size $O(n)$, and (iii) $b-1$ layers of auxiliary register (white), of size $O(n\log(n))$. The values of $d$, $\kappa$ and $b$ are given in the main text. Note that every target register qubit has $\tau:=2^b-2$ auxiliary register descendants. ${\sf R}_{\rm roots}$ consists of the root nodes of all blue sub-trees (black vertices).}
\label{fig:register_ancillar_binary}
\end{figure}
The formal definitions of these registers are given as follows, using the label notation of \S \ref{sec:diag_without_ancilla_binarytree}.
\begin{align}
\label{eq:subtree-root-nodes}
& {\sf R}_{\rm roots}\defeq\bigcup_{j=1}^{\left\lfloor\frac{d-\kappa}{\kappa+b+1}\right\rfloor}\mbox{$\{0,1\}$}^{d-j(\kappa+b+1)+1},\\
&{\sf R}_{\rm copy}\defeq \bigcup_{\scriptsize z\in {\sf R}_{\rm roots}}\Tree_z^\kappa,\nonumber\\
& {\sf R}_{\rm targ}\defeq \bigcup_{\scriptsize z\in {\sf R}_{\rm roots}}\left\{zy: y\in \mbox{$\{0,1\}$}^{\kappa+1}\right\}, \nonumber\\
&{\sf R}_{\rm aux}\defeq \bigcup_{\scriptsize z\in {\sf R}_{\rm roots}}\Big\{zy: y\in \bigcup_{j=\kappa+2}^{\kappa+b}\mbox{$\{0,1\}$}^{j}\Big\}= \bigcup_{z\in {\sf R}_{\rm targ}}(\Tree_z^{b-1}-\{z\}).\nonumber
\end{align}
We take
\begin{align*}
p&=\log(\lambda_{targ}) = \log m - \log \log n \pm O(1), \qquad\tau=2^b-2 = \Theta(\log(n)),
\end{align*}
in specifying $x_{pre} = x_1\ldots x_{n-p}$, $x_{suf}=x_{n-p+1}\ldots x_n$, and $x_{aux}=x_1\ldots x_\tau$.
\paragraph{Implementation of Suffix Copy and Prefix Copy Stages.}
\begin{lemma}\label{lem:copy_binary_tree}
A unitary operation realizing the following transformation
\begin{equation}\label{eq:copy_binary_tree}
\ket{x'}\ket{0^{n't}}\xrightarrow{U_{copy}^{binarytree}}\ket{x'}\underbrace{\ket{x'\cdots x'}}_{t\text{~copies~of~} x'},\forall x'\in\mbox{$\{0,1\}^n$}, {\rm where }~ n'\le n, t=|{\sf R}_{\rm roots}|
\end{equation}
can be implemented by a circuit of depth $O(n'\kappa^2+n'\log(m)\kappa)$ under $\Tree_{n+m}(2)$ constraint, where input $x'$ is in ${\sf R}_{\rm inp}$ and every copy of $x'$ is in a subtree in Fig. \ref{fig:register_ancillar_binary}.
\end{lemma}
\begin{proof}
Let $\kappa':=\kappa+b+1$; note that $\kappa < \kappa' < 2\kappa$. We first implement the following unitary transformation
\begin{equation}\label{eq:copy_tree_2^{k+1}}
\ket{x'0^{2^{\kappa+1}-n'-1}}_{\Tree_\epsilon^\kappa}\bigotimes_{\scriptsize
z\in\mbox{$\{0,1\}$}^{\kappa+1}}\ket{0^{2^{\kappa'+1}-1}}_{\Tree_z^{\kappa'}}\to\ket{x'0^{2^{\kappa +1}-n'-1}}_{\Tree_\epsilon^\kappa}\bigotimes_{\scriptsize
z\in\mbox{$\{0,1\}$}^{\kappa+1}}\ket{x'0^{2^{\kappa'+1}-n'-1}}_{\Tree_z^{\kappa'}},\forall x'\in\mbox{$\{0,1\}$}^{n'},
\end{equation}
which makes $2^{\kappa+1}$ copies of $x'$ in all $2^{\kappa+1}$ subtrees directly below the input register. It can be implemented in $\kappa+2$ steps.
\begin{enumerate}
\item Step 0: Make 1 copy of $x'$ from the top subtree $\Tree_{\epsilon }^\kappa$ to the first (i.e. leftmost) subtree under it (i.e. $\Tree_{0^{\kappa+1}}^{\kappa'}$), by applying $n'$ CNOT gates whose control and target qubits are $O(\kappa)$ apart. By Lemma~\ref{lem:cnot_path_constraint}, this step can be implemented in depth $O(n')\cdot O(\kappa)=O(n'\kappa)$.
\item Step 1: Make 1 copy of $x'$ from $\Tree_{\epsilon }^\kappa$ to $\Tree_{10^{\kappa}}^\kappa$. This step can similarly be implemented in depth $O(n'\kappa)$.
\item Step $j$ for $j = 2, 3, \cdots, \kappa+1$: For all $z\in\mbox{$\{0,1\}$}^{j-1}$, make 1 copy of $x'$ from $\Tree_{z00^{\kappa-j+1}}^{\kappa}$ to $\Tree_{z10^{\kappa-j+1}}^{\kappa}$. Such a copy can be realized in depth $O(n'\kappa)$ by Lemma \ref{lem:cnot_path_constraint}. Note that for different $z$, the copying processes are on disjoint connected components of the binary tree, thus these $2^{j-1}$ copies can be implemented in parallel. Therefore for each $j$, this step can be implemented in depth $O(n'\kappa)$.
\end{enumerate}
The total circuit depth required to implement Eq. \eqref{eq:copy_tree_2^{k+1}} is $O(n'\kappa) + \kappa\cdot O(n'\kappa) = O(n'\kappa^2)$.
Eq.~\eqref{eq:copy_binary_tree} can be implemented by using Eq.~\eqref{eq:copy_tree_2^{k+1}} repeatedly.
For every newly copied $x'$ in a $\kappa'$-depth binary sub-tree, we repeat the construction to make further copies of $x'$ in its adjacent binary sub-trees, which requires depth $O(n'(\kappa')^2)$, and so on.
We repeat this process $s:=\lfloor\frac{d-\kappa}{\kappa'+1}\rfloor=O(\log(m)/\kappa)$ times and make $t$ copies of $x'$. The total circuit depth is $O(n'\kappa^2)+(s-1)O(n'(\kappa')^2)=O(n'\kappa^2+n'\log(m)\kappa)$.
\end{proof}
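The doubling schedule in steps $0$ through $\kappa+1$ can be traced classically; the Python sketch below (our illustration, with subtree labels written as the binary strings of Fig.~\ref{fig:label_binarytree}) confirms that after step $\kappa+1$ all $2^{\kappa+1}$ subtrees below the input register hold a copy.
\begin{verbatim}
from itertools import product

kappa = 2
have = {"0" * (kappa + 1), "1" + "0" * kappa}   # after steps 0 and 1
for j in range(2, kappa + 2):                   # steps 2 .. kappa+1
    for bits in product("01", repeat=j - 1):
        z = "".join(bits)
        src = z + "0" * (kappa - j + 2)         # subtree z 0 0...0
        dst = z + "1" + "0" * (kappa - j + 1)   # its sibling z 1 0...0
        if src in have:
            have.add(dst)
print(len(have) == 2 ** (kappa + 1))            # True: all 8 subtrees
\end{verbatim}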
\begin{lemma}[]\label{lem:sufcopy_tree}
$U_{SufCopy}$ and $U_{PreCopy}''$ can each be implemented by a circuit of depth $O(n\log(n)\log(m))$ under $\Tree_{n+m}(2)$ constraint.
\end{lemma}
\begin{proof}
$U_{SufCopy}$ makes $|{\sf R}_{\rm roots}|$ copies of $x_{suf}$, and can be represented as
\begin{equation*}
\ket{x0^{2^{\kappa+1}-n-1}}_{\Tree_\epsilon^\kappa}\bigotimes_{\scriptsize z\in {\sf R}_{\rm roots}}\ket{0^{2^{\kappa+1}-1}}_{\Tree_z^{\kappa}}
\to \ket{x0^{2^{\kappa+1}-n-1}}_{\Tree_\epsilon^\kappa}\bigotimes_{\scriptsize z\in {\sf R}_{\rm roots}}\ket{x_{suf}0^{2^{\kappa+1}-p-1}}_{\Tree_z^{\kappa}},\forall x\in\mbox{$\{0,1\}^n$}.
\end{equation*}
By Lemma~\ref{lem:copy_binary_tree} (with $n' =p$ and $\kappa = O(\log(n))$), it can be implemented by a circuit of depth
\[O\Big(p\log^2(n)+p\log(m)\log(n)\Big) = O(\log^2(m)\log(n)) = O(\log(m) n \log(n)),\]
where we used $p \le \log(m)$ and the assumption $m=O(2^n)$. The argument for $U_{PreCopy}''$ is similar though now the parameter $n'$ in Lemma~\ref{lem:copy_binary_tree} is $n-p$, and thus the depth upper bound is $O(n \log(m) \log(n))$.
\end{proof}
\begin{lemma}[]\label{lem:precopy3_tree}
$U_{PreCopy}'''$ (Eq.\eqref{eq:precopy3_graph}) can be implemented by a quantum circuit of depth $O(n\log^2(n))$ under $\Tree_{n+m}(2)$ constraint.
\end{lemma}
\begin{proof}
For each $z\in {\sf R}_{\rm roots}$ (Eq.~\eqref{eq:subtree-root-nodes}), $U_{PreCopy}'''$ makes $2^{\kappa+1}$ copies of $x_{aux}=x_1\ldots x_\tau$ (with $\tau=2^b-2$) from the ${\sf R}_{\rm copy}$ part to the ${\sf R}_{\rm aux}$ part of $\Tree_z^{\kappa+b}$ (i.e., from the blue to the white portion of each sub-tree in Fig.~\ref{fig:register_ancillar_binary}). As the distance between any two qubits in $\Tree_z^{\kappa+b}$ is $O(\log(n))$, by Lemma~\ref{lem:cnot_path_constraint}, the $2^{\kappa+1}$ copies can be implemented in depth $O(2^{b}-2)\cdot O(\log(n))\cdot 2^{\kappa+1}=O(n\log^2 (n))$. Since all the binary sub-trees are disjoint, they can be handled in parallel, and thus $U_{PreCopy}'''$ has circuit depth $O(n\log^2(n))$.
\end{proof}
\begin{lemma}[]\label{lem:precopy_tree}
$U_{PreCopy}$ can be implemented by a quantum circuit of depth $O(n\log(m)\log(n))$ under $\Tree_{n+m}(2)$ constraint.
\end{lemma}
\begin{proof}
Follows from Lemmas~\ref{lem:precopy_graph}, \ref{lem:sufcopy_tree} and \ref{lem:precopy3_tree}.
\end{proof}
\paragraph{Implementation of Gray Initial Stage.}
\begin{lemma}[]\label{lem:grayinitial_tree}
$U_{GrayInit}$ can be implemented by a CNOT circuit of depth $O(n^2)$ under $\Tree_{n+m}(2)$ constraint.
\end{lemma}
\begin{proof}
Recall $U_{GrayInit}$ defined in Eq.~\eqref{eq:gray_initial_graph}, which can be represented as follows. For all $z_{q}\in {\sf R}_{\rm roots}$,
\begin{equation}\label{eq:grayinitial_tree_version2}
\begin{array}{ll}
& \ket{x_{suf}0^{2^{\kappa+1}-p-1}}_{\Tree_{z_{q}}^{\kappa}}\ket{0^{2^{\kappa+1}}}_{\scriptsize \left\{z_{q}y:y\in \mbox{$\{0,1\}$}^{\kappa+1}\right\}} \\
\to & \ket{x_{suf}0^{2^{\kappa+1}-p-1}}_{\Tree_{z_{q}}^{\kappa}}\ket{f_{1,1+(q-1)2^{\kappa+1}} f_{1,2+(q-1)2^{\kappa+1}} \cdots f_{1,q2^{\kappa+1}}}_{\scriptsize \left\{z_{q}y:y\in \mbox{$\{0,1\}$}^{\kappa+1}\right\}},
\end{array}
\end{equation}
where $z_q$ is the $q$-th element in $ {\sf R}_{\rm roots}$.
Eq. \eqref{eq:grayinitial_tree_version2} acts on qubits in
{$\Tree_{z_q}^{\kappa+1}$ of size $O(n)$} and, by Lemma~\ref{lem:cnot_circuit}, can be realized by a CNOT circuit of depth $O(n^2)$ under binary tree constraint.
All trees $\Tree_{z_q}^{\kappa+1}$ are disjoint and $U_{GrayInit}$ can therefore be implemented in parallel in depth $O(n^2)$.
\end{proof}
\paragraph{Implementation of Gray Cycle Stage.}
\begin{lemma}[]\label{lem:graycycle_tree}
$U_{GrayCycle}$ (Eq. \eqref{eq:gray_cycle_graph}) can be implemented by a quantum circuit of depth $O(2^{n-p})$ under $\Tree_{n+m}(2)$ constraint.
\end{lemma}
\begin{proof}
First, we construct circuits for $U_{Gen}^{(j)}$ for all $j\in[2^{n-p}]$ in Eqs. \eqref{eq:Ugenj_graph} and \eqref{eq:Ugen2n-p_graph}.
Let $z_q$ be the $q$-th element in $ {\sf R}_{\rm roots}$. For every $i\in [2^{\kappa+1}]$, let $y_i$ denote the $i$-th element in $\mbox{$\{0,1\}$}^{\kappa+1}$ in lexicographical order.
$U_{Gen}^{(r)}$ (Eq.\eqref{eq:Ugenj_graph}) can be represented as acting on qubits in $\Tree_{z_q}^{\kappa+b} = \Tree_{z_q}^\kappa \cup (\bigcup_{y_i\in\{0,1\}^{\kappa+1}} \Tree_{z_qy_i}^{b-1})$ in the following way:
\begin{align}
&\ket{x_{pre}0^{2^{\kappa+1}-(n-p)-1}}_{\Tree_{z_q}^{\kappa}}\bigotimes_{\scriptsize y_i\in \mbox{$\{0,1\}$}^{\kappa+1}}\ket{f_{r,i+(q-1)2^{\kappa+1}}x_{aux}}_{\Tree_{z_q y_i}^{b-1}} \nonumber\\
\to &\ket{x_{pre}0^{2^{\kappa+1}-(n-p)-1}}_{\Tree_{z_q}^{\kappa}}\bigotimes_{\scriptsize y_i\in \mbox{$\{0,1\}$}^{\kappa+1}}\ket{f_{r+1,i+(q-1)2^{\kappa+1}} x_{aux}}_{\Tree_{z_q y_i}^{b-1}},\quad \forall z_q \in {\sf R}_{\rm roots}. \label{eq:Ugen_small}
\end{align}
For all $y_i\in\mbox{$\{0,1\}$}^{\kappa+1}$, Eq.~\eqref{eq:Ugen_small} transforms $\ket{f_{r,i+(q-1)2^{\kappa+1}}}_{z_{q}y_i}$ to $\ket{f_{r+1,i+(q-1)2^{\kappa+1}}}_{z_{q}y_i}=\ket{f_{r,i+(q-1)2^{\kappa+1}}\oplus x_{h_{1,r+1}}}_{z_{q}y_i}$, and can be implemented by a CNOT gate with target qubit $z_{q}y_i$, and control qubit in state $\ket{x_{h_{1,r+1}}}$. We consider two cases:
\begin{enumerate}
\item Case 1: If $h_{1,r+1}\le 2^b-2$, we use the control qubit $\ket{x_{h_{1,r+1}}}$ in ${\sf R}_{\rm aux}$ in $\Tree_{z_q y_i}^{b-1}-\{z_q y_i\}$, the subtree under the current target qubit $z_{q}y_i$. The distance between control and target qubits is $O(\log(h_{1,r+1}))$ in the binary tree $\Tree_{z_{q}y_i}^{b-1}$. By Lemma \ref{lem:cnot_path_constraint}, the CNOT gate can be implemented in depth $O(\log(h_{1,r+1}))$ along an $O(\log(h_{1,r+1}))$-long path in the binary tree. For all $y_{i}\in\mbox{$\{0,1\}$}^{\kappa+1}$, the trees $\Tree_{z_{q}y_i}^{b-1}$ are disjoint. Eq.~\eqref{eq:Ugen_small} can thus be implemented in depth $O(\log(h_{1,r+1}))$.
\item Case 2: If $h_{1,r+1}> 2^b-2$, we use the control qubit $\ket{x_{h_{1,r+1}}}$ in ${\sf R}_{\rm copy}$ in $\Tree_{z_{q}}^{\kappa+1}$. Then Eq. \eqref{eq:Ugen_small} can be implemented by a CNOT circuit acting on qubits purely within $\Tree_{z_{q}}^{\kappa+1}$. By Lemma~\ref{lem:cnot_circuit}, Eq. \eqref{eq:Ugen_small} can be implemented in depth $O(n^2)$.
\end{enumerate}
For all $z_q\in {\sf R}_{\rm roots}$, $\Tree_{z_{q}y_i}^{b-1}$ are disjoint. Therefore, the circuit depth of Eq. \eqref{eq:Ugenj_graph} is $O(\log(h_{1,r+1}))$ if $h_{1,r+1}\le 2^b-2$, and $O(n^2)$ if $h_{1,r+1}>2^b-2$ under $\Tree_{n+m}(2)$ constraint.
Similar to the circuit of Eq. \eqref{eq:Ugenj_graph}, $U_{Gen}^{(2^{n-p})}$ (Eq. \eqref{eq:Ugen2n-p_graph}) can be implemented by a CNOT circuit of depth $O(n^2)$ according to Lemma \ref{lem:cnot_circuit}.
We now bound the circuit depth required to implement $U_{GrayCycle}$. From Lemma~\ref{lem:GrayCode}, there are $2^{n-p-i}$ values of $r$ in $[2^{n-p}-1]$ such that $h_{1,r+1}=i$.
Recall that $b=\lceil\log(2\log(n))\rceil$. By Lemma~\ref{lem:graycycle_graph}, the depth of the Gray cycle stage is
\[\sum_{j=1}^{2^{n-p}}\mathcal{D}(U_{Gen}^{(j)})+2^{n-p}=\sum_{i=1}^{\tau}O(\log (i))2^{n-p-i}+\sum_{i=\tau+1}^{n-p}O(n^2)2^{n-p-i}+2^{n-p}=O(2^{n-p}),\]
where $\tau=2^b-2\ge 2\log(n)-2$.
\end{proof}
\paragraph{Implementation of Inverse Stage.}
\begin{lemma} []\label{lem:inverse_tree}
$U_{Inverse}$ (Eq.\eqref{eq:inverse_graph})
can be implemented by a CNOT circuit of depth $O(\log(m)n\log (n))$ under $\Tree_{n+m}(2)$ constraint.
\end{lemma}
\begin{proof}
Follows from Lemmas \ref{lem:inverse_graph}, \ref{lem:grayinitial_path}, \ref{lem:sufcopy_tree} and \ref{lem:precopy3_tree}.
\end{proof}
\paragraph{Implementation of $\Lambda_n$.}
\begin{lemma}\label{lem:diag_binarytree_withancilla}
Any $n$-qubit diagonal unitary matrix $\Lambda_n$ can be implemented by a quantum circuit of depth \[O\left(n^2\log n+\frac{\log(n)2^n}{m}\right)\]
under $\Tree_{n+m}(2)$ constraint, using $m\ge 3n$ ancillary qubits.
\end{lemma}
\begin{proof}
We assume that $m\le O(2^n)$. If $m=\omega(2^n)$, we only use $O(2^n)$ ancillary qubits.
By Lemmas \ref{lem:sufcopy_tree}, \ref{lem:grayinitial_tree}, \ref{lem:precopy_tree}, \ref{lem:graycycle_tree} and \ref{lem:inverse_tree}, the total depth is
\[3\cdot O(n^2\log n)+O(n^2)+O(2^{n-p})=O\left(n^2\log (n)+\frac{\log(n)2^n}{m}\right),\]
where we used $p = \log m - \log \log n \pm O(1)$ and $m=O(2^n)$.
\end{proof}
\subsection{Circuit implementation under $\Expander_{n+m}$ constraints}
\label{sec:diag_with_ancilla_expander}
In this section, we implement $\Lambda_{n}$ under $\Expander_{n+m}$ constraint. For this case, we use a circuit framework different from that shown in Fig.~\ref{fig:diag_with_ancilla_framwork}:
\begin{enumerate}
\item Here, the ancillary qubits are divided into only two registers, ${\sf R}_{\rm copy}$ and ${\sf R}_{\rm targ}$, and there is no auxiliary register ${\sf R}_{\rm aux}$.
\item In Fig.~\ref{fig:diag_with_ancilla_framwork}, the suffix copy and prefix
copy stages make copies of $x_{suf}$ and $x_{pre}$, in order to reduce the depth of the Gray initial and Gray cycle stages which follow them, respectively. Here, the suffix copy and prefix copy stages are not used, and the circuit consists only of the other three stages, i.e. the Gray initial, Gray cycle and inverse stages.
The precise definitions of these three stages are given in Eqs. \eqref{eq:grayinitial_expander}, \eqref{eq:graycycle_expander} and \eqref{eq:inverse_expander}, respectively, from which it is easily verified that the diagonal unitary $\Lambda_n$ is realized.
\end{enumerate}
\paragraph{Choice of registers.}
Consider an expander graph $G$ with vertex expansion $h_{out}(G) = c$ for some constant $c>0$.
Let $c' = \frac{c}{c+2}$.
Let $\ell=\Big\lfloor\frac{\log(m)-1-\log(\lceil 1/c'\rceil+1)}{\log(1+c')}\Big\rfloor+2$ and define a sequence of sets $S_1, S_2, \ldots S_\ell$ as in \S~\ref{sec:diag_without_ancilla_expander} (Eqs.~\eqref{eq:s1} and~\eqref{eq:siplus1}), i.e.
\begin{enumerate}
\item Choose an arbitrary set $S_1$ of size $\lceil 1/c'\rceil+1$, where $c'=\frac{c}{c+2}>0$ is the constant fixed above;
\item For every $2\le i\le \ell$, $S_i=S_{i-1}\cup \Gamma(S_{i-1})$, where $\Gamma(S_{i-1})\subset V-{S_{i-1}}$ consists of $\lfloor c'|S_{i-1}|\rfloor$ vertices. The size of a maximum matching $M_{S_{i-1}}$ between $S_{i-1}$ and $\Gamma(S_{i-1})$ is $\lfloor c'|S_{i-1}|\rfloor$.
\end{enumerate}
By construction, $|S_{\ell-1}|\le m/2$, $|S_{\ell-1}|=\Theta(m)$, $|\Gamma(S_{\ell-1})|=\Theta(m)$. We take
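The growth of the sets $S_i$ is multiplicative, which is where the bound $\ell = O(\log m)$ comes from. The following bookkeeping sketch is our own (with \texttt{c\_prime} standing for $c' = \frac{c}{c+2}$) and simply counts rounds until the size would exceed $m/2$.
\begin{verbatim}
import math

def rounds_until_half(m, c_prime):
    """Rounds of S -> S u Gamma(S), stopping before |S| exceeds m/2."""
    size, rounds = math.ceil(1 / c_prime) + 1, 1
    while size + math.floor(c_prime * size) <= m / 2:
        size += math.floor(c_prime * size)
        rounds += 1
    return rounds, size

print(rounds_until_half(m=10 ** 6, c_prime=0.2))  # ~ log(m)/log(1+c')
\end{verbatim}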
\begin{itemize}
\item ${\sf R}_{\rm copy}:= S_{\ell-1}$;
\item ${\sf R}_{\rm targ}:= \Gamma(S_{\ell-1})$;
\item ${\sf R}_{\rm inp} \subseteq V-({\sf R}_{\rm copy}\cup {\sf R}_{\rm targ})=V-S_{\ell}$.
\end{itemize}
The copy and target registers have sizes $\lambda_{copy}=\Theta(m)$ and $\lambda_{targ}=\Theta(m)$, respectively, while ${\sf R}_{\rm inp}$ consists of $n$ qubits in $V-S_{\ell}$. We define $p=\log(\lambda_{targ})$.
Our choice of Gray codes is given by setting $\ell_k = 1$, for all $k\in[2^p]$.
\paragraph{Remark.} Note that, once $S_1, \ldots, S_\ell$ have been constructed, it may not be the case that the $n$ input qubits (which have been loaded with non-zero inputs $\ket{x}$) lie entirely within $V-S_\ell$. However, by using at most $n$ SWAP operations (which may act across some distance under the $G$ constraint), we can permute the input qubits so that they do lie within $V-S_\ell$, and we can then take the locations of those qubits to define ${\sf R}_{\rm inp}$. By Lemma~\ref{lem:distance} the distance between the two qubits in any of these SWAP gates is $O(\log(n+m))$, and each SWAP can be implemented by three CNOT gates. By Lemma~\ref{lem:cnot_path_constraint}, permuting all input qubits into $V-S_\ell$ can be implemented in circuit depth $n\cdot O(\log(n+m))=O(n\log(n+m))$. We shall see that this does not impact the final circuit depth complexity required to implement $\Lambda_n$.
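To make the routing in this remark concrete, the following Python sketch (an illustrative helper, not part of the formal construction) emits the nearest-neighbor gate sequence moving one input qubit along a path of $G$, with each SWAP expanded into three CNOT gates as described above.
\begin{verbatim}
def route_qubit(path):
    # Adjacent SWAPs moving a qubit from path[0] to path[-1];
    # consecutive entries of `path` are assumed to be edges of G.
    return [(path[i], path[i + 1]) for i in range(len(path) - 1)]

def swap_to_cnots(u, v):
    # One SWAP on edge (u, v) expands into three CNOT gates.
    return [("CNOT", u, v), ("CNOT", v, u), ("CNOT", u, v)]

# Example: move a qubit along a path with 3 edges -> 9 CNOT gates.
gates = [g for (u, v) in route_qubit([0, 1, 2, 3])
           for g in swap_to_cnots(u, v)]
\end{verbatim}
Since each such path has length $O(\log(n+m))$, this matches the stated depth bound per input qubit.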
\paragraph{Implementation of $\Lambda_n$.}
We assume $m\ge \Omega(n)$. If $m=o(n)$, the circuit depth in this section exceeds the depth in Lemma \ref{lem:diag_expander_withoutancilla}, which uses no ancillary qubits.
\begin{lemma}\label{lem:diag_expander_ancilla}
Any $n$-qubit diagonal unitary matrix $\Lambda_n$ can be realized by a quantum circuit of depth $O\Big(n^2+\frac{\log(m)2^n}{m}\Big)$ under $\Expander_{n+m}$ constraint, using $m\ge \Omega(n)$ ancillary qubits.
\end{lemma}
\begin{proof}
We assume that the number of ancillary qubits $m\le O(2^n)$. Let ${\sf R}_{{\rm inp},k}$ denote the $k$-th qubit of the input register, $\ket{x}_{{\sf R}_{\rm inp}}=\bigotimes_{k=1}^n\ket{x_k}_{{\sf R}_{{\rm inp},k}}$, and define $s(j,k)$, $f_{j,k}$ and $\ket{f_j}$ as in Eq.~\eqref{eq:s,f}.
Let $U^k_{copy}$ be a transformation which makes copies of $\ket{x_k}_{{\sf R}_{{\rm inp},k}}$ in ${\sf R}_{\rm copy}=S_{\ell-1}$, i.e.,
\begin{equation}\label{eq:copy_expander_graph}
\ket{x_k}_{{\sf R}_{{\rm inp},k}}\ket{0^{|S_{\ell-1}|}}_{S_{\ell-1}}\xrightarrow{ U_{copy}^k } \ket{x_k}_{{\sf R}_{{\rm inp},k}}\underbrace{\ket{x_k\cdots x_k}_{S_{\ell-1}}}_{|S_{\ell-1}|\text{~copies~of~}x_k}
\end{equation}
which can be realized in $\ell-1$ steps:
\begin{enumerate}
\item Step 1: make $|S_1|$ copies of $x_k$ from ${\sf R}_{{\rm inp},k}$ to $S_1$, i.e.,
\begin{equation*}
\ket{x_k}_{{\sf R}_{{\rm inp},k}}\ket{0^{|S_1|}}_{S_1}\to \ket{x_k}_{{\sf R}_{{\rm inp},k}}\underbrace{\ket{x_k\cdots x_k}_{S_{1}}}_{|S_{1}|\text{~copies~of~}x_k}
\end{equation*}
This can be implemented by applying $\lceil 1/c'\rceil+1$ CNOT gates, with each CNOT gate having a separate qubit in $S_1$ as target, and control qubit ${\sf R}_{{\rm inp},k}$. By Lemmas~\ref{lem:distance} and~\ref{lem:cnot_path_constraint}, Step 1 can be realized in depth $|S_1|\cdot O(\log(n+m))=O(\log(n+m))$.
\item Step $2\le i\le \ell-1$: make copies of $x_k$ from $S_{i-1}$ to $\Gamma(S_{i-1})$, i.e.,
\begin{equation}\label{eq:copy_expander_graph_i}
\underbrace{\ket{x_k\cdots x_k}_{S_{i-1}}}_{|S_{i-1}|\text{~copies~of~}x_k}\ket{0^{\lfloor c'|S_{i-1}|\rfloor}}_{\Gamma(S_{i-1})}\to \ket{x_k\cdots x_k}_{S_{i-1}}\underbrace{\ket{x_k\cdots x_k}_{\Gamma(S_{i-1})}}_{\lfloor c'|S_{i-1}|\rfloor\text{~copies~of~}x_k}=\underbrace{\ket{x_k\cdots x_k}_{S_{i}}}_{|S_{i}|\text{~copies~of~}x_k}, \forall x_k\in \mbox{$\{0,1\}$}.
\end{equation}
By the construction of $S_1,S_2,\ldots,S_\ell$, there exists a maximum matching $M_{S_{i-1}}$ between $S_{i-1}$ and $\Gamma(S_{i-1})$ of size $\lfloor c'|S_{i-1}|\rfloor$.
Eq. \eqref{eq:copy_expander_graph_i} can be implemented by applying CNOT gates to all pairs of qubits $(u,v)$ corresponding to edges in $M_{S_{i-1}}$. Each of these can be implemented in parallel, and thus the total depth required is $1$.
\end{enumerate}
The total depth required to implement Eq.~\eqref{eq:copy_expander_graph} is therefore $O(\log(n+m))+\ell-2=O(\log(m+n))$.
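The geometric growth of this copy cascade is easy to verify classically. The sketch below is illustrative only; the parameter \texttt{c\_prime} plays the role of $c'$. It tracks the copy-set size round by round and confirms that only $O(\log m)$ matching rounds, each of depth $1$, are needed.
\begin{verbatim}
import math

def copy_rounds(m, c_prime):
    # |S_1| = ceil(1/c') + 1; each round adds floor(c'|S_{i-1}|)
    # fresh copies via a matching of that size (depth 1 per round).
    size = math.ceil(1 / c_prime) + 1
    rounds = 0
    while size <= m / 2:
        size += math.floor(c_prime * size)
        rounds += 1
    return rounds, size

print(copy_rounds(10**6, 0.2))  # O(log m) rounds for m = 10^6
\end{verbatim}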
We now consider the circuit construction for $\Lambda_n$, which we implement in 3 stages:
\begin{enumerate}
\item Gray initial stage:
\begin{equation}\label{eq:grayinitial_expander}
\ket{x}_{{\sf R}_{\rm inp}}\ket{0^{|S_{\ell-1}|}}_{S_{\ell-1}}\ket{0^{\lfloor c'|S_{\ell-1}|\rfloor}}_{\Gamma(S_{\ell-1})}\to \ket{x}_{{\sf R}_{\rm inp}}\ket{0^{|S_{\ell-1}|}}_{S_{\ell-1}}\ket{f_1}_{\Gamma(S_{\ell-1})}
\end{equation}
This can be realized in $p$ steps by handling the $p$ suffix bits one by one. For all $j\in[p]$, the $j$-th step is implemented as follows:
\begin{enumerate}
\item First, we make $|S_{\ell-1}|$ copies of $x_{n-p+j}$ in copy register $S_{\ell-1}$ by the implementation of Eq. \eqref{eq:copy_expander_graph}, i.e.,
\begin{equation}\label{eq:copy_expander_phase1}
\ket{x}_{{\sf R}_{\rm inp}}\ket{0^{|S_{\ell-1}|}}_{S_{\ell-1}}\to \ket{x}_{{\sf R}_{\rm inp}}\underbrace{\ket{x_{n-p+j}\cdots x_{n-p+j}}_{S_{\ell-1}}}_{|S_{\ell-1}|\text{~copies~of~}x_{n-p+j}}.
\end{equation}
This requires depth $O(\log(n+m))$.
\item Second, for all $k\in[2^p]$, if $f_{1,k}=\langle s(1,k),x\rangle$ (viewed as a linear function of the variables $x_i$) contains $x_{n-p+j}$, we add $x_{n-p+j}$ to the $k$-th qubit of target register $\Gamma(S_{\ell-1})$. This can be implemented by applying a CNOT gate whose control qubit is a copy of $\ket{x_{n-p+j}}$ in $S_{\ell-1}$ and whose target qubit is the $k$-th qubit of $\Gamma(S_{\ell-1})$. Since there exists a $\lfloor c'|S_{\ell-1}|\rfloor$-size matching between $S_{\ell-1}$ and $\Gamma(S_{\ell-1})$ and each qubit in $S_{\ell-1}$ contains a copy of $x_{n-p+j}$, all the CNOT gates can be applied in parallel, and the required circuit depth is $1$.
\item Third, we apply the inverse circuit of Eq. \eqref{eq:copy_expander_phase1} of depth $O(\log(n+m))$ to restore the copy register.
\end{enumerate}
In total, the circuit depth for the Gray initial stage is $O(p\log(n+m))$.
\item Gray cycle stage:
\begin{equation}\label{eq:graycycle_expander}
\ket{x}_{{\sf R}_{\rm inp}}\ket{0^{|S_{\ell-1}|}}_{S_{\ell-1}}\ket{f_1}_{\Gamma(S_{\ell-1})} \to e^{i\theta(x)}\ket{x}_{{\sf R}_{\rm inp}}\ket{0^{|S_{\ell-1}|}}_{S_{\ell-1}}\ket{f_1}_{\Gamma(S_{\ell-1})}
\end{equation}
We implement this in $2^{n-p}$ steps. For $j\le 2^{n-p}-1$, the $j$-th step is defined as
\begin{equation}
\ket{x}_{{\sf R}_{\rm inp}}\ket{0^{|S_{\ell-1}|}}_{S_{\ell-1}}\ket{f_j}_{\Gamma(S_{\ell-1})}\to e^{i\sum\limits_{k\in[2^p]}f(j+1,k)\alpha_{s(j+1,k)}}\ket{x}_{{\sf R}_{\rm inp}}\ket{0^{|S_{\ell-1}|}}_{S_{\ell-1}}\ket{f_{j+1}}_{\Gamma(S_{\ell-1})},\forall x\in\mbox{$\{0,1\}^n$}.
\end{equation}
Recall that $s(j,k)$ and $s(j+1,k)$ differ in the $h_{1,j+1}$-th bit.
\begin{enumerate}
\item First, we make $|S_{\ell-1}|$ copies of $x_{h_{1,j+1}}$ in $S_{\ell-1}$. This can be done in depth $O(\log(n+m))$ using $U_{copy}^{h_{1,j+1}}$ (Eq. \eqref{eq:copy_expander_graph}).
\item Second, we add $x_{h_{1,j+1}}$ to every qubit of $\Gamma(S_{\ell-1})$, by applying CNOT gates to all qubit pairs $(u,v)$ corresponding to edges in $M_{S_{\ell-1}}$. This can be done in depth 1.
\item Third, for all $k\in[2^p]$, we apply $R(\alpha_{s(j+1,k)})$ on the $k$-th qubit of $\Gamma(S_{\ell-1})$, where $\alpha_{s(j+1,k)}\in\mathbb{R}$ is defined in Eq. \eqref{eq:alpha}. This can be done in depth 1.
\item Finally, we restore the copy register using the inverse of $U_{copy}^{h_{1,j+1}}$.
\end{enumerate}
The total depth of the $j$-th step is $O(\log(n+m))$. The last step, the $2^{n-p}$-th step, is defined as
\begin{equation}
\ket{x}_{{\sf R}_{\rm inp}}\ket{0^{|S_{\ell-1}|}}_{S_{\ell-1}}\ket{f_{2^{n-p}}}_{\Gamma(S_{\ell-1})}\to e^{i\sum\limits_{k\in[2^p]}f(1,k)\alpha_{s(1,k)}}\ket{x}_{{\sf R}_{\rm inp}}\ket{0^{|S_{\ell-1}|}}_{S_{\ell-1}}\ket{f_{1}}_{\Gamma(S_{\ell-1})},\forall x\in\mbox{$\{0,1\}^n$}.
\end{equation}
By a similar discussion, this can be implemented in depth $O(\log(n+m))$.
In summary, the total depth of the Gray cycle stage is $O(2^{n-p}\log(n+m))$.
\item Inverse stage:
\begin{equation}\label{eq:inverse_expander}
\ket{x}_{{\sf R}_{\rm inp}}\ket{0^{|S_{\ell-1}|}}_{S_{\ell-1}}\ket{f_1}_{\Gamma(S_{\ell-1})}\to\ket{x}_{{\sf R}_{\rm inp}}\ket{0^{|S_{\ell-1}|}}_{S_{\ell-1}}\ket{0^{\lfloor c'|S_{\ell-1}|\rfloor}}_{\Gamma(S_{\ell-1})}.
\end{equation}
This can be implemented by the inverse of Eq. \eqref{eq:grayinitial_expander}.
\end{enumerate}
The total depth required to implement $\Lambda_n$ is thus $2O(p\log(n+m))+O(2^{n-p}\log(n+m))=O(\log(n+m)\log(m)+\frac{\log(m)2^n}{m})=O(n^2+\frac{\log(m)2^n}{m})$ for $m\le O(2^n)$. If $m\ge\omega(2^n)$, we only use $O(2^n)$ of the ancillary qubits, and the circuit depth is $O(n^2)$. Therefore, the total depth is $O\Big(n^2+\frac{\log(m)2^n}{m}\Big)$.
\end{proof}
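The flip positions $h_{1,j+1}$ used in the Gray cycle stage can be generated classically in advance. The sketch below assumes the standard binary-reflected Gray code, consistent with our choice $\ell_k=1$ for all $k$ (indexing offsets are suppressed for readability): consecutive codewords differ exactly in the position of the lowest set bit of the step index.
\begin{verbatim}
def gray(j):
    # j-th codeword of the binary-reflected Gray code.
    return j ^ (j >> 1)

def flip_position(j):
    # 1-indexed bit in which gray(j-1) and gray(j) differ;
    # this is the role played by h_{1,j} in the Gray cycle stage.
    return (gray(j - 1) ^ gray(j)).bit_length()

n_minus_p = 4
schedule = [flip_position(j) for j in range(1, 2 ** n_minus_p)]
print(schedule)  # [1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]
\end{verbatim}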
\section{Circuit constructions for QSP and GUS under qubit connectivity constraints}
\label{sec:QSP_US_graph}
In this section, we bound the circuit size and depth for quantum state preparation (QSP) and general unitary synthesis (GUS) under different graph constraints, based on the circuit constructions for diagonal unitary matrices in \S \ref{sec:diag_without_ancilla} and \S \ref{sec:diag_with_ancilla}. In \S \ref{sec:QSP_graph} and \S \ref{sec:US_graph}, we present QSP and GUS circuits under path, $d$-dimensional grid, binary tree, expander graph and general graph constraints. In \S \ref{sec:circuit_transformation}, we present a transformation between circuits under different graph constraints, which we use to upper bound the circuit depth for QSP and GUS under brick-wall constraint.
We also obtain circuit depth and size bounds for the QRAM (quantum random access memory) problem under graph constraints. This is the problem: given $2^k$ $n$-qubit quantum states $\ket{\psi_i}$, realize the transformation
\[\ket{i}\ket{0^n} \to \ket{i}\ket{\psi_i}, \ \forall i\in \mbox{$\{0,1\}$}^k.\]
Since the proofs are similar to those for QSP circuits, we defer proofs to Appendix \ref{append:QRAM}.
\subsection{Circuit complexity for QSP under graph constraints}
\label{sec:QSP_graph}
\subsubsection{QSP under $\Path_{n+m}$ and $\Grid_{n+m}^{n_1,n_2,\ldots,n_d}$ constraints}
\label{sec:QSP_path_grid}
The results of this section are based on the fact that (i) every $j$-qubit uniformly controlled gate (UCG) $V_j$ can be decomposed into 3 $j$-qubit diagonal unitary matrices and 4 single-qubit gates (Lemma~\ref{lem:UCG_decomposition}); and (ii) any QSP circuit can be decomposed into a sequence of UCGs $V_1,V_2,\ldots, V_n$:
\begin{lemma}[\cite{grover2002creating,kerenidis2017quantum}]\label{lem:QSP_framework_UCG}
The QSP problem can be solved by $n$ UCGs acting on $1,2,\ldots, n$ qubits, respectively,
\[V_n (V_{n-1}\otimes \mathbb{I}_1) \cdots (V_2\otimes \mathbb{I}_{n-2})(V_1\otimes\mathbb{I}_{n-1}),\]
by the circuit in Fig.~\ref{fig:QSP_circuit}.
\end{lemma}
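To make Lemma~\ref{lem:QSP_framework_UCG} concrete, the following Python sketch computes the $R_y$ rotation angles of each UCG $V_j$ for a target state with real, non-negative amplitudes. This restriction is assumed purely for illustration; complex amplitudes additionally require phase rotations, which is exactly where the diagonal decomposition of Lemma~\ref{lem:UCG_decomposition} enters.
\begin{verbatim}
import math

def ucg_angles(amps):
    # Rotation angles of V_1, ..., V_n for preparing the state with
    # real non-negative amplitudes `amps` (length 2^n, unit norm).
    # angles[j-1][k] is the R_y angle of V_j when its j-1 control
    # qubits read the binary expansion of k.
    n = int(math.log2(len(amps)))
    probs = [a * a for a in amps]
    angles = []
    for _ in range(n):
        merged = [probs[2*k] + probs[2*k+1] for k in range(len(probs)//2)]
        layer = [2 * math.acos(math.sqrt(probs[2*k] / merged[k]))
                 if merged[k] > 0 else 0.0 for k in range(len(merged))]
        angles.append(layer)
        probs = merged
    angles.reverse()  # angles[0] -> V_1, ..., angles[n-1] -> V_n
    return angles

print(ucg_angles([0.5, 0.5, 0.5, 0.5]))  # [[pi/2], [pi/2, pi/2]]
\end{verbatim}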
\begin{figure}[]
\centerline{
\Qcircuit @C=0.6em @R=0.6em {
\lstick{\ket{0}} & \gate{\scriptstyle V_1} & \multigate{1}{\scriptstyle V_2} & \multigate{2}{\scriptstyle V_3} & \qw & \push{\cdots} & & \qw & \multigate{4}{\scriptstyle V_n} & \qw\\
\lstick{\ket{0}} & \qw & \ghost{\scriptstyle V_2} & \ghost{\scriptstyle V_3} & \qw &\push{\cdots} & & \qw & \ghost{\scriptstyle V_n} & \qw\\
\lstick{\ket{0}} & \qw & \qw & \ghost{\scriptstyle V_3} & \qw & \push{\cdots} & & \qw & \ghost{\scriptstyle V_n} & \qw\\
\vdots~~~~~~~~~ & \qw & \qw & \qw & \qw & \push{\cdots} & & \qw & \ghost{\scriptstyle V_n}& \qw\\
\lstick{\ket{0}} & \qw & \qw & \qw & \qw & \push{\cdots} & & \qw & \ghost{\scriptstyle V_n} & \qw \inputgroupv{2}{4}{4em}{1.6em}{ \scriptstyle n \text{~qubits}~~~~~~~~~~~~~~~}\\
}
}
\caption{A QSP circuit to prepare an $n$-qubit state. Every $V_j$ is a $j$-qubit uniformly controlled gate (UCG) for $j\in[n]$, where the first $j-1$ qubits are control qubits and the last qubit is the target qubit. }
\label{fig:QSP_circuit}
\end{figure}
\begin{lemma}\label{lem:UCG_grid}
Any $n$-qubit UCG $V_n$ can be realized by a quantum circuit of depth
\begin{equation*}
O\Big(n^2+d2^{\frac{n}{d+1}}+\max_{j\in\{2,\ldots,d\}}\Big\{\frac{d2^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}+\frac{2^n}{n+m}\Big),
\end{equation*}
under $\Grid_{n+m}^{n_1,n_2,\ldots,n_d}$ constraint, using $m\ge 0$ ancillary qubits.
If $n_1=n_2=\cdots=n_d$, the depth is $O\left(n^2+d2^{\frac{n}{d+1}}+\frac{2^n}{n+m}\right)$.
\end{lemma}
\begin{proof}
Any $n$-qubit diagonal unitary $\Lambda_n$ can be implemented in depth
\[
\begin{cases}
O(2^n/n), & \text{ if $m=0$, \qquad (\text{Corollary}~\ref{coro:diag_grid_withoutancilla})}\\
O\Big(n^2+d2^{\frac{n}{d+1}}+\max_{j\in\{2,\ldots,d\}}\Big\{\frac{d2^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}+\frac{2^n}{n+m}\Big) & \text{ if $m\ge 3n$. \qquad (\text{Lemma}~\ref{lem:diag_grid_ancillary})}
\end{cases}
\]
under $\Grid_{n+m}^{n_1,n_2,\ldots,n_d}$ constraint. If $0 < m < 3n$ we do not use the ancillary qubits.
The result follows from Lemma~\ref{lem:UCG_decomposition}. The case where $n_i=(n+m)^{1/d}$ for all $i$ is also dealt with in Lemma~\ref{lem:diag_grid_ancillary}.
\end{proof}
\begin{theorem}\label{thm:QSP_grid}
Any $n$-qubit quantum state can be prepared by a quantum circuit of depth
\begin{equation*}
O\Big(n^3+d2^{\frac{n}{d+1}}+\max\limits_{j\in\{2,\ldots,d\}}\Big\{\frac{d2^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}+\frac{2^n}{n+m}\Big),
\end{equation*}
under $\Grid_{n+m}^{n_1,n_2,\ldots,n_d}$ constraint, using $m \ge 0$ ancillary qubits.
If $n_1=n_2=\cdots=n_d$, the depth is $O\left(n^3+d2^{\frac{n}{d+1}}+\frac{2^n}{n+m}\right)$.
\end{theorem}
\begin{proof}
By Lemma~\ref{lem:QSP_framework_UCG}, any $n$-qubit QSP circuit can be decomposed into $n$ UCGs, $V_1,V_2,\ldots,V_n$ of growing size. Combined with Lemma~\ref{lem:UCG_grid}, this gives a circuit depth upper bound of
\[\sum_{k=1}^{n}O\Big(k^2+d2^{\frac{k}{d+1}}+\max_{j\in\{2,\ldots,d\}}\Big\{\frac{d2^{k/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}+\frac{2^k}{k+m}\Big)=O\Big(n^3+d2^{\frac{n}{d+1}}+\max_{j\in\{2,\ldots,d\}}\Big\{\frac{d2^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}+\frac{2^n}{n+m}\Big).\]
\end{proof}
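The collapse of this sum to its final term can also be checked numerically. The following sketch (illustrative only) compares the last summand family $\sum_{k=1}^n 2^k/(k+m)$ with the envelope $2^n/(n+m)$; the ratio remains bounded by a small constant.
\begin{verbatim}
def depth_sum(n, m):
    return sum(2 ** k / (k + m) for k in range(1, n + 1))

for n, m in [(20, 0), (20, 1000), (30, 10**6)]:
    ratio = depth_sum(n, m) / (2 ** n / (n + m))
    print(n, m, round(ratio, 2))  # ratio stays O(1)
\end{verbatim}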
\begin{corollary}\label{coro:QSP_path_grid}
Any $n$-qubit quantum state can be prepared by a circuit with $m\ge 0$ ancillary qubits, of depth
\begin{enumerate}
\item $O\left(2^{n/2}+\frac{2^n}{n+m}\right)$ under $\Path_{n+m}$ constraint.
\item $O\left(2^{n/3}+\frac{2^{n/2}}{(n_2)^{1/2}}+\frac{2^n}{n+m}\right)$ under $\Grid^{n_1,n_2}_{n+m}$ constraint.
\item $O\left(2^{n/4}+\frac{2^{n/2}}{(n_2n_3)^{1/2}}+\frac{2^{n/3}}{(n_3)^{1/3}}+\frac{2^n}{n+m}\right)$ under $\Grid^{n_1,n_2,n_3}_{n+m}$ constraint.
\end{enumerate}
\end{corollary}
Note that the $\Path_{n+m}$ result holds by setting $d=1$ in Theorem \ref{thm:QSP_grid}.
\subsubsection{QSP under $\Expander_{n+m}$ constraints}
\label{sec:QSP_expander}
\begin{lemma}\label{lem:UCG_expander}
Any $n$-qubit UCG $V_n$ can be realized by a quantum circuit of depth
\begin{equation*}
O\left(n^2+\frac{\log(n+m)2^n}{n+m}\right)
\end{equation*}
under $\Expander_{n+m}$ constraint, using $m\ge 0$ ancillary qubits.
\end{lemma}
\begin{proof}
Any $n$-qubit diagonal unitary $\Lambda_n$ can be implemented in depth
\[
\begin{cases}
O(\log(n)2^n/n), & \text{if $m=0$, \qquad (\text{Lemma}~\ref{lem:diag_expander_withoutancilla})}\\
O\left(n^2+\frac{\log(m)2^n}{m}\right), &\text{ if $m\ge \Omega(n)$. \quad (\text{Lemma}~\ref{lem:diag_expander_ancilla})}
\end{cases}
\]
under $\Expander_{n+m}$ constraint. If $0 < m = o(n)$, we do not use the ancillary qubits. The result then follows from Lemma \ref{lem:UCG_decomposition}.
\end{proof}
\begin{theorem}\label{thm:QSP_expander}
Any $n$-qubit quantum state can be prepared by a quantum circuit of depth
\[O\left(n^3+\frac{\log(n+m)2^n}{n+m}\right)\]
under $\Expander_{n+m}$ constraint, using $m\ge 0$ ancillary qubits.
\end{theorem}
\begin{proof}
By Lemma~\ref{lem:QSP_framework_UCG}, any $n$-qubit QSP circuit can be decomposed into $n$ UCGs, $V_1,V_2,\ldots,V_n$ of growing size. Combined with Lemma~\ref{lem:UCG_expander}, this gives a circuit depth upper bound of
\[\sum_{k=1}^{n}O\left(k^2+\frac{\log(k+m)2^k}{k+m}\right)=O\left(n^3+\frac{\log(n+m)2^n}{n+m}\right).\]
\end{proof}
\subsubsection{QSP under general graph $G$ constraints}
\label{sec:QSP_graph_general}
\begin{lemma}\label{lem:UCG_graph}
Any $n$-qubit UCG $V_n$ can be implemented by a quantum circuit of size and depth $O(2^n)$ under arbitrary graph constraint, using no ancillary qubits.
\end{lemma}
\begin{proof}
Follows directly from Lemmas~\ref{lem:UCG_decomposition} and \ref{lem:diag_graph_withoutancilla}.
\end{proof}
\begin{theorem}\label{thm:QSP_graph}
Any $n$-qubit quantum state can be prepared by a quantum circuit of size and depth $O(2^n)$ under arbitrary graph $G$ constraint, using no ancillary qubits.
\end{theorem}
\begin{proof}
For every $i\in[n]$, the UCG $V_i$ acts on $i$ qubits in $G$. We first swap the locations of these $i$ qubits so that they lie in a connected subgraph of $G$ with $i$ vertices. $V_i$ can then be implemented in depth $O(2^i)$ by Lemma \ref{lem:UCG_graph}, and the qubits are then swapped back to their original positions. The swapping and unswapping can be realized by a CNOT circuit of size and depth $O(n^2)$ by Lemma \ref{lem:cnot_circuit}. The total depth and size to implement the QSP circuit are therefore $\sum_{i=1}^n (O(2^i)+O(n^2))=O(2^n)$.
\end{proof}
\subsubsection{QSP under $\Tree_{n+m}(2)$ and $\Tree_{n+m}(d)$ constraints}
\label{sec:QSP_binarytree}
\paragraph{QSP under $\Tree_{n+m}(2)$ constraints}
\begin{lemma}\label{lem:UCG_binarytree}
Any $n$-qubit UCG $V_n$ can be realized by a quantum circuit of depth
\begin{equation*}
O\left(n^2\log(n)+\frac{\log(n)2^n}{n+m}\right)
\end{equation*}
under $\Tree_{n+m}(2)$ constraint, using $m\ge 0$ ancillary qubits.
\end{lemma}
\begin{proof}
Follows directly from Lemmas~\ref{lem:UCG_decomposition}, \ref{lem:diag_bianrytree_withoutancilla} and \ref{lem:diag_binarytree_withancilla}.
\end{proof}
\begin{theorem}\label{thm:QSP_binarytree}
Any $n$-qubit quantum state can be realized by a quantum circuit of depth
\begin{equation*}
O\left(n^3\log(n)+\frac{\log(n)2^n}{n+m}\right)
\end{equation*}
under $\Tree_{n+m}(2)$ constraint, using $m\ge 0$ ancillary qubits.
\end{theorem}
\begin{proof}
By Lemma~\ref{lem:QSP_framework_UCG}, any $n$-qubit QSP circuit can be decomposed into $n$ UCGs, $V_1,V_2,\ldots,V_n$ of growing size. Combined with Lemma~\ref{lem:UCG_binarytree}, this gives a circuit depth upper bound of
\[\sum_{k=1}^{n}O\left(k^2\log(k)+\frac{\log(k)2^k}{k+m}\right)=O\left(n^3\log(n)+\frac{\log(n)2^n}{n+m}\right).\]
\end{proof}
In Appendix~\ref{append:binary_tree_improvement} we show that using a unary encoding for the QSP circuit and a different circuit framework, the circuit depth in Theorem \ref{thm:QSP_binarytree} can be improved to $O\left(n^2\log^2(n)+\frac{\log(n)2^n}{n+m}\right)$ if $m\le o(2^n)$, and $O(n^2\log(n))$ if $m\ge \Omega(2^n)$.
\paragraph{QSP under $\Tree_{n+m}(d)$ and $\Star_{n+m}$ constraints}
\begin{theorem}\label{thm:QSP_darytree}
Any $n$-qubit quantum state can be realized by a quantum circuit of depth \[O\left(n^2d\log_d (n+m)\log_d(n+d)+\frac{(n+d)\log_d(n+d) 2^{n}}{n+m}\right),\]
under $\Tree_{n+m}(d)$ constraint, using $m\ge 0$ ancillary qubits, for $d< n+m$.
\end{theorem}
\begin{proof}
See Theorem \ref{thm:QSP_darytree_append} in Appendix~\ref{append:d-arytree}.
\end{proof}
\begin{corollary}\label{coro:QSP_star}
Any $n$-qubit quantum state can be realized by a quantum circuit of depth $O\left(2^n\right)$
under $\Star_{n+m}$ constraint, using $m\ge 0$ ancillary qubits.
\end{corollary}
\begin{proof}
We do not use the ancillary qubits; the result then follows from Theorem~\ref{thm:QSP_graph}.
\end{proof}
\subsection{Circuit complexity for GUS under graph constraints}
\label{sec:US_graph}
\begin{lemma}[\cite{mottonen2005decompositions}]\label{lem:unitary_decomposition}
Any $n$-qubit unitary matrix $U\in\mathbb{C}^{2^{n}\times 2^n}$ can be decomposed into $2^{n}-1$ $n$-qubit UCGs.
\end{lemma}
Note that the target qubit of the UCGs in Lemma~\ref{lem:unitary_decomposition} may be arbitrary, which generalizes the UCGs in Eq. \eqref{eq:UCG} for which the target is always the $n$-th qubit.
\begin{theorem}\label{thm:US_grid}
Any $n$-qubit unitary can be prepared by a quantum circuit of depth \[O\Big(n^22^n+d4^{\frac{(d+2)n}{2(d+1)}}+\max_{j\in\{2,\ldots,d\}}\Big\{\frac{d4^{(j+1)n/(2j)}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}+\frac{4^n}{n+m}\Big)\]
under $\Grid^{n_1,n_2,\ldots,n_d}_{n+m}$ constraint, using $m\ge 0$ ancillary qubits.
When $n_1=n_2=\cdots=n_d$, the depth is $O\left(n^22^n+d4^{\frac{(d+2)n}{2(d+1)}}+\frac{4^n}{n+m}\right)$.
\end{theorem}
\begin{proof}
Follows from Lemmas~\ref{lem:UCG_grid} and \ref{lem:unitary_decomposition}.
\end{proof}
\begin{corollary}\label{coro:US_path_grid}
Any $n$-qubit unitary can be realized by a quantum circuit with $m\ge 0$ ancillary qubits, of depth
\begin{enumerate}
\item $O\left(4^{3n/4}+\frac{4^n}{n+m}\right)$ under $\Path_{n+m}$ constraint.
\item $O\left(4^{2n/3}+\frac{4^{3n/4}}{(n_2)^{1/2}}+\frac{4^n}{n+m}\right)$ under $\Grid^{n_1,n_2}_{n+m}$ constraint.
\item $O\left(4^{5n/8}+\frac{4^{3n/4}}{(n_2n_3)^{1/2}}+\frac{4^{2n/3}}{(n_3)^{1/3}}+\frac{4^n}{n+m}\right)$ under $\Grid^{n_1,n_2,n_3}_{n+m}$ constraint.
\end{enumerate}
\end{corollary}
Note that the case for $\Path_{n+m}$ follows from choosing $d=1$ in Theorem \ref{thm:US_grid}.
\begin{theorem}\label{thm:US_binarytree}
Any $n$-qubit unitary can be realized by a quantum circuit of depth $O\left(n^2\log^2(n)2^n+\frac{\log(n)4^n}{n+m}\right)$ under $\Tree_{n+m}(2)$ constraint, using $m\ge 0$ ancillary qubits.
\end{theorem}
\begin{proof}
Follows from Lemmas~\ref{lem:UCG_binarytree} and \ref{lem:unitary_decomposition}.
\end{proof}
\begin{theorem}\label{thm:US_darytree}
Any $n$-qubit unitary can be realized by a quantum circuit of depth \[O\left(n2^nd\log_d (n+m)\log_d(n+d)+\frac{(n+d)\log_d(n+d) 4^{n}}{n+m}\right)\]
under $\Tree_{n+m}(d)$ constraint, using $m\ge 0$ ancillary qubits.
\end{theorem}
\begin{proof}
See Appendix~\ref{append:d-arytree}.
\end{proof}
\begin{theorem}\label{thm:US_expander_graph}
Any $n$-qubit unitary can be realized by a quantum circuit of depth \[O\Big(n^22^n+\frac{\log(n+m)4^n}{n+m}\Big),\]
under $\Expander_{n+m}$ constraint, using $m\ge 0$ ancillary qubits.
\end{theorem}
\begin{proof}
Follows from Lemmas~\ref{lem:UCG_expander} and \ref{lem:unitary_decomposition}.
\end{proof}
\begin{theorem}\label{thm:US_graph}
Any $n$-qubit unitary can be realized by a quantum circuit of size and depth $O(4^n)$ under arbitrary graph $G$ constraint.
\end{theorem}
\begin{proof}
Follows from Lemmas~\ref{lem:UCG_graph} and \ref{lem:unitary_decomposition}.
\end{proof}
\begin{corollary}\label{coro:US_star}
Any $n$-qubit unitary matrix can be realized by a quantum circuit of depth $O\left(4^n\right)$
under $\Star_{n+m}$ constraint, using $m\ge 0$ ancillary qubits.
\end{corollary}
\begin{proof}
We do not use the ancillary qubits; the result follows from Theorem \ref{thm:US_graph}.
\end{proof}
\subsection{Circuit transformation between different graph constraints}
\label{sec:circuit_transformation}
We first show a transformation between circuits under different graph constraints.
\begin{lemma}\label{lem:circuit_trans}
Let $G=(V,E)$ and $G'=(V,E')$ be two graphs with common vertex set $V$ and edge sets $E\subseteq E'$, where $E'\setminus E = \bigcup_{i=1}^c E_i$ such that,
\begin{enumerate}
\item $E_i \cap E_j = \emptyset$ for all $i\neq j$.
\item For each $i\in [c]$ and each edge $(v_s,v_t)\in E_i$, there is a path $P_{st}$ in $G$ of length at most $c'$ connecting $v_s$ and $v_t$, and these paths are pairwise vertex-disjoint.
\end{enumerate}
If $\mathcal{C}'$ is a quantum circuit of depth $d$ and size $s$ under $G'$ constraint, there exists a circuit $\mathcal{C}$ implementing the same transformation of depth $O(cc'd)$ and size $O(c's)$, under $G$ constraint.
\end{lemma}
\begin{proof}
We shall say that a CNOT gate acts on $e=(v_s,v_t)$ if it acts on qubits $v_s$ and $v_t$. The circuit $\mathcal{C}'$ has $d$ layers of gates, and each layer $C_k'$ ($k\in [d]$) can be represented as $C'_k = C'_k(E)\otimes (\otimes_{i=1}^c C'_k(E_i))$, where $C'_k(E)$ consists of the single-qubit gates and the CNOT gates in $\mathcal{C}'$ acting on edges in $E$, and $C'_k(E_i)$ consists of the CNOT gates in $\mathcal{C}'$ acting on edges in $E_i$.
Since $E$ is the edge set of $G$, the circuit $C_k'(E)$ can be realized by a circuit of depth $1$ under $G$ constraint. By assumption, for each $e=(v_s,v_t)\in E_i$, there exists a path from $v_s$ to $v_t$ of length at most $c'$ in $G$, and these paths for different edges $e\in E_i$ are disjoint. Thus, all CNOT gates in $C_k'(E_i)$ can be implemented in parallel, in depth and size $O(c')$.
A depth-$1$ circuit layer under $G'$ constraint can thus be realized by a circuit of depth $1+O(c')\cdot c = O(cc')$ under $G$ constraint, giving total depth $O(cc'd)$. As every CNOT gate in $\mathcal{C}'$ can be realized in size $O(c')$ along its path in $G$, the total size of $\mathcal{C}$ is $O(c')\times s=O(c's)$.
\end{proof}
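As a concrete instance of the gate replacement in this proof, the sketch below compiles a single CNOT across an edge of $E'\setminus E$ into nearest-neighbor gates along its guaranteed path in $G$, using SWAP conjugation (one standard option; ladder-style constructions with fewer gates also work). The gate count is $O(c')$, as required.
\begin{verbatim}
def long_range_cnot(path):
    # CNOT with control path[0] and target path[-1], compiled to
    # nearest-neighbor gates along `path` (length <= c' in G).
    swaps = [(path[i], path[i + 1]) for i in range(len(path) - 2)]
    pre = [g for (u, v) in swaps for g in
           (("CNOT", u, v), ("CNOT", v, u), ("CNOT", u, v))]
    core = [("CNOT", path[-2], path[-1])]
    return pre + core + pre[::-1]  # CNOTs are self-inverse

print(long_range_cnot([0, 1, 2, 3]))  # 13 gates for a length-3 path
\end{verbatim}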
\begin{figure}[]
\centering
\begin{tikzpicture}
\filldraw[fill=green!20] (0,0)--(2,0)--(2,0.5)--(0,0.5)--cycle (4,0)--(6,0)--(6,0.5)--(4,0.5)--cycle (0,1)--(2,1)--(2,1.5)--(0,1.5)--cycle (4,1)--(6,1)--(6,1.5)--(4,1.5)--cycle (0,2)--(2,2)--(2,2.5)--(0,2.5)--cycle (4,2)--(6,2)--(6,2.5)--(4,2.5)--cycle;
\filldraw[fill=blue!20,draw=white] (0,0.5)--(1,0.5)--(1,1)--(0,1)--cycle (0,1.5)--(1,1.5)--(1,2)--(0,2)--cycle ;
\filldraw[fill=blue!20] (3,0.5)--(5,0.5)--(5,1)--(3,1)--cycle (3,1.5)--(5,1.5)--(5,2)--(3,2)--cycle;
\draw (0,0.5)--(1,0.5)--(1,1)--(0,1) (0,1.5)--(1,1.5)--(1,2)--(0,2);
\filldraw[fill=yellow!20] (2,0)--(4,0)--(4,0.5)--(2,0.5)--cycle (2,1)--(4,1)--(4,1.5)--(2,1.5)--cycle (2,2)--(4,2)--(4,2.5)--(2,2.5)--cycle;
\draw[dotted] (0,0)--(0,2.5) (2,0)--(2,2.5) (4,0)--(4,2.5) (0.5,0)--(0.5,2.5) (2.5,0)--(2.5,2.5) (4.5,0)--(4.5,2.5) (1,0)--(1,2.5) (3,0)--(3,2.5) (5,0)--(5,2.5) (1.5,0)--(1.5,2.5) (3.5,0)--(3.5,2.5) (5.5,0)--(5.5,2.5) (6,0)--(6,2.5);
\draw [fill=black] (0,0) circle (0.05) (0.5,0) circle (0.05) (1,0) circle (0.05) (1.5,0) circle (0.05) (2,0) circle (0.05) (2.5,0) circle (0.05) (3,0) circle (0.05) (3.5,0) circle (0.05) (4,0) circle (0.05) (4.5,0) circle (0.05) (5,0) circle (0.05) (5.5,0) circle (0.05) (6,0) circle (0.05);
\draw [fill=black] (0,0.5) circle (0.05) (0.5,0.5) circle (0.05) (1,0.5) circle (0.05) (1.5,0.5) circle (0.05) (2,0.5) circle (0.05) (2.5,0.5) circle (0.05) (3,0.5) circle (0.05) (3.5,0.5) circle (0.05) (4,0.5) circle (0.05) (4.5,0.5) circle (0.05) (5,0.5) circle (0.05) (5.5,0.5) circle (0.05) (6,0.5) circle (0.05);
\draw [fill=black] (0,1) circle (0.05) (0.5,1) circle (0.05) (1,1) circle (0.05) (1.5,1) circle (0.05) (2,1) circle (0.05) (2.5,1) circle (0.05) (3,1) circle (0.05) (3.5,1) circle (0.05) (4,1) circle (0.05) (4.5,1) circle (0.05) (5,1) circle (0.05) (5.5,1) circle (0.05) (6,1) circle (0.05);
\draw [fill=black] (0,1.5) circle (0.05) (0.5,1.5) circle (0.05) (1,1.5) circle (0.05) (1.5,1.5) circle (0.05) (2,1.5) circle (0.05) (2.5,1.5) circle (0.05) (3,1.5) circle (0.05) (3.5,1.5) circle (0.05) (4,1.5) circle (0.05) (4.5,1.5) circle (0.05) (5,1.5) circle (0.05) (5.5,1.5) circle (0.05) (6,1.5) circle (0.05);
\draw [fill=red,draw=red] (1,1.75) circle (0.05) (3,1.75) circle (0.05) (5,1.75) circle (0.05);
\draw [fill=black] (0,2) circle (0.05) (0.5,2) circle (0.05) (1,2) circle (0.05) (1.5,2) circle (0.05) (2,2) circle (0.05) (2.5,2) circle (0.05) (3,2) circle (0.05) (3.5,2) circle (0.05) (4,2) circle (0.05) (4.5,2) circle (0.05) (5,2) circle (0.05) (5.5,2) circle (0.05) (6,2) circle (0.05);
\draw [fill=black] (0,2.5) circle (0.05) (0.5,2.5) circle (0.05) (1,2.5) circle (0.05) (1.5,2.5) circle (0.05) (2,2.5) circle (0.05) (2.5,2.5) circle (0.05) (3,2.5) circle (0.05) (3.5,2.5) circle (0.05) (4,2.5) circle (0.05) (4.5,2.5) circle (0.05) (5,2.5) circle (0.05) (5.5,2.5) circle (0.05) (6,2.5) circle (0.05);
\node (a) at (-0.2,2.5) {};
\node (b) at (6.2,2.5) {};
\draw[decorate,decoration={brace,raise=5pt}] (a) -- (b);
\draw (3,3) node{\scriptsize $n_2$ bricks in each layer};
\node (a) at (0,-0.2) {};
\node (b) at (0,2.7) {};
\draw[decorate,decoration={brace,raise=5pt}] (a) -- (b);
\draw (-1,1.25) node{\scriptsize $n_1$ layers};
\draw [fill=red,draw=red] (1,0.75) circle (0.05) (3,0.75) circle (0.05) (5,0.75) circle (0.05);
\draw [fill=red,draw=red] (0,1.25) circle (0.05) (2,1.25) circle (0.05) (4,1.25) circle (0.05) (6,1.25) circle (0.05);
\draw [fill=red,draw=red] (0,2.25) circle (0.05) (2,2.25) circle (0.05) (4,2.25) circle (0.05) (6,2.25) circle (0.05);
\draw [fill=red,draw=red] (0,0.25) circle (0.05) (2,0.25) circle (0.05) (4,0.25) circle (0.05) (6,0.25) circle (0.05);
\end{tikzpicture}
\caption{A new graph $G'$ constructed by adding edges to $\brickwall_{n+m}^{n_1,n_2,b_1,b_2}$. The red nodes are removed, and new dotted edges are added. Bricks are divided into $4$ groups, indicated by the green, white, yellow and blue colors.
}
\label{fig:brickwall_grid}
\end{figure}
We use this lemma to obtain QSP and GUS circuits under brick-wall constraint, by reducing the brick-wall case to our 2D grid results.
\begin{corollary}\label{coro:QSP_brickwall}
Any $n$-qubit quantum state can be prepared by a circuit of depth \[O\left(2^{n/3}+\frac{2^{n/2}}{\sqrt{\min\{n_1,n_2\}}}+\frac{2^n}{n+m}\right)\] under $\brickwall^{n_1,n_2,b_1,b_2}_{n+m}$ constraint with $b_1, b_2=O(1)$, using $m\ge 0$ ancillary qubits.
\end{corollary}
\begin{proof}
In $\brickwall_{n+m}^{n_1,n_2,b_1,b_2}$, each brick is a rectangle containing $b_1$ vertices on each vertical edge. We wish to apply Lemma~\ref{lem:circuit_trans}, by defining the brick-wall as a subgraph of a fully connected grid. To do so, conceptually we must first `remove' the $b_1-2$ vertices (indicated by the red nodes in Fig.~\ref{fig:brickwall_grid}) in the middle of each vertical edge and add an edge between the remaining two vertices. This is possible because, at the cost of an $O(b_1)$ overhead, we can implement a CNOT between the two remaining vertices along the path between them. Thus, we can view the brick-wall as a new graph $G=(V,E)$ where the red nodes have been removed, and CNOT gates across the newly added edges cost $O(b_1)$.
From $G=(V,E)$ we construct yet another new graph $G'=(V, E\cup E')$, by adding vertical edges across layers as in Fig.~\ref{fig:brickwall_grid}. We color the bricks in even and odd layers with alternating colors (using four colors total: two colors for each of the even and odd layers) and set $E' = \cup_{i=1}^4 E_i'$ where the four new edge sets $E_i'$ correspond to which brick color the edge lies in. Note that all bricks of the same color are vertex disjoint. We further decompose each $E_i'$ into at most $b_2 -2$ disjoint subsets $E_i$, with each $E_i$ formed by selecting at most $1$ edge from every brick in $E_i'$. Thus, $E' = \cup_{i=1}^c E_i$ for $c\le 4(b_2-2) = O(1)$. As each $(u,v)\in E_i$ lies in a separate brick, there exists a path from $u$ to $v$ in $G$ of length at most $c'=b_1+b_2 = O(1)$, and all such paths are disjoint.
With the transformation set up, we can invoke our results for the 2D grid to obtain circuits for the brick-wall. We consider two cases.
\begin{enumerate}
\item Case 1: $m\ge 2n$. Note that $G'=(V, E\cup E')$ is a 2-dimensional grid $\Grid^{n_1+1,n_2b_2-n_2+1}_{(n_1+1)(n_2b_2-n_2+1)}$. By Corollary~\ref{coro:QSP_path_grid} (result 2), the circuit depth required for $n$-qubit QSP is
\[
O\left(2^{n/3}+\frac{2^{n/2}}{\sqrt{\min\{n_1+1, n_2 b_2-n_2+1\}}}+\frac{2^n}{n+m}\right)
= O\left(2^{n/3}+\frac{2^{n/2}}{\sqrt{\min\{n_1,n_2\}}}+\frac{2^n}{n+m}\right)
\]
under $\Grid^{n_1+1,n_2b_2-n_2+1}_{(n_1+1)(n_2b_2-n_2+1)}$ constraint, which translates, via Lemma \ref{lem:circuit_trans} into a circuit depth bound of $O\left(2^{n/3}+\frac{2^{n/2}}{\sqrt{\min\{n_1,n_2\}}}+\frac{2^n}{n+m}\right)$ under graph $G$ constraint.
By Lemma~\ref{lem:cnot_path_constraint}, in one layer of the above circuit, all CNOT gates acting on the vertical edges in the brick-wall can be implemented in depth $O(b_1)$ simultaneously. Therefore, the circuit depth of $n$-qubit QSP is $O(b_1)\cdot O\left(2^{n/3}+\frac{2^{n/2}}{\sqrt{\min\{n_1,n_2\}}}+\frac{2^n}{n+m}\right)=O\left(2^{n/3}+\frac{2^{n/2}}{\sqrt{\min\{n_1,n_2\}}}+\frac{2^n}{n+m}\right)$.
\item Case 2: $m\le 2n$.
There exists a Hamiltonian path in $G'$ and thus, by Corollary \ref{coro:QSP_path_grid} (result 1), the circuit depth for $n$-qubit QSP is $O(2^{n/2}+\frac{2^n}{n+m})=O(\frac{2^n}{n+m})$, since $m\le 2n$. By Lemma \ref{lem:circuit_trans}, the circuit depth for $n$-qubit QSP is $O(\frac{2^n}{n+m})$ under $\brickwall_{n+m}^{n_1,n_2,b_1,b_2}$ constraint.
\end{enumerate}
This completes the proof.
\end{proof}
Note that in the general case where $b_1$ and $b_2$ are not necessarily constant, the result still holds though with an extra $O(b_1b_2(b_1+b_2))$ factor.
By the same argument, we obtain the following depth bound for GUS circuits under brick-wall constraint.
\begin{corollary}\label{coro:US_brickwall}
Any $n$-qubit unitary matrix can be implemented by a circuit of depth \[O\left(4^{2n/3}+\frac{4^{3n/4}}{\sqrt{\min\{n_1,n_2\}}}+\frac{4^n}{n+m}\right)\]
under $\brickwall^{n_1,n_2,b_1,b_2}_{n+m}$ constraint with $b_1, b_2=O(1)$, using $m\ge 0$ ancillary qubits.
\end{corollary}
\section{Circuit size and depth lower bounds under graph constraints}
\label{sec:QSP_US_lowerbound}
In this section, we show circuit depth and size lower bounds for QSP, diagonal unitary matrix preparation and GUS under graph constraints. The methods used can be extended to prove similar lower bounds for QRAM under graph constraints, which are shown in Appendix \ref{append:QRAM}.
\subsection{Circuit lower bounds under general graph constraints}
\label{sec:QSP_US_lowerbound_general}
\paragraph{Size lower bounds.} Previous work established circuit size lower bounds in the absence of graph constraints, as stated in the following lemma.
\begin{lemma}[\cite{shende2004minimal,plesch2011quantum}]\label{lem:lowerbound_size_previous}
There exist $n$-qubit quantum states and $n$-qubit unitaries which can only be implemented by quantum circuits under no graph constraints, of size at least $\Omega\left(2^n\right)$ and $\Omega\left(4^n\right)$ respectively.
\end{lemma}
Since a connectivity graph constraint only adds difficulty, the same lower bounds also hold for any constraint graph $G$.
\begin{proposition}\label{prop:lowerbound_size_graph}
For any connected graph $G$, there exist $n$-qubit quantum states and $n$-qubit unitaries which require quantum circuits of size at least $\Omega\left(2^n\right)$ and $\Omega\left(4^n\right)$, respectively, under $G$ constraint.
\end{proposition}
\begin{proof}
Any circuit under $G$ constraint is, in particular, a valid circuit under no graph constraints. Hence, if every $n$-qubit quantum state (resp.\ unitary) could be implemented in size $o(2^n)$ (resp.\ $o(4^n)$) under $G$ constraint, the same would hold in the unconstrained setting, contradicting Lemma~\ref{lem:lowerbound_size_previous}.
\end{proof}
The proof of Lemma \ref{lem:lowerbound_size_previous} is by parameter counting, which also applies to diagonal unitaries.
\begin{proposition}\label{prop:size_lowerbound_Lambda}
For any graph $G$, there exist $n$-qubit diagonal unitary matrices which require quantum circuits of size at least $\Omega(2^n)$ under $G$ constraint.
\end{proposition}
\begin{proof}
A circuit of size $o(2^n)$ consisting of arbitrary $2$-qubit gates has $o(2^n)$ free real parameters, whereas a diagonal unitary matrix $\Lambda_n$ is specified by $2^n-1$ free real parameters (up to a global phase). A smooth map from $o(2^n)$ parameters cannot cover a set of dimension $2^n-1$, so some $\Lambda_n$ requires size $\Omega(2^n)$.
\end{proof}
\paragraph{Depth lower bounds.}
\begin{lemma}[\cite{sun2021asymptotically}]\label{lem:lowerbound_previous}
There exist $n$-qubit quantum states and $n$-qubit unitaries which can only be implemented by quantum circuits under no graph constraints, of depth at least $\Omega\left(n+\frac{2^n}{n+m}\right)$ and $\Omega\left(n+\frac{4^n}{n+m}\right)$ respectively, using $m\ge 0$ ancillary qubits.
\end{lemma}
By the same method used to prove Lemma~\ref{lem:lowerbound_previous}, it can be shown that:
\begin{proposition}\label{prop:lowerbound_Lambda_noconstraint}
There exist $n$-qubit diagonal unitary matrices which can only be implemented by quantum circuits of depth at least $\Omega\left(n+\frac{2^n}{n+m}\right)$, under no graph constraints, using $m\ge 0$ ancillary qubits.
\end{proposition}
Again, these lower bounds hold under any graph constraint.
\begin{proposition}\label{prop:depth_lowerbound_graph}
Let $G=(V,E)$ denote an arbitrary connected graph with $n+m$ vertices for any $m\ge 0$. There exist $n$-qubit quantum states, diagonal unitary matrices and unitaries which can only be implemented by circuits under $G$ constraint of depth at least $\Omega\left(n+\frac{2^n}{n+m}\right)$, $\Omega\left(n+\frac{2^n}{n+m}\right)$ and $\Omega\left(n+\frac{4^n}{n+m}\right)$, respectively, using $m$ ancillary qubits.
\end{proposition}
\begin{proof}
These results follow from Lemma \ref{lem:lowerbound_previous} and Proposition \ref{prop:lowerbound_Lambda_noconstraint}.
\end{proof}
To give circuit depth lower bounds under general graph constraints, we first associate a quantum circuit with a directed graph.
\begin{definition}[Directed graphs for quantum circuits]\label{def:circuit-digraph}
Let $C$ be a quantum circuit on $n$ input and $m$ ancillary qubits consisting of $d$ depth-1 layers, in which odd layers consist only of single-qubit gates, even layers consist only of CNOT gates, and any two (non-identity) single-qubit gates acting on the same qubit are separated by at least one CNOT gate acting on that qubit (either as control or target). Let $L_1,L_2,\cdots,L_d$ denote the $d$ layers of this circuit, i.e., $C=L_dL_{d-1}\cdots L_1$. Define the directed graph $H=(V_C,E_C)$ associated with $C$ as follows.
\begin{enumerate}
\item Vertex set $V_C$: For each $i\in[d+1]$, define $S_{i}:=\{v_i^j:j\in[n+m]\}$, where $v_i^j$ is a label corresponding to the $j$-th qubit. Then, $V_C:=\bigcup_{i=1}^{d+1} S_i$.
\item Edge set $E_C$: For all $i\in [d]$:
\begin{enumerate}
\item
If there is a single-qubit gate acting on the $j$-th qubit in layer $L_i$ then, for all $i \le i' \le d$, there exists a directed edge $(v_{i'+1}^j,v_{i'}^j)$.
\item If there is a CNOT gate acting on qubits $j_1$ and $j_2$ in layer $L_i$, then there exist $4$ directed edges $(v_{i+1}^{j_1},v_i^{j_1})$, $(v_{i+1}^{j_2}, v_i^{j_1})$, $(v_{i+1}^{j_1},v_i^{j_2})$ and $(v_{i+1}^{j_2},v_i^{j_2})$.
\end{enumerate}
Note that edges are directed from $S_{i+1}$ to $S_i$.
\end{enumerate}
\end{definition}
See Fig.~\ref{fig:example_directed_graph} for an example.
\paragraph{Remark.}
The circuits $C$ in Def.~\ref{def:circuit-digraph} assume a particular structure of alternating layers of single qubit gates and CNOT gates. However, an arbitrary circuit can be brought into this form with at most a constant factor overhead in depth: consecutive single qubit gates acting on a single qubit can be combined into one single-qubit gate, and consecutive circuit layers containing CNOT gates can be separated by a layer of identity gates. Thus, without loss of generality, for the remainder of this section we will assume (as in~\cite{shende2004minimal}) that circuits have this alternating layer structure.
\begin{figure}[]
\begin{subfigure}{0.48\textwidth}
\centerline{
\Qcircuit @C=0.6em @R=0.7em {
\lstick{\scriptstyle\ket{x_1}}&\gate{ } &\ctrl{1} & \qw & \qw & \gate{ } &\ctrl{1} &\gate{ } & \qw\\
\lstick{\scriptstyle\ket{x_2}}&\gate{ } & \targ & \gate{ } &\ctrl{1} & \gate{ } &\targ &\gate{ }& \qw\\
\lstick{\scriptstyle\ket{x_3}}&\gate{ } &\ctrl{1} &\gate{ } &\targ &\qw &\qw &\gate{ }& \qw\\
\lstick{\scriptstyle\ket{0}}&\gate{ } & \targ & \qw &\qw &\gate{ } &\ctrl{1} &\gate{ }& \qw &\rstick{\scriptstyle \ket{0}}\\
\lstick{\scriptstyle\ket{0}}&\gate{ } &\ctrl{1} &\gate{ } &\ctrl{1} & \gate{ }&\targ &\gate{ }& \qw &\rstick{\scriptstyle \ket{0}}\\
\lstick{\scriptstyle\ket{0}}&\gate{ } &\targ &\gate{ } & \targ &\qw &\qw &\gate{ }& \qw &\rstick{\scriptstyle \ket{0}}\\
&{\scriptstyle L_1} & {\scriptstyle L_2} & {\scriptstyle L_3} & {\scriptstyle L_4} & {\scriptstyle L_5} & {\scriptstyle L_6} & {\scriptstyle L_7} & \\
&&&&&&&&\\
}}
\caption{A depth $d=7$ circuit $C=L_7L_6\cdots L_1$ on $n=3$ input and $m=3$ ancillary qubits.}
\end{subfigure}
\hfill
\begin{subfigure}{0.48\textwidth}
\centering
\begin{tikzpicture}
\draw [fill=black] (0,0) circle (0.05) (0.5,0) circle (0.05) (1,0) circle (0.05) (1.5,0) circle (0.05) (2,0) circle (0.05) (2.5,0) circle (0.05) (3,0) circle (0.05) (3.5,0) circle (0.05);
\draw [fill=black] (0,0.5) circle (0.05) (0.5,0.5) circle (0.05) (1,0.5) circle (0.05) (1.5,0.5) circle (0.05) (2,0.5) circle (0.05) (2.5,0.5) circle (0.05) (3,0.5) circle (0.05) (3.5,0.5) circle (0.05);
\draw [fill=black] (0,1) circle (0.05) (0.5,1) circle (0.05) (1,1) circle (0.05) (1.5,1) circle (0.05) (2,1) circle (0.05) (2.5,1) circle (0.05) (3,1) circle (0.05) (3.5,1) circle (0.05);
\draw [fill=white] (0,-0.5) circle (0.05) (0.5,-0.5) circle (0.05) (1,-0.5) circle (0.05) (1.5,-0.5) circle (0.05) (2,-0.5) circle (0.05) (2.5,-0.5) circle (0.05) (3,-0.5) circle (0.05) (3.5,-0.5) circle (0.05);
\draw [fill=white] (0,-1) circle (0.05) (0.5,-1) circle (0.05) (1,-1) circle (0.05) (1.5,-1) circle (0.05) (2,-1) circle (0.05) (2.5,-1) circle (0.05) (3,-1) circle (0.05) (3.5,-1) circle (0.05);
\draw [fill=white] (0,-1.5) circle (0.05) (0.5,-1.5) circle (0.05) (1,-1.5) circle (0.05) (1.5,-1.5) circle (0.05) (2,-1.5) circle (0.05) (2.5,-1.5) circle (0.05) (3,-1.5) circle (0.05) (3.5,-1.5) circle (0.05);
\draw [->,draw=red] (3.5,1)--(3,1); \draw [->,draw=red] (3.5,0.5)--(3,0.5); \draw [->,draw=red] (3.5,0)--(3,0); \draw [->,draw=red](3.5,-0.5)--(3,-0.5); \draw [->,draw=red] (3.5,-1)--(3,-1);\draw [->,draw=red] (3.5,-1.5)--(3,-1.5);
\draw [->,draw=red] (3,1)--(2.5,1); \draw [->,draw=red] (3,1)--(2.5,0.5); \draw [->,draw=red] (3,0.5)--(2.5,1);\draw [->,draw=red] (3,0.5)--(2.5,0.5);\draw [->,draw=red] (3,-0.5)--(2.5,-0.5);\draw [->,draw=red] (3,-0.5)--(2.5,-1);\draw [->,draw=red] (3,-1)--(2.5,-0.5);\draw [->,draw=red] (3,-1)--(2.5,-1);
\draw [->,draw=red] (2.5,1)--(2,1); \draw [->,draw=red] (2.5,0.5)--(2,0.5);\draw [->,draw=red] (2.5,-0.5)--(2,-0.5);\draw [->,draw=red] (2.5,-1)--(2,-1);
\draw [->,draw=red] (2,0.5)--(1.5,0.5); \draw [->,draw=red](2,0)--(1.5,0);\draw [->,draw=red] (2,0.5)--(1.5,0);\draw [->,draw=red] (2,0)--(1.5,0.5);\draw [->,draw=red] (2,-1)--(1.5,-1);\draw [->,draw=red] (2,-1.5)--(1.5,-1.5);\draw [->,draw=red] (2,-1)--(1.5,-1.5); \draw [->,draw=red] (2,-1.5)--(1.5,-1);
\draw [->,draw=red] (1.5,0.5)--(1,0.5); \draw [->,draw=red] (1.5,0)--(1,0); \draw [->,draw=red] (1.5,-1)--(1,-1); \draw [->,draw=red] (1.5,-1.5)--(1,-1.5);
\draw [->,draw=red] (1,1)--(0.5,1); \draw [->,draw=red](1,0.5)--(0.5,0.5); \draw [->,draw=red] (1,1)--(0.5,0.5); \draw [->,draw=red] (1,0.5)--(0.5,1); \draw [->,draw=red] (1,0)--(0.5,0); \draw [->,draw=red] (1,-0.5)--(0.5,-0.5); \draw [->,draw=red] (1,0)--(0.5,-0.5);\draw [->,draw=red] (1,-0.5)--(0.5,0);\draw [->,draw=red] (1,-1)--(0.5,-1);\draw [->,draw=red] (1,-1.5)--(0.5,-1.5); \draw [->,draw=red](1,-1)--(0.5,-1.5);\draw [->,draw=red] (1,-1.5)--(0.5,-1);
\draw [->,draw=red] (0.5,1)--(0,1); \draw [->,draw=red] (0.5,0.5)--(0,0.5);\draw [->,draw=red] (0.5,0)--(0,0);\draw [->,draw=red] (0.5,-0.5)--(0,-0.5);\draw [->,draw=red] (0.5,-1)--(0,-1);\draw [->,draw=red] (0.5,-1.5)--(0,-1.5);
\draw (0,-2) node{\scriptsize $S_1$} (0.5,-2) node{\scriptsize $S_2$} (1,-2) node{\scriptsize $S_3$} (1.5,-2) node{\scriptsize $S_4$} (2,-2) node{\scriptsize $S_5$} (2.5,-2) node{\scriptsize $S_6$} (3,-2) node{\scriptsize $S_7$} (3.5,-2) node{\scriptsize $S_8$};
\draw [draw=blue] (-0.1,1.1)--(0.1,1.1)--(0.1,-0.6)--(-0.1,-0.6)--cycle (0.4,1.1)--(0.6,1.1)--(0.6,-0.6)--(0.4,-0.6)--cycle (0.9,0.6)--(1.1,0.6)--(1.1,-0.1)--(0.9,-0.1)--cycle (1.4,0.6)--(1.6,0.6)--(1.6,-0.1)--(1.4,-0.1)--cycle
(1.9,1.1)--(2.1,1.1)--(2.1,0.4)--(1.9,0.4)--cycle
(2.4,1.1)--(2.6,1.1)--(2.6,0.4)--(2.4,0.4)--cycle
(2.9,1.1)--(3.1,1.1)--(3.1,-0.1)--(2.9,-0.1)--cycle
(3.4,1.1)--(3.6,1.1)--(3.6,-0.1)--(3.4,-0.1)--cycle;
\draw[->,draw=teal] (2.5,-1.5)--(2,-1.5);
\draw[->,draw=teal] (3,-1.5)--(2.5,-1.5);
\draw[->,draw=teal] (3,0)--(2.5,-0);
\draw[->,draw=teal] (2.5,0)--(2,-0);
\draw[->,draw=teal] (2,-0.5)--(1.5,-0.5);
\draw[->,draw=teal] (1.5,-0.5)--(1,-0.5);
\draw[->,draw=teal] (2,1)--(1.5,1);
\draw[->,draw=teal] (1.5,1)--(1,1);
\draw (0,1.5) node{\color{blue}\scriptsize $S'_1$} (0.5,1.5) node{\color{blue}\scriptsize $S'_2$} (1,1.5) node{\color{blue}\scriptsize $S'_3$} (1.5,1.5) node{\color{blue}\scriptsize $S'_4$} (2,1.5) node{\color{blue}\scriptsize $S'_5$} (2.5,1.5) node{\color{blue}\scriptsize $S'_6$} (3,1.5) node{\color{blue}\scriptsize $S'_7$} (3.5,1.5) node{\color{blue}\scriptsize $S'_8$};
\end{tikzpicture}
\caption{The directed graph corresponding to $C$. $n+m$ vertices are added to each layer $S_i$, for all $i\in[d+1]$, with black (white) vertices corresponding to input (ancillary) qubit lines in the original circuit. $S'_i$ (blue boxes) denote the reachable subsets in $H$ (see Def.~\ref{def:reachable}). Green arrows between the $j$-th vertices of $S_{i+1}$ and $S_i$ indicate that there is no quantum gate acting on the $j$-th qubit in the $i$-th layer of the circuit.}
\end{subfigure}
\caption{A quantum circuit $C$ and its corresponding directed graph $H=(V_C, E_C)$.}
\label{fig:example_directed_graph}
\end{figure}
\begin{definition}[Reachable subsets]\label{def:reachable} Let $H=(V_C,E_C)$ be the directed graph associated with quantum circuit $C$ of depth $d$, with vertex set $V_C = \bigcup_{i=1}^{d+1} S_i$. For each $i\in[d+1]$ define the reachable subsets $S'_{i}$ of $H$ as follows:
\begin{itemize}
\item $S'_{d+1} = \{v^j_{d+1} : j\in[n]\}$, i.e., the subset of $n$ vertices in $S_{d+1}$ corresponding to the $n$ input qubits.
\item For $i\in[d]$, $S'_{i}\subseteq S_i$ is the subset of vertices $v^j_i$ in $S_i$ which are (i) reachable by a directed path from vertices in $S'_{d+1}$, and for which (ii) there is a quantum gate acting on qubit $j$ in circuit layer $L_{i}$.
\end{itemize}
\end{definition}
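Definitions~\ref{def:circuit-digraph} and~\ref{def:reachable} translate directly into a reachability computation. The sketch below is a simplified rendition (it treats reachability at the level of qubit lines rather than building $H$ vertex by vertex, which suffices for computing the sizes $|S'_i|$).
\begin{verbatim}
def reachable_subsets(n, layers):
    # layers[i] = (singles, cnots) for layer L_{i+1}: a set of qubit
    # indices carrying single-qubit gates, and a set of qubit pairs.
    # Returns |S'_i| for i = 1..d, propagating reachability backwards
    # from the n input qubit lines.
    R = set(range(n))                  # reachable lines at layer d+1
    sizes = []
    for singles, cnots in reversed(layers):
        for (a, b) in cnots:           # a CNOT spreads reachability
            if a in R or b in R:       # to both of its endpoints
                R |= {a, b}
        gated = singles | {q for e in cnots for q in e}
        sizes.append(len(R & gated))   # |S'_i|: reachable AND gated
    return sizes[::-1]

# Toy circuit: 2 input + 1 ancillary qubits, 3 layers.
layers = [({0, 1, 2}, set()), (set(), {(1, 2)}), ({0}, set())]
print(reachable_subsets(2, layers))    # [3, 2, 1]
\end{verbatim}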
\begin{theorem}\label{thm:reachable-set-bounds}
Let $H=(V_C,E_C)$ be the directed graph associated with quantum circuit $C$, of depth $d$, acting on $n$ input and $m$ ancilla qubits. Let $S'_1, \ldots S'_{d+1}$ be the reachable subsets of $H$.
\begin{enumerate}
\item If $C$ is a circuit for any $n$-qubit quantum state, then $O(\sum_{i=1}^d |S_i'|)\ge 2^{n}-1$;
\item If $C$ is a circuit for any $n$-qubit diagonal unitary matrix, then $O(\sum_{i=1}^d |S_i'|)\ge 2^{n}-1$;
\item If $C$ is a circuit for any $n$-qubit general unitary matrix, then $O(\sum_{i=1}^d |S_i'|)\ge 4^{n}-1$.
\end{enumerate}
\end{theorem}
\begin{proof}
Let $V$ denote the set of $n+m$ qubits, and $V_{d+1}$ the set of $n$ input qubits. For $i\in[d]$, let $V_{i}=V_{i+1}\cup V'_i$, where $V'_i$ denotes the qubit set corresponding to $S_i'$.
The $i$-th layer $L_i$ can be represented as $L_i=L_{V_i}\otimes L_{\overline{V}_i}$, where $L_{V_i}$ consists of gates acting on $V_i$, and $L_{\overline{V}_i}$ consists of gates acting on $V-V_i$. Then, $C$ can be expressed as
\begin{align*}
C&=L_dL_{d-1}\cdots L_1 = (L_{V_d}\otimes L_{\overline{V}_d}) (L_{V_{d-1}}\otimes L_{\overline{V}_{d-1}})\cdots (L_{V_1}\otimes L_{\overline{V}_1})
\end{align*}
Since $C$ is a QSP circuit acting on $n$ input and $m$ ancillary qubits, we have
\begin{equation}
\ket{\psi}_I\ket{0^m}_A = C \ket{0^n}_I\ket{0^m}_A = (L_{V_d}\otimes L_{\overline{V}_d}) (L_{V_{d-1}}\otimes L_{\overline{V}_{d-1}})\cdots (L_{V_1}\otimes L_{\overline{V}_1}) \ket{0^n}_I\ket{0^m}_A
\end{equation}
where $I$ and $A$ are registers for holding the input and ancilla, respectively. Now we can cancel the gates in $L_{\overline{V}_i}$ without affecting $\ket{\psi}$. More precisely, we multiply the two sides of the above equation by $L_{\overline{V}_d}, \ldots, L_{\overline{V}_1}$ in that order, and get
\begin{equation}\label{eq:lightcone}
(\mathbb{I}_{V_1}\otimes L_{\overline{V}_1}^\dagger) (\mathbb{I}_{V_{2}}\otimes L_{\overline{V}_{2}}^\dagger)\cdots (\mathbb{I}_{V_d}\otimes L_{\overline{V}_d}^\dagger) \ket{\psi}_I\ket{0^m}_A = (L_{V_d}\otimes \mathbb{I}_{\overline{V}_d}) (L_{V_{d-1}}\otimes \mathbb{I}_{\overline{V}_{d-1}})\cdots (L_{V_1}\otimes \mathbb{I}_{\overline{V}_1}) \ket{0^n}_I\ket{0^m}_A.
\end{equation}
Note that $V_{d+1}\subseteq V_{d} \subseteq \cdots \subseteq V_1$ by definition, thus $\overline{V}_1\subseteq \overline{V}_{2} \subseteq \cdots \subseteq \overline{V}_d \subseteq \overline{V}_{d+1} = [n+m]-[n]$. Therefore, all the operators $L_{\overline{V}_i}^\dagger$ at the LHS of the above equation act on the second register $A$ only, thus
\[ (L_{V_d}\otimes \mathbb{I}_{\overline{V}_d}) (L_{V_{d-1}}\otimes \mathbb{I}_{\overline{V}_{d-1}})\cdots (L_{V_1}\otimes \mathbb{I}_{\overline{V}_1}) \ket{0^n}_I\ket{0^m}_A = \ket{\psi}_I\ket{\phi}_A\]
for some $m$-qubit state $\ket{\phi}_A$. That is, by removing all gates outside the lightcone, we get another circuit $C' = (L_{V_d}\otimes \mathbb{I}_{\overline{V}_d}) (L_{V_{d-1}}\otimes \mathbb{I}_{\overline{V}_{d-1}})\cdots (L_{V_1}\otimes \mathbb{I}_{\overline{V}_1})$ that also generates state $\ket{\psi}_I$, though with a garbage state $\ket{\phi}_A$ unentangled with $\ket{\psi}_I$.
Now we analyze the number of parameters in $C'$ to see how many different $\ket{\psi}_I$ it can generate. For any $i\in[d]$, according to the definition of $L_{V_i}$, there are $O(|S_i'|)=O(|V'_i|)$ gates in $L_{V_i}$. Therefore, $C'$ consists of $O(\sum_{i=1}^d|S'_i|)$ gates. As each gate can be fully specified by $O(1)$ free real parameters, $C'$ can be specified by $O(\sum_{i=1}^d|S'_i|)$ free real parameters. Thus the output of circuit $C'$ is a manifold of dimension at most $O(\sum_{i=1}^d|S'_i|)$. Since the set of all $n$-qubit states $\ket{\psi}$ is a sphere of dimension $2^n-1$, we have that $O(\sum_{i=1}^d|S'_i|)\ge 2^n-1$.
The results for $n$-qubit diagonal unitary matrices and $n$-qubit general unitary matrices follow similarly, noting that they are specified by at least $2^n-1$ and $4^n-1$ free parameters, respectively.
\end{proof}
\paragraph{Remark} In the above proof, we assumed that $C$ is a parameterized circuit, namely that the architecture is fixed and only the parameters vary. Note, however, that the number of depth-$d$ architectures is finite, so allowing a flexible architecture only makes the set of reachable outputs a finite union of such manifolds, which cannot increase its dimension.
\begin{theorem}\label{thm:depth_lower_bound_graph}
Let $G_\nu=(V,E)$ be a connected graph with $n+m$ vertices, where $\nu$ denotes the size of a maximum matching in $G_\nu$.
There exist $n$-qubit quantum states, diagonal unitary matrices and general unitaries which require quantum circuits of depth at least $\Omega\left(\max\{n,2^n/\nu\}\right)$, $\Omega\left(\max\{n,2^n/\nu\}\right)$ and $\Omega\left(\max\{n,4^n/\nu\}\right)$, respectively, to implement under $G_\nu$ constraint, using $m\ge 0$ ancillary qubits.
\end{theorem}
\begin{proof}
We consider the directed graph $H=(V_C, E_C)$ and reachable sets $S'_1$, $S'_2$, $\ldots$, $S'_d$, $S'_{d+1}$ for a QSP circuit $C$ of depth $d$. For $i\in [d]$, for every vertex in $S'_{i+1}$, there are at most 2 neighbors in $S'_{i}$ and thus $\abs{S'_{i}} \le 2|S'_{i+1}|$. Since $|S'_{d+1}|=n$, we have $\abs{S'_{i}} \le 2^{d-i+1}n$ for all $i\in[d]$.
Since the maximum matching size of $G_\nu$ is $\nu$, we also have $\abs{S'_{i}}\le 2\nu$ for all $i\in[d]$. Combining the two bounds, we obtain $\abs{S'_i} \le \min\{2^{d-i+1}n,2\nu\}$ for all $i\in[d]$.
Based on Theorem \ref{thm:reachable-set-bounds}, we have
\begin{align*}
&2^n-1 \le O\big(\sum_{i=1}^d |S'_i|\big)= O\big( \sum_{i=1}^d \min\{2^{d-i+1}n,2\nu\}\big)\\
=&\left\{\begin{array}{ll}
O( \sum_{i=1}^d 2^{d-i+1}n)\le O(2^d n) \le O(\nu) \le O(d\nu), & \text{if~} n2^d \le 2\nu, \\
O\Big(\sum\limits_{i=1}^{ d- \lfloor\log(\frac{\nu}{n})\rfloor}2\nu+\sum\limits_{i= d-\lfloor \log(\frac{\nu}{n})\rfloor+1}^d2^{d-i+1}n\Big)=O\left((d-\log(\frac{\nu}{n}))\nu\right)\le O(d\nu), & \text{if~} n2^d > 2\nu,
\end{array}
\right.
\end{align*}
which implies $d=\Omega(2^n/\nu)$.
Lemma \ref{lem:lowerbound_previous} gives a depth lower bound $\Omega(n)$ for QSP. Combined with this result, we obtain a depth lower bound for QSP of $\Omega(\max\{n,2^n/\nu\})$.
The results for diagonal unitaries and arbitrary $n$-qubit unitaries follow by the same argument.
\end{proof}
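The counting bound above can also be inverted numerically: the sketch below finds the smallest $d$ for which $\sum_{i=1}^{d}\min\{2^{d-i+1}n,\,2\nu\}$ reaches $2^n-1$, and one observes the $\Theta(2^n/\nu)$ scaling (the constants here are illustrative and absorbed by the $O(\cdot)$ in the proof).
\begin{verbatim}
def min_depth(n, nu):
    # Smallest d with sum_i min(2^(d-i+1) * n, 2 * nu) >= 2^n - 1.
    d = 1
    while True:
        total = sum(min(2 ** (d - i + 1) * n, 2 * nu)
                    for i in range(1, d + 1))
        if total >= 2 ** n - 1:
            return d
        d += 1

for n, nu in [(10, 8), (10, 64), (12, 64)]:
    print(n, nu, min_depth(n, nu), 2 ** n // nu)  # depth ~ 2^n / nu
\end{verbatim}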
\subsection{Circuit size and depth lower bounds under specific graph constraints}
We shall prove lower bounds for specific graph constraints, starting with grid graphs. The generated state is in ${\sf R}_{\rm inp}$ as defined in \S \ref{sec:diag_with_ancilla_grid_d}.
\begin{lemma}\label{lem:lower_bound_grid_k}
There exists an $n$-qubit quantum state that requires a quantum circuit of depth
\[\Omega\Big( n+2^{\frac{n}{d+1}}+\max\limits_{j\in [d]}\Big\{\frac{2^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}\Big)\]
to implement under $\Grid^{n_1,n_2,\cdots,n_d}_{n+m}$ constraint, using $m\ge 0$ ancillary qubits.
\end{lemma}
\begin{proof}
Recall that $n_1\ge n_2\ge \cdots\ge n_d$. Proposition \ref{prop:depth_lowerbound_graph} gives a depth lower bound $\Omega\Big(\max\Big\{n,\frac{2^n}{\Pi_{i=1}^d n_i}\Big\}\Big)$. Let $D$ denote the depth of the $n$-qubit QSP circuit implementing the quantum state, and $H=(V_C, E_C)$ the associated directed graph with reachable sets $S'_1, \ldots, S'_{D+1}$. Recall the arrangement of input register ${\sf R}_{\rm inp}$ in \S \ref{sec:diag_with_ancilla_grid_d}: Let $k$ be the minimum integer satisfying $n_1\cdots n_k \ge n$, and $n_k'$ be the minimum integer satisfying $n_1\cdots n_{k-1} n_k' \ge n$. (When $k=1$, $n_1\cdots n_{k-1}$ is defined to be 1.) Register ${\sf R}_{\rm inp}$ consists of the first $n$ qubits of sub-grid $\Grid_{n_1n_2\cdots n_{k-1}n'_k}^{n_1,n_2,\cdots, n_{k-1},n'_k,1,1,\cdots,1}$.
Note that, for $\Grid_{n+m}^{n_1, n_2, \ldots, n_d}$, $S'_{D+1} \subseteq [n_1]\times \cdots \times [n_{k-1}] \times [n_k'] \times \{1\}\times \cdots \times \{1\}$, $S'_{D}\subseteq [n_1]\times \cdots \times [n_{k-1}] \times [n_k'+1] \times [2]\times \cdots \times [2]$, $S'_{D-1}\subseteq [n_1]\times \cdots \times [n_{k-1}] \times [n_k'+2] \times [3]\times \cdots \times [3]$, and so on. Since $n_d\le \cdots \le n_1$, the last dimensions $[n_d]$, $[n_{d-1}]$, \ldots{} may become saturated as $i$ (in $S_i'$) decreases. In general,
we have the following bounds for $|S_i'|$, where, in the middle line, the last $\ell \in[d-k]$ dimensions are saturated.
\begin{equation}\label{eq:grid-Si-bound}
\abs{S'_{i}} \le \begin{cases}
O\left(n_1n_2\cdots n_{k-1}(n'_k+D-i+1)(D-i+2)^{d-k}\right) &\quad \text{if }D-i+2\le n_d,\\
O\left(n_1 n_2\cdots n_{k-1}(n'_k+D-i+1)(D-i+2)^{d-\ell-k}n_{d-\ell+1}\cdots n_d \right) &\quad \text{if } n_{d-\ell+1}< D-i+2\le n_{d-\ell},\\
n_1 n_2\cdots n_d & \quad \text{if } D-i+2>n_k.
\end{cases}
\end{equation}
We consider $d+1$ cases.
\begin{itemize}
\item Case 1: If $n_d\ge \Omega(2^{\frac{n}{d+1}})$, assume for the sake of contradiction that $D=o\left(2^{\frac{n}{d+1}}\right)$. Then $D=o(n_d)$ in this case, and for all $i\in [D]$,
\begin{align*}
|S_i'|=O\left(n_1n_2\cdots n_{k-1}(n'_k+D-i+1)(D-i+2)^{d-k}\right)=O(n(n'_k+D-i+1)(D-i+2)^{d-k})
\end{align*}
By Theorem~\ref{thm:reachable-set-bounds},
\begin{align*}
2^n-1\le O\left(\sum_{i=1}^D|S'_i|\right)= \sum_{i=1}^{D} O\left(n(n'_k+D-i+1)(D-i+2)^{d-k}\right)=O(n(2D)^{d-k+1})\le O((2D)^{d+1}),
\end{align*}
as $D\ge \Omega(n)$. This implies $D\ge \Omega(2^{n/(d+1)})$, which contradicts our assumption. Therefore, $D$ must satisfy $D=\Omega(2^{n/(d+1)})$. In this case, it is not hard to verify that $\frac{2^{n/j}}{(n_j\cdots n_d)^{1/j}} \le O(2^{n/(d+1)})$ for all $j\in [d]$, and thus $D=\Omega(2^{n/(d+1)})=\Omega\Big(n+2^{\frac{n}{d+1}}+ \max\limits_{j\in [d]}\Big\{\frac{2^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}\Big)$.
\item Case $j$ ($2\le j\le d-k+1$): $n_d,n_{d-1},\ldots,n_{d-j+2}$ satisfy
\begin{equation}\label{eq:grid-lb-case-j}
n_d \le o( 2^{n/(d+1)}),\quad n_{d-i}\le o\Big(\frac{2^{\frac{n}{d-i+1}}}{(n_{d-i+1}\cdots n_{d})^{\frac{1}{d-i+1}}}\Big), \quad\forall i\in[j-2].
\end{equation}
and $n_{d-j+1}$ satisfies $n_{d-j+1}\ge\Omega\Big( \frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_d)^{\frac{1}{d-j+2}}}\Big)$.
Assume
for the sake of contradiction that $D=o\Big(\frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_{d})^{\frac{1}{d-j+2}}}\Big)$. Then we have $D=o(n_{d-j+1})$. We claim that $D\ge n_d$. Suppose this does not hold, i.e., $D< n_d$. Based on Eq. \eqref{eq:grid-Si-bound} (the first case),
$|S'_i|\le O(n_1\cdots n_{k-1}(n'_k+D-i+1)(D-i+2)^{d-k})$ for all $i\in[D]$. Since $D<n_d$, $|S'_i|$ satisfies
\begin{align*}
|S'_i|&\le O((n_1\cdots n_{k-1})(n'_k+n_d)(n_d)^{d-k})
\end{align*}
Recall that $n_d = o(2^{n/(d+1)})$, that $n_1\cdots n_{k-1}n'_k=O(n)$, that $n_1\cdots n_{k-1}$ is defined to be 1 when $k=1$, and the assumption $\Omega(n)\le D<n_d$. It follows that the above bound is at most $O((n_d)^{d})$ in both cases $k=1$ and $k\ge 2$. Thus
\[\sum_{i =1}^D|S'_i|\le D\cdot O((n_d)^{d}) = O((n_d)^{d+1}) = o(2^{n}).
\]
But according to Theorem \ref{thm:reachable-set-bounds}, $\sum_{i=1}^D|S'_i|\ge 2^n-1$, which contradicts the above equation. Therefore, we have $D\ge n_d$.
Recall that we assumed $D=o(n_{d-j+1})$, so $D$ falls in an interval $[n_{d-j+\tau+1}, n_{d-j+\tau})$ for some $1\le \tau\le j-1$. Now we upper bound $|S_i'|$ for different $i$. First consider those $i$ with $D-i+2\le n_d$: we have $D-i+2\le n_d\le n_{d-1} \le \cdots \le n_{d-j+\tau+1}$, thus by Eq.\eqref{eq:grid-Si-bound} (the first case)
\begin{align}\label{eq:reachable_set_size1}
|S_i'| &\le O(n_1\cdots n_{k-1}(n'_k+D-i+1)(D-i+2)^{d-k})\nonumber\\
&\le O(n_1\cdots n_{k-1}(n'_k+D-i+1)(D-i+2)^{d-j+\tau-k}n_{d-j+\tau+1}\cdots n_{d}).
\end{align}
Next consider those $i$ with $n_{d-\ell+1} < D-i+2 \le n_{d-\ell}$ for some $\ell\in[j-\tau-1]$: we have $D-i+2 \le n_{d-\ell} \le \cdots \le n_{d-j+\tau+1}$, thus by Eq.\eqref{eq:grid-Si-bound} (second case)
\begin{align}\label{eq:reachable_set_size2}
|S_i'|&\le O(n_1\cdots n_{k-1}(n'_k+D-i+1)(D-i+2)^{d-\ell-k}n_{d-\ell+1}\cdots n_d)\nonumber\\
&\le O(n_1\cdots n_{k-1}(n'_k+D-i+1)(D-i+2)^{d-j+\tau-k}n_{d-j+\tau+1}\cdots n_{d})
\end{align}
For $i$ with $1\le i<D-n_{d-j+\tau+1}+2$, we have $n_{d-j+\tau+1}\le D-i+2\le D+1\le n_{d-j+\tau}$, thus by Eq. \eqref{eq:grid-Si-bound} (the second case)
\begin{align}\label{eq:reachable_set_size3}
|S_i'|
&\le O(n_1\cdots n_{k-1}(n'_k+D-i+1)(D-i+2)^{d-j+\tau-k}n_{d-j+\tau+1}\cdots n_{d}).
\end{align}
By Theorem~\ref{thm:reachable-set-bounds}, we have
\begin{align*}
2^n-1\le O\left(\sum_{i=1}^D|S'_i|\right)
=O\left(\sum_{i=D-n_d+2}^D|S_i'|+\sum_{
\ell=1}^{j-\tau-1}\sum_{i=D-n_{d-\ell}+2}^{D-n_{d-\ell+1}+1}|S_i'|+\sum_{i=1}^{D-n_{d-j+\tau+1}+1}|S_i'|\right)
\end{align*}
Now we use Eq.\eqref{eq:reachable_set_size1}, Eq.\eqref{eq:reachable_set_size2}, and Eq.\eqref{eq:reachable_set_size3} to bound the first, second and third term, respectively, and obtain the upper bound
\[2^n-1 \le \sum_{i=1}^{D} O(n_1\cdots n_{k-1}(n'_k+D-i+1)(D-i+2)^{d-j+\tau-k} n_{d-j+\tau+1} \cdots n_{d}).
\]
Note that $D-i+1\le D$ for all $i\in [D]$, $n'_k\le n$ and $D\ge \Omega(n)$, therefore
\begin{align*}
\sum_{i=1}^D (n'_k+D-i+1)(D-i+2)^{d-j+\tau-k} \le \sum_{i=1}^D (n'_k+D)(D+1)^{d-j+\tau-k}
\le (2D)^{d-j+\tau+2-k},
\end{align*}
and the upper bound becomes
\[2^n-1 \le O(n_1\cdots n_{k-1}(2D)^{d-j+\tau+2-k}n_{d-j+\tau+1}\cdots n_d).\]
Recall that $n_1\cdots n_{k-1}=1$ if $k=1$ and $n_1\cdots n_{k-1} = O(n) = O(D)$ if $k\ge 2$. In either case, we have $2^n-1 \le O((2D)^{d-j+\tau+1} n_{d-j+\tau+1}\cdots n_d)$.
Thus $D\ge\Omega\left(\frac{2^{\frac{n}{d-j+\tau+1}}}{(n_{d-j+\tau+1}\cdots n_{d})^{\frac{1}{d-j+\tau+1}}}\right)$.
If $2\le \tau\le j-1$, i.e., $j-\tau\in [j-2]$, then Eq.\eqref{eq:grid-lb-case-j} with $i$ set to $j-\tau$ gives
$D\le o\left(\frac{2^{\frac{n}{d-j+\tau+1}}}{(n_{d-j+\tau+1}\cdots n_d)^{\frac{1}{d-j+\tau+1}}}\right)$, contradicting the above lower bound of $D$. Thus $\tau = 1$ and $D=\Omega\Big(\frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_{d})^{\frac{1}{d-j+2}}}\Big)$.
Now we shall show that
\begin{align}\label{eq:grid-lb-D}
D=\Omega\Big(\max\Big\{n,\frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_{d})^{\frac{1}{d-j+2}}}\Big\}\Big)=\Omega\Big( n+2^{\frac{n}{d+1}}+\max\limits_{j\in [d]}\Big\{\frac{2^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}\Big).
\end{align}
We first show the following facts
\begin{align}
&\frac{2^{\frac{n}{d-i+1}}}{(n_{d-i+1}\cdots n_{d})^{\frac{1}{d-i+1}}}\ge \frac{2^{\frac{n}{d-i+2}}}{(n_{d-i+2}\cdots n_{d})^{\frac{1}{d-i+2}}}, \quad \text{if~} 2\le i\le j-1. \label{eq:compare1}\\
&\frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_{d})^{\frac{1}{d-j+2}}}\ge \Omega\left(\frac{2^{\frac{n}{k'}}}{(n_{k'}\cdots n_{d})^{\frac{1}{k'}}}\right), \quad \text{if~} 1\le k'\le d-j+1.\label{eq:compare2}
\end{align}
Eq.\eqref{eq:grid-lb-case-j} implies that $n_{d-i+1}\le \frac{2^{\frac{n}{d-i+2}}}{(n_{d-i+2}\cdots n_d)^{\frac{1}{d-i+2}}}$ for all $i\in \{2,\ldots,j-1\}$. Then we have
\begin{align*}
&\frac{2^{\frac{n}{d-i+1}}}{(n_{d-i+1}\cdots n_{d})^{\frac{1}{d-i+1}}}/ \frac{2^{\frac{n}{d-i+2}}}{(n_{d-i+2}\cdots n_{d})^{\frac{1}{d-i+2}}}
=\frac{2^{\frac{n}{(d-i+1)(d-i+2)}}}{(n_{d-i+1})^{\frac{1}{d-i+1}}(n_{d-i+2}\cdots n_{d})^{\frac{1}{(d-i+1)(d-i+2)}}}\\
\ge& (n_{d-i+1})^{\frac{1}{d-i+1}}/(n_{d-i+1})^{\frac{1}{d-i+1}}=1.
\end{align*}
Eq.~\eqref{eq:compare1} thus holds. Recall that $n_1\ge n_2\ge \cdots \ge n_{d-j+1}\ge \Omega\Big( \frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_d)^{\frac{1}{d-j+2}}}\Big)$. Then we have
\begin{align*}
&\frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_{d})^{\frac{1}{d-j+2}}}/ \frac{2^{\frac{n}{k'}}}{(n_{k'}\cdots n_{d})^{\frac{1}{k'}}}=\frac{(n_{d-j+2}\cdots n_d)^{\frac{d-j+2-k'}{(d-j+2)k'}}(n_{k'}\cdots n_{d-j+1})^{1/k'}}{2^{\frac{(d-j+2-k')n}{(d-j+2)k'}}}\\
&\ge \Omega\left(\frac{(n_{k'}\cdots n_{d-j+1})^{1/k'}}{(n_{d-j+1})^{\frac{d-j+2-k'}{k'}}}\right)\ge \Omega\left(\frac{(n_{d-j+1})^{\frac{d-j+1-k'+1}{k'}}}{(n_{d-j+1})^{\frac{d-j+2-k'}{k'}}}\right)=\Omega(1),
\end{align*}
and Eq.~\eqref{eq:compare2} holds.
Combining Eq.\eqref{eq:compare1} and Eq.\eqref{eq:compare2}, we see that
$\frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_{d})^{\frac{1}{d-j+2}}}\ge \Omega\left(\frac{2^{\frac{n}{k'}}}{(n_{k'}\cdots n_d)^{\frac{1}{k'}}}\right)$ for $k'\in[d]$. For Eq.\eqref{eq:grid-lb-D}, it remains to prove $\frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_{d})^{\frac{1}{d-j+2}}}\ge 2^{\frac{n}{d+1}}$. Since $n_d\le 2^{n/(d+1)}$ (Eq.\eqref{eq:grid-lb-case-j}), it follows that $\frac{2^{n/d}}{(n_d)^{1/d}}\ge \frac{2^{n/d}}{(2^{n/(d+1)})^{1/d}}=2^{n/(d+1)}$. According to Eq. \eqref{eq:compare1}, we have
\[\frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_{d})^{\frac{1}{d-j+2}}}\ge \frac{2^{\frac{n}{d-j+3}}}{(n_{d-j+3}\cdots n_{d})^{\frac{1}{d-j+3}}}\ge \cdots \ge \frac{2^{\frac{n}{d}}}{( n_{d})^{\frac{1}{d}}}\ge 2^{n/(d+1)}.\]
This completes the proof of Eq.\eqref{eq:grid-lb-D}.
\item Case $j$ $(d-k+2\le j\le d)$: As in Eq.\eqref{eq:grid-lb-case-j}, $n_d,n_{d-1},\ldots,n_{d-j+1}$ satisfy
\begin{equation*}
n_d \le o( 2^{n/(d+1)}), \quad n_{d-i}\le o\Big(\frac{2^{\frac{n}{d-i+1}}}{(n_{d-i+1}\cdots n_{d})^{\frac{1}{d-i+1}}}\Big),~ \forall i\in[j-2], \quad n_{d-j+1}\ge\Omega\Big( \frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_d)^{\frac{1}{d-j+2}}}\Big).
\end{equation*}
Assume for the sake of contradiction that $D=o\Big( \frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_d)^{\frac{1}{d-j+2}}}\Big)$. Then we have $n_{d-j+2}\cdots n_d=o\Big( \frac{2^{n}}{D^{d-j+2}}\Big)$. For all $i\in[D]$, we use the trivial size bound $|S_i'|\le n_1n_2\cdots n_d$. Since in the current case we have $d-k+2\le j$, the product $n_{d-j+2}\cdots n_{k-1}$ is well defined and at least 1. Thus by Theorem~\ref{thm:reachable-set-bounds}, we have
\begin{equation}\label{eq:d-k+2<=j<=d_bound}
2^n-1\le O\left(\sum_{i=1}^D|S'_i|\right)\le O(Dn_1\cdots n_d)
\le O\left (Dn_1\cdots n_{k-1} n_{d-j+2}\cdots n_{k-1} n_{k}\cdots n_d\right).
\end{equation}
Since $n_{d-j+2}\cdots n_d=o\Big( \frac{2^{n}}{D^{d-j+2}}\Big)$, $n_1n_2\cdots n_{k-1}n'_k=O(n)$ and $D\ge \Omega (n)$, we have
\[O\left (Dn_1\cdots n_{k-1} n_{d-j+2}\cdots n_{k-1} n_{k}\cdots n_d\right)\le
o\left(\frac{n2^n}{D^{d-j+1}}\right)\le o\left(\frac{n2^n}{D}\right)= o(2^n).\]
This contradicts Eq.~\eqref{eq:d-k+2<=j<=d_bound}.
Therefore, the assumption that $D=o\Big( \frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_d)^{\frac{1}{d-j+2}}}\Big)$ does not hold and $D$ satisfies $D=\Omega\Big(\frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_{d})^{\frac{1}{d-j+2}}}\Big)$. By the same discussion as in Case $j~(2\le j\le d-k+1)$, we have $D=\Omega\Big(\max\Big\{n,\frac{2^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_{d})^{\frac{1}{d-j+2}}}\Big\}\Big)=\Omega\Big( n+2^{\frac{n}{d+1}}+\max\limits_{j\in [d]}\Big\{\frac{2^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}\Big)$.
\item Case $d+1$: $n_d,n_{d-1},\ldots, n_1$ satisfy
\begin{equation*}
n_d \le o( 2^{n/(d+1)}), \qquad n_{d-i}\le o\Big(\frac{2^{\frac{n}{d-i+1}}}{(n_{d-i+1}\cdots n_{d})^{\frac{1}{d-i+1}}}\Big),~ \forall i\in[d-1].
\end{equation*}
Proposition \ref{prop:depth_lowerbound_graph} gives a depth lower bound of $\Omega\Big(\max\Big\{n,\frac{2^n}{\Pi_{i=1}^d n_i}\Big\}\Big)$. For any $k'\ge 2$, the above inequality can be rephrased as $n_{k'-1}\le o\left(\frac{2^{\frac{n}{k'}}}{(n_{k'}\cdots n_d)^{\frac{1}{k'}}}\right)$. We have
\begin{align*}
&\frac{2^{\frac{n}{k'-1}}}{(n_{k'-1}\cdots n_d)^{\frac{1}{k'-1}}}/\frac{2^{n/k'}}{(n_{k'}\cdots n_d)^{1/k'}}=\frac{2^{\frac{n}{k'(k'-1)}}}{(n_{k'-1})^{\frac{1}{k'-1}}(n_{k'}\cdots n_d)^{\frac{1}{k'(k'-1)}}}
\ge \frac{(n_{k'-1})^{\frac{1}{k'-1}}}{(n_{k'-1})^{\frac{1}{k'-1}}}=1,\forall k'\ge 2.
\end{align*}
Therefore, \[\frac{2^n}{n_1\cdots n_d}\ge \frac{2^{\frac{n}{2}}}{(n_2\cdots n_d)^{1/2}}\ge \cdots \ge\frac{2^{\frac{n}{d}}}{(n_d)^{1/d}}\ge \frac{2^{\frac{n}{d}}}{(2^{n/(d+1)})^{1/d}}=2^{n/(d+1)},\]
where the last inequality used $n_d\le o(2^{n/(d+1)})$. This implies $\Omega\Big(\max\Big\{n,\frac{2^n}{\Pi_{i=1}^d n_i}\Big\}\Big)=\Omega\Big( n+2^{\frac{n}{d+1}}+\max\limits_{j\in [d]}\Big\{\frac{2^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}\Big)$.
\end{itemize}
\end{proof}
Using the same argument, we can also show the following.
\begin{lemma}\label{lem:lower_bound_grid_k_Lambda}
There exists an $n$-qubit diagonal unitary matrix that requires a quantum circuit of depth \[\Omega\Big( n+2^{\frac{n}{d+1}}+\max\limits_{j\in [d]}\Big\{\frac{2^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}\Big)\] under $\Grid_{n+m}^{n_1,n_2,\cdots,n_d}$ constraint to be implemented, using $m\ge0$ ancillary qubits.
\end{lemma}
\begin{lemma}\label{lem:lower_bound_grid_k_US}
There exists an $n$-qubit unitary matrix which requires a quantum circuit of depth at least $\Omega\Big(n+4^{\frac{n}{d+1}}+\max\limits_{j\in [d]}\Big\{\frac{4^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}\Big)$ under $\Grid_{n+m}^{n_1,n_2,\cdots,n_d}$ constraint to be implemented, using $m\ge 0$ ancillary qubits.
\end{lemma}
\begin{proof}
Lemma \ref{lem:lowerbound_previous} and Theorem \ref{thm:depth_lower_bound_graph} give a depth lower bound of $\Omega\Big(\max\Big\{n,\frac{4^n}{\Pi_{i=1}^d n_i}\Big\}\Big)$. By the same argument as in the proof of Lemma \ref{lem:lower_bound_grid_k}, we distinguish the following $d+1$ cases:
\begin{itemize}
\item Case 1: $n_d\ge \Omega(4^{\frac{n}{d+1}})$.
\item Case $j$ ($2\le j\le d$): $n_d,n_{d-1},\ldots,n_{d-j+1}$ satisfy
\begin{equation*}
n_d \le o( 4^{n/(d+1)}),\quad n_{d-i}\le o\Big(\frac{4^{\frac{n}{d-i+1}}}{(n_{d-i+1}\cdots n_{d})^{\frac{1}{d-i+1}}}\Big)~\forall i\in[j-2],\quad n_{d-j+1}\ge\Omega\Big( \frac{4^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_d)^{\frac{1}{d-j+2}}}\Big).
\end{equation*}
\item Case $d+1$: $n_d,n_{d-1},\ldots,n_{1}$ satisfy
\begin{equation*}
n_d \le o( 4^{n/(d+1)}),\quad n_{d-i}\le o\Big(\frac{4^{\frac{n}{d-i+1}}}{(n_{d-i+1}\cdots n_{d})^{\frac{1}{d-i+1}}}\Big),\quad\forall i\in[d-1].
\end{equation*}
\end{itemize}
These cases have depth lower bounds of
\begin{align*}
\Omega(4^{\frac{n}{d+1}})&=\Omega\Big(n+4^{\frac{n}{d+1}}+\max\limits_{j\in [d]}\Big\{\frac{4^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}\Big), \qquad \text{Case }1\\
\Omega\left(\frac{4^{\frac{n}{d-j+2}}}{(n_{d-j+2}\cdots n_d)^{\frac{1}{d-j+2}}}\right)&=\Omega\Big(n+4^{\frac{n}{d+1}}+\max\limits_{j\in [d]}\Big\{\frac{4^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}\Big),\qquad\text{Cases }2\le j \le d\\
\Omega\left(\max\left\{n,\frac{4^n}{\prod_{i=1}^d n_i}\right\}\right)&=\Omega\Big(n+4^{\frac{n}{d+1}}+\max\limits_{j\in [d]}\Big\{\frac{4^{n/j}}{(\Pi_{i=j}^d n_i)^{1/j}}\Big\}\Big),\qquad\text{Case }d+1
\end{align*}
\end{proof}
\begin{corollary}\label{coro:lower_bound_path}
Under $\Path_{n+m}$ constraint, using $m\ge 0$ ancillary qubits,
\begin{enumerate}
\item the depth and size lower bounds for $n$-qubit QSP are $\Omega\left(\max\left\{2^{n/2},\frac{2^n}{n+m}\right\}\right)$ and $\Omega(2^n)$.
\item the depth and size lower bounds for $n$-qubit diagonal unitary matrices are $\Omega\left(\max\left\{2^{n/2},\frac{2^n}{n+m}\right\}\right)$ and $\Omega(2^n)$.
\item the depth and size lower bounds for $n$-qubit GUS are $\Omega\left(\max\left\{4^{n/2},\frac{4^n}{n+m}\right\}\right)$ and $\Omega(4^n)$.
\end{enumerate}
\end{corollary}
\begin{proof}
Let $d=1$; then $\Grid_{n+m}^{n_1}$ with $n_1=n+m$ is $\Path_{n+m}$. The results follow from Proposition \ref{prop:size_lowerbound_Lambda}, Lemmas \ref{lem:lower_bound_grid_k}, \ref{lem:lower_bound_grid_k_Lambda}, \ref{lem:lower_bound_grid_k_US} and Proposition \ref{prop:lowerbound_size_graph}.
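To illustrate the specialization: for $d=1$ and $n_1=n+m$, the depth lower bound of Lemma \ref{lem:lower_bound_grid_k} reads
\[\Omega\Big(n+2^{\frac{n}{2}}+\frac{2^n}{n+m}\Big)=\Omega\left(\max\left\{2^{n/2},\frac{2^n}{n+m}\right\}\right),\]
since $n\le O(2^{n/2})$; the bounds for diagonal unitary matrices and for GUS follow in the same way from Lemmas \ref{lem:lower_bound_grid_k_Lambda} and \ref{lem:lower_bound_grid_k_US}.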
\end{proof}
\begin{corollary}\label{coro:lower_bound_binary}
Under $\Tree_{n+m}(2)$ constraint, using $m$ ancillary qubits,
\begin{enumerate}
\item $n$-qubit QSP needs quantum circuits of depth $\Omega\left(\max\left\{n,\frac{2^n}{n+m}\right\}\right)$ and size $\Omega(2^n)$,
\item $n$-qubit diagonal unitary matrix needs quantum circuits of depth $\Omega\left(\max\left\{n,\frac{2^n}{n+m}\right\}\right)$ and size $\Omega(2^n)$,
\item $n$-qubit GUS needs quantum circuits of depth $\Omega\left(\max\left\{n,\frac{4^n}{n+m}\right\}\right)$ and size $\Omega(4^n)$.
\end{enumerate}
\end{corollary}
\begin{proof}
Follows from Propositions \ref{prop:lowerbound_size_graph}, \ref{prop:size_lowerbound_Lambda} and \ref{prop:depth_lowerbound_graph}.
\end{proof}
\begin{corollary}\label{coro:lower_bound_expander}
Under $\Expander_{n+m}$ constraint, using $m\ge 0$ ancillary qubits,
\begin{enumerate}
\item $n$-qubit QSP needs quantum circuits of depth $\Omega\left(\max\left\{n,\frac{2^n}{n+m}\right\}\right)$ and size $\Omega(2^n)$.
\item $n$-qubit diagonal unitary matrix needs quantum circuits of depth $\Omega\left(\max\left\{n,\frac{2^n}{n+m}\right\}\right)$ and size $\Omega(2^n)$.
\item $n$-qubit GUS needs quantum circuits of depth $\Omega\left(\max\left\{n,\frac{4^n}{n+m}\right\}\right)$ and size $\Omega(4^n)$.
\end{enumerate}
\end{corollary}
\begin{proof}
Follows from Propositions \ref{prop:lowerbound_size_graph}, \ref{prop:size_lowerbound_Lambda} and \ref{prop:depth_lowerbound_graph}.
\end{proof}
\begin{lemma}\label{lem:depth_lower_dary}
There exist $n$-qubit quantum states, $n$-qubit diagonal unitary matrices and $n$-qubit unitary matrices which require quantum circuits of depth at least $\Omega\left(\max\left\{n,\frac{d2^n}{n+m}\right\}\right)$, $\Omega\left(\max\left\{n,\frac{d2^n}{n+m}\right\}\right)$ and $\Omega\left(\max\left\{n,\frac{d4^n}{n+m}\right\}\right)$, respectively, to be implemented under $\Tree_{n+m}(d)$ constraint, using $m\ge 0$ ancillary qubits.
\end{lemma}
\begin{proof}
See Lemma~\ref{lem:QSP_US_lowerbound_darytree} in Appendix \ref{append:d-arytree}.
\end{proof}
\begin{corollary}\label{coro:depth_lower_star}
There exist $n$-qubit quantum states and $n$-qubit unitary matrices which require quantum circuits of depth at least $\Omega (2^n)$ and $\Omega(4^n)$, respectively, to be implemented under $\Star_{n+m}$ constraint, using $m\ge 0$ ancillary qubits.
\end{corollary}
\begin{proof}
A $\Star_{n+m}$ is a $\Tree_{n+m}(n+m-1)$. The result follows from Lemma \ref{lem:depth_lower_dary}.
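Concretely, setting $d=n+m-1$ in Lemma \ref{lem:depth_lower_dary} gives, e.g., for quantum states
\[\Omega\left(\max\left\{n,\frac{(n+m-1)2^n}{n+m}\right\}\right)=\Omega(2^n),\]
since $\frac{n+m-1}{n+m}=\Theta(1)$; the bound $\Omega(4^n)$ for unitary matrices follows in the same way.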
\end{proof}
\begin{lemma}\label{lem:lower_bound_brickwall_QSP}
There exists an $n$-qubit quantum state/diagonal unitary matrix which requires a circuit of depth at least
\[\Omega\left( \max\Big\{ 2^{n/3},\frac{2^{n/2}}{\sqrt{\min\{n_1,n_2\}}},\frac{2^n}{n+m}\Big\}\right)\]
under $\brickwall_{n+m}^{n_1,n_2,b_1,b_2}$ constraint with $b_1,~b_2=O(1)$ to implement, using $m\ge 0$ ancillary qubits.
\end{lemma}
\begin{lemma}\label{lem:lower_bound_brickwall_US}
There exists an $n$-qubit unitary matrix which requires a circuit of depth at least
\[\Omega\left( \max\Big\{ 4^{n/3},\frac{4^{n/2}}{\sqrt{\min\{n_1,n_2\}}},\frac{4^n}{n+m}\Big\}\right)\]
under $\brickwall_{n+m}^{n_1,n_2,b_1,b_2}$ constraint with $b_1,~b_2=O(1)$ to implement, using $m\ge 0 $ ancillary qubits.
\end{lemma}
The proofs of Lemmas \ref{lem:lower_bound_brickwall_QSP} and \ref{lem:lower_bound_brickwall_US} are essentially the same as those of Lemmas \ref{lem:lower_bound_grid_k} and \ref{lem:lower_bound_grid_k_US}, so we omit them here.
\section{Conclusions}
\label{sec:conlusion}
In this paper, we have investigated the effects of qubit connectivity on quantum circuit size and depth complexity. We have shown that, somewhat surprisingly, connectivity constraints do not increase the order of circuit size required for implementing almost all unitaries, as well as for quantum state preparation.
The circuit depth complexity is more subtle. We have shown that connectivity constraints do not increase the order of the circuit depth required for implementing almost all unitary operations, even for the very restricted case of 1D chains with nearest neighbour connectivity, and this remains true when $m$ ancillary qubits are available, unless $m$ is exponentially large. However, compared with the unrestricted case, qubit connectivity does hinder space-depth trade-offs: it makes it harder to use a large number of ancillary qubits to achieve smaller depth.
We have investigated various constraint graphs, including $d$-dimensional grids, complete $d$-ary trees, expander graphs and general graphs. We have found that common measures for graph connectivity such as graph diameter, vertex degree, graph expansion, as well as less prominent measures such as the size of a maximum matching, all seem to have some impact on the required circuit depth.
These results combine analytic bounds with explicit circuit constructions, which hopefully have practical applications for circuit design as well. A number of interesting related research directions warrant further study:
\begin{enumerate}
\item Better bounds. Gaps remain between upper and lower bounds for GUS in the $d$-dimensional grid and $d$-ary tree cases when the number of ancillary qubits is large. It would be technically interesting to close them in these settings.
\item More graph properties. What other graph properties have an important impact on quantum circuit depth complexity for certain natural families of unitaries?
\item More unitary families. We cannot hope to have an efficient algorithm to optimize the circuit complexity for any given unitary as it is QMA-hard \cite{Janzing2005identity}, but it would be interesting to have more circuit constructions for specific unitaries. Can we study some other families of unitaries which have structures that can be exploited to give efficient circuit constructions?
\item Small scale quantum circuits. Though our designs aim at achieving optimal asymptotic bounds, the constant factor hidden in the big-O notation is not large, and we hope our constructions may inspire efficient constructions for small scale quantum circuits, such as those on $10^2 \sim 10^5$ qubits. Our constructed circuits are all parameterized ones, which may have applications in designing ansatzes for variational quantum circuits for quantum machine learning or quantum chemistry.
\end{enumerate}
\bibliographystyle{alpha}
\section{Introduction}
\begin{figure}[!th]
\includegraphics[width=0.44\linewidth]{Billiard.pdf}
\includegraphics[width=0.54\linewidth]{BottomPlate.pdf}
\caption{Left panel: Schematic view of the Dirac billiard, which comprises 1033 metallic cylinders (gray disks) arranged on a triangular grid. In the upmost inset, red and turquoise dots indicate the positions of the voids. They are located on the interpenetrating triangular sublattices of the honeycomb lattice which is terminated by zigzag (ZZ) and armchair edges (AC), as indicated in the lower insets. The centers between two neighboring cylinders, marked by black dots, form a kagome structure. Right panel: Photograph of the basin of the resonator. The metallic cylinders are milled out of a circular brass plate with radius $R=$570~mm and height 19.5~mm. The red numbers denote the nine groups of, respectively, three antennas. To achieve superconductivity, the basin and the lid, which is a circular brass plate of radius $R$ and height 6~mm with screw holes at the positions of the cylinders and along the boundary, are covered with a lead coating, whose critical temperature is $T_c$=7.2~K, and then tightly screwed together through all holes. The resonator was cooled down to 4-6~K in a cryogenic chamber constructed by ULVAC Cryogenics in Kyoto, Japan. The inset to the right shows a zoom into one of the cylinders of diameter 4~mm and height 3~mm. The upper part is designed with a cut edge shape, as indicated by the yellow dashed lines, to achieve good electrical contact with the lid~\cite{Dietz2015}.}
\label{fig:Sketch_Diracbilliard}
\end{figure}
Superconducting microwave Dirac billiards (DBs) have been used for more than a decade to investigate properties of the energy levels and wave functions of artificial graphene and fullerene structures~\cite{Bittner2010,Bittner2012,Dietz2013,Dietz2015,Iachello2015,Dietz2015a,Dietz2015b,Dietz2016,Dietz2019a}. The experiments were performed with a DB, shown schematically in~\reffig{fig:Sketch_Diracbilliard}, whose shape has a $C_3$ symmetry, in the frequency range of the lowest transverse-magnetic (TM) mode, where the electric-field strength is perpendicular to the resonator plane and thus is governed by the scalar Helmholtz equation with Dirichlet boundary conditions (BCs) at the sidewalls of the cavity and cylinders. The Helmholtz equation is mathematically identical to the Schr\"odinger equation of a quantum billiard (QB) of corresponding shape, into which scatterers are inserted at the positions of the cylinders. The crucial advantage of such resonators as compared to honeycomb structures constructed from dielectric disks~\cite{Kuhl2010} is that superconducting high-precision measurements can be performed, which is indispensable for the determination of complete sequences of resonance frequencies.
The band structure of propagating modes of the DB exhibits two Dirac points (DPs), at which the first and second, respectively, the fourth and fifth band touch each other conically; they are separated by a nearly flat band (FB). It is reminiscent of that of a honeycomb-kagome billiard (HKB) whose sites form a combination of a honeycomb and a kagome sublattice~\cite{Jacqmin2014,Lan2012,Lu2017a,Zhong2017,Maimaiti2020,Zhang2021}; see the upmost inset of~\reffig{fig:Sketch_Diracbilliard}. Indeed, below the FB the electric-field intensities are maximal at the voids, which are located at the centers of three neighboring metallic cylinders (grey disks), marked with red and turquoise dots in~\reffig{fig:Sketch_Diracbilliard}, and form a honeycomb structure. In the FB they are maximal at the centers between adjacent cylinders, marked by black dots, which are at the sites of a kagome lattice, and above the FB on both of them~\cite{Maimaiti2020,Zhang2021}. Furthermore, below the FB the properties of DBs are well captured by a tight-binding model (TBM) for a graphene billiard (GB), and generally by one for a HKB~\cite{Zhang2021}. Dirac points are a characteristic of graphene, which attracted a lot of attention~\cite{Novoselov2004,Beenakker2008,Neto2009} because in the region of the conical valleys graphene features relativistic phenomena~\cite{DiVincenzo1984,Novoselov2004,Geim2007,Avouris2007,Miao2007,Ponomarenko2008,Beenakker2008,Zhang2008,Neto2009,Abergel2010,Zandbergen2010}, which triggered numerous realizations~\cite{Polini2013} of artificial graphene~\cite{Parimi2004,Joannopoulos2008,Bittner2010,Kuhl2010,Singha2011,Nadvornik2012,Gomes2012,Tarruell2012,Uehlinger2013,Rechtsmann2013,Rechtsmann2013a,Khanikaev2013,Wang2014,Shi2015,Bellec2013,Bellec2013a,Bellec2014}. In the vicinity of the band edges (BEs) the spectral properties coincide with those of a nonrelativistic QB of corresponding shape~\cite{Dietz2015,Dietz2013,Dietz2016}.
Classical billiards (CBs) with the shape of the DB have a chaotic dynamics. According to the Bohigas-Giannoni-Schmit conjecture the fluctuation properties in the energy spectra of nonrelativistic quantum systems with a chaotic classical counterpart are universal~\cite{Berry1977a,Berry1979,Casati1980,Bohigas1984} and coincide with those of random matrices from the Gaussian orthogonal ensemble (GOE) for time-reversal (${\mathcal T}$) invariant systems and the Gaussian unitary ensemble (GUE) if ${\mathcal T}\,$ invariance is violated. In the region of the conical valleys, which are located at, respectively, three of the corners of the first Brillouin zone~\cite{Wallace1947}, the two sets of valley eigenstates are well described by Dirac-Hamiltonians for massless spin-1/2 quasiparticles~\cite{Beenakker2008,Neto2009}. Therefore, we also investigated in Ref.~\cite{Zhang2021} properties of relativistic neutrino billiards (NBs) of the same shape introduced in Ref.~\cite{Berry1987}, which are governed by the Weyl equation~\cite{Weyl1929} for a spin-1/2 particle. The associated Dirac Hamiltonian is not invariant under time reversal, so the spectral properties of NBs with the shape of a chaotic CB typically coincide with those of random matrices from the GUE, if the shape has no additional geometric symmetry. It has been demonstrated in Refs.~\cite{Yupei2016,Yupei2022} that the spectral properties of GBs and NBs of corresponding shape do not coincide~\cite{Silvestrov2007,Ponomarenko2008,Libisch2009,Wurm2009,Huang2010,Wurm2011,Rycerz2012,Rycerz2013,Polini2013,Dietz2015,Dietz2016}. These discrepancies were attributed to intervalley scattering at the boundary of GBs~\cite{Wurm2009,Rycerz2012,Rycerz2013}. Similar observations were made for HKBs~\cite{Maimaiti2020,Zhang2021}.
In this Letter we present experimental results for the DB shown in~\reffig{fig:Sketch_Diracbilliard}. Besides properties of the eigenmodes~\cite{Zhang2021} we, for the first time, analyze fluctuation properties of the scattering ($S$) matrix describing the measurement process~\cite{Albeverio1996} and demonstrate that they provide a tool to detect localization, i.e., scarred wave functions.
\section{Review of the theoretical and numerical results presented in~\cite{Zhang2021}} The domain $\Omega$ of the DB shown in~\reffig{fig:Sketch_Diracbilliard} is defined in the complex plane $w(r,\phi)=x(r,\phi)+iy(r,\phi)$ with $\phi\in [0,2\pi),\, r\in[0,r_0]$ by the parametrization
\begin{equation}
w(r,\phi)=r\left[1+0.2\cos(3\phi)-0.2\sin(6\phi)\right]e^{i\phi}.
\label{coordinates}
\end{equation}
The boundary $\partial\Omega$ is given by $w(r=r_0,\phi)$. The eigenfunctions $\psi(r,\phi)$ and the electric-field strength of the corresponding QB and microwave billiard~\cite{Stoeckmann1990,Sridhar1991,Graef1992}, respectively, are governed by the Schr\"odinger equation with Dirichlet BCs along $\partial\Omega$. The symmetry group $C_3$ possesses three one-dimensional irreducible representations. Accordingly, the eigenstates of quantum systems with a $C_3$ symmetry can be assigned to three subspaces~\cite{Joyner2012,Leyvraz1996,Keating1997,Dembowski2000,Zhang2021}. For one symmetry class the eigenfunctions are invariant under rotation by $\frac{2\pi}{3}$ and real; for the other two they are complex and the eigenvalues are equal. Accordingly, the eigenvalue spectrum of QBs, and also GBs and HKBs with $C_3$ symmetry~\cite{Zhang2021}, can be separated into singlets and pairwise degenerate doublets. In contrast, the spinor eigenfunctions of the corresponding NB cannot be classified according to their transformation properties under rotation by $\frac{2\pi}{3}$~\cite{Zhang2021}. This is only possible for each component separately. Thus, their eigenvalues can be assigned to symmetry-projected subspectra.
In~\cite{Zhang2021} we computed with COMSOL Multiphysics the symmetry-projected resonance frequencies and electric-field distributions of the DB, as well as those of the corresponding QB, GB and HKB. For all of them the spectral properties of the singlets exhibit GOE statistics, those of the doublets GUE statistics~\cite{Leyvraz1996,Keating1996,Braun2011,Dembowski2000,Dembowski2003,Schaefer2003,Robbins1988,Seligman1994,Joyner2012}. For the whole spectrum they coincide with those of a GOE+2GUE, whose matrices are block diagonal with one GOE block and two GUE blocks of the same dimension, and for the NB with those of a 3GUE. In Ref.~\cite{Zhang2021}, we computed the symmetry-projected eigenstates of massive NBs using the method developed in Refs.~\cite{Dietz2020,Dietz2022a}. For masses that are too small, the eigenvalues corresponding to doublet partners are not degenerate, implying that we only find agreement of the spectral properties of the NB with those of the DB, GB and HKB around the DPs for sufficiently large mass~\cite{Dietz2020}, even though these exhibit a selective excitation of the two sets of valley states~\cite{Lu2014,Lu2016,Lu2017,Ye2017,Xia2017}.
\section{The Dirac billiard}
We performed experiments under superconducting conditions. The construction of the DB is explained in the caption of~\reffig{fig:Sketch_Diracbilliard}. The basic ideas are the same as in~\cite{Dietz2015,Dietz2016}. The cavity consists of a top plate and a basin of 3~mm depth, which contains 1033 metallic cylinders and corresponds to a frequency range $f\lesssim 50$~GHz. We chose $r_0=30a_L/\sqrt{3}\simeq 208$~mm in~\refeq{coordinates} with $a_L=12$~mm denoting the lattice constant. The cylinder radius equals $a_L/6$. The sidewall passes through voids, implying Dirichlet BCs at these sites for the corresponding GB. The resonance frequencies were obtained from reflection and transmission spectra. For their measurement we used a Keysight N5227A Vector Network Analyzer (VNA), which sends an rf signal into the resonator at antenna $a$ and couples it out at the same or another antenna $b$ and records the relative phases $\phi_{ba}$ and the ratios of the microwave power, $\frac{P_{out,b}}{P_{in,a}}=|S_{ba}(f)|^2$, yielding the complex scattering matrix element $S_{ba}=|S_{ba}|e^{i\phi_{ba}}$~\cite{Dietz2008,Dietz2009,Dietz2010}. Nine groups of antenna ports consisting of three antennas each, which were positioned such that the $C_3$ symmetry is preserved, were distributed over the whole billiard area, to minimize the possibility that a resonance is missed because the electric-field strength vanishes at the position of an antenna. The antennas penetrated through holes in the lid into the cavity by about 0.2~mm. The upper part of~\reffig{fig:spara} shows a measured transmission spectrum. Propagating modes are observed above the BE at $f\simeq 13.89$~GHz.
\begin{figure}[!th]
\includegraphics[width=0.8\linewidth]{Spectrum_DOS.pdf}
\caption{Upper part: A measured transmission spectrum. The lowest band of propagating modes starts at 13.89 GHz. Lower part: DOS (red) and smoothed DOS (black). The positions of the lower and upper Dirac point (DP1 and DP2) and the FB are indicated.}
\label{fig:spara}
\end{figure}
The positions of the resonances yield the resonance frequencies. Due to the degeneracies of doublet partners, finding them can be cumbersome or even impossible, because corresponding resonances overlap. To identify them and to classify them into singlets and doublets we employed a measurement method introduced in~\cite{Dembowski2003}, which is explained in the appendix. It relies on the feature that, when changing the relative phase between two ingoing signals of the same amplitude, the position and shape of the singlets remain essentially unchanged, whereas those of the doublets change considerably. Thereby, we were able to identify all resonance frequencies in the region of the lower BE and below the DP1. Even though the quality factor of the resonator was $Q>10^4$, we could not find all resonance frequencies in the other regions.
In the lower part of~\reffig{fig:spara} we show the density of states (DOS) $\rho(f)$ and the smoothed DOS (black curve). We observe two DPs, denoted by DP1 and DP2, van Hove singularities (VHSs) framing them and a FB. Their frequency values are listed in Tab. II of the appendix. Around the DP2 the DOS is distorted by an adjacent band~\cite{Zhang2021}. At the FB the resonance frequencies are macroscopically degenerate in a perfect honeycomb-kagome lattice, whereas in the DB degeneracies are slightly lifted due to experimental imperfections and the spreading of the wave-function components located on the sites of the lattice. Indeed, in the TBM for the HKB we had to include couplings and wave-function overlaps~\cite{Reich2002} for up to third-nearest neighbors in the GB sublattice to get agreement with the numerical and experimental DOS~\cite{Dietz2015,Maimaiti2020,Zhang2021}. In Fig.~S3 of the appendix, we compare the DOS and the integrated DOS $N(f)$, obtained from the experimental and computed resonance frequencies, and find good agreement except at the VHSs, where they are nearly degenerate~\cite{Dietz2013}. In total 1912 resonance frequencies were identified in that frequency range.
\section{Spectral Fluctuations}
These were analyzed below the FB in three frequency ranges, namely around the BEs, the VHSs, and in the Dirac region~\cite{Dietz2015,Dietz2016}, that are clearly distinguishable in the DOS shown in Fig.~S3 of the appendix.
\begin{figure}[htbp]
\includegraphics[width=0.49\linewidth]{Spectral_Statistics_Exp_Singl_Doubl_Sch.pdf}
\includegraphics[width=0.49\linewidth]{Spectral_Statistics_Exp_Singl_Doubl_Dir.pdf}
\caption{Left: Nearest neighbor spacing distribution $P(s)$, cumulative nearest-neighbor spacing distribution $I(s)$, number variance $\Sigma^2(L)$ and Dyson-Mehta statistics $\Delta_3(L)$ at the lower BE for the singlets (red histograms and dots) and doublets (green histograms and squares). The solid and dashed-dot black lines show the curves for GOE and GUE statistics, respectively. Right: Same as left part in the Dirac region.}
\label{fig:spssch}
\end{figure}
We considered 189 levels for each symmetry class starting from the lower BE. Due to the presence of edge states, which lead to the peak observed in the DOS above the DP in Fig.~S3 of the appendix and yield nonuniversal contributions to the spectral properties~\cite{Dietz2015}, we only considered levels below the DP1, where each subspectrum comprises 26 levels. To unfold the resonance frequencies $f_i$ to average spacing unity, we ordered them by size and then replaced $f_i$ by the smooth part of $N(f)$, $\epsilon_i=N^{smooth}(f_i)$, which we determined by fitting a second-order polynomial to $N(f_i)$~\cite{Dietz2015,Zhang2021}. In~\reffig{fig:spssch} we show spectral properties for the singlets (red) and doublets (green) at the lower BE (left) and in the Dirac region (right). The former follow the GOE curves (black solid lines), the latter those of the GUE (black dashed-dotted lines). Deviations may be attributed to the small number of levels and to the presence of short periodic orbits~\cite{Zhang2021}. We, in addition, considered the distribution $P(r)$ and the cumulative distribution $I(r)$ of the ratios~\cite{Oganesyan2007,Atas2013} $r_i=\frac{\epsilon_{i+1}-\epsilon_i}{\epsilon_{i}-\epsilon_{i-1}}$, which are dimensionless so that unfolding is not needed~\cite{Dietz2016,Maimaiti2020}. The results for all resonance frequencies below the FB are shown in the left part of~\reffig{fig:Ratio}, those of the singlets (red) and doublets (green) at the lower BE in the right part. The former are compared to those of random matrices from the GOE+2GUE. In all, the spectral properties agree well with those obtained from the COMSOL Multiphysics computations in~\cite{Zhang2021}.
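The unfolding and ratio statistics described above are simple to reproduce numerically. The following sketch (in Python with NumPy) is a minimal illustration of the procedure under the stated choices (sorting, a second-order polynomial fit to the staircase function, and ratios of consecutive spacings); it is not the code used for our analysis, and the input array \texttt{freqs} of resonance frequencies is assumed to be given.
\begin{verbatim}
import numpy as np

def unfold(freqs, deg=2):
    # Fit a low-order polynomial to the staircase function
    # N(f) = #{i : f_i <= f} and evaluate it at each frequency,
    # so that the unfolded levels have mean spacing unity.
    f = np.sort(freqs)
    N = np.arange(1, len(f) + 1)
    coeffs = np.polyfit(f, N, deg)
    return np.polyval(coeffs, f)

def ratios(levels):
    # r_i = (e_{i+1} - e_i) / (e_i - e_{i-1}); being dimensionless,
    # the ratios may also be computed from the raw frequencies.
    s = np.diff(np.sort(levels))
    return s[1:] / s[:-1]
\end{verbatim}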
\begin{figure}[htbp]
\includegraphics[width=0.6\linewidth]{PR_IPR_Exp_Singl_Doubl_All_Sch.pdf}
\caption{Ratio distributions (upper panel) and cumulative ratio distributions (lower panels). (a), (c): All eigenfrequencies (red histogram and dots) below the FB. (b), (d): Singlets (green histogram and squares) and doublets (red histogram and dots) around the lower BE. The results are compared to those for GOE (solid black lines), GUE (dashed dotted black lines) and GOE+2GUE (turquoise).}
\label{fig:Ratio}
\end{figure}
\section{$S$-matrix Fluctuations}
We also investigated fluctuation properties of the $S$ matrix associated with the measurement process and compared them to random-matrix theory (RMT) predictions for quantum-chaotic scattering systems derived from the scattering matrix approach~\cite{Mahaux1969}, which was developed in the context of compound nuclear reactions and extended to microwave resonators in~\cite{Albeverio1996},
\begin{equation}
S_{ba}(f) = \delta_{ba} - 2\pi i\left[\hat W^\dagger\left(f\hbox{$1\hskip -1.2pt\vrule depth 0pt height 1.6ex width 0.7pt\vrule depth 0pt height 0.3pt width 0.12em$}-\hat H^{eff}\right)^{-1}\hat W\right]_{ba}.
\label{eqn:Sab}
\end{equation}
Here, $\hat H^{eff}=\hat H-i\pi\hat W\hat W^\dagger$ with $\hat H$ modeling the universal spectral properties of the DB. Since we did not separate the resonance spectra by symmetry, we chose for $\hat H$ random matrices from the composite ensemble GOE+2GUE and from the 3GUE for comparison. The matrix elements of $\hat W$ are real, Gaussian distributed with $W_{a \mu}$ and $W_{b \mu}$ describing the coupling of the antenna channels to the resonator modes. Furthermore, we chose $\Lambda$ equal fictitious channels to account for the Ohmic losses in the walls of the resonator~\cite{Dietz2009a,Dietz2010}. Direct transmission between the antennas was negligible, so that the frequency-averaged $S$-matrix was diagonal, implying that $\sum_{\mu = 1}^N W_{e \mu} W_{e^\prime \mu}=N v_{e}^2 \delta_{ee^\prime}$~\cite{Verbaarschot1985}. The parameters $v^2_{e}$ denote the average strength of the coupling of the resonances to channel $e$. For $e=a,\ b$ they correspond to the average size of the electric field at the position of the antennas $a$ and $b$ and they yield the transmission coefficients $T_{e} = 1 - \vert\left\langle{S_{ee}}\right\rangle\vert^2$, which are experimentally accessible~\cite{Dietz2010}. Actually, $v_e$ and $\tau_{abs}=\Lambda T_c$ are the input parameters of the RMT model~\refeq{eqn:Sab}, where they are assumed to be frequency independent. This is fulfilled because we analyzed data in windows of size $\leq 1$~GHz~\cite{Dietz2010}. We considered three parts of the DB, defined by the location of the antennas $a$ and $b$, namely an inner region (groups 1, 2) around the center of the billiard domain, a middle region (groups 3, 4, 5, 6) and an outer region (groups 7, 8, 9); see~\reffig{fig:Sketch_Diracbilliard}. In~\reffig{VertBEVH} distributions of the rescaled transmission amplitudes are shown around the lower (a) and upper (b) BE, and around the lower (c) and upper (d) VHS. At the BEs the distributions do not depend on the positions of the antennas and are well described by the RMT model~\refeq{eqn:Sab} both for the GOE+2GUE (green) and the 3GUE (turquoise) cases, which are actually barely distinguishable. There the wave functions are similar to those of the corresponding QB~\cite{Zhang2021}. For the lower VHS we find good agreement with the RMT results only for the outer group, and for the FB, shown in~\reffig{VertVHFB} (b), only for the inner group. Otherwise we do not find any agreement around the VHSs and FB. Instead, these distributions are well described by the $S$-matrix model~\refeq{eqn:Sab} when using power-law banded random matrices (PLBM)~\cite{Mirlin1996}, obtained by multiplying the off-diagonal elements $H_{ij}$ of $\hat H$ by a factor $\vert i-j\vert^{-\alpha}$. This ensemble interpolates between localized ($\alpha \gtrsim 1$) and extended ($\alpha =0$) states. This is demonstrated in~\reffig{VertBEVH} (d) and in~\reffig{VertVHFB} (a)-(d). Thus these deviations may be attributed to localization of the electric-field intensity in parts of the DB, as confirmed by the examples shown in Figs.~S4 and~S5 in the appendix.
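For orientation, the RMT model~\refeq{eqn:Sab} is straightforward to simulate. The following Python sketch generates one realization of $S(f)$ for the GOE+2GUE case; the matrix dimensions, spectral normalization, coupling strength and number of fictitious channels are illustrative assumptions, not the parameter values used for the fits shown in the figures.
\begin{verbatim}
import numpy as np
from scipy.linalg import block_diag

def goe(n):
    a = np.random.randn(n, n)
    return (a + a.T) / np.sqrt(8 * n)   # spectrum roughly in (-1, 1)

def gue(n):
    a = np.random.randn(n, n) + 1j * np.random.randn(n, n)
    return (a + a.conj().T) / np.sqrt(8 * n)

def smatrix(f, H, W):
    # S(f) = 1 - 2*pi*i W^dag (f - H_eff)^(-1) W,
    # with H_eff = H - i*pi W W^dag, cf. Eq. (2).
    Heff = H - 1j * np.pi * (W @ W.conj().T)
    G = np.linalg.inv(f * np.eye(H.shape[0]) - Heff)
    return np.eye(W.shape[1]) - 2j * np.pi * (W.conj().T @ G @ W)

nb = 100                              # dimension of each block
H = block_diag(goe(nb), gue(nb), gue(nb))
Lam, v = 20, 0.05                     # fictitious channels, coupling
W = v * np.random.randn(3 * nb, 2 + Lam)  # 2 antennas + Lam channels
S = smatrix(0.0, H, W)
print(abs(S[0, 1]))                   # transmission amplitude |S_12|
\end{verbatim}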
\begin{figure}[htbp]
\includegraphics[width=0.6\linewidth]{Vert_S12_BE_VH.pdf}
\caption{Distributions of the transmission amplitudes $r=\vert S_{12}\vert/\langle\vert S_{12}\vert\rangle$ (red histogram) in (a) the region around the lower band edge $f\in[15,16]$~GHz for antennas 4, 5 and 6, (b) same as (a) for the upper band edge $f\in[23,24]$~GHz, (c) around the lower VHS $f\in[17.3,17.6]$~GHz for antennas 1 and 2 and (d) the same as (c), but for the upper VHS $f\in[21,21.3]$~GHz. They are compared to distributions obtained from the RMT model~\refeq{eqn:Sab} with $\hat H$ from the GOE+2GUE (green histograms) and 3GUE (turquoise histograms). Best fit is found for $T_1=0.57, T_2=0.55$ and (a) $\tau_{abs}=1.0$, (b) $\tau_{abs}=0.8$, and for $T_1=0.67, T_2=0.69$ and $\tau_{abs}=1.0$ (c). In (d) we use corresponding PLBMs with $\alpha =0.3$ and otherwise the same values as in (c).
}\label{VertBEVH}
\end{figure}
\begin{figure}[htbp]
\includegraphics[width=0.505\linewidth]{Vert_S12_Flat_Band.pdf}
\includegraphics[width=0.475\linewidth]{Amplitude_Distribution_GOE_GUE_DP_all_DP_out_GB_out.pdf}
\caption{Left: Distributions of the transmission amplitudes $r=\vert S_{12}\vert/\langle\vert S_{12}\vert\rangle$ (red histograms) measured in the FB $f\in[28,29]$~GHz with all antennas (a), with antennas 1 and 2 (b), with antennas 3, 4, 5 and 6 (c) and with antennas 7, 8 and 9 (d). They are compared to the RMT model~\refeq{eqn:Sab} with the PLBMs (blue histograms) generated from random matrices from the GOE+2GUE for $T_1=0.67, T_2=0.69, \tau_{abs}=1.0$ and $\alpha =1.0$ (a), $\alpha =0.1$ (b), and $\alpha =0.7$ in (c) and (d). (e) Strength distribution in the Dirac region $f\in [18.4,19.1]$~GHz (red triangles) obtained from antenna groups 7, 8 and 9, and from the computed wave functions of the GB in the same outer region (cyan line). They are compared to the analytical results for GOE (green dashed line), 3GUE (black solid line), GOE+2GUE (black dashed line) and to RMT simulations for PLBMs with $\alpha\simeq 0.7-0.8$ for 3GUE (orange dashed-dotted line) and GOE+2GUE (violet diamonds).}
\label{VertVHFB}
\end{figure}
At the DP the resonances are well isolated. Therefore, in that region we can obtain information on the properties of the wave-function components in terms of the strength distribution~\cite{Dembowski2005}. Namely, for sufficiently isolated resonances the $S$-matrix has the form
\begin{equation}
S_{a b} = \delta_{a b}
- i\frac{\sqrt{\Gamma_{\mu a}\Gamma_{\mu b}}}
{f-f_{\mu} + \frac{i}{2}\Gamma_{\mu}
},
\label{SMatrix}
\end{equation}
close to the $\mu$th resonance frequency $f_{\mu}$ with $\Gamma_{\mu}$ denoting the total width of the corresponding resonance~\cite{Alt1995}. The partial widths $\Gamma_{\mu a}$ and $\Gamma_{\mu b}$ are proportional to the electric-field intensities at antennas $a$ and $b$. They cannot be determined individually; however, the strengths $z=\Gamma_{\mu a}\Gamma_{\mu b}$ may be obtained with high precision by fitting this expression to the resonances~\cite{Dembowski2005}. The strength distribution corresponds to the distribution of the products of the squared moduli of two wave-function components in the DB, or of two eigenvector components for the associated RMT model~\cite{Dietz2006c,Guhr1998}. For 3GUE it coincides with that of the GUE, $P^{GUE}(z)=2K_0(2\sqrt{z})$, whereas that of GOE+2GUE is given by $P^{GOE+2GUE}(z)=\frac{1}{3}\left[P^{GOE}(z)+2P^{GUE}(z)\right]$, where $P^{GOE}(z)=K_0(\sqrt{z})/(\pi\sqrt{z})$. Here, $K_0(z)$ denotes the modified Bessel function of order zero. In~\reffig{VertVHFB} (e) we compare these analytical expressions to the distributions obtained for the DB in the Dirac region (red triangles). However, as for the FB we only find agreement with the RMT distributions when using the corresponding PLBMs with $\alpha\simeq 0.7-0.8$, where that for GOE+2GUE (violet diamonds) is better than that for 3GUE (orange dashed-dotted line).
\section{Conclusions}
We performed experiments with a superconducting DB, whose shape has a $C_3$ symmetry. To identify the resonance frequencies and to separate them into the three symmetry classes we successfully employed a procedure, which was originally developed for hollow microwave billiards~\cite{Dembowski2003}, thereby demonstrating that it is applicable even to complex structures like the DB. We confirm results which were obtained in Ref.~\cite{Zhang2021} from numerical computations, namely, we find good agreement of the spectral properties with those of the QB, GB and HKB of corresponding shape, and with those of massive relativistic NBs only beyond a certain mass. We also investigated properties of the wave functions below the DP1, where the DOS is low, in terms of the strength distribution. We find good agreement with the corresponding distributions of random matrices from GOE+2GUE when using PLBMs, corroborating that the wave functions are localized~\cite{Bittner2012}. Yet, the spectral properties of the associated resonance frequencies agree well with those of typical quantum systems with a $C_3$ symmetry and a chaotic classical dynamics. Furthermore, we investigated the fluctuation properties of the measured $S$ matrix in the regions around the BEs, the VHSs and the FB, which are not accessible numerically. In the nonrelativistic regime we find good agreement with those of the RMT model~\refeq{eqn:Sab} for GOE+2GUE, whereas for the other regions we took into account the localization observed in parts of the DB by using PLBMs. Around the VHSs the ratio distributions agree well with those of random matrices from the GOE+2GUE. From these observations we may conclude that even in regions where the wave functions are localized in parts of the DB, the spectral properties comply with those of typical quantum systems whose corresponding classical dynamics is chaotic~\cite{Dietz2016}.
\section{Introduction}
One strategy when studying representations of a family of groups is to find a category where the automorphism groups are the groups of interest. A functor from this category into some category of modules amounts to a family of representations of the groups equipped with some extra data. The seminal instance is that of the symmetric groups and the category of finite sets and injective functions, denoted $\FI$. Church--Ellenberg--Farb--Nagpal showed that over a noetherian ring $k$, any submodule of a finitely generated $\FI$-module is finitely generated \cite{FImodNoetherian}. The implication of this finite generation property for the family of $S_n$ representations underlying an $\FI$-module is that beyond a certain point, all the data of higher-indexed representations are completely determined by the lower ones. \emph{Quod est superius est sicut quod inferius}: what is above is like what is below.
For each finite group $G$, there is a category $\FI_G$ which enjoys many of the same properties as $\FI$. Just as in $\FI$, the objects of $\FI_G$ are finite sets, but the morphisms are injections where each element of the source is decorated with an element of $G$. Composition is given by pulling back the group elements along the injection, and multiplying where necessary, as demonstrated below with $g_i, h_j \in G$.
\[
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (1) at (0, 0) {$\bullet$};
\node [style=none] (2) at (0, -0.5) {$\bullet$};
\node [style=none] (3) at (0, -1) {$\bullet$};
\node [style=none] (4) at (2, 0) {$\bullet$};
\node [style=none] (5) at (2, -0.5) {$\bullet$};
\node [style=none] (6) at (2, -1) {$\bullet$};
\node [style=none] (7) at (2, -1.5) {$\bullet$};
\node [style=none] () at (3, -1) {$\circ$};
\node [style=none] () at (4, -2) {};
\node [style=none] () at (-0.3, 0) {\color{blue} $g_1$};
\node [style=none] () at (-0.3, -0.5) {\color{blue} $g_2$};
\node [style=none] () at (-0.3, -1) {\color{blue} $g_3$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (5);
\draw [style=simple] (2) to (4);
\draw [style=simple] (3) to (7);
\end{pgfonlayer}
\end{tikzpicture}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (1) at (0, 0) {$\bullet$};
\node [style=none] (2) at (0, -0.5) {$\bullet$};
\node [style=none] (3) at (0, -1) {$\bullet$};
\node [style=none] (4) at (0, -1.5) {$\bullet$};
\node [style=none] (5) at (2, 0) {$\bullet$};
\node [style=none] (6) at (2, -0.5) {$\bullet$};
\node [style=none] (7) at (2, -1) {$\bullet$};
\node [style=none] (8) at (2, -1.5) {$\bullet$};
\node [style=none] (9) at (2, -2) {$\bullet$};
\node [style=none] () at (3, -1) {=};
\node [style=none] () at (4, -2) {};
\node [style=none] () at (-0.3, 0) {\color{blue} $h_1$};
\node [style=none] () at (-0.3, -0.5) {\color{blue} $h_2$};
\node [style=none] () at (-0.3, -1) {\color{blue} $h_3$};
\node [style=none] () at (-0.3, -1.5) {\color{blue} $h_4$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (9);
\draw [style=simple] (2) to (5);
\draw [style=simple] (3) to (6);
\draw [style=simple] (4) to (7);
\end{pgfonlayer}
\end{tikzpicture}
\begin{tikzpicture}
\begin{pgfonlayer}{nodelayer}
\node [style=none] (1) at (0, 0) {$\bullet$};
\node [style=none] (2) at (0, -0.5) {$\bullet$};
\node [style=none] (3) at (0, -1) {$\bullet$};
\node [style=none] (5) at (2, 0) {$\bullet$};
\node [style=none] (6) at (2, -0.5) {$\bullet$};
\node [style=none] (7) at (2, -1) {$\bullet$};
\node [style=none] (8) at (2, -1.5) {$\bullet$};
\node [style=none] (9) at (2, -2) {$\bullet$};
\node [style=none] () at (-0.5, 0) {\color{blue} $g_1h_2$};
\node [style=none] () at (-0.5, -0.5) {\color{blue} $g_2h_1$};
\node [style=none] () at (-0.5, -1) {\color{blue} $g_3h_4$};
\end{pgfonlayer}
\begin{pgfonlayer}{edgelayer}
\draw [style=simple] (1) to (5);
\draw [style=simple] (2) to (9);
\draw [style=simple] (3) to (7);
\end{pgfonlayer}
\end{tikzpicture}
\]
The automorphism groups here are precisely the wreath products $S_n \ltimes G^n$. Sam and Snowden defined this category and proved that its category of modules over a noetherian ring is noetherian \cite{RepGmaps}.
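Explicitly, an automorphism of an $n$-element set in $\FI_G$ is a pair $(\sigma; g_1, \ldots, g_n)$ with $\sigma \in S_n$ and $g_i \in G$, and the composition rule read off from the diagrams above (applying $(\sigma; g_1, \ldots, g_n)$ first) is
\[
(\tau; h_1, \ldots, h_n) \circ (\sigma; g_1, \ldots, g_n) = \big(\tau\sigma;\, g_1 h_{\sigma(1)}, \ldots, g_n h_{\sigma(n)}\big),
\]
which is one standard presentation of the wreath product $S_n \ltimes G^n$.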
It can be seen without immense effort, once one is brought to ask the question, that the evident projection functor $\FI_G \to \FI$ is a \emph{fibration}. A \define{fibration of categories} is a functor $\A \to \X$ with a lifting property which allows one to formally pull back objects and arrows of $\A$ along arrows in $\X$. There is an equivalence between such fibrations and weak functors $\X\op \to \Cat$ valued in categories, functors, and natural transformations. The details of this classical equivalence can be found in \cite{2DCats}. Such a weak functor is known as an \define{$\X$-indexed category}.
The reverse process, constructing a category with a fibration to $\X$ from an $\X$-indexed category, is called the \define{Grothendieck construction}.
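To fix ideas, we note here (anticipating the detailed recollection in \cref{sec:Grothendieck}) that the Grothendieck construction of $\M \maps \X\op \to \Cat$ has as objects the pairs $(x, a)$ with $x$ an object of $\X$ and $a$ an object of $\M x$, and as morphisms $(x,a) \to (y,b)$ the pairs $(f, \phi)$ with $f \maps x \to y$ in $\X$ and $\phi \maps a \to \M f(b)$ in $\M x$; this is the convention matching condition (2) of \cref{thm:main}.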
The following vague, but natural, question arises.
\begin{question}
\label{quest}
If $\X$ is a category which is known to have nice representation theoretic properties, under which conditions does the Grothendieck construction of a weak functor $\X\op \to \Cat$ inherit those desired properties from $\X$?
\end{question}
The vagueness in this question lies precisely in what is meant by ``nice representation theoretic properties''. Progress has been made to extract the essential categorical features of $\FI$ that allow for its apparent nice representation theoretic properties. Gan and Li \cite{EICat} have shown that under some simple combinatorial conditions, over a field of characteristic 0, finitely generated modules of an EI category (where every endomorphism is invertible) with objects essentially parameterized by $\N$ are noetherian. Gadish imposed some further categorical conditions, but was able to lift the demand for parameterization by $\N$, the motivating case being modules over $\FI^n$. In so doing, a noetherian result was obtained for a wider class of categories, designated \define{$\FI$-type categories} \cite{FItypeCats}. This paper seeks to provide an answer to \cref{quest} by providing necessary and sufficient conditions on an $\X$-indexed category such that the Grothendieck construction produces an $\FI$-type category, provided that $\X$ itself is of $\FI$ type.
Presently, we prove the following theorem in order to provide an answer to one instance of Question \ref{quest}.
\begin{thm*}[\cref{thm:main}]
Let $\X$ be an $\FI$-type category, and let $\M \maps \X\op \to \Cat$ be an indexed category. Then the Grothendieck construction applied to $\M$ produces an $\FI$-type category and the related fibration preserves pullbacks and weak pushouts if
\begin{enumerate}
\item the fibers are $\FI$ type categories
\item for every endomorphism $f \maps x \to x$ and object $a$ in the fiber over $x$, every map $a \to \M f(a)$ in $\M x$ is invertible
\item the inclusions $\M x \hookrightarrow \inta \M$ preserve pullbacks
\item $\M$ is weakly reversible
\end{enumerate}
for all objects $x,y$ and morphisms $f \maps x \to y$ in $\X$.
\end{thm*}
We do not expect the terms `weak pushout' or `weakly reversible' to be familiar to the reader, but they are introduced within the paper where appropriate.
\cref{sec:Grothendieck} contains an overview of the theory of indexed categories, fibrations, and their equivalence via the Grothendieck construction. Examples relevant to representation theory motivate the definitions and constructions.
\cref{sec:Auto} unpacks in detail what happens to automorphism groups under the construction, and the relation to nonabelian cohomology.
In \cref{sec:GCFI}, we review Gadish's definition of $\FI$-type category and the representation stability results for such categories. We then state \cref{thm:main}, which gives sufficient conditions on an indexed category $\M \maps \X\op \to \Cat$ with an $\FI$-type base category, under which the Grothendieck construction produces an $\FI$-type total category, along with several examples.
\cref{sec:proofs} is dedicated to the proof of \cref{thm:main}. The definition of FI type category consists of several categorical properties, and the relationship of each one with the notion of Grothendieck fibrations is treated separately. We hope this aids any future researcher in identifying which conditions they need for their own scenario.
\subsection*{Acknowledgements}
A tremendous debt of gratitude is owed to Derek Lowenberg, who introduced me to representation stability and helped me carve out the ideas presented here. Reid Barton and Mike Shulman pointed out instructive examples which helped to identify necessary and sufficient conditions for the Grothendieck construction to produce an EI category. I would also like to thank John Baez, Jonathan Beardsley, Spencer Breiner, Scott Carter, Nir Gadish, Wee Liang Gan, David Jaz Myers, Todd Trimble, and Christina Vasilakopoulou for helpful discussions.
\section{Grothendieck fibrations}
\label{sec:Grothendieck}
A \emph{Grothendieck fibration} is a functor $P \maps \A \to \X$ which essentially allows you to divide the category $\A$ into subcategories $\A_x$ called \emph{fibers} over the objects $x$ of $\X$, with \emph{pullback} functors between them corresponding to the maps in $\X$. Such structures naturally appear in fields as widely varying as algebraic geometry \cite{Vistoli}, logic and theoretical computer science \cite{Jacobs}, and homotopy theory and higher category theory \cite{FramedBicats}. They have been studied extensively by category theorists, especially in the context of topos theory \cite{Grayfibredandcofibred, FibredAdjunctions, Handbook2, Elephant1, alaBenabou, 2DCats}.
The \emph{Grothendieck construction} produces such a fibration from a family of categories and functors indexed by the objects of the base category $\X$. In fact, any fibration is equivalent to one produced by this construction. In this way, there is an equivalence between fibrations and indexed categories. This offers two different perspectives on the same data, often allowing different tools and results to be used.
In this section, we recall the basic theory of indexed categories, Grothendieck fibrations, their equivalence via the Grothendieck construction, and examples found in algebra and representation theory. The full generality of this theory demands the use of 2-categorical concepts and language. There are however 1-categorical versions of everything, which we do our best to include whenever possible. We encourage the unfamiliar reader to look to these first as they read through.
Unabridged detail can be found in Johnson and Yau's book \cite{2DCats} for all 2-categorical concepts we use here, including 2-categories, pseudofunctors, pseudonatural transformations, and modifications. We will review an abbreviated form of this information for the reader's convenience.
\subsection{Indexed Categories}
We will use $\Cat$ to refer either to the category of categories and functors, or the 2-category of the same and natural transformations. We do our best to make the distinction clear either explicitly or by context.
Let $\X$ be a category. A \define{strict $\X$-indexed category} is a functor $\M \maps \X\op \to \Cat$. An \define{$\X$-indexed functor} is a natural transformation $\alpha \maps \M \To \N$. Let $\ICat_s(\X)$ denote the functor category $[\X\op, \Cat]$.
\begin{expl}[Group representations]
\label{ex:Grep1}
Let $\FinGrp$ denote the category of finite groups and group homomorphisms. Define the functor $\Rep \maps \FinGrp\op \to \Cat$ as follows. To a finite group $G$ assign the category of representations of $G$, $\Rep(G)$. To a group homomorphism $\phi \maps G \to H$ assign the functor $\phi^* \maps \Rep(H) \to \Rep(G)$ given by pulling back along $\phi$.
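Concretely, for a representation $(V, \rho)$ of $H$, the pullback is $\phi^*(V, \rho) = (V, \rho \circ \phi)$: the same vector space, with $G$ acting through $\phi$.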
\end{expl}
\begin{expl}[Ring modules]
\label{ex:mod1}
Let $\Ring$ denote the category of rings and ring homomorphisms. Define the functor $\Mod \maps \Ring\op \to \Cat$ as follows. To a ring $R$ it assigns the category of $R$-modules, $R\Mod$. To a ring homomorphism $\phi \maps R \to S$ assign the functor $\phi^\ast \maps S\Mod \to R\Mod$ given by pulling back along $\phi$, restriction of scalars.
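Concretely, $\phi^\ast N$ is the abelian group $N$ equipped with the $R$-action $r \cdot n := \phi(r)n$; when $\phi$ is an inclusion of rings, this is the usual restriction of scalars.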
\end{expl}
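For a minimal concrete instance (our choice of example, not needed later): along the quotient map $\ZZ \to \ZZ/n$, restriction of scalars regards a $\ZZ/n$-module as an abelian group on which multiplication by $n$ acts as zero.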
\begin{expl}[Group action as a functor]
\label{ex:action1}
Groups can be thought of as one-object groupoids, and group homomorphisms as functors between these. Let $G$ and $H$ be groups and $A \maps G \to \Aut(H)$ a right action of $G$ on $H$. Denote the action of an element $g \in G$ on an element $h \in H$ by $h.g$. This can be thought of as a functor $A \maps G\op \to \Grp$ which sends the unique object of $G$ to $H$ itself, and every element of $G$ to the automorphisms specified by the action. By composing with the inclusion functor, we see this as an indexed category $G\op \xrightarrow A \Grp \hookrightarrow \Cat$.
\end{expl}
Let $\C$ and $\D$ be 2-categories. A \define{pseudofunctor} $F \maps \C \to \D$ consists of an object function $\ob\C \to \ob\D$ just as functors do, but now the assignment on morphisms is a functor $\C(c, c') \to \D(Fc, Fc')$ (thus including a function on the 2-morphisms which is associative and unital). The composition and unit preservation laws at the level of 1-morphisms are now weakened from equations to specified invertible 2-morphisms $\mu_{f,g} \maps F(f) \circ F(g) \To F(f \circ g)$ natural in $f$ and $g$, and $\eta_c \maps id_{Fc} \To F(id_c)$ natural in $c$. We call these the \define{compositor} and \define{unitor} maps respectively. Further, these maps must themselves satisfy some equations, which can be found in Johnson and Yau's book \cite{2DCats}.
There is a suitable generalization of natural transformations, called \define{pseudonatural transformations}. The naturality condition, instead of being the requirement for certain squares to commute, is extra structure in the form of invertible 2-morphisms, which again must satisfy some equations. Unlike with natural transformations, there is also a fitting notion of map \emph{between} pseudonatural transformations, called a \define{modification}. Just as a natural transformation assigns a morphism in the target category to each object of the source category, a modification assigns a 2-morphism in the target category to each object of the source category. Again, full detail can be found \emph{op cit}.
Let $\X$ be a category, thought of as a locally discrete 2-category. An \define{$\X$-indexed category} is a pseudofunctor $\M \maps \X\op \to \Cat$. An \define{$\X$-indexed functor} is a pseudonatural transformation $\alpha \maps \M \To \N$. An \define{$\X$-indexed natural transformation} is a modification. Let $\ICat(\X)$ denote the 2-category $[\X\op, \Cat]_{ps}$ of pseudofunctors, pseudonatural transformations, and modifications.
\begin{expl}[Slice and pullback]
\label{ex:slice}
Let $\X$ be a category, and $x$ an object of $\X$. The \define{slice category at $x$}, denoted $\X/x$, has arrows $f \maps y \to x$ as objects and commutative triangles
\[
\begin{tikzcd}
y
\arrow[rr, "h"]
\arrow[dr, swap, "f"]
&&
z
\arrow[dl, "g"]
\\&
x
\end{tikzcd}
\]
as morphisms. Assume $\X$ has pullbacks. Given a map $j \maps x \to y$ in $\X$, then we define $j^* \maps \X/y \to \X/x$ on an object $f \maps z \to y$ by taking the following pullback.
\[
\begin{tikzcd}
x \times_y z
\arrow[r]
\arrow[dr, phantom, "\lrcorner", pos = 0.1]
\arrow[d, swap, "j^*f"]
&
z
\arrow[d, "f"]
\\
x
\arrow[r, swap, "j"]
&
y
\end{tikzcd}
\]
For a morphism $h \maps f \to g$, $j^*(h)$ is provided by the universal property of pullbacks.
\[
\begin{tikzcd}
&
x \times_y w
\arrow[ddrr, phantom, "\lrcorner", pos = 0.1]
\arrow[rr]
\arrow[dd, "j^*g", pos = 0.75]
&&
w
\arrow[dd, "g"]
\\
x\times_y z
\arrow[drrr, phantom, "\lrcorner", pos = 0.1]
\arrow[ur, dashed, "j^*h"]
\arrow[rr, crossing over]
\arrow[dr, swap, "j^*f"]
&&
z
\arrow[ur, "h"]
\arrow[dr, swap, "f"]
\\&
x
\arrow[rr, swap, "j"]
&&
y
\end{tikzcd}
\]
Thus we define a pseudofunctor $\X/- \maps \X\op \to \Cat$ with the assignments on objects and morphisms as above, and the compositor and unitor derived from the universal property of pullbacks. Despite these isomorphisms, this cannot in general be made into a strict functor \cite[Example 1.10.3.iv]{Jacobs} (see also \cite{279985}).
\end{expl}
\subsection{The Grothendieck Construction}
Before fibrations, we discuss the Grothendieck construction. This construction builds a category, denoted $\inta\M$, out of the data of an indexed category $\M$. As we shall see, this category turns out to always be fibred, and all fibred categories arise this way.
Given an indexed category $\M \maps \X\op \to \Cat$, let $\inta \M$ denote the category with:
\begin{itemize}
\item objects $(x,a)$ with $x \in \X$ and $a \in \M(x)$
\item morphisms $(f,k) \maps (x,a) \to (y,b)$ with $f \maps x \to y$ a morphism in $\X$, and $k \maps a \to (\M f)(b)$ a morphism in $\M x$;
\item composition $(g, \ell) \circ (f, k) \maps (x,a) \to (y,b) \to (z,c)$ is given by
\begin{equation}
\label{eq:comp_intM}
(g, \ell) \circ (f, k) := (g \circ f, \mu_{f,g} \circ \M f(\ell) \circ k)
\end{equation}
\item the identity map $id_{(x,a)} \maps (x,a) \to (x,a)$ is given by $id_x \maps x \to x$ in $\X$ together with the unitor component $a \to \M(1_x)(a)$ in $\M x$ (which is simply $id_a$ when $\M$ is strict).
\end{itemize}
Visualize the composition rule \cref{eq:comp_intM} as follows.
\[
\begin{tikzcd}
x
\arrow[d, swap, "f"]
\arrow[dd, bend left, "gf"]
&
a
\arrow[r, "k"]
&
\M fb
\arrow[r, "\M f(\ell)"]
&
\M f(\M g (c))
\arrow[r, "\mu_{f,g}"]
&
\M (gf) (c)
\\
y
\arrow[d, swap, "g"]
&&
b
\arrow[r, "\ell"]
\arrow[u, mapsto]
&
\M g (c)
\arrow[u, mapsto]
\\
z
&&&
c
\arrow[u, mapsto]
\arrow[uur, mapsto]
\end{tikzcd}
\]
In the case where $\M$ is a strict functor, the composition formula simplifies to the following.
\begin{equation}
\label{eq:comp_intMstrict}
(g, \ell) \circ (f, k) = (g \circ f, \M f(\ell) \circ k)
\end{equation}
Notice that as sets, we have
\begin{equation}
\label{eq:hom}
\inta\M((x,a),(y,b)) = \coprod_{f \in \X(x,y)} \M(x)(a,\M f(b)).
\end{equation}
\begin{expl}[Ring-module pairs]
\label{ex:mod2}
The Grothendieck construction applied to the indexed category $\Mod \maps \Ring\op \to \Cat$ from \cref{ex:mod1} gives the category $\inta\Mod$ where objects are ring-module pairs, $(R, M)$ with $R$ a ring and $M$ an $R$-module, and morphisms are pairs $(\phi, \phi^\sharp) \maps (R, M) \to (S,N)$ where $\phi \maps R \to S$ is a ring homomorphism, and $\phi^\sharp \maps M \to \phi^*(N)$ is an $R$-module homomorphism.
\end{expl}
\begin{lem}
\label{lem:invert}
Maps $f \maps x \to y$ in $\X$ and $k \maps a \to \M f(b)$ in $\M x$ are isomorphisms if and only if $(f,k) \maps (x,a) \to (y,b)$ is an isomorphism in $\inta \M$.
\end{lem}
The above result is more or less immediate. Note though that the inverse of $(f,k)$ is not $(f\inv, k\inv)$, as that map does not even have the right type. Instead, the inverse is $(f\inv, \M f\inv(k\inv))$.
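In the strict case this is a one-line check using \cref{eq:comp_intMstrict}:
\[
(f\inv, \M f\inv(k\inv)) \circ (f, k)
= (f\inv \circ f,\, \M f(\M f\inv(k\inv)) \circ k)
= (id_x,\, k\inv \circ k)
= id_{(x,a)}.
\]
In the pseudo case the same computation holds up to the compositor isomorphisms.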
\begin{expl}[Semidirect product]
\label{ex:action2}
Let $A \maps G\op \to \Grp\hookrightarrow \Cat$ be as in \cref{ex:action1}, and let $\star_G$ and $\star_H$ denote the unique objects of $G$ and $H$ respectively. Then the Grothendieck construction applied to $A$ gives a category $\inta A$ with exactly one object $(\star_G, \star_H)$, and morphisms have the form $(g, h)$ with $g \in G$ and $h \in H$. By \cref{lem:invert}, $\inta A$ is a group. The composition rule \cref{eq:comp_intMstrict} specialized to this case is $(g_1,h_1) \circ (g_2,h_2) = (g_1 g_2, (h_1.g_2)\, h_2)$. The group $\inta A$ is precisely the semidirect product $G \ltimes H$.
\end{expl}
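For a standard concrete instance: take $G = \ZZ/2$ acting on $H = \ZZ/3$ by inversion. Then $\inta A$ is the semidirect product $\ZZ/2 \ltimes \ZZ/3 \cong S_3$, and the rule $(g_1,h_1)(g_2,h_2) = (g_1 g_2, (h_1.g_2)\,h_2)$ recovers the familiar multiplication of this group of order $6$.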
\begin{expl}[Arrow category]
\label{ex:arrowcat}
Consider the indexed category of slices of \cref{ex:slice}. The Grothendieck construction $\int(\X/-)$ has for objects pairs $(x, f \maps y \to x)$ and for morphisms $(k, h) \maps (x, f \maps y \to x) \to (z, g \maps w \to z)$ with $k \maps x \to z$, and $h \maps f \to k^*g$. With some work, it can be seen that this category is equivalent to the \define{arrow category} of $\X$, denoted $\X^\to$, consisting of arrows of $\X$ for objects, and commutative squares for morphisms.
\end{expl}
\subsection{Fibrations}
A certain amount of information is lost when an indexed category is turned into its total category. Namely, there is no categorical property of $\inta\M$ which can tell you if two given objects were in the same fiber or not. This data can be preserved however by repackaging it into a functor. For any indexed category $\M$, there is a functor $P_\M \maps \inta \M \to \X$ given by projecting onto the first component for both objects and morphisms. This functor is especially nice. It admits a certain lifting property as we shall see. Such a functor is called a \emph{fibration}.
Consider a functor $P \maps \A \to \X$. A morphism $\phi \maps a \to b$ in $\A$ over a morphism $f = P(\phi) \maps x \to y$ in $\X$ is called \define{cartesian} if and only if, for all $g \maps x' \to x$ in $\X$ and $\theta \maps a' \to b$ in $\A$ with $P \theta = f \circ g$, there exists a unique arrow $\psi \maps a' \to a$ such that $P \psi = g$ and $\theta = \phi \circ \psi$:
\begin{equation}
\begin{tikzcd}[column sep = huge]
a'
\arrow[drr, "\theta"]
\arrow[dr, dashed, swap, "\exists!\psi"]
\arrow[dd, dotted, bend right]
\\&
a
\arrow[r, swap, "\phi"]
\arrow[dd, dotted, bend right]
&
b
\arrow[dd, dotted, bend right]
&
\text{in }\A
\\
x'
\arrow[drr, "f \circ g = P \theta"]
\arrow[dr, swap, "g"]
\\&
x
\arrow[r, swap, "f = P \phi"]
&
y
&
\text{in }\X
\end{tikzcd}
\end{equation}
For $x \in \ob\X$, the \define{fibre of $P$ over $x$} written $\A_x$, is the subcategory of $\A$ which consists of objects $a$ such that $P(a) = x$ and morphisms $\phi$ with $P(\phi) = 1_x$. The functor $P \maps \A \to \X$ is called a \define{fibration} if and only if, for all $f \maps x \to y$ in $\X$ and $b \in \A_y$, there is an object $a \in \A_x$ and a cartesian morphism $\phi \maps a \to b$ with $P(\phi) = f$; it is called a \define{cartesian lift} of $f$ to $b$. The category $\X$ is then called the \define{base} of the fibration, and $\A $ its \define{total category}.
\begin{lem}
\label{lem:isocartesian}
An isomorphism is always cartesian.
\end{lem}
A \define{fibred functor} $H \maps P \to Q$ between fibrations $P \maps \A \to \X$ and $Q \maps \B \to \X$ is given by a commutative triangle
\begin{equation}
\label{eq:fibredfunctor}
\begin{tikzcd}[row sep = huge]
\A
\arrow[rr, "H"]
\arrow[dr, swap, "P"]
&&
\B
\arrow[dl, "Q"]
\\&
\X
\end{tikzcd}
\end{equation}
where the top $H$ preserves cartesian liftings, meaning that if $\phi$ is $P$-cartesian, then $H\phi$ is $Q$-cartesian.
A \define{fibred natural transformation} is a natural transformation $\beta \maps H \To K$ such that $Q(\beta_a) = Pa$ for all objects $a \in \A$.
\begin{equation}
\label{eq:fibrednaturaltrans}
\begin{tikzcd}[row sep = huge]
\A
\arrow[rr, bend left, "H"]
\arrow[rr, phantom, "\Downarrow \beta"]
\arrow[rr, bend right, swap, "K"]
\arrow[dr, swap, "P"]
&&
\B
\arrow[dl, "Q"]
\\&
\X
\end{tikzcd}
\end{equation}
If $P\maps \A \to \X$ is a fibration, assuming the axiom of choice, we may select a cartesian arrow over each $f\maps x \to y$ in $\X$ and $b \in \A_y$, denoted by $\Cart(f, b) \maps f^*(b) \to b$. Such a choice of cartesian liftings is called a \define{cleaving} for $P$, which is then called a \define{cloven fibration}. Since identity maps are always cartesian, it is always possible to choose a cleaving where the lift of an identity is again an identity. This can be convenient, and will be used in the next section.
Suppose $P \maps \A \to \X$ is a cloven fibration. For any map $f \maps x \to y$ in the base category $\X$, the data of the cleaving can be used to define a \define{reindexing functor} between the fibre categories.
\begin{equation}
\label{reindexing}
f^*\maps \A_y \to \A_x
\end{equation}
Send an object $b \in \A_y$ to the domain of the chosen cartesian lift of $f$ to $b$, $\Cart(f,b) \maps f^*(b) \to b$. Given a map $\psi \maps b \to b'$ in $\A_y$, define $f^*(\psi)$ by the universal property of cartesian maps as follows.
\[
\begin{tikzcd}[column sep = huge]
f^*(b)
\arrow[rr, "{\Cart(f,b)}"]
\arrow[dr, dashed, swap, "\exists!f^*(\psi)"]
\arrow[dd, dotted, bend right]
&&
b
\arrow[ddd, dotted, bend left]
\arrow[d, "\psi"]
\\&
f^*(b')
\arrow[r, swap, "{\Cart(f,b')}"]
\arrow[dd, dotted, bend right]
&
b'
\arrow[dd, dotted, bend right]
&
\text{in }\A
\\
x
\arrow[drr, "f"]
\arrow[dr, swap, "id_x"]
\\&
x
\arrow[r, swap, "f"]
&
y
&
\text{in }\X
\end{tikzcd}
\]
We leave it to the interested reader to verify this defines a functor.
It can be verified by the cartesian universal property that $1_{\A_x} \cong (1_x)^*$ and that for composable morphisms in the base category, $f^* \circ g^* \cong (g \circ f)^*$. Specific isomorphisms inhabiting these relations can be derived from a cleaving, a fact we shall employ in the next section. If these isomorphisms are equalities, we say the fibration is \define{split}.
Let $\M \maps \X\op \to \Cat$ be an $\X$-indexed category. Define the functor $P_\M \maps \inta \M \to \X$ by projecting onto the first component for both objects and morphisms.
\begin{lem}
The functor $P_\M$ defined above is always a fibration. Moreover, if $\M$ is a strict functor, then $P_\M$ is a split fibration.
\end{lem}
The cartesian lift of an arrow $f \maps x \to y$ in $\X$ to an object $(y,b)$ above $y$ is $(f, id_{\M fb}) \maps (x, \M fb) \to (y,b)$. Demonstrating that this satisfies the conditions of being cartesian involves no more than a calculation.
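To sketch the calculation: given any $(h, m) \maps (x', a') \to (y, b)$ with $h = f \circ g$ for some $g \maps x' \to x$, the unique fill-in over $g$ is $(g, \mu_{g,f}\inv \circ m)$, since by \cref{eq:comp_intM}
\[
(f, id_{\M fb}) \circ (g, \mu_{g,f}\inv \circ m) = (f \circ g,\, \mu_{g,f} \circ \M g(id_{\M fb}) \circ \mu_{g,f}\inv \circ m) = (h, m).
\]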
\begin{expl}[Products]
\label{ex:products}
Let $\X$ and $\Y$ be categories. The projection functor $\pi_\X \maps \X \times \Y \to \X$ is a fibration. Given a map $f \maps x \to x'$ in $\X$ and an object $(x',y)$ above $x'$, the map $(f, id_y) \maps (x,y) \to (x',y)$ is a cartesian lift of $f$ to $(x',y)$. The fiber over any object $x$ is equivalent to $\Y$. The reindexing functors provided by the cleaving above are always identity. Thus, $\X \times \Y$ is equivalent to the Grothendieck construction of \define{the constant indexed category} $\Delta\Y \maps \X\op \to \Cat$ given by $x \mapsto \Y$ for all $x \in \X$, and $f \mapsto id_\Y$ for all maps $f$ in $\X$.
\end{expl}
\begin{expl}[Split group extensions]
\label{ex:action3}
Let $G$, $H$, and $A$ be as in \cref{ex:action1} and \cref{ex:action2}. The condition of the projection $p \maps \inta A = G \ltimes H \to G$ being a fibration essentially amounts to saying it is surjective, since all invertible maps in $\inta A$ are cartesian. This makes up the surjective part of the short exact sequence expressing $\inta A$ as an extension of $G$ by $H$.
\[0\to H \xrightarrow{i} \inta A \xrightarrow{p} G \to 0
\]
The fact that $A$ is a strict functor tells us that the compositor $\mu_{g_1, g_2} \maps Ag_1 \circ Ag_2 \To A(g_2g_1)$ is the identity, and thus we have a commutative diagram.
\[
\begin{tikzcd}
A(g_2g_1)(\star)
\arrow[d, equals]
\arrow[drr, "\Cart(g_2g_1)"]
\\
Ag_1(Ag_2(\star))
\arrow[r, swap, "\Cart(g_1)"]
&
Ag_2(\star)
\arrow[r, swap, "\Cart(g_2)"]
&
\star
\end{tikzcd}
\]
Note that all the objects in this diagram are the unique object of $\inta A$. The interesting part of the diagram, as usual, is what it says about the morphisms. It says that $\Cart$ is in fact a group homomorphism $\Cart \maps G \to \inta A$, splitting the sequence. This recovers the well-known fact that split extensions are always semidirect products, whence the naming of split fibrations.
\end{expl}
\begin{expl}[Codomain fibration]
Let $\X$ and $\X/-$ be as in \cref{ex:slice} and \cref{ex:arrowcat}. The projection $P\maps \int(\X/-) \simeq \X^\to \to \X$ maps an arrow to its codomain, and a commutative square to its lower edge. This fibration is known as the \define{codomain fibration} for obvious reasons, and the \define{fundamental fibration} because it ties together so many fundamental concepts of category theory.
\end{expl}
The indexed category $\M$ can be recovered up to equivalence from the fibration $P_\M$. The fibre over an object $x$ in the base is equivalent to the category $\M(x)$. Identifying these, the reindexing functor $f^*$ induced by a map $f \maps x \to y$ in the base is precisely the functor $\M(f) \maps \M(y) \to \M(x)$. Moreover, every fibration arises, up to equivalence, from the Grothendieck construction of an indexed category. Indeed, the Grothendieck correspondence gives an equivalence between these two pieces of data. The idea of the following result was present in the work of Grothendieck where the eponymous construction was introduced \cite{Grothendieckcategoriesfibrees}. A detailed proof can be found in \cite{2DCats}.
\begin{thm}
\label{thm:Grothendieck}
The Grothendieck construction specifies an equivalence of 2-categories $\ICat(\X) \simeq \Fib(\X)$, which restricts to an equivalence of categories $\ICat_s(\X) \simeq \Fib_s(\X)$.
\end{thm}
\section{Extending automorphism groups}
\label{sec:Auto}
In this section, we describe the relationship between the automorphism groups of the objects in a category $\X$, and the automorphism groups of the objects in $\inta\M$, where $\M$ is some indexed category. Much of this relationship is known to experts, and some detail can be found for example in \cite{BaezShulman}. In general, an object in a category can have both invertible and non-invertible endomorphisms, and so it is of general interest to discuss the endomorphism monoids. However, in the context of $\FI$-type categories, all endomorphisms are invertible, and so this reduces to the case of automorphism groups anyway. For some discussion of monoids and the Grothendieck construction, see \cite{NetworkModels, monoidGrothConst}.
Let $\M \maps \X\op \to \Cat$ be an indexed category, and $(x,a)$ an object in $\inta \M$. If we assume without loss of generality that $\M x$ is skeletal, \cref{lem:invert} tells us that an automorphism $(f,k) \maps (x,a) \to (x,a)$ consists of an automorphism $f \maps x \to x$ in $\X$, and an automorphism $k \maps a \to a$ in $\M x$. The automorphism group of $(x,a)$ in $\inta \M$ is precisely the category $\inta\M|_x$ where $\M|_x \maps \Aut(x)\op \to \Grp$ is the appropriate restriction of $\M$ to the subcategory $\Aut(x)$ of $\X$ consisting of automorphisms of $x$.
Thus we can understand the automorphism groups of $\inta \M$ by first understanding what the Grothendieck construction does to pseudofunctors of the form $A \maps G \op \to \Grp$. Recall from \cref{ex:action1}, \cref{ex:action2}, and \cref{ex:action3} that if $A$ is a strict functor, then it is an action of $G$ on the group $A(\star)$, $\inta A$ is precisely the semidirect product, the fibration is the surjective part of the short exact sequence expressing $\inta A$ as an extension, and the fact that it is a split fibration corresponds exactly to the sequence being split.
This section is dedicated to the non-strict case.
\begin{prop}
Conflating groups with one-object groupoids, surjective homomorphisms are precisely fibrations.
\end{prop}
\begin{proof}
Let $p \maps E \to G$ be a fibration between groups. For each $g \in G$, there is a cartesian arrow $h$ in $E$ such that $p(h) = g$; in particular, $p$ is surjective.
Conversely, let $p \maps E \to G$ be a surjective group homomorphism, i.e.\ a full functor between one-object groupoids. For an element $g \in G$, there is an element $h \in E$ such that $p(h) = g$. By \cref{lem:isocartesian}, $h$ is $p$-cartesian, so $p$ is a fibration.
\end{proof}
\subsection{The 2-category of groups}
The category of groups $\Grp$ is a full subcategory of $\Cat$, but $\Cat$ is naturally a 2-category, with natural transformations as the 2-morphisms. What then are natural transformations between group homomorphisms? Let $G$ and $H$ be groups, and $f,k \maps G \to H$ be homomorphisms. A natural transformation $\alpha \maps f \To k$ consists of a map in $H$ for each object in $G$. Since there is only one object in $G$, $\alpha$ has a single component, an element of $H$ which we also refer to as $\alpha$. This element must satisfy the naturality condition:
\[
\begin{tikzcd}
\bullet
\arrow[r, "\alpha"]
\arrow[d, swap, "f(g)"]
&
\bullet
\arrow[d, "k(g)"]
\\
\bullet
\arrow[r, swap, "\alpha"]
&
\bullet
\end{tikzcd}
\]
i.e.\ $\alpha \cdot f(g) = k(g) \cdot \alpha$ for all $g \in G$. In this scenario, $f$ and $k$ are related by an inner automorphism. We sometimes refer to such a natural transformation as an \define{intertwining element}. As a special case, a natural transformation of the form $\alpha \maps id_G \To id_G \maps G \to G$ is precisely an element in the center of $G$. The horizontal composite $\alpha_2 \ast \alpha_1 \maps f_2 \circ f_1 \To k_2 \circ k_1$ of natural transformations $\alpha_1 \maps f_1 \To k_1$ and $\alpha_2 \maps f_2 \To k_2$
\[
\begin{tikzcd}[column sep = large]
G
\arrow[r, bend left = 40, "f_1"]
\arrow[r, phantom, "\Downarrow \alpha_1"]
\arrow[r, bend right = 40, swap, "k_1"]
&
H
\arrow[r, bend left = 40, "f_2"]
\arrow[r, phantom, "\Downarrow \alpha_2"]
\arrow[r, bend right = 40, swap, "k_2"]
&
K
\end{tikzcd}
\]
is given by $\alpha_2 \ast \alpha_1 = k_2(\alpha_1) \alpha_2$. Vertical composition of natural transformations $\alpha$ and $\beta$
\[
\begin{tikzcd}[column sep = huge]
G
\arrow[r, bend left=80, "f"', swap, ""{name = f}]
\arrow[r, "k"description, swap, ""{name = g}, ""'{name = gg}]
\arrow[r, bend right=80, ""{name = h}, "\ell"']
&
H
\arrow[from = f, to = gg, Rightarrow, "\alpha"]
\arrow[from = g, to = h, Rightarrow, "\beta"]
\end{tikzcd}\]
is just given by multiplication $\beta \circ \alpha = \beta\alpha$.
Note that this gives us for each pair of groups $G$ and $H$ a category $\Grp(G,H)$ where the objects are group homomorphisms $G \to H$, and morphisms are these intertwining elements, allowing us to consider $\Grp$ as a sub-2-category of $\Cat$. Since these intertwining elements have inverses, $\Grp(G,H)$ is actually a groupoid, making $\Grp$ a groupoid-enriched category, or (2,1)-category. It is also equivalent to the full sub-2-category of the (2,1)-category of groupoids consisting of the connected groupoids.
\begin{prop}
A pseudofunctor $A \maps G\op \to \Grp$ consists of
\begin{itemize}
\item a group $H$
\item a function $A \maps G \to \Aut(H)$ (we still denote it like a right action when convenient, for example $Ag(h) = h.g$)
\item a function $\phi \maps G \times G \to H$
\end{itemize}
such that
\begin{itemize}
\item $(h.g_1).g_2 = \phi_{g_2, g_1}\inv\, (h.(g_1g_2))\, \phi_{g_2, g_1}$ for all $h \in H$
\item for three elements $g_1, g_2, g_3 \in G$, we have $\phi_{g_2g_1,g_3}\,\phi_{g_1,g_2} = \phi_{g_1,g_3g_2}\,(\phi_{g_2,g_3}.g_1)$
\end{itemize}
\end{prop}
\begin{proof}
As with a strict action, the unique object of $G$ is mapped to an object of the target category, namely $H$, and each element of $G$ is mapped to a morphism in the target category, an automorphism of $H$. For strict functors, preservation of composites and identity maps is a property, but for pseudofunctors it is mediated by invertible 2-morphisms, intertwining elements in our case. As noted previously, the identity preservation data can be chosen without loss of generality to be trivial. The data of the compositor for $A$ is an intertwining element of $H$ for every pair of composable morphisms in $G$. Since every pair of maps is composable in $G$, this is captured by a function of the form $\phi \maps G\times G \to H$. The first condition says precisely that each $\phi_{g_1,g_2}$ is an intertwining element, and the second is the associativity law for a pseudofunctor.
\end{proof}
This is essentially a right action of $G$ on $H$, where the action laws hold up to conjugation by a special family of elements. One might want to call such a thing a ``twisted (right) action'' of $G$ on $H$ \cite{TwistedAction}.
\subsection{Twisted action from surjection}
How is a pseudofunctor $A \maps G\op \to \Grp$ constructed from a surjection of groups $p \maps E \to G$? Keep in mind that any surjection is part of a unique short exact sequence by taking the kernel $K:=\ker p$.
\[0 \to K \xrightarrow i E \xrightarrow p G \to 0\]
So $A$ must send the unique object $\star$ of $G$ to the group $K$, since $K$ is the fiber of $p$ over $\star$. Let $s \maps G \to E$ be a (set-theoretic) section of $p$. This provides a cleaving of $p$ as a fibration, with $\Cart(g, \star_E) = s(g)$ for each $g \in G$. We use this to define the pseudofunctors action on morphisms by $Ag \maps K \to K$ by $Ag(k) = s(g)\inv k s(g)$. If $s$ happens to be a group homomorphism, making this extension split, then we would have
\begin{align*}
Ag(Ah(k))
&= s(g)\inv s(h)\inv k s(h) s(g)
\\&= s(hg)\inv k s(hg)
\\&= A(hg)(k).
\end{align*}
This tells us that $A$ is a strict functor, and we are done defining it. Otherwise, we must specify the compositor and unitor data.
To specify the unitor $\phi_0 \maps A(id_\star) = A(e_G) \To id_K$, we only need one element $\phi_0$ of $K$ such that $Ae_G(k) = \phi_0\inv k\phi_0$. Let $\phi_0 := s(e_G)$. Without loss of generality, we can assume that $s(e_G) = e_E$, and thus $\phi_0 = e_E$. To specify the compositor, for each pair of elements $\sigma, \tau \in G$, we need an element $\phi_{\sigma, \tau} \in K$ such that $\phi_{\sigma, \tau} \maps A\sigma \circ A\tau \To A(\tau \sigma)$, meaning $\phi_{\sigma, \tau} A\sigma(A\tau(k)) = A(\tau\sigma)(k) \phi_{\sigma, \tau}$. Let
\begin{align*}
\phi_{\sigma, \tau}
&:= s(\tau\sigma)\inv s(\tau) s(\sigma).
\end{align*}
\begin{prop}
Let $p \maps E \to G$ be a surjective group homomorphism. A set-theoretic section $s \maps G \to E$ determines a twisted (right) action $(A,\phi)$ of $G$ on $\ker p$ by $Ag(k) = s(g)\inv ks(g)$ and $\phi_{\sigma, \tau} = s(\tau\sigma)\inv s(\tau) s(\sigma)$. Up to equivalence of indexed categories, this is independent of the choice of section. Moreover, every twisted action of $G$ is determined by a surjection onto $G$ in this way.
\end{prop}
This is a direct corollary of \cref{thm:Grothendieck}.
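As a small worked example (not needed later), consider the non-split extension $0 \to \ZZ/2 \to \ZZ/4 \xrightarrow{p} \ZZ/2 \to 0$ with the set-theoretic section $s(0) = 0$, $s(1) = 1$. The action $Ag(k) = s(g)\inv k s(g)$ is trivial since $\ZZ/4$ is abelian, and, writing the formula additively, the only nontrivial value of the compositor is $\phi_{1,1} = -s(0) + s(1) + s(1) = 2$, the generator of the kernel. The twisted action thus records precisely the failure of $s$ to be a homomorphism, i.e.\ the nontriviality of the extension.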
\section{Fibrations of $\FI$-type categories}
\label{sec:GCFI}
In \cite{FItypeCats}, Gadish provides some purely categorical conditions on a category which are sufficient to prove some generalized representation stability results. Categories with these properties, which we recall below, are referred to as \emph{$\FI$-type categories}. In this section, we state sufficient conditions on an indexed category $\M \maps \X\op \to \Cat$ with base category of $\FI$-type for $\inta\M$ to also be of $\FI$-type. We then provide several examples of $\FI$-type categories, including those built using the Grothendieck construction.
\begin{defn}
\label{def:weakpushout}
A \define{weak pushout} diagram is a pullback square
\[
\begin{tikzcd}
p
\arrow[r, "\overline f_1"]
\arrow[d, swap, "\overline f_2"]
\arrow[dr, phantom, "\lrcorner", pos = 0.1]
&
c_1
\arrow[d, "f_1"]
\\
c_2
\arrow[r, swap, "f_2"]
&
d
\end{tikzcd}
\]
which is universal among pullback squares that agree on the top and left maps: for any pullback square
\[
\begin{tikzcd}
p
\arrow[r, "\overline f_1"]
\arrow[d, swap, "\overline f_2"]
\arrow[dr, phantom, "\lrcorner", pos = 0.1]
&
c_1
\arrow[d, "h_1"]
\\
c_2
\arrow[r, swap, "h_2"]
&
z
\end{tikzcd}\]
there exists a unique morphism $h \maps d \to z$ that makes the diagram
\[
\begin{tikzcd}
p
\arrow[r, "\overline f_1"]
\arrow[d, swap, "\overline f_2"]
&
c_1
\arrow[d, "f_1"]
\arrow[ddr, bend left, "h_1"]
\\
c_2
\arrow[r, swap, "f_2"]
\arrow[drr, swap, bend right, "h_2"]
&
d
\arrow[dr, dashed, "h"]
\\&&z
\end{tikzcd}
\]
commute. We call $d$ the \define{weak pushout object} and denote it by $c_1 \sqcup_p c_2$. The unique map $h$ induced from a pair of maps $h_i \maps c_i \to z$ is denoted by $h_1 \sqcup_p h_2$. A functor $F \maps \C \to \D$ is said to \define{preserve weak pushouts} if the image of a weak pushout square is also a weak pushout square.
\end{defn}
Note that a pushout is automatically a weak pushout. This restricted notion of pushout also appears in \cite{Devissage}, in the context of algebraic $K$-theory.
\begin{defn}[\cite{FItypeCats}]
\label{def:FItype}
A category $\X$ is of \define{$\FI$-type} if it satisfies the following conditions.
\begin{enumerate}
\item $\X$ is locally finite, i.e.\ all hom-sets are finite
\item every morphism is a monomorphism
\item every endomorphism is an isomorphism
\item for every pair of objects $x$ and $y$, the group of automorphisms $\Aut_\X(y)$ acts transitively on the set $\Hom_\X(x, y)$
\item for every object $y$ there exist only finitely many isomorphism classes of objects $x$ for which $\Hom_\X(x, y) \neq \emptyset$ (we denote this by $x \leq y$)
\item $\X$ has pullbacks
\item $\X$ has weak pushouts
\end{enumerate}
\end{defn}
\begin{expl}
Every locally finite groupoid is of $\FI$-type. This turns out not to be a terribly interesting fact on its own. For instance, Gadish's results about $\FI$-type categories, when applied to $\FB$, essentially say that a subrepresentation of a finite dimensional representation of a symmetric group is also finite dimensional.
\end{expl}
\begin{thm}[\cite{FItypeCats}, Thm.\ C]
If $\C$ is a category of $\FI$-type, then the category of $\C$-modules over $\CC$ is Noetherian. That is, every submodule of a finitely generated $\C$-module is itself finitely generated.
\end{thm}
We require a technical definition before we can state the main theorem.
\begin{defn}
We say that an indexed category $\M \maps \X\op \to \Cat$ is \define{weakly reversible} if for each map $f \maps x \to y$ in $\X$, there is a weak pushout preserving functor $f_! \maps \M x \to \M y$ such that $f_!f^*$ is the identity on objects, along with a natural transformation $\eta^f \maps id_{\M x} \To f^*f_!$.
\end{defn}
\begin{thm}
\label{thm:main}
Let $\X$ be an $\FI$-type category, and let $\M \maps \X\op \to \Cat$ be an indexed category. Then $\inta\M$ is an $\FI$-type category and the related fibration preserves pullbacks and weak pushouts if
\begin{enumerate}
\item the fibers are $\FI$-type categories
\item for every endomorphism $f \maps x \to x$ and object $a$ in the fiber over $x$, every map $a \to \M f(a)$ in $\M x$ is invertible
\item the inclusions $\M x \hookrightarrow \inta \M$ preserve pullbacks
\item $\M$ is weakly reversible
\end{enumerate}
for all objects $x,y$ and morphisms $f \maps x \to y$ in $\X$.
\end{thm}
This theorem is proven in the next section.
\begin{expl}[Finite products of $\FI$-type categories]
For categories $\X$ and $\Y$, recall the constant indexed category $\Delta \Y \maps \X\op \to \Cat$ from \cref{ex:products}. If $\X$ and $\Y$ are of $\FI$-type, then $\Delta \Y$ clearly satisfies the conditions of \cref{thm:main}. Thus products of $\FI$-type categories are also of $\FI$-type.
\end{expl}
\begin{expl}[$\FI_G$]
Let $G$ be a group, and $G^\bullet \maps \FI\op \to \Grp$ be the indexed group defined as follows. For an object $n \in \FI$, $n \mapsto G^n$. A morphism $f \colon n \to m$ is mapped to the pullback $f^\ast \colon G^m \to G^n$. The symmetric group $S_n$ acts on $G^n$ by permuting the copies of $G$. The Grothendieck construction gives a category $\inta G^\bullet$ with objects $(n, \star_n)$ with $n \in\FI$ and $\star_n$ the unique object in $G^n$, and morphisms $(f,g) \maps (n, \star_n) \to (m, \star_m)$ with $f \maps n \to m$ a morphism in $\FI$, and $g$ an element of the group $G^n$.
The category $\inta G^\bullet$ is equivalent to $\FI_G$.
The functor $\FI_G \to \FI$ which simply forgets the $G$-decorations is a fibration.
\end{expl}
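Unwinding \cref{eq:comp_intMstrict} in this example: the composite of $(f, g) \maps n \to m$ and $(f', g') \maps m \to p$ in $\FI_G$ is $(f' \circ f,\, f^*(g') \cdot g)$, that is, the injection $f' \circ f$ decorated by $(g'_{f(1)} g_1, \ldots, g'_{f(n)} g_n) \in G^n$.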
\begin{prop}[$\FI_{G \sqcup H}$]
Let $G$ and $H$ be finite groupoids. Then $\FI_{G \sqcup H} \simeq \FI_G \times \FI_H$.
\end{prop}
\begin{proof}
Define a functor $\FI_G \times \FI_H \to \FI_{G \sqcup H}$ by sending an object $(m,n)$ to $m+n$, and a morphism $((f, g_1, \dots, g_m), (j, h_1, \dots h_n))$ to $(f+j, g_1, \dots, g_m, h_1, \dots h_n)$. This is clearly essentially surjective. Since a morphism in $\FI_{G \sqcup H}$ must preserve the labelling of points by the objects of $G \sqcup H$, the functor is also fully faithful, and hence an equivalence.
\end{proof}
\begin{expl}[Block permutations]
Let $\M \maps \FI\op \to \Cat$ be given by $n \mapsto \FI^n$ on objects, and for an injection $f \maps n \to p$, define $f^* \maps \FI^p \to \FI^n$ by $f^*(q_1, \dots, q_p) = (q_{f(1)}, \dots, q_{f(n)})$. What is $\inta \M$ like? The objects are tuples $(n, m_1, \dots, m_n)$ of finite sets. This is an $n$-part partition of the set $\sum_{i=1}^n m_i$. A map $(n, m_1, \dots, m_n) \to (p, q_1, \dots, q_p)$ consists of an injection $f \maps n \to p$, and $(g_1, \dots, g_n) \maps (m_1, \dots, m_n) \to f^*(q_1, \dots, q_p)$, which is the same as a family of injections $g_i \maps m_i \to q_{f(i)}$.
We naturally get a fibration $\inta\M \to \FI$ by projecting onto the first component, but we consider now another functor $\inta\M \to \FI$. Define a functor $c \maps \inta\M \to \FI$ by $(n, m_1, \dots, m_n) \mapsto \sum_{i=1}^n m_i$, and similarly on morphisms. This is full and essentially surjective, but not faithful, which tells us that it is forgetting purely stuff.
\end{expl}
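To see the name in a small case: for $n = 2$, a map $(2, m_1, m_2) \to (2, q_1, q_2)$ over the identity of $2$ is a pair of injections $m_1 \to q_1$ and $m_2 \to q_2$, while a map over the swap is a pair of injections $m_1 \to q_2$ and $m_2 \to q_1$; applying $c$ to automorphisms of $(2, m, m)$ yields the block permutations of $m + m$, whence the name.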
\section{Proof of \cref{thm:main}}
\label{sec:proofs}
In this section we provide a proof for \cref{thm:main}. We prove the corresponding statement for each property in \cref{def:FItype} separately to aid the reader that is only concerned with a subset of the properties.
\subsection{Locally finite}
\begin{lem}[Locally finite]
\label{lem:locfin}
Let $\M \maps \X\op \to \Cat$ be an indexed category with $\X$ locally finite. Then $\inta\M$ is locally finite if and only if $\M(x)$ is locally finite for each object $x \in \X$.
\end{lem}
\begin{proof}
This follows immediately from \cref{eq:hom}.
\end{proof}
\subsection{Monomorphisms}
Nothing interesting is needed for the condition of every map being a monomorphism. If the base has the property, then the total category will have the property if and only if the fibers all have the property. Going from total to fibers follows from the fact that a monomorphism in a category remains a monomorphism in all subcategories. The converse is a direct check.
\begin{lem}[Every map is a monomorphism]
Let $\X$ be a category where every map is a monomorphism, and $\M \maps \X\op \to \Cat$ be an indexed category. Then every map in $\inta \M$ is a monomorphism if and only if each map in the category $\M x$ is a monomorphism for each $x \in \X$.
\end{lem}
\begin{proof}
Let $\M \maps \X\op \to \Cat$ be such that every map in $\M x$ is a monomorphism. Let $(f,k) \maps (x,a) \to (y,b)$ and $(g_1, \ell_1), (g_2, \ell_2) \maps (z, c) \to (x,a)$ be such that $(f,k) \circ (g_1, \ell_1) = (f,k) \circ (g_2, \ell_2)$. Then we get $f \circ g_1 = f \circ g_2$, and since every map in $\X$ is a monomorphism, we get $g_1 = g_2$. We also get $\M g_1(k) \circ \ell_1 = \M g_2(k) \circ \ell_2 = \M g_1(k) \circ \ell_2$, and since every map in $\M z$ is a monomorphism, we have $\ell_1 = \ell_2$. Thus $(f,k)$ is a monomorphism.
To prove the converse, consider instead a fibration $P \maps \A \to \X$ such that every map in $\A$ or $\X$ is a monomorphism. Let $f \maps a \to b$ and $g_1, g_2 \maps c \to a$ be maps in $\A_x$ such that $f \circ g_1 = f \circ g_2$. Since these are maps in $\A$, $f$ is a monomorphism, and we get $g_1 = g_2$.
\end{proof}
\subsection{Endomorphisms are invertible}
A category where every endomorphism is an isomorphism is called an \define{EI category}. This condition is equivalent to the condition that every morphism between isomorphic objects is invertible. \cref{prop:EIicat} gives sufficient conditions for the Grothendieck construction of an indexed category $\M \maps \X\op \to \Cat$ with an EI base category to be an EI category. \cref{prop:EIfib} is the converse, though phrased in the language of fibrations. So these conditions are both necessary and sufficient.
\begin{prop}
\label{prop:EIicat}
Let $\X$ be an EI category, $\M \maps \X\op \to \Cat$ an indexed category such that
\begin{itemize}
\item $\M x$ is EI for all objects $x \in \X$
\item for every endomorphism $f \maps x \to x$ and object $a$ in the fiber over $x$, every map $a \to \M f(a)$ in $\M x$ is invertible
\end{itemize}
Then $\inta \M$ is EI.
\end{prop}
\begin{proof}
Assume that $\M x$ is an EI category for each object $x \in \X$ and for an endomorphism $f \maps x \to x$ every map $a \to \M f(a)$ in $\M x$ is invertible. Let $(f,k) \maps (x,a) \to (x,a)$ be an endomorphism in $\inta\M$. Then $f \maps x \to x$ is an endomorphism, and thus invertible, and $k \maps a \to \M f(a)$ must be invertible by assumption. Since $f$ and $k$ are both invertible, $(f,k)$ is invertible.
\end{proof}
It is important to notice here that we are not asking for $a$ to be isomorphic to $\M f(a)$. It is possible that there are no maps of the form $a \to \M f(a)$, in which case the above condition is vacuously true. Take for example $\X$ to be the one-object groupoid with automorphism group $\ZZ$, and take $\M$ to assign to the unique object the discrete category with object set $\ZZ$, with the action given by translation. The category $\inta\M$ is a groupoid, and thus it is EI, but usually we do not have $a \cong \M f(a)$, or indeed any maps between them in either direction.
\begin{lem}
\label{lem:invertinfiber}
Let $\A$ and $\X$ be EI categories, and $P \maps \A \to \X$ be a fibration. Let $\phi \maps a \to b$ be a map in $\A_x$ which has an inverse $\phi\inv$ in $\A$. Then $\phi\inv$ is also in the fiber.
\end{lem}
\begin{proof}
$P(\phi\inv) = P(\phi)\inv = id_x\inv = id_x$.
\end{proof}
\begin{lem}
\label{lem:liftinvmono}
A cartesian lift of an invertible map is always a monomorphism.
\end{lem}
\begin{proof}
Let $P \maps \A \to \X$ be a fibration, $f \maps x \to y$ be an isomorphism in $\X$, and $\phi \maps a \to b$ a cartesian lift of $f$. Consider two maps $\psi_1, \psi_2 \maps a' \to a$ such that $\phi \circ \psi_1 = \phi \circ \psi_2$. Then we have the following setup.
\[
\begin{tikzcd}[column sep = huge]
a'
\arrow[drr, "\phi \circ \psi_1"]
\arrow[dr, dashed, swap, "\exists!\gamma"]
\arrow[dd, dotted, bend right]
\\&
a
\arrow[r, swap, "\phi"]
\arrow[dd, dotted, bend right]
&
b
\arrow[dd, dotted, bend right]
&
\A
\arrow[dd, "P"]
\\
x'
\arrow[drr, "f \circ P(\psi_1)"]
\arrow[dr, swap, "P(\psi_1)"]
\\&
x
\arrow[r, swap, "f"]
&
y
&
\X
\end{tikzcd}\]
By the definition of $\phi$ being cartesian, there is a unique map $\gamma \maps a' \to a$ in $\A$ such that $\phi \circ \gamma = \phi \circ \psi_1$ and $P(\gamma) = P(\psi_1)$. Clearly $\psi_1$ satisfies these conditions as well, so $\gamma = \psi_1$. Note that
\begin{align*}
P(\psi_1)
&= f\inv \circ f \circ P(\psi_1)
\\&= f\inv \circ P(\phi) \circ P(\psi_1)
\\&= f\inv \circ P(\phi \circ \psi_1)
\\&= f\inv \circ P(\phi \circ \psi_2)
\\&= f\inv \circ P(\phi) \circ P(\psi_2)
\\&= f\inv \circ f \circ P(\psi_2)
= P(\psi_2).
\end{align*}
Thus $\psi_2$ also satisfies the conditions characterising $\gamma$, and by uniqueness $\psi_1 = \gamma = \psi_2$.
\end{proof}
\begin{prop}
\label{prop:EIfib}
Let $P\maps \A \to \X$ be a fibration with $\A$ and $\X$ being EI categories. Then for each object $x \in \X$, the fiber $\A_x$ is an EI category, and for an endomorphism $f \maps x \to x$ in $\X$ and an object $a \in \A_x$, each map $a \to f^*a$ in $\A_x$ is invertible.
\end{prop}
\begin{proof}
Let $k \maps a \to a$ be an endomorphism in $\A_x$. Remember that being in the fiber means that $P(k) = id_x$. Since $k$ is an endomorphism in $\A$, it has an inverse. The map $k\inv$ is in the fiber $\A_x$ because $P(k\inv) = P(k)\inv = id_x\inv = id_x$. So the fibers are EI.
Let $\ell \maps a \to f^*a$ be a map in $\A_x$, and let $\phi \maps f^*a \to a$ denote the cartesian lift of $f$ to $a$. Then the composite $a \xrightarrow \ell f^*a \xrightarrow \phi a$ is an endomorphism in $\A$, and thus invertible. Let $\psi \maps a \to a$ denote the inverse.
We claim that $\psi \phi$ is inverse to $\ell$.
Notice $\phi \ell \psi \phi = \phi$, and since $\phi$ is a monomorphism by \cref{lem:liftinvmono}, we have $\ell \psi \phi = id_{f^*a}$ as desired; the other composite $(\psi\phi)\ell = \psi(\phi\ell) = id_a$ holds by the definition of $\psi$. Since $\ell$ is then invertible in $\A$, \cref{lem:invertinfiber} tells us that its inverse lies in the fiber, so $\ell$ is invertible in $\A_x$.
\end{proof}
\subsection{Increasing}
We say that a category is \define{increasing} if each object only has finitely many isomorphism classes of objects which map into it.
\begin{lem}
\label{lem:increasing}
Let $\X$ be an increasing category, and $\M \maps \X\op \to \Cat$ an indexed category. Then $\inta\M$ is increasing if and only if each fiber is increasing.
\end{lem}
\begin{proof}
Assume that each fiber is increasing. To have $(x,a) \leq (y,b)$, we must have a map $(f,k) \maps (x,a) \to (y,b)$, which consists of a map $f \maps x \to y$ and a map $k \maps a \to \M f(b)$. For a fixed $(y,b)$, there are finitely many choices for $x$ up to isomorphism, and then finitely many choices for $a$ up to isomorphism. Thus $\inta\M$ is increasing.
Conversely, let $P \maps \A \to \X$ be a fibration where $\A$ and $\X$ are increasing. Let $x$ be an object in $\X$, and $a \leq b$ in $\A_x$, i.e.\ there is a map $f \maps a \to b$ in $\A_x$. Since $f$ is also a map in $\A$ and $\A$ is increasing, there are finitely many isomorphism classes of such $a$ in $\A$; the classes available in $\A_x$ are among these, so $\A_x$ is increasing.
\end{proof}
\subsection{Transitive hom-action}
Following Gan--Li \cite{EICat}, we say that a category is \define{transitive} if for each pair of objects $x,y$, the endomorphism monoid of $y$ acts transitively on $\Hom(x,y)$. Keep in mind that in EI categories, and thus in $\FI$-type categories, the endomorphism monoids are the same as the automorphism groups.
\begin{lem}[Transitive]
\label{lem:transitive}
Let $\X$ be a transitive category, and $\M \maps \X\op \to \Cat$ be an indexed category. Then $\inta\M$ is transitive if and only if the fibers are transitive and, given maps $k_1 \maps a \to \M f_1(b)$ and $k_2 \maps a \to \M f_2(b)$ in $\M x$, there exists a map $\ell \maps b \to \M g(b)$ which makes the following diagram commute, where $g \maps y \to y$ is a map such that $f_1 = g \circ f_2$.
\[
\begin{tikzcd}
a
\arrow[r, "k_1"]
\arrow[d, swap, "k_2"]
&
\M f_1(b)
\\
\M f_2(b)
\arrow[r, swap, "\M f_2(\ell)"]
&
\M f_2 \M g(b)
\arrow[u, "\M_{g, f_2}"', "\sim"]
\end{tikzcd}
\]
\end{lem}
\begin{proof}
Assume the latter condition. Let $(f_1,k_1), (f_2,k_2) \maps (x,a) \to (y,b)$. Since $\X$ is transitive, there is a map $g \maps y \to y$ such that $f_1 = g \circ f_2$. Then by our assumption, we get a map $\ell \maps b \to \M g(b)$, which combined with $g$ gives a map $(g, \ell) \maps (y,b) \to (y,b)$.
\begin{align*}
(g, \ell) \circ (f_2, k_2)
&= (g \circ f_2, \mu_{f_2,g} \circ \M f_2(\ell) \circ k_2)
\\&= (f_1, k_1).
\end{align*}
Assume that $\inta\M$ is transitive. Let $f_1, f_2 \maps x \to y$ in $\X$, $k_1 \maps a \to \M f_1(b)$ in $\M x$, $k_2 \maps a \to \M f_2(b)$ in $\M x$. These assemble into maps $(f_1, k_1), (f_2, k_2) \maps (x,a) \to (y,b)$ in $\inta\M$. Since $\inta\M$ is transitive, there is a map $(g, \ell) \maps (y,b) \to (y,b)$ such that $(f_1, k_1) = (g, \ell) \circ (f_2, k_2)$. This $\ell$ is the desired map, and satisfies the necessary equation.
\end{proof}
\subsection{Pullbacks}
This is the first subsection in which we consider properties of the fibration as well as of the total category. Indeed, all previous conditions in the definition of an $\FI$-type category are properties of the total category alone, and make no reference to the fibration.
\begin{thm}[\cite{Grayfibredandcofibred} Theorem 4.2]
Let $P \maps \A \to \X$ be a fibration, and let $\X$ have $J$-limits, for some small category $J$. Then $\A$ has $J$-limits and $P$ preserves them if and only if
\begin{enumerate}
\item each fibre $\A_x$ has $J$-limits
\item the inclusions $\A_x \hookrightarrow \A$ preserve $J$-limits.
\end{enumerate}
\end{thm}
For completeness, we include a proof of the case of pullbacks, where $J$ is set to be the category $\bullet \rightarrow \bullet \leftarrow \bullet$ (identity maps omitted).
\begin{proof}[Proof for pullbacks]
Assume the two conditions above. Take a diagram
\[
\begin{tikzcd}
&
b
\arrow[d, "\psi"]
\\
a
\arrow[r, "\phi", swap]
&
c
\end{tikzcd}
\]
in $\A$.
Apply $P$ to map it into $\X$, and then take a pullback.
\[
\begin{tikzcd}
x
\arrow[r, dashed, "g"]
\arrow[d, dashed, swap, "f"]
\arrow[dr, phantom, "\lrcorner", pos = 0.1]
&
Pb
\arrow[d, "P\psi"]
\\
Pa
\arrow[r, "P\phi", swap]
&
Pc
\end{tikzcd}
\]
Factor $\phi$ and $\psi$ into their vertical and horizontal parts, then reindex along $f$ and $g$ to obtain the following diagram.
\[
\begin{tikzcd}
&&&
d
\arrow[dll, dashed]
\arrow[drr, dashed]
\arrow[dddd, phantom, "\lrcorner"{rotate = -45}, pos = 0.1]
\\&
f^*a
\arrow[dl, "{\Cart(f,a)}", sloped]
\arrow[dr, "f^*\phi_v"]
&&&&
g^*b
\arrow[dl, "g^*\psi_v", swap]
\arrow[dr, "{\Cart(f,b)}", sloped]
\\
a
\arrow[dr, "\phi_v", swap]
&&
f^*P\phi^*c
\arrow[dl, "{\Cart(f, P\phi^*c)}"', sloped]
\arrow[d, "\sim", sloped]
&&
g^*P\psi^*c
\arrow[dr, "{\Cart(g, P\psi^*c)}"', sloped]
\arrow[d, "\sim", swap, sloped]
&&
b
\arrow[dl, "\psi_v"]
\\&
P\phi^*c
\arrow[drr, swap, "\psi_h = {\Cart(P\phi,c)}", bend right = 15]
&
(P\phi f)^*c
\arrow[dr, "{\Cart(P\phi f,c)}"description]
\arrow[rr, equals]
&&
(P\psi g)^*c
\arrow[dl, "{\Cart(P\psi g,c)}"description]
&
P\psi^*c
\arrow[dll, "\psi_h = {\Cart(P\psi,c)}", bend left = 15]
\\&&&
c
\end{tikzcd}
\]
We obtain the object $d$ and the dashed arrows above by taking the pullback of $f^*\phi_v$ and $g^*\psi_v$ in $\A_x$. By the second assumed condition, this is also a pullback in $\A$. We claim that the outer frame of the above diagram is a pullback square of our original diagram.
Now we prove the universal property. Let $q\in \A$ and $\alpha \maps q \to a$ and $\beta \maps q \to b$ be maps such that $\phi \circ \alpha = \psi \circ \beta$. We can factor $\alpha$ and $\beta$ through the cartesian maps as follows. Apply $P$ to the competitor diagram, and obtain the map $h \maps Pq \to x$ by universal property.
\[
\begin{tikzcd}
Pq
\arrow[drr, bend left, "P\beta"]
\arrow[dr, dashed, "\exists!h"]
\arrow[ddr, bend right, swap, "P\alpha"]
\\&
x
\arrow[dr, phantom, pos = 0.1, "\lrcorner"]
\arrow[r, "g"]
\arrow[d, swap, "f"]
&
Pb
\arrow[d, "P\psi"]
\\&
Pa
\arrow[r, swap, "P\phi"]
&
Pc
\end{tikzcd}
\]
We obtain the factorization of $\alpha$ through the cartesian lift $\Cart(f,a)$ by the cartesian property.
\[
\begin{tikzcd}[column sep = huge]
q
\arrow[drr, "\alpha"]
\arrow[dr, dashed, swap, "\exists!\overline \alpha"]
\arrow[dd, dotted, bend right]
\\&
f^*a
\arrow[r, swap, "{\Cart(f,a)}"]
\arrow[dd, dotted, bend right]
&
a
\arrow[dd, dotted, bend right]
\\
Pq
\arrow[drr, "P\alpha"]
\arrow[dr, swap, "h"]
\\&
x
\arrow[r, swap, "f"]
&
Pa
\end{tikzcd}
\]
Similarly, we obtain a map $\overline \beta \maps q \to g^*b$. Now we have a competitor diagram, and thus obtain the map $\eta \maps q \to d$ as follows.
\[
\begin{tikzcd}
q
\arrow[drr, bend left, "\overline\beta"]
\arrow[dr, dashed, "\exists!\eta"]
\arrow[ddr, bend right, swap, "\overline \alpha"]
\\&
d
\arrow[r, "\overline \gamma"]
\arrow[d, swap, "\overline \delta"]
&
g^*b
\arrow[d, "g^*\psi_v"]
\\&
f^*a
\arrow[r, swap, "f^*\phi_v"]
&
f^*P\phi^*c
\arrow[r, equals]
&
g^*P\psi^*c
\end{tikzcd}
\]
A computation shows this makes the necessary diagrams commute, and uniqueness follows from the universal property from which the map $\eta$ was originally derived, as well as the lifting properties from which $\overline \alpha$ and $\overline \beta$ were derived.
Assume $\A$ has pullbacks and $P$ preserves pullbacks. Let $x \in \X$ and let
\[
\begin{tikzcd}
&
b
\arrow[d, "g"]
\\
a
\arrow[r, swap, "f"]
&
c
\end{tikzcd}
\]
be a diagram in $\A_x$. This is also a diagram in $\A$, so we can take its pullback there.
\begin{equation}
\label{pullback}
\begin{tikzcd}
d
\arrow[d, swap, dashed, "p"]
\arrow[r, "q", dashed]
\arrow[dr, phantom, pos = 0.1, "\lrcorner"]
&
b
\arrow[d, "g"]
\\
a
\arrow[r, swap, "f"]
&
c
\end{tikzcd}
\end{equation}
Apply $P$ to this square.
\[
\begin{tikzcd}
Pd
\arrow[d, swap, "Pp"]
\arrow[r, "Pq"]
\arrow[dr, phantom, pos = 0.1, "\lrcorner"]
&
Pb
\arrow[d, "Pg"]
\\
Pa
\arrow[r, swap, "Pf"]
&
Pc
\end{tikzcd}
=
\begin{tikzcd}
Pd
\arrow[d, swap, "Pp"]
\arrow[r, "Pq"]
\arrow[dr, phantom, pos = 0.1, "\lrcorner"]
&
x
\arrow[d, "id_x"]
\\
x
\arrow[r, swap, "id_x"]
&
x
\end{tikzcd}
\]
Since $P$ preserves pullbacks, then $Pd$ and $Pp$ and $Pq$ must form the pullback of the constant diagram at $x$. Thus $Pd=x$, $Pp = Pq = id_x$, and the entire square (\ref{pullback}) is in the fiber $\A_x$.
Consider the following competitor diagram in $\A_x$, and the map $h \maps e \to d$ derived from the universal property of pullbacks in $\A$.
\[
\begin{tikzcd}
e
\arrow[drr, bend left, "s"]
\arrow[dr, dashed, "\exists!h"]
\arrow[ddr, bend right, swap, "r"]
\\&
d
\arrow[r, "q"]
\arrow[dr, phantom, pos = 0.1, "\lrcorner"]
\arrow[d, swap, "p"]
&
b\arrow[d, "g"]
\\&
a
\arrow[r, swap, "f"]
&
c
\end{tikzcd}
\]
\emph{A priori} we do not know the map $h$ is in the fiber. It could be the case that $Ph$ is a non-trivial endomorphism of $x$.
\begin{align*}
P(h)
&= id_x P(h)
\\&= P(p)P(h)
= P(ph)
\\&= P(r)
= id_x
\end{align*}
So $h$ is in $\A_x$. Uniqueness follows \emph{a fortiori} from uniqueness in $\A$. The fact that the inclusion $\A_x \hookrightarrow \A$ preserves pullbacks follows from the fact that we constructed the pullback in $\A$ to begin with.
\end{proof}
\subsection{Weak pushouts}
Given a fibration between categories with weak pushouts, where the functor preserves weak pushouts, it is straightforward to show that the fibers also have weak pushouts. Include the given diagram in the fiber into the total category, take the weak pushout there, and then factor the legs into their vertical parts. The vertical parts have a common codomain, and they form the legs of the desired square.
The other direction is more difficult. In the strong scenario where we have left adjoints to each reindexing functor, we can adjoint the fiber components of the legs, and push them forward into the fiber over the weak pushout of the base components. Then we can take the weak pushout there, and adjoint back to get maps of the right type. This is a special case of a construction that works for any colimit. However, such left adjoints do not exist even in the case of $\FI_G$. Indeed, an adjunction between $G^n$ and $G^m$ would be an isomorphism. This is the only subsection of this section with merely sufficient conditions, rather than necessary and sufficient ones.
Recall from \cref{sec:GCFI} that an indexed category $\M \maps \X\op \to \Cat$ is \emph{weakly reversible} if for each map $f \maps x \to y$ in $\X$, there is a weak pushout preserving functor $f_! \maps \M x \to \M y$ such that $f_!f^*$ is the identity on objects, along with a natural transformation $\eta^f \maps id_{\M x} \To f^*f_!$.
\begin{prop}
Let $\M \maps \X\op \to \Cat$ be a weakly reversible indexed category with $\X$ and $\M x$ having weak pushouts for each $x \in \X$. Then $\inta \M$ has weak pushouts.
\end{prop}
\begin{proof}
Let $(z,c) \xleftarrow{(g,\ell)} (x,a) \xrightarrow{(f,k)} (y,b)$ be a diagram in $\inta \M$. Take the weak pushout of the base components of this diagram in $\X$. Let $w$ denote the weak pushout object.
\[
\begin{tikzcd}
x
\arrow[r, "f"]
\arrow[d, swap, "g"]
&
y
\arrow[d, dashed, "h"]
\\
z
\arrow[r, swap, "j", dashed]
&
w
\end{tikzcd}
\]
Consider the map $\overline k$ defined as the composite $f_!(a) \xrightarrow{f_!(k)} f_!f^*(b) = b$ in $\M y$, and the analogous map $\overline \ell$ defined as the composite $g_!(a) \xrightarrow{g_!(\ell)} g_!g^*(c) = c$ in $\M z$. Push these forward to $\M w$ by $h_!$ and $j_!$ respectively, then take the weak pushout in $\M w$.
\[
\begin{tikzcd}
&
h_!f_!(a)
\arrow[r, "h_!(\overline k)"]
\arrow[dl, equals]
&
h_!(b)
\arrow[dd, dashed, "\overline m"]
\\
j_!g_!(a)
\arrow[d, "j_!(\ell)"']
\\
j_!(c)
\arrow[rr, dashed, "\overline n"']
&&
d
\end{tikzcd}
\]
Adjoint the maps $\overline m$ and $\overline n$ to $\M y$ and $\M z$ respectively by defining the map $m \maps b \to h^*(d)$ to be the composite $b \xrightarrow{\eta^h_b} h^*h_! (b) \xrightarrow{h^*(\overline m)} h^*(d)$, and similarly the map $n \maps c \to j^*(d)$ to be the composite $c \xrightarrow{\eta^j_c} j^*j_!(c) \xrightarrow{j^*(\overline n)} j^*(d)$. We claim the following diagram is a weak pushout square.
\[
\begin{tikzcd}
(x,a)
\arrow[r, "{(f,k)}"]
\arrow[d, "{(g,\ell)}"']
&
(y,b)
\arrow[d, "{(h,m)}"]
\\
(z,c)
\arrow[r, "{(j,n)}"']
&
(w,d)
\end{tikzcd}
\]
The base component of this diagram is a weak pushout square by construction. The fiber component
\[
\begin{tikzcd}
a
\arrow[r, "k"]
\arrow[d, "\ell"']
&
f^*b
\arrow[r, "f^*m"]
&
f^*h^*d
\arrow[d, "\mu_{f,h}"]
\\
g^*c
\arrow[d, "g^*n"']
&&
(hf)^*d
\\
g^*j^*d
\arrow[r, "\mu_{g,j}"']
&
(jg)^*d
\arrow[ur, equals]
\end{tikzcd}
\]
is also a weak pushout square because the pushforward functors are assumed to preserve weak pushouts. Concluding the desired universal property from that of the base and fibre components is straightforward.
\end{proof}
\bibliographystyle{alpha}
\section{Introduction}
\label{sec:introduction}
Intersections of Schubert varieties in the Grassmannian provide the prototypical examples of many phenomena in geometry, representation theory and combinatorics. Let \( V \) be a finite dimensional complex vector space and \( \Gr(r,V) \), the \emph{Grassmannian} variety of \( r \)-dimensional subspaces of \( V \). To every partition \( \lambda \) with at most \( r \) rows and \( d = \dim V \) columns, and a full flag \( \Ff: \{0 \} = \Ff_0 \subset \Ff_1 \subset \cdots \subset \Ff_d = V \), we can associate the \emph{Schubert variety} \( \Shvar(\lambda,\Ff) \) of subspaces that meet \( \Ff \) in a way prescribed by \( \lambda \) (see Section~\ref{sec:preliminaries}).
The intersection theory of Schubert varieties is governed by the \emph{Littlewood-Richardson coefficients} \( c^\nu_{\lambda\mu} \). For generic choices of flags \( \Ff,\Gg,\Hh \) (and appropriate partitions) the intersection
\[ \Shvar(\lambda,\Ff) \cap \Shvar(\mu,\Gg) \cap \Shvar(\nu^\comp,\Hh) \]
is a set of \( c^\nu_{\lambda\mu} \) points. Here \( \nu^\comp \) is the complement partition (see Section~\ref{sec:standard-tableaux}).
The Littlewood-Richardson coefficients have a multitude of different incarnations. They count \emph{Littlewood-Richardson tableaux} and \emph{dual equivalence classes}. They count the multiplicity of the irreducible \( \gl_r \)-module \( L(\nu) \) in the tensor product \( L(\lambda) \otimes L(\mu) \), and they are the structure constants of the ring of symmetric functions for the basis of Schur polynomials.
More generally one can consider intersections of the form
\begin{equation}
\label{eq:schubert-intersection}
\Shvar(\lambda^{(1)},\Ff^{(1)}) \cap \Shvar(\lambda^{(2)},\Ff^{(2)}) \cap \cdots \cap \Shvar(\lambda^{(n)},
\Ff^{(n)}) \cap \Shvar(\mu^\comp,\Ff^{(\infty)})
\end{equation}
for a sequence of partitions \( \blambda = (\lambda^{(1)},\lambda^{(2)},\ldots,\lambda^{(n)}) \). For generic choices of flags (and appropriate choices of partitions) this intersection will again be finite and counted by Littlewood-Richardson coefficients \( c^\mu_{\blambda} \). This paper will concentrate on the special case when \( \lambda^{(i)} = (1) = \square \) for all \( i \). In that case, when \( \mu \) is a partition of \( n \), \( c^\mu_{\square,\square,\ldots,\square} \) is equal to the number of standard tableaux of shape \( \mu \) (which we will often think of as chains of shapes terminating at \( \mu \)).
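For example, when \( n = 3 \) and \( \mu = (2,1) \), we have \( c^{(2,1)}_{\square,\square,\square} = 2 \), matching the two standard tableaux of shape \( (2,1) \), or equivalently the two chains \( \emptyset \subset (1) \subset (2) \subset (2,1) \) and \( \emptyset \subset (1) \subset (1,1) \subset (2,1) \).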
\subsection{Labelling Schubert intersections}
\label{sec:labell-schub-inters}
The obvious question which arises is whether there is a canonical bijection that realises this coincidence of numbers. In this form, the answer to the question is no. As one varies the flags, there can be monodromy, often the entire symmetric group will act on the fibre. However, implicit in work over the past decade are a number of ways to realise bijections between the above intersections and the set of standard tableaux when the flags are chosen to be osculating flags at real numbers. In short, we aim to show these bijections are actually the same and describe this bijection in elementary geometric terms.
We start with a rational normal curve \( \PP^1 \longrightarrow \PP V \). For any \( z \in \PP^1 \), let \( \Ff(z) \) be the full flag of \emph{osculating} subspaces at \( z \in \PP^1 \), that is, \( \Ff(z)_i \) is the unique \( i \)-dimensional subspace of \( V \) whose projectivisation has maximal order of contact with the curve at \( z \).
The first bijection comes from work of Mukhin, Tarasov and Varchenko~\cite{Mukhin:2009et}. The authors show that the ring of functions on the scheme theoretic intersection \( \Shvar(\blambda,\mu^\comp;z,\infty) \) given in~(\ref{eq:schubert-intersection}), with \( \Ff^{(i)} = \Ff(z_i) \) for a tuple of distinct complex numbers \( z = (z_1,z_2,\ldots,z_n) \), is isomorphic to a certain commutative algebra of operators on
\[ L(\blambda)^{\sing}_\mu = \left[ L(\lambda^{(1)}) \otimes L(\lambda^{(2)}) \otimes \cdots \otimes L(\lambda^{(n)}) \right]^{\sing}_\mu, \]
the space of singular (i.e.\ highest weight) vectors of weight \( \mu \) in a tensor product of irreducible \( \gl_r \)-representations. If \( z \) is taken to be real with \( z_1<z_2<\cdots <z_n \), then in the limit as \( z \to \infty \) (taken in a specified way) this commutative algebra tends to the algebra of \emph{Jucys-Murphy operators}. When \( \blambda = \square^n \) the spectrum of the Jucys-Murphy operators is in canonical correspondence with the set of standard tableaux of shape \( \mu \).
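Recall that the \( k^{\text{th}} \) Jucys-Murphy element is \( X_k = \sum_{i=1}^{k-1} (i\,k) \in \CC[S_n] \); on the seminormal basis vector labelled by a standard tableau \( T \), it acts by the content \( j - i \) of the box \( (i,j) \) of \( T \) containing \( k \), and these joint eigenvalues distinguish the standard tableaux of shape \( \mu \).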
The second bijection comes from the work of Speyer~\cite{Speyer:2014gg}. The author constructs an explicit bijection between the points in~(\ref{eq:schubert-intersection}) and certain \emph{cylindrical growth diagrams}. Again, there is a natural way to place these objects in bijection with standard tableaux of shape \( \mu \).
\begin{Theorem}
\label{thm:labelling-theorem}
The bijections between \( \Shvar(\square^n,\mu^\comp;z,\infty) \) and \( \SYT(\mu) \), the set of standard tableaux of shape \( \mu \), defined by Speyer and Mukhin-Tarasov-Varchenko agree.
\end{Theorem}
\begin{Remark}
\label{rem:general-intersections}
In this paper we only consider intersections of Schubert varieties for sequences of partitions \( \blambda \) where \( \lambda^{(i)} = \square \) for all \( i \); however, the question can be asked for any sequence of partitions. The methods of this paper can be applied in this more general setting; one only needs to understand what combinatorial objects should label the intersection points. If we think of a standard tableau as a chain of partitions, one box added at a time, then the correct generalisation would be a chain of partitions with \( \left| \lambda^{(i)} \right| \) boxes added at the \( i^{\text{th}} \) step, together with the data of a dual equivalence class labelling the \( i^{\text{th}} \) inclusion, in such a way that this dual equivalence class is slide equivalent to \( \lambda^{(i)} \).
\end{Remark}
\subsection{An elementary description}
\label{sec:an-elem-descr}
We give an elementary description of the bijection of Theorem~\ref{thm:labelling-theorem}. We fix an \( n \)-tuple of real numbers \( z = (z_1,z_2,\ldots,z_n) \) such that \( z_1 < z_2 < \cdots < z_n \). Choose a subspace in the intersection
\[ X \in \Shvar(\square^n,\mu^\comp; z,\infty). \]
This point depends, in particular, on \( z_n \in \RR \). We analyse what happens when we take the limit \( X_\infty = \lim_{z_n \to \infty} X \). The limit point exists since the Grassmannian is projective.
\begin{Proposition}
\label{prp:limit-exists-intro}
The limit point \( X_\infty \in \Gr(r,V) \) is contained in the Schubert cell \( \Shvar^{\circ}(\lambda^\comp;\infty) \) for some partition \( \lambda \) obtained from \( \mu \) by removing a single box.
\end{Proposition}
In particular this shows that \( X_\infty \in \Shvar(\square^{n-1},\lambda^\comp;z_1,\ldots,z_{n-1},\infty) \) and by induction we can associate to \( X_\infty \) a standard \( \lambda \)-tableau \( S \). We then associate to \( X \) the unique standard \( \mu \)-tableau which becomes \( S \) upon removing the box containing \( n \).
\begin{Theorem}
\label{thm:alternative-bij}
The above process describes a bijection \( \Shvar(\square^n,\mu^\comp;z,\infty) \longrightarrow \SYT(\mu) \). Furthermore it coincides with the bijection from Theorem~\ref{thm:labelling-theorem}.
\end{Theorem}
\subsection{Method of proof}
\label{sec:method-proof}
We will first show that Speyer's bijection agrees with the bijection described in Section~\ref{sec:an-elem-descr}. To make the connection with Bethe vectors we use an intervening step, the critical points of the master function.
The master function is a certain function whose critical points give rise to Bethe vectors and from which one can also determine the spectrum of the Gaudin Hamiltonians. Mukhin-Tarasov-Varchenko give a map between points in Schubert intersections and critical points. We investigate the compatibility of the process outlined in Section~\ref{sec:an-elem-descr} with this map.
The asymptotics of critical points were investigated by Reshetikhin and Varchenko in~\cite{Reshetikhin:1995vs} and this result was exploited by Marcus~\cite{Marcus:2010vn} to provide a way of labelling critical points by standard tableaux. We will recall this as well as the proofs for the sake of convenience. This description will allow us to more easily compare the MTV and Speyer labellings.
\subsection{Structure of paper}
\label{sec:structure-paper}
In Section~\ref{sec:schubert-intersections} we review the basic definitions of Schubert varieties and outline an elementary procedure which associates a tableau to a point in a Schubert intersection. Sections~\ref{sec:mtv-labelling} and~\ref{sec:speyers-labelling} then define the MTV and Speyer labellings respectively, and in Section~\ref{sec:agre-with-elem} a proof is given that the Speyer labelling coincides with the elementary labelling. Finally, in Sections~\ref{sec:algebr-bethe-ansatz} and~\ref{sec:crit-points-schub} the technology of the algebraic Bethe ansatz is introduced, together with Marcus' analysis of it; this is then used to prove that the Speyer and MTV labellings coincide.
\subsection{Acknowledgements}
\label{sec:acknowledgements}
The author thanks Iain Gordon and Arun Ram for helpful conversations and inspiration.
\section{Schubert intersections}
\label{sec:schubert-intersections}
In this section we give some background on intersections of Schubert varieties, the combinatorics of standard tableaux and use this to define an elementary procedure for labelling the points in certain intersections by standard tableaux.
\subsection{Preliminaries}
\label{sec:preliminaries}
Let \( V \) be a \( d \)-dimensional vector space and \( \Gr(r,d) \) the \emph{Grassmannian} of \( r \)-planes in \( V \). Fix a rational normal curve \( \PP^1 \longrightarrow \PP V \) and for \( z \in \PP^1 \) let \( \Ff(z) \) be the full flag of subspaces of \( V \) osculating at \( z \). For concreteness and without loss of generality, we may choose a basis of \( V \) and identify \( V = \CC_{d-1}[x,y] \), the space of homogeneous degree \( d-1 \) polynomials. \( \Ff([a:b]) \) is then the flag
\[ (bx-ay)^{d-1}\CC_0[x,y] \subset (bx-ay)^{d-2}\CC_1[x,y] \subset \cdots \subset \CC_{d-1}[x,y]. \]
We will often identify \( \CC_{d-1}[x,y] \) with the space \( \CC_{<d}[u] \) of polynomials of degree less than \( d \), using the map \( x^iy^{d-1-i} \mapsto u^i \).
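For example, under this identification the flag osculating at \( 0 = [0:1] \) is
\[ \Ff(0)_i = u^{d-i}\,\CC_{<i}[u], \]
the polynomials divisible by \( u^{d-i} \), while the flag osculating at \( \infty = [1:0] \) is \( \Ff(\infty)_i = \CC_{<i}[u] \), the polynomials of degree less than \( i \).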
Let \( \lambda \) be a partition with at most \( r \) rows and \( d-r \) columns. Given a flag \( \Ff \) of subspaces of \( V \), the \emph{Schubert cell} \( \Shvar^\circ(\lambda;\Ff) \subset \Gr(r,d) \) is the subvariety of subspaces \( X \) such that
\[ \dim(X \cap F_k) = \# \setc{1 \le s \le r}{d-r+s-\lambda_s \le k}. \]
The closure of \( \Shvar^\circ(\lambda;\Ff) \) is denoted \( \Shvar(\lambda;\Ff) \) and is called the \emph{Schubert variety}. It is the subvariety of subspaces \( X \) such that
\[ \dim(X \cap F_k) \ge \# \setc{1 \le s \le r}{d-r+s-\lambda_s \le k}. \]
If \( \Ff = \Ff(z) \) we denote \( \Shvar(\lambda;\Ff(z)) \) by \( \Shvar(\lambda;z) \).
\subsection{Intersections}
\label{sec:intersections}
We will generally be interested in the intersection of \( k \) Schubert varieties. If \( \blambda = (\lambda^{(1)},\lambda^{(2)},\ldots,\lambda^{(k)}) \) is a sequence of partitions and \( z = (z_1,z_2,\ldots,z_k) \) then we will use the notation
\[ \Shvar(\blambda;z) = \bigcap_{i=1}^k \Shvar(\lambda^{(i)};z_i). \]
It is a theorem of Eisenbud and Harris~\cite{Eisenbud:1983fj} that when the \( z_i \) are distinct, the intersection \( \Shvar(\blambda;z) \) has the maximum possible codimension \( \left| \blambda \right| = \sum_{i=1}^k \left|\lambda^{(i)}\right| \), where \( \left| \mu \right| \) is the size of the partition \( \mu \). This result has a strengthening, conjectured by Shapiro-Shapiro and proved by Mukhin-Tarasov-Varchenko.
\begin{Theorem}[\cite{Mukhin:2009cf}]
\label{thm:shaprio-conj}
When \( z = (z_1,z_2,\ldots,z_k) \) is a set of distinct real points and \( \left| \blambda \right| = r(d-r) \), the intersection \( \Shvar(\blambda;z) \) is reduced and consists of finitely many real points of \( \Gr(r,d) \).
\end{Theorem}
We let \( X_n = \setc{ z \in \CC^n}{z_i \neq z_j} \) be the set of \( n \)-tuples of distinct complex numbers, \( X_n(\RR) \) the set of real points and \( X_{n}^< \subset X_n(\RR) \) the set of \( n \)-tuples of real numbers \( z=(z_1,z_2,\ldots,z_n) \) such that \( z_1<z_2<\cdots <z_n \).
\subsection{The Wronskian}
\label{sec:wronskian}
Let \( g_1(u),g_2(u),\ldots,g_k(u) \) be a collection of polynomials in the variable \( u \). Recall the \emph{Wronskian} is the determinant
\begin{equation*}
\Wr(g_1,g_2,\ldots,g_k) = \det
\begin{pmatrix}
g_1(u) & g_2(u) & \cdots & g_k(u) \\
g_1'(u) & g_2'(u) & \cdots & g_k'(u) \\
\vdots & \vdots & \ddots & \vdots \\
g_1^{(k-1)}(u) & g_2^{(k-1)}(u) & \cdots & g_k^{(k-1)}(u)
\end{pmatrix}.
\end{equation*}
\begin{Lemma}
\label{lem:linear-class-wronskian}
Up to a scalar factor, the Wronskian \( \Wr(g_1(u),\ldots,g_k(u)) \) depends only on the subspace of \( \CC[u] \) spanned by the polynomials \( g_i(u) \), and is zero if and only if the polynomials are linearly dependent.
\end{Lemma}
\begin{proof}
The result follows from the fact that differentiation is linear, and that column operations change the determinant by at most a scalar factor.
\end{proof}
This allows us to define the \emph{Wronskian} \( \Wr\map{\Gr(r,d)}{\PP(\CC[u])} \) by \( \Wr(X) = \Wr(f_1,f_2,\ldots,f_r) \) for any basis \( \{f_1,f_2,\ldots,f_r\} \) of \( X \).
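For example, for \( X = \langle 1, u^2 \rangle \in \Gr(2,3) \),
\[ \Wr(1,u^2) = \det \begin{pmatrix} 1 & u^2 \\ 0 & 2u \end{pmatrix} = 2u, \]
so \( \Wr(X) = u \) as a point of \( \PP(\CC[u]) \). The root at \( u = 0 \) reflects a Schubert condition satisfied by \( X \) at \( 0 \): the polynomial \( u^2 \in X \) spans the osculating line \( \Ff(0)_1 \), so \( X \in \Shvar(\square;0) \).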
\subsection{Standard tableaux}
\label{sec:standard-tableaux}
\begin{figure}
\centering
\begin{tikzpicture}
\draw[thick] (0,0) rectangle (4,-2.5);
\draw[step = 0.25,help lines] (0,0) grid (4,-2.5);
\draw[thick] (2.5,0) -- (2.5,-0.75) -- (2,-0.75) -- (2,-1) -- (1.75,-1) -- (1.75,-1.75) -- (1,-1.75) -- (1,-2) -- (0,-2);
\node at (1,-1) {\( \mu \)};
\node at (3,-1.5) {\( \mu^\comp \)};
\end{tikzpicture}
\caption{The complementary partition}
\label{fig:compl-part}
\end{figure}
Given a partition \( \mu \) of \( n \), denote by \( \SYT(\mu) \) the set of standard \( \mu \)-tableaux, that is, the set of fillings of the diagram of shape \( \mu \) using \( 1,2,\ldots,n \) with increasing rows and columns. If \( \mu \) has at most \( r \) parts and at most \( d-r \) columns, then we denote by \( \mu^\comp \) the partition obtained by embedding \( \mu \) into the top left of an \( r \times (d-r) \) rectangle, taking the complement and rotating the resulting shape by \( 180 \) degrees. See Figure~\ref{fig:compl-part}. That is,
\[ \mu^\comp = (d-r-\mu_r,d-r-\mu_{r-1},\ldots,d-r-\mu_1). \]
Repeated application of the Pieri rule, together with Theorem~\ref{thm:shaprio-conj}, implies that for \( z\in X_n(\RR) \),
\[ \# \Shvar(\square^n,\mu^\comp; z,\infty) = \# \SYT(\mu). \]
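For example, if \( r = 2 \), \( d = 6 \) and \( \mu = (2,1) \), then \( \mu^\comp = (4-1,4-2) = (3,2) \), and for distinct real \( z_1,z_2,z_3 \) the intersection \( \Shvar(\square^3,(3,2);z,\infty) \) consists of \( \#\SYT(2,1) = 2 \) points.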
For notational simplicity we will set
\[ \Shvar(z)_\mu = \Shvar(\square^n,\mu^\comp; z,\infty) = \Shvar(\square;z_1)\cap \Shvar(\square;z_2) \cap\cdots\cap\Shvar(\square;z_n)\cap \Shvar(\mu^\comp;\infty). \]
\subsection{An explicit bijection}
\label{sec:an-expl-biject}
In this section we will describe an explicit and elementary way to realise the numerical coincidence above as a bijection \( \Shvar(z)_\mu \longrightarrow \SYT(\mu) \). Fix an \( n \)-tuple of real numbers \( z \in X_n^< \) and fix a subspace \( X \in \Shvar(z)_\mu \). We will associate to it a standard \( \mu \)-tableau. We will do this by induction on \( n \).
For \( n = 1 \), there is a single point in \( \Shvar(\square,\square^\comp;z,\infty) \) and a single \( \square \)-tableau, so the labelling is completely determined. For \( n > 1 \) let \( X \in \Shvar(z)_\mu \). We fix the first \( n-1 \) marked points at \( z_1,\ldots,z_{n-1} \), the last marked point at \( \infty \) and let the \( n^{\text{th}} \) marked point vary in the following sense.
Let \( z(s) = (z_1,z_2,\ldots,z_{n-1},s) \); then we have the family \( \Shvar(z(s))_\mu \) over \( P_n = \RR - \{ z_1, z_2, \ldots, z_{n-1} \} \). Let \( X(s) \) be a local section of \( \Shvar(z(s))_\mu \) such that \( X(z_n) = X \). Let
\begin{equation*}
X_\infty = \lim_{s \rightarrow \infty} p X(s) \in \Gr(r,d),
\end{equation*}
where \( p \) is the projection to \( \Gr(r,d) \).
\begin{Lemma}
\label{lem:limit-one-partition-smaller}
The limit point \( X_\infty \) exists and is contained in \( \Shvar^\circ(\lambda^\comp;\infty) \) for some partition \( \lambda \subset \mu \) such that \( \left| \lambda \right| + 1 = \left| \mu \right| \). In particular
\begin{equation*}
X_\infty \in \Shvar(z_1,z_2,\ldots,z_{n-1})_{\lambda}.
\end{equation*}
\end{Lemma}
\begin{proof}
Since the Grassmannian is projective, \( X_\infty \) must exist and since a Schubert variety is a union of smaller Schubert cells,
\begin{equation*}
\Shvar(\mu^\comp;\infty) = \bigsqcup_{\nu \supseteq \mu^\comp} \Shvar^\circ(\nu;\infty),
\end{equation*}
we must have that \( X_\infty \in \Shvar^\circ(\lambda^\comp;\infty) \) for some partition \( \lambda \) such that \( \lambda^\comp \supseteq \mu^\comp \), or equivalently, such that \( \lambda \subseteq \mu \).
Since each of the varieties \( \Shvar(\square;z_i) \) is closed, \( X_\infty \) must lie in the intersection \( \Shvar(z_1,z_2,\ldots,z_{n-1})_\lambda = \Shvar(\square^{n-1},\lambda^\comp;z_1,\ldots,z_{n-1},\infty) \). However this is empty unless \( \left| \lambda \right| \ge n-1 \). Hence either \( \lambda = \mu \) or \( \lambda \) is obtained by removing a single box from \( \mu \). To decide whether \( \abs{\lambda} = n \) or \( n-1 \) we use the Wronskian. By \cite[Lemma~4.2]{Mukhin:2009et}
\begin{equation*}
\Wr(X(s))(u) = (u-s)\prod_{a=1}^{n-1} (u - z_a).
\end{equation*}
By continuity, \( \Wr(X_\infty) = \prod_{a=1}^{n-1}(u-z_a) \) in \( \PP(\CC[u]) \). It is straightforward to see from the definition that if \( Y \in \Shvar^\circ(\nu^\comp;\infty) \) then \( \deg \Wr(Y) = \left| \nu \right| \). We have shown that \( \deg \Wr(X_\infty) = n-1 \), so \( \left| \lambda \right| = n-1 \).
\end{proof}
Summarising, \( X_\infty \in \Shvar(z_1,\ldots,z_{n-1})_\lambda \) and \( \left| \lambda \right| = n-1 \). Thus by induction we can assign a standard \( \lambda \)-tableau to \( X_\infty \); call this tableau \( T' \). Let \( \EL(X) \) be the unique standard \( \mu \)-tableau which restricts to \( T' \) on \( \lambda \subset \mu \).
\begin{Theorem}
\label{thm:labelling}
The map \( \EL\map{\Shvar(z)_\mu}{\SYT(\mu)}; X \mapsto \EL(X) \) is a bijection.
\end{Theorem}
We will prove this theorem by showing that it coincides with two maps, each of which is known to be a bijection.
\section{The MTV labelling}
\label{sec:mtv-labelling}
Now we describe a labelling of the points of \( \Shvar(z)_\mu \) by standard tableaux implicit in the work of Mukhin-Tarasov-Varchenko using the spectrum of \emph{Bethe algebras} which the aforementioned authors have identified with intersections of Schubert varieties. The idea is that these algebras degenerate to the algebra of Jucys-Murphy operators in a certain limit.
\subsection{Bethe algebras}
\label{sec:bethe-algebras}
Let \( \glr \) be the general linear Lie algebra and \( \glr[t] \) the \emph{current algebra} of \( \glr \)-valued polynomials in \( t \). For an element \( g \in \glr \), let \( g(u) = \sum_{s \ge 0} gt^su^{-s-1} \) be a formal power series with values in \( U(\glr[t]) \). The differential operator \( \Dd = \det \left( \delta_{ij}\partial_u - e_{ji}(u) \right) \) is defined by expansion along the first column and can be expanded in powers of \( \partial_u \)
\[ \Dd = \sum_{i=0}^r \sum_{s \ge 0} B_{is}u^{-s}\partial_u^{r-i}, \text{ for } B_{is} \in U(\glr[t]). \]
The \emph{universal Bethe algebra} is the subalgebra \( \uBethe \subseteq U(\glr[t]) \) generated by the coefficients \( B_{is} \). The algebra \( \uBethe \) is a commutative subalgebra of \( U(\glr[t])^{\glr} \), i.e. it commutes with \( U(\glr) \subset U(\glr[t]) \) (see \cite[Propositions~8.2 and~8.3]{Mukhin:2006vt}).
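For instance, when \( r = 2 \) the expansion along the first column (with the entries from the first column written on the left) gives
\[ \Dd = (\partial_u - e_{11}(u))(\partial_u - e_{22}(u)) - e_{12}(u)e_{21}(u), \]
and the coefficients \( B_{is} \) are read off by expanding this expression in powers of \( \partial_u \) and \( u^{-1} \).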
Given a \( \glr \)-module \( M \) and \( z \in \CC \), let \( M(z) \) be the \( \glr[t] \)-module where \( t \) acts as multiplication by \( z \). Let \( L(\lambda) \) be the finite dimensional, irreducible, highest-weight module for \( \glr \), corresponding to the partition \( \lambda \). Then for a tuple of distinct complex numbers \( z \in X_n \), and a tuple of partitions \( \blambda = (\lambda^{(1)},\lambda^{(2)},\ldots,\lambda^{(n)}) \), the algebra \( \uBethe \) acts on
\[ L(\blambda;z)^\sing_\mu = \left[ L(\lambda^{(1)})(z_1) \otimes L(\lambda^{(2)})(z_2) \otimes \cdots \otimes L(\lambda^{(n)})(z_n) \right]^{\sing}_{\mu}, \]
the space of highest-weight vectors of weight \( \mu \) in the tensor product. We will mostly be interested in the case \( \blambda = \square^n \), in which case we denote the above space by \( L(z)^\sing_\mu \). The image of \( \uBethe \) in \( \End(L(z)^\sing_\mu) \) will be denoted \( \uBethe(z)_\mu \).
\subsection{The spectrum and the Gaudin Hamiltonians}
\label{sec:spectr-gaud-hamilt}
The spectrum of the algebra \( \uBethe(z)_\mu \) will be denoted \( \bspec(z)_\mu \). Let \( V = L(\square) \); then \( L(z)^\sing_\mu \cong [V^{\otimes n}]^\sing_\mu \) as a \( U(\glr)^{\otimes n} \)-module. Define operators
\[ H_a(z) = \sum_{b \neq a} \frac{(a,b)}{z_a-z_b} \text{ for } 1 \le a \le n, \]
where \( (a,b) \) is the transposition swapping the \( a^\th \) and \( b^{\th} \) tensor factors of \( V^{\otimes n} \). The symmetric group \( \Sn \) acts on \( V^{\otimes n} \) and preserves the subspace \( S_\mu = [V^{\otimes n}]^\sing_\mu \) which is a copy of the irreducible \( \Sn \)-module corresponding to the partition \( \mu \).
\begin{Theorem}[{\cite[Theorem~3.2 and~Corollary~3.3]{Mukhin:2010ky}}]
\label{thm:propterties-bethe}
For generic \( z \in X_n \) (including all real \( z \in X_n(\RR) \)),
\begin{enumerate}
\item the algebra \( \uBethe(z)_\mu \) is generated by the operators \( H_a(z) \), and
\item the algebra \( \uBethe(z)_\mu \) has simple spectrum, that is, \( \bspec(z)_\mu \) is a reduced set of \( \dim S_\mu = \# \SYT(\mu) \) points.
\end{enumerate}
\end{Theorem}
The coincidence \( \# \bspec(z)_\mu = \#\SYT(\mu) \) in the above theorem can again be realised using a limiting process. Let \( z \in X_n^< \). Choose a path \( z(t) = (z_1(t), z_2(t), \ldots,z_n(t)) \in X_n^< \) such that
\begin{enumerate}
\item \( z(1) = z \),
\item \( \lim_{t \to \infty} z_i(t) = \infty \), and
\item \( \lim_{t \to \infty} z_i(t)/z_{i+1}(t) = 0 \).
\end{enumerate}
Then, in this limit, \( \lim_{t\to\infty} z_a(t)H_a(z(t)) = L_a = \sum_{b < a} (a,b) \), the \( a^\th \) \emph{Jucys-Murphy operator} on \( S_\mu \). These operators are well known to have simple spectrum on \( S_\mu \), and their spectrum can be canonically identified with \( \SYT(\mu) \) in the following way. If \( v \in S_\mu \) is a joint eigenvector such that \( L_av = c_av \) for all \( a \), then we associate to the eigenspace \( \CC v \) the unique standard tableau \( T \) in which the box containing \( a \) has content \( c_a \). Thus we obtain a bijection by parallel transport
\[ p:\bspec(z)_\mu \longrightarrow \JMspec_\mu \cong \SYT(\mu) \]
where \( \JMspec_\mu \) is the joint spectrum of the Jucys-Murphy operators on \( S_\mu \). Since the parameter space is contractible, this does not depend on the choice of path.
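For example, take \( n = 3 \) and \( \mu = (2,1) \), so \( S_\mu \) is the two-dimensional irreducible \( S_3 \)-module. The joint eigenvalues of \( (L_1,L_2,L_3) \) are \( (0,1,-1) \) and \( (0,-1,1) \), labelled respectively by the standard tableaux corresponding to the chains \( \emptyset \subset (1) \subset (2) \subset (2,1) \) and \( \emptyset \subset (1) \subset (1,1) \subset (2,1) \): in the first tableau the box containing \( 2 \) has content \( 1 \), in the second it has content \( -1 \).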
\subsection{Isomorphism to Schubert intersections}
\label{sec:isom-schub-inters}
Let \( \chi \in \bspec(z)_\mu \) be a closed point and identify this with a functional \( \chi\map{\uBethe(z)_\mu}{\CC} \). Consider the differential operator on \( \CC[u] \) given by
\[ \chi(\Dd) = \sum_{i=0}^r \sum_{s \ge 0} \chi(B_{is})u^{-s}\partial_u^{r-i} \]
and denote the kernel of this operator by \( X_\chi = \ker \chi(\Dd) \). By~\cite[Lemma~5.6]{Mukhin:2004ks}, \( X_\chi \) is an \( r \)-dimensional subspace of \( \CC_{<d}[u] \) and moreover \( X_\chi \in \Shvar(z)_\mu \subset \Gr(r,d) \).
\begin{Theorem}[{\cite[Theorem~5.13]{Mukhin:2009et}}]
\label{thm:mtv-isomorphism}
The map \( \kappa_z\map{\bspec(z)_\mu}{\Shvar(z)_\mu} \) given by \( \kappa_z(\chi) = X_\chi = \ker \chi(\Dd) \) is an isomorphism of schemes.
\end{Theorem}
For real \( z \) such that \( z_1<z_2< \cdots < z_n \), we obtain a bijection \( \MTV\map{\Shvar(z)_\mu}{\SYT(\mu)} \) by \( \MTV = p\circ\kappa_z^{-1} \).
\section{Speyer's labelling}
\label{sec:speyers-labelling}
A third labelling of \( \Shvar(z)_\mu \) is present in the work of Speyer~\cite{Speyer:2014gg}. The definition is somewhat involved and is described below in as much detail as will be needed later. One important point is that Speyer in fact describes many possible labellings and we choose one that is natural in a particular sense explained later.
\subsection{Speyer's flat families}
\label{sec:spey-flat-famil}
Let \( \overline{M}_{0,k} \) be the moduli space of stable rational curves with \( k \) marked points. It has a dense open set \( M_{0,k} \) consisting of those curves with a single irreducible component. Fix a curve \( C \in M_{0,k} \) with marked points \( (z_1,z_2,\ldots,z_k) \) and a three element set \( A = \{ 1 \le i_0 < i_1 < i_{\infty} \le k \} \). Let \( \phi_A(C):\PP^1 \longrightarrow \PP^1 \) be the unique isomorphism such that \( \phi_A(C)(z_{i_0})=0 \), \( \phi_A(C)(z_{i_1})=1 \), and \( \phi_A(C)(z_{i_{\infty}}) = \infty \). For each three element set \( A \), we denote a copy of the Grassmannian \( \Gr(r,d)_A = \Gr(r,d) \). The map \( \phi_A(C) \) induces an isomorphism
\[ \phi_A(C): \Gr(r,d) \longrightarrow \Gr(r,d)_A \]
by \( [x:y] \mapsto \phi_A(C)([x:y]) \). Speyer's family \( \Gg(r,d) \) is the closure of the image of
\begin{equation*}
\begin{tikzcd}[row sep=0.7]
M_{0,k} \times \Gr(r,d) \arrow[rr] && \overline{M}_{0,k} \times \prod_{\#A=3} \Gr(r,d)_A \\
(C,X) \arrow[rr,mapsto] && (C,\phi_A(C)(X)).
\end{tikzcd}
\end{equation*}
The product runs over all three element subsets of \( [k] \). Given a sequence of partitions \( \blambda = (\lambda^{(1)},\lambda^{(2)},\ldots,\lambda^{(k)}) \) we also define the family \( \Ss(\blambda) = \Gg(r,d)\cap\bigcap_{a=1}^{k}\Shvar(\lambda^{(a)};z_a) \).
\begin{Theorem}[\cite{Speyer:2014gg}]
\label{thm:speyer-flatness-cm}
The families \( \Gg(r,d) \) and \( \Ss(\blambda) \) are flat and Cohen-Macaulay over \( \overline{M}_{0,k} \). Furthermore, if \( \abs{\blambda}=r(d-r) \) then \( \Ss(\blambda)(\RR) \) is a topological covering of \( \overline{M}_{0,k}(\RR) \).
\end{Theorem}
Speyer makes a detailed analysis of the fibres of these families which we summarise here. Given a curve \( C \in \overline{M}_{0,k} \), a \emph{node labelling} for \( C \) is a function \( \nu \) which assigns to every pair \( (C_i,x) \) of an irreducible component \( C_i \) of \( C \) and a node \( x \in C_i \), a partition \( \nu(C_i,x) \) with at most \( r \) rows and \( d-r \) columns, in such a way that if \( x \in C_i \cap C_j \) then \( \nu(C_i,x)^\comp = \nu(C_j,x) \). Denote the set of node labellings by \( \ttN_C \).
Let \( C_i \subset C \) be an irreducible component. For \( a \in [k] \), let \( d_i(a) \in C_i \) be either the point marked by \( a \) if it is on \( C_i \), or the node by which the marked point is connected to \( C_i \). For a three element set \( A = \{ a_0, a_1, a_\infty \} \subset [k] \) define a map \( \phi_{A,i}\map{\PP^1}{\PP^1} \) that maps \( d_i(a_0) \) to \( 0 \), \( d_i(a_1) \) to \( 1 \) and \( d_i(a_\infty) \) to \( \infty \). We also have an associated map \( \phi_{A,i}\map{\Gr(r,d)}{\Gr(r,d)_A} \). We obtain an embedding \( \Gr(r,d) \hookrightarrow \prod_{\#d_i(A)=3} \Gr(r,d)_A \) given by \( X \mapsto (\phi_{A,i}(X))_{A} \), where the product is over all three element sets \( A \subset [k] \), such that \( \#d_i(A)=3 \). Denote the image by \( \Gr(r,d)_{C_i} \). We will use notation of the form \( \Shvar(\lambda;z)_{C_i} \subset \Gr(r,d)_{C_i} \).
\begin{Theorem}[{\cite{Speyer:2014gg}}]
\label{thm:speyer-fibre}
For a stable curve \( C \in \overline{M}_{0,k} \) with irreducible components \( C_1,C_2,\ldots, C_l \) the fibres of the families \( \Gg(r,d) \) and \( \Ss(\blambda) \) are given by
\begin{align*}
\Gg(r,d)(C) &= \bigcup_{\nu \in \ttN_C} \prod_{i} \bigcap_{d \in D_i} \Shvar(\nu(C_i,d);d)_{C_i}, \\
\Ss(\blambda)(C) &= \bigcup_{\nu \in \ttN_C} \prod_{i} \left( \bigcap_{d \in D_i} \Shvar(\nu(C_i,d);d)_{C_i} \cap \bigcap_{p \in P_i} \Shvar(\lambda^{(p)};p)_{C_i} \right),
\end{align*}
where \( D_i \) is the set of nodes on the component \( C_i \) and \( P_i \) is the set of marked points on \( C_i \).
\end{Theorem}
\subsection{Labelling the fibre}
\label{sec:labelling-fibre}
Now we set \( k=n+1 \) and \( \blambda = (\square^n,\mu^\comp) \) for a partition \( \mu \) of \( n \). Fix \( z \in X_n^< \) and identify this with the stable curve \( C \) with marked points \( z_1,z_2,\ldots,z_n,\infty \) on a single irreducible component. Curves of this type form a connected component \( \mathcal{O} \subset M_{0,n+1}(\RR) \subset \overline{M}_{0,n+1}(\RR) \). We will use the covering \( \Ss(\square^n,\mu^\comp)(\RR) \) of \( \overline{M}_{0,n+1}(\RR) \) to label the points of \( \Shvar(z)_\mu \) by \( \SYT(\mu) \). At the point \( C \), the fibre of \( \Ss(\square^n,\mu^\comp) \) is isomorphic to \( \Shvar(z)_\mu \) by construction.
For \( 1 \le q < n \), choose a generic point \( C_q \in \overline{M}_{0,n+1}(\RR) \), in the boundary of this connected component, where the points marked by \( 1,2,\ldots,q \) are on a single irreducible component and the points marked by \( q+1,q+2,\ldots,n,\infty \) on the second component. Now choose any path in \( \overline{\mathcal{O}} \) from \( C \) to \( C_q \) and consider the unique lift of this path to \( \Ss(\square^n,\mu^\comp)(\RR) \) starting at \( X \). By Theorem~\ref{thm:speyer-fibre}, the endpoint of this path over \( C_q \) determines a node labelling \( \nu \) of \( C_q \). Let \( C_q' \) be the irreducible component containing the point marked by \( \infty \) and \( d \in C_q' \) the unique nodal point. Let \( \mu_q = \nu(C'_q,d) \). Speyer shows that \( \mu_q \subset \mu_{q+1} \) and that \( \abs{\mu_q}=q \). Let \( \Sp(X) \) be the standard \( \mu \)-tableau determined by the inclusions
\[ \emptyset \subset \mu_1 \subset \mu_2 \subset \cdots \subset \mu_{n-1} \subset \mu. \]
Speyer also shows the resulting map \( \Sp\map{\Shvar(z)_\mu}{\SYT(\mu)} \) is a bijection.
\begin{Remark}
\label{rem:what-speyer-does}
In fact Speyer constructs a bijection to a slightly different combinatorial set: the set of \emph{dual equivalence growth diagrams} of a certain shape. These objects are in bijection with standard tableaux, however there is a choice involved that is not entirely natural. The choice we have made above is the only one for which Theorem~\ref{thm:labelling-theorem} is true and it is natural in this sense.
\end{Remark}
\subsection{Associahedra}
\label{sec:associahedra}
The variety \( \M[k](\RR) \) is a CW-complex and is tiled by associahedra of dimension \( k-3 \). For example, when \( k=5 \), the variety \( \M[5](\RR) \) is tiled by \( 2 \)-associahedra, i.e. by pentagons. For each connected component \( \Oo \subset \oM(\RR) \), its closure \( \overline{\Oo} \subset \M[k](\RR) \) is such an associahedron. We can lift this CW-structure and the tiling to \( \Ss(\blambda)(\RR) \). The connected components of \( \oM[k](\RR) \), and thus the associahedra tiling \( \M[k](\RR) \) are labelled by circular orderings of \( 1,2,\ldots,k \). If \( \Theta \subset \M[k](\RR) \) is the associahedron corresponding to curves where the points marked by \( 1,2,\ldots,k \) are in increasing order, then we denote by \( \Theta_{pq} \) the facet of \( \Theta \) determined by curves with two irreducible components and where the points marked by \( p,p+1,\ldots,q-1 \) are on one of these components. Since each associahedron is simply connected, the process described in Section~\ref{sec:labelling-fibre} shows that the associahedra in \( \Ss(\square^n,\mu^\comp) \) lying above \( \Theta \subset \M(\RR) \) are labelled by \( \SYT(\mu) \).
\section{Agreement with elementary labelling}
\label{sec:agre-with-elem}
In this section we prove that the labellings of \( \Shvar(z)_\mu \) described in Sections~\ref{sec:schubert-intersections} and~\ref{sec:speyers-labelling} agree. The proof proceeds by directly checking that Speyer's definition is compatible with the limiting process from Section~\ref{sec:an-expl-biject}.
\subsection{Levinson's result}
\label{sec:levinson-result}
First, we briefly outline a result of Levinson~\cite{Levinson:2017fp} that will be used in Section~\ref{sec:coninc-with-spey}.
Recall that \( \Ss(\blambda,\square) \) is a subvariety of \( \overline{M}_{0,n+1}\times\prod_{A} \Gr(r,d)_A \), where \( A \) ranges over three element subsets of \( [n+1] \). Let \( \pi \) be the projection onto \( \prod_{A \subset [n]} \Gr(r,d)_A \). That is, \( \pi \) projects onto those Grassmannians \( \Gr(r,d)_A \) for subsets \( A \) which do not contain \( n+1 \).
Suppose \( \abs{\blambda} = r(d-r) - 1 \) and let \( c_{n+1}\map{\M}{\M[n]} \) be the contraction map at the point marked by \( n+1 \) (forgetting the point marked by \( n+1 \) and contracting any unstable irreducible components). The morphism \( c_{n+1} \) allows us to think of \( \Ss(\blambda,\square) \) as a family over \( \M[n](\CC) \) (rather than \( \M(\CC) \)).
\begin{Theorem}[{\cite[Theorem~2.8]{Levinson:2017fp}}]
\label{thm:Levinson}
The map \( \pi \) produces an isomorphism onto \( \Ss(\blambda) \) and we have the following commutative diagram,
\begin{center}
\begin{tikzcd}
\Ss(\blambda,\square) \arrow[r,"\pi"] \arrow[d] & \Ss(\blambda) \arrow[d] \\
\M(\CC) \arrow[r,"c_{n+1}"] & \M[n](\CC).
\end{tikzcd}
\end{center}
Thus \( \pi \) is an isomorphism of families over \( \M[n](\CC) \).
\end{Theorem}
\subsection{The labellings agree}
\label{sec:coninc-with-spey}
We show in this section that the labelling of points described in Section~\ref{sec:an-expl-biject} coincides with Speyer's labelling, that is, \( \EL = \Sp \). For clarity of exposition, the bulk of the proof is organised into a series of lemmas below. Let \( X \in \Shvar(z)_\mu \) and let \( T = \Sp(X) \in\SYT(\mu) \). We will use the notation \( X(s), X_\infty \in \Gr(r,d) \) from Section~\ref{sec:an-expl-biject}. We use \( T|_{n-1} \) to denote the tableau obtained from \( T \) by removing the box containing \( n \). We let \( \Shvar(\square^n,\mu^\comp) \longrightarrow \oM \) be the finite map whose fibre over the point \( z = (z_1,z_2,\ldots,z_n,\infty) \) is \( \Shvar(z)_\mu \).
\begin{Lemma}
\label{lem:Xinfty-in-lambda}
Let \( \lambda = \mathrm{sh}(T|_{n-1}) \), then \( X_\infty = \lim_{s \to \infty} X(s) \in \Shvar^\circ(\lambda^\comp;\infty) \).
\end{Lemma}
\begin{proof}
Let \( \iota_\mu \) be the inclusion \( \Shvar(\square^n,\mu^\comp) \hookrightarrow \Ss(\square^n,\mu^\comp) \). That is, for \( (C,X) \) a point of \( \Shvar(\square^n,\mu^\comp) \subseteq \oM \times \Gr(r,d) \),
\begin{equation*}
\iota_\mu(C,X) = (\phi_A(C)(X))_{A \subset [n+1]}.
\end{equation*}
If \( p \) is the projection \( \Shvar(\square^n,\mu^\comp) \rightarrow \Gr(r,d) \) and \( p_{\{ 1,2,3 \}} \) is the map \( \Ss(\square^n,\mu^\comp) \rightarrow \Gr(r,d) \) defined by \( p_{\{1,2,3\}}(C,(E_A))=\phi_{\{1,2,3\}}(C)^{-1}(E_{\{1,2,3\}}) \), then we have a commutative diagram
\begin{equation*}
\begin{tikzcd}
& \Gr(r,d) \\
\Shvar(\square^n,\mu^\comp) \arrow[d] \arrow[r,hook,"\iota_\mu"] \arrow[ur,"p"]
& \Ss(\square^n,\mu^\comp) \arrow[d] \arrow[u,"p_{\{ 1,2,3 \}}"'] \\
\oM \arrow[r,hook] & \M.
\end{tikzcd}
\end{equation*}
Let \( C \) be the stable curve with marked points \( z \) and \( C(s) \) the family of stable curves with marked points \( (z_1,\ldots,z_{n-1},s,\infty) \), so \( C = C(z_n) \). We have
\begin{align*}
X_\infty &= \lim_{s \rightarrow \infty} p(C(s),X(s)) \\
&= \lim_{s \rightarrow \infty} p_{\{ 1,2,3 \}} \iota_\mu (C(s),X(s)) \\
&= p_{\{ 1,2,3 \}}Y,
\end{align*}
where \( Y = \lim_{s \rightarrow \infty} \iota_\mu (C(s),X(s)) \).
The point \( Y \) lies over the stable curve \( \lim_{s\to \infty} C(s) \), which has two components: \( C_1 \), with marked points \( z_1,z_2,\ldots,z_{n-1} \) and a node at \( \infty \); and \( C_2 \), with the points marked by \( n \) and \( \infty \) placed at \( 1 \) and \( \infty \) respectively, and a node at \( 0 \). Thus by Theorem~\ref{thm:speyer-fibre},
\begin{equation*}
Y \in \Shvar(\square^{n-1},\lambda^\comp;z_1,\ldots,z_{n-1},\infty)_{C_1} \times \Shvar(\lambda,\square,\mu^\comp;0,1,\infty)_{C_2}.
\end{equation*}
The partition \( \lambda \) appearing in the node labelling must be \( \mathrm{sh}(T|_{n-1}) \) since we assumed \( T = \Sp(X) \). Now \( p_{\{ 1,2,3 \}} \) is simply projection onto the first factor and is an isomorphism onto \( \Gr(r,d) \) so \( X_\infty = p_{\{ 1,2,3 \}}Y \in \Shvar(\lambda^\comp;\infty) \). However \( X_\infty \notin \Shvar(\nu^\comp;\infty) \) for any \( \nu \) such that \( \abs{\nu} < \abs{\lambda} \) and so we must have that \( X_\infty \in \Shvar^\circ(\lambda^\comp;\infty) \).
\end{proof}
\begin{figure}
\centering
\begin{tikzcd}[column sep=17pt]
& & \Gr(r,d) & & \\ \\
\Shvar(\square^{n}, \mu^\comp) \arrow[r,hook,"\iota_\mu"] \arrow[d] \arrow[uurr,"p"] &
\Ss(\square^{n}, \mu^\comp) \arrow[r,"\pi"] \arrow[d] &
\Ss(\square^{n-1},\mu^\comp) \arrow[d] \arrow[uu,"p_{\set{1,2,3}}"'] &
\Ss(\square^{n-1},\lambda^\comp) \arrow[l,hook',"\zeta"'] \arrow[d] &
\Shvar(\square^{n}, \lambda^\comp) \arrow[l,hook',"\iota_\lambda"'] \arrow[d] \arrow[uull,"p'"'] \\
\oM \arrow[r,hook] & \M \arrow[r] & \M[n] & \M[n] \arrow[l] & \oM[n] \arrow[l,hook']
\end{tikzcd}
\caption{The relationship between various projections}
\label{fig:big-comm-diagram}
\end{figure}
Our aim will now be to calculate the Speyer labelling of the point \( X_\infty \) in \( \Ss(\square^{n-1},\lambda^\comp) \). However we only have information about the labelling of the points \( X(s) \) in \( \Ss(\square^n,\mu^\comp) \). To relate these two covering spaces we will use Theorem~\ref{thm:Levinson}. With this theorem we produce the large commutative diagram in Figure~\ref{fig:big-comm-diagram}.
In Figure~\ref{fig:big-comm-diagram} \( \iota_\mu \) is the inclusion \( \Shvar(\square^n,\mu^\comp) \hookrightarrow \Ss(\square^n,\mu^\comp) \) described above. The inclusion \( \iota_\lambda \) is defined similarly. The morphism \( \pi \) is the isomorphism appearing in Theorem~\ref{thm:Levinson} and the inclusion \( \zeta \) is induced by the inclusion of \( \Shvar(\lambda^\comp;\infty) \) into \( \Shvar(\mu^\comp;\infty) \).
\begin{Lemma}
\label{lem:tracing-Xinfty}
Let \( Y = \lim_{s\rightarrow \infty} \iota_\mu(C(s),X(s)) \), and let \( C_\infty \) be the stable curve with marked points \( (z_1,z_2,\ldots,z_{n-1},\infty) \) (the component \( C_1 \) as in the proof of Lemma~\ref{lem:Xinfty-in-lambda}). Then
\begin{equation*}
Y = \pi^{-1}\zeta\iota_{\lambda}(C_\infty,X_\infty).
\end{equation*}
\end{Lemma}
\begin{proof}
We will show \( p_{\{1,2,3\}}\pi Y = p_{\{1,2,3\}}\zeta\iota_\lambda(C_\infty,X_\infty) \). Since \( p_{\{1,2,3\}} \) is injective on fibres, this is enough to prove the Lemma.
This amounts to tracing \( X_\infty \) around the diagram. By commutativity of the diagram,
\begin{align*}
p_{\{1,2,3\}}\zeta\iota_{\lambda}(C_\infty,X_\infty) &= p'(C_\infty,X_\infty) = X_\infty.
\end{align*}
Now
\begin{align*}
p_{\{1,2,3\}}\pi Y &= p_{\{1,2,3\}}\pi\lim_{s \rightarrow \infty} \iota_\mu(C(s),X(s)) \\
&= \lim_{s \rightarrow \infty} p_{\{1,2,3\}}\pi\iota_\mu(C(s),X(s)) \\
&= \lim_{s \rightarrow \infty} p(C(s),X(s)) = X_\infty. \qedhere
\end{align*}
\end{proof}
\begin{Lemma}
\label{lem:proj-of-generic-point}
Let \( \Theta \) be the associahedron in \( \Ss(\square^{\abs{\nu}},\nu^\comp) \) labelled by \( S \in \SYT(\nu) \). For generic \( E \in \Theta_{1q} \)
\begin{equation*}
p_{\{1,2,3\}}E \in \Shvar^\circ(\tau^\comp;\infty),
\end{equation*}
where \( \tau = \mathrm{sh}(S|_{q}) \).
\end{Lemma}
\begin{proof}
This is a direct application of Theorem~\ref{thm:speyer-fibre}, which says that if \( E \) is generic then
\begin{equation*}
E \in \Shvar(\square^q,\tau^\comp;u_1,\ldots,u_q,\infty)_{C_1} \times \Shvar(\tau,\square^{\abs{\nu} - q},\nu^\comp;0,u_{q+1},\ldots,u_{\abs{\nu}},\infty)_{C_2}.
\end{equation*}
Then \( p_{\{1,2,3\}} \) is projection onto the first factor.
\end{proof}
\begin{Remark}
\label{rem:how-generic-for-prj-lemma}
The proof of Lemma~\ref{lem:proj-of-generic-point} shows in fact we can make the stronger assumption that \( E \) is a generic point of \( \Theta_{1p} \cap \Theta_{1q} \) as long as \( p \geq q \).
\end{Remark}
\begin{Lemma}
\label{lem:identification-of-action-on-cell}
Let \( \Theta \) be the \( (n-2) \)-associahedron in \( \Ss(\square^n,\mu^\comp) \) labelled by \( T \) and let \( \tilde{\Theta} \) be the \( (n-3) \)-associahedron in \( \Ss(\square^{n-1},\lambda^\comp) \) containing \( \iota_\lambda(C_\infty, X_\infty) \). Then \( \pi^{-1}\zeta(\tilde{\Theta}) = \Theta_{1n} \).
\end{Lemma}
\begin{proof}
Since the maps downstairs in Figure~\ref{fig:big-comm-diagram} are all cell maps, the maps upstairs must also be cell maps. Hence \( \pi^{-1}\zeta(\tilde{\Theta}) \) must be a face \( \Theta_{ij}' \) of some \( (n-2) \)-associahedron \( \Theta' \) in \( \Ss(\square^n,\mu^\comp) \).
Thus \( \Theta_{ij}' \) must contain the point \( \pi^{-1}\zeta\iota_\lambda (C_\infty,X_\infty) \). By Lemma~\ref{lem:tracing-Xinfty}
\begin{equation*}
\pi^{-1}\zeta\iota_\lambda (C_\infty,X_\infty) = Y = \lim_{s \to \infty} \iota_\mu(C(s),X(s)).
\end{equation*}
We know \( \iota_\mu(C(s),X(s)) \in \Theta \) so \( Y \in \Theta_{1n} \). Hence \( \Theta_{ij}' = \Theta_{1n} \).
\end{proof}
\begin{Lemma}
\label{lem:limit-of-T-point}
\( \Sp(X_{\infty}) = T|_{n-1} \).
\end{Lemma}
\begin{proof}
To show the equality we must show the point \( X_\infty \) is labelled by the tableau \( T|_{n-1} \). That means we must show, for each \( 2 < q < n \) and a generic point \( E \in \tilde{\Theta}_{1q} \), that \( p_{\{1,2,3\}}E \in \Shvar^\circ(\tau^\comp;\infty) \) for \( \tau = \mathrm{sh}(T|_q) \). By commutativity of Figure~\ref{fig:big-comm-diagram}
\begin{equation*}
p_{\{1,2,3\}}E = p_{\{1,2,3\}}\pi^{-1} \zeta(E).
\end{equation*}
Lemma~\ref{lem:identification-of-action-on-cell} tells us \( \pi^{-1}\zeta(\tilde{\Theta}_{1q}) = \Theta_{1q}\cap\Theta_{1n} \). Since \( X \in \Theta \) and \( \Sp(X)=T \) (which means \( \Theta \) is the \( (n-2) \)-associahedron labelled by \( T \)) and \( \pi^{-1} \zeta(E) \) is generic, we must have that \( p_{\{1,2,3\}}\pi^{-1} \zeta(E) \in \Shvar^\circ(\tau^\comp;\infty) \).
\end{proof}
\begin{Theorem}
\label{thm:conincides-with-Speyer-label}
We have that \( \Sp = \EL \). That is, if \( X \in \Shvar(\square^n,\mu^\comp; z,\infty) \) then the processes described in Sections~\ref{sec:an-expl-biject} and~\ref{sec:labelling-fibre} produce the same tableau.
\end{Theorem}
\begin{proof}
According to Lemma~\ref{lem:Xinfty-in-lambda}, \( X_\infty = \lim_{s \rightarrow \infty} X(s) \in \Shvar^\circ(\lambda^\comp;\infty) \). Let \( T = \Sp(X) \). Now Lemma~\ref{lem:limit-of-T-point} tells us \( \Sp(X_\infty) = T|_{n-1} \), and by induction on \( n \) we may assume \( \EL(X_\infty) = \Sp(X_\infty) \). This means the tableau \( \EL(X) \) is a tableau of shape \( \mu \) whose restriction to \( n-1 \) is \( T|_{n-1} \). However the unique tableau satisfying these properties is \( T \).
\end{proof}
\section{The algebraic Bethe Ansatz}
\label{sec:algebr-bethe-ansatz}
Along with Bethe algebras and Schubert intersections, there is a third important player in the story, the critical points of the \emph{master function}. The relationship between these three objects has been studied extensively by Mukhin, Tarasov and Varchenko, see for example~\cite{Mukhin:2012vk}. Critical points have a labelling by standard tableaux in a similar way to points in the spectrum of Bethe algebras (see Section~\ref{sec:spectr-gaud-hamilt}); this is described by Marcus~\cite{Marcus:2010vn}, and for the sake of convenience we recall the proof of this result. This will be used to finally identify the MTV and Speyer labellings.
\subsection{Notation}
\label{sec:notation}
Let \( \cartan \) be the Cartan subalgebra of \( \glr \) (so \( \cartan \) is the algebra of diagonal matrices). Let \( \left( \cdot, \cdot \right) \) denote the trace form on \( \glr \) (i.e. the normalised Killing form). Let \( h_i = e_{ii} - e_{i+1,i+1} \) for \( i = 1,\ldots,r-1 \). Let \( \veps_i \in \cartan^* \) be the dual vector to \( e_{ii} \) and \( \alpha_i = \veps_i - \veps_{i+1} \) the simple roots, which correspond to the \( h_i \) under the trace form. With this notation the trace form, transported to \( \cartan^* \), has the following values,
\begin{align}
\label{eq:killing-form}
\left( \veps_i, \veps_j \right) &= \delta_{ij}, \\
\left( \alpha_i, \alpha_j \right) &=
\begin{cases}
2 & \text{if } i=j \\
-1 & \text{if } \abs{i-j} = 1 \\
0 & \text{if } \abs{i-j} > 1.
\end{cases}
\end{align}
We identify a partition \( \lambda \) with at most \( r \) parts, with the \( \glr \)-weight \( \sum \lambda^{(i)} \veps_i \).
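For example, for \( r \ge 2 \) the partition \( (2,1) \) corresponds to the weight \( 2\veps_1 + \veps_2 \), and using~(\ref{eq:killing-form}) one computes \( \left( (2,1),(2,1) \right) = 2^2 + 1^2 = 5 \) and \( \left( (2,1),\alpha_1 \right) = 2 - 1 = 1 \).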
\subsection{The master function and critical points}
\label{sec:master-function}
Let \( z \in X_n \) be complex parameters, \( \blambda = (\lambda_1,\lambda_2,\ldots,\lambda_n) \) a sequence of partitions and let \( \mu \) be a partition such that \( \abs{\mu} = \abs{\blambda} \). We also require that there exist non-negative integers \( l_i \) such that \( \mu = \sum_{a=1}^n \lambda_a - \sum_{i=1}^{r-1} l_i \alpha_i \). This last requirement ensures \( \mu \) appears as a weight in \( L(\lamb) \). We let \( t_i^{(j)} \) be a set of complex variables for \( i = 1,2,\ldots, r-1 \) and \( j = 1,2,\ldots,l_i \).
The \emph{master function} is the rational function
\begin{equation*}
\begin{split}
& \Phi(\lamb,\mu;z,t) = \Phi(z,t) = \\ &
\prod_{1 \le a < b \le n} (z_a - z_b)^{\left( \lambda_a,\lambda_b \right)}
\prod_{a=1}^n \prod_{i = 1}^{r-1} \prod_{j=1}^{l_i} (z_a - t^{(j)}_i)^{-\left( \lambda_a, \alpha_i \right)}
\prod_{(i,a) < (j,b)} (t^{(a)}_i - t^{(b)}_j)^{\left( \alpha_i,\alpha_j \right)}.
\end{split}
\end{equation*}
The ordering \( (i,a) < (j,b) \) is taken lexicographically. Let \( S = \log \Phi \). The \emph{Bethe ansatz equations}\index{Bethe ansatz equations} are given by the system of equations
\begin{equation}
\label{eq:bethe-ansatz-equations}
\frac{\partial S}{\partial t^{(j)}_i} = \frac{\partial}{\partial t^{(j)}_i} \log \Phi(z,t) = 0 \quad \text{for } i = 1,2,\ldots,r-1 \text{ and } j = 1,2,\ldots,l_i.
\end{equation}
A solution to the Bethe ansatz equations is called a \emph{critical point}.\index{critical point} We say a critical point \( t = (t_i^{(j)}) \) is \emph{nondegenerate}\index{critical point!nondegenerate} if the Hessian of \( S \),
\begin{equation*}
\Hess(S) = \det \left( \frac{\partial^2 S}{\partial t^{(a)}_i \, \partial t^{(b)}_j} \right)_{(i,a),(j,b)},
\end{equation*}
evaluated at \( t \) is invertible.
Let \( m = l_1 + l_2 + \ldots + l_{r-1} \). The Bethe ansatz equations are rational functions on \( X_n \times \CC^{m} \), regular away from the finite collection of hyperplanes given by \( t^{(a)}_i - t^{(b)}_j = 0 \) and \( t^{(j)}_i - z_a = 0 \). Let \( \tcrit(\lamb)_\mu \) denote the vanishing set of the Bethe ansatz equations, considered as a family over \( X_n \). Let \( \Symcrit \) be the product of symmetric groups \( S_{l_1}\times S_{l_2} \times \cdots \times S_{l_{r-1}} \subset S_m \), which acts on \( \CC^m \) by permuting the coordinates \( t_i^{(j)} \) with the same lower index. Using~(\ref{eq:killing-form}),
\begin{equation*}
\prod_{(i,a) < (j,b)} (t^{(a)}_i - t^{(b)}_j)^{\left( \alpha_i,\alpha_j \right)} = \prod_{i=1}^{r-2}\prod_{a=1}^{l_i}\prod_{b=1}^{l_{i+1}} (t_i^{(a)}-t_{i+1}^{(b)})^{-1} \prod_{i=1}^{r-1}\prod_{1\le a<b \le l_i} (t_i^{(a)}-t_i^{(b)})^2.
\end{equation*}
Thus \( \Phi(z,t) \) is invariant under the action of \( \Symcrit \). The quotient \( \tcrit(\lamb)_\mu/\Symcrit \) will be denoted \( \crit(\lamb)_\mu \) and the open subset of nondegenerate critical points \( \crit(\lamb)_\mu^{\nondeg} \). Let \( \infP = \PP\CC[u] \) be the infinite dimensional projective space associated to the polynomial ring. We think of \( \infP \) as the space of monic polynomials. For any \( a \in \NN \), there is an embedding \( \CC^a/S_a \hookrightarrow \infP \) given by sending the orbit of a point \( (t_1,\ldots,t_a) \in \CC^a \) to the unique monic polynomial of degree \( a \), with roots \( t_1,\ldots,t_a \). We will identify \( \crit(\lamb)_\mu \) with its image in \( X_n \times (\infP)^{r-1} \) and denote the tuple of monic polynomials associated to a critical point \( t = (t^{(j)}_i) \) by \( y^t = (y^t_1,\ldots,y^t_{r-1}) \). To be clear, this means if \( t = (t_i^{(j)}) \) is a solution of the Bethe ansatz equations~(\ref{eq:bethe-ansatz-equations}), then \( y^t_i \) is a monic polynomial in \( u \), with roots \( t_i^{(1)}, t_i^{(2)}, \ldots, t_i^{(l_i)} \). Let \( \critfam \) denote the projection \( \crit(\lamb)_\mu \rightarrow X_n \). Denote the fibre of \( \crit(\lamb)_\mu \) over \( z \in X_n \) by \( \crit(\lamb;z)_\mu \).
\begin{Theorem}[{\cite[Theorem~6.1]{Mukhin:2012vk}}]
\label{thm:weight-function}
There exists a function, called the \emph{universal weight function},\index{universal weight function} \( \omega\map{X_n \times \CC^m/\Symcrit}{L(\lamb)_\mu} \) such that, for a critical point \( t = (t_i^{(j)}) \) in \( \crit(\lamb;z)_\mu \), the following hold:
\begin{enumerate}
\item\label{item:omega-singular} \( \omega(z,t) \in L(\lamb)_\mu^\sing \),
\item\label{item:nondeg-nonzero} the critical point \( t \) is nondegenerate if and only if \( \omega(z,t) \) is nonzero,
\item\label{item:crit-linind} if \( t' \in \crit(\lamb;z)_\mu \) is a critical point distinct from \( t \), and both are nondegenerate then \( \omega(z,t) \) and \( \omega(z,t') \) are linearly independent,
\item\label{item:sim-eigenvector} \( \omega(z,t) \) is a simultaneous eigenvector for \( \uBethe(\lamb;z)_\mu \), and
\item\label{item:eigenvalue-ofGaudin} the eigenvalue of \( H_a(z) \) acting on \( \omega(z,t) \) is
\begin{equation*}
\frac{\partial S}{\partial z_a} (z,t).
\end{equation*}
\end{enumerate}
\end{Theorem}
\subsection{Examples of the Bethe ansatz equations}
\label{sec:examples-bethe-ansatz}
Below are some examples of the Bethe ansatz equations in simple cases. Explicitly, in full generality, the Bethe ansatz equations are
\begin{equation}
\label{eq:explicit-BA-eqns}
\begin{split}
\frac{\partial S}{\partial t^{(j)}_i} =
- \sum_{a = 1}^n \left( \alpha_i,\lambda_a \right) \frac{1}{t^{(j)}_i-z_a} + \sum_{(k,a) \neq (i,j)} \left( \alpha_i,\alpha_k \right) \frac{1}{t^{(j)}_i - t^{(a)}_k} = 0.
\end{split}
\end{equation}
\begin{Example}
\label{exm:empty-critical-points}
In the case \( n = 1 \), with \( \lamb = (\lambda) \), the only choice for \( \mu \) is \( \mu = \lambda \). Thus \( l_i = 0 \) for all \( i \), and the tuple of variables \( t \) is empty. The master function becomes \( \Phi(z,t) = 1 \). The Bethe ansatz equations in this case are vacuously satisfied and there is a unique critical point \( t_{\emptyset} \) (the empty critical point). The polynomial \( y^{t_{\emptyset}}_i \) is the unique monic polynomial with no roots, i.e. the constant polynomial \( 1 \). Thus \( \crit(\lambda;z)_\mu \subset (\infP)^{r-1} \) is a single point.
\end{Example}
\begin{Example}
\label{exm:case-all-box}
In this paper we will be primarily interested in the case \( \lambda_i = \square = \veps_1 \) for all \( i \). In this case \( \abs{\mu} = n \). Since the highest possible weight in \( V^{\otimes n} \) is \( (n) = n\veps_1 \), the integer \( l_i \) is the number of boxes in \( \mu \) sitting strictly below the \( i^{\text{th}} \) row.
In this case \( \left( \veps_1,\veps_1 \right) = 1 \), and \( \left( \alpha_i, \veps_1 \right) = \delta_{1,i} \). Thus the master function becomes
\begin{equation*}
\begin{split}
\Phi(z,t) = \prod_{1 \le a < b \le n} (z_a - z_b)
\prod_{a=1}^n \prod_{j=1}^{l_1} (z_a - t^{(j)}_1)^{-1}
\prod_{i=1}^{r-2}\prod_{a=1}^{l_i}\prod_{b=1}^{l_{i+1}} (t_i^{(a)}-t_{i+1}^{(b)})^{-1} \\
\prod_{i=1}^{r-1}\prod_{1\le a<b \le l_i} (t_i^{(a)}-t_i^{(b)})^2.
\end{split}
\end{equation*}
\end{Example}
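In the case of Example~\ref{exm:case-all-box}, the Bethe ansatz equations~(\ref{eq:explicit-BA-eqns}) read (with the convention \( l_0 = l_r = 0 \))
\[ - \sum_{a=1}^n \frac{\delta_{i1}}{t^{(j)}_i - z_a} + \sum_{a \neq j} \frac{2}{t^{(j)}_i - t^{(a)}_i} - \sum_{b=1}^{l_{i-1}} \frac{1}{t^{(j)}_i - t^{(b)}_{i-1}} - \sum_{b=1}^{l_{i+1}} \frac{1}{t^{(j)}_i - t^{(b)}_{i+1}} = 0, \]
so only the variables \( t^{(j)}_1 \) interact directly with the parameters \( z_a \).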
\begin{Example}
\label{exm:case-n-equals-2}
Next consider the special case when \( n=2 \), so \( \lamb = (\lambda^{(1)},\lambda^{(2)}) \) for some partitions \( \lambda^{(1)} \) and \( \lambda^{(2)} \). Let \( z = (z_1,z_2) \). Make a change of variables
\begin{equation*}
s^{(j)}_i = \frac{t^{(j)}_i - z_1}{z_2 - z_1}.
\end{equation*}
In these new variables, the Bethe ansatz equations become
\begin{equation*}
\begin{split}
0 = \frac{\partial S}{\partial s^{(j)}_i} \frac{\partial s^{(j)}_i}{\partial t^{(j)}_i} = \left(
- \frac{(\lambda_1,\alpha_i)}{s^{(j)}_i} - \frac{(\lambda_2,\alpha_i)}{s^{(j)}_i-1} + \sum_{(k,a) \neq (i,j)} \frac{(\alpha_i, \alpha_k)}{s^{(j)}_i - s^{(a)}_k} \right) \frac{1}{z_2-z_1},
\end{split}
\end{equation*}
which can be rearranged to
\begin{equation}
\label{eq:n=2-bethe-ansatz}
\frac{(\lambda_1,\alpha_i)}{s^{(j)}_i} + \frac{(\lambda_2,\alpha_i)}{s^{(j)}_i-1} = \sum_{(k,a) \neq (i,j)} \frac{(\alpha_i, \alpha_k)}{s^{(j)}_i - s^{(a)}_k},
\end{equation}
and thus do not depend on \( z_1 \) and \( z_2 \). These are the \emph{transformed Bethe ansatz equations}.\index{Bethe ansatz equations!transformed} The set of (orbits of) solutions of~(\ref{eq:n=2-bethe-ansatz}) is denoted \( \Scrit(\lambda_1,\lambda_2)_\mu \).
Consider the special case, when \( \lambda_1 = \lambda \) and \( \lambda_2 = \square \).
By the Pieri rule, for the \( \mu \)-weight space to be nonzero, \( \mu \) must be obtained from \( \lambda \) by adding a single box. Suppose the box is added in row \( e \). Then \( l_i = 1 \) for \( i = 1,2,\ldots,e-1 \) and \( l_i = 0 \) otherwise. Setting \( s_i = s^{(1)}_i \) for \( i=1,2,\ldots, e-1 \), equation~(\ref{eq:n=2-bethe-ansatz}) can be rewritten,
\begin{equation}
\label{eq:n=2-bethe-ansatz-pieri}
\frac{(\lambda_1,\alpha_i)}{s_i} + \frac{\delta_{1i}}{s_i-1} = \frac{\delta_{i1}-1}{s_i - s_{i-1}} + \frac{\delta_{(i+1)e}-1}{s_i - s_{i+1}}.
\end{equation}
\end{Example}
\begin{Proposition}[{\cite[Lemma~7.2]{Marcus:2010vn}}]
\label{prp:unique-solution-n2-BAeqn}
There is a unique solution to the transformed Bethe ansatz equations~(\ref{eq:n=2-bethe-ansatz-pieri}); that is, \( \Scrit(\lambda,\square)_\mu \) is a single point. In particular
\begin{equation}
\label{eq:s1-unique-solution}
s_1 = 1- \left( \lambda^{(1)} - c \right)^{-1},
\end{equation}
where \( c \) is the content of the box \( \mu \setminus \lambda \).
\end{Proposition}
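As a quick check of~(\ref{eq:s1-unique-solution}), take \( \lambda = (1) \) and \( \mu = (1,1) \), so the box is added in row \( e = 2 \) and has content \( c = -1 \). Here \( l_1 = 1 \), and equation~(\ref{eq:n=2-bethe-ansatz-pieri}) reduces to
\[ \frac{1}{s_1} + \frac{1}{s_1 - 1} = 0, \]
whose unique solution is \( s_1 = \tfrac{1}{2} = 1 - \left( 1 - (-1) \right)^{-1} \), as predicted.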
\subsection{Asymptotics of critical points and Marcus' labelling}
\label{sec:marcus-label-crit}
Later, we will need a result about the asymptotics of critical points as we send the parameters to infinity. Reshetikhin and Varchenko~\cite{Reshetikhin:1995vs} explain how to glue two nondegenerate critical points to obtain a critical point for a larger master function with parameters \( z = (z_1,z_2,\ldots,z_{n+k}) \). Their theorem allows one to track the analytic continuation of this new critical point as we send the parameters \( z_{n+1},\ldots,z_{n+k} \) to infinity and shows that asymptotically we recover the two critical points we started with. The setup for the theorem is the following data: two sequences of partitions,
\begin{itemize}
\item \( \lamb = (\lambda_1,\lambda_2,\ldots,\lambda_n) \), and
\item \( \lamb' = (\lambda_1',\lambda_2',\ldots,\lambda_k') \).
\end{itemize}
Three additional partitions,
\begin{itemize}
\item \( \nu = \sum_i \lambda_i - \sum_j a_j \alpha_j \),
\item \( \nu' = \sum_i \lambda_i' - \sum_j b_j \alpha_j \), and
\item \( \mu = \nu + \nu' - \sum_j c_j \alpha_j = \sum_i \lambda_i + \sum_i \lambda_i' - \sum_j (a_j+b_j+c_j)\alpha_j \),
\end{itemize}
for nonnegative integers \( a_j,b_j \) and \( c_j \). Two nondegenerate critical points
\begin{itemize}
\item \( u = (u^{(j)}_i) \in \crit(\lamb;z)_\nu^\nondeg \), and
\item \( v = (v^{(j)}_i) \in \crit(\lamb';x)_{\nu'}^\nondeg \),
\end{itemize}
for complex points, \( z = (z_1,z_2,\ldots,z_n) \) and \( x = (x_1,x_2,\ldots,x_k) \); and finally a solution, \( s = (s^{(j)}_i) \in \Scrit(\nu,\nu')_\mu \), to the transformed Bethe ansatz equations~(\ref{eq:n=2-bethe-ansatz}).
\begin{Theorem}[{\cite[Theorem~6.1]{Reshetikhin:1995vs}}]
\label{thm:RV-colliding-crit-points}
In the limit when \( z_{n+1}, z_{n+2},\ldots, z_{n+k} \) are sent to \( \infty \) in such a way that the differences \( z_{n+i} - z_{n+1} \) remain finite for \( i = 1,2,\ldots,k \), there exists a unique nondegenerate critical point \( t = (t^{(j)}_i) \in \crit(\lamb,\lamb';z)_\mu^\nondeg \) such that asymptotically, the critical point has the form
\begin{equation*}
t^{(j)}_i(z) =
\begin{cases}
u^{(j)}_i(z_1,\ldots,z_n) + O(z_{n+1}^{-1}) &\text{if } 1 \le j \le a_i, \\
s^{(j)}_i z_{n+1} + O(1) &\text{if } a_i < j \le a_i + c_i, \\
v^{(j)}_i(x_1,\ldots,x_k) +z_{n+1} + O(z_{n+1}^{-1}) &\text{if } a_i + c_i < j \le a_i + b_i + c_i,
\end{cases}
\end{equation*}
where \( x_i = z_{n+i} - z_{n+1} \) for \( i = 1,2,\ldots,k \).
\end{Theorem}
\begin{Corollary}
\label{cor:RV-theorem-limit-polys}
Let \( t,u,v \) and \( s \) be as in Theorem~\ref{thm:RV-colliding-crit-points}. Taking a limit \( z_{n+i} \to \infty \) such that \( z_{n+i} - z_{n+1} \) is bounded (which we denote \( \lim_{z\to \infty} \)), we have
\begin{equation*}
\lim_{z \to \infty} y^t = y^u.
\end{equation*}
\end{Corollary}
\begin{proof}
This is a direct application of Theorem~\ref{thm:RV-colliding-crit-points} to the definition of \( y^t \).
\end{proof}
We restrict our attention to critical points for \( z = (z_1,z_2,\ldots,z_n) \) an \( n \)-tuple of distinct real numbers such that \( z_1 < z_2 < \cdots < z_n \). In the limit when \( z_1, z_2,\ldots,z_n \to \infty \) in such a way that \( z_i = o(z_{i+1}) \), Marcus~\cite{Marcus:2010vn} describes a method to label critical points in \( \crit(\square^n;z)_\mu \) by standard \( \mu \)-tableaux. Marcus' theorem is recalled below, along with its proof. Recall that if \( T \in \SYT(\mu) \) then \( T|_{n-1} \) is the tableau obtained by removing the box containing \( n \) from \( T \).
\begin{Theorem}[{\cite[Theorem~6.1]{Marcus:2010vn}}]
\label{thm:Marcus-main-theorem}\index{tableau!standard}
Given a standard tableau \( T \) of shape \( \mu \), there is a unique critical point \( t_T \in \crit(\square^n;z)_\mu \) such that, if we set \( y^T = y^{t_T} \), then
\begin{equation}
\label{eq:lim-zn-restrict-tab}
\lim_{z_n \to \infty} y^T = y^{T|_{n-1}},
\end{equation}
and taking the limit \( z_1,z_2,\ldots,z_n \to \infty \) such that \( \abs{z_i} \ll \abs{z_{i+1}} \), asymptotically
\begin{equation}
\label{eq:limit-eigenvalues}
z_a \frac{\partial S}{\partial z_a} \sim c_T(a) + O(z_a^{-1}).
\end{equation}
\end{Theorem}
\begin{proof}
We will prove this by induction on \( n \). For \( n=1 \), the only partition is \( \square \) and thus there is a unique tableau \( T \). From Example~\ref{exm:empty-critical-points} we know \( \crit(\square;z)_\square \) contains a unique critical point, the empty critical point \( t_\emptyset \) and we simply set \( t_T = t_\emptyset \). Thus \( y^T = 1 \). The equations~(\ref{eq:lim-zn-restrict-tab}) and~(\ref{eq:limit-eigenvalues}) are vacuously satisfied.
For general \( n \), we will use Theorem~\ref{thm:RV-colliding-crit-points} to inductively build a critical point corresponding to \( T \in \SYT(\mu) \). Let \( \lambda = \mathrm{sh}(T|_{n-1}) \), the partition obtained from \( \mu \) by removing the box labelled \( n \) in \( T \). By induction, there is a unique critical point \( t_{T|_{n-1}} \in \crit(\square^{n-1};z_1,\ldots,z_{n-1})_\lambda \). To build a critical point in \( \crit(\square^n;z)_\mu \), we need to fix a critical point in \( \crit(\square;0)_\square \), and a transformed critical point in \( \Scrit(\lambda,\square)_\mu \). The former contains only the empty critical point and the latter contains a unique point \( s = (s_1,\ldots,s_{e-1}) \) (where \( e \) is the row containing \( n \) in \( T \)) by Proposition~\ref{prp:unique-solution-n2-BAeqn}, where
\begin{equation}
\label{eq:s1-content-n}
s_1 = 1- \left( \lambda^{(1)} - c_T(n) \right)^{-1}.
\end{equation}
Thus, given \( t_{T|_{n-1}} \), and the data of where to add an \( n^{\text{th}} \) box to \( T|_{n-1} \), we obtain by Theorem~\ref{thm:RV-colliding-crit-points} a unique critical point \( t_T \in \crit(\square^n;z)_\mu \). By Corollary~\ref{cor:RV-theorem-limit-polys} we obtain~(\ref{eq:lim-zn-restrict-tab}).
All that is left to do is to prove~(\ref{eq:limit-eigenvalues}). This will also be done by induction. We need to investigate the eigenvalues
\begin{equation*}
z_a \frac{\partial S}{\partial z_a} = \sum_{b \neq a} \frac{z_a}{z_a - z_b} - \sum_{j=1}^{\abs{\mu} - \mu^{(1)}} \frac{z_a}{z_a - t^{(j)}_1},
\end{equation*}
of the operators \( z_a H_a(z) \). Suppose first that \( a < n \); then
\begin{equation*}
z_a \frac{\partial S}{\partial z_a} = \sum_{\substack{b \neq a \\ b < n}} \frac{z_a}{z_a - z_b} - \sum_{j=1}^{\abs{\mu} - \mu^{(1)}-\delta_{e>1}} \frac{z_a}{z_a - t^{(j)}_1} + \frac{z_a}{z_a-z_n} - \frac{\delta_{e>1}z_a}{z_a - t^{(\abs{\mu}-\mu^{(1)})}_1}.
\end{equation*}
But in the limit \( z_a/(z_a-z_n) \sim 0 \) and by Theorem~\ref{thm:RV-colliding-crit-points} \( t^{(\abs{\mu}-\mu^{(1)})}_1 \sim s_1 z_{n} + O(1) \) so
\begin{align*}
z_a \frac{\partial S}{\partial z_a} &\sim \sum_{\substack{b \neq a \\ b < n}} \frac{z_a}{z_a - z_b} - \sum_{j=1}^{\abs{\mu} - \mu^{(1)}-\delta_{e>1}} \frac{z_a}{z_a - t^{(j)}_1} \\
&= \sum_{\substack{b \neq a \\ b < n}} \frac{z_a}{z_a - z_b} - \sum_{j=1}^{\abs{\lambda} - \lambda^{(1)}} \frac{z_a}{z_a - t^{(j)}_1} \\
&= z_a \frac{\partial S'}{\partial z_a},
\end{align*}
where \( S' = S(\square^{n-1},\lambda;z_1,\ldots,z_{n-1}) \) is the logarithm of the master function for the weight \( \lambda \). Thus by induction~(\ref{eq:limit-eigenvalues}) holds for \( a < n \).
Now we need to check~(\ref{eq:limit-eigenvalues}) for \( a=n \). This turns out to be a simple calculation. By Theorem~\ref{thm:RV-colliding-crit-points}
\begin{align*}
z_n \frac{\partial S}{\partial z_n} &= \sum_{b < n} \frac{z_n}{z_n - z_b} - \sum_{j=1}^{\abs{\mu} - \mu^{(1)}} \frac{z_n}{z_n - t^{(j)}_1} \\
&\sim (n-1) - (\abs{\mu} - \mu^{(1)} - \delta_{e>1}) - \frac{\delta_{e>1}}{1-s_1} + O(z_n^{-1}) \\
&= \mu^{(1)} - \delta_{e1} - \frac{\delta_{e>1}}{1-s_1} + O(z_n^{-1}).
\intertext{Using~(\ref{eq:s1-content-n}),}
z_n \frac{\partial S}{\partial z_n} &\sim \mu^{(1)} - \delta_{e1} - \delta_{e>1} \left( \lambda^{(1)} - c_T(n) \right) + O(z_n^{-1}).
\end{align*}
If \( e > 1 \) then \( \mu^{(1)} = \lambda^{(1)} \), and if \( e = 1 \) then \( c_T(n) = \mu^{(1)} - 1 \); in both cases the right-hand side equals \( c_T(n) \), so the theorem is proved.
\end{proof}
\section{Critical points and Schubert intersections}
\label{sec:crit-points-schub}
In this final section we describe a relationship between critical points and Schubert intersections called the coordinate map. We use this and the fact that critical points determine the eigenvalues of the Bethe algebras to prove that the MTV and Speyer labellings agree. The most important part of the argument is a careful analysis of exactly when the coordinate map is continuous.
\subsection{The coordinate map}
\label{sec:second-mtv-iso}
This section describes the relationship between critical points for the master function and Schubert intersections.
Let \( X \in \Gr(r,d) \). Since \( \Gr(r,d) \) is paved by Schubert cells, \( X \in \Shcell(\mu^\comp;\infty) \) for some unique partition \( \mu \). By definition, \( X \) is an \( r \)-dimensional vector space of polynomials in the variable \( u \), of degree less than \( d \). Let \( l_i \) be the number of boxes below the \( i^{\text{th}} \) row in \( \mu \) (c.f.\ Example~\ref{exm:case-all-box}). Set \( d_i = \mu_i + r - i \). We can choose an ordered basis \( f_1(u), f_2(u), \ldots, f_r(u) \) of monic polynomials with descending degrees \( d_i \). Consider the polynomials
\begin{equation*}
y_a(u) = \mathrm{Wr}(f_{a+1}(u),\ldots,f_r(u)), \quad a = 0,1,\ldots,r-1.
\end{equation*}
The polynomial \( y_a \) has degree \( l_a \). Denote its roots by \( t^{(1)}_a,t^{(2)}_a,\ldots,t^{(l_a)}_{a} \). The polynomial \( y_a(u) \) determines a point in \( \infP \). The following lemma demonstrates that the polynomials \( y_a(u) \in \infP \) depend only on \( X \) and not on the chosen basis.
\begin{Lemma}
\label{lem:MTV-morphism-well-defined}
Suppose \( \{ f_i(u) \} \) and \( \{ f'_i(u) \} \) are two bases of \( X \) of monic polynomials with descending degrees. Then
\begin{equation*}
\Wr(f_{a+1}(u),\ldots,f_r(u)) = \alpha \Wr(f'_{a+1}(u),\ldots,f'_r(u))
\end{equation*}
for some nonzero scalar \( \alpha \in \CC \).
\end{Lemma}
\begin{proof}
We use the fact that the descending sequence \( d_1 > d_2 > \ldots > d_r \) of degrees for any such basis is determined entirely by the partition \( \mu \); that is, \( \deg f_i(u) = \deg f'_i(u) = d_i \). Since, by Lemma~\ref{lem:linear-class-wronskian}, the Wronskian \( \Wr(g_1,\ldots,g_k) \) is determined by the space spanned by the polynomials \( g_1,\ldots,g_k \), we must prove
\begin{equation}
\label{eq:span-f-equals-span-fprime}
\CC\set{ f_{a+1}(u), \ldots, f_r(u) } = \CC\set{ f'_{a+1}(u), \ldots, f'_r(u) }.
\end{equation}
Since both bases span \( X \), \( f'_a(u) = \alpha_1 f_1(u) + \alpha_2 f_2(u) + \ldots + \alpha_r f_r(u) \) for some complex numbers \( \alpha_i \). But the degrees of the \( f_i \) are strictly descending, so \( \alpha_i = 0 \) for \( i < a \). Hence \( f'_a(u) \in \CC\{ f_a(u), \ldots, f_r(u) \} \). By induction, (\ref{eq:span-f-equals-span-fprime}) must be true.
\end{proof}
The map \( \mtvSC\map{\Gr(r,d)}{\left( \infP \right)^r} \) defined by
\begin{equation*}
\mtvSC(X) = (y_a)_{a=0}^{r-1} = \left( \Wr(f_{a+1}(u),\ldots,f_r(u)) \right)_{a=0}^{r-1}
\end{equation*}
for some choice of monic basis of descending degrees \( f_1(u),f_2(u),\ldots,f_r(u) \), of \( X \) is called the \emph{coordinate map}.
\begin{Remark}
\label{rem:MTV-map-not-cont}\index{coordinate map!continuity of}
The coordinate map is not continuous! This is easily seen in an example. Let \( r=2 \) and \( d=3 \). Consider the \( 1 \)-parameter family of subspaces
\begin{equation*}
X(s) = \CC \{ u^2 + s, u \}.
\end{equation*}
In this case \( \mtvSC(X(s)) = \left( s-u^2, u \right) \). However
\begin{equation*}
X_\infty = \lim_{s \to \infty}X(s) = \CC\{ 1, u \}.
\end{equation*}
So \( \mtvSC(X_\infty) = \left( 1, 1 \right) \), which is clearly not the same as \( \lim_{s \to \infty} \mtvSC(X(s)) = \left( 1, u \right) \).
\end{Remark}
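The discontinuity in Remark~\ref{rem:MTV-map-not-cont} can be checked symbolically. The following Python/SymPy sketch (an illustration of ours, with ad hoc variable names) computes the Wronskian tuples for \( X(s) \) and for the limit space \( X_\infty \):
\begin{verbatim}
import sympy as sp

u, s = sp.symbols('u s')

def wr(*fs):
    # Wronskian of the polynomials fs in the variable u
    n = len(fs)
    M = sp.Matrix(n, n, lambda i, j: sp.diff(fs[j], u, i))
    return sp.expand(M.det())

# X(s) = span{u^2 + s, u}: monic basis of descending degrees (2, 1)
print(wr(u**2 + s, u), wr(u))                     # s - u**2, u

# X_inf = span{u, 1}: monic basis of descending degrees (1, 0)
print(wr(u, sp.Integer(1)), wr(sp.Integer(1)))    # -1, 1
\end{verbatim}
Projectively, \( (s - u^2, u) \to (1, u) \) as \( s \to \infty \) (divide the first coordinate by \( s \)), while the limit space gives \( (1,1) \), exhibiting the discontinuity.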
Essentially, the problem in Remark~\ref{rem:MTV-map-not-cont} is that the monic basis of descending degrees which we are using to calculate \( \mtvSC(X(s)) \) no longer has descending degrees in the limit. Whenever we can find a continuous family of monic bases of descending degrees, the map \( \mtvSC \) will be continuous, since taking the Wronskian of a tuple of polynomials is algebraic. An important case in which we can do this is a Schubert cell. If \( X \in \Shcell(\mu^\comp;\infty) \), we can find a unique basis of the form
\begin{equation*}
f_i(u) = u^{d_i} + \sum_{\substack{j = 1 \\ d_i-j \notin \mathbf{d}}} a_{ij} u^{d_i - j},
\end{equation*}
where \( \mathbf{d} = (d_1,d_2,\ldots,d_r) \). The \( a_{ij} \) define algebraic coordinates on \( \Shcell(\mu^\comp;\infty) \). Hence \( \mtvSC \) is algebraic when restricted to any open Schubert cell. In Section~\ref{sec:partial-continuity} we will prove that the coordinate map is continuous along certain paths in \( \Gr(r,d) \) which are allowed to have limit points outside a Schubert cell.
\begin{Theorem}[{\cite[Theorem~5.3]{Mukhin:2012vk}}]
\label{thm:coord-map-onto-crit}
The image of \( \Shcell(\square^n,\mu^\comp;z,\infty) \) under the coordinate map is contained in \( \crit(\square^n;z)_\mu \).
\end{Theorem}
\subsection{Partial continuity of the coordinate map}
\label{sec:partial-continuity}
\index{coordinate map}
\index{coordinate map!continuity of}
We will need a little more information about when the coordinate map is continuous. Let \( \mu \) be a partition and let \( \lambda \subseteq \mu \) be a partition with one less box, that is \( \abs{\lambda} = \abs{\mu} -1 \). Denote by \( e \) the row of \( \mu \) from which we need to remove a box to obtain \( \lambda \).
Let \( d_i = \mu_i + r - 1 \) and \( d'_i = \lambda_i +r -1 \). We denote the respective decreasing sequences by \( \mathbf{d} = (d_1,d_2,\ldots,d_r) \) and \( \mathbf{d}' = (d'_1,d'_2,\ldots,d'_r) \). Recall that \( X \in \Shcell(\mu^\comp;\infty) \) (respectively \( X \in \Shcell(\lambda^\comp;\infty) \)) if and only if there exists a basis \( f_1,f_2,\ldots,f_r \) of \( X \) such that \( \deg f_i = d_i \) (respectively \( \deg f_i = d'_i \)). In particular if \( f \in X \) then \( \deg f \in \mathbf{d} \) (respectively \( \deg f \in \mathbf{d}' \)). Since we removed a single box from \( \mu \) in row \( e \) to obtain \( \lambda \) we have
\begin{equation*}
d'_i =
\begin{cases}
d_i & \text{if } i \neq e \\
d_e - 1 & \text{if } i = e.
\end{cases}
\end{equation*}
Fix \( X \in \Shcell(\mu^\comp;\infty) \). Let \( X(s) \in \Shcell(\mu^\comp;\infty) \) be a continuous one-parameter family over \( \RR \) of subspaces. We thus have a unique basis
\begin{equation*}
f_i(u;s) = u^{d_i} + \sum_{\substack{j = 1 \\ d_i-j \notin \mathbf{d}}} a_{ij}(s) u^{d_i - j},
\end{equation*}
for \( X(s) \), for each \( s \in \RR \). The \( a_{ij}\map{\RR}{\CC} \) are continuous functions.
\begin{Lemma}
\label{lem:bounds-on-aij}
The limit point \( X_\infty \) of this family is contained in \( \Shcell(\lambda^\comp;\infty) \) if and only if
\begin{align}
\label{eq:lim_a_ij-i-not=-e}
\lim_{s \to \infty} \abs{a_{ij}(s)} &< \infty \quad \text{for } i \neq e \\
\intertext{and}
\label{eq:lim_a_e1}
\lim_{s \to \infty} \abs{a_{e1}(s)} &= \infty, \\
\label{eq:lim_a_ej-div-a_e1}
\lim_{s \to \infty} \abs{\frac{a_{ej}(s)}{a_{e1}(s)}} &< \infty.
\end{align}
\end{Lemma}
\begin{proof}
If properties (\ref{eq:lim_a_ij-i-not=-e}), (\ref{eq:lim_a_e1}) and (\ref{eq:lim_a_ej-div-a_e1}) hold, then in the limit the basis \( f_1,\ldots,f_{e-1},a_{e1}^{-1}f_e,f_{e+1},\ldots,f_r \) converges to a sequence of \( r \) polynomials with descending degrees \( d'_1 > d'_2 > \ldots > d'_r \). Hence \( X_\infty \in \Shcell(\lambda^\comp;\infty) \).
In the other direction, if (\ref{eq:lim_a_ij-i-not=-e}) fails to hold, then there exists an \( i \neq e \) and a \( j \) such that \( \lim_{s \to \infty} \abs{a_{ij}(s)} = \infty \). There are two cases. First if \( i < e \), then choose \( j \) such that \( a_{ij}(s) \) has the fastest growth (for fixed \( i \)). Thus the polynomial
\begin{equation*}
\lim_{s \to \infty} a_{ij}(s)^{-1}f_i(u;s) \in X_\infty
\end{equation*}
has degree \( d_i-j \). But \( d_i-j \notin \mathbf{d} \) and \( d_i-j < d_e-1 \) thus \( d_i-j \notin \mathbf{d}' \) and so we must have \( X_\infty \notin \Shcell(\lambda^\comp;\infty) \).
Now for the second case, assume there exists an \( i \) such that \( i > e \) and such that \( \lim_{s \to \infty} \abs{a_{ij}(s)} = \infty \). For any \( f \in X_\infty \) there exist functions \( \alpha_i\map{\RR}{\CC} \) such that
\begin{equation*}
f = \lim_{s \to \infty} \alpha_1(s) f_1(u;s) + \alpha_2(s) f_2(u;s) + \ldots + \alpha_r(s) f_r(u;s).
\end{equation*}
If \( X_\infty \in \Shcell(\lambda^\comp;\infty) \) then we can choose \( f \) so that \( \deg f = d_i \). In order for this to be true we first need \( \lim_{s \to \infty} \alpha_k(s) = 0 \) for \( k > i \) (since \( \lim_{s \to \infty} f_k(u;s) \) exists in this case and has degree \( d_k > d_i \)). By assumption
\begin{equation*}
\deg \lim_{s \to \infty} \alpha_i(s) f_i(u;s) < d_i
\end{equation*}
and for \( k < i \) we also have
\begin{equation*}
\deg \lim_{s \to \infty} \alpha_k(s) f_k(u;s) < d_i.
\end{equation*}
Thus we have a contradiction and \( X_\infty \notin \Shcell(\lambda^\comp;\infty) \).
Finally, if conditions (\ref{eq:lim_a_e1}) and (\ref{eq:lim_a_ej-div-a_e1}) do not hold, then let \( j \) be minimal among the indices for which \( a_{ej}(s) \) has the largest order of growth. By assumption \( j > 1 \). Then the degree of \( \lim_{s \to \infty} a_{ej}(s)^{-1}f_e(u;s) \) is \( d_e-j \notin \mathbf{d}' \). Hence \( X_\infty \notin \Shcell(\lambda^\comp;\infty) \).
\end{proof}
Using this lemma we can prove the continuity of the coordinate map \( \mtvSC \) along certain kinds of paths. Heuristically, the paths along which \( \mtvSC \) is continuous are those in \( \Gr(r,d) \) which remain inside a Schubert cell, or which pass into another Schubert cell only in such a way that the partition labelling the new cell is a single box smaller than the partition labelling the original cell. Let \( X \), \( X(s) \), and \( X_\infty \) be as above.
\begin{Proposition}
\label{prp:cont-of-theta-along-paths}
If \( X_\infty \in \Shcell(\lambda^\comp;\infty) \) then
\begin{equation*}
\mtvSC(X_\infty) = \mtvSC\left( \lim_{s \to \infty} X(s) \right) = \lim_{s \to \infty} \mtvSC(X(s)).
\end{equation*}
\end{Proposition}
\begin{proof}
Since \( X_\infty \in \Shcell(\lambda^\comp;\infty) \) by Lemma~\ref{lem:bounds-on-aij} we have conditions (\ref{eq:lim_a_ij-i-not=-e}), (\ref{eq:lim_a_e1}) and (\ref{eq:lim_a_ej-div-a_e1}) and thus have a monic basis of descending degrees
\begin{align*}
f_i^\infty(u) = \lim_{s \to \infty} f_i(u;s) &= u^{d_i} + \sum_{\substack{j = 1 \\ d_i-j \notin \mathbf{d}}} a_{ij}^\infty u^{d_i - j}, \quad \text{for } i \neq e, \\
f_e^\infty(u) = \lim_{s \to \infty} a_{e1}(s)^{-1} f_e(u;s) &= u^{d_e-1} + \sum_{\substack{j = 2 \\ d_e-j \notin \mathbf{d}}} b_{j}^\infty u^{d_e - j}.
\end{align*}
Here \( a^\infty_{ij} = \lim_{s \to \infty}a_{ij}(s) \) and \( b_{j}^\infty = \lim_{s\to\infty} a_{ej}(s)/a_{e1}(s) \), which exist by (\ref{eq:lim_a_ij-i-not=-e}) and (\ref{eq:lim_a_ej-div-a_e1}).
Let \( X^a(s) = \CC\{ f_a(u;s),\ldots,f_r(u;s) \} \) and \( X^a_\infty = \CC\{ f^\infty_a,\ldots,f^\infty_r \} \). We can use these spaces to calculate the Wronskians; that is,
\begin{align*}
\mtvSC(X(s)) &= \left( \Wr(X^{1}(s)), \ldots, \Wr(X^r(s)) \right) \\
\intertext{and}
\mtvSC(X_\infty) &= \left( \Wr(X^{1}_\infty), \ldots, \Wr(X^r_\infty) \right).
\end{align*}
Since \( \lim_{s \to \infty}X^a(s) = X^a_\infty \) and since the Wronskian is continuous,
\begin{align*}
\lim_{s \to \infty} \mtvSC(X(s)) &= \lim_{s \to \infty} \left( \Wr(X^{1}(s)), \ldots, \Wr(X^r(s)) \right) \\
&= \left( \Wr(\lim_{s \to \infty} X^{1}(s)), \ldots, \Wr(\lim_{s \to \infty} X^r(s)) \right) \\
&= \left( \Wr(X^{1}_\infty), \ldots, \Wr(X^r_\infty) \right) \\
&= \mtvSC(X_\infty). \qedhere
\end{align*}
\end{proof}
\subsection{Identifying the Speyer and MTV labellings}
\label{sec:ident-spey-mtv}
We are now in a position to prove the main theorem, that the Speyer and MTV labellings agree. In Theorem~\ref{thm:conincides-with-Speyer-label} we saw that \( \Sp = \EL \), so it will be enough to show that \( \MTV = \EL \). First we show that Speyer's labelling is compatible with Marcus' labelling via the coordinate map.
\begin{Theorem}
\label{thm:speyer-equals-marcus}
If \( X \in \Shvar(z)_\mu \) and \( \EL(X)=T \in \SYT(\mu) \), then \( \mtvSC(X) = t_T \).
\end{Theorem}
\begin{proof}
We prove this theorem by induction on \( n \). In the case \( n = 1 \), both \( \Shvar(z)_\square \) and \( \crit(\square;z)_\square \) consist of a single point, each labelled by the unique \( \square \)-tableau. Thus by Theorem~\ref{thm:coord-map-onto-crit} they are mapped to each other by \( \mtvSC \).
For \( n > 1 \), as above, let \( X(s) \in \Shvar(\square^n,\mu^\comp;z_1,\ldots,z_{n-1},s,\infty) \) be the unique family of points passing through \( X \).
By Lemma~\ref{lem:Xinfty-in-lambda}, the limit \( X_\infty = \lim_{s \to \infty}X(s) \) is contained in \( \Shcell(\lambda^\comp;\infty) \) where \( \lambda = \mathrm{sh}(T|_{n-1}) \). We also know by Lemma~\ref{lem:limit-of-T-point} that \( \EL(X_\infty) = T|_{n-1} \). By the induction hypothesis \( \mtvSC(X_\infty) = t_{T|_{n-1}} \). In particular, by Proposition~\ref{prp:cont-of-theta-along-paths}
\begin{equation*}
\lim_{s \to \infty} \mtvSC(X(s)) = \mtvSC(X_\infty) = t_{T|_{n-1}}.
\end{equation*}
However, by Theorem~\ref{thm:RV-colliding-crit-points} and by the definition of Marcus' labelling, there is a unique family of critical points with this property, namely the family passing through \( t_T \). Hence we have that \( \mtvSC(X) = t_T \).
\end{proof}
\begin{Theorem}
\label{thm:MTV-equals-Speyer}
If \( X \in \Shvar(z)_\mu \) then \( \MTV(X) = \EL(X) = \Sp(X) \).
\end{Theorem}
\begin{proof}
We recall briefly from Section~\ref{sec:mtv-labelling} how \( \MTV\map{\Shvar(z)_\mu}{\SYT(\mu)} \) is defined. We consider the functional \( \chi = \kappa_z^{-1}(X) \in \bspec(z)_\mu \) and then take a limit as \( z_i \to \infty \) such that \( z_i/z_{i+1} \to 0 \). The eigenvalues \( \lim_{z\to\infty} \chi(z_aH_a) \) determine the content of the box containing \( a \) in the tableau \( \MTV(X) \).
By \cite[Corollary~8.7]{Mukhin:2012vk}, \( \omega \circ \mtvSC(X) \) is a simultaneous eigenvector for \( \uBethe(z)_\mu \) with eigenvalues given by \( \chi = \kappa_z^{-1}(X) \in \bspec(z)_\mu \). Let \( \EL(X)=T \in \SYT(\mu) \). Then by Theorem~\ref{thm:speyer-equals-marcus}, \( \mtvSC(X) = t_{T} \), and by Theorem~\ref{thm:weight-function},~(\ref{item:eigenvalue-ofGaudin}),
\begin{equation*}
\chi(z_a H_a(z)) = z_a \frac{\partial S}{\partial z_a}(z,t_T),
\end{equation*}
so by Theorem~\ref{thm:Marcus-main-theorem},
\begin{equation*}
\chi(z_a H_a(z)) = c_T(a) + O(z_a^{-1}),
\end{equation*}
which implies that \( \MTV(X) = T = \EL(X) \).
\end{proof}
\bibliographystyle{alpha}
\section*{SUMMARY}
Despite advances in artificial intelligence models, neural networks still cannot achieve human-level performance, partly due to differences in how information is encoded and processed compared with the human brain. Information in an artificial neural network (ANN) is represented using a statistical method and processed as a fitting function, enabling the handling of structural patterns in image, text, and speech processing. However, substantial changes to the statistical characteristics of the data, for example, reversing the background of an image, dramatically reduce performance. Here, we propose a quantum superposition spiking neural network (QS-SNN) inspired by quantum mechanisms and phenomena in the brain, which can handle reversal of image background color. The QS-SNN incorporates quantum theory with brain-inspired spiking neural network models from a computational perspective, resulting in more robust performance compared with traditional ANN models, especially when processing noisy inputs. The results presented here will inform future efforts to develop brain-inspired artificial intelligence.
\section*{INTRODUCTION}
Many machine learning methods using quantum algorithms have been developed to improve parallel computation. Quantum computers have also been shown to be more powerful than classical computers when running certain specialized algorithms, including Shor's quantum factoring algorithm~\citep{shor1999polynomial}, Grover's database search algorithm~\citep{grover1996fast}, and other quantum-inspired computational algorithms~\citep{manju2014applications}.
Quantum computation can also be used to find eigenvalues and eigenvectors of large matrices. For example, the traditional principal components analysis (PCA) algorithm calculates eigenvalues by decomposition of the covariance matrix; however, the computational resource cost increases exponentially with increasing matrix dimensions. For an unknown low-rank density matrix, quantum-enhanced PCA can reveal the quantum eigenvectors associated with the large eigenvalues; this approach is exponentially faster than the traditional method~\citep{lloyd2014quantum}.
K-means is a classic machine learning algorithm that classifies unlabeled datasets into $k$ distinct clusters. A quantum-inspired genetic algorithm for K-means has been proposed, in which a qubit-based representation is employed for exploration and exploitation in discrete ``0'' and ``1'' hyperspace. This algorithm was shown to obtain the optimal number of clusters and the optimal cluster centroids~\citep{xiao2010quantum}. Quantum algorithms have also been used to speed up the solving of subroutine problems and matrix inversion problems~\citep{harrow2009quantum}; for example, Grover's algorithm~\citep{grover1996fast} provides quadratic speedup of a search of unstructured databases.
The quantum perceptron and quantum neuron computational models combine quantum theory with neural networks~\citep{schuld2014quest}. Compared with the classical perceptron model, the quantum perceptron requires fewer resources and benefits from the advantages of parallel quantum computing~\citep{schuld2015simulating, torrontegui2019unitary}. The quantum neuron model~\citep{cao2017quantum, mangini2020quantum} can also be used to realize classical neurons with sigmoid or step function activation by encoding inputs in quantum superposition, thereby processing the whole dataset at once. Deep quantum neural networks \citep{beer2020training} raise the prospect of deploying deep learning algorithms on quantum computers.
Spiking neural networks (SNN) represent the third generation of neural network models \citep{maass1997networks} and are biologically plausible from neuron, synapse, network, and learning principles perspectives. Unlike the perceptron model, neurons in an SNN accept signals from pre-synaptic neurons, integrating the post-synaptic potential and firing a spike when the somatic voltage exceeds a threshold. After spiking, the neuron voltage is reset in preparation for the next integrate-and-fire process. SNN are powerful tools for representation and processing of spatial-temporal information. Many types of SNN have been proposed for different purposes. Examples include visual pathway-inspired classification models \citep{zenke2015diverse,zeng2017improving}, basal ganglia-based decision-making models~\citep{herice2016decision,cox2019striatal,zhao2017towards}, and other shallow SNN \citep{khalil2017effects,shrestha2018slayer}. Different SNN may include different types of biologically plausible neurons, e.g., the leaky integrate-and-fire (LIF) model \citep{gerstner2002spiking}, Hodgkin--Huxley model, Izhikevich model \citep{izhikevich2003simple}, and spike response model \citep{gerstner2001framework}. In addition, many different types of synaptic plasticity principles have been used for learning, including spike-timing-dependent plasticity \citep{dan2004spike,fremaux2016neuromodulated}, Hebbian learning \citep{song2000competitive}, and reward-based tuning plasticity \citep{herice2016decision}.
Quantum superposition SNN have a theoretical basis in both biology~\citep{Vaziri_2010} and computational models~\citep{kristensen2019artificial}. From one perspective, spiking neuron models, such as the LIF and Izhikevich models, can be reformulated with quantum algorithms in order to accelerate their processing on a quantum computer. On the other hand, quantum effects such as entanglement and superposition can be regarded as special information-interaction mechanisms
and can be used to modify the classical SNN framework so that it generates behavior similar to that of particles in the quantum domain. In this work, we follow the latter approach. More specifically, we use a quantum superposition mechanism to encode complementary information simultaneously and further transfer it to spike trains, which are suitable for SNN processing. In our proposed quantum superposition SNN (QS-SNN) model, quantum state representation is integrated with the spatio-temporal spike trains of SNN. This characteristic is conducive to good model performance not only on standard image classification tasks but also when handling color-inverted images. QS-SNN encodes the original image and the color-inverted image in the format of quantum superposition; the changing background context, expressed through the spiking phase and spiking rate, carries the image pixels' identity information.
We combine quantum superposition information encoding with SNN for three reasons. First, the possible influence of quantum effects on biological processes and the related quantum brain hypothesis have been theoretically investigated~\citep{Vaziri_2010, FISHER2015593,Weingarten2016New}. Second, quantum superposition states are represented by vectors in complex Hilbert space, in contrast to traditional ANN, which operate in real space only; this is more representative of brain spikes, as the spatio-temporal properties of spiking rate and spiking phase also admit a complex-number representation. In essence, SNN are more appropriate for quantum-inspired superposition information encoding. Third, current quantum machine learning methods, especially those used for quantum image processing, focus on encoding a classical image in the quantum state, with the image processing methods accelerated by quantum computing~\citep{iyengar2020analysis}. There has been less exploration of the possibility of using a quantum superposition state coding mechanism for different pattern information-processing frameworks. More importantly, owing to the use of statistical methods and fitting functions, current ANN show a huge performance drop when required to recognize a background-inverted image. This inspired us to develop a new information representation method unlike that used in traditional models. The integration of characteristics from SNN and quantum theory is intended to achieve a better representation of multiple states and potentially enable easier solving of tasks that are challenging for traditional ANN and SNN models.
The subsequent sections describe how complementary superposition information is generated and transferred to spatio-temporal spike trains. A two-compartment SNN is used to process the spikes. The proposed model, combining complementary superposition information encoding with the SNN spatio-temporal property, can successfully recognize a background color-inverted image, which is hard for traditional ANN models.
\subsection*{Complementary superposition information encoding}
\subsubsection*{Quantum image processing}
Quantum image processing combines image processing methods with quantum information theory. There are many approaches to internal representation of an image in a quantum computer, including flexible representation of quantum images (FRQI), NEQR, GQIR, MCQI, and
QBIP~\citep{iyengar2020analysis, mastriani2020quantum}, which transfer the image to appropriate quantum states for the next step of quantum computing. Our approach is inspired by the FRQI method \citep{le2011FRQI}, as shown in Equations (\ref{FRQI}) and (\ref{FRQI_limit}):
\begin{equation}
\mathinner{|I(\theta)\rangle}=\frac{1}{2^n}\sum\limits_{i=0}^{2^{2n}-1}(\cos(\theta_{i})\mathinner{|0\rangle}+\sin(\theta_{i})\mathinner{|1\rangle})\mathinner{|i\rangle},
\label{FRQI}
\end{equation}
\begin{equation}
\theta_{i} \in [0, \frac{\pi}{2}], \quad i= 0, 1, 2, \dots, 2^{2n}-1,
\label{FRQI_limit}
\end{equation}
where $\mathinner{|I(\theta)\rangle}$ is the quantum image, qubit $\mathinner{|i\rangle}$ represents the position of a pixel in the image, and $\theta=(\theta_{0},\theta_{1},\dots,\theta_{2^{2n}-1})$ encodes the color information of the pixels. FRQI satisfies the quantum state constraint in Equation (\ref{FRQI_limit_img}):
\begin{equation}
\parallel\mathinner{|I(\theta)\rangle}\parallel=\frac{1}{2^n}\sqrt{\sum\limits_{i=0}^{2^{2n}-1}(\cos^2\theta_{i}+\sin^2\theta_{i})}=1.
\label{FRQI_limit_img}
\end{equation}
\subsubsection*{Complementary superposition spikes}
We propose a complementary superposition information encoding method, establishing a linkage between quantum image representation and spatio-temporal spike trains. The complement code is widely used in computer science to turn subtraction into addition. We encode the original information and its complementary information into a superposition state; one example is shown in Equation (\ref{com_super}), in which the second basis state is the bitwise complement of the first:
\begin{equation}
\mathinner{|I(\theta_{i})\rangle}=\cos(\theta_{i})\mathinner{|[0000001]_b\rangle}+\sin(\theta_{i})\mathinner{|[1111110]_b\rangle}.
\label{com_super}
\end{equation}
Equation (\ref{com_super}) is only an illustration of how complementary superposition information encoding works and carries no further significance. In this work, we focus on quantum image superposition encoding, as in Equation (\ref{QS-SNN}). It should be noted, however, that any form of information that has a complement, not just an image, can be encoded as a superposition state. Images in complementary quantum superposition states are further transferred to spike trains, as depicted in Figure \ref{image_theta_spike}. An image in its complementary state has an inverted background.
\begin{equation}
\mathinner{|I(\theta)\rangle}=\frac{1}{2^n}\sum\limits_{i=0}^{2^{2n}-1}(\cos(\theta_{i})\mathinner{|x_{i}\rangle}+\sin(\theta_{i})\mathinner{|\bar{x}_{i}\rangle})\otimes\mathinner{|i\rangle}, \\
\label{QS-SNN}
\end{equation}
\begin{equation}
\theta_{i} \in [0, \frac{\pi}{2}], \quad i= 0, 1, 2, \dots, 2^{2n}-1.
\label{QS-SNN_limitation}
\end{equation}
The complementary quantum superposition encoding is shown in Equations (\ref{QS-SNN}) and (\ref{QS-SNN_limitation}), where \( \mathinner{|i\rangle} \) represents the pixel position. Unlike FRQI, which uses qubits only for color encoding, here we use complementary qubits to encode both the original image pixels \( x_{i} \) and the color-inverted image \( \bar{x}_{i} = 1-x_{i} \), assuming that pixel values range from 0 to 1. The parameter \( \theta_i \) measures the degree to which the quantum image \( \mathinner{|I\rangle} \) mixes the original state \( \mathinner{|x\rangle} \) and the reverse state \( \mathinner{|\bar{x}\rangle} \).
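As a minimal numerical illustration (our own sketch; the array layout and the per-pixel normalization check are assumptions, and no quantum hardware is involved), the per-pixel amplitudes of Equation (\ref{QS-SNN}) can be organized as follows:
\begin{verbatim}
import numpy as np

def superposition_amplitudes(img, theta):
    # Per-pixel amplitude pair (cos(theta), sin(theta)) attached to
    # |x_i> and |1 - x_i>; pixel values assumed to lie in [0, 1]
    x = img.ravel()
    pairs = np.stack([np.cos(theta) * np.ones_like(x),
                      np.sin(theta) * np.ones_like(x)], axis=1)
    return x, 1.0 - x, pairs

img = np.random.rand(4, 4)
for theta in (0.0, np.pi / 4, np.pi / 2):
    x, x_bar, pairs = superposition_amplitudes(img, theta)
    # each pixel state cos(theta)|x_i> + sin(theta)|x_bar_i> is normalized
    assert np.allclose((pairs ** 2).sum(axis=1), 1.0)
    # numerical collapse used for the non-spiking baselines (STAR Methods)
    print(round(theta, 3), (np.cos(theta) * x + np.sin(theta) * x_bar)[:3])
\end{verbatim}
At \( \theta = 0 \) the collapsed image is the original \( x \); at \( \theta = \pi/2 \) it is the color-inverted mirror \( \bar{x} \).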
We designed a quantum circuit for the generation of the quantum superposition image \( \mathinner{|I(\theta_i)\rangle} \), shown in Figure~\ref{fig_Si}(A); similar constructions are discussed in~\citep{le2011FRQI, dendukuri2018image}. The quantum state \( \mathinner{|x_i\rangle} \) is processed by a Hadamard transform \( H \) and a controlled-\( NOT \) gate to form the complementary state \( \mathinner{|\beta_{ix_i}\rangle} \) with:
\begin{equation}
\mathinner{|\beta_{ix_i}\rangle}=\frac{\mathinner{|0,x_i\rangle}+(-1)^i\mathinner{|1,\bar{x}_i\rangle}}{\sqrt{2}}.
\label{breta_ixi}
\end{equation}
Then the rotation matrix \( R_i \) is used to encode the phase information:
\begin{equation}
R_i = \left[
\begin{matrix}
\cos\frac{\theta_i}{2} & -\sin\frac{\theta_i}{2} \\ \\
\sin\frac{\theta_i}{2} & \cos\frac{\theta_i}{2}
\end{matrix}
\right].
\label{R_i}
\end{equation}
Finally, the superposition state \( \mathinner{|I(\theta_{i})\rangle} \) is measured, and the two branches are retrieved with probabilities \( P_i \) and \( Q_i \).
The complex information in quantum encoding is similar to signal processing in SNN. Neuron spikes can encode spatio-temporal information with specific spiking rates and spiking times, which can be used to represent quantum information.
Neuron spikes are identical in shape but differ significantly in frequency and phase (see the Izhikevich neuron model~\citep{izhikevich2003simple} in Figure S1), a spatio-temporal attribute that is well-suited to implementing the vector-form quantum image of Equation (\ref{QS-SNN}). We use spike trains with firing rate \( r_{i} \) and firing phase \( \varphi_i \) to represent the quantum image state \( \mathinner{|I(\theta_i)\rangle} \). As shown in Figure~\ref{fig_Si}(B), the spike trains carrying the information of \( \mathinner{|I(\theta_i)\rangle} \) can be generated using Equations (\ref{spike_rate}) and (\ref{theta_cal}).
\begin{equation}
r_{i}=\frac{\parallel\mathinner{|I(\theta)\rangle}\parallel - \sin(\varphi_i)}{\cos(\varphi_i)-\sin(\varphi_i)},
\label{spike_rate}
\end{equation}
\begin{equation}
\varphi_i=\mathcal{F}\{\arctan(\frac{P_j}{Q_j}) \mid j = 1, 2, \ldots, N\},
\label{theta_cal}
\end{equation}
Notation $\mathcal{F}\{X_i\}$ is set operation, and is specific in different tasks. The superposition state encoding $\mathinner{|I(\theta_{i})\rangle}$ is transferred to spike trains $S_i(t;\varphi_i)$, which is generated from a Poisson spikes $S_i(t)$ with spike rate $r_i$ and extended phase $\varphi_i$ shown as Equation~(\ref{spiketrains}). Here, $T$ is the time interval of neuron processing spikes received from pre-synaptic neurons, and $T_{sp}$ is the spiking time window in this period, as shown in Figure~\ref{fig_Si}(B):
\begin{equation}
S_{i}(t;\varphi_i)=S_i(t-t_0),
\label{spiketrains}
\end{equation}
\begin{equation}
t_0 = \frac{\varphi_i}{\pi/2}*(T-T_{sp}).
\label{phases}
\end{equation}
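A minimal NumPy sketch of Equations (\ref{spike_rate})--(\ref{phases}) is given below. It is our own illustration: the per-step rate interpretation, the discretization, the window lengths, and the stand-in value used for the norm term of Equation (\ref{spike_rate}) are all assumptions.
\begin{verbatim}
import numpy as np

def phase_shifted_poisson(r, phi, T=100, T_sp=60, rng=None):
    # Poisson spike train with per-step rate r placed in a window of
    # length T_sp, shifted by t0 = phi/(pi/2) * (T - T_sp)
    rng = rng or np.random.default_rng()
    t0 = int(round(phi / (np.pi / 2) * (T - T_sp)))
    spikes = np.zeros(T)
    spikes[t0:t0 + T_sp] = (rng.random(T_sp) < r).astype(float)
    return spikes

phi, nu = np.pi / 6, 0.8  # nu stands in for the norm term of the rate equation
r = (nu - np.sin(phi)) / (np.cos(phi) - np.sin(phi))
r = float(np.clip(r, 0.0, 1.0))   # keep the rate a valid probability
print(r, phase_shifted_poisson(r, phi)[:20])
\end{verbatim}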
\subsection*{Two-compartment SNN}
\subsubsection*{Synapses with time-differential convolution}
Synapses play an important part in the conversion of information from spikes in pre-synaptic neurons to membrane potential (or current) in post-synaptic neurons. In this work, the time-convolution kernel (TCK) synapse is used, as shown in Equations (\ref{convolution}) and (\ref{postmembrane}) and Figure S2. The spikes, \( S_i \), from pre-synaptic neurons are convolved with a kernel and then integrated into the dendrite membrane potential \( V_{b}(t) \). This process can be considered as a stimulus-response convolution, with the kernel playing the role of the response to a Dirac impulse~\citep{urbanczik2014learning}:
\begin{equation}
\left\{\begin{array}{l}
\kappa(t)= \zeta(t) - \zeta(-t) \\
\zeta(t)=\Theta(t)(e^{-\frac{t}{\tau}})
\end{array},\right.
\label{convolution}
\end{equation}
\begin{equation}
V_{j}^{b}(t) = \sum\limits_{i}w_{i,j}\parallel\int_{-T}^{+T}\kappa(\tau)S_i(\tau)\,d\tau\parallel.
\label{postmembrane}
\end{equation}
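A minimal sketch of the TCK synapse follows; it is our own illustration, and centering the spike train on a symmetric window is an assumption we make for simplicity:
\begin{verbatim}
import numpy as np

def tck_feature(spikes, tau=10.0, dt=1.0):
    # |integral kappa(t) S(t) dt| for one spike train, with the
    # antisymmetric kernel kappa(t) = zeta(t) - zeta(-t),
    # zeta(t) = Heaviside(t) * exp(-t / tau)
    n = len(spikes)
    t = (np.arange(n) - n / 2) * dt            # symmetric time axis
    zeta = lambda x: np.where(x >= 0, np.exp(-x / tau), 0.0)
    kappa = zeta(t) - zeta(-t)
    return abs(np.sum(kappa * spikes) * dt)

def dendrite_potential(spike_trains, weights, tau=10.0):
    # V_b = sum_i w_i * |integral kappa * S_i|
    return sum(w * tck_feature(s, tau) for w, s in zip(weights, spike_trains))

early = np.zeros(100); early[5:35] = 1.0       # same rate, earlier phase
late = np.zeros(100); late[60:90] = 1.0        # same rate, later phase
print(tck_feature(early), tck_feature(late))   # phase changes the feature
\end{verbatim}
Two trains with identical rates but different phases yield different dendritic features, which is how the phase information survives the synapse.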
\subsubsection*{Two-compartment neurons}
Both the hidden layer and the output layer contain biologically plausible two-compartment neurons, which dynamically update the somatic membrane potential $V_i(t)$ with the dendrite membrane potential $V^b_{i}(t)$, as shown in Figure S2.
In the compartment neuron model, $V_i^h(t)$ is the membrane potential of neuron $i$ in the hidden layer, which is updated with Equation (\ref{hidden}); $g_B$, $g_L$, and $\tau_L$ are hyperparameters that represent synapse conductance, leaky conductance, and the integrated time constant, respectively; $V^{PSP}_{j}(t)$ is the synaptic input from neuron $j$; $V_i^{h,b}(t)$ is the dendrite potential with adjustable threshold $b_i^h$ in the hidden layer; and $w_{ij}^h$ is the synaptic weight between the input and hidden layers:
\begin{equation}
\left\{\begin{array}{l}
\tau_L\frac{dV_i^{h}(t)}{dt}=-V_i^h(t)+\frac{g_B}{g_L}(V_i^{h,b}(t)- V_i^{h}(t)) \\
V_i^{h,b}(t) = \sum\limits_{j}w_{ij}^hV^{PSP}_j(t) + b_i^h \\
V^{PSP}_j(t)=\parallel\int_{-T}^{+T}\kappa(\tau)S_j(\tau)\,d\tau\parallel.
\end{array}\right.
\label{hidden}
\end{equation}
The somatic neuron model in the output layer contains 10 neurons corresponding to 10 classes of the MNIST dataset. As shown in Equation (\ref{outputneuron}), the hidden layer neurons deliver signals to the output layer with integrated spike rate $r_i$, which is differentiable; hence, it can be tuned with back-propagation. Here, $V_i^{o}(t)$ is the membrane potential in the output layer, $V_i^{o,b}(t)$ is the dendrite potential, and $r_{max}$ is the hyperparameter for rescaling of fire-rate signals:
\begin{equation}
\left\{\begin{array}{l}
\tau_L\frac{dV_i^{o}(t)}{dt}=-V_i^o(t)+\frac{g_B}{g_L}(V_i^{o,b}-V_i^o(t)) \\
V_i^{o,b}(t) = \sum\limits_{j}w_{ij}^or_{j}(t) + b_i^o \\
r_j=r_{max}\sigma(V_j^h) \\
\sigma(x)=1/(1+\exp(-x)).
\end{array}\right.
\label{outputneuron}
\end{equation}
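A simple Euler-integration sketch of Equations (\ref{hidden}) and (\ref{outputneuron}) follows; it is our own illustration, and all hyperparameter values are placeholders rather than the settings used in the experiments:
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def run_two_compartment(V_psp, W_h, b_h, W_o, b_o,
                        g_B=0.6, g_L=0.1, tau_L=10.0, r_max=0.1,
                        dt=1.0, steps=200):
    # tau_L dV/dt = -V + (g_B / g_L) * (V_dendrite - V), per layer
    V_h = np.zeros(W_h.shape[0])
    V_o = np.zeros(W_o.shape[0])
    for _ in range(steps):
        V_hb = W_h @ V_psp + b_h                  # hidden dendrite
        V_h += dt / tau_L * (-V_h + g_B / g_L * (V_hb - V_h))
        r = r_max * sigmoid(V_h)                  # hidden firing rates
        V_ob = W_o @ r + b_o                      # output dendrite
        V_o += dt / tau_L * (-V_o + g_B / g_L * (V_ob - V_o))
    return V_h, V_o

rng = np.random.default_rng(0)
V_psp = rng.random(784)                           # synaptic inputs
W_h, b_h = 0.01 * rng.standard_normal((500, 784)), np.zeros(500)
W_o, b_o = 0.01 * rng.standard_normal((10, 500)), np.zeros(10)
print(run_two_compartment(V_psp, W_h, b_h, W_o, b_o)[1])
\end{verbatim}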
The shallow three-layered architecture is shown in Figure \ref{QS-SNNarchitecture}. The input layer receives quantum spikes with encoding of complementary qubits. The hidden layer of QS-SNN consists of a two-compartment model with time-differential convolution synapses. Neurons in the output layer are integrated spike-rate neurons, which receive an integrated fire rate from pre-synaptic neurons, as well as the teaching signal $V_I$ as information about class labels.
\subsection*{Computational experiments}
We examined the performance of the QS-SNN framework on a classification task using background-color-inverted images from the MNIST~\citep{lecun2010mnist} and Fashion-MNIST~\citep{xiao2017Fashion} datasets. QS-SNN encodes the original image and its color-inverted mirror as complementary superposition states and transfers them to spiking trains as an input signal to the two-compartment SNN. The dendrite prediction and proximal gradient methods used to train this model can be found in STAR Methods.
For comparison, we also tested several deep learning models on the color-inverted datasets, including a fully connected ANN, a 10-layer CNN~\citep{LeNet}, VGG~\citep{Simonyan2015VeryDC}, ResNet~\citep{He2016DeepRL}, and DenseNet~\citep{huang2017densely}. All models were trained on the original images \( x_i \) and then tested on the background-inverted images \( I(\theta_i) \). The only difference is that, for QS-SNN, the quantum superposition state image \( \mathinner{|I(\theta_i)\rangle} \) is transferred to spike trains, which are compatible with spiking neural networks. In other words, our essential idea is that the spatio-temporal property of neuron spikes enables the brain to transform spatially variant information into time-differential information.
We formulated the spike train transformation as the quantum superposition shown in Equation (\ref{QS-SNN}). The numerical calculation of \( I(\theta_{i}) \) used for testing the traditional ANN and CNN models is described in STAR Methods.
In addition, it is worth noting that the superposition state image \( \mathinner{|I(\theta_i)\rangle} \) is constructed from the original information \( \mathinner{|x_i\rangle} \) and the reverse image \( \mathinner{|\bar{x}_i\rangle} \); the original image and its complementary reverse information are maintained simultaneously in the superposition state encoding \( \mathinner{|I(\theta_i)\rangle} \). Because SNN do not process pixel values directly, we transformed \( \mathinner{|I(\theta_i)\rangle} \) into spike trains, which can be regarded as a different expression of the superposition-state-encoded image in the spatio-temporal dimension.
\subsubsection*{Standard and color-inverted images}
The standard MNIST dataset contains images of 10 classes of handwritten digits from 0 to 9; images are \( 28\times28 \) pixels in size, with 60,000 training and 10,000 test samples. Fashion-MNIST has the same image size and the same training and testing split as MNIST but contains grayscale images of different types of clothes and shoes.
The original MNIST and Fashion-MNIST images and their color-inverted versions, with different degrees of inversion as measured by parameter $\theta$, are depicted in Figure \ref{shift_exp}(A) and (B), respectively.
To be specific, the spiking-phase estimation operation \( \mathcal{F}\{\cdot\} \) in Equation (\ref{theta_cal}) is set to the piecewise selection function
\begin{equation}
\varphi_i= \left\{\begin{array}{l}
\arctan(\frac{P_j}{Q_j}), \quad j = i, \\
0, \qquad\qquad\quad \ j \neq i.
\end{array}\right.
\end{equation}
\subsubsection* {Robustness to reverse pixel noise and Gaussian noise}
Besides the effects of changing the whole background, we were interested in the capability of QS-SNN to handle other types of destabilization of images. For this purpose, we added reverse spike pixels and Gaussian noise to the MNIST and Fashion-MNIST images, and further tested the performance of QS-SNN in comparison with that of ANN and CNN. Reverse spike noise is created by randomly flipping image pixels to their reverse color and can be described as \( \mathrm{Reverse}(image[i])=1-image[i] \). The position \( i \) of the pixel to be flipped is randomly chosen, as shown in Figure \ref{image_reverse_pixels}(A) and (B).
The noisy images were encoded and processed in the same way as described in Algorithm S1. However, in the color-inverted experiment, all pixels shared the same reverse degree \( \theta_i \), so the same change was applied to the whole image. By contrast, in the reverse pixel noise experiment, only a proportion of randomly chosen image pixels were changed; thus, every image pixel had its own parameter \( \theta_i \), and the pixels are transferred to spike trains with heterogeneous phases \( \varphi_i \). Specifically, in the reverse pixel experiment, we took the mean operation for \( \mathcal{F}\{\cdot \} \) to estimate the phase:
\begin{equation}
\varphi_i = \frac{1}{N}\sum_{j=1}^{N}\arctan(\frac{P_j}{Q_j}).
\end{equation}
Additive white Gaussian noise (AWGN) is commonly used to test system robustness. We also examined the performance of the proposed QS-SNN on AWGN MNIST and Fashion-MNIST images, as shown in Figure~\ref{gaussian_noise}(A) and (B).
In contrast to color-inverted noise, AWGN results in uncorrelated disturbances of the original image. We were interested in the robustness of our proposed method when faced with this challenging condition. The procedure used to process AWGN images was the same as that used in the reverse pixel noise experiment, except that the whole-image phase was estimated using the median operation:
\begin{equation}
\varphi_{i} = M\{\arctan(\frac{P_j}{Q_j}) \mid j=1,\ldots,N\}.
\end{equation}
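The three task-specific choices of \( \mathcal{F}\{\cdot\} \) used above (piecewise selection, mean, and median) can be summarized in a small sketch; this is our own illustration, using arctan2 as a division-safe stand-in for \( \arctan(P_j/Q_j) \):
\begin{verbatim}
import numpy as np

def estimate_phase(P, Q, mode, i=None):
    # arctan2(P, Q) equals arctan(P / Q) for positive P, Q and
    # stays finite when Q = 0
    angles = np.arctan2(P, Q)
    if mode == 'piecewise':     # color-inversion task: keep component i
        return angles[i]
    if mode == 'mean':          # reverse-pixel-noise task
        return angles.mean()
    if mode == 'median':        # Gaussian-noise task
        return np.median(angles)
    raise ValueError(mode)

P = np.array([0.9, 0.8, 0.1]); Q = 1.0 - P
print(estimate_phase(P, Q, 'mean'), estimate_phase(P, Q, 'median'))
\end{verbatim}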
\section*{RESULTS}
\subsection*{Standard and color-inverted datasets experiment}
We constructed a three-layer QS-SNN with 500 hidden layer neurons and 10 output layer neurons. The structure of the experimental fully connected ANN was set to be the same for comparison. A simple CNN structure with three convolution layers and two pooling operations was used to determine the ability of different feature extraction methods to deal with inverted background images. We also tested VGG16, ResNet101, and DenseNet121 to investigate whether deeper structures could classify color-inverted images correctly. ANN, CNN, and QS-SNN were trained for 20 epochs with the Adam optimization method, and the learning rate was set to 0.001. VGG16, ResNet101, and DenseNet121 were trained for 400 epochs using stochastic gradient descent
with learning rate 0.1, momentum 0.9, weight decay 5e-4, and learning rate decay 0.8 every 10 epochs. In the training phase, only the original image ($\theta=0$) was used; the testing phase used different color-inverted images ($\theta$ ranging from 0 to $\frac{\pi}{2}$). All results were obtained from the final epoch test step.
The results showed that the traditional fully connected ANN and convolution models struggled to handle huge changes in image properties such as background reversal, even when the spatial features of the image remained the same. Our proposed method showed much better performance than these traditional models (see Figures \ref{shift_exp}(C) and (D) and Tables S2, S3 for details). Significant performance degradation occurred when processing color-inverted images with ANN and CNN, and even deeper networks such as VGG16, ResNet101, and DenseNet121 experienced problems with color-inverted image classification. By contrast, QS-SNN, although affected by a similar performance drop when images were made blurry ($\theta$ from 0 to $\frac{4\pi}{16}$), regained its ability when the images' backgrounds were inverted and the clarity was improved ($\theta$ from $\frac{4\pi}{16}$ to $\frac{8\pi}{16}$). When the image color was fully inverted ($\theta=\frac{8\pi}{16}$), QS-SNN retained the same accuracy as when classifying the original data ($\theta=0$) and correctly recognized color-inverted MNIST and Fashion-MNIST images.
\subsection* {Robustness to noise experiments}
Compared with other state-of-the-art models, the performance of QS-SNN was closer to human vision capacity. As more flipped-pixel noise was added to the images ($r=0$ to $0.5$), they became increasingly difficult to recognize, as indicated by the left side of the `U'-curve for QS-SNN in Figure \ref{image_reverse_pixels}(C), (D).
However, as more noise was added to the pixels, the image features became clear again. When $r=1.0$, with all pixels reversed, there was no conflict with the features of the original image when $r=0$. QS-SNN can exploit these conditions owing to its image superposition encoding method (Equation \ref{QS-SNN}), which is similar to the human vision system. As shown in Figure \ref{image_reverse_pixels}(C), (D) and Tables S4, S5, randomly inverting image pixels caused substantial performance degradation of ANN and CNN, as well as of the deep networks. On the contrary, the red `U'-shaped curve for QS-SNN indicated that it recovered its accuracy as the image's features became clear but the background was inverted ($r=0.6$ to $1.0$).
Gaussian noise influenced all networks significantly, with all methods showing a performance drop as noise (standard deviation; $std$) increased, as shown in Figure \ref{gaussian_noise}(C), (D) and Tables S6, S7. QS-SNN behaved more stably on the AWGN image processing task, with accuracies of 90.2\% and 82.3\% on the MNIST and Fashion-MNIST datasets, respectively, for $std=0.4$; by contrast, the other methods achieved no more than 60\% and 50\%, respectively. Images with $std=0.4$ are not very difficult for human vision to distinguish. Thus, by combining a brain-inspired spiking network with a quantum mechanism, we obtain a more robust approach to images with noise disturbance, similar to the performance of human vision.
\section*{Figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=1.0\linewidth]{fig/figure1.pdf}
\caption{Quantum complementary superposition information encoding. \textbf{(A)} The horizontal axis and longitudinal axis represent \( \mathinner{|x\rangle} \) and \( \mathinner{|\bar{x}\rangle} \), respectively. The parameter \( \theta \) in Equation (\ref{QS-SNN}) measures the degree to which the image background is inverted, from \( \theta=0 \) (the original image) to the complementary state \( \theta=\frac{\pi}{2} \) (totally inverted background). \textbf{(B)} The top pictures show images inverted to different degrees, and the spikes to which they are encoded. The bottom axis corresponds to the value of \( \theta \). It should be noted that the pictures are an intuitive demonstration rather than an exact display.}
\label{image_theta_spike}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=1.0\linewidth]{fig/figure2.pdf}
\caption{Quantum superposition spike trains. \textbf{(A)} The circuit used to generate the quantum image; only one image pixel state is depicted for clarity. \textbf{(B)} A schematic diagram showing the transformation of quantum superposition states to spike trains \( S_{i}(t;\varphi_i) \). As the parameter \( \theta_i \) increases, the spike trains are shifted in the time dimension. \( T \) is a simulation period within which spikes emerge in the \( T_{sp} \) time window. Note that the relation between the parameter \( \theta_i \) and the spiking phase \( \varphi_i \) is an intuitive example and not an exact correspondence.}
\label{fig_Si}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=1.0\linewidth]{fig/figure3.pdf}
\caption{Quantum superposition SNN, three-layer architecture of QS-SNN with TCK synapses and two-compartment neurons. Images are transferred to spikes as network inputs. The hidden layer is composed of 500 two-compartment neurons with dendrite and soma. The output layer contains 10 two-compartment neurons corresponding to 10 classes. In the training period, only original images are fed to the network, whereas in the test period, the trained network is tested with inverted-background images. Neurons with maximum spiking rates at the output layer are taken as the network prediction and output.}
\label{QS-SNNarchitecture}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=1.0\linewidth]{fig/figure4.pdf}
\caption{Classification of color-inverted images. \textbf{(A)} MNIST background color-inverted images. Parameter $\theta$ takes values from 0 to $\frac{\pi}{2}$, denoting the degree of color inversion. \textbf{(B)} Fashion-MNIST background-color-inverted images. \textbf{(C)} Background color-inverted MNIST classification results. QS-SNN initially showed performance degeneration similar to that of the fully connected ANN and CNN, with $\theta$ values from 0 to $\frac{4\pi}{16}$. However, as the background-inversion degree further increased, QS-SNN gradually recovered its accuracy, whereas the other networks did not. When the background was totally inverted ($\theta=\frac{8\pi}{16}$), QS-SNN showed almost the same performance as when classifying original images, whereas the second-best network (VGG16) retained only half its original accuracy. \textbf{(D)} Background color-inverted Fashion-MNIST results. Similar results as in the MNIST experiment were achieved, with QS-SNN showing an even greater advantage (right-hand side).}
\label{shift_exp}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=1.0\linewidth]{fig/figure5.pdf}
\caption{Reverse spike noise results. \textbf{(A)} Reverse spike noise MNIST. The possibility of pixel inversion is controlled by parameter $r$. When $r=0$, no noise is added, i.e., the image is original data. When $r=1.0$, all pixels are flipped. \textbf{(B)} Reverse spike noise Fashion-MNIST. \textbf{(C)} Classification of MNIST images with reverse noise. QS-SNN performed better compared with the inverted background experiment. \textbf{(D)} Classification of Fashion-MNIST images with reverse noise.}
\label{image_reverse_pixels}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=1.0\linewidth]{fig/figure6.pdf}
\caption{Gaussian noise image classification. \textbf{(A)} Additive white Gaussian noise on MNIST. The mean of Gaussian random noise was set to zero, and different $std$ values were used. \textbf{(B)} Additive white Gaussian noise on Fashion-MNIST. \textbf{(C)} Classification of MNIST images with Gaussian noise. Compared with other networks, QS-SNN showed much slower degeneration. \textbf{(D)} Classification of Fashion-MNIST images with Gaussian noise. QS-SNN performed even better compared with its results on MNIST.}
\label{gaussian_noise}
\end{figure}
\section*{DISCUSSION}
This work aimed to integrate quantum theory with a biologically plausible SNN. Quantum image encoding and quantum superposition states were used for information representation, followed by processing with a spatial-temporal SNN. A time-convolution synapse was built to obtain neuron process phase information, and dendrite prediction with a proximal gradient method was used to train the QS-SNN. The proposed QS-SNN showed good performance on color-inverted image recognition tasks that were very challenging to other models. Compared with traditional ANN models, QS-SNN showed better generalization ability.
It is worth noting that the quantum brain hypothesis is quite controversial. Nevertheless, this paper does not aim to provide direct persuasive evidence for the quantum brain hypothesis but to explore novel information processing methods inspired by quantum information theory and brain spiking signal transmission.
\subsection*{Limitations of the study}
Our model was inspired by quantum image processing methods, in particular, quantum image superposition state presentation. The model and corresponding experiments were run on a classical computer and did not use any quantum hardware; thus, our work could not benefit from quantum computing. Artificial neurons can be reformed to run on quantum computers~\citep{schuld2014quest, cao2017quantum, mangini2020quantum}. Efforts to build a quantum spiking neuron are still at a preliminary stage~\citep{kristensen2019artificial}. Simulating spiking neuronal networks on a classical computer is hindered by heavy resource consumption and slow processing.
Future work includes modifying both the quantum superposition encoding strategy and the SNN architecture to suit quantum computing better. In computational neuroscience research, neuronal spikes are typically generated with the Poisson process, which samples data from the binomial distribution. Quantum bits, also named qubits, are fundamental components of a quantum computer. A qubit can take the value of "0" or "1" with a certain probability, which is very similar to neuronal spikes. Thus, a set of qubits can encode all possible states of a spike train, as well as the quantum superposition images, in the quantum computer. Although it requires much effort to reconstruct spiking neural models suited to quantum computing, it is important for neurology and artificial intelligence research to explore more quantum-inspired mechanisms to explain brain functions that traditional theories fail to explain.
\subsection*{Method details}
\subsubsection*{Generating background inverse image}
Background color-inverted images \( \mathinner{|I(\Theta)\rangle} \) with different degrees of inversion, used in the experiments, are generated according to the quantum superposition encoding. Here we describe the numerical form of the quantum image used to test the models. Suppose the original image is represented by \( X \) and the reversed image by \( \bar{X} \). The parameter \( \Theta \) controls the proportions of the original image and the reverse image in the background color-inverted image:
\begin{equation}
I(\Theta)=X\cos(\Theta)+\bar{X}\sin(\Theta).
\label{color_invese_image_exp}
\end{equation}
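A one-line NumPy rendering of Equation (\ref{color_invese_image_exp}) — our own sketch, assuming pixel values in \( [0,1] \):
\begin{verbatim}
import numpy as np

def background_inverse(X, Theta):
    # mixes the original image X with its color-inverted mirror 1 - X
    return np.cos(Theta) * X + np.sin(Theta) * (1.0 - X)

X = np.random.rand(28, 28)
assert np.allclose(background_inverse(X, 0.0), X)            # original
assert np.allclose(background_inverse(X, np.pi / 2), 1 - X)  # fully inverted
\end{verbatim}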
\subsubsection*{Learning procedure of the QS-SNN algorithm}
Dendrite prediction~\citep{urbanczik2014learning} and proximal gradient methods are used to tune the QS-SNN. In Equation (\ref{inputi}), \( I^{ject}_{i} \) is the teaching current, which integrates the correct label through \( g_{E_i}(E_E-U_{1}) \) and the incorrect labels through \( g_{I_i}(E_I-U_{1}) \). \( E_E \) (8 mV) and \( E_I \) (-8 mV) are the excitatory and inhibitory standard membrane potentials, respectively. The teaching current is injected into the soma of neurons in the output layer, generating an added potential \( V_{I_i} \) with membrane resistance \( r_B \), as shown in Equation (\ref{outputmem}):
\begin{equation}
\left\{\begin{array}{l}
I^{ject}_{i}=g_{E_i}(E_E-U_{1})+g_{I_i}(E_I-U_{1}) \\
g_{E_i}=
\left\{
\begin{array}{lr}
1, \quad i=label, \\
0, \quad i\neq label.
\end{array}
\right. \\
g_{I_i}=
\left\{
\begin{array}{lr}
0, \quad i=label, \\
1, \quad i\neq label.
\end{array}
\right. \\
V_{I_{i}} = r_B \cdot I^{ject}_{i},
\end{array}\right.
\label{inputi}
\end{equation}
\begin{equation}
\tau_L\frac{dV_i^{o}(t)}{dt}=-V_i^o(t)+\frac{g_B}{g_L}(V_i^{o,b}-V_i^o(t))+V_{I_{i}}-V_i^o(t).
\label{outputmem}
\end{equation}
Setting the left-hand side of Equation (\ref{outputmem}) to zero and \( V_i^{o}=V_{I_{i}} \), we obtain the steady state of the somatic potential in terms of the dendritic potential, \( V_i^{o*}=g_B/(g_B+g_L)V_i^{o,b} \). The dendrite prediction rule defines the soma-dendrite error as in Equation (\ref{lerror}):
\begin{equation}
L=\frac{1}{2}\sum\limits_{i=0}^N\parallel r_{max}\sigma(V_i^{o})-r_{max}\sigma(V_i^{o*})\parallel^2.
\label{lerror}
\end{equation}
Minimizing this error based on the differential chain rule, we obtain updated synaptic weights $w_{ij}^o$, as shown in Equations (\ref{diffw1}) and (\ref{diffb1}):
\begin{equation}
\begin{aligned}
\frac{\partial L}{\partial w_{ij}^o}&=\frac{\partial L}{\partial V_{i}^{o,b}}\frac{\partial V_{i}^{o,b}}{\partial w_{ij}^o}\\
&=r_{max}\frac{g_B}{g_B+g_L}\left[\sigma(V_i^{o*})-\sigma(V_i^{o})\right]\sigma'(V_i^{o*})r_{j} \\
&=\delta^{o}_{i}r_{j},
\end{aligned}
\label{diffw1}
\end{equation}
\begin{equation}
\begin{aligned}
\frac{\partial L}{\partial b_{i}^o}&=\frac{\partial L}{\partial V_{i}^{o,b}}\frac{\partial V_{i}^{o,b}}{\partial b_{i}^o}\\
&=r_{max}\frac{g_B}{g_B+g_L}\left[\sigma(V_i^{o*})-\sigma(V_i^{o})\right]\sigma'(V_i^{o*})\\
&=\delta^{o}_{i}.
\end{aligned}
\label{diffb1}
\end{equation}
Equation (\ref{update1}) shows the iterative updating of the synaptic weights \( w_{ij}^o \) and bias \( b_{i}^o \):
\begin{equation}
\left\{\begin{array}{l}
w_{ij}^o \gets w_{ij}^o - \eta\frac{\partial L}{\partial w_{ij}^o} \\
b_{i}^o \gets b_{i}^o - \eta\frac{\partial L}{\partial b_{i}^o}.
\end{array}\right.
\label{update1}
\end{equation}
For the hidden layer, error signal $\delta_{i}$ is passed from the previous layer, and neuron synapses $w_{ij}^h$ are adapted using Equations (\ref{diffw0}) and (\ref{diffb0}):
\begin{equation}
\begin{aligned}
\frac{\partial L}{\partial w_{ij}^h}&=\sum\limits_k\frac{\partial L}{\partial V_{k}^{o,b}} \frac{\partial V_{k}^{o,b}}{\partial r_{i}^h} \frac{\partial r_{i}^{h}}{\partial V_{i}^{h}}
\frac{\partial V_{i}^{h}}{\partial V_{i}^{h, b}} \frac{\partial V_{i}^{h, b}}{\partial w_{ij}^h} \\
&=\sum\limits_k\delta^{o}_{k} w_{ki}^{o} r_{max} \frac{g_B}{g_B+g_L} \sigma'(V_i^{h}) V_{j}^{PSP} \\
&=\sum\limits_k\delta^{o}_{k} \delta^{h}_{i} w_{ki}^{o} V_{j}^{PSP},
\end{aligned}
\label{diffw0}
\end{equation}
\begin{equation}
\begin{aligned}
\frac{\partial L}{\partial b_{i}^h}&=\sum\limits_k\frac{\partial L}{\partial V_{k}^{o,b}} \frac{\partial V_{k}^{o,b}}{\partial r_{i}^h} \frac{\partial r_{i}^{h}}{\partial V_{i}^{h}}
\frac{\partial V_{i}^{h}}{\partial V_{i}^{h, b}} \frac{\partial V_{i}^{h, b}}{\partial b_{i}^h} \\
&=\sum\limits_k\delta^{o}_{k} w_{ki}^{o} r_{max} \frac{g_B}{g_B+g_L} \sigma'(V_i^{h}) \\
&=\sum\limits_k\delta^{o}_{k} \delta^{h}_{i} w_{ki}^{o}.
\end{aligned}
\label{diffb0}
\end{equation}
Equation (\ref{update0}) shows the iterative updating of synaptic weights $w_{ij}^h$ and bias $b_{i}^h$:
\begin{equation}
\left\{\begin{array}{l}
w_{ij}^h \gets w_{ij}^h - \eta\frac{\partial L}{\partial w_{ij}^h} \\
b_{i}^h \gets b_{i}^h - \eta\frac{\partial L}{\partial b_{i}^h}.
\end{array}\right.
\label{update0}
\end{equation}
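An output-layer update step following Equations (\ref{lerror})--(\ref{update1}) can be sketched as below (our own illustration; the conductance and learning-rate values are placeholders). The hidden-layer updates of Equations (\ref{diffw0})--(\ref{update0}) back-propagate \( \delta^{o} \) in the same fashion:
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def output_layer_step(r, W_o, b_o, V_o, g_B=0.6, g_L=0.1,
                      r_max=0.1, eta=1e-3):
    # dendrite prediction and its steady-state somatic target
    V_ob = W_o @ r + b_o
    V_star = g_B / (g_B + g_L) * V_ob
    sig_star = sigmoid(V_star)
    # delta^o of the dendrite-prediction rule
    delta_o = (r_max * g_B / (g_B + g_L)
               * (sig_star - sigmoid(V_o))
               * sig_star * (1.0 - sig_star))   # sigma'(V_star)
    W_o -= eta * np.outer(delta_o, r)           # w <- w - eta * dL/dw
    b_o -= eta * delta_o                        # b <- b - eta * dL/db
    return W_o, b_o, delta_o
\end{verbatim}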
The training and test procedure for the QS-SNN model is shown in Algorithm~\ref{alg_QS-SNN}.
\begin{algorithm}[htbp]
\footnotesize
\caption{The learning procedure of QS-SNN.}
\label{alg_QS-SNN}
\newcommand{\State\hspace{\algorithmicindent}}{\State\hspace{\algorithmicindent}}
\begin{algorithmic}
\State {\bf 1.} Initialize weights $W_{j,i}$ with random uniform distribution, membrane potential states $V_i$, and other \\
\quad related hyperparameters as in Table S1.
\State {\bf 2.} Start training procedure with only original images in training dataset, $\theta_{i}=0$:
\State\hspace{\algorithmicindent} 2.1 Load training samples.
\State\hspace{\algorithmicindent} 2.2 Construct quantum superposition state representations of images.
\State\hspace{\algorithmicindent} 2.3 Input neuron spikes as Poisson process with spiking rate and phase time according to quantum \\ \quad \quad superposition image.
\State\hspace{\algorithmicindent} 2.4 Process time-differential convolution to obtain dynamical updating of membrane potential of \\ \quad \quad post-synaptic neurons.
\State\hspace{\algorithmicindent} 2.5 Update multi-layer membrane potential.
\State\hspace{\algorithmicindent} 2.6 Train the QS-SNN with dendrite prediction and proximal gradient.
\State\hspace{\algorithmicindent} 2.7 Select neurons in output layer with maximum spiking rate as the output class.
\State {\bf 3.} Start the test procedure using color-inverse images with different degrees of color inversion from the test \\ \qquad dataset, $\theta_{i}=0, \frac{\pi}{16}, \dots, \frac{8\pi}{16}$.
\State\hspace{\algorithmicindent} 3.1 Load the test samples and transfer to spike trains as in steps 2.2 and 2.3.
\State\hspace{\algorithmicindent} 3.2 Test the performance of the trained QS-SNN on color-inverse images.
\State\hspace{\algorithmicindent} 3.3 Output the test performance.
\end{algorithmic}
\end{algorithm}
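A high-level Python skeleton mirroring Algorithm~\ref{alg_QS-SNN} is sketched below. Every helper (\texttt{encode\_superposition}, \texttt{poisson\_spikes}, and the \texttt{model} methods) is a hypothetical placeholder standing for the corresponding numbered step above, not an actual API.
\begin{verbatim}
# Hypothetical skeleton of the learning procedure; all helpers are
# placeholders for the numbered steps of Algorithm 1.
def train_qs_snn(model, train_set, epochs, eta):
    for _ in range(epochs):
        for image, label in train_set:                     # step 2.1
            q_img = encode_superposition(image, theta=0.0) # step 2.2
            spikes = poisson_spikes(q_img)                 # step 2.3
            rates = model.forward(spikes)                  # steps 2.4-2.5
            model.backward(rates, label, eta)              # step 2.6

def test_qs_snn(model, test_set, thetas):
    accuracies = []
    for theta in thetas:                                   # step 3
        correct = 0
        for image, label in test_set:
            spikes = poisson_spikes(encode_superposition(image, theta))
            rates = model.forward(spikes)
            correct += int(rates.argmax() == label)        # step 2.7
        accuracies.append(correct / len(test_set))
    return accuracies
\end{verbatim}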
\section*{Acknowledgments}
This study was supported by the new generation of artificial intelligence major project of the Ministry of Science and Technology of the People's Republic of China (Grant No. 2020AAA0104305),
the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB32070100) and the Beijing Municipal Commission of Science and Technology (Grant No. Z181100001518006).
\subsection*{Author contributions}
Y.S. wrote the code, performed the experiments, analyzed the data, and wrote the manuscript. Y.Z. proposed and supervised the project and contributed to writing the manuscript. T.Z. participated in helpful discussions and contributed to writing the manuscript.
\subsection*{Declaration of interests}
The authors declare that they have no competing interests.
\setstretch{1.1}
The famous article by Landesman and Lazer \cite{L} considered a
semilinear Dirichlet problem at resonance (on a bounded domain $D \subset \mathbb{R}^n$)
\begin{equation} \label{i1}
\Delta u+\lambda_k u+g(u)=f(x), \quad x \in D, \quad u=0 \quad \text{on } \partial D\,.
\end{equation}
Assuming that $g(u)$ has finite limits at $\pm \infty$, and $\lambda _k$ is a
simple eigenvalue of $-\Delta$, they gave a necessary and sufficient
condition for the existence of solutions. This nonlinear version of the
Fredholm alternative has generated an enormous body of research, and perhaps
it can be seen as the beginning of the modern theory of nonlinear PDE's.
Soon after publication of \cite{L}, a more elementary proof was given by
Williams \cite{W}. Both \cite{L} and \cite{W} used Schauder's fixed point theorem.
Williams \cite{W} has observed that one can also handle the case of multiple
eigenvalues under a straightforward extension of the Landesman and Lazer condition.
No examples in which this condition can be verified for repeated eigenvalues were ever given.
Lazer and Leach \cite{L1}, on forced harmonic oscillators at resonance,
provides an example for this theorem, for the case of double eigenvalues
of the periodic problem in one dimension. This appears to be the only known example
in case $\lambda _k$ is not simple. It relies on some special properties of the
sine and cosine functions. Thus one has a uniform framework for these results,
into which the existence result of de Figueiredo and Ni \cite{FN}, its
generalization by Iannacci et al \cite{INW} (and our two extensions)
also fit in nicely. We review these results, and connect them to the recent
work of Korman and Li \cite{KL}, and Korman \cite{K2}.
We use a similar approach to give a rather complete discussion of
$2 \times 2$ elliptic systems at resonance, in case its linear part has
constant coefficients. Requiring that finite limits at infinity exist,
as in Landesman and Lazer, appears to be too restrictive for systems.
Instead, we use more general inequality type conditions, which are rooted
in Lazer and Leach \cite{L1}. We derive several sufficient conditions
of this type for systems at resonance, which turned out to depend
on the spectral properties of the linear part.
\section{An exposition and extensions of known results}
Given a bounded domain $D \subset \mathbb{R}^n$, with a smooth boundary, we denote
by $\lambda _k$ the eigenvalues of the Dirichlet problem
\begin{equation} \label{dl}
\Delta u+\lambda u=0, \quad x \in D, \quad u=0 \quad \text{on }\partial D \,,
\end{equation}
and by $\varphi _k(x)$ the corresponding eigenfunctions. For the \emph{resonant} problem
\begin{equation} \label{1}
\Delta u+\lambda_k u=f(x), \quad x \in D, \quad u=0 \quad \text{on }\partial D \,,
\end{equation}
with a given $f(x) \in L^2(D)$, the following well-known
\emph{Fredholm alternative} holds: the problem \eqref{1} has a
solution if and only if
\begin{equation} \label{2}
\int _D f(x) \varphi _k(x) \, dx=0 \,.
\end{equation}
One could expect things to be considerably harder for the nonlinear problem
\begin{equation}
\label{3}
\Delta u+\lambda_k u+g(u)=f(x), \quad x \in D, \quad u=0 \quad \text{on } \partial D \,.
\end{equation}
However, in the classical papers of Lazer and Leach \cite{L1}, and
Landesman and Lazer \cite{L} an interesting class of nonlinearities $g(u)$
was identified, for which one still has an analog of the Fredholm alternative.
Namely, one assumes that the finite limits $g(-\infty)$ and $g(\infty)$ exist,
and
\begin{equation} \label{4}
g(-\infty)<g(u)<g(\infty), \quad \text{for all } u \in R \,.
\end{equation}
From \eqref{3},
\begin{equation} \label{4.1}
\int _D g(u(x)) \varphi _k(x) \, dx=\int _D f(x) \varphi _k(x) \, dx \,,
\end{equation}
which implies, in view of \eqref{4},
\begin{equation} \label{5}
\begin{aligned}
&g(-\infty) \int_{\varphi _k>0} \varphi _k \, dx+g(\infty) \int_{\varphi _k<0} \varphi _k \, dx\\
&< \int _D f(x) \varphi _k \, dx
<g(\infty) \int_{\varphi _k>0} \varphi _k \, dx+g(-\infty) \int_{\varphi _k<0} \varphi _k \, dx \,.
\end{aligned}
\end{equation}
This is a necessary condition for solvability. It was proved by Landesman and
Lazer \cite{L} that this condition is also sufficient for solvability.
\begin{theorem}[\cite{L}]\label{thm:1}
Assume that $\lambda _k$ is a simple eigenvalue, while $g(u) \in C(R)$ satisfies
\eqref{4}. Then for any $f(x) \in L^2(D)$ satisfying \eqref{5},
the problem \eqref{3} has a solution $u(x) \in W^{2,2}(D) \cap W_0^{1,2}(D)$.
\end{theorem}
We shall prove the sufficiency part under a condition on $g(u)$, which
is more general than \eqref{4}. This condition had originated in Lazer and
Leach \cite{L1}, and it turned out to be appropriate when studying systems
(see the next section).
We assume that $g(u) \in C(R)$ is bounded on $R$, and there exist numbers
$c$, $d$, $C$ and $D$, with $c<d$ and $C<D$, such that
\begin{gather} \label{20}
g(u) > D \quad \text{for $u > d$} \,, \\
\label{21}
g(u) < C \quad \text{for $u < c$} \,.
\end{gather}
Define
\[
L_2=D \int _{\varphi _k>0} \varphi _k \, dx+C \int _{\varphi _k<0} \varphi _k \, dx, \quad
L_1=C \int _{\varphi _k>0} \varphi _k \, dx+D \int _{\varphi _k<0} \varphi _k \, dx \,.
\]
Observe that $L_2>L_1$, because $D>C$. We shall denote by $\varphi _k^{\perp}$
the subspace of $L^2(D)$, consisting of functions satisfying \eqref{2}.
\begin{theorem}\label{thm:3}
Assume that $\lambda _k$ is a simple eigenvalue, while $g(u) \in C(R)$ is
bounded on $R$, and satisfies \eqref{20} and \eqref{21}. Then for any
$f(x) \in L^{2}(D)$ satisfying
\begin{equation} \label{22}
L_1<\int _D f(x) \varphi _k \, dx<L_2 \,,
\end{equation}
problem \eqref{3} has a solution $u(x) \in W^{2,2}(D) \cap W_0^{1,2}(D)$.
\end{theorem}
\begin{proof}
Normalize $\varphi _k(x)$, so that $\int _D \varphi ^2_k(x) \, dx=1$. Denoting
$A_k=\int _D f(x) \varphi _k \, dx$, we decompose $f(x)=A_k \varphi _k (x)+e(x)$,
with $e(x) \in \varphi _k^{\perp}$ (where $\varphi _k^{\perp}$ is a subspace of
$L^2(D)$). Similarly, we decompose the solution $u(x)=\xi _k \varphi _k (x)+U(x)$,
with $U(x) \in \varphi _k^{\perp}$. We rewrite \eqref{4.1}, and then \eqref{3}, as
\begin{gather} \label{6}
\int _D g(\xi _k \varphi _k (x)+U(x)) \varphi _k(x) \, dx=A_k \,, \\
\label{7} \begin{gathered}
\Delta U+\lambda_k U=-g(\xi _k \varphi _k +U)+\varphi _k \int _D g(\xi _k \varphi _k +U )
\varphi _k\, dx+e, \quad x\in D \\
U=0 \quad \text{on } \partial D \,.
\end{gathered}
\end{gather}
Equations \eqref{6} and \eqref{7} constitute the classical Lyapunov-Schmidt
reduction of the problem \eqref{3}. To solve this system, we set up a
map $T: (\eta _k,V) \to (\xi _k,U)$, taking the space
$R \times \varphi _k^{\perp}$ into itself, by solving the equation
\begin{equation}\label{8}
\begin{gathered}
\Delta U+\lambda_k U=-g(\eta _k \varphi _k +V)+\varphi _k \int _D g(\eta _k \varphi _k +V )
\varphi _k \, dx+e, \; x\in D \\
U=0 \quad \text{on } \partial D \,,
\end{gathered}
\end{equation}
for $U$, and then setting
\begin{equation} \label{8.1}
\xi _k=\eta _k +A_k- \int _D g(\eta _k \varphi _k (x)+U(x)) \varphi _k(x) \, dx \,.
\end{equation}
The right hand side of \eqref{8} is orthogonal to $\varphi _k$, and so by the
Fredholm alternative we can find infinitely many solutions $U=U_0+c \varphi _k$.
Then we can choose $c$, so that $U \in \varphi _k^{\perp}$.
Assume first that $f(x) \in L^{\infty}(D)$. By the elliptic theory,
we can estimate $\|U\|_{W^{2,p}(D)}$ by the $L^p$ norm of the right hand
side of \eqref{8} plus $\|U\|_{L^p(D)}$, for any $p>1$. Since the homogeneous
equation, corresponding to \eqref{8}, has only the trivial solution in
$\varphi _k^{\perp}$, the $\|U\|_{L^p(D)}$ term can be dropped, giving us a
uniform estimate of $\|U\|_{W^{2,p}(D)}$. By the Sobolev embedding, for
some constant $c>0$ (for $p$ large enough)
\begin{equation} \label{9}
\|U\|_{L^{\infty}(D)} \leq c \quad \text{uniformly in }
(\eta _k,V) \in R \times \varphi _k^{\perp} \,.
\end{equation}
This implies that if $\eta _k$ is large and positive, the integral in \eqref{8.1}
is greater than $L_2$. When $\eta _k$ is large in absolute value and negative,
the integral in \eqref{8.1} is smaller than $L_1$. By our condition \eqref{22},
it follows that for $\eta _k$ large and positive, $\xi _k<\eta _k$, while
for $\eta _k$ large in absolute value and negative, $\xi _k>\eta _k$.
Hence, we can find a large $N$, so that if $\eta _k \in (-N,N)$, then
$\xi _k \in (-N,N)$. The map $T: (\eta _k,V) \to (\xi _k,U)$ is a continuous
and compact map, taking a sufficiently large ball of $R \times \varphi _k^{\perp}$
into itself. By Schauder's fixed point theorem (see e.g., Nirenberg \cite{N})
it has a fixed point, which gives us a solution of the problem \eqref{3}.
(A fixed point of \eqref{8.1} is a solution of \eqref{6}.)
In case $f(x) \in L^{2}(D)$, a little more care is needed to show that the
integral in \eqref{8.1} is greater (smaller) than $A_k$, for $\eta _k$
positive (negative) and large. Elliptic estimates give us
\begin{equation}
\label{9.1}
\|U\|_{L^{2}(D)} \leq c \quad \text{uniformly in }
(\eta _k,V) \in R \times \varphi _k^{\perp} \,.
\end{equation}
Set $G=\sup _{x \in D, \; u \in R } |g(u) \varphi _k(x)|$, and decompose
\begin{equation}
\label{9.2}
\int _D g(\eta _k \varphi _k (x)+U(x)) \varphi _k(x) \, dx= \int _{\varphi _k >0} g \varphi _k \, dx+ \int _{\varphi _k <0} g \varphi _k \, dx \,.
\end{equation}
The first integral we decompose further, keeping the same integrand,
\[
\int _{\varphi _k >0} \, dx=\int _{0<\varphi _k <\delta} \, dx
+ \int _{A_2} \, dx+ \int _{A_3} \, dx \equiv I_1+I_2+I_3 \,,
\]
with $A_2=(\varphi _k >\delta) \cap \left( |U|>\frac{\eta _k \varphi _k}{2} \right)$,
and $A_3=(\varphi _k >\delta) \cap \left( |U|<\frac{\eta _k \varphi _k}{2} \right)$.
Given any $\epsilon$, we fix $\delta$ so that the measure of the set
$\{ 0<\varphi _k(x) <\delta \}$ is less than $\epsilon$. Then
$|I_1| < \epsilon G$. In $I_2$ we integrate over the set, where
$|U|>\frac{\eta _k \delta}{2}$. Since $U$ is bounded in $L^2$ uniformly in $\eta _k$,
the measure of this set will get smaller than $\epsilon$ for $\eta _k$ large,
and then $|I_2| < \epsilon G$. In $I_3$ we have $g(u)>D$ for $\eta _k$ large,
and we integrate over the subset of the domain $D_+=\{x : \varphi _k(x)>0 \}$,
whose measure is smaller than that of $D_+$ by no more than $2 \epsilon$.
Hence $I_3>D \int _{\varphi _k >0} \varphi _k(x) \, dx-2 \epsilon G $, and then
\[
\int _{\varphi _k >0} g\varphi _k \, dx>D \int _{\varphi _k >0} \varphi _k(x) \, dx-4 \epsilon G \,.
\]
Proceeding similarly with the second integral in \eqref{9.2}, we conclude that
\[
\int _D g(\eta _k \varphi _k (x)+U(x)) \varphi _k(x) \, dx>L_2-8\epsilon G >A_k \,,
\]
if $\epsilon$ is small, which can be achieved for $\eta _k>0$ and large.
Similarly, we show that this integral is smaller than $A_k$ for $\eta _k<0$ and large.
\end{proof}
In case $\lambda _k$ has a multidimensional eigenspace, a generalization of
Landesman and Lazer \cite{L} result follows by a similar argument,
under a suitable condition. This was observed by S.A. Williams \cite{W},
back in 1970. Apparently no such examples in the PDE case for the
Williams \cite{W} condition were ever given. We remark that Williams \cite{W}
contained a proof of Landesman and Lazer \cite{L} result, which is
similar to the one above, see also a recent paper of Hastings and McLeod \cite{H}.
Our proof below is a little shorter than in \cite{W}.
\begin{theorem}[\cite{W}] \label{thm:10}
Assume that $g(u)$ satisfies \eqref{4}, $f(x) \in L^2(D)$, while for any $w(x)$
belonging to the eigenspace of $\lambda _k$
\begin{equation} \label{9.4}
\int _D f(x) w(x) \, dx<g(\infty) \int _{w>0} w \, dx
+g(-\infty) \int _{w<0} w \, dx \,.
\end{equation}
Then problem \eqref{3} has a solution. Condition \eqref{9.4} is also necessary
for the existence of solutions.
\end{theorem}
\begin{proof}
Let $E \subset L^2(D)$ denote the eigenspace of $\lambda _k$, and let $\varphi _1$,
$\varphi _2,\dots,\varphi _m$ be its orthogonal basis, with
$\int _D \varphi ^2_i(x) \, dx=1$, $1 \leq i \leq m$. Denoting
$A_i=\int _D f(x) \varphi _i \, dx$, we decompose
$f(x)=\sum_{i=1}^m A_i \varphi _i (x)+e(x)$, with $e(x) \in E ^{\perp}$
(where $E ^{\perp}$ is a subspace of $L^2(D)$). Similarly, we decompose the
solution $u(x)=\sum_{i=1}^m \xi_i \varphi _i (x)+U(x)$, with $U(x) \in E^{\perp}$.
We have
\begin{gather} \label{9.41}
\int _D g(\sum_{i=1}^m \xi_i \varphi _i+U(x)) \varphi _i(x) \, dx=A_i,
\quad i=1,\ldots,m \,, \\
\label{9.42}
\begin{gathered}
\Delta U+\lambda_k U=-g(\sum_{i=1}^m \xi_i \varphi _i +U)+\sum_{i=1}^m \varphi _i
\int _D g(\sum_{i=1}^m \xi_i \varphi _i+U ) \varphi _i\, dx+e \\
U=0 \quad \text{on } \partial D \,.
\end{gathered}
\end{gather}
Equations \eqref{9.41} and \eqref{9.42} constitute the classical
Lyapunov-Schmidt reduction of problem \eqref{3}. To solve this system,
we set up a map $T: (\eta _1,\ldots, \eta _m,V) \to (\xi _1,\ldots, \xi _ m,U)$,
taking the space $R^m \times E^{\perp}$ into itself, by solving the equation
\begin{equation} \label{9.43}
\begin{gathered}
\Delta U+\lambda_k U=-g(\sum_{i=1}^m \eta _i \varphi _i +V
)+\sum_{i=1}^m\varphi _i \int _D g(\sum_{i=1}^m \eta _i \varphi _i +V ) \varphi _i \, dx+e \\
U=0 \quad \text{on } \partial D \,,
\end{gathered}
\end{equation}
for $U$, and then setting
\begin{equation} \label{9.44}
\xi _i=\eta _i +A_i- \int _D g(\sum_{i=1}^m \eta _i \varphi _i (x)+U(x)) \varphi _i(x)
\, dx, \quad i=1,\ldots,m \,.
\end{equation}
The right-hand side of \eqref{9.43} is orthogonal to all $\varphi _i$, and so by
the Fredholm alternative we can find infinitely many solutions
$U=U_0+\sum_{i=1}^m c_i \varphi _i$. Then we can choose $c_i$, so that $U \in E^{\perp}$.
As before, we get a bound on $\|U\|_{L^{2}(D)}$, uniformly in
$(\eta _1,\ldots, \eta _m,V)$. Denoting
$I_i=\int _D g(\sum_{i=1}^m \eta _i \varphi _i (x)+U(x)) \varphi _i(x) \, dx$, we have
\begin{equation} \label{9.45}
\sum_{i=1}^m \xi _i ^2=\sum_{i=1}^m \eta _i ^2+2\sum_{i=1}^m
\eta _i(A_i-I_i)+\sum_{i=1}^m (A_i-I_i)^2 \,.
\end{equation}
Denoting $w=\sum_{i=1}^m \frac{\eta _i}{\sqrt{\eta _1^2+\cdots +\eta _m^2}} \varphi _i$,
we have
\begin{align*}
\sum_{i=1}^m \eta _i(A_i-I_i)
&=\sqrt{\eta _1^2+\cdots +\eta _m^2}
\Big(\int _D f w \, dx-\int _D g(\sqrt{\eta _1^2+\cdots +\eta _m^2} w+U) w \, dx\Big) \\
& <-\epsilon \sqrt{\eta _1^2+\cdots +\eta _m^2} \,,
\end{align*}
for some $\epsilon >0$, when $\sqrt{\eta _1^2+\cdots +\eta _m^2}$ is large,
in view of condition \eqref{9.4}.
If we denote by $h$ an upper bound on all of $(A_i-I_i)^2$, then from \eqref{9.45}
\[
\sum_{i=1}^m \xi _i ^2<\sum_{i=1}^m \eta _i ^2-2\epsilon \sqrt{\eta _1^2+
\cdots +\eta _m^2}+mh<\sum_{i=1}^m \eta _i ^2 \,,
\]
for $\sqrt{\eta _1^2+\cdots +\eta _m^2}$ large.
Then the map $T$ is a compact and continuous map of a sufficiently large ball
in $R^m \times E^{\perp}$ into itself, and we have a solution by Schauder's
fixed point theorem.
\end{proof}
Condition \eqref{9.4} implies the Landesman and Lazer \cite{L} condition \eqref{5}.
Indeed, if $\lambda _k$ is a simple eigenvalue, then $w=b \varphi _k$. If the constant
$b>0$ ($b<0$), then \eqref{9.4} implies the right (left) inequality in \eqref{5}.
In the ODE case a famous example of resonance with a two-dimensional eigen\-space
is the result of Lazer and Leach \cite{L1}, which we describe next.
We seek to find $2 \pi$ periodic solutions $u=u(t)$ of
\begin{equation} \label{10}
u''+n^2u +g(u)=f(t) \,,
\end{equation}
with a given continuous $2 \pi$ periodic $f(t)=f(t+2\pi)$, and $n$
is a positive integer. The linear part of this problem is at resonance, because
\[
u''+n^2u=0
\]
has a two-dimensional $2 \pi$ periodic null space, spanned by
$\cos nt$ and $\sin nt$. The following result is included in Lazer and
Leach \cite{L1}.
\begin{theorem}[\cite{L1}] \label{thm:2}
Assume that $g(u) \in C(R)$ satisfies \eqref{4}. Define
\begin{equation}
\label{***}
A=\int_0^{2\pi} f(t) \cos nt \, dt, \quad B=\int_0^{2\pi} f(t) \sin nt \, dt \,.
\end{equation}
Then a $2 \pi$ periodic solution of \eqref{10} exists if and only if
\begin{equation} \label{11}
\sqrt{A^2+B^2}<2\left(g(\infty)-g(-\infty) \right) \,.
\end{equation}
\end{theorem}
The following elementary lemma is easy to prove.
\begin{lemma}\label{lma:2}
Consider a function $\cos (nt-\delta)$, with an integer $n$ and any real $\delta$.
Denote $P=\{t \in (0,2\pi) : \cos (nt-\delta)>0 \}$ and
$N=\{t \in (0,2\pi) : \cos (nt-\delta)<0 \}$. Then
\[
\int_P \cos (nt-\delta) \, dt=2, \quad \int_N \cos (nt-\delta) \, dt=-2 \,.
\]
\end{lemma}
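Lemma \ref{lma:2} is also easy to confirm numerically. The following Python sketch (our own check, using a midpoint Riemann sum on a fine grid) verifies both integrals for several values of $n$ and $\delta$.
\begin{verbatim}
import numpy as np

N = 1_000_000
t = (np.arange(N) + 0.5) * (2.0 * np.pi / N)  # midpoints on (0, 2*pi)
dt = 2.0 * np.pi / N
for n in (1, 2, 5):
    for delta in (0.0, 0.7, 2.4):
        w = np.cos(n * t - delta)
        pos = np.sum(w[w > 0]) * dt           # integral over P
        neg = np.sum(w[w < 0]) * dt           # integral over N
        assert abs(pos - 2.0) < 1e-3 and abs(neg + 2.0) < 1e-3
\end{verbatim}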
We show next that Theorem \ref{thm:2} of Lazer and Leach provides
an example to Theorem \ref{thm:10} of Williams.
Indeed, any eigenfunction corresponding to $\lambda _n =n^2$ is of the form
$w=a \cos nt +b \sin nt$, $a,b \in R$, or $w =\sqrt{a^2+b^2} \cos (nt-\delta)$
for some $\delta$. The left hand side of \eqref{9.4} is $aA+bB$, while the
integral on the right is equal to
$2 \sqrt{a^2+b^2} \left(g(\infty)-g(-\infty) \right)$, in view of
Lemma \ref{lma:2}. We then rewrite \eqref{9.4} in terms of a scalar product
of two vectors
\[
\Big(\frac{a}{\sqrt{a^2+b^2}}, \frac{b}{\sqrt{a^2+b^2}} \Big) \cdot (A,B)<2
\left(g(\infty)-g(-\infty) \right) \quad \text{for all $a$ and $b$}\,,
\]
which is equivalent to the Lazer and Leach condition \eqref{11}.
Another perturbation of a harmonic oscillator at resonance was considered
by Lazer and Frederickson \cite{FL}, and Lazer \cite{L2}:
\begin{equation} \label{f1}
u''+g(u)u'+n^2u=f(t) \,.
\end{equation}
Here $f(t) \in C(R)$ satisfies $f(t+2\pi)=f(t)$ for all $t$,
$g(u) \in C(R)$, $n \geq 1$ is an integer.
Define $G(u)=\int_0^u g(t) \, dt$. We assume that the finite limits
$G(\infty)$ and $G(-\infty)$ exist, and
\begin{equation} \label{f2}
G(-\infty)<G(u)<G(\infty) \quad \text{for all } u \,.
\end{equation}
\begin{theorem}\label{thm:***}
Assume that \eqref{f2} holds, and let $A$ and $B$ be again defined by \eqref{***}.
Then the condition
\begin{equation} \label{f3}
\sqrt{A^2+B^2}<2n \left(G(\infty)-G(-\infty) \right)
\end{equation}
is necessary and sufficient for the existence of $2\pi$ periodic solution
of \eqref{f1}.
\end{theorem}
This result was proved in Lazer \cite{L2} for $n=1$, and by Korman and
Li \cite{KL}, for $n \geq 1$. Observe that the condition for solvability
now depends on $n$, unlike the Lazer and Leach condition \eqref{11}.
Also, Theorem \ref{thm:***} does not carry over to boundary value problems,
see \cite{KL}, unlike the result of Lazer and Leach.
Next, we discuss the result of de Figueiredo and Ni \cite{FN},
involving resonance at the principal eigenvalue.
\begin{theorem}[\cite{FN}] \label{thm:4}
Consider the problem
\begin{equation} \label{23}
\Delta u+\lambda_1 u+g(u)=e(x), \quad x\in D, \quad u=0 \quad \text{on }
\partial D \,.
\end{equation}
Assume that $e(x) \in L^{\infty}(D)$ satisfies $\int _D e(x) \varphi _1(x) \, dx=0$,
while the function $g(u) \in C(R)$ is a bounded function, satisfying
\begin{equation} \label{24}
ug(u)>0, \quad \text{for all $u \in R$} \,.
\end{equation}
Then the problem \eqref{23} has a solution $u(x) \in W^{2,p}(D) \cap W_0^{1,p}(D)$,
for any $p>1$.
\end{theorem}
If, in addition to \eqref{24}, we have
\begin{equation} \label{25}
\liminf _{u \to \infty} g(u)>0, \quad \text{and} \quad
\limsup _{u \to -\infty} g(u)<0 \,,
\end{equation}
then the previous Theorem \ref{thm:3} applies.
The result of de Figueiredo and Ni \cite{FN} allows either one (or both)
of these limits to be zero.
\begin{proof}[Proof of Theorem \ref{thm:4}]
Again we follow the proof of Theorem \ref{thm:3}. As before, we set up
the map $T: (\eta _1,V) \to (\xi _1,U)$, taking the space $R \times \varphi _1^{\perp}$
into itself. We use \eqref{8}, with $k=1$, to compute $U$, while equation
\eqref{8.1} takes the form
\begin{equation} \label{26}
\xi _1=\eta _1-\int _D g(\eta _1 \varphi _1 (x)+U(x)) \varphi _1(x) \, dx\,.
\end{equation}
Since $\|U\|_{L^{\infty}(D)}$ is bounded uniformly in $(\eta _1,V)$, we can
find $M>0$, so that for all $x \in D$
\begin{gather*}
\xi _1 \varphi _1 (x)+U(x)>0, \quad \text{for $\xi _1>M$} \\
\xi _1 \varphi _1 (x)+U(x)<0, \quad \text{for $\xi _1<-M$} \,.
\end{gather*}
Hence, $\xi _1<\eta _1$ for $\eta _1>0$ and large, while $\xi _1>\eta _1$
for $\eta _1<0$ and $|\eta _1|$ large. As before, the map
$T: (\eta _1,V) \to (\xi _1,U)$ is a continuous and compact map, taking
a sufficiently large ball of $R \times \varphi _1^{\perp}$ into itself,
and the proof follows by Schauder's fixed point theorem.
\end{proof}
This result was generalized to unbounded $g(u)$ by Iannacci, Nkashama and
Ward \cite{INW}. We now extend Theorem \ref{thm:4} to the higher eigenvalues.
\begin{theorem}
Consider the problem
\begin{equation} \label{26.1}
\Delta u+\lambda_k u+g(u)=e(x), \quad x \in D, \quad u=0 \quad \text{on } \partial D \,.
\end{equation}
Assume that $\lambda _k$, $ k \geq 1$, is a simple eigenvalue, and
$e(x) \in L^{\infty}(D)$ satisfies
$\int _D e(x) \varphi _k(x) \, dx=0$, while the function $g(u) \in C(R)$
is a bounded function, satisfying \eqref{24} and \eqref{25}.
Then the problem \eqref{26.1} has a solution
$u(x) \in W^{2,p}(D) \cap W_0^{1,p}(D)$, for any $p>1$.
\end{theorem}
\begin{proof}
We follow the proof of Theorem \ref{thm:3}, replacing problem \eqref{26.1}
by its Lyapunov-Schmidt decomposition \eqref{6}, \eqref{7}, then setting
up the map $T$, given by \eqref{8} and
\[
\xi _k=\eta _k-\int _D g(\eta _k \varphi _k (x)+U(x)) \varphi _k(x) \, dx\,.
\]
Since $g(u)$ is bounded, the $L^{\infty}$ estimate \eqref{9} holds.
By our conditions on $g(u)$, $\xi _k <\eta _k$ ($\xi _k >\eta _k$),
provided that $\eta _k>0$ ($\eta _k<0$) and $|\eta _k|$ is large.
As in the proof of Theorem \ref{thm:3}, we conclude that the map $T$
has a fixed point.
\end{proof}
We now review another extension of the result of Iannacci et al \cite{INW}
to the problem
\begin{equation} \label{230}
\Delta u +\lambda _1 u+g(u)=\mu _1 \varphi _1+e(x) \quad \text{on }D, \quad u=0 \quad
\text{on } \partial D \,,
\end{equation}
with $e(x) \in \varphi _1 ^\perp$. Decompose the solution $u(x)=\xi _1 \varphi _1+U$,
with $U \in \varphi _1^\perp$. We wish to find a solution pair
$(u, \mu _1)=(u, \mu _1)(\xi _1)$, i.e., the global solution curve.
We proved the following result in Korman \cite{K2}.
\begin{theorem}
Assume that $g(u) \in C^1(R)$ satisfies
\begin{gather*}
u g(u) >0 \quad \text{for all } u \in R \,, \\
g'(u) \leq \gamma< \lambda _2-\lambda _1 \quad \text{for all } u \in R \,.
\end{gather*}
Then there is a continuous curve of solutions of \eqref{230}:
$(u(\xi _1),\mu _1(\xi _1))$, $u \in H^2(D) \cap H^1_0(D)$, with
$-\infty<\xi _1<\infty$, and $\int _D u(\xi _1) \varphi _1 \, dx=\xi _1$.
This curve exhausts the solution set of \eqref{230}. The continuous
function $\mu _1(\xi _1)$ is positive for $\xi _1 >0$ and large,
and $ \mu _1(\xi _1)<0$ for $\xi _1 <0$ and $|\xi _1|$ large.
In particular, $\mu _1(\xi^0 _1)=0$ at some $\xi^0 _1$, i.e.,
we have a solution of
\[
\Delta u +\lambda _1 u+g(u)=e(x) \quad \text{on }D, \quad u=0 \quad \text{on }
\partial D \,.
\]
\end{theorem}
We see that the result of Iannacci et al \cite{INW} corresponds to just
one point on this solution curve.
\section{Resonance for systems}
We begin by considering linear systems of the type
\begin{equation} \label{30}
\begin{gathered}
\Delta u+au+bv=f(x) \quad x \in D, \quad u=0 \quad \text{on } \partial D \\
\Delta v+cu+dv=g(x) \quad x \in D, \quad v=0 \quad \text{on } \partial D \,,
\end{gathered}
\end{equation}
with given functions $f(x)$ and $g(x)$, and constants $a$, $b$, $c$ and $d$.
As before, we denote by $\lambda _k$ the eigenvalues of $-\Delta$ with the
Dirichlet boundary conditions (see \eqref{dl}),
and by $\varphi _k(x)$ the corresponding eigenfunctions. We shall assume throughout
this section that $\int _D \varphi ^2_k(x) \, dx=1$.
\begin{lemma}\label{lma:30}
Assume that
\[
(a-\lambda _k)(d-\lambda _k)-bc \ne 0, \quad \text{for all $k \geq 1$} \,,
\]
i.e., no eigenvalue of $-\Delta$ coincides with an eigenvalue of the matrix
$\begin{bmatrix}
a & b\\
c & d
\end{bmatrix}$.
Then for any pair $(f,g) \in L^2(D) \times L^2(D)$ there exists a unique
solution $(u,v)$, with $u$ and $v \in W^{2,2}(D) \cap W_0^{1,2}(D) $.
Moreover, for some $c>0$
\[
\|u\|_{W^{2,2}(D)}+\|v\|_{W^{2,2}(D)}
\leq c \left( \|f\|_{L^2(D)}+\|g\|_{L^2(D)} \right) \,.
\]
\end{lemma}
\begin{proof}
Using Fourier series, $f(x)=\Sigma _{k=1}^{\infty} f_k \varphi _k$,
$g(x)=\Sigma _{k=1}^{\infty} g_k \varphi _k$, $u(x)=\Sigma _{k=1}^{\infty} u_k \varphi _k$
and $v(x)=\Sigma _{k=1}^{\infty} v_k \varphi _k$, we obtain the unique
solution $(u,v) \in L^2(D) \times L^2(D)$. Using elliptic estimates
for each equation separately, we obtain
\begin{equation} \label{31}
\|u\|_{W^{2,2}(D)}+\|v\|_{W^{2,2}(D)}
\leq c \left( \|u\|_{L^2(D)}+\|v\|_{L^2(D)}+\|f\|_{L^2(D)}+\|g\|_{L^2(D)} \right) \,.
\end{equation}
Since we have uniqueness of solution for \eqref{30}, the extra terms
$\|u\|_{L^2(D)}$ and $\|v\|_{L^2(D)}$ are removed in a standard way.
\end{proof}
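The construction is elementary: for each $k$, the Fourier coefficients of the solution solve the $2\times 2$ linear system with matrix $\begin{pmatrix} a-\lambda _k & b\\ c & d-\lambda _k\end{pmatrix}$, whose determinant is nonzero by assumption. A minimal Python sketch of this mode-by-mode construction (our own illustration, with the series truncated at finitely many modes):
\begin{verbatim}
import numpy as np

def solve_modes(a, b, c, d, lam, f, g):
    # Fourier coefficients u_k, v_k of the unique solution of (30);
    # the determinant is nonzero for every k by assumption.
    lam, f, g = map(np.asarray, (lam, f, g))
    u, v = np.zeros(len(lam)), np.zeros(len(lam))
    for k in range(len(lam)):
        M = np.array([[a - lam[k], b], [c, d - lam[k]]])
        u[k], v[k] = np.linalg.solve(M, [f[k], g[k]])
    return u, v
\end{verbatim}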
The following two lemmas are proved similarly.
\begin{lemma}\label{lma:31}
Consider the problem (here $\mu$ is a constant)
\begin{equation} \label{32}
\begin{gathered}
\Delta u+\lambda _k u=f(x) \quad x\in D, \quad u=0 \quad \text{on } \partial D \\
\Delta v+\mu v=g(x) \quad x\in D, \quad v=0 \quad \text{on } \partial D \,,
\end{gathered}
\end{equation}
Assume that $\mu \ne \lambda _n$, for all $n \geq 1$, $f(x) \in \varphi ^{\perp}_k$,
$g(x) \in L^2(D)$. One can select a solution such that $u(x) \in \varphi ^{\perp}_k$,
and $v(x) \in L^2(D)$. Such a solution is unique, and the estimate \eqref{31} holds.
\end{lemma}
\begin{lemma}\label{lma:32}
Consider the problem
\begin{equation} \label{33}
\begin{gathered}
\Delta u+\lambda _k u=f(x) \quad x\in D, \quad u=0 \quad \text{on } \partial D \\
\Delta v+\lambda _m v=g(x) \quad x\in D, \quad v=0 \quad \text{on } \partial D \,,
\end{gathered}
\end{equation}
Assume that $f(x) \in \varphi ^{\perp}_k$, $g(x) \in \varphi ^{\perp}_m$.
One can select a solution such that $u(x) \in \varphi ^{\perp}_k$, and
$v(x) \in \varphi ^{\perp}_m$. Such a solution is unique, and the
estimate \eqref{31} holds.
\end{lemma}
\begin{lemma}\label{lma:33}
Consider the problem
\begin{equation} \label{34}
\begin{gathered}
\Delta u+\lambda _k u+v=f(x) \quad x\in D, \quad u=0 \quad \text{on } \partial D \\
\Delta v+\lambda _k v=g(x) \quad x\in D, \quad v=0 \quad \text{on } \partial D \,,
\end{gathered}
\end{equation}
Assume that $f(x) \in \varphi ^{\perp}_k$, $g(x) \in \varphi ^{\perp}_k$.
One can select a solution such that $u(x) \in \varphi ^{\perp}_k$, and
$v(x) \in \varphi ^{\perp}_k$. Such a solution is unique, and the estimate
\eqref{31} holds.
\end{lemma}
\begin{proof}
The second equation in \eqref{34} has infinitely many solutions of the
form $v=v_0+c \varphi _k$. We now select $c$, so that $v \in \varphi ^{\perp}_k$.
The first equation then takes the form
\[
\Delta u+\lambda _k u=f-v_0-c \varphi _k \in \varphi ^{\perp}_k \,.
\]
This equation has infinitely many solutions of the form
$u=u_0+c_1 \varphi _k$. We select $c_1$, so that $u \in \varphi ^{\perp}_k$.
\end{proof}
We now turn to nonlinear systems
\begin{equation} \label{35}
\begin{gathered}
\Delta u+au+bv+f(u,v)=h(x) \quad x\in D, \quad u=0 \quad \text{on } \partial D \\
\Delta v+cu+dv+g(u,v)=k(x) \quad x\in D, \quad v=0 \quad \text{on } \partial D \,,
\end{gathered}
\end{equation}
with given $h(x)$, $k(x)$ and bounded $f(u,v)$, $g(u,v)$.
If there is no resonance, existence of solutions is easy.
\begin{theorem}\label{thm:31}
Assume that $(a-\lambda _n)(d-\lambda _n)-bc \ne 0$, for all $n \geq 1$, and
$f(u,v)$, $g(u,v)$ are bounded.
Then for any pair $(h,k) \in L^2(D) \times L^2(D)$ there exists a solution
$(u,v)$, with $u$ and $v \in W^{2,2}(D) \cap W_0^{1,2}(D) $.
\end{theorem}
\begin{proof}
The map $(w,z) \to (u,v)$, obtained by solving
\begin{gather*}
\Delta u+au+bv=h(x)-f(w,z) \quad x\in D, \quad u=0 \quad \text{on } \partial D \\
\Delta v+cu+dv=k(x)-g(w,z) \quad x\in D, \quad v=0 \quad \text{on } \partial D \,,
\end{gather*}
in view of Lemma \ref{lma:30} (and Sobolev's embedding), is a compact and
continuous map of a sufficiently large ball around the origin in
$L^2(D) \times L^2(D)$ into itself, and Schauder's fixed point theorem applies.
\end{proof}
Next, we discuss the case of resonance, when one of the eigenvalues of
the coefficient matrix
$A=\begin{bmatrix}
a & b\\
c & d
\end{bmatrix}$ is $\lambda _k$.
We distinguish the following four possibilities: the second eigenvalue
of $A$ is not equal to $\lambda _m$ for all $m \geq 1$, the second eigenvalue
of $A$ is equal to $\lambda _m$ for some $m \ne k$, the second eigenvalue
of $A$ is equal to $\lambda _k$, and the matrix $A$ is diagonalizable,
and finally the second eigenvalue of $A$ is equal to $\lambda _k$, and
the matrix $A$ is not diagonalizable.
By a linear change of variables, $(u,v) \to (u_1,v_1)$,
$\begin{bmatrix}
u \\
v
\end{bmatrix}
=Q \begin{bmatrix}
u_1 \\
v_1
\end{bmatrix}$,
with a non-singular matrix $Q$, we transform the system \eqref{35} into
\[
\Delta
\begin{bmatrix}
u_1 \\
v_1
\end{bmatrix}
+Q^{-1}AQ \begin{bmatrix}
u_1 \\
v_1
\end{bmatrix}
=Q^{-1}\begin{bmatrix}
h \\
k
\end{bmatrix}
\]
with the matrix $Q^{-1}AQ$ being either diagonal, or the Jordan block
$\begin{bmatrix}
\lambda _k & 1\\
0 & \lambda _k
\end{bmatrix}$.
Let us assume that this change of variables has been performed, so that
there are three canonical cases to consider.
We consider the system
\begin{equation} \label{36}
\begin{gathered}
\Delta u+\lambda _k u+f(u,v)=h(x) \quad x\in D, \quad u=0 \quad \text{on } \partial D \\
\Delta v+\mu v+g(u,v)=k(x) \quad x\in D, \quad v=0 \quad \text{on } \partial D \,.
\end{gathered}
\end{equation}
We assume that $f(u,v)$, $g(u,v) \in C(R \times R)$ are bounded on $R\times R$,
and there exist numbers $c$, $d$, $C$ and $D$, with $c<d$ and $C<D$, such that
\begin{gather} \label{37}
f(u,v) > D \quad \text{for $u > d$}, \quad \text{uniformly in }v \in R \,, \\
\label{38}
f(u,v) < C \quad \text{for } u < c, \quad \text{uniformly in }v \in R \,.
\end{gather}
Define
\[
L_2=D \int _{\varphi _k>0} \varphi _k \, dx+C \int _{\varphi _k<0} \varphi _k \, dx,
\quad L_1=C \int _{\varphi _k>0} \varphi _k \, dx+D \int _{\varphi _k<0} \varphi _k \, dx \,.
\]
Observe that $L_2>L_1$, because $D>C$.
\begin{theorem}\label{thm:30}
Assume that $\lambda _k$ is a simple eigenvalue, $\mu \ne \lambda _n$ for all
$n \geq 1$, while $f(u,v)$, $g(u,v) \in C(R \times R)$ are bounded on $R\times R$,
and satisfy \eqref{37} and \eqref{38}. Assume that $h(x)$, $k(x) \in L^p(D)$,
for some $p>n$. Assume finally that
\begin{equation} \label{39}
L_1<\int _D h(x) \varphi _k(x) \, dx<L_2 \,.
\end{equation}
Then the problem \eqref{36} has a solution $(u,v)$, with
$u,v \in W^{2,p}(D) \cap W_0^{1,p}(D) $.
\end{theorem}
\begin{proof}
Denoting $A_k=\int _D h(x) \varphi _k \, dx$, we decompose
$h(x)=A_k \varphi _k (x)+e(x)$, with $e(x) \in \varphi _k^{\perp}$
(where $\varphi _k^{\perp}$ is a subspace of $L^2(D)$).
Similarly, we decompose the solution $u(x)=\xi _k \varphi _k (x)+U(x)$,
with $U(x) \in \varphi _k^{\perp}$.
Multiply the first equation in \eqref{36} by $\varphi _k$, and integrate
\begin{equation} \label{40}
\int _D f(\xi _k \varphi _k (x)+U(x),v) \varphi _k(x) \, dx=A_k \,.
\end{equation}
Then the first equation in \eqref{36} becomes
\begin{equation} \label{41}
\begin{gathered}
\Delta U+\lambda_k U=-f(\xi _k \varphi _k +U,v)+\varphi _k \int _D f(\xi _k \varphi _k +U ,v) \varphi _k\, dx
+e, \, x\in D \\
U=0 \quad \text{on } \partial D \,.
\end{gathered}
\end{equation}
Equations \eqref{40} and \eqref{41} constitute the classical Lyapunov-Schmidt
reduction of the first equation in \eqref{36}. To solve \eqref{40}, \eqref{41}
and the second equation in \eqref{36}, we set up a map
$T: (\alpha _k,W,Z) \to (\xi _k,U,V)$, taking the space
$R \times \varphi _k^{\perp} \times L^2(D)$ into itself, by solving
(separately) the linear equations
\begin{equation} \label{42}
\begin{gathered}
\Delta U+\lambda_k U=-f(\alpha _k \varphi _k +W,Z)+\varphi _k \int _D f(\alpha _k \varphi _k +W ,Z)
\varphi _k\, dx+e, \\
\Delta V+\mu V=-g(\alpha _k \varphi _k +W,Z)+k(x) \\
U=V=0 \quad \text{on } \partial D \,,
\end{gathered}
\end{equation}
and then setting
\begin{equation} \label{43}
\xi _k=\alpha _k +A_k- \int _D f(\alpha _k \varphi _k +U ,V) \varphi _k(x) \, dx \,.
\end{equation}
By Lemma \ref{lma:31}, the map $T$ is well defined, and $\|U\|_{W^{2,p}(D)}$
and $\|V\|_{W^{2,p}(D)}$ are bounded, and then by the Sobolev embedding
$\|U\|_{C^1(D)}$ and $\|V\|_{C^1(D)}$ are bounded uniformly in $(\alpha _k,W,Z)$.
This implies that if $\alpha _k$ is large and positive, the integral in \eqref{43}
is greater than $L_2>A_k$. When $\alpha _k$ is large in absolute value and negative,
the integral in \eqref{43} is smaller than $L_1<A_k$.
It follows that for $\alpha _k$ large and positive, $\xi _k<\alpha _k$,
while for $\alpha _k$ large in absolute value and negative, $\xi _k>\alpha _k$.
Hence, we can find a large $N$, so that if $\alpha _k \in (-N,N)$,
then $\xi _k \in (-N,N)$. The map $T$ is a continuous and compact map,
taking a sufficiently large ball of $R \times \varphi _k^{\perp} \times L^2(D)$
into itself. By Schauder's fixed point theorem it has a fixed point,
which gives us a solution of \eqref{36}.
\end{proof}
We consider next the system
\begin{equation} \label{46}
\begin{gathered}
\Delta u+\lambda _k u+f(u,v)=h(x) \quad x\in D, \quad u=0 \quad \text{on } \partial D \\
\Delta v+\lambda _ m v+g(u,v)=k(x) \quad x\in D, \quad v=0 \quad \text{on } \partial D \,,
\end{gathered}
\end{equation}
which includes, in particular, the case $m=k$.
Assume that there exist numbers $c_1$, $d_1$, $C_1$ and $D_1$, with
$c_1<d_1$ and $C_1<D_1$, such that
\begin{gather} \label{47}
g(u,v) > D_1 \quad \text{for }v > d_1, \quad \text{uniformly in } u \in R\,, \\
\label{48}
g(u,v) < C_1 \quad \text{for } v < c_1, \quad \text{uniformly in } u \in R \,.
\end{gather}
Define
\begin{gather*}
M_2=D_1 \int _{\varphi _m>0} \varphi _m \, dx+C_1 \int _{\varphi _m<0} \varphi _m \, dx, \\
M_1=C_1 \int _{\varphi _m>0} \varphi _m \, dx+D_1 \int _{\varphi _m<0} \varphi _m \, dx \,.
\end{gather*}
Observe that $M_2>M_1$, because $D_1>C_1$.
\begin{theorem}\label{thm:5}
Assume $\lambda _k$ and $\lambda _m$ are simple eigenvalues, while
$f(u,v)$, $g(u,v) \in C(R \times R)$ are bounded on $R\times R$, and
satisfy \eqref{37}, \eqref{38} and \eqref{47}, \eqref{48}. Assume that
$h(x)$, $k(x) \in L^p(D)$, for some $p>n$. Assume finally that
\begin{equation} \label{49}
L_1<\int _D h(x) \varphi _k(x) \, dx<L_2, \quad \text{and} \quad
M_1<\int _D k(x) \varphi _m (x)\, dx<M_2 \,.
\end{equation}
Then problem \eqref{46} has a solution $(u,v)$, with
$u,v \in W^{2,p}(D) \cap W_0^{1,p}(D) $.
\end{theorem}
\begin{proof}
Denoting $A_k=\int _D h(x) \varphi _k \, dx$ and
$B_m=\int _D k(x) \varphi _m \, dx$, we decompose
$h(x)=A_k \varphi _k (x)+e_1(x)$ and $k(x)=B_m \varphi _m (x)+e_2(x)$,
with $e_1(x) \in \varphi _k^{\perp}$ and $e_2(x) \in \varphi _m^{\perp}$.
Similarly, we decompose the solution $u(x)=\xi _k \varphi _k (x)+U(x)$
and $v(x)=\eta _m \varphi _m (x)+V(x)$, with $U(x) \in \varphi _k^{\perp}$
and $V(x) \in \varphi _m^{\perp}$. As before, we write down the Lyapunov-Schmidt
reduction of our problem \eqref{46}
\begin{gather*}
\int _D f(\xi _k \varphi _k +U ,\eta _m \varphi _m+V) \varphi _k(x) \, dx=A_k \\
\int _D g(\xi _k \varphi _k +U ,\eta _m \varphi _m+V) \varphi _m(x) \, dx=B_m \\
\begin{aligned}
\Delta U+\lambda_k U
&=-f(\xi _k \varphi _k +U,\eta _m \varphi _m+V)\\
&\quad +\varphi _k \int _D f(\xi _k \varphi _k +U ,\eta _m \varphi _m+V) \varphi _k\, dx+e_1,
\end{aligned} \\
\begin{aligned}
\Delta V+\lambda_m V
&=-g(\xi _k \varphi _k +U,\eta _m \varphi _m+V) \\
&\quad +\varphi _m \int _D g(\xi _k \varphi _k +U ,\eta _m \varphi _m+V) \varphi _m\, dx+e_2,
\end{aligned} \\
U=V=0 \quad \text{on } \partial D \,.
\end{gather*}
To solve this system, we set up a map
$T: (\alpha _k,W,\beta _m,Z) \to (\xi _k,U,\eta _m,V)$,
taking the space $(R \times \varphi _k^{\perp}) \times (R \times \varphi_m ^{\perp})$
into itself, by solving (separately) the linear problems
\begin{gather*}
\begin{aligned}
\Delta U+\lambda_k U
&=-f(\alpha _k \varphi _k +W,\beta _m \varphi _m+Z) \\
&\quad +\varphi _k \int _D f(\alpha _k \varphi _k +W ,\beta _m \varphi _m+Z) \varphi _k\, dx+e_1,
\end{aligned} \\
\begin{aligned}
\Delta V+\lambda _m V
&=-g(\alpha _k \varphi _k +W,\beta _m \varphi _m+Z) \\
&\quad +\varphi _m \int _D g(\alpha _k \varphi _k +W ,\beta _m \varphi _m+Z) \varphi _m\, dx+e_2,
\end{aligned}\\
U=V=0 \quad \text{on } \partial D \,,
\end{gather*}
and then setting
\begin{equation} \label{55}
\begin{gathered}
\xi _k=\alpha _k +A_k- \int _D f(\alpha _k \varphi _k +U ,\beta _m \varphi _m+V) \varphi _k(x) \, dx \\
\eta _m=\beta _m +B_m -\int _D g(\alpha _k \varphi _k +U ,\beta _m \varphi _m+V) \varphi _m\, dx \,.
\end{gathered}
\end{equation}
By Lemma \ref{lma:32}, the map $T$ is well defined, and
$\|U\|_{C^1(D)}$ and $\|V\|_{C^1(D)}$ are bounded uniformly in
$(\alpha _k,W,\beta _m,Z)$.
This implies that if $\alpha _k$ is large and positive,
$\int _D f(\alpha _k \varphi _k +U ,\beta _m \varphi _m+V) \varphi _k\, dx>L_2>A_k$.
When $\alpha _k$ is large in absolute value and negative, this integral
is smaller than $L_1<A_k$.
It follows that for $\alpha _k$ large and positive, $\xi _k<\alpha _k$,
while for $\alpha _k$ large and negative, $\xi _k>\alpha _k$. Hence, we
can find a large $N$, so that if $\alpha _k \in (-N,N)$, then
$\xi _k \in (-N,N)$, and arguing similarly with the second line in \eqref{55},
we see that if $\beta _m \in (-N,N)$, then $\eta _m \in (-N,N)$
(possibly with a larger $N$). The map $T$ is a continuous and compact map,
taking a sufficiently large ball of
$(R \times \varphi _k^{\perp}) \times (R \times \varphi_m ^{\perp})$ into itself.
By Schauder's fixed point theorem it has a fixed point, which gives us
a solution of \eqref{46}.
\end{proof}
We now turn to the final case
\begin{equation} \label{56}
\begin{gathered}
\Delta u+\lambda _k u+v+f(u,v)=h(x) \quad x\in D, \quad u=0 \quad \text{on } \partial D \\
\Delta v+\lambda _ k v+g(u,v)=k(x) \quad x\in D, \quad v=0 \quad \text{on } \partial D \,.
\end{gathered}
\end{equation}
Assume that there exist numbers $c_2$, $d_2$, $C_2$ and $D_2$, with
$c_2<d_2$ and $C_2<D_2$, such that
\begin{gather} \label{57}
g(u,v) > D_2 \quad \text{for } u > d_2, \quad \text{uniformly in }v \in R \,, \\
\label{58}
g(u,v) < C_2 \quad \text{for $u < c_2$}, \quad \text{uniformly in $v \in R$} \,.
\end{gather}
Define
\[
N_2=D_2 \int _{\varphi _k>0} \varphi _k \, dx+C_2 \int _{\varphi _k<0} \varphi _k \, dx, \quad
N_1=C_2 \int _{\varphi _k>0} \varphi _k \, dx+D_2 \int _{\varphi _k<0} \varphi _k \, dx \,.
\]
Observe that $N_2>N_1$, because $D_2>C_2$.
\begin{theorem}\label{thm:8}
Assume that $\lambda _k$ is a simple eigenvalue, while $f(u,v)$,
$g(u,v) \in C(R \times R)$ are bounded on $R\times R$, and $g$
satisfies \eqref{57}, \eqref{58}. Assume that $h(x)$, $k(x) \in L^p(D)$,
for some $p>n$. Assume finally that
\begin{equation} \label{59}
N_1<\int _D k(x) \varphi _k \, dx<N_2 \,.
\end{equation}
Then problem \eqref{56} has a solution $(u,v)$, with
$u,v \in W^{2,p}(D) \cap W_0^{1,p}(D) $.
\end{theorem}
\begin{proof}
Denoting $A_k=\int _D h(x) \varphi _k \, dx$ and $B_k=\int _D k(x) \varphi _k \, dx$,
we decompose $h(x)=A_k \varphi _k (x)+e_1(x)$ and $k(x)=B_k \varphi _k (x)+e_2(x)$,
with $e_1, e_2 \in \varphi _k^{\perp}$, and also
decompose the solution $u(x)=\xi _k \varphi _k (x)+U(x)$ and
$v(x)=\eta _k \varphi _k (x)+V(x)$, with $U, V \in \varphi _k^{\perp}$.
The Lyapunov-Schmidt reduction of our problem \eqref{56} is
\begin{equation} \label{59.1}
\begin{gathered}
\eta _k+ \int _D f(\xi _k \varphi _k +U ,\eta _k \varphi _k+V) \varphi _k(x) \, dx=A_k \\
\int _D g(\xi _k \varphi _k +U ,\eta _k \varphi _k+V) \varphi _k(x) \, dx=B_k
\\
\begin{aligned}
\Delta U+\lambda_k U+V
&=-f(\xi _k \varphi _k +U,\eta _k \varphi _k+V)\\
&\quad +\varphi _k \int _D f(\xi _k \varphi _k +U ,\eta _k \varphi _k+V) \varphi _k\, dx+e_1,
\end{aligned}\\
\begin{aligned}
\Delta V+\lambda_k V
&=-g(\xi _k \varphi _k +U,\eta _k \varphi _k+V) \\
&\quad +\varphi _k \int _D g(\xi _k \varphi _k +U ,\eta _k \varphi _k+V) \varphi _k\, dx+e_2,
\end{aligned}\\
U=V=0 \quad \text{on } \partial D \,.
\end{gathered}
\end{equation}
To solve this system, we set up a map
$T: (\alpha _k,W,\beta _k,Z) \to (\xi _k,U,\eta _k,V)$, taking the
space $(R \times \varphi _k^{\perp}) \times (R \times \varphi_k ^{\perp})$ into itself,
by solving the linear system
\begin{gather*}
\begin{aligned}
\Delta U+\lambda_k U+V
&=-f(\alpha _k \varphi _k +W,\beta _k \varphi _k+Z)\\
&\quad +\varphi _k \int _D f(\alpha _k \varphi _k +W ,\beta _k \varphi _k+Z) \varphi _k\, dx+e_1,
\end{aligned} \\
\begin{aligned}
\Delta V+\lambda _k V
&=-g(\alpha _k \varphi _k +W,\beta _k \varphi _k+Z) \\
&\quad +\varphi _k \int _D g(\alpha _k \varphi _k +W ,\beta _k \varphi _k+Z) \varphi _k\, dx+e_2,
\end{aligned} \\
U=V=0 \quad \text{on } \partial D \,,
\end{gather*}
and then setting
\begin{equation} \label{60}
\begin{gathered}
\xi _k=\alpha _k +B_k- \int _D g(\alpha _k \varphi _k +U ,\beta _k \varphi _k+V) \varphi _k(x) \, dx \\
\eta _k=A_k -\int _D f(\alpha _k \varphi _k +U ,\beta _k \varphi _k+V) \varphi _k\, dx \,.
\end{gathered}
\end{equation}
Fixed points of $T$ give us solutions of \eqref{59.1}, and hence of \eqref{56}.
By Lemma \ref{lma:33}, the map $T$ is well defined, and
$\|U\|_{C^1(D)}$ and $\|V\|_{C^1(D)}$ are bounded uniformly in
$(\alpha _k,W,\beta _k,Z)$.
This implies that if $\alpha _k$ is large and positive,
$\int _D g(\alpha _k \varphi _k +U ,\beta _k \varphi _k+V) \varphi _k\, dx>N_2>B_k$.
When $\alpha _k$ is large in absolute value and negative, this integral
is smaller than $N_1<B_k$.
It follows that for $\alpha _k$ large and positive, $\xi _k<\alpha _k$,
while for $\alpha _k$ large and negative, $\xi _k>\alpha _k$. Hence, we can
find a large $N$, so that if $\alpha _k \in (-N,N)$, then $\xi _k \in (-N,N)$.
Since the right hand side of the second line in \eqref{60} is bounded,
we see that if $\beta _k \in (-N,N)$, then $\eta _k \in (-N,N)$
(possibly with a larger $N$). The map $T$ is a continuous and compact map,
taking a sufficiently large ball of
$(R \times \varphi _k^{\perp}) \times (R \times \varphi_k ^{\perp})$ into itself.
By Schauder's fixed point theorem it has a fixed point, which gives us
a solution of \eqref{56}.
\end{proof}
\section{Appendix: A direct proof of the theorem of Lazer and Leach}
Many proofs of this classical result are available, including the one above,
and a recent proof in the paper of Hastings and McLeod \cite{H},
which also has references to other proofs. In this appendix we present
a proof which is consistent with our approach in the present paper.
\begin{proof}[Proof of Theorem \ref{thm:2}]
Let $L^2_n=\{ u(t) \in L^2_{\rm loc}(R) : u(t+2\pi)=u(t) \ \text{for all } t, \
\int_0^{2 \pi} u(t) \cos nt \, dt=\int_0^{2 \pi} u(t) \sin nt \, dt=0\}$.
As before, we decompose
\begin{equation} \label{14}
\begin{gathered}
f(t)=\frac{A}{\pi} \cos nt+\frac{B}{\pi} \sin nt+e(t) \\
u(t)=\xi \cos nt+\eta \sin nt+U(t) \,,
\end{gathered}
\end{equation}
with $e(t),U(t) \in L^2_n$. Multiply \eqref{10} by $\cos nt$
and $\sin nt$ respectively, and integrate
\begin{equation} \label{15}
\begin{gathered}
\int_0^{2\pi} g \left(\xi \cos nt+\eta \sin nt+U(t) \right) \cos nt \, dt=A \\
\int_0^{2\pi} g \left(\xi \cos nt+\eta \sin nt+U(t) \right) \sin nt \, dt=B \,.
\end{gathered}
\end{equation}
Using these equations, and the ansatz \eqref{14} in \eqref{10}
\begin{equation} \label{16}
\begin{aligned}
U''+n^2U
&=-g \left(\xi \cos nt+\eta \sin nt+U(t) \right) \\
&\quad +\frac{1}{\pi} \Big( \int_0^{2\pi} g \left(\xi \cos nt+\eta \sin nt+U(t)
\right) \cos nt \, dt \Big) \cos nt \\
&\quad +\frac{1}{\pi} \Big( \int_0^{2\pi} g \left(\xi \cos nt+\eta \sin nt+U(t)
\right) \sin nt \, dt \Big) \sin nt +e(t) \,.
\end{aligned}
\end{equation}
Equations \eqref{15} and \eqref{16} provide the Lyapunov-Schmidt reduction
of \eqref{10}.
To prove the necessity part, we multiply the first equation in \eqref{15} by
$\frac{A}{\sqrt{A^2+B^2}}$, the second one by $\frac{B}{\sqrt{A^2+B^2}}$,
and add, putting the result in the form
\[
\int_0^{2\pi} g \left(\xi \cos nt+\eta \sin nt+U(t) \right)
\cos (nt-\delta) \, dt=\sqrt{A^2+B^2} \,,
\]
for some $\delta$. Using Lemma \ref{lma:2}, the integral on the left is
bounded from above by
\[
g(\infty)\int_P \cos (nt-\delta) \, dt +g(-\infty)\int_N \cos (nt-\delta)\, dt
=2\left(g(\infty)-g(-\infty) \right) \,.
\]
Turning to the sufficiency part, we set up a map $T: (a,b,V) \to (\xi,\eta,U)$,
taking $R \times R \times L^2_n$ into itself, by solving
\begin{equation} \label{17}
\begin{aligned}
U''+n^2U &=-g \left(a \cos nt+b\sin nt+V(t) \right) \\
&\quad +\frac{1}{\pi} \Big( \int_0^{2\pi} g \left(a \cos nt+b \sin nt+V(t) \right)
\cos nt \, dt \Big) \cos nt \\
&\quad +\frac{1}{\pi} \Big( \int_0^{2\pi} g \left(a \cos nt+b \sin nt+V(t) \right)
\sin nt \, dt \Big) \sin nt +e(t)
\end{aligned}
\end{equation}
for $U$, and then set
\begin{equation} \label{17a}
\begin{gathered}
\xi=a+A-\int_0^{2\pi} g \left(a \cos nt+b \sin nt+U(t) \right) \cos nt \, dt \\
\eta=b+B-\int_0^{2\pi} g \left(a \cos nt+b \sin nt+U(t) \right) \sin nt \, dt \,.
\end{gathered}
\end{equation}
The right hand side of \eqref{17} is in $L^2_n$, and hence \eqref{17}
has infinitely many solutions of the form $U=U_0+c_1 \cos nt+c_2 \sin nt$.
We select the unique pair $(c_1,c_2)$, so that $U \in L^2_n$. By the elliptic
theory, we then have (since $g(u)$ is bounded)
\begin{equation} \label{18}
\|U\|_{L^{\infty}} \leq c, \quad \text{with some $c>0$, uniformly in $ (a,b,V)$} \,.
\end{equation}
We need to show that a sufficiently large ball in $(a,b)$ plane is mapped
into itself in $(\xi, \eta)$ plane. We have
\[
a \cos nt+b\sin nt=\sqrt{a^2+b^2} \cos (nt-\delta _1), \quad
\text{for some $\delta _1$} \,.
\]
Then for $a^2+b^2$ large
\begin{align*}
& aA+bB-a \int_0^{2\pi} g \left(a \cos nt+b \sin nt+U(t) \right) \cos nt \, dt\\
&-b \int_0^{2\pi} g \left(a \cos nt+b \sin nt+U(t) \right) \sin nt \, dt \\
&\leq \sqrt{a^2+b^2} \Big[ \sqrt{A^2+B^2}-\int_0^{2\pi} g
\left( \sqrt{a^2+b^2} \cos (nt-\delta _1)+U \right) \cos (nt-\delta _1) \, dt \Big] \\
& <-\mu \sqrt{a^2+b^2}, \quad \text{for some } \mu>0\,,
\end{align*}
because, for $a^2+b^2$ sufficiently large, the integral in the brackets gets
arbitrarily close to $2\left(g(\infty)-g(-\infty) \right)>\sqrt{A^2+B^2}$.
Since $g(u)$ is bounded, we can find $h>0$ so that
\begin{gather*}
\Big( A-\int_0^{2\pi} g \left(a \cos nt+b \sin nt+U(t) \right)
\cos nt \, dt \Big)^2 <h \,, \\
\Big(B-\int_0^{2\pi} g \left(a \cos nt+b \sin nt+U(t) \right)
\sin nt \, dt \Big)^2 <h \,.
\end{gather*}
Then from \eqref{17a}
\[
\xi ^2+\eta ^2<a^2+b^2-2\mu \sqrt{a^2+b^2}+2h<a^2+b^2 \,,
\]
for $a^2+b^2$ large.
Then the map $T$ is a compact and continuous map of a sufficiently
large ball in $R \times R \times L^2_n$ into itself, and we have a
solution by Schauder's fixed point theorem.
\end{proof}
\section{Appendix: Perturbations of forced harmonic oscillators at resonance
without Lazer-Leach condition}
We present next a result of de Figueiredo and Ni \cite{FN} type for
harmonic oscillators at resonance:
\begin{equation} \label{70}
u''+n^2 u+g(u)=e(t) \,.
\end{equation}
\begin{theorem}
Assume that $g(u) \in C(R)$ is a bounded function, and
\begin{gather}\label{71}
u g(u)>0 \quad \text{for all $u \in R$} \,, \\
\label{72}
\liminf _{u \to \infty} g(u)>0, \quad \limsup _{u \to -\infty} g(u)<0 \,.
\end{gather}
Assume that $e(t) \in C(R)$ is a $2 \pi$ periodic function, satisfying
\begin{equation} \label{73}
\int _0^{2\pi} e(t) \sin nt \, dt=\int _0^{2\pi} e(t) \cos nt \, dt=0 \,.
\end{equation}
Then problem \eqref{70} has a $2 \pi$ periodic solution.
\end{theorem}
\begin{proof}
We follow the proof of Theorem \ref{thm:2}. As before, equations \eqref{15},
with $A=B=0$, and \eqref{16} provide the Lyapunov-Schmidt reduction
of \eqref{70}. To solve these equations, we again set up the map
$T: (a,b,V) \to (\xi,\eta,U)$, taking $R \times R \times L^2_n$ into itself,
by solving \eqref{17} and then setting
\begin{equation} \label{74}
\begin{gathered}
\xi=a-\int_0^{2\pi} g \left(a \cos nt+b \sin nt+U(t) \right) \cos nt \, dt
\equiv a -I_1 \\
\eta=b-\int_0^{2\pi} g \left(a \cos nt+b \sin nt+U(t) \right) \sin nt \, dt
\equiv b -I_2\,.
\end{gathered}
\end{equation}
As before,
\begin{equation} \label{75}
\|U\|_{L^{\infty}} \leq c, \quad \text{with some $c>0$, uniformly in $ (a,b,V)$} \,.
\end{equation}
We have
\[
\xi ^2+\eta ^2 =a^2+b^2-2(aI_1+bI_2)+I^2_1+I^2_2 \,.
\]
Using the condition \eqref{72} and the estimate \eqref{75}, we can find a
constant $\mu>0$ such that
\[
aI_1+bI_2
=\sqrt{a^2+b^2} \int_0^{2\pi} g \Big( \sqrt{a^2+b^2} \cos (nt-\delta _1)+U \Big)
\cos (nt-\delta _1) \, dt
\geq \mu \sqrt{a^2+b^2} \,,
\]
for $a^2+b^2$ large. Denoting by $h$ a bound on $I_1$ and $I_2$, we have
\[
\xi ^2+\eta ^2<a^2+b^2-2\mu \sqrt{a^2+b^2}+2h<a^2+b^2 \,,
\]
for $a^2+b^2$ large.
Then the map $T$ is a compact and continuous map of a sufficiently large
ball in $R \times R \times L^2_n$ into itself, and we have a solution by
Schauder's fixed point theorem.
\end{proof}
To take noise into account in HBAC, we analyze the following noise channels. Their single-qubit Kraus operators are as follows \cite{nielsen_chuang_2010}:
\begin{enumerate}
\item
Generalized amplitude damping channel:
\begin{align}
E_0&=
\sqrt{p}\begin{pmatrix}
1 & 0 \\
0 & \sqrt{1-\gamma}
\end{pmatrix},
&
E_1&=
\sqrt{p}\begin{pmatrix}
0 & \sqrt{\gamma} \\
0 & 0
\end{pmatrix}, \nonumber
\\
E_2&=
\sqrt{1-p}\begin{pmatrix}
\sqrt{1-\gamma} & 0 \\
0 & 1
\end{pmatrix},
&
E_3&=
\sqrt{1-p}\begin{pmatrix}
0 & 0 \\
\sqrt{\gamma} & 0
\end{pmatrix}.
\label{App.Eq:1}
\end{align}
Here $\gamma$ and $p$ are the parameters characterizing the generalized amplitude damping (GAD) noise. As depicted in Fig.~\ref{fig:2}, we model a quantum noise that flips the basis states of a qubit Hilbert space from $\ket{0}$ to $\ket{1}$ and vice versa, with probabilities $\alpha$ and $\beta$, respectively. This implies that the probability of remaining in $\ket{0}$ is $1-\alpha$, and that of remaining in $\ket{1}$ is $1-\beta$. Consequently, for the GAD noise, $\alpha = \gamma (1-p)$ and $\beta = \gamma p$ (Table.~\ref{table:1}).
\item
Depolarizing channel:
\begin{align}
E_0&=\sqrt{1-\frac{3p}{4}} \mathds{1},
&
E_1&=\sqrt{\frac{p}{4}}X,\nonumber
\\
E_2&=\sqrt{\frac{p}{4}}Y,
&
E_3&=\sqrt{\frac{p}{4}}Z.
\label{App.Eq:2}
\end{align}
For the depolarizing channel, $\alpha$ and $\beta$ coincide and are both equal to $\frac{p}{2}$ (Table.~\ref{table:1}).
\item
Bit-flip channel:
\begin{align}
E_0&=\sqrt{p}\mathds{1},
&
E_1&=\sqrt{1 - p}X.
\label{App.Eq:3}
\end{align}
In the bit-flip channel, the states $\ket{0}$ and $\ket{1}$ are flipped into each other with probability $1-p$, and each remains in its state with probability $p$ (Table.~\ref{table:1}).
\end{enumerate}
It is worth mentioning that the phase damping channel does not alter the diagonal elements of the density matrix (for more information see the appendix \ref{Non_diagonal_density_matrices}). Therefore, we do not consider the phase damping channel in our calculations.
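As a sanity check, the Kraus operators \eqref{App.Eq:1}--\eqref{App.Eq:3} satisfy the completeness relation $\sum_i E_i^\dagger E_i = \mathds{1}$. The following Python sketch (our own verification, with arbitrarily chosen test parameters) confirms this for all three channels.
\begin{verbatim}
import numpy as np

I = np.eye(2); X = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1.0, -1.0])

def gad(p, g):          # generalized amplitude damping
    return [np.sqrt(p) * np.diag([1.0, np.sqrt(1 - g)]),
            np.sqrt(p * g) * np.array([[0.0, 1.0], [0.0, 0.0]]),
            np.sqrt(1 - p) * np.diag([np.sqrt(1 - g), 1.0]),
            np.sqrt((1 - p) * g) * np.array([[0.0, 0.0], [1.0, 0.0]])]

def depolarizing(p):
    return [np.sqrt(1 - 3 * p / 4) * I] + [np.sqrt(p / 4) * P
                                           for P in (X, Y, Z)]

def bit_flip(p):
    return [np.sqrt(p) * I, np.sqrt(1 - p) * X]

for kraus in (gad(0.3, 0.2), depolarizing(0.1), bit_flip(0.9)):
    S = sum(E.conj().T @ E for E in kraus)
    assert np.allclose(S, I)   # completeness relation
\end{verbatim}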
\section{TSAC and Noise channels' transfer matrices}\label{Noise_channels_transfer_matrix}
According to Eq.~\ref{Eq:computational_changes} the effect of both TSAC and the noise on the diagonal elements is given by $ T_{\text{TSAC}} T_{\text{Noise}}$. For a system with $n$ computational qubits, the transfer matrix of TSAC is as follows \cite{raeisi2019novel}
\begin{equation}
T_{\text{TSAC}} = \frac{1}{z} \begin{pmatrix}
e^{\varepsilon_0} & e^{\varepsilon_0} & 0 & \cdots & 0 \\
e^{-\varepsilon_0} & 0 & e^{\varepsilon_0} & \cdots & 0 \\
0 & e^{-\varepsilon_0} & 0 & \cdots & 0 \\
0 & 0 & \cdots & \ddots & \vdots \\
0 & 0 & \cdots & e^{-\varepsilon_0} & e^{-\varepsilon_0}
\end{pmatrix}_{2^n \times 2^n},
\label{Eq: T_TSAC}
\end{equation}
where $z = e^{-\varepsilon_0} + e^{\varepsilon_0}$ is the partition function of the reset qubit. The single-qubit noise is described in appendix \ref{Noise_channels_Kraus}. In a multiple-qubit system, each qubit is affected by the noise independently, with flip probabilities $\alpha$ and $\beta$, because uncorrelated noise channels are applied in the process (Fig.~\ref{fig:1}). Accordingly, the transfer matrix of each noise channel is a $2^n \times 2^n$ matrix whose elements are given by
\begin{equation}
T_{\text{Noise},ij} = (1-\alpha)^{d_H(\Bar{i}\wedge \Bar{j})}\: \alpha^{d_H(\Bar{i}\wedge j)}
\: \beta^{d_H(i\wedge \Bar{j})} \: (1-\beta)^{d_H(i\wedge j)}
\label{App.Eq:4}
\end{equation}
where $i$ and $j$ denote arbitrary computational-basis states of the multiple-qubit system, and $d_H(x)$ is the Hamming distance between $x$ and $0$, i.e., the Hamming weight of $x$. For example, for $n=3$, the state $\ket{i} = \ket{010}$ transfers to the state $\ket{j} = \ket{100}$ with probability $T_{\text{Noise},ij} = (1-\alpha) \alpha \beta$.
For the GAD, depolarizing and bit-flip channels the transfer matrix has exactly one eigenvalue equal to $1$, and all other eigenvalues have absolute value less than $1$, as we verify numerically. Consequently, the system converges to the eigenstate corresponding to the eigenvalue $1$.
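This convergence is easy to check numerically. The Python sketch below is our own illustration: we adopt the column-stochastic convention $T_{ij}=P(j \to i)$, equivalent to Eq.~\eqref{App.Eq:4} up to transposition, and the values $n=3$, $\varepsilon_0=0.11$, $\alpha=0.05$, $\beta=0.02$ are arbitrary test parameters. It builds $T_{\text{TSAC}}$ and $T_{\text{Noise}}$ and verifies that their product has a unique eigenvalue $1$, whose eigenvector gives the steady-state diagonal.
\begin{verbatim}
import numpy as np
from itertools import product

def t_tsac(n, eps):                 # transfer matrix of TSAC
    d = 2 ** n
    T = np.zeros((d, d))
    T[0, 0], T[d - 1, d - 1] = np.exp(eps), np.exp(-eps)
    for i in range(d - 1):
        T[i, i + 1] = np.exp(eps)   # superdiagonal
        T[i + 1, i] = np.exp(-eps)  # subdiagonal
    return T / (np.exp(eps) + np.exp(-eps))

def t_noise(n, alpha, beta):        # noise matrix, column convention
    d = 2 ** n
    T = np.zeros((d, d))
    for i, j in product(range(d), repeat=2):
        p = 1.0
        for q in range(n):          # independent single-qubit flips
            bi, bj = (i >> q) & 1, (j >> q) & 1
            p *= [[1 - alpha, beta], [alpha, 1 - beta]][bi][bj]
        T[i, j] = p                 # T[i, j] = P(j -> i)
    return T

T = t_tsac(3, 0.11) @ t_noise(3, 0.05, 0.02)
w, V = np.linalg.eig(T)
k = np.argmax(w.real)
assert np.isclose(w[k].real, 1.0)
assert np.sum(np.isclose(np.abs(w), 1.0)) == 1    # eigenvalue 1 is unique
steady = np.abs(V[:, k]) / np.abs(V[:, k]).sum()  # steady-state diagonal
\end{verbatim}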
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\textbf{probability} & \textbf{GAD} & \textbf{bit-flip} & \textbf{Depolarizing} \\ \hline
$\alpha$ & $\gamma(1\!-\!p)$ & $1\!-\!p$ & $\frac{p}{2}$ \\ \hline
$\beta$ & $\gamma p$ & $1\!-\!p$ & $\frac{p}{2}$ \\ \hline
\end{tabular}
\caption{The $\alpha$ and $\beta$ for quantum noise (Fig.~\ref{fig:2}). The noise flips the basis of a single-qubit Hilbert space from $\ket{0}$ to $\ket{1}$ with probability $\alpha$ and flips the state from $\ket{1}$ to $\ket{0}$ with probability $\beta$.}
\label{table:1}
\end{table}
\section{Purity enhancement volume}\label{Enhancement_volume}
To define a measure for enhancement, we calculate the volume of the green region by numerical integration, for fixed $\varepsilon_0$ and $n$. Specifically, we discretize the parameter space $(p,\gamma)$ into an $N_p \times N_\gamma$ grid of cells of size $\Delta p \times \Delta \gamma$. We set $\Delta p =\Delta \gamma = 0.01$ and $N_p = \frac{1}{\Delta p}, \;N_{\gamma} = \frac{1}{\Delta \gamma}$. Thus, the volume is calculated as follows
\begin{equation}
V = \sum_{i,j}^{N_p,N_{\gamma}} \Delta p \, \Delta \gamma \max(0 , E_{i,j}),
\label{App.Eq:5}
\end{equation}
where $E_{i,j}$ is the purity enhancement at $p = p_i$, the $i$th value of $p$, and $\gamma = \gamma_j$, the $j$th value of $\gamma$. Eq.~\ref{App.Eq:5} only takes into account the region where the purity enhancement is positive, i.e., the green zone; the rest of the space is assigned zero and does not contribute to the sum.
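A direct Python transcription of Eq.~\ref{App.Eq:5} is sketched below; the function \texttt{enhancement(p, gamma)} returning $E_{i,j}$ is a hypothetical placeholder that we leave unspecified.
\begin{verbatim}
import numpy as np

def enhancement_volume(enhancement, dp=0.01, dg=0.01):
    ps = np.arange(dp / 2, 1.0, dp)   # sample points of the N_p segments
    gs = np.arange(dg / 2, 1.0, dg)
    E = np.array([[enhancement(p, g) for g in gs] for p in ps])
    return dp * dg * np.maximum(E, 0.0).sum()  # only the green region counts
\end{verbatim}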
\section{HBAC with Depolarizing and bit-flip noise channels}
\label{HBAC_Depolarizing_Bitflip}
To compare the effect of the other noise channels mentioned in appendix \ref{Noise_channels_Kraus} with that of the GAD channel, we apply each of them in the cooling process for both TSAC and PPA. Following the same procedure as for the GAD noise, for TSAC we find the eigenvector corresponding to the eigenvalue $1$ of $T_{\text{TSAC}}T_{\text{Dep}}$ and of $T_{\text{TSAC}}T_{\text{BF}}$. For PPA, we simulate the process until the system reaches its steady state. The results are shown in Fig.~\ref{fig:6} for $n=3$ and $\varepsilon_0=0.11$: no enhancement occurs for any of the tested parameter sets. Therefore, these channels are not considered further in this work.
\begin{figure}
\includegraphics[scale=0.48]{Figure6.pdf}
\caption{The polarization of the target qubit versus the noise channel parameter ($p$) for TSAC and PPA with noise. The system consists of 3 computational qubits and $\varepsilon_0 = 0.11$. As shown in the figure, these channels lead to a polarization below the HBAC limit for all values of $p$.}
\label{fig:6}
\end{figure}
\section{Non-diagonal density matrices}\label{Non_diagonal_density_matrices}
The diagonal elements of the density matrix determine the polarization; therefore, considering only the changes of the diagonal elements is sufficient to calculate it. It was shown in \cite{raeisi2019novel} that the TSAC technique takes non-diagonal density matrices to the asymptotic state of HBAC. In the presence of the mentioned noise channels, we show that the off-diagonal elements of the density matrix do not affect the diagonal elements, and hence do not affect the polarization of the first qubit. The action of a noise channel $C$ with Kraus operators $\{E_i\}$ is
\begin{equation}
\rho^\prime = C(\rho) = \sum_{i = 1}^{M = m^n} E_i \rho E_i^\dagger
\label{App.Eq:6}
\end{equation}
The $j$th diagonal element of the density matrix after the noise is \begin{equation}
\rho^\prime_{j j} = \sum_{i = 1}^{M} \bra{j} E_i \rho E_i^\dagger \ket{j} = \sum_{k,l = 1}^{d}
\rho_{kl} \quad \big( \sum_{i = 1}^{M} \bra{j} E_i \ket{l} \bra{k} E_i^\dagger \ket{j} \big ),
\label{App.Eq:7}
\end{equation}
where $ d = 2^{n+1} $.
Since the GAD noise Kraus operators are incoherent operators (IO) \cite{PhysRevLett.117.030401,RevModPhys.89.041003}, each $E_i$ has at most one nonzero entry in each row and each column, so that
\begin{equation}
(E_i^\dagger)_{k,j}\, (E_i)_{j,l} = |(E_i)_{j,l}|^2 \, \delta_{k,l}.
\label{App.Eq:8}
\end{equation}
Eq.~\ref{App.Eq:7} then simplifies to
\begin{equation}
\rho^\prime_{j j} = \sum_{l = 1}^{d}
\rho_{ll} \, \big( \sum_{i = 1}^{M} |(E_i)_{j,l}|^2 \big ).
\label{App.Eq:9}
\end{equation}
This shows that the diagonal elements of the density matrix after the noise, $\{ \rho^\prime_{ii} \}_{i = 1}^{d}$, depend only on the diagonal elements of the initial density matrix, $\{ \rho_{ii} \}_{i = 1}^{d}$.
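This property is easy to verify numerically for a single qubit. The sketch below uses the standard single-qubit GAD Kraus operators (in the common parametrization, for which $\Pr(0\!\rightarrow\!1)=\gamma(1-p)$ and $\Pr(1\!\rightarrow\!0)=\gamma p$, consistent with Table~\ref{table:1}; the specific parameter values are illustrative) and checks that zeroing the off-diagonal elements of the input does not change the output diagonal:
\begin{verbatim}
import numpy as np

def gad_kraus(p, gamma):
    # Standard single-qubit GAD Kraus operators; each has at most one
    # nonzero entry per row and column (incoherent operators).
    return [np.sqrt(p) * np.diag([1.0, np.sqrt(1 - gamma)]),
            np.sqrt(p) * np.array([[0, np.sqrt(gamma)], [0, 0]]),
            np.sqrt(1 - p) * np.diag([np.sqrt(1 - gamma), 1.0]),
            np.sqrt(1 - p) * np.array([[0, 0], [np.sqrt(gamma), 0]])]

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T
rho /= np.trace(rho)                      # random density matrix
Ks = gad_kraus(0.3, 0.4)
out = sum(E @ rho @ E.conj().T for E in Ks)
out_diag = sum(E @ np.diag(np.diag(rho)) @ E.conj().T for E in Ks)
assert np.allclose(np.diag(out), np.diag(out_diag))
\end{verbatim}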
\end{document}
\section{Introduction}
The mean-field game (MFG) theory, introduced by the authors of \cite{caines2013mean} and \cite{lasry2007mean} almost concurrently, provides a powerful framework to study stochastic dynamic games where
(i) the number of players involved in the game is large, (ii) each individual player's impact on the network is infinitesimal, and (iii) players' identities are indistinguishable.
The central idea of the MFG theory is to approximate, in an appropriate sense, the original large-population game problem by a single-player optimal control problem, in which individual player's best response to the mean field (average behavior of the population) is analyzed.
Typically, the solution to the latter problem is characterized by a pair of backward Hamilton-Jacobi-Bellman (HJB) and forward Fokker-Planck-Kolmogorov (FPK) equations; the HJB equation guarantees player-by-player optimality, while the FPK equation guarantees time consistency of the solution.
The coupled HJB-FPK systems, as well as alternative mathematical characterizations (e.g., McKean-Vlasov systems), have been studied extensively \cite{lasry2007mean,achdou2010mean,carmona2013probabilistic}.
There has been a recent growth in the literature on MFGs and its applications.
MFGs under Linear Quadratic (LQ) \cite{Huang2010,HLW2016,MT2017} and more general settings \cite{Huang2012, CLM2015} are both extensively explored.
MFGs with a major agent and a large number of minor agents are studied \cite{Huang2012} and applied to design decentralized security defense decisions in a mobile ad hoc network \cite{WYTH2014}. MFGs with multiple classes of players are investigated in \cite{TH2011}. The authors of \cite{BTB2016} studied the existence of robust (minimax) equilibrium in a class of stochastic dynamic games.
In \cite{ZTB2011}, the authors analyzed the equilibrium of a hybrid stochastic game in which the dynamics of agents are affected by continuous disturbance as well as random switching signals.
Risk-sensitive MFGs were considered in \cite{tembine2014risk}.
While continuous-time continuous-state models are commonly used in the references above,
\cite{jovanovic1988anonymous, weintraub2006oblivious,gomes2010discrete,salhab2018mean,saldi2018markov} have considered the MFG in discrete-time and/or discrete-state regime.
The issues of time inconsistency in MFG and mean-field type optimal control problems are discussed in \cite{bensoussan2013linear,djehiche2016characterization,cisse2014cooperative}.
While substantial progress has been made on the MFG literature in recent years, there has been a long history of mean-field-like approaches to large-population games in the transportation research literature \cite{sheffi1984urban}.
A well-known consequence of a mean-field-like analysis of the traffic user equilibrium is the Wardrop's first principle \cite{wardrop1952some,correa2011wardrop}, which provides the following characterization of the traffic condition at an equilibrium: \emph{journey times on all the routes actually
used are equal, and less than those which
would be experienced by a single vehicle on
any unused route}. This result, as well as a generalized concept known as \emph{stochastic user equilibrium} (SUE) \cite{daganzo1977stochastic}, has played a major role in transportation research, including the convergence analysis of users' day-to-day routing policy adjustment processes
\cite{fisk1980some,dial1971probabilistic,powell1982convergence,sheffi1982algorithm,liu2009method}.
However, currently only a limited number of results are available connecting the transportation research and recent progress in the MFG theory. The work \cite{salhab2018mean} considers discrete-time discrete-state mean-field route choice games.
In \cite{CLM2015}, the authors modeled the interaction between drivers on a straight road as a non-cooperative game and characterized its MFE.
In \cite{BZP2017}, the authors considered a continuous-time Markov chain to model the aggregated behavior of drivers on a traffic network. {\color{black} A Markovian framework for traffic assignment problems is introduced in \cite{baillon2008markovian}, which is similar to the problem formulation adopted in this paper. A connection between large-population Markov Decision Processes (MDPs) and MFGs has been discussed in a recent work \cite{yu2019primal}.} MFG has been applied to pedestrian crowd dynamics modeling in \cite{lachapelle2011mean,dogbe2010modeling}.
In this paper, we apply the MFG theory to study the strategic behavior of infinitesimal drivers traveling over an urban traffic network. Specifically, we consider a discrete-time dynamic stochastic game wherein, at each intersection, each driver randomly selects one of the outgoing links as her next destination according to a randomized policy. We assume that individual drivers' dynamics are decoupled from each other, while their cost functions are coupled.
In particular, we assume that the cost function for each driver is congestion-dependent, and is affine in the logarithm of the number of drivers taking the same route.
We regard the congestion-dependent term in the cost function as an incentive mechanism (toll charge) imposed by the Traffic System Operator (TSO).
Although the assumed structure of cost functionals is restrictive, the purpose of this paper is to show that the considered class of MFGs exhibits a \emph{linearly solvable} nature, and requires somewhat different treatments from the standard MFG formalism.
We emphasize that the computational advantages that follow from this special property are notable both from the existing MFG and the transportation research perspectives.
Contributions of this paper are summarized as follows:
\begin{enumerate}[leftmargin=*]
\item Linear solvability: We prove that the MFE of the game described above is given by the solution to a linearly solvable MDP \cite{todorov2007linearly}, meaning that it can be computed by performing a sequence of matrix multiplications backward in time \emph{only once}, without any need of forward-in-time computations.
This offers a tremendous computational advantage over the conventional characterization of the MFE where there is a need to solve a forward-backward HJB-FPK system, which is often a non-trivial task \cite{achdou2010mean}.
\item Strong time-consistency: Due to the backward-only characterization, the MFE in our setting is shown to be \emph{strongly time-consistent} \cite{basar1999dynamic}, a stronger property than what follows from the standard forward-backward characterization of MFEs.
\item MFE and fictitious play: With the aid of numerical simulation, we show that the derived MFE can be interpreted as a limit point of the belief path of the \emph{fictitious play} process \cite{monderer1996fictitious} in a scenario where the traffic routing game is repeated.
\end{enumerate}
The rest of the paper is organized as follows: The traffic routing game is set up in Section~\ref{secprob} and its mean field approximation is discussed in Section~\ref{secmfapprox}. The linearly solvable MDPs are reviewed in Section~\ref{secoptcontrol}, which is used to derive the MFE of the traffic routing game in Section~\ref{secmfe}.
Time consistency of the derived MFE is studied in Section~\ref{sectc}. A connection between MFE and fictitious play is investigated in Section~\ref{secfictitious}. Numerical studies are summarized in Section~\ref{secsimulation} before we conclude in Section~\ref{secconclude}.
\section{Problem Formulation}
\label{secprob}
The traffic game studied in this paper is formulated as an $N$-player, $T$-stage dynamic game.
Denote by $\mathcal{N}=\{1, 2, \cdots , N\}$ the set of players (drivers) and by $\mathcal{T}=\{0, 1, \cdots , T-1\}$ the set of time steps at which players make decisions.
\subsection{Traffic graph}
The \emph{traffic graph} is a directed graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$, where $\mathcal{V}=\{1, 2, ... , V\}$ is the set of nodes (intersections) and $\mathcal{E}=\{1, 2, ... , E \}$ is the set of directed edges (links). For each $i\in \mathcal{V}$, denote by $\mathcal{V}(i) \subseteq \mathcal{V}$ the set of intersections to which there is a directed link from the intersection $i$.
At any given time step $t\in \mathcal{T}$, each player is located at an intersection.
The node at which the $n$-th player is located at time step $t$ is denoted by $i_{n,t}\in \mathcal{V}$.
At every time step, player $n$ at location $i_{n,t}$ selects her next destination $j_{n,t}\in \mathcal{V}(i_{n,t})$.
By selecting $j_{n,t}$ at time $t$, the player $n$ moves to the node $j_{n,t}$ at time $t+1$ deterministically (i.e., $i_{n,t+1}=j_{n,t}$).
\subsection{Routing policy}
At every time step $t$, each player selects her next destination according to a randomized routing policy.
Let $\Delta^J$ be the $J$-dimensional probability simplex, and
$Q_{n,t}^i=\{Q_{n,t}^{ij}\}_{j\in \mathcal{V}(i)}\in \Delta^{|\mathcal{V}(i)|-1}$ be the probability distribution according to which player $n$ at intersection $i$ selects the next destination $j \in \mathcal{V}(i)$.
We consider the collection $Q_{n,t}=\{Q_{n,t}^i\}_{i\in \mathcal{V}}$ of such probability distributions as the \emph{policy} of player $n$ at time $t$.
For each $n\in \mathcal{N}$ and $t\in \mathcal{T}$, notice that $Q_{n,t}\in \mathcal{Q}$, where
\[
\mathcal{Q}=\left\{\{Q^i\}_{i\in\mathcal{V}}: Q^i \in \Delta^{|\mathcal{V}(i)|-1} \;\; \forall i\in\mathcal{V} \right\}
\]
is the space of admissible policies.
Suppose that the initial locations of players $\{i_{n,0}\}_{n\in\mathcal{N}}$ are independent and identically distributed random variables with
$P_{n,0}=P_{0}\in\Delta^{|\mathcal{V}|-1}$.
Note that if the policy $\{Q_{n,t}\}_{t\in\mathcal{T}}$ of player $n$ is fixed, then the probability distribution $P_{n,t}=\{P_{n,t}^i\}_{i\in \mathcal{V}}$ of her location at time $t$ is computed recursively by
\begin{equation}
\label{eqdyn}
P_{n,t+1}^j=\sum_i P_{n,t}^i Q_{n,t}^{ij} \;\; \forall t\in\mathcal{T}, j \in\mathcal{V}.
\end{equation}
If $(i_{n,t}, j_{n,t})$ is the location-action pair of player $n$ at time $t$, it has the joint distribution $P_{n,t}^i Q_{n,t}^{ij}$.
We assume that location-action pairs $(i_{n,t}, j_{n,t})$ and $(i_{m,t}, j_{m,t})$ for two different players $m\neq n$ are drawn independently under individual policies $\{Q_{n,t}\}_{t\in\mathcal{T}}$ and $\{Q_{m,t}\}_{t\in\mathcal{T}}$.
With a slight abuse of notation, we sometimes write $Q_n:=\{Q_{n,t}\}_{t\in\mathcal{T}}$ for simplicity.
\subsection{Cost functional}
We assume that, at each time step, the cost functional for each player has two components as specified below:
\subsubsection{Travel cost}
For each $i\in\mathcal{V}$, $j\in\mathcal{V}(i)$ and $t\in\mathcal{T}$, let $C_t^{ij}$ be a given constant representing the cost (e.g., fuel cost) for every player selecting $j$ at location $i$ at time $t$.
\subsubsection{Tax cost}
We assume that players are also subject to individual and time-varying tax penalties calculated by the TSO.
{\color{black} The tax charged to player $n$ at time step $t$ depends not only on her own location-action pair at $t$, but also on the behavior of the entire population at that time step.}
Specifically, we consider the log-population tax mechanism, where the tax charged to player $n$ taking action $j$ at location $i$ at time $t$ is
\begin{equation}
\label{eqtax}
\pi_{N,n,t}^{ij}= \alpha \left( \log \frac{K_{N,t}^{ij}}{K_{N,t}^i }-\log R_t^{ij} \right)
\end{equation}
Here, $\alpha>0$ is a fixed constant characterizing the ``aggressiveness'' of the tax mechanism.
In \eqref{eqtax}, $K_{N,t}^i$ is the number of players (including player $n$) who are located at the intersection $i$ at time $t$. Likewise, $K_{N,t}^{ij}$ is the number of players (including player $n$) who take the action $j$ at the intersection $i$ at time $t$.
The parameters $R_t^{ij}> 0$ are fixed constants satisfying
$\sum_j R_t^{ij}=1$ for all $i$. We interpret $R_t^{ij}$ as the ``reference'' routing policy specified by the TSO in advance. Notice that \eqref{eqtax} indicates that player $n$ receives a positive reward by taking action $j$ at location $i$ at time $t$ if $K_{N,t}^{ij}/K_{N,t}^{i} < R_t^{ij}$ {\color{black} (i.e., the realization of the traffic flow is below the designated congestion level)}, while she is penalized by doing so if $K_{N,t}^{ij}/K_{N,t}^{i} > R_t^{ij}$. Since $K_{N,t}^{i}$ and $K_{N,t}^{ij}$ are random variables, $\pi_{N,n,t}^{ij}$ is also a random variable.
We assume that the TSO is able to observe $K_{N,t}^{i}$ and $K_{N,t}^{ij}$ at every time step, so that $\pi_{N,n,t}^{ij}$ is computable.\footnote{Whenever $\pi_{N,n,t}^{ij}$ is computed, we have both $K_{N,t}^{ij}\geq 1$ and $K_{N,t}^{i}\geq 1$, since at least player $n$ herself is counted. Hence \eqref{eqtax} is well-defined.}
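For instance, with $R_t^{ij}=0.5$, $K_{N,t}^{i}=100$, and $K_{N,t}^{ij}=70$, the charge is $\pi_{N,n,t}^{ij}=\alpha(\log 0.7-\log 0.5)=\alpha\log 1.4>0$ (a penalty on the over-used link), whereas $K_{N,t}^{ij}=30$ yields $\alpha\log 0.6<0$ (a reward).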
In what follows, we assume that each player is risk neutral.
{\color{black} That is, each player is interested in choosing a policy that minimizes the expected sum of travel and tax costs incurred over the planning horizon $\mathcal{T}$.
For player $n$ whose location-action pair at time step $t$ is $(i,j)$, the expected tax cost incurred at that time step can be expressed as
\begin{equation}
\label{eqexp_pi_general}
\Pi_{N,n,t}^{ij}\triangleq \mathbb{E}\left[\pi_{N,n,t}^{ij} \mid i_{n,t}=i, j_{n,t}=j \right].
\end{equation}
\ifdefined\LONGVERSION
As we detail in equation \eqref{eqtijgeneral} in Appendix~\ref{appbinomial}, for each location-action pair $(i,j)$, $\Pi_{N,n,t}^{ij}$ can be expressed in terms of $Q_{-n}\triangleq\{Q_m\}_{m\neq n}$.
\else
As we detail in \cite[Appendix A, equation (22)]{arXivversion}, for each location-action pair $(i,j)$, $\Pi_{N,n,t}^{ij}$ can be expressed in terms of $Q_{-n}\triangleq\{Q_m\}_{m\neq n}$.
\fi
The fact that $\Pi_{N,n,t}^{ij}$ does not depend on player $n$'s own policy will be used to analyze the optimal control problem \eqref{eqprob_n} below.\footnote{Although the value of $\Pi_{N,n,t}^{ij}$ for each $(i,j)$ cannot be altered by player $n$'s policy, she can minimize the total cost by an appropriate route choice (e.g., by avoiding links with high toll fees).}}
\subsection{Traffic routing game}
\label{sectrafficgame}
Overall, the cost functional to be minimized by the $n$-th player in the considered game is given by
\begin{equation}
J\left(Q_n, Q_{-n}\right)=\sum_{t=0}^{T-1} \sum_{i,j} P_{n,t}^i Q_{n,t}^{ij}\left(C_t^{ij}+\Pi_{N,n,t}^{ij}\right). \label{eqobjn}
\end{equation}
Notice that this quantity depends not only on the $n$-th player's own policy $Q_n$ but also on the other players' policies $Q_{-n}$ through the term $\Pi_{N,n,t}^{ij}$.
Equation \eqref{eqobjn} defines an $N$-player dynamic game, which we call the \emph{traffic routing game} hereafter.
We introduce the following equilibrium concepts.
\begin{definition}
The $N$-tuple of strategies $\{Q_{n}^*\}_{n\in\mathcal{N}}$ is said to be a \emph{Nash equilibrium} if the inequality $J\left(Q_n, Q_{-n}^*\right) \geq J\left(Q_n^*, Q_{-n}^*\right)$ holds for each $n\in \mathcal{N}$ and $Q_n$.
\end{definition}
\begin{definition}
The $N$-tuple of strategies $\{Q_{n}^*\}_{n\in\mathcal{N}}$ is said to be \emph{symmetric} if $Q_1^*=Q_2^*=\cdots=Q_N^*$.
\end{definition}
\begin{remark}
The $N$-player game described above is a \emph{symmetric game} in the sense of \cite{cheng2004notes}. Thus, \cite[Theorem~3]{cheng2004notes} is applicable to show that it has a symmetric Nash equilibrium.
\end{remark}
\begin{remark}
We assume that players are able to compute a Nash equilibrium strategy $\{Q_{n}^*\}_{n\in\mathcal{N}}$ prior to the execution of the game based on the public knowledge $\mathcal{G}, \alpha, \mathcal{N}, \mathcal{T}, R_t^{ij}, C_t^{ij}$ and $P_{0}$. It is often desirable that a Nash equilibrium be \emph{time-consistent}, in the sense that no player is given an incentive to deviate from the precomputed equilibrium routing policy after observing
real-time data (such as $K_{N,t}^{i}$ and $K_{N,t}^{ij}$).
In Section~\ref{sectc}, we discuss a notable time consistency property of an equilibrium of the traffic routing game formulated above in the large-population limit $N\rightarrow \infty$.
\end{remark}
\section{Mean Field Approximation}
\label{secmfapprox}
In the remainder of this paper, we are concerned with the large-population limit $N\rightarrow \infty$ of the traffic routing game.
\begin{definition}
A set of strategies $\{Q_{n}^*\}_{n\in\mathcal{N}}$ is said to be an \emph{MFE} if the following conditions are satisfied.
\begin{itemize}
\item[(a)] It is symmetric, i.e., $Q_1^*=Q_2^*=\cdots=Q_N^*$.
\item[(b)] There exists a sequence $\epsilon_N$ satisfying $\epsilon_N \searrow 0$ as $N\rightarrow \infty$ such that for each $n \in \mathcal{N}=\{1, 2, ... , N\}$ and $Q_n$, the inequality $J(Q_n,Q_{-n}^*)+\epsilon_N \geq J(Q_n^*,Q_{-n}^*)$ holds.
\end{itemize}
\end{definition}
{\color{black}
Now, we derive a condition that an MFE must satisfy by analyzing player $n$'s best response when all other players adopt a homogeneous routing policy $Q^*=\{Q_t^*\}_{t\in\mathcal{T}}$. Since $Q^*$ is adopted by all players other than $n$, the probability that a specific player $m(\neq n)$ is located at $i$ is given by $P_t^{i*}$, where $P^*=\{P_t^*\}_{t\in\mathcal{T}}$ is computed recursively by
\[
P_{t+1}^{j*}=\sum_i P_t^{i*} Q_t^{ij*} \;\; \forall j\in\mathcal{V}.
\]
Player $n$'s best response is characterized by the solution to the following optimal control problem:
\begin{equation}
\label{eqprob_n}
\min_{\{Q_{t}\}_{t\in\mathcal{T}}} \sum_{t=0}^{T-1} \sum_{i,j} P_{t}^i Q_{t}^{ij}\left(C_t^{ij}+\Pi_{N,n,t}^{ij}\right).
\end{equation}
Here, we note that $\Pi_{N,n,t}^{ij}$ is fully determined by the homogeneous policy $Q^*$ adopted by all other players.
\ifdefined\LONGVERSION
(The detail is shown in equation \eqref{eqtjieq} in Appendix~\ref{appbinomial}.)
\else
(The detail is shown in \cite[Appendix A, equation (23)]{arXivversion}.)
\fi
In \eqref{eqprob_n}, we wrote $P_{t}$ and $Q_{t}$ in place of $P_{n,t}$ and $Q_{n,t}$ to simplify the notation.
To analyze player $n$'s best response when $N\rightarrow \infty$, we compute the quantity $\lim_{N\rightarrow \infty} \Pi_{N,n,t}^{ij}$ as follows:}
\begin{lemma}
\label{lemlimpi}
Let $\Pi_{N,n,t}^{ij}$ be defined by \eqref{eqexp_pi_general}. If $Q_{m,t}=Q_t^*$ for all $m\neq n$ and $P_t^{i*}Q_t^{ij*}>0$, then
\[
\lim_{N\rightarrow \infty} \Pi_{N,n,t}^{ij}=\alpha \log \frac{Q_t^{ij*}}{R_t^{ij}}.
\]
\end{lemma}
\begin{proof}
\ifdefined\LONGVERSION
Appendix~\ref{app1}.
\else
\cite[Appendix B]{arXivversion}.
\fi
\end{proof}
Intuitively, Lemma~\ref{lemlimpi} shows that the optimal control problem \eqref{eqprob_n} when $N$ is large is ``close to'' the optimal control problem:
\begin{equation}
\min_{\{Q_{t}\}_{t\in\mathcal{T}}} \sum_{t=0}^{T-1} \sum_{i,j} \!P_{t}^i Q_{t}^{ij}\!\left(\!C_t^{ij}\!+\!\alpha\log\frac{Q_t^{ij*}}{R_t^{ij}} \right). \label{eqprob_n_inf}
\end{equation}
In order for the policy $Q^*$ to constitute an MFE, the policy $Q^*$ itself needs to be a best response for player $n$. In particular, $Q^*$ must solve the optimal control problem \eqref{eqprob_n_inf}. That is, the following {\color{black} fixed point condition} must be satisfied:
\begin{equation}
Q^*\in \argmin\limits_{\{Q_{t}\}_{t\in\mathcal{T}}} \sum_{t=0}^{T-1} \sum_{i,j} \!P_{t}^i Q_{t}^{ij}\!\left(\!C_t^{ij}\!+\!\alpha\log\frac{Q_t^{ij*}}{R_t^{ij}} \right). \label{eqconsistency}
\end{equation}
In the next two sections, we show that the condition \eqref{eqconsistency} is closely related to the class of optimal control problems known as \emph{linearly-solvable MDPs} \cite{todorov2007linearly,dvijotham2011unified}. Based on this observation, we show that an MFE can be computed efficiently.
\section{Linearly Solvable MDPs}
\label{secoptcontrol}
In this section, we review linearly-solvable MDPs \cite{todorov2007linearly,dvijotham2011unified} and their solution algorithms. For each $t\in\mathcal{T}$, let $P_t$ be the probability distribution over $\mathcal{V}$ that evolves according to
\begin{equation}
\label{eqpq2}
P_{t+1}^j=\sum_i P_t^i Q_t^{ij} \;\; \forall j\in\mathcal{V}
\end{equation}
with the initial state $P_0$.
We assume $C_t^{ij}$, $R_t^{ij}$ for each $t\in\mathcal{T}, i\in\mathcal{V}, j\in\mathcal{V}$ and $\alpha$ are given positive constants.
Consider the $T$-step optimal control problem:
\begin{equation}
\min_{\{Q_t\}_{t\in\mathcal{T}}}\sum_{t=0}^{T-1}\sum_{i,j}P_t^i Q_t^{ij}\left(C_t^{ij}+\alpha \log\frac{ Q_t^{ij}}{R_t^{ij}} \right).
\label{eqoptcontrol2}
\end{equation}
The logarithmic term in \eqref{eqoptcontrol2} can be written as the Kullback--Leibler (KL) divergence from the reference policy $R_t^{ij}$ to the selected policy $Q_t^{ij}$. For this reason \eqref{eqoptcontrol2} is also known as the \emph{KL control} problem \cite{theodorou2010generalized}. Notice the similarity and difference between the optimal control problems \eqref{eqprob_n_inf} and \eqref{eqoptcontrol2}; in \eqref{eqprob_n_inf} the logarithmic term is a fixed constant ($Q^*$ is given), while in \eqref{eqoptcontrol2} the logarithmic term depends on the chosen policy $Q$.
To solve \eqref{eqoptcontrol2} by backward dynamic programming, for each $t\in\mathcal{T}$, introduce the value function:
\[
V_{t}(P_{t}) \triangleq
\min_{\{Q_{\tau}\}_{\tau=t}^{T-1}} \sum_{\tau=t}^{T-1} \sum_{i,j} P_{\tau}^i Q_{\tau}^{ij}\!\left(C_\tau^{ij}+\alpha\log\frac{Q_\tau^{ij}}{R_\tau^{ij}} \right)
\]
and the associated Bellman equation
\begin{equation}
{\color{black}
V_{t}(P_{t}) =\min_{Q_t} \Big\{ \sum_{i,j} P_t^i Q_t^{ij}\!\left(C_t^{ij}+\alpha\log\frac{Q_t^{ij}}{R_t^{ij}} \right)+V_{t+1}(P_{t+1}) \Big\}} \label{eqbellmankl}
\end{equation}
with the terminal condition $V_T(\cdot)=0$.
The next theorem states that the Bellman equation \eqref{eqbellmankl} can be linearized by a change of variables (the Cole-Hopf transformation), and thus the
optimal control problem \eqref{eqoptcontrol2} is reduced to solving a linear system \cite{todorov2007linearly}.
\begin{theorem}
\label{theo1}
Let $\{\phi_t\}_{t\in\mathcal{T}}$ be the sequence of $V$-dimensional vectors defined by the backward recursion
\begin{equation}
\label{eqphi}
\phi_t^i=\sum_j R_t^{ij} \exp \left(-\frac{C_t^{ij}}{\alpha}\right)\phi_{t+1}^j \;\; \forall i\in\mathcal{V}
\end{equation}
with the terminal condition $\phi_T^i=1 \; \forall i$.
Then, for each $t=0, 1, \cdots, T$ and $P_t$, the value function can be written as
\begin{equation}
V_t(P_t)=-\alpha\sum_i P_t^i \log \phi_t^i. \label{eqvt}
\end{equation}
Moreover, the optimal policy for \eqref{eqoptcontrol2} is given by
\begin{equation}
Q_t^{ij*}=\frac{\phi_{t+1}^j}{\phi_t^i}R_t^{ij}\exp\left(-\frac{C_t^{ij}}{\alpha}\right). \label{eqoptq}
\end{equation}
\end{theorem}
\begin{proof}
\ifdefined\LONGVERSION
Appendix~\ref{apptheo1}.
\else
\cite[Appendix C]{arXivversion}.
\fi
\end{proof}
We stress that \eqref{eqphi} is linear in $\phi$ and can be computed by matrix multiplications backward in time.
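As an illustration, the recursion \eqref{eqphi}, the policy \eqref{eqoptq}, and the forward propagation \eqref{eqdyn} fit in a few lines of Python/NumPy. The following is a minimal sketch (array layout and function names are our own choices, not part of the formal development); absent links can be encoded by large costs $C_t^{ij}$, as in Section~\ref{secsimulation}, and every node is assumed to have at least one outgoing option with $R_t^{ij}>0$:
\begin{verbatim}
import numpy as np

def solve_kl_control(C, R, alpha):
    # C, R: arrays of shape (T, V, V); each row R[t][i, :] sums to 1.
    T, V, _ = C.shape
    phi = np.ones((T + 1, V))
    Q = np.zeros((T, V, V))
    for t in reversed(range(T)):
        G = R[t] * np.exp(-C[t] / alpha)         # elementwise
        phi[t] = G @ phi[t + 1]                  # linear update (eqphi)
        Q[t] = G * phi[t + 1] / phi[t][:, None]  # policy (eqoptq)
    return Q, phi

def propagate(P0, Q):
    # Forward propagation (eqdyn) of the population distribution.
    P = [np.asarray(P0)]
    for Qt in Q:
        P.append(P[-1] @ Qt)
    return np.array(P)
\end{verbatim}
Note that only the backward pass is needed to obtain the MFE itself; the forward pass merely reconstructs the induced population flow.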
\section{Mean Field Equilibrium}
\label{secmfe}
{\color{black}
In this section, we investigate the relationship between the optimal control problem \eqref{eqoptcontrol2} and the fixed point condition \eqref{eqconsistency} for an MFE in the traffic routing game.
To this end, we introduce the value function for the optimal control problem \eqref{eqprob_n_inf}, defined by
\[
\tilde{V}_{t}(P_{t}) \triangleq \min_{\{Q_{\tau}\}_{\tau=t}^{T-1}} \!\sum_{\tau=t}^{T-1} \sum_{i,j} P_{\tau}^i Q_{\tau}^{ij}\!\left(C_\tau^{ij}\!+\!\alpha\log\frac{Q_\tau^{ij*}}{R_\tau^{ij}} \right)
\]
The value function satisfies the Bellman equation:
\begin{equation}
\tilde{V}_{t}(P_{t})= \min_{Q_{t}} \Big\{ \sum_{i,j}P_{t}^i Q_{t}^{ij}\left(C_t^{ij}\!+\alpha\log\frac{Q_t^{ij*}}{R_t^{ij}}\right)+\tilde{V}_{t+1}(P_{t+1}) \Big\} \label{eqbellman3}
\end{equation}
with the terminal condition $\tilde{V}_T(\cdot)=0$.
We emphasize the distinction between $\tilde{V}_{t}(\cdot)$ and $V_{t}(\cdot)$.
As in the previous section, $V_{t}(\cdot)$ is the value function associated with the KL control problem \eqref{eqoptcontrol2}, whereas $\tilde{V}_{t}(\cdot)$ is the value function associated with the optimal control problem \eqref{eqprob_n_inf}.
Despite this difference, the next lemma shows an intimate connection between $V_{t}(\cdot)$ and $\tilde{V}_{t}(\cdot)$.
In particular, if the parameter $Q^*$ in \eqref{eqprob_n_inf} is chosen to be the solution to the KL control problem \eqref{eqoptcontrol2}, then the objective function in \eqref{eqprob_n_inf} becomes a constant that does not depend on the decision variable $\{Q_t\}_{t\in\mathcal{T}}$ (the \emph{equalizer property}\footnote{We note that the equalizer property (a term borrowed from \cite{grunwald2004game}) of the minimizers of free energy functions is well-known in statistical mechanics, information theory, and robust Bayes estimation theory.} of the optimal KL control policy).
Moreover, under this circumstance, the value function $\tilde{V}_{t}(\cdot)$ for \eqref{eqprob_n_inf} coincides with the value function $V_{t}(\cdot)$ for the KL control problem \eqref{eqoptcontrol2}.
\begin{lemma}
\label{lembestresponse}
If $\{Q_t^*\}_{t\in\mathcal{T}}$ in \eqref{eqprob_n_inf} is fixed to be the solution to the KL control problem \eqref{eqoptcontrol2}, then an arbitrary policy $\{Q_{t}\}_{t\in\mathcal{T}}$ with $Q_{t}\in\mathcal{Q}$ is an optimal solution to \eqref{eqprob_n_inf}.
Moreover, for each $t\in\mathcal{T}$ and $P_{t}$, we have
\begin{equation}
\label{eqlembestresponse}
\tilde{V}_{t}(P_{t})=-\alpha \sum\nolimits_i P_{t}^i \log \phi_t^{i}
\end{equation}
where $\{\phi_t\}_{t\in\mathcal{T}}$ is the sequence calculated by \eqref{eqphi}.
\end{lemma}
\begin{proof}
We show \eqref{eqlembestresponse} by backward induction. If $t=T$, the claim trivially holds due to the definition $\tilde{V}_{T}(P_T)=0$ and the fact that the terminal condition for \eqref{eqphi} is given by $\phi_T^{i}=1$. Now, for $0\leq t \leq T-1$, assume that
\[
\tilde{V}_{t+1}(P_{t+1})=-\alpha \sum_j P_{t+1}^j \log \phi_{t+1}^{j}
\]
holds.
Using $\rho_t^{ij}=C_t^{ij}-\alpha \log \phi_{t+1}^{j}$, the Bellman equation \eqref{eqbellman3} can be written as
\begin{equation}
\tilde{V}_{t}(P_{t})=\min_{Q_{t}} \sum_{i,j}P_{t}^i Q_{t}^{ij}\left(\rho_t^{ij}+\alpha\log\frac{Q_t^{ij*}}{R_t^{ij}}\right). \label{eqbellman4}
\end{equation}
Substituting $Q_t^{ij*}$ obtained by \eqref{eqoptq} into \eqref{eqbellman4}, we have
\begin{subequations}
\label{eqvnt}
\begin{align}
\tilde{V}_{t}(P_{t}) &=\min_{Q_{t}}\sum\nolimits_{i,j} P_{t}^i Q_{t}^{ij}\left(-\alpha \log \phi_t^{i}\right) \label{eqvntb1} \\
&=\min_{Q_{t}} \sum\nolimits_i P_{t}^i\left(-\alpha \log \phi_t^{i}\right) \underbrace{\sum\nolimits_j Q_{t}^{ij}}_{=1} \label{eqvntb2} \\
&=-\alpha \sum\nolimits_i P_{t}^i \log \phi_t^{i}. \label{eqvntb3}
\end{align}
\end{subequations}
This completes the proof of \eqref{eqlembestresponse}.
The chain of equalities \eqref{eqvnt} also shows that the decision variable $Q_t$ vanishes in the ``min'' operator, indicating that any $Q_t\in \mathcal{Q}$ is a minimizer.
This shows the equalizer property of $\{Q_t^*\}_{t\in\mathcal{T}}$.
\end{proof}
Lemma~\ref{lembestresponse} provides the following insights into the MFE of the traffic routing game: Suppose that all the players except the player $n$ adopt the policy $Q^*$ (the optimal solution to \eqref{eqoptcontrol2}) and the number of players tends to infinity. Since $Q^*$ will equalize the costs of all alternative routing policies for player $n$, any routing policy will be a best response for her. In particular, this means that the policy $Q^*$ itself will also be one of the best responses, and thus the fixed point condition \eqref{eqconsistency} will be satisfied.
Therefore, $Q^*$ will be an MFE of the considered traffic routing game. The following theorem, which is the main result of this paper, confirms this intuition.
\begin{theorem}
\label{theomfe}
A symmetric strategy profile $Q_{n,t}^{ij}=Q_t^{ij*}$ for each $n\in\mathcal{N}, t\in \mathcal{T}$ and $i,j\in\mathcal{V}$, where $Q_t^{ij*}$ is obtained by \eqref{eqphi}--\eqref{eqoptq}, is an MFE of the traffic routing game.
\end{theorem}
\begin{proof}
\ifdefined\LONGVERSION
Appendix~\ref{apptheomfe}.
\else
\cite[Appendix D]{arXivversion}.
\fi
\end{proof}
Theorem~\ref{theomfe}, together with Theorem~\ref{theo1}, provides an efficient algorithm for computing an MFE of the traffic routing game presented in Section~\ref{secprob}.
In particular, we remark that the MFE can be computed by the backward-in-time recursion \eqref{eqphi}--\eqref{eqoptq}.
This is in stark contrast to the standard MFG formalism in which a coupled pair of forward and backward equations must be solved to obtain an MFE.
Finally, we remark that the equalizer property of the MFE $Q^*$ characterized
by Lemma~\ref{lembestresponse} is reminiscent of \emph{Wardrop's first principle}, which states that costs are equal on all the routes used at the equilibrium. Although the costs usually mean journey times in the literature around Wardrop's principles \cite{wardrop1952some,correa2011wardrop}, the cost in our setting is the sum of the travel costs and the tax costs as stated in \eqref{eqobjn}. In this sense, Lemma~\ref{lembestresponse} can be viewed as an extension of the standard description of Wardrop's first principle.
}
\section{Weak and strong time consistency}
\label{sectc}
{\color{black} This short section presents another notable property of the MFE derived in the previous section.}
Let $Q_{n,t}=Q_t$ for each $n\in\mathcal{N}$ and $0 \leq t \leq T-1$ be a symmetric strategy profile, and $P_t$ be the probability distribution over $\mathcal{V}$ induced by $Q_t$ as in \eqref{eqpq2}.
For every time step $0 \leq t \leq T-1$, a dynamic game restricted to the time horizon $\{t, t+1, ... , T-1\}$ with the initial condition $P_t$ is called the \emph{subgame} of the original game.
The following are natural extensions of the \emph{strong and weak time consistency} concepts in the dynamic game theory \cite{basar1999dynamic} to MFGs.
\begin{definition}
An MFE strategy profile $Q^*$ is said to be:
\begin{enumerate}[leftmargin=*]
\item \emph{weakly time-consistent} if for every $0 \leq t \leq T-1$, $\{Q_s^*\}_{t\leq s \leq T-1}$ constitutes an MFE of the subgame restricted to $\{t, t+1, ... , T-1\}$ when $\{Q_s\}_{0\leq s \leq t-1}=\{Q_s^*\}_{0\leq s \leq t-1}$.
\item \emph{strongly time-consistent} if for every $0 \leq t \leq T-1$, $\{Q_s^*\}_{t\leq s \leq T-1}$ constitutes an MFE of the subgame restricted to $\{t, t+1, ... , T-1\}$ regardless of the policy $\{Q_s\}_{0\leq s \leq t-1}$ implemented in the past.
\end{enumerate}
\end{definition}
In the standard MFG formalism \cite{caines2013mean,lasry2007mean} where the MFE is characterized by a forward-backward HJB-FPK system, the equilibrium policy is only weakly time-consistent in general. This is because, in the event of $P_t$ not being consistent with the distribution induced by $\{Q_s^*\}_{0\leq s \leq t-1}$, the MFE of the subgame must be recalculated by solving the HJB-FPK system over $t\leq s \leq T-1$.
In contrast, the MFE considered in this paper is characterized only by a backward equation (Theorems~\ref{theo1} and \ref{theomfe}). A notable consequence of this fact is that even if the initial condition $P_t$ is inconsistent with the planned distribution, it does not alter the fact that $\{Q_s^*\}_{t\leq s \leq T-1}$ constitutes an MFE of the subgame restricted to $t\leq s \leq T-1$. Therefore, the MFE characterized by Theorems~\ref{theo1} and \ref{theomfe} is strongly time-consistent.
\section{Mean field equilibrium and fictitious play}
\label{secfictitious}
{\color{black} The equalizer property of the MFE characterized by Lemma~\ref{lembestresponse} raises the following question regarding the stability of the equilibrium:
If the MFE equalizes the costs of all the available route selection policies, what incentivizes individual players to stay at the MFE policy $Q^*$?
In this section, we reason about the stability of MFE by relating it with the convergence of the \emph{fictitious play} process \cite{monderer1996fictitious} for an associated repetitive game.
Convergence of fictitious play processes has been studied in depth in \cite{monderer1996fictitious} and \cite{shamma2005dynamic}. We also remark that fictitious play for day-to-day policy adjustments in traffic routing has been considered in \cite{garcia2000fictitious,xiao2013average}.
Fictitious play in the context of MFGs has been studied in the recent work \cite{cardaliaguet2017learning}.
}
\begin{figure}[t]
\centering
\includegraphics[width=0.6\columnwidth]{path2.pdf}
\caption{Simple path choice problem.}
\label{fig:path}
\vspace{-5ex}
\end{figure}
Consider the situation in which the traffic routing game is repeated on a daily basis, and individual players update their routing policies based on their past experiences.
For simplicity, we only consider a single-origin-single-destination, $N$-player traffic routing game shown in Figure~\ref{fig:path}. We assume that there are $J$ parallel routes from the origin to the destination. All players are initially located at the origin node.
Each route $j$ is associated with the travel cost $C^j$ and the tax cost $\alpha\log\frac{K_N^j}{NR^j}$, where $K_N^j$ is the number of players selecting route $j$. As before, $\alpha, C^j, R^j$ are given constants. {\color{black}
By \emph{fictitious play}, we mean the following day-to-day policy adjustment mechanism for individual players:
On day one, each player $n\in\mathcal{N}$ makes an initial guess of player $m$'s mixed strategy for route selection, for each $m\neq n$. Player $n$'s belief on player $m$'s policy is denoted by $Q_{n\rightarrow m}[1]\in\Delta^{J-1}$. Assuming that $Q_{n\rightarrow m}[1], \forall m\neq n$ are fixed, player $n$ selects a route with the lowest expected cost. Player $n$'s route selection is observed and recorded by all players at the end of day one.
On day $\ell$, player $n$'s belief $Q_{n\rightarrow m}[\ell]\in\Delta^{J-1}$ for each $m\neq n$ is set to be equal to the vector of observed empirical frequencies of player $m$'s route choices up to day $\ell-1$. The process is repeated on a daily basis. We call $Q_{n\rightarrow m}[\ell]$, $\ell=1,2, \cdots$ for all pairs $m\neq n$ the \emph{belief paths}.
}
The process is summarized in Algorithm~\ref{alg1}.
\begin{algorithm}[h]
\label{alg1}
\caption{The fictitious play process for the simplified traffic routing game.}
{\bf Step 0:} On day one, each player $n$ initializes a belief
$
Q_{n\rightarrow m}[1] \in \Delta^{J-1}$ for each $m \neq n
$
according to which she believes player $m$ selects routes.
{\bf Step 1:} At the beginning of day $\ell$, each player $n$ fixes the assumed mixed strategy $Q_{n\rightarrow m}[\ell]\in \Delta^{J-1}$ according to which she believes player $m$ selects routes. Based on this assumption, she selects her best response $r_n[\ell]=\argmin_j y_n^j[\ell]$, where $y_n^j[\ell]$ is the assumed cost of selecting route $j$, i.e.,
\begin{equation}
\label{eqfppbd}
y_{n}^j[\ell]=\mathbb{E} \left(C^j + \alpha \log \frac{K_N^j}{NR^j}\right).
\end{equation}
{\bf Step 2:} At the end of day $\ell$, each player $n$ updates her belief based on the observations $r_m[\ell], m \neq n$, by
\begin{equation}
\label{eqfpupdate}
Q_{n\rightarrow m}[\ell+1]=\frac{\ell}{\ell+1}Q_{n\rightarrow m}[\ell]+\frac{1}{\ell+1}\delta(r_m[\ell])
\end{equation}
where $\delta(r)$ is the indicator vector whose $r$-th entry is
one and all other entries are zero.
Return to Step 1.
\end{algorithm}
{\color{black} In what follows, we show that Algorithm~\ref{alg1} converges to a unique symmetric Nash equilibrium of the $N$-player game shown in Figure~\ref{fig:path} if the initial belief is symmetric. A numerical simulation is presented in Section~\ref{secfpsim} to demonstrate this convergence behavior, where we also observe that the policy obtained in the limit of the belief path is closely approximated by the MFE if $N$ is sufficiently large. This observation provides the MFE with an interpretation as a steady-state value of the players' day-to-day belief adjustment processes in a large population traffic routing game.
Convergence of Algorithm~\ref{alg1} is a straightforward consequence of Monderer and Shapley \cite{monderer1996fictitious}, where it is shown that every belief path for $N$-player games with identical payoff functions converges to an equilibrium.
This result is directly applicable to the $N$-player traffic routing game shown in Figure~\ref{fig:path} since it is clearly a symmetric game. The only caveat is that there is no guarantee that the belief path converges to a symmetric equilibrium if the game has multiple Nash equilibria (including non-symmetric ones). However, this difficulty can be circumvented if we impose an additional assumption that the initial belief is symmetric, i.e., $Q_{n\rightarrow m}[1]=Q[1]$ for some $Q[1]\in\Delta^{J-1}$ for all $(m,n)$ pairs. If the initial belief is symmetric, the belief path generated by Algorithm~\ref{alg1} remains symmetric, i.e., $Q_{n\rightarrow m}[\ell]=Q[\ell]$, $y_n[\ell]=y[\ell]$ and $r_n[\ell]=r[\ell]$ for $\ell\geq 1$. In this case, equations \eqref{eqfppbd} and \eqref{eqfpupdate} simplify to
\begin{align*}
y^j[\ell]=C^j+\alpha \sum_{k=0}^{N-1} \log &\left(\frac{k+1}{NR^j}\right)
{N\!-\!1 \choose k}\\&\times (Q^j[\ell])^k (1-Q^j[\ell])^{N-1-k}
\end{align*}
and
\[
Q[\ell+1]=\frac{\ell}{\ell+1}Q[\ell]+\frac{1}{\ell+1}\delta(r[\ell])
\]
respectively.
Combined with the convergence result by Monderer and Shapley \cite{monderer1996fictitious}, it can be concluded that every limit point of the belief path generated by Algorithm~\ref{alg1} with symmetric initial belief is a symmetric equilibrium.}
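The following is a minimal Python sketch of this symmetric fictitious play (function names are ours; the closed-form benchmark in the last lines solves \eqref{eqfpprob} with $\alpha=1$, anticipating Section~\ref{secfpsim}):
\begin{verbatim}
import numpy as np
from math import comb, log

def assumed_cost(Qj, Cj, Rj, N, alpha):
    # Binomial form of y^j[l] under the symmetric belief Q^j.
    return Cj + alpha * sum(
        log((k + 1) / (N * Rj)) * comb(N - 1, k)
        * Qj ** k * (1 - Qj) ** (N - 1 - k) for k in range(N))

def fictitious_play(C, R, N, alpha=1.0, days=5000):
    J = len(C)
    Q = np.full(J, 1.0 / J)                       # symmetric initial belief
    for ell in range(1, days + 1):
        y = [assumed_cost(Q[j], C[j], R[j], N, alpha) for j in range(J)]
        r = int(np.argmin(y))                     # common best response
        Q = (ell * Q + np.eye(J)[r]) / (ell + 1)  # update (eqfpupdate)
    return Q

C, R = [2.0, 1.0, 3.0], [1 / 3, 1 / 3, 1 / 3]     # three-route example
Q_mfe = np.array(R) * np.exp(-np.array(C))        # solves (eqfpprob), alpha=1
Q_mfe /= Q_mfe.sum()
print(fictitious_play(C, R, N=200), Q_mfe)
\end{verbatim}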
The next lemma shows that there exists a unique symmetric equilibrium in the simple traffic routing game in Figure~\ref{fig:path} with finite number of players.
\begin{lemma}
\label{lemsinglestage}
There exists a unique symmetric equilibrium, denoted by $Q^{(N)*}$, in the $N$-player traffic routing game shown in Figure~\ref{fig:path}.
\end{lemma}
\begin{proof}
\ifdefined\LONGVERSION
Appendix~\ref{app2}.
\else
\cite[Appendix E]{arXivversion}.
\fi
\end{proof}
{\color{black}
Now, consider the limit $\lim_{N\rightarrow \infty} Q^{(N)*}$ and its relationship with the MFE $Q^*$.
Notice that the MFE of the traffic routing game in Figure~\ref{fig:path} is characterized as the unique solution to the following convex optimization problem:}
\begin{equation}
\label{eqfpprob}
\min_{Q\in\Delta^{J-1}} \sum_{j=1}^J Q^j \left(C^j+\alpha\log\frac{Q^j}{R^j}\right).
\end{equation}
{\color{black}
In Section~\ref{secfpsim}, we perform a simulation study where we observe that $Q^{*}$ is a good approximation of $Q^{(N)*}$ when $N$ is sufficiently large. Although the condition under which the identity $\lim_{N\rightarrow \infty} Q^{(N)*}=Q^{*}$ holds must be studied carefully in the future,\footnote{While Lemma~\ref{lemsinglestage} establishes the uniqueness of the symmetric equilibrium for a simple traffic routing game shown in Figure~\ref{fig:path}, its extension to the general class of traffic routing game formulated in Section~\ref{secprob} is currently unknown. The proof of the identity $\lim_{N\rightarrow \infty} Q^{(N)*}=Q^{*}$ (for both the simple game in Figure~\ref{fig:path} and the general setup in Section~\ref{secprob}) must be postponed as future work.} this observation suggests an important practical interpretation of the MFE: namely, it is an approximation of the limit point of the belief path (or equivalently, the empirical frequency of each player to take particular routes) of the symmetric fictitious play when $N$ is large. This provides an answer to the question regarding the stability of the MFE raised in the beginning of this section.}
\section{Numerical Illustration}
\label{secsimulation}
{\color{black}
In this section, we present numerical simulations that illustrate the main results obtained in Sections~\ref{secmfe} and \ref{secfictitious}.}
\subsection{Traffic routing game and congestion control}
\label{secsimulation1}
We first illustrate the result of Theorem~\ref{theomfe} applied to a traffic routing game shown in Fig.~\ref{fig:simulation}.
At $t=0$, the population is concentrated in the origin cell (indicated by ``O'').
For $t\in\mathcal{T}$, the travel cost for each player is
\[
C_t^{ij}=\begin{cases}
C_{\text{term}} & \text{ if } j=i\\
1+C_{\text{term}} & \text{ if } j\in\mathcal{V}(i)\\
100000+C_{\text{term}} & \text{ if } j\not\in\mathcal{V}(i) \text{ or } j \text{ is an obstacle}
\end{cases}
\]
where $\mathcal{V}(i)$ contains the north, east, south, and west neighborhood of the cell $i$. To incorporate the terminal cost, we introduce $C_{\text{term}}=0$ if $t=0, 1, \cdots, T-2$ and $C_{\text{term}}=10\sqrt{\text{dist}(j, \text{D})}$ if $t=T-1$, where $\text{dist}(j, \text{D})$ is the Manhattan distance between the player's final location $j$ and the destination cell (indicated by ``D'').
As the reference distribution, we use $R_t^{ij}=1/|\mathcal{V}(i)|$ (uniform distribution) for each $i\in\mathcal{V}$ and $t\in \mathcal{T}$ to incentivize players to spread over the traffic graph.
For various values of $\alpha >0$, the backward formula \eqref{eqphi} is solved and the optimal policy is calculated by \eqref{eqoptq}.
If $\alpha$ is small (e.g., $\alpha=0.1$), it is expected that players will take the shortest path since the action cost is dominant compared to the tax cost \eqref{eqtax}. This is confirmed by numerical simulation; three figures in the top row of Fig.~\ref{fig:simulation} show snapshots of the population distribution at time steps $t=20, 35$ and $50$. In the bottom row, similar plots are generated with a larger $\alpha$ ($\alpha=1$). In this case, it can be seen that the equilibrium strategy will choose longer paths with higher probability to reduce congestion.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{result_mod1.png}
\caption{Mean-field traffic routing game with $T=70$ over a traffic graph with $100$ nodes (grid world with obstacles). Plots show vehicle distribution $P_t$ at $t=20, 35, 50$ and for $\alpha=0.1$ and $1$.}
\label{fig:simulation}
\vspace{-5ex}
\end{figure}
\subsection{Symmetric fictitious play}
\label{secfpsim}
Next, we present a numerical demonstration of the symmetric fictitious play studied in Section~\ref{secfictitious}.
Consider a simple traffic graph in Figure~\ref{fig:path} with three alternative paths ($J=3$). We set travel costs $(C^1, C^2, C^3)=(2,1,3)$, while fixing $R^1=R^2=R^3=1/3$ and $\alpha=1$. Figure~\ref{fig:fp} shows the belief path generated by the policy update rule \eqref{eqfpupdate} with the initial policy $Q[1]=(1/3, 1/3, 1/3)$. The left plot shows the case with $20$ players ($N=20$), while the right plot shows the case with $N=200$. The MFE
\[
Q^*=\frac{1}{\sum_{j} R^j\exp(-C^j)} \begin{bmatrix} R^1\exp(-C^1) \\
R^2\exp(-C^2) \\
R^3\exp(-C^3)
\end{bmatrix}=\begin{bmatrix} 0.245 \\
0.665 \\
0.090
\end{bmatrix}
\]
is also shown in each plot. The plot for $N=20$ shows that, while the belief path is convergent, there is a certain offset between its limit point and the MFE. This is because the number of players is not sufficiently large. On the other hand, when $N=200$, the MFE $Q^*$ is a good approximation of the limit point of the belief path.
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{FP_Monderer_Shapley.pdf}
\caption{Convergence of the belief path generated by the symmetric fictitious play \eqref{eqfpupdate} in the $N$-player single-stage traffic routing game with three ($J=3$) alternative paths shown in Figure~\ref{fig:path}. The left plot shows the case with $N=20$, while the right plot shows the case with $N=200$. The value of the MFE $Q^*$ is also shown.}
\label{fig:fp}
\end{figure*}
\section{Conclusion and Future Work}
\label{secconclude}
In this paper, we showed that the MFE of a large-population traffic routing game under the log-population tax mechanism can be obtained via the linearly solvable MDP.
Strong time consistency of the derived MFE was discussed.
A connection between the MFE and fictitious play was investigated.
While this paper is restricted to discrete-time discrete-state formalisms, its continuous-time continuous-state counterpart is worth investigating in the future. The interface between the existing traffic SUE theory \cite{sheffi1984urban} and MFG must be thoroughly studied in future work. Convergence of fictitious play and its relationship with the MFE presented in Section~\ref{secfictitious} should be studied in more general settings. {\color{black} Linear solvability renders the proposed MFG framework attractive as an incentive mechanism for TSOs for the purpose of traffic congestion mitigation; however, questions from the perspective of mechanism design theory, such as how to tune the parameters $\alpha$ and $R$ (which are assumed given in this paper) to balance efficiency and budget, are unexplored.
Finally, generalization to non-homogeneous MFGs with multiple classes of players (which was recently studied in \cite{pedram2019}) needs further investigation.
}
\appendices
\ifdefined\LONGVERSION
\section{Explicit expression of \eqref{eqexp_pi_general}}
\label{appbinomial}
Since player $n$'s probability of taking action $j$ at location $i$ at time step $t$ is given by $P_{n,t}^iQ_{n,t}^{ij}$, the total number $K_{N,t}^{ij}$ of such players follows the Poisson binomial distribution
\[
\text{Pr}(K_{N,t}^{ij}=k)=\sum_{A\in F_k}\prod_{n\in A} P_{n,t}^iQ_{n,t}^{ij}\!\!\prod_{n^c\in A^c}\!\!(1-P_{n^c,t}^iQ_{n^c, t}^{ij}).
\]
Here, $F_k$ is the set of all subsets of size $k$ that can be selected from $\mathcal{N}=\{1,2, ..., N\}$, and $A^c=\mathcal{N}\backslash A$. Similarly, the distribution of $K_{N,t}^{i}$ is given by
\[
\text{Pr}(K_{N,t}^{i}=k)=\sum_{A\in F_k}\prod_{n\in A} P_{n,t}^i\!\!\prod_{n^c\in A^c}\!\!(1-P_{n^c,t}^i).
\]
Notice also that the conditional distribution of $K_{N,t}^{ij}$ given player $n$'s location-action pair $(i_{n,t},j_{n,t})=(i,j)$ is
\begin{align}
&\text{Pr}(K_{N,t}^{ij} =k+1 \mid i_{n,t}=i, j_{n,t}=j) \nonumber \\
&=\sum_{A\in F_k^{-n}}\prod_{m\in A} P_{m,t}^iQ_{m,t}^{ij}\!\!\prod_{m^c\in A^{-c}}\!\!(1-P_{m^c,t}^iQ_{m^c,t}^{ij}). \label{eqrhadcond}
\end{align}
Here, $F_k^{-n}$ is the set of all subsets of size $k$ that can be selected from $\mathcal{N}\backslash \{n\}$, and $A^{-c}=(\mathcal{N}\backslash\{n\})\backslash A$. Similarly, the conditional distribution of $K_{N,t}^{i}$ given $i_{n,t}=i$ is
\begin{align}
&\text{Pr}(K_{N,t}^{i} =k+1 \mid i_{n,t}=i) \nonumber \\
&=\sum_{A\in F_k^{-n}}\prod_{m\in A} P_{m,t}^i\!\prod_{m^c\in A^{-c}}\!\!(1-P_{m^c,t}^i). \label{eqrhadcond2}
\end{align}
Therefore, given the prior knowledge that the player $n$'s location-action pair at time $t$ is $(i,j)$, the expectation of her tax penalty $\pi_{N,n,t}^{ij}$ is
\begin{align}
&\Pi_{N,n,t}^{ij}\triangleq \mathbb{E}\left[\pi_{N,n,t}^{ij} \mid i_{n,t}=i, j_{n,t}=j \right] \nonumber \\
=&\sum_{k=0}^{N-1} \!\alpha\log\frac{k+1}{N}\!\!\!\!\sum_{A\in F_k^{-n}}\!\prod_{m\in A}\!\! P_{m,t}^iQ_{m,t}^{ij} \!\!\!\! \prod_{m^c\in A^{-c}}\!\!\!(1\!-\!P_{m^c\!, t}^iQ_{m^c\!, t}^{ij}) \nonumber \\
&- \sum_{k=0}^{N-1} \!\alpha\log\frac{k+1}{N}\!\!\!\!\sum_{A\in F_k^{-n}}\!\prod_{m\in A}\!\! P_{m,t}^i \!\!\!\! \prod_{m^c\in A^{-c}}\!\!\!(1\!-\!P_{m^c\!, t}^i) \nonumber \\
&- \alpha\log R_t^{ij}.
\label{eqtijgeneral}
\end{align}
Notice that the quantity \eqref{eqtijgeneral} depends on the strategies $Q_{-n}\triangleq \{Q_m\}_{m \neq n}$, but not on $Q_n$. In other words, $\pi_{N,n,t}^{ij}$ is a random variable whose distribution does not depend on player $n$'s own strategy.
To evaluate $\Pi_{N,n,t}^{ij}$ when all players other than player $n$ take the same strategy (i.e., $Q_{m,t}=Q_t^*$ for $m\neq n$), notice that the conditional distributions of $K_{N,t}^{ij}$ and $K_{N,t}^{i}$ given $(i_{n,t}, j_{n,t})=(i,j)$, provided by \eqref{eqrhadcond} and \eqref{eqrhadcond2}, simplify to the binomial distributions
\begin{align*}
&\text{Pr}(K^{ij}_{N,t}=k+1|i_{n,t}=i, j_{n,t}=j)\\
&={N\!-\!1 \choose k}(P_t^{i*}Q_t^{ij*})^k (1-P_t^{i*}Q_t^{ij*})^{N-1-k}\\
&\text{Pr}(K^{i}_{N,t}=k+1|i_{n,t}=i)\\
&={N\!-\!1 \choose k}(P_t^{i*})^k (1-P_t^{i*})^{N-1-k}.
\end{align*}
Thus, the expression \eqref{eqtijgeneral} simplifies to
\begin{align}
&\Pi_{N,n,t}^{ij}=\mathbb{E}\left[\pi_{N,n,t}^{ij} \mid i_n=i, j_n=j \right] \nonumber \\
&=\!\sum_{k=0}^{N-1}\!\alpha \log\frac{k+1}{N}{N\!-\!1 \choose k}(P_t^{i*}Q_t^{ij*})^k (1\!-\!P_t^{i*}Q_t^{ij*})^{N-1-k} \nonumber \\
&\;\;\;-\!\sum_{k=0}^{N-1}\!\alpha \log\frac{k+1}{N}{N\!-\!1 \choose k}(P_t^{i*})^k (1\!-\!P_t^{i*})^{N-1-k} \nonumber \\
&\;\;\;- \alpha \log R_t^{ij}
\label{eqtjieq}
\end{align}
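The convergence asserted in Lemma~\ref{lemlimpi} can be observed directly from \eqref{eqtjieq}; below is a small Python sketch with illustrative values of $P_t^{i*}$ and $Q_t^{ij*}$ (our choices):
\begin{verbatim}
import numpy as np
from math import comb, log

def exp_log_count(N, q):
    # E[log((K+1)/N)] with K ~ Binomial(N-1, q).
    return sum(log((k + 1) / N) * comb(N - 1, k)
               * q ** k * (1 - q) ** (N - 1 - k) for k in range(N))

alpha, R, P, Q = 1.0, 0.25, 0.6, 0.4   # illustrative P_t^{i*}, Q_t^{ij*}
for N in (10, 100, 1000):
    Pi_N = alpha * (exp_log_count(N, P * Q) - exp_log_count(N, P)
                    - log(R))
    print(N, Pi_N)
print("limit:", alpha * log(Q / R))    # Lemma 1
\end{verbatim}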
\fi
\ifdefined\LONGVERSION
\section{Proof of Lemma~\ref{lemlimpi}}
\label{app1}
Let $K^i_{N,-n,t}$ denote the number of agents, other than agent $n$, that are located at intersection $i$ at time $t$, and let $K^{ij}_{N,-n,t}$ denote the number of agents, other than agent $n$, that are located at intersection $i$ at time $t$ and select intersection $j$ as their next destination. Thus, we have $K^i_{N,-n,t}=\sum_{l\neq n}\I{i_{l,t}=i}$ and $K^{ij}_{N,-n,t}=\sum_{l\neq n}\I{i_{l,t}=i, j_{l,t}=j}$,
where $\I{\cdot}$ is the indicator function.
Then, $\Pi_{N,n,t}^{ij*}$ can be written as
\begin{align}
\Pi_{N,n,t}^{ij*}=&\alpha\ES{\log\tfrac{1+K^{ij}_{N,-n,t}}{1+K^i_{N,-n,t}}}-\alpha\log R_t^{ij}\nonumber\\
=&\alpha\ES{\logp{ \tfrac{1+K^{ij}_{N,-n,t}}{N}}}-\nonumber\\
&\alpha\ES{\logp{\tfrac{1+K^i_{N,-n,t}}{N}}}-\alpha\log R_t^{ij}\nonumber
\end{align}
Using Jensen's inequality, we have
\begin{align}
\ES{\logp{\tfrac{1+K^{ij}_{N,-n,t}}{N}}}&\leq \logp{\frac{1}{N}+\ES{\tfrac{K^{ij}_{N,-n,t}}{N}}}\nonumber\\
&\stackrel{(a)}{=}\logp{\frac{1}{N}+\frac{N-1}{N}P_t^{i*}Q_t^{ij*}}\nonumber
\end{align}
where $(a)$ follows from $\ES{\frac{K^{ij}_{N,-n,t}}{N}}=\frac{N-1}{N}P_t^{i*}Q_t^{ij*}$ and the fact that all the agents other than $n$ employ the policy $\left\{Q^{*}_t\right\}$. Thus, we have
\begin{align}
\limsup_{N\rightarrow\infty}\ES{\log \tfrac{1+K^{ij}_{N,-n,t}}{N}}\leq \log P_t^{i*}Q_t^{ij*}
\end{align}
Next, we show the other direction. For $\epsilon\in\big(0, \tfrac{P_t^{i*}Q_t^{ij*}}{2}\big]$, we can write $\ES{\log \frac{1+K^{ij}_{N,-n,t}}{N}}$ as
\begin{align}
\ES{\log \tfrac{1+K^{ij}_{N,-n,t}}{N}}=&\ES{\log \tfrac{1+K^{ij}_{N,-n,t}}{N}\I{\tfrac{K^{ij}_{N,-n,t}}{N}>\epsilon}}+\nonumber\\
&\ES{\log \tfrac{1+K^{ij}_{N,-n,t}}{N}\I{\tfrac{K^{ij}_{N,-n,t}}{N}\leq \epsilon}}\nonumber
\end{align}
Using the Hoeffding inequality, it follows that $\frac{K^{ij}_{N,-n,t}}{N}$ converges to $P_t^{i*}Q_t^{ij*}$ in probability as $N$ becomes large. From the continuous mapping theorem, we have the convergence of $\log \frac{K^{ij}_{N,-n,t}}{N}$ in probability to $\log P_t^{i*}Q_t^{ij*}$ for $P_t^{i*}Q_t^{ij*}>0$. Similarly, $\I{\frac{K^{ij}_{N,-n,t}}{N}>\epsilon}$ converges to 1 in probability. Thus, from Slutsky's theorem, $\log \frac{1+K^{ij}_{N,-n,t}}{N}\I{\frac{K^{ij}_{N,-n,t}}{N}>\epsilon}$ converges to $\log P_t^{i*}Q_t^{ij*}$ in distribution. Using Fatou's lemma and the fact that $\log \frac{1+K^{ij}_{N,-n,t}}{N}\I{\frac{K^{ij}_{N,-n,t}}{N}>\epsilon}\geq \log\epsilon$, we have
\begin{align}
\liminf_{N\rightarrow\infty} \ES{\log\tfrac{1+K^{ij}_{N,-n,t}}{N}\I{\tfrac{K^{ij}_{N,-n,t}}{N}>\epsilon}}\geq \log P_t^{i*}Q_t^{ij*}\nonumber
\end{align}
We also have
\begin{align*}
&\abs{\ES{\log \tfrac{1+K^{ij}_{N,-n,t}}{N}\I{\tfrac{K^{ij}_{N,-n,t}}{N}\leq \epsilon}}} \\
&\leq \logp{N}\PRP{\tfrac{K^{ij}_{N,-n,t}}{N}\leq \epsilon}
\end{align*}
Using the Hoeffding inequality, it is straightforward to show that $\PRP{\frac{K^{ij}_{N,-n,t}}{N}\leq \epsilon}$ decays to zero exponentially in $N$ which implies that
\begin{align}
\lim_{N\rightarrow\infty}\ES{\log \tfrac{1+K^{ij}_{N,-n,t}}{N}\I{\tfrac{K^{ij}_{N,-n,t}}{N}\leq \epsilon}}=0\nonumber
\end{align}
Thus, we have
\begin{align}
\liminf_{N\rightarrow\infty}\ES{\log \tfrac{1+K^{ij}_{N,-n,t}}{N}}\geq \log P_t^{i*}Q_t^{ij*}
\end{align}
which implies that $\lim_{N\rightarrow\infty}\ES{\logp{ \frac{1+K^{ij}_{N,-n,t}}{N}}}=\log P^{i*}_tQ^{ij*}_t$. Following similar steps, it is straightforward to show that $\lim_{N\rightarrow\infty}\ES{\logp{ \frac{1+K^{i}_{N,-n,t}}{N}}}=\log P^{i*}_t$ which completes the proof.
\fi
\ifdefined\LONGVERSION
\section{Proof of Theorem~\ref{theo1}}
\label{apptheo1}
Due to the choice of the terminal condition $\phi_T^i=1$,
\[
V_T(P_T)= -\alpha\sum_i P_T^i \log \phi_T^i=0
\]
holds. To complete the proof by backward induction, assume that
$
V_{t+1}(P_{t+1})=-\alpha\sum_i P_{t+1}^i \log \phi_{t+1}^i
$
holds for some $0\leq t \leq T-1$. {\color{black} Then, due to the Bellman equation \eqref{eqbellmankl}, we have
\begin{equation}
\label{eq_apdx_c}
V_{t}(P_{t}) =\min_{Q_t} \left\{ \sum_{i,j} P_t^i Q_t^{ij}\left(\rho_t^{ij}+\alpha\log\frac{Q_t^{ij}}{R_t^{ij}} \right)\right\}
\end{equation}
where $\rho_t^{ij}=C_t^{ij}-\alpha\log \phi_{t+1}^j$ is a constant.
To obtain a minimizer for \eqref{eq_apdx_c} subject to the constraints $P_t^i\left(\sum_j Q_t^{ij}-1\right)=0 \; \forall i$, introduce the Lagrangian function
\begin{align*}
L(Q_t, \lambda_t)=& \sum_{i,j} P_t^i Q_t^{ij}\left(\rho_t^{ij}+\alpha\log\frac{Q_t^{ij}}{R_t^{ij}} \right) \\
&-\sum_i \lambda_t^i P_t^i \left(\sum_j Q_t^{ij}-1\right).
\end{align*}
For each $i$ such that $P_t^i>0$, we obtain from the optimality condition $\frac{\partial L}{\partial Q_t^{ij}}=0$ that
\[
Q_t^{ij}=R_t^{ij}\exp\left(\frac{\lambda_t^i}{\alpha}-1-\frac{\rho_t^{ij}}{\alpha}\right).
\]
In order to satisfy $\sum_j Q_t^{ij}=1$, the Lagrange multiplier $\lambda_t^i$ must be chosen so that
\[
Q_t^{ij}=\frac{R_t^{ij}}{Z_t^i}\exp\left(-\frac{\rho_t^{ij}}{\alpha}\right)
\]
where the normalization constant $Z_t^i$ can be computed as
\begin{align*}
Z_t^i&=\sum_j R_t^{ij}\exp\left(-\frac{\rho_t^{ij}}{\alpha}\right) \\
&=\sum_j R_t^{ij}\exp\left(-\frac{C_t^{ij}}{\alpha}\right)\phi_{t+1}^j \\
&=\phi_t^i.
\end{align*}
Therefore, we have shown that a minimizer for \eqref{eq_apdx_c} is obtained as
\[
Q_t^{ij*}=\frac{R_t^{ij}}{\phi_t^i}\exp\left(-\frac{\rho_t^{ij}}{\alpha}\right)
\]
from which \eqref{eqoptq} also follows.} By substitution, the optimal value is shown to be
$
V_t(P_t)=-\alpha \sum_i P_t^i \log \phi_t^i.
$
This completes the induction proof.
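The closed-form minimizer above is a Gibbs-type (softmax) reweighting of the reference policy. As a sanity check, the following sketch (with arbitrary illustrative values of $R_t^{ij}$, $\rho_t^{ij}$, and $\alpha$ for a fixed state $i$; not part of the proof) verifies numerically that it attains the minimum of \eqref{eq_apdx_c} over the simplex:
\begin{verbatim}
# Verify that Q*_j = R_j exp(-rho_j/alpha) / Z minimizes
# F(Q) = sum_j Q_j (rho_j + alpha*log(Q_j/R_j)) over the simplex.
import numpy as np

rng = np.random.default_rng(1)
alpha = 0.5
R = np.array([0.2, 0.5, 0.3])     # reference policy R_t^{ij}, fixed i
rho = np.array([1.0, 0.3, 0.7])   # rho_j = C_t^{ij} - alpha*log(phi_{t+1}^j)

Q_star = R * np.exp(-rho / alpha)
Q_star /= Q_star.sum()            # the normalizer plays the role of phi_t^i

def F(Q):
    return float(np.sum(Q * (rho + alpha * np.log(Q / R))))

random_Q = rng.dirichlet(np.ones(len(R)), size=100000)
print(F(Q_star), min(F(Q) for Q in random_Q))  # F(Q_star) is the smaller
\end{verbatim}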
\fi
\ifdefined\LONGVERSION
\section{Proof of Theorem~\ref{theomfe}}
\label{apptheomfe}
Let the policies $Q_{m,t}^{ij}=Q_t^{ij*}$ for $m \neq n$ be fixed. It is sufficient to show that there exists a sequence $\epsilon_N \searrow 0$ such that the cost of adopting a strategy $Q_{n,t}^{ij}=Q_t^{ij*}$ for player $n$ is no greater than $\epsilon_N $ plus the cost of adopting any other policy.
Since
\[
\Pi_{N,n,t}^{ij}\rightarrow \alpha \log \frac{Q_t^{ij*}}{R_t^{ij}} \text{ as } N\rightarrow \infty,
\]
there exists a sequence $\delta_N \searrow 0$ such that
\[
\Pi_{N,n,t}^{ij}+\delta_N > \alpha \log \frac{Q_t^{ij*}}{R_t^{ij}} \;\; \forall i, j, t.
\]
Now, for any policy $\{Q_{n,t}\}_{t\in\mathcal{T}}$ of player $n$ and the induced distributions $P_{n,t+1}^j=\sum_i P_{n,t}^i Q_{n,t}^{ij}$, we have
\begin{align*}
&\sum_{t=0}^{T-1}\sum_{i,j} P_{n,t}^i Q_{n,t}^{ij} \left(C_t^{ij}+\Pi_{N,n,t}^{ij}\right) \\
&>\sum_{t=0}^{T-1}\sum_{i,j} P_{n,t}^i Q_{n,t}^{ij} \left(C_t^{ij}+\alpha \log \frac{Q_t^{ij*}}{R_t^{ij}}- \delta_N \right) \\
&\geq\sum_{t=0}^{T-1}\sum_{i,j} P_{n,t}^i Q_{n,t}^{ij} \left(C_t^{ij}+\alpha \log \frac{Q_t^{ij*}}{R_t^{ij}}\right)- T V^2 \delta_N\\
&\geq \min_{\{Q_{n,t}\}_{t\in\mathcal{T}}}\sum_{t=0}^{T-1}\sum_{i,j} P_{n,t}^i Q_{n,t}^{ij} \left(C_t^{ij}+\alpha \log \frac{Q_t^{ij*}}{R_t^{ij}}\right) \\
& \hspace{5ex} - T V^2 \delta_N.
\end{align*}
Notice that the minimization in the last line is attained by adopting $Q_{n,t}^{ij}=Q_t^{ij*}$. Since $\epsilon_N \triangleq T V^2 \delta_N \searrow 0$, this completes the proof.
\fi
\ifdefined\LONGVERSION
\section{Proof of Lemma~\ref{lemsinglestage}}
\label{app2}
For each $j=1, 2, ... , J$, define $f_N^j: [0, 1]\rightarrow \mathbb{R}$ by
\begin{align*}
&f_N^j(Q^j)\triangleq \\
& c^j+\alpha\sum_{n=0}^{N-1}\log\frac{n+1}{NR^j}{N\!-\!1 \choose n}(Q^j)^n (1-Q^j)^{N-1-n}.
\end{align*}
Notice that each $f_N^j$ is a continuous and strictly increasing function. If $Q^*\in \Delta^{J-1}$ is a symmetric Nash equilibrium of the
\begin{equation}
\label{eqNEcond}
Q^{*}\in \argmin_{Q \in \Delta^{J-1}} \sum_{j=1}^J Q^j f_N^j(Q^{j*}).
\end{equation}
From the KKT condition, $Q^*\in \Delta^{J-1}$ satisfies \eqref{eqNEcond} if and only if there exists $\lambda\in\mathbb{R}$ such that
\begin{subequations}
\label{eqKKT}
\begin{align}
f_N^j(Q^{j*})&=\lambda \text{ for all } j \text{ such that } Q^{j*}>0 \label{eqKKT1} \\
f_N^j(0)&\geq \lambda \text{ for all } j \text{ such that } Q^{j*}=0. \label{eqKKT2}
\end{align}
\end{subequations}
Alternatively, the above condition can be obtained directly from Wardrop's first principle \cite{wardrop1952some}.
Thus, it is sufficient to show that there exists a unique $Q^*\in \Delta^{J-1}$ satisfying the condition \eqref{eqKKT}. For each $j=1, 2, \ldots , J$, define $g^j:\mathbb{R}\rightarrow [0,1]$ by
\begin{equation}
\label{eqgdef}
g^j(\lambda)=\begin{cases}
0 & \text{ if } \lambda \leq f_N^j(0) \\
(f_N^j)^{-1}(\lambda) & \text{ if } f_N^j(0) < \lambda \leq f_N^j(1) \\
1 & \text{ if } f_N^j(1)<\lambda .
\end{cases}
\end{equation}
Define also $g(\lambda)\triangleq \sum_{j=1}^{J} g^j(\lambda)$. The next claim follows from the intermediate value theorem and the monotonicity of each $g^j$.
\begin{claim}
\label{claim1}
There exists $\lambda \in \mathbb{R}$ such that $g(\lambda)=1$. Moreover, if there exist $\lambda_1, \lambda_2\in\mathbb{R}$ such that $g(\lambda_1)=g(\lambda_2)$, then $g^j(\lambda_1)=g^j(\lambda_2)$ for each $j$.
\end{claim}
\begin{claim}
\label{claim2}
If $Q^*\in\Delta^{J-1}$ and $\lambda\in\mathbb{R}$ satisfy \eqref{eqKKT}, then it is necessary that $g(\lambda)=1$.
\end{claim}
\begin{proof}
Assume \eqref{eqKKT} and $g(\lambda)\neq 1$. Then
\begin{subequations}
\label{eqgchain}
\begin{align}
\sum\nolimits_j Q^{j*} &= \sum\nolimits_{j: Q^{j*}>0} Q^{j*} \label{eqgchain1}\\
&= \sum\nolimits_{j: Q^{j*}>0} g^j(\lambda) \label{eqgchain2}\\
&= \sum\nolimits_{j: Q^{j*}>0} g^j(\lambda) + \sum\nolimits_{j: Q^{j*}=0} g^j(\lambda) \label{eqgchain3} \\
&= \sum\nolimits_{j} g^j(\lambda) \\
&= g(\lambda)\neq 1.
\end{align}
\end{subequations}
Equality \eqref{eqgchain2} follows from \eqref{eqKKT1} and \eqref{eqgdef}. Equality \eqref{eqgchain3} holds since, for $j$ such that $Q^{j*}=0$, we have $f_N^j(0)\geq \lambda$ by \eqref{eqKKT2} and thus, by definition \eqref{eqgdef}, $g^j(\lambda)=0$. However, the chain of equalities \eqref{eqgchain} is a contradiction to $Q^*\in\Delta^{J-1}$.
\end{proof}
Now, pick any $\lambda\in\mathbb{R}$ such that $g(\lambda)=1$, and construct $Q^*$ by $Q^{j*}=g^j(\lambda)$. By Claim~\ref{claim1}, such a $Q^*$ is unique. It is easy to check that $Q^*\in\Delta^{J-1}$ and the condition \eqref{eqKKT} is satisfied. This construction, together with Claim~\ref{claim2}, shows that there exists a unique $Q^*\in\Delta^{J-1}$ satisfying the condition \eqref{eqKKT}.
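The proof is constructive: $Q^*$ can be computed by an outer bisection on $\lambda$, with each $g^j$ evaluated by numerically inverting the strictly increasing map $f_N^j$. A minimal sketch follows (the costs $c^j$ and reference distribution $R^j$ are illustrative values):
\begin{verbatim}
# Solve g(lambda) = 1 by bisection; g^j inverts the increasing map f_N^j.
import math

N, alpha = 20, 0.5
c = [1.0, 1.2, 0.8]      # illustrative route costs c^j
R = [0.3, 0.3, 0.4]      # illustrative reference distribution R^j

def f(j, q):
    s = sum(math.log((n + 1) / (N * R[j])) * math.comb(N - 1, n)
            * q**n * (1 - q)**(N - 1 - n) for n in range(N))
    return c[j] + alpha * s

def g_j(j, lam):
    if lam <= f(j, 0.0): return 0.0
    if lam >= f(j, 1.0): return 1.0
    lo, hi = 0.0, 1.0
    for _ in range(60):                   # bisection on the inverse
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(j, mid) < lam else (lo, mid)
    return 0.5 * (lo + hi)

lo = min(f(j, 0.0) for j in range(3))     # here g(lo) = 0 < 1
hi = max(f(j, 1.0) for j in range(3))     # here g(hi) = 3 > 1
for _ in range(60):                       # outer bisection: g(lambda) = 1
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if sum(g_j(j, mid) for j in range(3)) < 1.0 else (lo, mid)

Q_star = [g_j(j, 0.5 * (lo + hi)) for j in range(3)]
print(Q_star, sum(Q_star))                # Q* sums to 1
\end{verbatim}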
\fi
\section*{Acknowledgment}
The authors would like to thank Mr. Matthew T. Morris and Mr. James S. Stanesic at the University of Texas at Austin for their contributions to the numerical study in Section~\ref{secsimulation}. The first author also acknowledges valuable discussions with Dr. Tamer Ba\c{s}ar at the University of Illinois at Urbana-Champaign.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Beam Diagnostics}
\subsection{Beam Intensity and Position/Angle Monitoring}
As with all parity-violation experiments, highly linear, low noise beam property measurements are needed in
order to correct helicity-correlated false asymmetries. The $Q_{weak}\;${} experiment will use the beam monitoring
already installed in Hall C that was successfully used for the G0 experiment. This includes four microwave
cavity beam charge monitors, a total of twelve 4-wire SEE beam position monitors, and two microwave cavity beam
position monitors. This equipment is described in more detail in the 2004 $Q_{weak}\;${} jeopardy proposal. An
important upgrade that will be made to the readout of the equipment is the replacement of the TRIUMF 2 MHz voltage-to-frequency converters with the TRIUMF 18-bit, 500 kHz sampling ADC's, which have an effective 27-bit resolution. The beam position and angle can be deliberately varied using an air-core coil beam modulation system
that already exists in the hall from G0. Another change since 2004 is the addition of new collaborators
from University of Virginia and Syracuse University who have significant experience in these areas from running
parity-violation experiments in Hall A with similar beam monitoring equipment and controls.
\subsection{Higher Order Beam Moments}
Our last proposal was written shortly after simulations showed a small sensitivity
to beam spot size modulation. Since then, we have made some progress in understanding how spot size changes
would influence our azimuthal dependence, how such changes might be produced, and how to measure them. So far,
beam spot size changes are the only higher-order moment of a beam parameter which appears potentially large
enough to pose a challenge for $Q_{weak}\;$. However, we note that our solution for monitoring beam spot size changes
can be applied to any beam parameter which can be rotated into coordinate space somewhere along the beamline.
\paragraph*{Spot size systematics:}
In our azimuthally symmetric detector, after corrections for changes in the first moments of beam properties,
the experimental asymmetry becomes
$$
A(\phi) = A + B \cos(\phi) + C \sin(\phi) + D\cos(2\phi) + E\sin(2\phi)
$$
$\bullet$ The A term is dominated by parity violation (PV) but may contain relatively small contributions from
several classes of beam spot size changes. It can also contain a small contribution from leakage from the
$\phi$-dependent terms when the latter are large.
$\bullet$ The B and C (``dipole'') terms are due to the product of residual transverse electron polarization and
two-photon exchange. Unless one carefully nulls the transverse beam polarizations, the natural magnitude of
these parity conserving (PC) dipole terms will be similar to that of the PV asymmetry of interest.\cite{Pitt} In
principle, the contribution of these terms will vanish when averaged over the 8 detector bars, but broken
symmetries in the apparatus may cause a small leakage of the PC asymmetry into the PV
asymmetry, which will be challenging
to accurately measure and correct. Conservatively assuming that our apparatus is only symmetric to about 1\%, we
therefore plan to suppress the magnitude of the dipole terms by feeding back to the injector to null the
transverse beam polarization. If the magnitude of the B and C terms is significantly smaller than the PV
contribution to A, this potential PC leakage issue can be ignored.
$\bullet$ Beam spot size changes along a single preferred axis could make small contributions to {\em both} the
offset (A) and the quadrupole terms (D and E). The contribution to A could be corrected in principle, but this raises the possibility that, even if there are no beam spot size changes, a quadrupole term with weak statistical significance would lead us to shift the central value of our PV asymmetry by half the error bar.
When the modulation has no preferred axis (a radial breathing mode), it contributes a false asymmetry to A, but
does not provide any helpful diagnostic contribution to the $\phi$-dependent terms. An interesting but harmless
special case occurs if the major axis of the beam spot is first aligned along the x-axis for one spin state,
then the y-axis for the opposite spin state; such a toggling modulation would generate a pure quadrupole.
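With eight octants, the five coefficients in the expansion above are extracted by a simple linear least-squares fit. The following sketch illustrates the decomposition with synthetic octant asymmetries (the numerical values are purely illustrative):
\begin{verbatim}
# Fit A + B cos(phi) + C sin(phi) + D cos(2 phi) + E sin(2 phi)
# to asymmetries measured in 8 detector octants.
import numpy as np

phi = np.radians(np.arange(8) * 45.0)
X = np.column_stack([np.ones(8), np.cos(phi), np.sin(phi),
                     np.cos(2 * phi), np.sin(2 * phi)])
true = np.array([-0.29, 0.05, 0.0, 0.01, 0.0])   # ppm; synthetic inputs
A_meas = X @ true                                # octant asymmetries
coeffs, *_ = np.linalg.lstsq(X, A_meas, rcond=None)
print(dict(zip("ABCDE", np.round(coeffs, 4))))
\end{verbatim}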
Laser table studies are planned to set upper limits on helicity correlated spot size changes. However, even if
the spot size never changes on the laser table, there are still downstream elements (the vacuum window and the
gun cathode) which could produce spot size changes. These effects, too, can be studied, though with increasing
difficulty: window effects can potentially be studied with a suitably stressed mock-up, and cathode changes can
be studied by moving the laser spot on the cathode.
The insertion of a half-wave plate effectively reverses the physics asymmetry. Any helicity-independent false
asymmetries, such as electronic pickup from the injector, can therefore be cancelled exactly by subtraction
provided the offsets have not drifted between slow reversals. On the other hand, false asymmetries which
reverse along with the physics asymmetry are problematic. Unfortunately, we believe that most classes of beam spot size changes will be caused by spatial nonuniformities in birefringence, and therefore will flip sign with
the polarization change.
To summarize, not all classes of potential beam spot size changes which cause a false asymmetry will provide a
diagnostic $\cos(2\phi)$-like dependence in the detectors. Most potential mechanisms for spot size changes
cannot be cancelled using the half-wave plate. Laser table studies are beginning which will help us understand
the potential phenomenon better. Clearly, we need to measure, or at least bound, such potential effects on the
beamline inside Hall C. Plans for an appropriate detector are discussed in the next section.
\paragraph*{Beam spot size monitor:}
A non-intercepting beam monitor whose signal is linear in the position coordinate can only return the first
moment, $<x>$. Hence, a principal requirement for accessing the second moment, $\sigma_x$, is a monitor with
nonlinear response. Another important requirement is that the monitor be insensitive to beam position changes,
so that the small expected signal for spot size modulation is not swamped by normal position jitter.
In our 2004 proposal, we suggested using an offset pair of 4-wire BPMs to measure helicity-correlated size
changes, but subsequent simulations showed the sensitivity to spot size modulation was extremely small. The good
news is that the position information derived from a 4-wire BPM is essentially free of contamination from higher
order moments.
\begin{figure}[h]
\begin{center}
\rotatebox{+90.}{\resizebox{1.5in}{5.in}{\includegraphics{SpotSizeMonitor.eps}}}
\end{center}
\caption{{\em Sketch of the 3-cavity concept for measuring helicity correlated beam spot size changes, $\Delta
\sigma_x$ and $\Delta \sigma_y$. The curves are an attempt to represent $E_z(x,y,t)$.}} \label{Concept}
\end{figure}
We finally arrived at the scheme requiring 3 cavities sketched in Figure \ref{Concept}. The only way power can
be coupled from the beam into the cavity is via the interaction of the bunch charge with $E_z(x,y,t)$. Hence the
larger the longitudinal electric field, the larger the signal. The basic idea is that the beam spot size is
sensed by the higher curvature in $E_z(x,y,t)$ in two of the cavities, while the lower curvature cavity
normalizes the beam current. Note that none of these cavities has any first order position sensitivity when the
beam is perfectly on axis, so one expects that the position sensitivity would still be weak for a realistic
alignment scenario. This is confirmed analytically below.
We will use a standard BCM pillbox cavity resonating in the $TM_{010}$ mode at $3\times f_{rep}$=1497 MHz to
normalize the beam current, while a pair of rectangular cavities resonating in the $TM_{310}$ mode at $6 \times
f_{rep}$ = 2994 MHz will measure changes in $\sigma_x$ and $\sigma_y$.
The ratio of the rectangular to cylindrical cavity signals would be proportional to $1+\epsilon\,\sigma_i$ in
first order. The contribution of the finite spot size to this ratio is only about 0.1\%, but small spot size
{\em changes} will reveal themselves as a helicity correlated asymmetry calculated using the cavity signals.
Calibration will be done by modulating quadrupoles or the dipoles of the fast raster system.
In Reference \cite{MackCavity}, we performed detailed analytic calculations, and confirmed some important
features with MAFIA. Here, we repeat a small part of that work. Using the $TM_{010}$ mode from a JLab
cylindrical BCM as a current measurement, we define the normalized $TM_{310}$ signal as $\bar{V}_{310} \equiv
\frac{V_{310}}{V_{010}}$, where the $V$'s are downconverted signal voltages converted to DC, and the helicity
correlated asymmetry for the $TM_{310}$ mode is defined as
\begin{eqnarray}
A_{310} \equiv \frac{\bar{V}_{310}^+ - \bar{V}_{310}^-}{\bar{V}_{310}^+ + \bar{V}_{310}^-}
\end{eqnarray}
If the helicity correlated width and offsets are $ w^{\pm} = w_0 \pm \Delta w/2$ and $ x^{\pm} = x_0 \pm \Delta
x/2$ respectively, then it can be shown that
\begin{eqnarray}
A_{310} = -\frac{3}{8} (\frac{\pi}{a})^2 w_0 \Delta w -\frac{9}{8} (\frac{\pi}{a })^2 x_0 \Delta x ,
\end{eqnarray}
where the first term contains the spot size changes of interest and the second term is the background due to
beam position changes.
These results are given in Figure \ref{Sensitivities} for an interesting range of parameters. For $a$ = 20 cm,
$w$ = 0.4 cm, and the smallest size modulation to which the experiment is sensitive ($\Delta w$ = 0.1 microns),
then the spot size asymmetry is $A_{310}$ = 37 ppb. For reasonable alignment tolerances, the corresponding
correction for position jitter would be smaller by an order of magnitude.
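These numbers follow directly from the expression for $A_{310}$; a quick numerical check is sketched below (the 2 mm alignment offset $x_0$ and 1 nm position modulation $\Delta x$ are assumed values, for illustration only):
\begin{verbatim}
# Evaluate A_310 = -(3/8)(pi/a)^2 w0*dw - (9/8)(pi/a)^2 x0*dx  (lengths in cm)
import math

a = 20.0                      # cavity dimension (cm)
w0, dw = 0.4, 1.0e-5          # beam width (cm) and 0.1 um modulation (cm)
x0, dx = 0.2, 1.0e-7          # assumed 2 mm offset and 1 nm modulation (cm)

k = (math.pi / a) ** 2
print(-3.0 / 8.0 * k * w0 * dw * 1e9)   # spot-size term: ~ -37 ppb
print(-9.0 / 8.0 * k * x0 * dx * 1e9)   # position term:  ~ -0.6 ppb
\end{verbatim}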
The measurement time will be determined by the poorly
known electronic noise floor. If the TRIUMF sampling ADC's are the limiting factor, then it
will take only minutes to establish whether the spot size is changing by an amount large enough to affect the
experiment.
\begin{figure}[h]
\begin{center}
\rotatebox{0.}{\resizebox{6.0in}{3in}{\includegraphics{spotsize_and_position.eps}}}
\end{center}
\caption{ {\em Left: Expected asymmetry from the normalized spot size monitor cavity as a function of the
helicity correlated change in width. The threshold of concern for $Q_{weak}\;$ is about 0.1 microns. Right: Expected
asymmetry from the normalized spot size monitor cavity as a function of the beam-cavity misalignment. The
position regressions are generally relatively small.}} \label{Sensitivities}
\end{figure}
\subsection{Luminosity Monitors}
The luminosity monitors are deployed in locations where their count
rates will be much higher than those of the main detectors, so the resulting
statistical errors are small. They will be used for two purposes.
First, since they have a much smaller statistical error per
measurement period than the main detector, they are much more
sensitive to the onset of target density fluctuations. Second, the
luminosity monitor will be used as a valuable ``null asymmetry
monitor'', since it is expected to have a much smaller asymmetry
than the main detector; thus if its asymmetry is non-zero it could
indicate the presence of a false helicity-correlated effect in the
experiment.
Since 2004, we have expanded our plans for the luminosity monitors, and
they will now be deployed in two locations - an upstream location
on the front face of the primary defining collimator and a downstream
location about 17 meters downstream of the target. The upstream set
will primarily detect M{\o}ller scattered electrons at about 6 degrees;
this cross section is insensitive to beam energy and angle changes,
so this set will be ideal for monitoring target density fluctuations.
The downstream set will be located at a scattering angle of about
0.5$^{\circ}$ and will be equally sensitive to M{\o}ller and e-p
elastic electrons. This set will be equally or more sensitive to
helicity-correlated beam properties than the main detector.
The detectors will be {\v C}erenkov detectors with quartz (Spectrosil
2000; the same grade as the main detectors) as the active medium
- 3 cm x 5 cm x 2 cm for the downstream version and 7 cm x 25 cm x 2 cm
for the upstream version. Light from the detectors will be transported
via air-core light guides coated with polished and chemically
brightened anodized aluminum. Figure~\ref{lumifigs} shows the collected
light per cosmic ray event from a prototype of the downstream luminosity
monitor. The collected light yield is adequate for our purposes, and
the observed value of the fractional photoelectron fluctuations
$\sigma_{pe}/\langle pe \rangle \sim 4/7$ implies only a 15\% increase
beyond counting statistics in the luminosity monitor per measurement
period. This is acceptable because the luminosity monitor is simulated
to have a count rate that implies a factor of at least 6 smaller statistical
error than the main detector from pure counting statistics.
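For reference, the quoted 15\% increase is the usual excess-noise factor,
\[
\sqrt{1+\left(\sigma_{pe}/\langle pe \rangle\right)^2}=\sqrt{1+(4/7)^2}\approx 1.15 .
\]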
\begin{figure}[htbp]
\centerline{\includegraphics[width=6.in]{lumifigs.eps}} \caption{\em a) Observed photoelectron yield per cosmic
ray event in a prototype detector with quartz and 30 cm long air lightguide. b) Observed differential
non-linearity of phototube cathode current output versus DC current level.} \label{lumifigs}
\end{figure}
The photodetector will be a Hamamatsu R375, 10-stage, multi-alkali
photomultiplier read out in ``photodiode mode'' (all dynodes tied
together). We have made studies of the linearity of this scheme
using a setup with a DC and AC light emitting diode, a picoammeter
to read the photodetector cathode current, and a lock-in amplifier
to monitor the photodetector AC response as the DC light level is
varied. Some typical results are shown in Figure~\ref{lumifigs}. The
observed differential non-linearity ($< 0.2\%$) is below our
required 1$\%$ over our operating range.
\section{Beam Time Request}
\subsection{Basic Operational Requirements and Constraints}
\label{Basic Operational Requirements and Constraints}
The first part of this section briefly summarizes some of the requirements important for the $Q_W^{p}\;$ measurement. A
number of factors constrain the range of acceptable incident beam energies and currents. The nominal beam
energy was very carefully selected based upon extensive optimization of kinematics, hadronic backgrounds,
control of beam properties (requirements scale inversely with the asymmetry), practical solid angle acceptance
issues, detector dimensions, deliverable cooling for the target and magnet power supply issues. More details can
be found in the 2004 Proposal, Technical Design Report and numerous other progress reports that are available on
the $Q_{weak}\;$ document server which is accessible from the web site ``http://www.jlab.org/qweak/''.
The second part of this section is our formal summary of the requested beam time broken down by activity. Our
beam time request covers all aspects of conducting the measurement, including hardware commissioning,
systematic studies, background measurements, calibration measurements and production running. Since the
experiment will probably be run in no more than three blocks of almost contiguous time, our ability to deliver a
fully analyzed 8\% initial measurement prior to beginning the final 4\% production running is severely
handicapped. However, we plan to generate the 8\% measurement from the first several weeks of running of
sufficient quality. This will mostly likely be the running just after commissioning has been completed. Due to
schedule uncertainties, this period may or may not end up being contiguous with the commissioning period. In
either case, we will analyze this first set of data as rapidly as possible in order to
obtain the maximum information concerning the quality of all parameters prior to committing to our final
production running configuration.
\subsubsection{Parity Quality Longitudinally Polarized Beam}
A quick summary of the $Q_{weak}\;$ experiment requirements includes: 85\% polarized, parity quality beam delivered with 125 Hz up to possibly 500 Hz (1 ms) pseudo-random helicity reversal, with the polarization settling periods contributing no more than 5\% gating dead time. The faster rapid helicity reversal rates of either 125 Hz or 500 Hz are to be compared to the present much slower reversal rate referred to as ``30 Hz reversal''.
Due to significant recent advances by the
JLab polarized source group, these requirements now appear straightforward. The final decision on how fast to
flip the spin during production running will depend on results from ongoing source performance tests, the
practical experience of the upcoming Pb parity experiment which has adopted our higher flip rate scheme, and
noise studies on our LH$_2$ cryo-target during the $Q_{weak}\;$ commissioning phase. Changing the flip rate and
reversal pattern is now relatively simple for the polarized source group to implement. Although the Pb parity
experiment will likely have been run in Hall A just before $Q_{weak}\;$ starts, it is important that polarized beam
experiments running concurrent with $Q_{weak}\;$ make the necessary preparations to handle the faster reversal rate.
\newpage
We also require that there be no significant (less than $10^{-9}$) pickup of the coherent reversal signal
through electrical ground loops as observed by our integrators due to leakage of the prompt reversal signal at
the injector. We have worked with the polarized source group on this and other issues, and they have implemented
hardware changes which should accomplish this requirement. The measurement also requires stable source and
accelerator operation, with modest trip rates, energy and position feedback locks, acceptable halo, beam motion,
size and current modulation within specified limits. Additional detailed requirements are given elsewhere in
this document and in the 2004 proposal.
Although the measurement's sensitivity to residual transverse polarization is strongly suppressed by both
kinematics and our 8 fold detector symmetry, the acceptable upper limit is about 5\% on residual transverse
polarization. Therefore we require that the polarized source be configured to deliver ``full" longitudinal
polarized beam to Hall C, and that the experiment be allowed to adjust or implement feedback as necessary to
keep the residual transverse polarization level acceptable. We will also need the capability to implement
precision feedback to control the helicity correlated beam energy, intensity, and position. Basically, these are
the standard complement of precision beam controls afforded to parity measurements at JLab.
There are so-called ``magic'' energies which allow Hall C to have full longitudinal polarization while preserving
excellent polarization in Halls A and B. These are illustrated with a few examples in Table
\ref{Magic_Energies}.
\begin{table}[htb]
\centering\caption{ {\em Examples of Polarized Beam ``Magic'' Energies}}
\label{Magic_Energies}
\begin{tabular}{llll}
& & & \\
$E_{linac}$ & $E_{A\,{\rm or}\,B}$ & ~~$E_C$ & Available $P_e$ \\
(GeV) & (GeV) & (GeV) & ~~A/B/C \\ \hline
1.069 & 5.405 & 1.129 & 98\% / 100\% / 100\% \\
1.088 & 5.503 & 1.150 & 96\% / 100\% / 100\% \\
1.108 & 5.602 & 1.170 & 94\% / 100\% / 100\% \\
& & & \\ \hline \hline
\end{tabular}
\end{table}
\subsubsection{Energy, Precision and Background Tradeoffs}
The nominal design beam energy for the $Q_{weak}\;$ experiment is 1.165 GeV. We have modelled the experiment to
determine how the uncertainties change as a function of incident beam energy. Table
\ref{Kinematics_and_Backgrounds} considers three extreme cases, for incident energies of 1.095 GeV, 1.165 GeV
and 1.240 GeV. In the simulations, the magnetic field was scaled by the incident beam energy.
\begin{table}[htb]
\centering\caption{ {\em Examples: Kinematics and Background Tradeoffs }}
\label{Kinematics_and_Backgrounds}
\begin{tabular}{lllll}
& & & \\ \hline
\\
Beam Energy (GeV) & 1.095 & 1.165 & 1.240 & \\
Rate (MHz) & 911 & 810 & 702 & \\
Average $Q^2$ (GeV/$c)^2$ & 0.0229 & 0.0258 & 0.0292 & \\
Statistical uncertainty (\%) & 3.37 & 3.20 & 3.05 & \\
Hadronic uncertainty (\%) & 1.37 & 1.51 & 1.68 & \\
All other errors (\%) & 2.00 & 2.09 & 2.18 & \\
Relative Error on $Q^p_W$ (\%) & 4.15 & 4.11 & 4.11 & \\
& & & \\ \hline \hline
Note: The quoted errors are the errors on $Q^p_W$.
\end{tabular}
\end{table}
Although the overall figure-of-merit (and therefore the total uncertainty on $Q_W^{p}\;$) is relatively flat with incident beam energy, the average $Q^2$ rises from about 0.023 (1.095 GeV) to 0.029 (1.240 GeV). The 0.029 case
is undesirable as the error contribution due to the residual hadronic background increases. Table
\ref{Kinematics_and_Backgrounds} shows only the average $Q^2$, when in reality we have a tail of higher $Q^2$
events within our acceptance, so we desire to keep the average $Q^2$ small. Lower beam energies suppress the
hadronic uncertainty. However, if the incident beam energy is lowered significantly below $\sim$1.1 GeV, then the
overall figure-of-merit deteriorates rapidly because the asymmetry-weighted statistical error increases.
Therefore, from these arguments we prefer an incident beam energy of greater than 1.1 GeV but less than 1.165
GeV.
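The bottom row of Table \ref{Kinematics_and_Backgrounds} is simply the quadrature sum of the three error contributions, as the following check (a sketch, using the table values) confirms:
\begin{verbatim}
# Quadrature sum of (statistical, hadronic, other) errors on Q_W^p, in %.
import math

table = {1.095: (3.37, 1.37, 2.00),
         1.165: (3.20, 1.51, 2.09),
         1.240: (3.05, 1.68, 2.18)}
for E_GeV, errs in table.items():
    print(E_GeV, round(math.sqrt(sum(x * x for x in errs)), 2))
# -> 4.15, 4.11, 4.11, matching the table
\end{verbatim}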
\subsubsection{Maximum Magnet and Power Supply Capabilities}
The installed AC power, DC power supply, cables, and magnet have been designed to operate at 8615 Amps, which corresponds to an
incident beam energy of 1.165 GeV. The maximum current capability of the power supply and the magnet (cooling
and forces) is 9500 Amps. The reserve is largely consumed by headroom needed for reliable and safe operation of the power supply-magnet system and to allow
for a modest current increase required because we had to slightly enlarge the radial separation of the coils as a result of tolerance considerations
encountered during the trial assembly at MIT-Bates. The net result leaves us with a modest 4\% reserve over and above operation at 1.165 GeV.
However, we will not know for certain how much reserve there is in the power supply or how high
in field we can actually energize the magnet until the field mapping has occurred at MIT-Bates in the late
Spring of 2008. It is possible that operation much above 1.165 GeV will not prove practical for other reasons.
\subsubsection{Cryogenics Available to Cool the Target and Accelerator}
The experiment requires sufficient cooling to operate the 35 cm LH$_2$ target with high current beam
and keep the noise contribution due to ``boiling'' and other density fluctuations significantly smaller
than the total statistical error per helicity pulse pair. This requirement and the suppression of other
helicity correlated beam ``residuals" are the primary reasons we will be flipping the beam helicity at the new
much higher rate. As the beam energy is increased above the nominal 1.165 GeV the accelerator rapidly requires
more cryogens, and therefore less are available for our 2.4 kW target. This problem was addressed in 2004 with
extensive discussions between the $Q_{weak}\;$ collaboration, the Accelerator Division, the FEL group, the Cryogenics
group, and the Physics Division leadership. These discussions occurred over about 6 months and culminated in an
agreement (``cryo-agreement.pdf'' -- available on the document server at ``http://www.jlab.org/qweak/'') between
all parties concerning how the cryogenic requirements of the $Q_{weak}\;$ experiment would be achieved and the
constraints that would be imposed on running in Hall A, Hall B and the FEL. The agreement was predicated on 5.5
GeV, 5-pass operation, and under the assumption that a lower-power target program in Hall A could be run concurrently
with $Q_{weak}\;$ and the FEL. However, the agreement also showed that at 5.8 GeV, an additional 8 g/sec of CHL
reserve capacity would be lost. In such a case, it would be necessary to shut down either Hall A or the FEL load
in order to run $Q_{weak}\;$. At beam energies below 5.5 GeV, it is easier for the CHL to deliver the required
cooling.
\subsection{$Q^2$ Calibrations}
In this section, we discuss the aspects of the beam time request
related to the $Q^2$ calibration.
{\bf Commissioning} For all of the commissioning activities, it is
assumed that the listed tasks will be appropriately interleaved
with other commissioning activities to ensure that adequate time
can be given to analyzing the data and planning appropriately. Below,
we list the tasks we anticipate carrying out during the $Q^2$ related
commissioning activities listed in Table~\ref{Beam_Request_Itemized}.
{\bf Commissioning: 0.1 - 5 nA beam and diagnostics - 4 days} The goal of this
activity is to establish the procedures for routine ``on-demand''
delivery of beam in the 0.1 - 5 nA current range and to commission
all the needed hardware to monitor the intensity, position, and
size of the low current beam. Tasks to be carried out
during this activity are:
\begin{itemize}
\item Establish the laser and chopper slit settings needed to deliver
the range of low current beam desired
\item Establish the gains and thresholds of the ``halo'' monitor detectors
\item Calibrate the beam current monitoring system (aluminum target plus
halo monitors) by cross calibration with the injector Faraday cup and
other techniques
\item Run several selected superharp monitors and optimize the beam
position/size measuring system (by determining which halo monitor
detectors give the best signal to noise)
\item After establishing the beam position/size measuring system,
determine if any further beam tuning is needed to optimize the beam
size and eliminate any halos
\item Run for several hours with low current beam (0.1 nA) to monitor
the stability of the beam properties using the optimized
detector/superharps chosen after analysis of the data from the previous
tasks
\end{itemize}
{\bf Commissioning: Region I, II, III tracking - 7 days} The goal of this activity is to do beam-related
commissioning of the tracking system components (the three sets of tracking chambers, trigger scintillator, and
quartz scanner). Tasks to be carried out during this activity are:
\begin{itemize}
\item Establish that insertion of the tracking system hardware can be done
in an acceptable amount of time ($<$ 4 hours)
\item At 0.1 nA, run with the full tracking system to establish the trigger
timing and measure the wire start times
\item Run the tracking system at 0.1 nA beam current with a variety of
targets: vertical/horizontal wire grids, carbon target, liquid hydrogen
target
\item At 100 nA with a liquid hydrogen target, run
with only
the Region III drift
chambers, trigger scintillator, main quartz detector, and quartz
scanner to establish the
relationship between the Region III and quartz scanner light-weighted
$Q^2$ maps
\item With a hydrogen gas target, study the rate dependent tracking
efficiency of the region II chambers (the most sensitive) by varying
the beam current over the range of 0.1 - 10 nA (corresponding to
rates ranging from 7 kHz - 700 kHz in the region II chambers)
\end{itemize}
{\bf Commissioning: Initial $Q^2$ Measurement - 4 days} The goal of this
activity is to perform an initial $Q^2$ measurement in each of the eight
octants (a total of four measurements since two octants can be done at
at time) and to measure the sensitivity of the $Q^2$ measurements to
several variables.
\begin{itemize}
\item For each octant pair, take enough data to satisfy our usual
requirements (about 2 hours per octant pair) with liquid hydrogen target
\item For a single octant pair, take data under a variety of conditions
with liquid hydrogen target -
vary magnetic field to move the elastic distribution across the
detector, vary beam position and angle
\item With a hydrogen gas target, take data for two different target pressures for one octant pair; the
difference will yield a $Q^2$ distribution for a hydrogen target with minimal external bremsstrahlung; this will
be useful in benchmarking our $Q^2$ simulations
\end{itemize}
{\bf Production: $Q^2$ measurements - 12 days} The $Q^2$ related
measurements during production running will consist of two activities:
\begin{itemize}
\item Scans of a main detector {\v C}erenkov bar using the quartz scanner at the nominal running current of 180
(or 150) $\mu$A. These can be done parasitically during regular production running. It is anticipated that a
typical scan for a single octant will take less than half an hour.
\item Dedicated calibration runs at low beam current ($\sim$ 0.15 nA). The
total time needed for one of these measurements is expected to be about
1 day. This includes 8 hours for setup/backout and a conservative 16 hours for
data-taking. The minimum needed data-taking time per octant pair is $\sim$ 2 hours.
This amount of data will yield 1$\%$ relative statistical error
per pixel assuming the quartz
bar is divided into 360 1 cm x 10 cm pixels for the $Q^2$ analysis.
\end{itemize}
The overall estimate of 12 days needed for this activity comes from assuming that
we will perform a $Q^2$ measurement roughly once per month during the $\sim$ 12
calendar months of production running.
\subsection{Requested Time}
\label{beamrequest}
Recognizing that there are unknowns with regard to the total cryogenic cooling capacity available, the need for
flexibility with respect to other experimental programs at JLab, and potential limits on the quantity and
quality of very high power polarized beam from a single gun, we present two scenarios for the $Q_{weak}\;$ beam time
request. These are for the case when the high power production running and associated backgrounds measurements
are limited to 150~$\mu$A, and the ideal condition when the source and accelerator can deliver on demand the
full 180~$\mu$A of beam current. The difference in time requested is not as dramatic as one might initially
expect, as many of the commissioning and systematic measurements will be performed at reduced beam currents.
Therefore, as detailed in Table~\ref{Beam_Request_Itemized} we request approval for 198 (PAC) days if the
production running current is 180~$\mu$A or alternately 223 (PAC) days if the production running is limited to a
maximum of 150~$\mu$A. This request covers the commissioning of the new beamline and Compton Polarimeter, all
experiment sub-systems, and the experiment-specific setup for the polarized source. It includes time for an
initial 8\% measurement and for the full production run associated with a 4\% measurement of the weak charge of
the proton. ``Production'' refers solely to full current running on the LH$_2$ target. Allowable overhead includes
time for background measurements, $Q^2$ calibrations, beam polarization measurements, systematic checks, and the
configuration changes needed to accomplish these. We assume that time needed to optimize $P^2 I$ in the injector
will come out of the factor of two in scheduled days versus PAC days ({\em i.e.}, it is unallowed overhead).
In summary, since the 2004 proposal, our beam request for the experiment has been significantly refined and
takes into account recent experiences concerning the commissioning of major new hardware. Based on the recent
experiences of parity measurements at JLab, better estimates of the effort required to conduct systematics
studies and backgrounds measurements are now incorporated. The request allows for serious commissioning of the
most critical sub-systems for $Q_{weak}\;$ -- specifically, our high power 2.4 kW cryo-target system, extensive new
tracking hardware, a new Hall C beamline and polarimeter. The request accounts for expected inefficiencies due
to the compression of the entire measurement into a very tight calendar period just prior to the end of the 6
GeV program at JLab.
\begin{table}[p]
\centering\caption{ {\em Itemized beam request for the experiment.}}
\label{Beam_Request_Itemized}
\begin{tabular}{lll}
& & \\
Category & Time & Time \\
& ($I_{max}$ = 180~$\mu$A) & ($I_{max}$ = 150~$\mu$A) \\ \hline
\\
{\bf Beamline/Polarimeter Commissioning:} & & \\
\\
Beam line and Compton & 14 days & 14 days \\
\\
{\bf Experiment Commissioning:} & & \\
\\
High Power Cryotarget & 7 days & 7 days \\
Main Detectors, Lumi's & 4 days & 4 days \\
Transverse Pol. measurement & 2 days & 2 days \\
Initial $Q^2$ measurement & 4 days & 4 days \\
QTOR Magnet & 2 days & 2 days \\
Neutral Axis Studies & 5 days & 5 days \\
0.1-5 nA beam and diagnostics, & 4 days & 4 days \\
Regions I, II, III tracking & 7 days & 7 days \\
Background Studies & 7 days & 7 days \\
\\
Commissioning subtotal & 56 days & 56 days
\\
\\
{\bf Production: $e+p$ elastic on $LH_2$} & 106 days & 127 days \\
& & \\
{\bf Overhead:}
\\ & & \\
Configuration changes & 4 days & 4 days \\
Al window background & 3 days & 4 days \\
Inelastic background & 3 days & 4 days \\
Soft background measurements & 3 days & 4 days \\
Polarimetry measurements & 4 days & 4 days \\
$Q^2$ measurements & 12 days & 12 days \\
Systematics: $I_{beam}$ dependence, etc. & 7 days & 8 days \\
\\
Overhead subtotal & 36 days & 40 days \\
& & \\
& \\ \hline
& & \\
{\bf Total:} & {\bf 198 days} & {\bf 223 days} \\
& & \\ \hline
\end{tabular}
\end{table}
\section{Infrastructure}
\subsection{QTOR Power Supply}
The new 2MVA power supply for the QTOR magnetic spectrometer has completed its final tests at the vendor and
will be packed and shipped to the MIT/Bates facility in December, 2007. Jefferson Laboratory operations funds
in the amount of \$54,000 have been awarded to MIT/Bates under a contract to perform full power tests and
mapping of the magnet including paying for the required AC power cables and electricity. This pre-ops work at
MIT-Bates should allow the assembly and testing of the magnet at Jefferson Laboratory to proceed efficiently
during actual installation in Hall C. New 2MVA lines have been installed in Hall C so as to allow the final
installation of the power supply to be straightforward.
\subsection{Support structures}
The magnet is now assembled in its support structure at MIT-Bates. The large rotation system components for the
region three drift chambers have been delivered to Jefferson Laboratory and are awaiting trial assembly in the
JLab Test Lab building. The CAD assembly drawing of the experiment is shown in Figure~\ref{fig:CAD_of_Qweak} and
continues to be detailed as sub-systems are engineered and delivered to the laboratory.
\begin{figure}[h]
\vspace*{0.2cm}
\begin{center}
\includegraphics[width=16.5cm,angle=0]{CAD_Qweak.eps}
\caption{{\em CAD layout drawing of the $Q_{weak}\;$ experiment without shielding.}}
\label{fig:CAD_of_Qweak}
\end{center}
\end{figure}
\subsection{Collimators and Shielding}
\begin{figure}[h!]
\vspace*{0.2cm}
\begin{center}
\includegraphics[width=12cm,angle=0]{CAD_Collimators.eps}
\caption{{\em Close up of the $Q_{weak}\;$ triple collimator system.}}
\label{fig:CAD_of_Collimators}
\end{center}
\end{figure}
Figure~\ref{fig:CAD_of_Collimators} shows a close up view of the collimator assembly, but without the transverse
concrete shielding that will form a shielding vault between the first and second collimator assemblies. The
design of collimator support / adjustment structures are also shown. The defining collimator will be located
directly downstream from the concrete shielding vault. There will be small gaps between the vault and primary
collimator to allow access for air core light pipes from an upstream luminosity monitor which will be mounted on
the lower upstream face of the primary collimator.
Line-of-sight background particles from the $LH_2$ target region to the quartz detectors are blocked primarily
by the first and last collimator bodies. Line-of-sight backgrounds generated from the tungsten collar which is
located around the beam pipe at the first clean-up collimator must penetrate two Pb secondary collimator bodies
to reach the quartz detectors. This tungsten primary beam collimator is necessary to shield the QTOR magnet
region from small angle target scattering. The maximum beam pipe diameter through the QTOR magnet is limited by
the magnet design and kinematics constraints. Of the potential additional large angle backgrounds generated by
the tungsten, neutrons are the most penetrating.
Background simulations have been performed for the basic experiment geometry shown above plus the beamline to
assess secondary electromagnetic backgrounds. A simplified model to study only the neutron source terms has also
been run. This work is ongoing by the $Q_{weak}\;$ simulation group and the JLab RADCON group, respectively; it is
presently anticipated that backgrounds will be within acceptable limits. The detailed configuration of
transverse shielding around the target area and first cleanup through the primary collimator region required to
achieve optimal shielding from a site boundary perspective is still under investigation by the JLab RADCON
group.
After obtaining budgetary quotes from a number of vendors and further discussions with our simulation group, the
decision was made to construct all collimators from a Pb alloy containing 5\% Sb (a hardening agent).
Because of their smaller dimensions, the central defining and upstream clean-up collimators can be affordably
cut using precision electrical discharge machining (EDM). This is the most cost-effective high precision
manufacturing technique we have found and should allow tolerances approaching 130 microns on all critical
surfaces. These mechanical tolerances are conservative, based on our models of how false asymmetries can be
generated by geometrical misalignments and helicity correlated beam properties. The final, large clean-up
collimator will be cast, as it does not require precision tolerances, and its support structure will be
relatively simple. Preliminary manufacturing prints for all collimator assemblies have been sent out for
budgetary quotes. The latest round of these quotes indicates that we should be able to stay within our budget
goals for the collimators and achieve or do better than all required tolerances.
\section{Data Acquisition}
\label{DAQ}
The $Q_{weak}\;$ experiment requires two distinct modes of data acquisition: the current mode measurement of the
quartz bar signals, and the low current tracking mode measurements in which individual particles will trigger
the DAQ. These two DAQ schemes will be implemented as two essentially independent systems with separate crates
and DAQ/analysis software with some sharing of beam line instrumentation electronics.
\subsection{Current mode DAQ}
The experimental asymmetry measurements will be made with the current mode data acquisition. The core of this
system is the readout of the TRIUMF ADC modules. These ADC modules integrate the current from each quartz bar
photomultiplier tube. In normal operation, the ADC's allow a four-fold oversampling of the planned $250~{\rm
Hz}$ helicity readout (4~ms helicity windows), and should allow the same oversampling of a higher helicity
readout rate of up to $1000~{\rm Hz}$ (1~ms helicity windows). With additional timing signals from the polarized
source, the DAQ would be able to take events with greater than four-fold oversampling of the helicity windows.
In addition to digitizing the current from the quartz bar PMT's and several shielded background detectors, the
same type of ADC will be used to digitize information from both the injector and Hall C beam line monitors. This
beam line information will include charge cavities, BPM's and luminosity monitors. The ADC's for these signals
will be located in a separate crate so that any small helicity correlations in beam parameter signals will not
be present in the same crate as the detector ADC's.
Since the previous proposal submission, a quartz scanner has been added to the experiment, to allow measurements
of the scattered electron profile at the full beam current. The two PMT's of the quartz scanner will be
instrumented by both a scaler counting discriminated pulses and by a scaler counting pulses from a
voltage-to-frequency converter to allow charge integration of the PMT signals over the measurement interval. A
small subset of the pulses in the quartz scanner will be measured with a conventional ADC to monitor the gain of
the PMT's. The PMT's will have additional instrumentation as part of the low-current mode DAQ, which will be
described in the following section.
The rate and volume of data for current mode acquisition is comparable to what has been demonstrated to work
with the typical DAQ and analysis capabilities. In the four-fold oversampling mode, each ADC channel produces
six 32-bit data words per event. With a total of 168 channels (for the detector, injector beam line, and Hall C
beam line), and allowing a 50\% overhead for headers, an event size of 6048 bytes is estimated.
With a readout rate of $250~{\rm Hz}$, the $Q_{weak}\;$ data rate will be about 1500 kBytes/second, comparable to the
data rate of the G0 forward angle DAQ, in which the DAQ was able to easily operate with 0\% deadtime. At this
rate, a 2200 hour run would produce a data set of about 12 TB. Scaling from G0 analysis rates, an analysis on
this data set using a single fast CPU would take about 3 months.
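The event-size and data-volume numbers above follow from simple bookkeeping; a sketch:
\begin{verbatim}
# Event size and data volume for the current-mode DAQ.
channels, words, bytes_per_word = 168, 6, 4
event_bytes = channels * words * bytes_per_word * 1.5   # 50% header overhead
rate_hz, run_hours = 250, 2200
print(event_bytes)                                       # 6048 bytes
print(event_bytes * rate_hz / 1e3, "kB/s")               # ~1500 kB/s
print(event_bytes * rate_hz * run_hours * 3600 / 1e12, "TB")   # ~12 TB
\end{verbatim}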
A readout rate of $ 1000~{\rm Hz}$ would lead to a correspondingly higher data rate and may require using a
subset of the readout channels. Data acquisition test stands have been in use at Ohio University and at JLab
since early 2007. The TRIUMF ADC module performance has been tested at the planned $250~{\rm Hz}$ helicity
reversal rate, and limited tests have been performed at rates of up to $1000~{\rm Hz}$. Several minor issues in
the module readout have been identified and corrected during these tests. Figure~\ref{F:VQWK_asymmetry} shows
the distribution of quartet asymmetries during an 80-minute battery test at Ohio University. The battery was
used to generate a 6~$\mu$A input current for a TRIUMF preamplifier, which produced a 4.82~V input signal for
the ADC (6~V above the baseline). The ADC accumulated 2000 voltage samples over four 4~ms integration windows
each separated by a holdoff of 0.2~ms, for a quartet measurement interval of 16.8~ms. The sigma of the asymmetry
of 2.3~ppm corresponds to a sigma on the voltage of 11~$\mu$V over the 16.8~ms of the quartet integration,
yielding 1.4~$\mu\mathrm{V}/\sqrt{\mathrm{Hz}}$, or 240~ppb/$\sqrt{\mathrm{Hz}}$, as compared to the 6~V
signal. As a test of the module's performance in the hall environment, it will be connected to beam line
instrumentation channels during beam tests at JLab in January 2008.
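The quoted noise figures can be reproduced as follows (a sketch; the conversion to noise density uses the $\sigma\sqrt{T}$ convention for white noise):
\begin{verbatim}
# Reproduce the battery-test noise figures.
import math

sigma_A, V_sig = 2.3e-6, 4.82       # quartet asymmetry width; signal (V)
T = 16.8e-3                         # quartet integration time (s)

sigma_V = sigma_A * V_sig           # ~11 uV
asd = sigma_V * math.sqrt(T)        # ~1.4 uV/sqrt(Hz)
print(sigma_V * 1e6, asd * 1e6, asd / 6.0 * 1e9)  # uV, uV/rtHz, ppb/rtHz
\end{verbatim}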
\begin{figure}[tbh]
\vspace*{0.3cm} \centering
\includegraphics[width=0.75\textwidth]{run182_chan8_asym.eps}
\caption{{\em Battery test asymmetry distribution for ($+--+$) quartets with a 4~ms integration window and
4.2~ms gate period \label{F:VQWK_asymmetry}}}
\end{figure}
\subsection{Low current tracking DAQ}
The $Q_{weak}\;${} apparatus will be partially instrumented with tracking detectors in order to study optics and
acceptance. Measurements with the tracking system will be done at low beam current, so that individual particles
can be tracked through the magnet. For this mode of measurement, the quartz bar photomultiplier tubes will be
instrumented with parallel electronics so that the timing and amplitude of individual particles can be recorded.
The tracking data acquisition will operate like a conventional DAQ, triggering on individual particles. The
front end electronics will be all VME, using the JLAB F1TDC for wire chambers and timing signals, and commercial
VME ADC modules for the GEM detectors and PMT amplitudes. As the hardware needed for tracking measurements is
different from the current mode hardware, the tracking DAQ can be operated as a distinct system, allowing
development of the two DAQ modes to proceed in parallel. The tracking DAQ will have the option of reading beam
line information from the same VME crate used for this purpose in the current mode DAQ. The quartz scanner PMT's
will be instrumented with ADC and TDC channels during the low current running, in addition to the integrating
electronics described in the previous section. This will allow tracking of events which hit the scanner, and
will allow comparison of the integrating mode readout at low and high currents.
\subsection{Beam Feedback}
A real-time analysis, similar to that used for G0, is planned. In addition to providing prompt diagnostic
information, this analysis will calculate helicity correlated beam properties such as current, position and
energy. The results of these calculations can be used for feedback on the beam.
\section{Detector System and Low Noise Electronics}
The $Q_{weak}\;$ main detector collects \v{C}erenkov light produced by electrons passing through thin, fused silica
(synthetic quartz) radiators. After many bounces, photons reach the ends of the rectangular bars by total
internal reflection, and are then collected by 5'' PMT's with UV-transmitting windows. The distribution of
elastic and inelastic tracks at the focal plane is shown in Figure \ref{MDenvelope}. While the signal during
parity violation measurements is the DC anode current, for background and systematic checks at very low
luminosities we will increase the PMT gains for pulsed-mode data taking.
\begin{figure}[h]
\begin{center}
\rotatebox{0.}{\resizebox{6.0in}{2.5in}{\includegraphics{Juliette_dotplot_BFIL104.eps}}}
\end{center}
\caption{{\em Dotplot showing the approximate distributions of the elastic (blue) and inelastic (red) events
with respect to a radiator bar. The properly weighted fraction of inelastic to elastic tracks on a bar is
0.04\%.}} \label{MDenvelope}
\end{figure}
In the following sections, we summarize progress since our last Jeopardy proposal on the design and construction
of the optical assemblies, the PMT and voltage dividers, and our understanding of the detector's expected
performance.
\subsection{Progress on the Optical Assembly}
\paragraph*{Final Radiator Specifications:}
We limited the radiator length to 200 cm to allow all the detectors to lie in a single plane without
interference between adjacent octants. But because a single 200 cm long bar would have cost 4 times as much as a
single 100 cm long bar, we ordered pairs of 100 cm long bars which will have to be glued.
The quartz bar thickness was carefully optimized to minimize excess noise. Excess noise is a scale factor which
multiplies the statistical error of the experiment calculated assuming that all electrons are detected with the
same weight. If this factor is 1.01, for example, then the experiment would have to run 2\% longer (or more
efficiently) to achieve the same statistical error as if there were no excess noise, since the required running time scales as the square of the error.
radiators that are too thin have poor resolution due to low average photoelectron yields, while radiators that
are too thick have poor resolution due to shower fluctuations. (Low light production from electrons traversing
edges of the bars was also taken into account.) We ordered bars of 1.25 cm thickness, half the thickness of our
prototype bars, because this was near the minimum excess noise of 3.8\% \cite{Gericke}. The width in the bend
direction was also changed from 16 cm to 18 cm to
capture more of the elastic beam envelope.
The net effect of these changes in radiator geometry allowed us to reduce the material costs by 50\% while
keeping the error bar on $Q_W^{p}\;$ constant. The final dimensions for one active radiator are 200 cm x 18 cm x 1.25
cm. To keep the PMT's away from the scattered beam envelope, UV-transmitting lightguides of dimensions 18 cm x
18 cm x 1.25 cm are attached to each end. A complete optical assembly for one octant of the spectrometer
therefore has dimensions of 236 cm x 18 cm x 1.25 cm. Parameters of the optical assemblies are summarized in
Table \ref{BARspecs}.
\begin{table}[h]
\centering \caption{\label{BARspecs}{\em Updated parameters for the optical assembly. The bar tilt angle is
measured with respect to the vertical.}} \small \vspace*{0.2cm}
\begin{tabular}{c|c}
Parameter & Value \\ \hline
shape & rectangular solid \\
radiator size & 200 cm (L) x 18 cm (W) x 1.25 cm (T) \\
optical assembly size & 236 cm (L) x 18 cm (W) x 1.25 cm (T) \\
radiator material & fused silica: Spectrosil 2000 \\
lightguide material & fused silica: JGS1-UV \\
glue & Shin-Etsu Silicones SES406 \\
expected excess noise & 3.8\% \\
detector position & Z = 570 cm downstream of the magnet center \\
& R = 319 cm from the beam axis (inner edge) \\
bar tilt angle & 0 degrees \\ \hline \hline
\end{tabular}
\end{table}
\normalsize
\paragraph*{Procurement and Quality Control:}
The radiator bars were procured from St.\ Gobain Quartz. Delivery was completed in fall,
2006. The bars are made of Spectrosil 2000, which is an artificial fused silica with low fluorescence and
excellent radiation hardness due to the very low concentration of impurities such as iron. The ingot foundry for
Spectrosil 2000, as well as the polishing subcontractor, are in England.
About 2/3 of the \$300K cost was for labor for the optical grade polish needed to ensure a total internal
reflection coefficient of 0.996. Overall dimensions, flatness, and polish quality appear to be within
specifications\cite{Elliott}. The bevels on some bars were occasionally wider than even our relaxed
specification of 1 mm $\pm$ 0.5 mm, but simulations showed that the resulting loss of photoelectrons would be
modest.
The lightguides were procured from Scionix and delivery was completed in early 2007.
They are made of a Chinese brand of artificial fused silica termed JGS1-UV. According to BaBar DIRC group
tests, this brand is equal or superior in quality to Spectrosil 2000 for the small pieces that were
tested. The delivered lightguides have tiny scattered pits, but these occupy too small a fraction of the surface
area to cause observable deterioration in performance.
Scintillation in the radiator bars could potentially cause a dilution of the elastic $e^-$ light yield from the
absorption of x-ray backgrounds. To verify that our batch of Spectrosil 2000 had the expected low scintillation
coefficient, we had to expand upon a technique previously used at SLAC. The basic idea\cite{SLAC}
was to use a 300
$\mu$Ci $^{55}$Fe source, which produces a 6 keV x-ray. Because this x-ray energy is far too low to produce
\v{C}erenkov radiation via Compton scattering, any prompt light production by these x-rays must be due to
scintillation. Unfortunately, source-in minus source-out rate measurements did not produce results which were
precise enough for our needs, due to instability in the dark rate of the PMT. We dramatically improved our
sensitivity by chopping the x-rays and detecting the rate modulation in a spectrum analyzer. After normalizing
the result, we found a non-zero scintillation coefficient of order 0.01 photons/MeV\cite{Mack}. Combining this
coefficient with the simulated x-ray background spectrum, the dilution of our PV signal from scintillation
should be negligible even if the background were several orders of magnitude larger. However, we are not reliant
on simulations for the background spectrum, as we will measure it directly during the experiment.
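As a simple cross-check of the statement that these x-rays cannot produce \v{C}erenkov light, one can estimate
the \v{C}erenkov threshold for electrons in fused silica from the refractive index. The short Python sketch below
is purely illustrative and assumes a nominal index of $n \approx 1.46$ (the actual index varies with wavelength):
\begin{verbatim}
# Cerenkov threshold for electrons in fused silica (illustrative estimate).
import math

m_e = 0.511   # electron rest mass (MeV)
n   = 1.46    # refractive index of fused silica (assumed nominal value)

beta_thr  = 1.0 / n                             # Cerenkov condition: beta > 1/n
gamma_thr = 1.0 / math.sqrt(1.0 - beta_thr**2)
ke_thr    = m_e * (gamma_thr - 1.0)             # threshold kinetic energy

print(f"threshold KE = {ke_thr*1000:.0f} keV")  # ~190 keV, far above 6 keV
\end{verbatim}
The resulting threshold of roughly 190 keV is more than a factor of 30 above the 6 keV x-ray energy, and the
Compton electrons carry only a fraction of that.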
\begin{figure}[h]
\begin{center}
\rotatebox{0.}{\resizebox{2.5in}{3.75in}{\includegraphics{glue_jig.eps}}}
\end{center}
\caption{{\em Gluing jig in the vertical position. The height is approximately 2.5 meters.}} \label{gluejig}
\end{figure}
\paragraph*{Gluing:}
Our central glue joint has to be reasonably strong and UV transparent even after a dose
of 100 kRad. After considering several glues, we found that Shin-Etsu Silicones SES-406 cured into a tough
material which adhered extremely well to even our ultra-smooth quartz surfaces\cite{Patrick}. Spectrophotometer
measurements were then made of the transmission through two glued slides of Spectrosil 2000, over the wavelength
range 250-500 nm.
The glue joint was found to absorb less than 2\% of the light below 350 nm, and was completely
transparent above 350 nm. Since the light only crosses 2-3 glue joints in the optical assembly (the PMTs will
also be glued to the lightguides), this degree of transmission is very satisfactory.
We then tested whether the light transmission would suffer radiation damage during the experiment. The glued
slides and several experimental controls were sent to Nuclear Services at NC State for irradiation to 100 kRad.
On return of the samples, we found no additional loss in transmission. We then increased the total dose to 1.1
MRad, to make sure the glue would still pose no problem if a pre-radiator were used. Again, no deterioration in
transmission was seen at the level of $\pm$0.1\%. These tests were completed in August 2007, and are partially
summarized in Reference \cite{Katie}.
Because the main detector must transmit photons down to 250 nm in wavelength, we could not use convenient,
quick-hardening UV-catalyzed glues. We therefore designed and built a gluing jig to hold the long quartz pieces
in alignment during a 24 hour curing period, as illustrated in Figure \ref{gluejig}. Our first attempts used
full-scale plastic models of the quartz pieces. After modifying our procedures and our equipment to minimize
potential damage to the expensive fused silica elements, we finally began gluing quartz in mid-October, 2007.
When our Yerevan collaborators return to JLab, we should be able to finish one complete optical assembly every
few days. Since we are only manufacturing a total of 9 assemblies (8 for production data-taking and one hot
spare), completion of all optical assemblies will take less than one month.
\subsection{Progress on PMT's}
Updated parameters for the PMT signal chain are given in Table \ref{PMTspecs}. The few changes from the last
proposal are due to our production bars being half as thick as the prototype bars, hence dropping the
photoelectron yield by a factor of 2. The nominal gains were increased by the same factor to keep the signal
magnitudes into the ADC approximately the same.
\begin{table}[h]
\centering \caption{\label{PMTspecs}{\em Updated parameters for the PMT signal chain. }} \small
\begin{tabular}{c|c}
& \\
{\bf Parameter } & {\bf Value} \\ \hline
{\bf current mode:} & \\
$I_{cathode}$ & 3 nA \\
gain & 2000 \\
$I_{anode}$ & 6 $\mu$A \\
non-linearity (achieved) & 5$\times10^{-3}$ \\
& \\
{\bf pulsed mode:} & \\
$I_{cathode}$ & 3.2 pA at 1 MHz \\
gain & 2$\times 10^6$ \\
$I_{anode}$ & 6.4 $\mu$A \\
$V_{signal}$ (with x10 amp.)
& 16 mV for 1 pe; \hspace{.25cm} 320 mV for 20 pe \\
non-linearity (goal) & $<$ $10^{-2}$ \\ \hline \hline
\end{tabular}
\end{table}
\normalsize
\paragraph*{Tube Procurement and Quality Control:}
The Electron Tubes D753WKB is a short, low-cost PMT with a 5'' diameter UVT glass window which allows detection
of UV \v{C}erenkov photons down to an effective low wavelength cutoff of 250 nm. These tubes have SbCs dynodes for
improved dynode stability, and custom S20 cathodes to minimize potential nonlinearities at our high cathode
current of 3 nA. These high cathode currents limit the maximum PMT gain to ${\cal O}(1000)$. The 6 $\mu$A signals
are converted to 6 V signals using low-noise preamplifiers before being sent on cables to upstairs ADC's. After
a painless procurement, all 28 PMT's had arrived by summer 2005\cite{Mack2} and were all checked for basic
functionality by January 2006\cite{Gericke2}.
\paragraph*{Voltage Dividers:}
At this time, the design of our current mode divider is complete and ready for procurement. We have a promising
pulsed-mode divider design but need to do one more test before it can be finalized. Details are discussed below.
\underline{Current-Mode Divider}
Our current-mode divider has to provide the nominal gain of 2000 with low noise, high linearity, and good
operational flexibility. After some initial modelling using test-ticket parameters, we began using the 5th
dynode as an anode in an attempt to optimize the linearity at this unusually low gain. Techniques for measuring
gain and linearity were developed, several prototypes were studied\cite{Mitchell}, and a satisfactory 7-stage
design was finally selected at the end of summer 2007.
The left panel of Figure \ref{PMTperformance} shows that although the nominal gain is 2000, we can vary the gain
from 500 to 16,000. This huge dynamic range will allow considerable freedom in remotely adjusting the gain.
The dark current for the 7-stage design is shown on the right panel of Figure \ref{PMTperformance}. The
corresponding signal dilution is at most 0.05\%, and possibly negligible if the dark current is stable enough to
be treated as part of the ADC pedestal.
Preliminary measurements show the nonlinearity to be 0.5\% at the nominal 6 $\mu$A operating load. Although this
is higher than our goal of 0.1\%\cite{Mack3}, it is acceptable since the most important corrections (distortion
of the charge and physics asymmetries) would still be relatively small.
\begin{figure}[h]
\begin{center}
\subfigure[]{\includegraphics[width=0.48\textwidth]{newgain.eps}}
\subfigure[]{\includegraphics[width=0.48\textwidth]{newdark.eps}}
\end{center}
\caption{{\em Left: Gain vs Voltage for 5- and 7-stage prototypes. Right: Dark Current vs Voltage for 5- and
7-stage prototypes. (In both cases, the green marks the operating range for the chosen 7-stage design. The PMT
selected for these measurements has a representative performance.)} }\label{PMTperformance}
\end{figure}
\underline{Pulsed-Mode Divider}
Our high-gain dividers will only be used for event-mode tests at very low luminosities.
One important application will be to provide discriminated radiator signals during tracking-based acceptance
studies. Another important application will be in bias-free background studies in which we use flash ADC's to
acquire 1 second long buffers of 100\% live radiator signal history. Because some backgrounds will take the form
of single photoelectron pulses, it is important that we be able to resolve single photoelectrons in our
event-mode ADC's. The gain of our 10-stage PMT's is too low for this purpose ($2\times 10^{6}$), so the anode
signals will be amplified with external PS777 units owned by Hall C. A batch of noisy zener diodes slowed down
our prototyping efforts at the beginning of summer 2007, but by the end of the summer we had both good cosmic
ray pulses and a quiet baseline. Once we have proven that we can resolve single photoelectrons in the upstairs
counting house, we will begin procurement of these dividers as well.
\subsection{New Detector Performance Studies}
\paragraph*{Optimizing the Radiator Tilt Angle:}
While both PMT's detect some light from every track, the average number of photoelectrons and the uniformity of
the sum of the two ends are sensitive to the tilt angle as defined in the left panel of Figure \ref{PEvsTilt}. To
optimize the tilt angle, we used a GEANT model which was previously benchmarked using cosmic and in-beam test
data with half-size prototypes\cite{Carlini}.
\begin{figure}[h!]
\begin{center}
\rotatebox{0.}{\resizebox{6.in}{2.5in}{\includegraphics{PE_vs_Tilt_2plots.eps}}}
\end{center}
\caption{{\em Left: Definition of the detector tilt angle. Right: Simulation of photoelectron number versus bar
longitudinal coordinate for different tilt angles.}} \label{PEvsTilt}
\end{figure}
Results of the simulation are shown in the right panel of Figure \ref{PEvsTilt}, in which the average
photoelectron number is plotted versus a coordinate along the length of the bar for several tilt angles. When the electrons are
close to normal incidence on the bar (about 23 degrees in this coordinate system), one obtains the highest
average photoelectron number. This position also minimizes the excess noise, but the uniformity of the
collection is not good. The best uniformity, with an adequate average photoelectron number and only 1.5\% higher
excess noise, is found when the radiator is facing nearly
vertically (0 degrees). We chose this more uniform
configuration to control the magnitude of a systematic correction discussed immediately below.
\paragraph*{Detector Bias:}
This section deals with the somewhat subtle issue of detector biases in a precision, integrating experiment that
determines the PV asymmetry from an average over the $Q^2$ acceptance of the apparatus, weighted by the light
yield in the \v{C}erenkov detectors. From the left panel of Figure \ref{Q2bias}, one sees that lower $Q^2$ events
are focused more toward the central half of the detector bar. Because the parity violating asymmetry is
approximately proportional to $Q^2$, this means that any nonuniformity in the detector response along the bar
will bias the asymmetry from the average value a naive Monte Carlo would predict if it were weighted by the
number of scattering events. We will correct this by measuring the detector response during the experiment
using the Region III chambers. Here we estimate the magnitude of the effect.
\begin{figure}[h!]
\begin{center}
\subfigure[]{\includegraphics[width=0.53\textwidth]{left_870.eps}}
\subfigure[]{\includegraphics[width=0.44\textwidth]{right_2.eps}}
\end{center}
\caption{{\em Left panel: At a point 3 meters downstream from the $Q^2$ focus on the main detector, it is easy
to see that the lower $Q^2$ events are more focused toward the center half of the radiator. Right panel:
Simulation of photoelectron number versus bar longitudinal coordinate for the nominal tilt angle of zero
degrees. Tracks near the ends of the bars receive about 5\% higher weight than those near the center of the
bars. }} \label{Q2bias}
\end{figure}
For the nominal tilt angle of 0 degrees, the predicted average photoelectron number versus position along the
bar is given in the right panel of Figure \ref{Q2bias}. The response is uniform to within 5\%, with slightly
less weight being given to the lower $Q^2$ events which are concentrated in the center of the bars. When we
include this bias in our simulation, we find the detector would measure an asymmetry (or average $Q^2$) which is
2.5\% higher than if the detector bias were not taken into account. However, the contribution to our final error
bar will be negligible since we will accurately map out the detector
response with the Region III chambers. The fact that our detectors are radiation-hard will
greatly simplify this effort.
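The direction of this bias can be seen with a toy calculation: if the lower-$Q^2$ events sit preferentially in
the central half of the bar, where the light yield is a few percent lower, the light-weighted $\langle Q^2
\rangle$ is pulled above the event-weighted value. All numbers in the Python sketch below are invented purely
for illustration; the 2.5\% figure quoted above comes from the full GEANT simulation.
\begin{verbatim}
# Toy illustration (not the experiment's simulation) of light-yield weighting.
# Two event classes with different Q^2 hit regions of different response.
q2_center, q2_ends = 0.022, 0.030   # GeV^2; lower-Q^2 events near bar center
f_center           = 0.5            # fraction of events in the central half
w_center, w_ends   = 1.00, 1.05     # bar ends give ~5% more photoelectrons

unweighted = f_center*q2_center + (1 - f_center)*q2_ends
weighted   = (f_center*w_center*q2_center + (1 - f_center)*w_ends*q2_ends) \
             / (f_center*w_center + (1 - f_center)*w_ends)

# Positive bias: the weighted <Q^2> (hence the measured asymmetry) is higher.
print(f"bias = {100*(weighted/unweighted - 1):+.2f}%")
\end{verbatim}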
\paragraph*{Costs vs Benefits in Using a Pre-radiator:}
The decision of whether to use a pre-radiator involves a trade-off between statistical and systematic errors.
For a thin electron detector as we have chosen for $Q_{weak}\;$, soft backgrounds can be reduced by using a
pre-radiator. Not only would the pre-radiator amplify the electron signal by showering (Figure \ref{planB} left
panel), but it would also attenuate soft background. However, while the Signal/Background ratio would definitely
improve by at least an order of magnitude, shower fluctuations would lead to an increase in the statistical
error of the experiment. Here we estimate the cost of using a pre-radiator, expressed in units of lost beam
hours.
\begin{figure}[h]
\begin{center}
\rotatebox{0.}{\resizebox{6.in}{2.5in}{\includegraphics{preradiator.eps}}}
\end{center}
\caption{{\em Left panel: Simulation showing shower production by a lead pre-radiator located before the fused
silica \v{C}erenkov radiator. Right panel: Excess noise versus thickness of the pre-radiator, showing a minimum of
12\% for an optimal ``shower-max'' pre-radiator consisting of 2 cm of lead. }} \label{planB}
\end{figure}
We simulated the excess noise as a function of pre-radiator thickness as shown in the right panel of Figure
\ref{planB}. The fluctuations are minimized for a 2 cm thickness of lead corresponding to shower-max at our beam
energy. The minimum excess noise with an optimal pre-radiator is 12\%, which would represent a loss of about 350
beam hours when compared to our nominal detector with no pre-radiator.\footnote{The excess noise would be
smaller at significantly higher beam energies.}
In conclusion, since our expected soft backgrounds are only a few times 0.1\%, and the cost of using a
pre-radiator in beam hours is significant, we do not plan to use a pre-radiator. However, the detector housing
will contain mounting brackets for the heavy lead panels in case they are needed.
\subsection{The Q$_{weak}$ Current Mode Electronics}
\paragraph*{Preamplifiers:}
All the TRIUMF current-to-voltage preamplifiers have now been made and tested. Two versions have been prepared;
we have 14 ``Main-style'' at JLab with a gain selection of $V_{out}/I_{in}$ = 0.5, 1, 2, or 4 M$\Omega$, and 14
``Lumi-style'' at Virginia Tech with a gain selection of $V_{out}/I_{in}$ = 0.5, 1, 25, or 50 M$\Omega$.
The preamplifiers were tested at JLab for radiation hardness. No changes in the gain, noise, or DC level were
noticed after 18 krad integrated dose. This easily meets the experimental specification of no deterioration
after 1 krad. The amplifiers were also used during a test run with the G0 lumi detectors in March, 2007.
\paragraph*{Digital Integrators:}
The TRIUMF current mode electronics consists of the low noise current-to-voltage preamplifiers followed by
digital integrators. The integrators are triggered at the start of each spin state and integrate for a precise
pre-set spin duration. The system clock of all the digital integrators will be slaved to the same 20 MHz clock
used to generate the spin sequence at the electron source. Figure \ref{fig:VME-int} shows the layout of an
8-channel digital integrator. When triggered, the device integrates all the input signals for the preset time.
The integration time and many other parameters can be set through the VME bus.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=110mm]{VME-integrator-layout.eps}\\[2mm]
\caption{{\em Layout of the VME digital integrator. The analog signals from the 8 inputs first pass through
sharp cutoff 50 kHz anti-aliasing filters then are digitized by 18-bit ADCs operating at up to 500 ksps. The
Field Programmable Gate Array (FPGA) calculates the sums over the selected interval and delivers the results to
the VME bus. The outputs are 32-bit words, allowing integrals as long as 30 ms at 500 ksps. }}
\end{center}
\label{fig:VME-int}
\end{figure}
Internally, the analog signals to be integrated first pass through sharp cutoff 50 kHz anti-aliasing filters
then are digitized by 18-bit ADCs operating at up to 500 kilosamples per second (ksps). The Field Programmable
Gate Array (FPGA) calculates the sums over the selected interval and delivers the results to the VME bus. Since
the 18 bit ADC digitizes each sample as one of $2^{18}$ possible codes, a 1 ms integral at 500 ksps has almost
$2^{27}$ possible values, and quantization noise on the integral is negligible compared to other sources of
noise.
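The counting behind this statement is elementary; the short Python sketch below simply reproduces the
$\sim 2^{27}$ figure from the sampling rate, integration time, and ADC resolution quoted above.
\begin{verbatim}
import math

n_bits = 18       # ADC resolution (bits)
f_s    = 500e3    # sampling rate (samples per second)
T      = 1e-3     # integration time (s)

n_samples = int(f_s * T)                       # 500 samples in 1 ms
n_codes   = n_samples * (2**n_bits - 1) + 1    # distinct values of the sum

print(f"integral spans ~2^{math.log2(n_codes):.1f} codes")   # ~2^27
\end{verbatim}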
Four prototypes of the VME digital integrator are now ready. One has been tested at TRIUMF and is now at
Ohio University undergoing further tests with a realistic Q$_{weak}$ DAQ system. The other three are at TRIUMF
and will be delivered to JLab and Ohio for further testing. We hope to have these tests complete by the end of
2007, at which time TRIUMF will proceed with building of the remaining integrators.
\paragraph*{Short Spin States:}
The heat load on the $Q_{weak}\;$ target will be over 2000 Watts. Great care has been taken in the target design to
suppress boiling at high current. These efforts will be complemented by a data acquisition strategy designed to
minimize the effect of target density fluctuations on the asymmetry widths. Since the noise from target boiling
falls off at higher frequencies, the experiment now plans to use very short spin states, perhaps as short as 1
ms per spin state. In such cases it will be important that the time taken to settle on a new spin state be very
short; the JLab injector group has indicated that less than 0.1 ms can likely be achieved.
In the past, we planned to read out beamline instrumentation such as beam position monitors (BPMs) and beam
current monitors (BCMs) with the existing voltage-to-frequency converters. In the event of very short spin
states, however, the least count error on the VFCs would be excessive. For this reason we now plan to replace
the VFCs with TRIUMF digital integrators. In addition to the 14 on order for the main experiment, 16 modules
have been ordered for Hall-C instrumentation and 6 modules for the injector.
\paragraph*{Noise:}
Tests of the 1 M$\Omega$ preamplifiers with 200 pf capacitance input cable and a 50 kHz sharp-cutoff filter on
the output showed noise of 70 $\mu$V$_{rms}$ to 80 $\mu$V$_{rms}$, corresponding to a density of less than 0.4
$\mu$V$/\sqrt{Hz}$. Tests of a prototype VME integrator with the inputs terminated were made at TRIUMF. The
noise on a 2 ms integral was 11 $\mu$V$_{rms}$, implying an effective noise of 0.7 $\mu$V$/\sqrt{Hz}$ at the
integrator input.
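These rms figures and spectral densities are related through the effective noise bandwidth: 50 kHz for the
sharp-cutoff filter, and $1/(2T)$ for a boxcar integral of length $T$. The back-of-envelope Python sketch below
reproduces the quoted numbers under exactly those assumptions.
\begin{verbatim}
import math

# Preamplifier: 80 uV rms through a sharp 50 kHz filter -> spectral density.
density_preamp = 80e-6 / math.sqrt(50e3)       # ~0.36 uV/sqrt(Hz)

# Integrator: 11 uV rms on a 2 ms integral; boxcar noise bandwidth = 1/(2T).
T = 2e-3
density_int = 11e-6 / math.sqrt(1.0 / (2*T))   # ~0.70 uV/sqrt(Hz)

print(f"preamplifier: {density_preamp*1e6:.2f} uV/sqrt(Hz)")
print(f"integrator:   {density_int*1e6:.2f} uV/sqrt(Hz)")
\end{verbatim}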
Figure \ref{fig:signals} shows the nature of the $Q_{weak}\;$ current-mode signals. During primary data-taking, the
6.4 $\mu$A current from the photomultiplier tube anode is made up of rather large charge quanta of 50,000$e$
that set the shot noise. Table \ref{noisetable} compares the shot noise under various running conditions to the
purely electronic noise. The beam-ON case assumes a count rate of 800 MHz, with 20 photoelectrons per event and
a noiseless PMT gain of 2500, giving an anode current of 6.4 $\mu$A. For the ``LED'' and ``lowest possible''
cases, it is assumed that the current into the preamplifier is still 6.4 $\mu$A, and that it is delivered to an
I to V preamplifier with $V_{out}/I_{in}$ = 1 M$\Omega$. The purely electronic noise is much smaller than it
needs to be during running conditions, but low noise is valuable in that it permits zero-asymmetry control
measurements using current sources to be made at the part per billion level in a relatively short time ($<$1
day).
\begin{figure}[h]
\begin{center}
\includegraphics[width=110mm]{Qweak-signals-3.eps}\\[2mm]
\caption{{\em Nature of the $Q_{weak}\;$ signals. During primary data-taking, the 6.4 $\mu$A current from the
photomultiplier tube anode is made up of rather large charge quanta of 50,000$e$. }} \label{fig:signals}
\end{center}
\end{figure}
\begin{table}[ht]
\begin{center}
\caption{Noise at integrator input for 6.4 $\mu$A from different sources, assuming a preamplifier with
$V_{out}/I_{in}$ = 1 M$\Omega$. The ppm column is expressed as a fraction of the 6.4 V full-scale signal.}
\begin{tabular*}{150mm}{@{\extracolsep{\fill}}lccr@{}}
\hline\hline
Condition & charge quantum & noise & noise on 1 ms \\
& (e) & ($\mu$V/$\sqrt{Hz}$) & integral (ppm) \\
\hline
beam-ON shot noise & 50000 & 320 & 1120 \\
shot noise during LED tests & 2500 & 72 & 250 \\
lowest possible shot noise on 6.4 $\mu$A & 1 & 1.4 & 5 \\
preamplifier noise & & 0.4 & 1.2 \\
digital integrator noise & & 0.7 & 2.4 \\
\hline\hline
\end{tabular*}
\end{center}
\label{noisetable}
\end{table}
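The entries in Table \ref{noisetable} follow from the standard shot-noise formula $i_n = \sqrt{2\,q_{eff}\,I}$,
with $q_{eff}$ the effective charge quantum, converted to a voltage density by the 1 M$\Omega$ transimpedance
and to ppm on a 1 ms integral using a noise bandwidth of $1/(2T)$. The Python sketch below cross-checks the
table under exactly those assumptions.
\begin{verbatim}
import math

e  = 1.602e-19   # electron charge (C)
I  = 6.4e-6      # anode current (A)
R  = 1e6         # preamplifier transimpedance (ohm)
T  = 1e-3        # integration time (s)
BW = 1.0/(2*T)   # boxcar noise bandwidth (Hz)
V0 = I * R       # full-scale signal, 6.4 V

for label, quantum in [("beam-ON", 50000), ("LED", 2500), ("single e", 1)]:
    i_n = math.sqrt(2 * quantum*e * I)      # shot noise, A/sqrt(Hz)
    v_n = i_n * R                           # V/sqrt(Hz)
    ppm = v_n * math.sqrt(BW) / V0 * 1e6    # ppm on the 1 ms integral
    print(f"{label:9s}: {v_n*1e6:6.1f} uV/sqrt(Hz), {ppm:5.0f} ppm")
\end{verbatim}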
\paragraph*{Modulated Current Source:}
To assist in testing our full data acquisition system, TRIUMF is designing a Modulated Current Source. The
source will provide a reference current of nominally 5 $\mu$A, upon which a very small modulation is
superimposed to simulate the parity violating signal. The reference design specifies a switch-selectable choice
of 16 modulations from $10^{-6}$ to $10^{-9}$. These very small currents are formed by applying a voltage ramp
to a small capacitor. The module will respond to external spin state signals, or can run in stand-alone mode. By
placing such a source in Hall C, we will be able to show that we are able to detect a very small modulated
analog signal in the presence of all ambient sources of electronic noise.
\section{Qweak Magnetic Spectrometer}
A key component of the $Q_{weak}\;$ apparatus is the magnetic spectrometer `QTOR', whose toroidal field will focus
elastically scattered electrons onto a set of eight V-shaped synthetic quartz \v{C}erenkov detectors of
rectangular cross section. The axially symmetric acceptance in this geometry is very important because it reduces
the sensitivity to a number of systematic error contributions. A resistive toroidal spectrometer magnet with
water-cooled coils was selected for $Q_{weak}\;${} because of the low cost and inherent reliability relative to a
superconducting solution.
\begin{figure}[h!]
\begin{center}
\vspace*{0.2cm}
\includegraphics[width=150mm]{Karen2.eps}
\caption{{\em $Q_{weak}\;$ spectrometer magnet and G0/TRIUMF field mapper at MIT-Bates. }} \label{fig:magnetpic}
\end{center}
\end{figure}
The coil geometry was optimized in a series of simulation studies using GEANT plus numerical integration over
the conductor's current distributions to determine the magnetic field. The simplest and least expensive QTOR
coil design that meets the needs of the $Q_{weak}\;${} experiment is a simple racetrack structure. Each coil package
consists of a double pancake structure, with each layer consisting of two, 2.20 m long straight sections, and
two semicircular curved sections with inner radius 0.235 m and outer radius 0.75 m. The copper conductor has a
cross section of 2.3 in by 1.5 in with a center hole of 0.8 in in diameter. The total DC current under operating
conditions will be 8650 A at 146 V. A GEANT Monte Carlo simulation was used to study the effects of coil
misalignments on the $Q^2$ distribution at the focal plane as well as on the symmetry of the 8-octant system as
required for systematic error reduction. The simulation results have been used to set coil alignment and field
uniformity requirements for the assembly of the spectrometer.
The $Q_{weak}\;$ magnetic spectrometer and support structure were assembled at MIT-Bates in the spring and summer of
2007, as shown in Figure \ref{fig:magnetpic}. After assembly of pairs of pancakes in their coil holders, all
eight coils were placed in the main magnet frame and aligned using the QTOR survey monument system installed in
the assembly hall at MIT-Bates. As the coils were nearly touching along the radial inside edges, the decision
was made to move the coils radially outward by 0.50 inches with respect to the original design. This has some
negative effect on the focal plane image, but simulations have shown that a good focus can be maintained by
slightly increasing the magnetic field strength, within the overhead of the power supply specifications. An
updated field map for QTOR was generated using a custom Biot-Savart numerical code in the spring of 2007. This
new field map incorporates as-built dimensions of each individual coil and also the modified radial coil
positions noted above. The field map covers a very large volume which extends well beyond the physical limits
of the magnet. The full field map has been incorporated in the $Q_{weak}\;$ GEANT simulation package.
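For orientation, the numerical approach is conceptually simple: the field of a current path is accumulated from
the Biot-Savart law over short straight segments. The Python fragment below is a generic illustration of this
idea only; it is not the custom QTOR code, which integrates over the full conductor cross sections and the
as-built coil geometry.
\begin{verbatim}
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability (T m / A)

def biot_savart(points, I, r):
    """B field (T) at point r from a polyline of current I (A)."""
    B = np.zeros(3)
    for a, b in zip(points[:-1], points[1:]):
        dl  = b - a                  # straight segment vector
        mid = 0.5 * (a + b)          # evaluate at the segment midpoint
        rv  = r - mid
        B  += MU0/(4*np.pi) * I * np.cross(dl, rv) / np.linalg.norm(rv)**3
    return B

# Sanity check: circular loop, R = 0.5 m, I = 8650 A; field at the center.
theta = np.linspace(0, 2*np.pi, 721)
loop  = np.c_[0.5*np.cos(theta), 0.5*np.sin(theta), np.zeros_like(theta)]
Bz = biot_savart(loop, 8650.0, np.zeros(3))[2]
print(f"B_z = {Bz*1e3:.2f} mT "
      f"(analytic mu0*I/2R = {MU0*8650/(2*0.5)*1e3:.2f} mT)")
\end{verbatim}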
The next steps in commissioning the $Q_{weak}\;$ spectrometer are to test the magnet at high power and obtain a
precise experimental field map. Once the power supply has been delivered, which is anticipated in December 2007,
it will be connected to the AC power feed, QTOR, and the cooling water. Magnetic field measurements and fine
alignments of the coil positions will then follow. MIT has negotiated a new rate structure with the local power
utility company which will allow us to test the QTOR power supply to the full design current while the power
supply is still under warranty, and will allow the coils to settle into their fully energized positions before
precision field mapping and adjustments to the coil alignments take place. Proceeding in this manner will reduce
the uncertainties during installation in Hall C at JLab with its inherent pressure for time.
\subsection{QTOR Magnetic Verification}
A magnetic field mapping apparatus, built by the Canadian group for the G0 experiment, will be employed to map
the QTOR spectrometer field. Two types of measurements will be made to assess the QTOR magnetic field.
Initially, the spatial current distribution in the eight coil windings will be ascertained by using the
zero-crossing technique developed for G0, as described below, and adjustments will be made to the individual
coil positions as necessary. Subsequently, absolute field strengths will be determined at selected points
along the central electron trajectories to verify that the associated $\int \vec{B}\cdot d\vec{\ell}$ is matched to
the required 0.4\% for all sectors.
For the G0 experiment, an automated field measuring apparatus was used to determine a set of zero-crossing
locations of specific field components at selected points of symmetry of the magnet.
Determination of these zero-crossing points then allowed determination of the actual coil locations and
hence, in principle, the complete specification of the magnetic field. The system is capable of providing an
absolute position determination of $\pm$0.2 mm, and a field determination of $\pm$0.2 Gauss, in order to resolve
a zero-crossing position to within $\pm$0.3 mm. The field mapping system consists of a programmable gantry with
full 3D motion within a (4 x 4 x 2) m$^3$ volume, and a set of high precision Hall probes, thermocouples and
clinometers mounted on the end of a probe boom on the gantry.
The objective of the zero-crossing measurements for Q$_{weak}$ is to determine all coil positions to the
required tolerances of $\pm 1.5$ mm and coil angles to $\pm 0.1^\circ$. The analysis program originally
developed for G0 to extract the coil positions from zero crossing measurement data has recently been reworked.
For G0, the analysis procedure used to extract the coil positions from the measured zero-crossing points was
tested against computer simulations, where known coil displacements were used to generate simulated data. Not
only were the `displaced' coil positions correctly extracted, but the relative orientations and positions of the
Hall probes themselves could also be extracted. Analysis of the experimental data resulted in a full
determination of the residual coil displacements for all 8 coils. Initial tests on the code as modified for
Q$_{weak}$ with simulated zero-crossing displacements provided excellent reproduction of the actual coil
displacements imposed in software.
Results are illustrated in Figure \ref{zx}, lending confidence in the technique for $Q_{weak}\;$.
\begin{figure}[h]
\centerline{\psfig{figure=zx1-GG.eps,height=5.8cm} \hspace*{0.02cm} \psfig{figure=zx2-GG.eps,height=5.8cm} }
\caption{\em Illustration of the zero-crossing technique, tested in software, for extracting coil positions
from the QTOR field mapping data. Simulated input coil displacements (solid circles) and positions fitted to
the zero-crossing data (crosses) are shown here for all 8 coils. } \label{zx}
\end{figure}
The G0 field mapper was shipped from UIUC, and it arrived at MIT-Bates in early August, 2007. Following this,
TRIUMF and U.Manitoba personnel travelled to MIT-Bates to reassemble and recommission the system. The gantry
motion was tested and appeared to be moving smoothly, the magnetic field sensors were reading out correctly, and
the control software appeared to be working as expected. Updated collision-avoidance software was installed,
but has not been fully tested yet against the proposed zero-crossing points.
In mid-October 2007, TRIUMF and U.Manitoba personnel again travelled to MIT-Bates to tune the gantry electronics
and to recalibrate the gantry motion using a laser-tracker. At this time, the field mapper has essentially been
recommissioned. Depending on the arrival date and installation of the QTOR power supply, the full magnetic
verification measurements are expected to begin in the spring of 2008.
\section{Collaboration and Management Issues}
\label{collaboration}
The $Q_{weak}\;${} collaboration presently consists of 86 individuals from 25 institutions. The collaboration list is
kept at the experiment's web page, at \verb+http://www.jlab.org/qweak+. A document server provides access to the
collaboration's technical archive. The archive includes key documents such as the original 2001 proposal, the
2003 Technical Design Report including the review committee's findings, the 2003 project management plan
including all quarterly progress reports to the DOE, the 2004 jeopardy proposal, and this document, the 2007
jeopardy update.
The $Q_{weak}\;$ experiment operates as a managed project. A Project Management Plan dated June 28, 2004 is in place
and defines our interaction with the DOE. In addition, the management plan describes the management
organization, the cost, schedule, and performance requirements and controls, contingency plans, and reporting.
The individual Work Packages of the experiment are described there along with their detailed cost and schedule
breakdowns.
The major capital construction funding was provided by the US DOE through Jefferson Lab (\$1.91M), the US NSF
through an MRI (\$590k) which has University matching funds (\$452k) associated with it, and the Canadian NSERC
($\sim$\$315k). Including a small additional NSF grant (\$50k), the total budget for the experiment is \$3.316M.
The experiment aims to begin installation in Hall C at Jefferson Lab in the fall of 2009.
Besides the Spokespersons, construction project Work Package Leaders, and Operations Team Leaders, the
collaboration has a Principal Investigator, a Project Manager, and an Institutional Council. The Institutional
Council consists of representatives from each of the major ``stakeholder'' institutions.
The capital construction work within the formal project has been broken down according to a Work Breakdown
Structure described in the Project Management Plan. Each major WBS line item has a Work Package Leader
associated with it. These major activities, Hall C infrastructure upgrades (such as the Compton polarimeter),
several smaller ``post" project management plan submission sub-systems, and other ``operations" related
activities are summarized in Table~\ref{table-wbs}. Activities listed under the category of ``operations'' will
expand as we approach installation time to include hardware installation tasks, readiness reviews, sub-system
commissioning/calibration, and tasks associated with production running.
\begin{table*}[ht]
\centering
\begin{tabular}{|l|l|l|l|} \hline
Category & Title &
Leader &
Institute \\ \hline \hline
Management & Principal Investigator & R. Carlini & JLab \\ \hline
Management & Spokespersons & R. Carlini, M. Finn& JLab, W\&M, \\
& & S. Kowalski, S. Page & MIT, UManitoba \\ \hline
Management & Project Manager & G. Smith & JLab \\ \hline
WP1 & Detector System & D. Mack & JLab \\ \hline
WP1.1 & Detector Design & D. Mack &JLab \\ \hline
WP1.2 & Detector Bars & D. Mack & JLab \\ \hline
WP1.3 & Detector Electronics & Larry Lee, Des Ramsay & TRIUMF \& UManitoba \\ \hline
WP1.4 & Detector Support & A. Opper & GWU \\ \hline
WP2 & Target System & G. Smith & JLab \\ \hline
WP3 & Experiment Simulation & N. Simicevic & LaTech \\ \hline
WP4 & Magnet & S. Kowalski & MIT \\ \hline
WP5 & Tracking System & D. Armstrong & W\&M \\ \hline
WP5.1 & WC1--GEMs & S. Wells/T. Forest & LaTech/Idaho State \\ \hline
WP5.2 & WC2--HDCs & M. Pitt & VPI \\ \hline
WP5.3 & WC3--VDCs & J. M. Finn & W\&M \\ \hline
WP5.4 & Trigger Counters & A. Opper & GWU \\ \hline
WP6 & Infrastructure & R. Carlini & JLab \\ \hline
WP7 & Magnet Fabrication & Wim van Oers & TRIUMF \& UManitoba \\ \hline
WP8 & Luminosity Monitor & M. Pitt & VPI \\ \hline
Operations& Magnet Mapping & L. Lee & TRIUMF \& UManitoba\\ \hline
Operations& Profile Scanner & J. Martin & UWinnipeg \\ \hline
Operations& Compton \& New Beamline & D. Gaskell & JLab \\ \hline
Operations&Compton e- Detectors & J. Martin/D. Dutta & UWinnipeg/UMiss \\
& & H. Mkrtchyan & Yerevan \\ \hline
Operations & Polarized Beam Properties& Matt Poelker/K. Paschke& JLab/UVA \\ \hline
Operations & GEANT Simulations & K. Grimm/A. Opper & LaTech/GWU \\ \hline
Operations & Data Acquisition & P. King & Ohio University \\ \hline
\end{tabular}
\caption[beamspecs]{{\em Stakeholder and Management Structure of the $Q_W^{p}\;$\ experiment. }}
\label{table-wbs}
\end{table*}
\newpage
\section{Manpower }
The data-taking for the $Q_{weak}\;$ experiment will demand significant investment of time from the collaboration.
Assuming about 54 calendar weeks of beam time, including both commissioning and production running, and
with three shifts/day, each staffed with three collaborators, we will need to staff about 3400 person-shifts. In addition, we
will need 27 different 2-week long run `coordinatorships', another significant investment of time. Finally, we
will require on-site presence of ``experts'' for each of the subsystems, especially during the commissioning
phase.
Three individuals per shift will certainly be needed during the demanding commissioning phase. We might be able
to reduce that to two people during the presumably ``routine'' production running. Experience with the G0
experiment showed that this was possible, even for a demanding parity-violation experiment, during the later
parts of the production data-taking. However, along with the mandatory cryogenic target operator and the shift
leader, the continuous operation of a Compton polarimeter may require a 3rd person on shift; experience during
the HAPPEX experiments with the Hall A Compton showed that it was not until significant experience had been
obtained with the polarimeter that it could be usefully run without fairly constant attention.
The collaboration has grown significantly since the last update to the PAC, with 86 collaborators at present,
and we continue to welcome new collaborators and institutions. Fortunately, we have 7 faculty members who plan
to take a sabbatical leave at JLab during the installation and running of the experiment (D. Armstrong, J.
Birchall, P. King, A. Opper, S. Page, M. Pitt, J. Roche), and we hope that the PAC will encourage the Lab to
support these sabbaticals. We presently have a total of 7 graduate thesis students already identified for the
experiment, with the expectation of more joining in the near future.
Using the present collaboration size, then, the average shift load will be 44 shifts per collaborator, spread out over two calendar
years, or 22 shifts/person/year, with about 1/3 of the collaborators also taking on a two-week long duty as run coordinator. This
represents a significant but not unreasonable investment of time. Thesis students and postdocs, of course, will typically take a larger
share than faculty with teaching responsibilities.
\section{Overview of the Experiment}
\label{overview}
The $Q_{weak}\;$ collaboration will carry out the first precision measurement of the proton's weak charge, $Q^p_w
=1-4\sin^{2}\theta_{W}$, at JLab, building on technical advances that have been made in the laboratory's
world-leading parity violation program and using the results of earlier PVES experiments to experimentally
constrain hadronic corrections. The experiment is a high precision measurement of the parity violating
asymmetry in elastic $ep$ scattering at $Q^{2} = 0.026\;(GeV/c)^2$; the results will determine the proton's weak
charge with $4\%$ combined statistical and systematic errors.
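At tree level the relation $Q^p_w = 1-4\sin^{2}\theta_{W}$ is directly invertible, so the error propagation from
the weak charge to the weak mixing angle is a one-line exercise. The Python sketch below illustrates it using an
assumed $\sin^{2}\theta_{W} \approx 0.231$; electroweak radiative corrections shift the actual Standard Model
prediction, so the numbers are indicative only.
\begin{verbatim}
s2w  = 0.231            # assumed sin^2(theta_W) at the relevant scale
Qw   = 1 - 4*s2w        # tree-level weak charge, ~0.076
dQ_Q = 0.04             # fractional error on Qw (4%)

# dQw = -4 d(s2w)  =>  d(s2w)/s2w = (Qw / (4 s2w)) * (dQw/Qw)
ds_s = Qw/(4*s2w) * dQ_Q
print(f"Qw^p (tree) = {Qw:.4f}; d(sin^2)/sin^2 = {100*ds_s:.2f}%")
\end{verbatim}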
\begin{figure}[ht]
\vspace*{0.2cm}\begin{center}
\includegraphics[width=14cm,angle=0]{New_Qweak_Layout.eps}
\caption{\em CAD layout of the $Q_{weak}\;$ apparatus. The beam and scattered electrons travel from left to right,
through the target, the first collimator, the Region 1 GEM detectors, the two-stage second precision collimator
which surrounds the region 2 drift chambers, the toroidal magnet, the shielding wall, the region 3 drift
chambers, the trigger scintillators and finally through the quartz \v{C}erenkov detectors. The tracking system
chambers and trigger scintillators will be retracted during high current running when $Q_{weak}\;$ asymmetry data are
acquired. Luminosity monitors (not shown) will monitor target fluctuations and provide a sensitive null
asymmetry test. \label{fig:New_Layout}}
\end{center}
\end{figure}
A sketch showing the layout of the experiment is shown in Figure~\ref{fig:New_Layout}. The major systems of the
experiment include: A 2.5 kW $LH_2$ cryo-target system, a series of Pb collimators which define the $Q^2$
acceptance, an 8 segment toroidal magnet, 8 \v{C}erenkov detectors plus electronics, beamline instrumentation and
the rapid helicity reversing polarized source. The toroidal magnetic field will focus elastically scattered
electrons onto the main \v{C}erenkov detectors, while bending inelastically scattered electrons out of the detector
acceptance. The experiment nominally requires 180 $\mu$A of 1.2 GeV/c primary electron beam current with 85\%
average longitudinal polarization.
%
The experimental technique relies on hardware focusing of the e-p elastic electron peak onto 8 radially
symmetric synthetic quartz \v{C}erenkov detectors which will be read out in current mode via low gain
photo-multiplier tubes which drive custom low-noise high-gain current-to-voltage converters. Each voltage signal
is input into a custom 18-bit ADC and then read out phase locked with the reversal of the polarized beam
helicity. The asymmetry is then computed from the helicity-correlated, charge-normalized scattering rates.
To suppress noise and random variations in beam properties, a very rapid helicity reversal rate is employed.
Additional instrumentation monitors the various critical real time properties of the beam.
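Schematically, the quantity formed in the analysis is the asymmetry of the charge-normalized yields in the two
helicity states, corrected at leading order by the beam polarization. The Python sketch below is a schematic of
this standard parity-violation construction, not the collaboration's analysis code.
\begin{verbatim}
def raw_asymmetry(y_plus, y_minus):
    """Helicity asymmetry from charge-normalized yields (signal/charge)."""
    return (y_plus - y_minus) / (y_plus + y_minus)

def physics_asymmetry(a_raw, polarization=0.85):
    """Leading correction: divide out the beam polarization."""
    return a_raw / polarization

a_raw = raw_asymmetry(1.0 - 1.0e-7, 1.0 + 1.0e-7)   # toy yields, ~ -0.1 ppm
print(f"A_raw = {a_raw:.3e}, A_phys = {physics_asymmetry(a_raw):.3e}")
\end{verbatim}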
Basic parameters of the experiment are summarized in Table~\ref{EXTB_11}. The production time shown includes
the time required for a separate initial 8\% measurement as well as the final 4\% result. Significant additional
beam time is required for systematics and calibration studies, as detailed later in the beam time request.
\begin{table}[htb]
\caption{\em Basic parameters of the $Q_{weak}^{p}$ experiment.} \label{EXTB_11}
\begin{center}
\begin{tabular}{lc}
\multicolumn{1}{c}{Parameter}&
\multicolumn{1}{c}{Value}\\
\hline\hline
Incident Beam Energy & 1.165 GeV \\
Beam Polarization & 85\% \\
Beam Current & 180 $\mu$A \\
Target Thickness & 35 cm (0.04$X_{0}$) \\
Full Current Production Running & 2544 hours \\
Nominal Scattering Angle & 7.9$^{\circ}$ \\
Scattering Angle Acceptance & $\pm$3$^{\circ}$ \\
$\phi$ Acceptance & 49\% of 2$\pi$ \\
Solid Angle & $\Delta\Omega$ = 37 msr \\
Acceptance Averaged $Q^{2}$ & $<Q^{2}>$= 0.026 $(GeV/c)^{2}$ \\
Acceptance Averaged Physics Asymmetry & $<A>$ = -0.234 ppm \\
Acceptance Averaged Expt'l Asymmetry & $<A>$ = -0.200 ppm \\
Integrated Cross Section & 4.0 $\mu$b \\
Integrated Rate (all sectors) & 6.5 GHz (or 0.81 GHz per sector) \\
\hline\hline
\end{tabular}
\end{center}
\end{table}
The main technical challenges result from the small expected asymmetry of approximately -0.3 ppm; we will
measure this asymmetry to $\pm 2.1$\% statistical and $\pm1.3$\% systematic errors. The optimum kinematics
corresponds to an incident beam energy of E$_0$ = 1.165 GeV and nominal scattered electron angle $\theta_e = 7.9
$ degrees. Fixing $Q^2$ = 0.026 (GeV/c)$^2 $ limits nucleon structure contributions which increase with $Q^2$
and avoids very small asymmetries where corrections from helicity correlated beam parameters begin to dominate
the measurement uncertainty. With these constraints applied, the figure-of-merit becomes relatively insensitive
to the primary beam energy; using a higher beam energy will result in a longer measuring time with stronger
magnetic field requirements, smaller scattering angles, and the possibility of opening new secondary production
channels that might contribute to backgrounds.
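The quoted 2.1\% statistical error is essentially pure counting statistics applied to the parameters of
Table~\ref{EXTB_11}. The Python sketch below reproduces it by comparing $1/\sqrt{N}$ with the
acceptance-averaged experimental asymmetry; subleading dilutions are neglected in this estimate.
\begin{verbatim}
import math

rate  = 6.5e9      # integrated rate, all sectors (Hz)
hours = 2544       # full-current production running
A_exp = 0.200e-6   # acceptance-averaged experimental asymmetry

N  = rate * hours * 3600.0   # total detected electrons, ~6e16
dA = 1.0 / math.sqrt(N)      # absolute statistical error on the asymmetry

print(f"N = {N:.2e}, dA/A = {100*dA/A_exp:.1f}%")   # ~2%, cf. the quoted 2.1%
\end{verbatim}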
The high statistical precision required implies high beam current (180 $\mu$A), a long liquid hydrogen target
(35 cm) and a large-acceptance detector operated in current mode. The polarized source now routinely delivers
reasonably high beam currents at 85\% polarization; developments for $Q_{weak}\;$ are focusing on more reliable
operation at higher current, control of helicity correlated properties and rapid helicity reversal rates up to
500 Hz (1 ms). Radiation hardness, insensitivity to backgrounds, uniformity of response, and low intrinsic noise
are criteria that are optimized by the choice of quartz \v{C}erenkov bars for the main detectors. The combined beam
current and target length requirements lead to a cooling requirement of approximately 2.5 kW, considerably over
the present capacity of the JLab End Station Refrigerator (ESR). This will require us to draw additional
refrigeration capacity from the central helium liquefier (CHL), providing a cost effective solution for the
required target cooling power. We note that the combination of high beam current and a long target flask will
make the $Q_{weak}\;$ target the highest power cryotarget in the world by a factor of several.
It is essential to maximize the fraction of the detector signal (total \v{C}erenkov light output in current mode)
arising from the electrons of interest, and to measure this fraction experimentally. In addition, the asymmetry
due to background must be corrected for, and we must measure both the detector-signal-weighted $<Q^2>$ and
$<Q^4>$ -- the latter in order to subtract the appropriate hadronic form factor contribution -- in order to be
able to extract a precise value for $Q_W^{p}\;$ from the measured asymmetry.
The $Q^2$ definition will be optimized by ensuring that the entrance aperture of the main collimator will define
the acceptance for elastically scattered events. Careful construction and precise surveying of the collimator
geometry together with optics and GEANT Monte Carlo studies are essential to understand the $Q^2$ acceptance of
the system. This information will be extracted from ancillary measurements at low beam current, in which the
quartz \v{C}erenkov detectors are read out in pulse mode and individual particles are tracked through the
spectrometer system. The \v{C}erenkov detector front end electronics are designed to operate in both current mode
and pulse mode for compatibility with both the parity measurements and the ancillary $<Q^2>$ calibration runs.
The tracking system will be capable of mapping the $<Q^2>$ acceptance to $\pm 1\%$ in two opposing octants
simultaneously; the tracking chambers will be mounted on a rotating wheel assembly as shown in Figure
\ref{fig:New_Layout} so that the entire system can be mapped in 4 sequential measurements. The front chambers
are based on the CERN `GEM' design, chosen for their fast time response and good position resolution. The
chambers plus trigger scintillator system will be retracted during normal $Q_{weak}\;$ data taking at high current.
The experimental asymmetry must be corrected for inelastic and room background contributions as well as hadronic
form factor effects. Simulations indicate that the former will be small, the main contribution coming from
target walls, which can be measured and subtracted. The quadrature sum of systematic error contributions to $Q_W^{p}\;$,
including the hadronic form factor uncertainty, is expected to be 2.6\%. Experimental systematic errors are
minimized by construction of a symmetric apparatus, optimization of the target design and shielding,
utilization of feedback loops in the electron source to null out helicity correlated beam excursions and careful
attention to beam polarimetry. We will carry out a program of ancillary measurements to determine the system
response to helicity correlated beam properties and background terms.
The electron beam polarization must be measured with an absolute uncertainty at the 1\% level. At present, this
can be achieved in Hall C using an existing M\o ller polarimeter, which can only be operated at currents below
8 $\mu$A. Work is progressing to upgrade the M\o ller for higher beam current operation. A major effort to
build a Compton polarimeter in Hall C at Jefferson Lab is also underway; the Compton polarimeter will provide a
continuous on-line measurement of the beam polarization at full current (180 $\mu$A) which would otherwise not
be achievable. During the commissioning period, the new Compton will become an absolute measurement device by
calibrating it using the proven Hall C high precision M\o ller Polarimeter and cross checking against its
sister Compton polarimeter in Hall A.
The $Q_{weak}\;$ apparatus also includes two luminosity monitors consisting of arrays of \v{C}erenkov detectors,
one located on the upstream face of the primary collimator and one downstream of the $Q_{weak}\;$ experiment at a
very small scattering angle. The detectors will be instrumented with photomultiplier tubes operated at unity gain and
read out in current mode; the high rate of forward scattered electrons and the resulting small statistical error
in the luminosity monitor signals will enable us to use this device for removing our sensitivity to target
density fluctuations. In addition, the luminosity monitor will provide a valuable null asymmetry test, since it
is expected to have a negligible physics asymmetry as compared to the main detector. We will apply the same
corrections procedure for helicity correlated beam properties to both the main detectors and to the luminosity
monitor - if the systematic error sensitivities are well understood, we should be able to correct the luminosity
monitor to zero asymmetry within errors, which gives an independent validation of the corrections procedure used
to analyze the main detector data.
\begin{table}[h]
\centering \caption{\em \label{errorbudget}Total error estimate for the $Q_{weak}\;$ experiment. The contributions to
both the physics asymmetry and the extracted $Q_W^{p}${} are given. In most cases, the error magnification
due to the 33\% hadronic dilution is a factor of 1.49. The enhancement for the $Q^2$ term is somewhat larger.
}
\small
\begin{tabular}{ccc}
&&\\
{\bf Source of }&{\bf Contribution to}&{\bf Contribution to}\\
{\bf error}&{\bf $\Delta A_{phys}/A_{phys}$ }&{\bf $\Delta$$Q_W^{p}$ /$Q_W^{p}$ } \\ \hline
Counting Statistics& 2.1\% & 3.2\% \\
Hadronic structure & --- & 1.5 \% \\
Beam polarimetry & 1.0 \% & 1.5\% \\
Absolute $Q^{2}$ & 0.5\% & 1.0\% \\
Backgrounds & 0.5\% & 0.7\% \\
Helicity-correlated & & \\
beam properties & 0.5\% & 0.7\% \\ \hline
TOTAL: & 2.5\% & 4.1\% \\
\end{tabular}
\end{table}
Table~\ref{errorbudget} summarizes the statistical and systematic error contributions to the $Q_W^{p}\;$ measurement
that are anticipated for the experiment. Note that the hadronic and statistical uncertainties were determined by
assuming the Standard Model asymmetry at the reference design $Q^2 = 0.026$ $(GeV/c)^2$. The actual asymmetry
precision and hadronic uncertainty will be affected slightly by the actual $Q^2$ (incident beam energy), the
final world PVES data set used (additional results are anticipated from G0 ``backwards'' running and PVA4), and
the degrees of freedom allowed in the global fit.
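The totals in Table~\ref{errorbudget} are simple quadrature sums of the individual contributions, with most
asymmetry errors magnified by the dilution factor $1/(1-0.33) \approx 1.49$ when propagated to $Q_W^{p}$. The
short Python check below reproduces the bottom line of the table.
\begin{verbatim}
import math

# Contributions from Table errorbudget, in percent.
asym = [2.1, 1.0, 0.5, 0.5, 0.5]        # stats, polarimetry, Q2, bkg, HC beam
qw   = [3.2, 1.5, 1.5, 1.0, 0.7, 0.7]   # includes the hadronic structure term

quad = lambda xs: math.sqrt(sum(x*x for x in xs))
print(f"dA/A   = {quad(asym):.1f}%")    # 2.5%
print(f"dQw/Qw = {quad(qw):.1f}%")      # 4.1%
print(f"dilution factor = {1/(1-0.33):.2f}")
\end{verbatim}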
\section{Precision Polarimetry}
The dominant experimental systematic uncertainty in $Q_{weak}\;$ will result from corrections due to beam polarization
($\frac{\delta P_e}{P_e} = 1 \%$).
Since the previous $Q_{weak}\;$ proposal update, work has continued with the goal of improving the performance of
the existing Hall C electron polarimeter, the Basel M\o ller Polarimeter, in particular in the area of extending
the operability of the M\o ller to high currents. In addition, design of a new Compton Polarimeter for Hall C is
also proceeding, with the aim of determining the incident beam polarization with 1\% statistical
uncertainty on the timescale of one hour, and of monitoring it on a continuous basis. While the Compton polarimeter is
being designed with the goal of achieving 1\% systematic precision in mind, we will rely on cross--calibration
with the Hall C M\o ller during the initial phases of its use and use the Compton primarily as a relative
monitor of the polarization.
\subsection{M\o ller Polarimeter Operation at High Currents }
Since the submission of the last $Q_{weak}\;$ update, more studies have been performed attempting to extend the
operation of the Hall C M\o ller Polarimeter to high currents. The nominal operating current of the Hall C M\o
ller is $\approx$~2~$\mu$A, a limit set by the need to keep foil heating effects (and hence target
depolarization) low. Studies have been underway to extend the operating current of the Hall C M\o ller to
$\approx$~100~$\mu$A using a fast magnetic beam kicker in conjunction with a thin iron strip or wire target. The
short duration of the kick (on the order of $\mu$s) ensures minimal target heating effects.
As of the submission of the previous update, preliminary tests of a first generation kicker had been performed
on a 20~$\mu$m diameter iron wire target. These tests were partially successful, but pointed out the need for a
different kind of target to keep instantaneous rates low and random coincidences under control. In December
2004, a second round of tests were performed with a 1~$\mu$m thick iron strip target. The results of these tests
are shown in Fig.~\ref{kicker_dec04}~\cite{kicker_proceedings}. In this case, the kicker scanned the beam across
the iron foil for 10~$\mu$s at a frequency of 5 to 10 kHz for beam currents up to 40~$\mu$A. Higher currents
were not accessible due to beam loss issues, likely due to an unoptimized beam tune. As can be seen from the
figure, the technique worked in a global sense, but the polarization measurements were not stable enough to
prove stability at the 1\% level. In particular, control measurements made at 2~$\mu$A with no kicker at the
beginning and end of the test run varied by as much as 3\%. The source of these fluctuations is unclear since these
tests were performed during a running period when the beam polarization was not being regularly measured in Hall
C. Nonetheless, these results were taken as proof of concept. Further tests were planned for the G0 Backward
Angle run in 2006, but were not possible due to the extremely low beam energy. We hope to make further
measurements during the currently running $G_E^P$ experiment in Hall C.
For reference, we show the table of kicker performance properties needed to make polarization measurements at
various currents (Tab.~\ref{kicker_op}). In particular, we wish to note that a kicker magnet capable of the
shortest kick interval (2~$\mu$s) has been constructed and is ready for installation.
\begin{table}[htb]
\caption{\em Operating parameters for a planned beam--kicker system that will allow operation of the Hall C M\o
ller Polarimeter at high currents. $\Delta t_{kick}$ refers to the total interval of time for which the beam
will be deflected from its nominal path onto a half--foil or strip target. In order to keep beam heating effects
to a minimum, the kick interval must be shorter at higher currents.} \label{kicker_op}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline $I_{beam}$ ($\mu$A) & $\Delta t_{kick}$ ($\mu$s) & f$_{kick}$ (Hz) \\ \hline
200 & 2 & 2500 \\
100 & 4 & 2500 \\
50 & 8 & 2500 \\
20 & 20 & 2500 \\\hline
\end{tabular}
\end{center}
\end{table}
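Note that the operating points in Table~\ref{kicker_op} keep the product $I_{beam}\,\Delta t_{kick}$ constant at
400~$\mu$A$\cdot\mu$s, so the charge swept across the target per kick, and hence the instantaneous heating, is
the same at every current. The small Python check below makes this explicit.
\begin{verbatim}
# Charge deposited per kick for the operating points of Table kicker_op.
points = [(200, 2), (100, 4), (50, 8), (20, 20)]   # (I in uA, kick in us)
for I_uA, dt_us in points:
    q_nC = I_uA * dt_us * 1e-3   # uA * us = pC; divide by 1e3 -> nC
    print(f"I = {I_uA:3d} uA, kick = {dt_us:2d} us -> {q_nC:.2f} nC per kick")
\end{verbatim}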
\begin{figure}[h!]
\begin{center}
\vspace*{0.3cm}
\includegraphics[width=4.5in]{kicker2004_cur.eps}
\end{center}
\caption{\label{kicker_dec04} \em Polarization measurements using the Hall C M\o ller Polarimeter and a second
generation kicker magnet and iron strip target. Globally, the technique yields asymmetries independent of beam
current at the several percent level. However, instabilities of unknown origin (likely beam related) make it
impossible to show that the kicker system yields measurements stable to 1\%.}
\end{figure}
\subsection{Hall C Compton Polarimeter}
In Compton polarimetry, circularly polarized photons from a laser are scattered from polarized electrons in the
electron beam. Scattering rates measured in electron and photon detectors determine the cross-section asymmetry
and hence polarization.
A schematic diagram of the Compton polarimeter is shown in Fig.~\ref{fig:chicane}.
\begin{figure}[b]
\begin{center}
\vspace*{0.3cm}
\includegraphics[width=\textwidth]{layout-new2.eps}
\end{center}
\caption{{\em Schematic diagram of the Compton polarimeter chicane.}} \label{fig:chicane}
\end{figure}
A four-element vertical dipole chicane is used to displace the Compton interaction point from the beam axis,
allowing the scattered photons to be detected in a PbWO$_4$ calorimeter. The scattered electrons will be
momentum analyzed in the 3rd dipole of the chicane, and will be detected by a diamond strip tracker comprised of
four planes. The diamond strip tracking detector is a new development, and will be constructed in collaboration
between groups from the Universities of Winnipeg and Manitoba, TRIUMF, and Mississippi State University. The
design of this detector, with a focus on systematic uncertainties in polarization extraction, was studied in a
recent honours thesis\cite{Storey}.
The development of diamond detectors for minimum ionizing radiation has been led by the CERN RD42 collaboration.
H.\ Kagan (Ohio State U., OSU) and W.\ Trischuk (U.\ Toronto), who are collaborators in CERN RD42, have assisted
us in the initial fabrication and testing of diamond detectors. As a first step, two test detectors were
fabricated: a 6~mm diameter Cr-Au electrode was sputtered onto each face of each diamond sample. A
picture of a resultant two-electrode detector, and pulse-height spectra using the detector with a
minimum-ionizing beta-source ($^{90}$Sr) are shown in Fig.~\ref{fig:phs}. The results show a charge-collection
depth of 230~$\mu$m when the detector is biased to 1000~V, consistent with typical good results achieved by CERN
RD42. We are in the process of building test setups similar to the OSU test setup at both U.\ Winnipeg and at
Mississippi State U.
We are also in the process of designing and fabricating prototype strip detectors at both OSU and at the
Nanosystem Fabrication Laboratory at U. Manitoba. At the time of writing, a half size (10 $\times$ 10 mm$^2$)
prototype detector with 15 strips has just been completed at OSU, as illustrated in Fig.~\ref{fig:OSU2}, and is
currently being tested. Each step of the fabrication process is well-understood at this time and progress is
continuing smoothly. In addition to the funds already obtained from NSERC and from DOE, the Canadian group has
requested funds from NSERC to complete the vacuum chamber and the motion mechanism, and for a spare set of
detector planes.
\newpage
\vspace*{-0.5cm}
\begin{figure}[t]
\subfigure[]{\includegraphics[width=0.54\textwidth]{first-diamond.eps}}
\subfigure[]{\includegraphics[width=0.45\textwidth]{M107.eps}} \caption{{\em (a) Photograph of two-electrode
prototype diamond detector, after successful metallization. (b) Pulse height spectra for two-electrode
prototype sensing minimum ionizing betas, showing energy deposition well-separated from pedestal. Upper and
lower plots show consistent results achieved when biasing each side of the detector. Various curves at 500~V
show stability of the response over hours. \label{fig:phs}}}
\end{figure}
\vspace*{0.1cm}
\begin{figure}[h!]
\begin{center}
\subfigure[]{\includegraphics[width=0.45\textwidth]{prototype1.eps}}
\subfigure[]{\includegraphics[width=0.38\textwidth]{prototype2.eps}} \caption{{\em Two views of the first
15-strip diamond detector prototype, recently fabricated at OSU. \label{fig:OSU2}}}
\end{center}
\end{figure}
As noted in the previous $Q_{weak}\;${} update, the construction of a Compton Polarimeter in Hall C will require a
substantial re--work of the Hall C beamline. Our plan in 2004 had been to insert the 4--dipole chicane for the
polarimeter downstream of the existing M\o ller Polarimeter. This plan has been re--evaluated by CASA and a new
design has been developed that inserts the Hall C Compton upstream of the M\o ller Polarimeter, shifting the M\o
ller and all other downstream beam elements closer to the Hall C pivot~\cite{benesch_report}. This is shown
schematically in Fig.~\ref{beamline_2007}. This beamline concept has been vetted by CASA, the $Q_{weak}\;${}
collaboration, and Hall C physics staff, and is currently in the design and engineering stage. It is also worth
noting that the modified beamline design calls for slightly longer dipoles than originally proposed (1.25~m as
opposed to 1~m). This modification will facilitate operation after the 12~GeV upgrade. The design and
procurement of these dipoles, as well as the associated stands and vacuum chambers, are being done by MIT--Bates
under the auspices of an M.O.U. between Jefferson Lab and MIT--Bates.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.5\linewidth,angle=-90]{compton_beamline_2007.eps}
\end{center}
\caption{\em Schematic of the modifications required to insert the Compton dipole chicane in the Hall C beam
line. Additional quadrupoles will be required to achieve a tightly focused beam at the Compton interaction
point. The M\o ller Polarimeter will be shifted $\approx$ 11 m closer to the Hall C target position, while other
beamline elements, the fast raster for example, will move less by making use of the space between the M\o ller
``vacuum'' legs.} \label{beamline_2007}
\end{figure}
Finally, at the suggestion of the JLab Polarized Source Group, we are pursuing the implementation of a high
power fiber laser for the Compton Polarimeter. The Polarized Source Group has recently begun using ``fiber''
lasers commonly used by the telecommunications industry. These systems amplify relatively low power diode seed
lasers to tens of Watts. While the potential average power is still relatively low ($\approx$20 W of green light
after frequency doubling the 1064 nm output of the fiber laser system), these lasers have the advantage that
they can be pulsed at the same repetition rate as the Jefferson Lab electron beam.
For a laser
pulsed at the same repetition rate as the electron beam (499 MHz) with a narrow pulse structure ($\approx$ 35
ps), the increase in luminosity as compared to a CW laser system is approximately,
\begin{equation}
\frac{\mathcal{L}_{pulsed}}{\mathcal{L}_{CW}} \approx \frac{c}{f\sqrt{2\pi}}
\frac{1}{\sqrt{\sigma^2_{e,z} + \sigma^2_{\gamma,z} + \frac{1}{\sin^2(\alpha/2)}\left(\sigma^2_e+\sigma^2_\gamma\right)}},
\end{equation}
where $f$ is the laser/electron repetition rate, $\sigma_{e,z}$ ($\sigma_{\gamma,z}$) represents the
longitudinal size of the electron (laser) pulse, $\sigma_e$ ($\sigma_\gamma$) the corresponding transverse
sizes, and $\alpha$ is the laser--electron crossing angle. For typical values of the electron/laser beam sizes and pulse
widths, this ratio is approximately 20. This means that an RF pulsed laser with 20 W average power represents an
``effective'' laser power of 400 W.
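As a rough numerical illustration (the bunch length, spot sizes, and crossing angle below are illustrative
assumptions, not final $Q_{weak}\;$ Compton design values), the following sketch evaluates the ratio:
\begin{verbatim}
import math

c = 2.998e8                       # speed of light [m/s]
f = 499e6                         # laser/electron repetition rate [Hz]
sigma_ez = 0.5e-12 * c            # electron bunch length, ~0.5 ps rms (assumed)
sigma_gz = (35e-12 / 2.355) * c   # laser pulse, 35 ps FWHM -> rms (assumed)
sigma_e = 100e-6                  # transverse electron spot size [m] (assumed)
sigma_g = 100e-6                  # transverse laser spot size [m] (assumed)
alpha = 23e-3                     # beam crossing angle [rad] (assumed)

denom = math.sqrt(sigma_ez**2 + sigma_gz**2
                  + (sigma_e**2 + sigma_g**2) / math.sin(alpha / 2)**2)
ratio = c / (f * math.sqrt(2 * math.pi)) / denom
print(round(ratio))               # ~18, i.e. of order 20 as quoted
\end{verbatim}
For these inputs the crossing-angle term dominates the denominator, so the achievable luminosity gain is set
mainly by how small the crossing angle and transverse spot sizes can be made.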
Currently, we are actively pursuing the fiber laser option as our laser system of choice. Such a system poses
some small risk since a fiber laser system of this precise configuration has not been built at Jefferson Lab
before and is not completely a ``turn-key'' system. However, the likelihood of success is high due to the
extensive experience of the Polarized Source Group with fiber lasers, and additional on-site experience with
high power frequency doubling. The high power fiber amplifier has been ordered, and we should know within months
whether such a system is tenable or not. If this system proves unworkable, the commercial pulsed green laser
option discussed in the previous $Q_{weak}\;$ update~\cite{coherent_evolution} is a safe fall--back option.
\section{Simulations and Backgrounds}
In the 2004 PAC proposal~\cite{Carlini2} for $Q_{weak}\;$, we discussed initial simulations of backgrounds from a
variety of sources, including the background in the $Q_W^{p}\;$ measurement originating from the $LH_2$ target windows.
For 3.5 mil thick aluminum windows, we indicated an expected contribution of about 11\% of the free $ep$ elastic
asymmetry, which must be measured and corrected for. Those background studies are explicitly included in our
$Q_{weak}\;$ beam request. Since the last PAC submission, we have done extensive studies with the GEANT-based
$Q_{weak}\;${} simulation to identify other sources of backgrounds, as well as to quantify and reduce them by
optimizing the design of the $Q_{weak}\;$ collimator system. This work is the focus of our proposal update discussion;
as our simulations have improved and the design of the collimator system has evolved, changes to reduce
backgrounds were not allowed to negatively affect the figure-of-merit (FOM) for the experiment.
The $Q_{weak}\;$ simulation model was originally developed from the G0 simulation; both experiments use toroidal
magnets. The origin of our coordinate system is at the center of the QTOR, with z along the beam
direction, x vertical, and y toward beam-right, making a right-handed system. The simulation includes the LH$_2$\
target, target windows, the beamline, the acceptance-defining collimator that has a clean-up collimator upstream
and downstream of it, the QTOR magnetic field based on a recently calculated field map, QTOR coils, QTOR support
structure elements that are near the $ep$-elastic envelope, lintel-like photon shields, a shielding hut wall,
and quartz \v{C}erenkov bars. A GEANT-generated view of equipment that is typically used in investigations of
backgrounds is shown in Fig.~\ref{fig_geant-pers}. We track secondary electrons and photons down to 0.5 MeV
because \v{C}erenkov light production is barely possible with 0.35 MeV photons in fused silica, which has an
index of refraction of 1.48.
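As a cross-check of this threshold using standard kinematics (not a simulation input), the \v{C}erenkov
condition $\beta > 1/n$ corresponds to an electron kinetic energy threshold of
\[
T_{th} = m_e\left(\frac{1}{\sqrt{1-1/n^2}}-1\right) \approx 0.18\ {\rm MeV}
\]
for $n = 1.48$, and the maximum kinetic energy a photon of energy $E_\gamma$ can transfer to an electron via
Compton scattering, $T_{max} = 2E_\gamma^2/(m_e+2E_\gamma)$, reaches this threshold for $E_\gamma \approx 0.33$
MeV, consistent with the 0.35 MeV figure quoted above.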
\begin{figure}[h!]
\centerline{\includegraphics[width=5.5in]{geant_perspective.eps}} \caption{{\em GEANT-generated view of
equipment elements used in a typical simulation to investigate backgrounds. The upstream clean-up collimator is
shown in green, the acceptance-defining collimator is yellow, the downstream clean-up collimator is aqua, the
QTOR coils are red, the QTOR support structure is green, the lintel photon shields are yellow, and the
\v{C}erenkov bar is white. The shielding hut wall has been removed from this figure.
}}
\label{fig_geant-pers}
\end{figure}
\subsection{Collimator Design}
\label{sim-status}
The $Q_{weak}\;$ collimator system will play a crucial role in defining the $Q^2$ acceptance and the figure of merit
for the experiment. Its geometrical symmetry and alignment with respect to the target and detector systems will
be major factors in determining the sensitivity to systematic errors associated with helicity correlated beam
motion. A very substantial effort has thus been spent on optimizing the collimator design using the $Q_{weak}\;$ GEANT
simulation software.
When the defining collimator opening was finalized, the support structure for QTOR was already fixed. This
defined the maximum size of the scattered electron envelope through the QTOR region. The initial step in
optimizing the design was to choose the aperture and longitudinal location of the collimator to give the largest
possible acceptance while not interfering with the QTOR support structure. Both upstream and downstream
locations were considered. The downstream option -- as close as possible to the entrance of QTOR -- gave the
larger acceptance; this is because the extended target becomes more ``point-like'' as the defining aperture is
moved downstream. Once this maximum aperture was determined, the collimator was ``trimmed'' further in order to
fit the scattered electron envelope onto a quartz detector bar of reasonable size and shape at the focal plane.
Extensive studies were carried out to optimize the shape of both the collimator aperture and the detector to
minimize the overall error on $Q_W^{p}\;$, while also keeping the contamination from inelastic events acceptably low.
The final collimator design consists of three sequential elements, the middle of which is the
acceptance-defining collimator, with the other two inserted for `clean-up' purposes. A summary of the $Q_{weak}\;$
collimator geometries is given in Table \ref{tab:collimators}.
\begin{table}[h]
\begin{center}
\caption{ {\em $Q_{weak}\;$ collimator system. All elements are made of a machineable alloy consisting of 95\% Pb
and 5\% Sb.}} \vspace{0.25in}
\begin{tabular}{|l|c|c|}
\hline Element & Upstream z (cm) & Thickness (cm) \\ \hline Upstream clean-up & -583.4 & 15.2 \\
Acceptance Defining & -385.7 & 15 \\
Downstream clean-up & -271.9 & 11 \\
\hline \hline
\end{tabular}
\label{tab:collimators}
\end{center}
\end{table}
While the first two elements of the collimator system will be precisely machined, it is desirable that the
$ep$-elastic electrons that are detected by the \v{C}erenkov bars do not hit the upstream and downstream
collimators. If we had a point-like target and thin collimators, this would be a simple geometry problem; with a
35 cm long target and collimators tens of radiation lengths thick, the situation is less straightforward. We
used the JLab 3-D CAD code to design the upstream collimator aperture so that it clears the $ep$-elastic
envelope by at least 0.5 cm, and we verified this clearance with the GEANT simulation. We show the envelope of
the $ep$-elastic electrons that are detected by the \v{C}erenkov bar at the upstream and downstream sides of
collimators \#1 and \#2 in Figures \ref{fig:coll1} and \ref{fig:coll2}, respectively. Note that the envelope
fills the downstream side of the acceptance-defining collimator and easily clears collimator \#1. The image for
collimator \#3 is similar to that of collimator \#1.
\begin{figure}[hbtp]
\begin{center}
\subfigure[] {\includegraphics[width=3.0in]{coll_1_us.eps}}
\subfigure[]{\includegraphics[width=3.0in]{coll_1_ds.eps}} \caption{{\em X-Y image of the ep-elastic electrons
that hit the \v{C}erenkov bar at collimator \#1. a) upstream end; b) downstream end. The aperture of
collimator \#1 is shown in red.
}}
\label{fig:coll1}
\end{center}
\end{figure}
\begin{figure}[hbtp]
\begin{center}
\subfigure[] {\includegraphics[width=3.0in]{coll_2_us.eps}} \subfigure[]
{\includegraphics[width=3.0in]{coll_2_ds.eps}} \caption{{\em X-Y image of the ep-elastic electrons that hit the
\v{C}erenkov bar at collimator \#2. a) upstream end; b) downstream end. The aperture of collimator \#2 is
shown in red.
}}
\label{fig:coll2}
\end{center}
\end{figure}
\subsection{Backgrounds}
After a decade of commissioning large experiments at JLab, it is clear to us that signals
are often easy to calculate, but backgrounds are difficult to estimate and are therefore
one of the most important factors in determining the success or failure of an experiment.
Essential features of the $Q_{weak}\;$ apparatus favoring a high signal to noise ratio are the
use of a magnetic
spectrometer to separate elastic from inelastic events, and the choice of main detectors that
are located in a shielded detector hut and are sensitive only to relativistic particles.
However, the integrating nature of our
experiment and our 2\% asymmetry goal mean that we are potentially sensitive to percent-level
soft backgrounds which may be difficult to measure and correct with high accuracy.
The $Q_{weak}\;$ collaboration has had a strong simulation team since the original proposal, and
a significant part of this effort has been dedicated to background reduction.
The experiment has adopted a 2-bounce design
philosophy, which means that indirect backgrounds must have scattered at least twice after
leaving the target before they reach the main detector.
Where possible, the design strategy is
2-bounce-plus-shielding.
Our efforts over the last 3 years can be summarized as uncovering potential percent-level
backgrounds, modifying the design of the experiment to reduce them by an order of magnitude,
and then developing strategies to measure the remaining effects.
Since the last Jeopardy proposal, a potential neutral background of the 1-bounce type of ${\cal O}(1\%)$ was uncovered\cite{Liang}.
Insertion of a lead block in each octant will prevent $\gamma$ rays
created on the defining collimator from reaching the main detector, hence reducing this potential background by
almost an order of magnitude. We also discovered that a significant 1-bounce background can arise if the shield
house window aperture is too tight on the low energy loss side, causing showering into the main detector. The shield
house window will be designed to avoid this.
Our current background estimates are summarized in Table \ref{tab:back_rates}. Most of these rates will decrease
as we further optimize the design of the experiment and make the simulation more realistic. For example, the
most significant background in Table \ref{tab:back_rates} is partly from Compton scattering of M\o
ller-generated $\gamma$ rays in air. At 0.6\%, it seems unusually large for a 2-bounce background; while this
background cannot be eliminated completely without replacing the air with vacuum, preliminary studies suggest
that it will be reduced by 2/3 with the incorporation of a shield house to reduce the solid angle acceptance for
this background source. It will be reduced further if a thin dead-layer is placed in front of the main
detector bars to stop electrons below a few MeV.
We have a good understanding of expected 0-bounce backgrounds in $Q_{weak}\;$, $i.e.$ backgrounds coming directly from
the experimental target such as target window backgrounds and inelastic electrons from pion electroproduction.
There has been significant refinement in the inelastic studies, described in detail in the following section.
\begin{table}
\begin{center}
\caption{ {\em Background rates in the $Q_{weak}\;$ main detectors (relative to the elastic rate and weighted by
relative asymmetry and light production) as predicted by the $Q_{weak}\;$ simulation using the latest collimator
design plus an internal ``lintel'' collimator to block photons, but no detector shield house. The highest (M\o
ller) rates will be reduced by a shield house; the next highest rate (inelastic electrons) is very sensitive to
the shielding house aperture, which remains to be optimized. It is important to note that these are background rate
estimates, and not the uncertainties to which they can be corrected -- corrections that will further suppress their contribution
to the final experimental uncertainty. }} \vspace{0.25in}
\begin{tabular}{|l|c|}
\hline Background & Rate \\ \hline M\o ller Electrons & 0.58\% $\pm$ 0.04\% \\ \hline M\o ller Photons & 0.21\%
$\pm$ 0.01\% \\ \hline Inelastic Electrons & 0.250\% $\pm$ 0.012\% \\ \hline
Elastic Photons & 0.15\% $\pm$ 0.005\% \\ \hline
Inelastic Photons & negligible \\ \hline
\hline
\end{tabular}
\label{tab:back_rates}
\end{center}
\end{table}
\begin{figure}[hbtp]
\begin{center}
\rotatebox{0.}{\resizebox{6in}{6in}{\includegraphics{4_panel_errors_new.eps}}}
\end{center}
\caption{ {\em Systematic study for E = 1.165 GeV in which the QTOR magnetic field scale factor, BFIL, and the
position of a radiator bar are varied
to optimize running conditions. The `x' coordinate refers to the radial distance from the beamline of the lower end of the top detector bar.
Clockwise from the upper left panel:
statistical error on the proton weak charge, inelastic background correction
required (weighted
for both rate and asymmetry),
the first moment of $Q^2$ neglecting detector bias, and the elastic rate
on a single bar. The nominal operating point we have selected is BFIL = 1.04 with the lower edge of the radiator
at 319 cm. } }\label{JulietteScan}
\end{figure}
\subsection{Inelastics from pion electroproduction}
\label{inelast-back}
Most of the inelastic electrons due to pion electroproduction are swept off the radiator bars by the QTOR
magnetic field. A seemingly negligible fraction, only 0.02\% by rate, strikes the outer radial edges of the
bars. However, since the inelastic asymmetry is expected to be an order of magnitude larger than the elastic
asymmetry\cite{hammer}, the correction for this inelastic background is estimated to be about 0.2\%.
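The size of this correction follows directly from the figures just quoted, scaling the rate fraction by the
asymmetry ratio:
\[
f_{inel}\times\frac{A_{inel}}{A_{el}} \approx 0.02\% \times 10 \approx 0.2\%.
\]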
Since the last PAC update, we have systematically examined the dependence of the $Q_{weak}\;$ statistical error and
the inelastic background on the radial position of the radiator bars. As the lower edge of a bar is moved to
larger radius, the statistical error on $Q_{weak}\;$ initially decreases, as shown in the upper left panel of Figure
\ref{JulietteScan}. However, if the radial coordinate increases too much, the statistical error increases again
because the elastic locus begins to slip off the lower edge of the bar. Furthermore, an increasing radius
corresponds to larger energy loss, resulting in a rapid increase in the inelastic contamination with increasing
radial position, as shown in the upper right panel of Figure \ref{JulietteScan}.
In this trade-off between statistical and systematic errors, we have conservatively assumed a relatively
large uncertainty on the inelastic background. In the lower right panel of Figure \ref{JulietteScan}, one notes
a few cm wide plateau in which the average $Q^2$ is fortuitously stationary, which would allow us to reduce
another potential systematic error. For these reasons, we have chosen a nominal radius of 319 cm until we can
confirm the simulations during commissioning.
It is expected that our tracking detectors will permit a clean separation of elastic scattering and pion
electroproduction near threshold, allowing us to determine the relative rate of inelastic tracks to high
accuracy. As for the inelastic asymmetry, since it is expected to be relatively large, it should be possible to
measure it in current mode with small statistical errors in only a day of beam time. This will require lowering
the QTOR field, which will change its focusing properties and will dump elastic electrons onto the front of the
detector shielding wall. The uncertainty in the inelastic asymmetry measurement will be dominated by
systematics, such as in-showering from elastic electrons striking the inner radial edge of the shield house
windows. Data from the $G^0$ backward angle run will also provide a cross-check on predictions of the inelastic
asymmetry.
\subsection{M\o ller scattering ($e+e\rightarrow e+e)$}
\label{soft-back}
During the final optimization of the experimental layout, we moved our defining collimator downstream to improve
several important contributions to the figure of merit. However, it then became possible for the main detector
to directly view an illuminated portion of the defining collimator, producing an ${\cal O}(1\%)$ soft background.
M\o ller electrons have fairly low energy in the neighborhood of 100 MeV, so they are not transmitted through
the QTOR field and into the main detector region. In simulations, these low energy electrons produce a rather
distinctive ``fountain'' as they are repelled by the QTOR field. However, since the M\o ller cross section is
roughly 1000 $\times$ larger than the $ep$ elastic cross section at our kinematics, dumping them on the 2nd,
acceptance-defining collimator then produced a $\gamma$ flux through the main detectors which rivaled the flux
of elastic electrons.
Increasing the spectrometer bend angle would in principle allow the 3rd collimator to block all of the hot spot
on the 2nd, defining collimator. However, as this would move the elastic focus too far upstream, it was
rejected as an undesirable option. Our solution is to simply insert a single lead baffle into each octant of
QTOR to block the $\gamma$ rays directed toward the main detector, as illustrated in figure \ref{fig:sideview}.
This additional element is 15 radiation lengths of lead (8.4 cm) thick, 16 cm high, and 62 cm wide; it is
referred to as the ``lintel'' collimator as it sits between QTOR coils like a lintel. Alignment requirements for
the lintel are relatively loose: there will be 1 cm separation between the nearest edges of the elastic electron
envelope and the $\gamma$ rays we wish to block.
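As a quick consistency check on the quoted geometry, the standard radiation length of lead is $X_0 \approx
0.56$ cm, so $15\,X_0 \approx 8.4$ cm, matching the stated lintel thickness; photons traversing this much lead
are attenuated by several orders of magnitude.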
\begin{figure}[ht]
\centerline{\includegraphics[width=6.0in]{sideview_with_events_new.eps}} \caption{{\em Side view of the $Q_{weak}\;$
apparatus with simulated events. Electrons, shown in red, are selected by collimator \#2 and are deflected by
QTOR toward the main detector \v{C}erenkov bar. Some electrons hit the collimator \#2 aperture and produce a
photon shower that is in direct line of sight to the \v{C}erenkov bar. These photons are blocked from hitting
the bar by a lintel collimator shown in white in the top octant.
}}
\label{fig:sideview}
\end{figure}
We also checked and found the lintel itself to be a negligible source of additional background. It is located
deep enough inside QTOR that most of the M\o ller electrons do not penetrate to that distance, but not so deep
that much dispersion has built up for low energy loss electrons. Thus, only a narrow range of high energy loss
electrons are intercepted by the lintel, and the lintel is thick enough to stop most of the shower products.
Most of the small, remaining M\o ller-generated photon background (0.2\%) originates from azimuthally defocused
M\o ller electrons which strike the QTOR coils before they are ejected by the magnetic field.
\subsection{Beam-defining collimator}
As part of our overall strategy to minimize background in the main
detector, we plan to insert a beam-defining collimator downstream of the
target. Without this, scattered events from the target would
directly illuminate the entire beampipe from shortly downstream of the
target to the exit of the QTOR beampipe, presenting a difficult shielding
problem.
Of course, events can still strike this
region of the beamline after one bounce from the beam-defining collimator. The beamline inside the detector hut
is therefore a source of 2-bounce background, but it is separated from the main detectors by 4'' of lead
shielding. Simulations still have to be done to verify whether this is sufficient. It would be expensive and
cumbersome to upgrade the beamline shielding, but on the other hand, there is plenty of room for additional
local shielding of the main detector modules.
To describe the plan in more detail, small angle scattered particles (in the 0.5--4.5$^{\circ}$
range) will be blocked by a $\sim$ 20 radiation length tungsten collimator
about 1 meter downstream of the target. This collimator is designed so
that any scattered particles that do not interact with it experience their
first interaction in the beampipe far downstream of the main detectors.
The collimator will be fabricated from a ductile, sintered W-Cu mixture to
prevent shattering in the event of a beam strike. The site boundary dose
is being calculated, as is the power deposition in the collimator. The
results of these calculations will allow us to finalize the design of the
beam-defining collimator and its local shielding. In a small region just
downstream of the target, residual activities are expected to quickly
exceed the threshold for a High Radiation Area (1 R/hour on contact).
\subsection{Neutrons and other soft backgrounds}
A \v{C}erenkov radiator medium only produces light due to the passage of relativistic, charged particles.
Furthermore, our Spectrosil 2000 radiator material consists of almost pure $SiO_2$, so it contains no free
protons and has a very small scintillation coefficient. Our main detector is therefore insensitive to neutrons,
but light production by neutrons is still possible by multi-step processes, and the backgrounds may be
significant if the neutron field is sufficiently intense. We are steadily acquiring the tools to simulate this
difficult problem, and since any simulations would have to be
benchmarked, a proposal to put some of our detector elements into an epithermal neutron beam
at LANSCE for benchmarking purposes has been submitted.
Below we summarize the scope of an initial neutron simulation and how neutron backgrounds can
be quantified.
The principal light production mechanism by neutrons in our detectors will be from the production of an excited
compound nucleus via neutron absorption, which decays to the ground state, emitting $\gamma$-rays. The
$\gamma$-rays then Compton scatter from the atomic electrons. Depending on the particular element, the isotope
may then decay further emitting either a $\beta$ or $\alpha$. However, both $^{29}Si$ and $^{17}O$ are stable
isotopes, so no such further decay occurs to first order. Because low energy neutron capture cross sections
are proportional to $1/v$, with $v$ being the velocity of the neutron, upcoming simulations will focus on
thermal to epithermal neutrons which we refer to henceforth as slow neutrons. Our first concern will be the
$SiO_2$ radiator material due to its large volume and direct optical coupling to the PMT's. Simulations will
focus on $Si$ because its slow neutron capture cross sections are many orders of magnitude larger than those on
$O$ and yield a fairly hard $\gamma$-ray spectrum.
Backgrounds due to activation of detector materials with decay time constants $\gg$ 1 second
will show up as an apparent slow shift in the detector pedestal. Provided that we monitor this pedestal shift,
there will be no signal dilution. Since the accelerator routinely trips off every 10 minutes or so, and an
accurate pedestal reading takes far less than a second, we will have plenty of data to quantify such long
time-constant backgrounds no matter where
the neutron capture occurs (the radiator, the PMT's, detector support structures, shielding
concrete, etc.). Solid-state relaxation phenomena in the fused silica radiators, such as long-lived luminescence,
can be studied in the same fashion.
Any soft background with a decay time constant $\ll$ 1 second cannot be treated as a pedestal shift and so will
produce a dilution of the elastic electron signal. To help quantify these effects, we plan to move soft
background detectors to various locations inside the detector shield house. These signals will be acquired with
the preamplifiers set to 50 times higher gain than nominal to provide sensitivity of better than 0.1\%. Three
movable types of background detectors (a complete detector assembly, a PMT in a small dark box, and a
preamplifier) will help us understand the source of any background. This information can be combined with
simulations and dosimetry information from standard TLD's to suggest shielding improvements as needed.
The soft background detectors described above are most useful for diffuse backgrounds.
However, the main detectors can also see soft backgrounds which are beamed through the
window in the shield house wall. In principle, this can be quantified during pulsed-mode
running by triggering on the main detector with a very low (0.5 photoelectron) threshold and looking for an
appropriate minimum ionizing hit in the overlapping trigger scintillator. The bias would be small, but it might
be difficult in practice to set the threshold that low. A more robust and completely unbiased method will be to
take continuous blocks of main detector and scintillator data using 250 MHz flash ADCs, and correlate the two
signals offline. Appropriate prototype modules from the JLab Electronics Group are at least a year overdue, but
may be available soon. If it looks like production versions of the JLab flash ADC's won't be available in time,
then we will purchase much more expensive but off-the-shelf versions from Struck.
Since our last Jeopardy proposal, we understand better how the very high single photoelectron rates from our S20
photocathodes will affect the soft background measurements described above. Such dilution will be completely
negligible during production running. However, during pulsed mode running, reducing the dark rate dilution to
${\cal O}(0.1\%)$ will require Region III rates of 0.5 MHz, hand-picking our lowest noise PMTs, and a well
air-conditioned operating environment.
\section{Systematic Errors and Polarized Source Requirements}
\label{systematics}
Changes of beam properties with helicity can lead to false parity asymmetries. Parity violating scattering
experiments generally have dealt with this by keeping helicity correlations as low as possible, by measuring residual
correlations and by making corrections for them based on measured sensitivities. The measured parity asymmetry,
$A_{meas}$, is written in terms of the physics asymmetry, $A_{phys}$, in the following way for sufficiently small
helicity correlations:
\begin{equation}
A_{meas} = A_{phys} + \sum_{i=1}^{n} \Big(\frac{\partial A}{\partial P_i}\Big)\delta P_i,
\label{eq:corr_procedure}
\end{equation}
where beam parameter $P_i$\ changes on helicity reversal to $P_i^{\pm} = P_i \pm \delta P_i$. The detector
sensitivities $\partial A/\partial P_i$\ can be determined preferably by deliberate modulation of the relevant
beam parameter or from natural variation of beam parameters. The helicity-correlated beam parameter differences,
$\delta P_i$, are measured continuously during data-taking. From estimates of the sensitivity of our apparatus,
we can set requirements on how accurately beam parameters have to be measured and upper limits on acceptable
helicity-correlated beam properties.
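To make the correction bookkeeping concrete, the sketch below applies Eq.~(\ref{eq:corr_procedure}) for two
beam parameters; the sensitivities and raw asymmetry are illustrative assumptions (chosen so each term sits
near the false-asymmetry goal discussed below), not measured values.
\begin{verbatim}
# Minimal sketch of the correction procedure (illustrative numbers only)
sens = {"x": 3.0e-10,      # dA/dP for position, per nm (assumed)
        "theta": 2.0e-11}  # dA/dP for angle, per nrad (assumed)
hc_diff = {"x": 2.0,       # run-averaged <dx> [nm] (assumed)
           "theta": 30.0}  # run-averaged <dtheta> [nrad] (assumed)
A_meas = -2.9e-7           # raw measured asymmetry (assumed)

A_false = sum(sens[k] * hc_diff[k] for k in sens)
A_phys = A_meas - A_false
print(A_false, A_phys)     # each term contributes ~6e-10 here
\end{verbatim}
In practice the sensitivities would come from deliberate beam-modulation runs, and each correction carries its
own uncertainty that must be propagated into the final error budget.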
The 2004 update proposal described GEANT simulations that led to predictions of the sensitivity of our apparatus
to helicity-correlated beam intensity, position, angle, and size modulations, and set limits on both DC and
helicity correlated beam property values aimed at constraining individual beam-related false asymmetries to be
no larger than 6 $\times 10^{-9}$, i.e. the same size as the statistical error in the parity asymmetry
measurement. Ancillary measurements and diagnostic apparatus must be sufficiently sensitive to permit
systematic error corrections to be made to $\pm$ 10\% of this upper limit in each case.
These sensitivity estimates have been updated as the designs of the target, collimator and detector systems have
evolved to their current, final specifications. In addition, considerable experience has been gained from recent
PVES experiments at JLab, and it is now both advisable and feasible to aim for more stringent control of most
beam-related systematics for $Q_{weak}\;$. Accordingly, our goal is now to keep individual beam-related false
asymmetries to be no larger than 6 $\times 10^{-10}$. The only exception is for helicity-correlated size
modulation, where we are developing techniques to measure small modulations for the first time at JLab -- in
this case, we retain our previous criterion for the maximum false asymmetry to be no larger than 6 $\times
10^{-9}$ to set an initial goal for source and instrumentation development to address this challenging
systematic issue.
\subsection{Summary of Beam Requirements}
Table \ref{Table:BeamSpec} gives limits on allowable beam properties and detector asymmetry which should keep
any false asymmetry generated by helicity correlations in the beam to less than $6\times 10^{-10}$. Column 2
gives the limit on DC values while column 3 shows the limit on the helicity-correlated properties averaged over
the whole run. Column 4 shows the allowable random noise in a measured beam parameter which is consistent with
meeting our systematic error goals.
\begin{table}[h]
\caption{{\em Summary of systematic error requirements for $Q_{weak}\;$.}\label{Table:BeamSpec}} \vspace*{0.2cm}
\begin{tabular}{l|c|c|c} \hline
Parameter & Max. DC value & Max. run-averaged & Max. noise during \\
& & helicity-correlated value & quartet spin cycle \\
& & (2544 hours) & (8 ms) \\ \hline
Beam intensity & & $A_Q < 10^{-7}$ & $< 3 \times 10^{-4}$ \\ \hline
Beam energy & $\Delta E/E \leq 10^{-3}$ & $\Delta E/E \leq 10^{-9}$ & $\Delta E/E \leq 3 \times 10^{-6}$ \\
& ($Q^2$\ measurement) & 3.5 nm @ 35 mm/\% & 12 $\mu$m @ 35 mm/\% \\ \hline
Beam position & 2.5 mm & $\langle \delta x \rangle < 2$\ nm & $7 \ \mu$m \\
\hline Beam angle & $\theta_0 = 60\ \mu$rad & $\langle\delta\theta \rangle < 30$\ nrad & $100\ \mu$rad
\\ \hline
Beam diameter & 4 mm rastered & $\langle\delta\sigma \rangle <$0.7 $\mu$m & $< 2$ mm \\
& ($\simeq 100\ \mu$m unrastered) & (unrastered) & \\ \hline \hline
\end{tabular}
\end{table}
\vspace*{0.2cm}
\subsection{Parity Quality Beam at JLab}
\paragraph*{Beam Intensity:}
The $Q_{weak}\;$ experiment must control the integrated beam intensity asymmetry between the beam helicity states to
be smaller than 0.1 ppm. The HAPPEx and G0 collaborations have demonstrated control of this asymmetry at the
level of 0.2 ppm, via careful setup and implementation of a feedback system. The intensity asymmetry was
measured continuously using beam charge monitors in the experimental halls. The resulting values were used to
determine the necessary corrections, which were applied at the polarized source.
The dominant cause of intensity asymmetry in the polarized source is a difference between the small residual
component of linear polarization in the nearly (99.9\%) circularly polarized laser beam for each helicity state.
This difference interacts with the intrinsic analyzing power in quantum efficiency of the strained GaAs
photocathode to modulate the electron beam intensity, correlated with helicity. While it is possible to
compensate for this in a feedback loop by selectively attenuating the laser intensity, this approach does not
eliminate the difference in linear polarization which gives rise to the effect in the first place. Linear
polarization components also contribute to changes in the electron beam trajectory and spot size, and in fact
the laser attenuation system itself can also contribute to changes in the electron beam trajectory.
A preferred approach to reduce intensity asymmetries is to correct the difference in linear polarization of the
laser beam between helicity states by applying offset voltages to the Pockels cell used to create the circular
polarization. Electron beam intensity asymmetries can be adjusted using these voltage offsets, which are
commonly referred to as ``PITA'' voltages. In beam studies, helicity-correlated changes in electron beam {\em
position} have consistently been seen to be reduced when the charge asymmetry was corrected using the PITA
voltage. This effect is not typically observed with intensity-attenuation methods. While both correction
mechanisms have been successfully employed in feedback control of electron beam intensity asymmetries, the PITA
mechanism is preferred for the reasons explained here.
For $Q_{weak}\;$, we anticipate using a similar feedback system to that used by G0 and HAPPEx-II, but updating PITA
voltage corrections on the time scale of 10-100 seconds, with only small upgrades required to accommodate the
faster helicity flip rate.
\paragraph*{Beam Position and Angle:}
The $Q_{weak}\;$ experiment must control the helicity-correlated asymmetry in beam position to 2 nm and in angle to 10
nrad. The HAPPEx-II collaboration, working with the electron gun group, was very successful at controlling
these position differences at the source through a combination of carefully selected laser optics components and
novel alignment techniques~\cite{KDP2007}. In addition, significant work was performed by CASA physicists to
maintain the electron beam optics throughout the machine close to design specification, thereby avoiding
phase-space correlations which might exaggerate intrinsically small helicity-correlated effects. As a result,
helicity-correlated position differences, averaged over the HAPPEx-II run, were held to $< 2$~nm and angle
differences to $<1$~nrad, without active feedback on the beam trajectory.
In contrast, to control helicity-correlations in beam position, the G0 experiment used the ``PZT system'',
which consists of a mirror in the laser beam path mounted on a piezo-electric transducer. The laser beam
position could be adjusted in a helicity-correlated way to compensate for any helicity-correlated beam position
measured in the experimental halls. While this system did achieve the desired specifications, it was difficult
to maintain for two reasons. The response of the system would change with the tune of the accelerator, so the
system had to be recalibrated every 2-3 days. Secondly, there was a significant coupling between
helicity-correlated position differences and intensity asymmetries due to scraping at apertures in the injector.
For $Q_{weak}\;${}, we are developing the capability to control the position and angle of the beam using corrector
coils either in the Hall C beamline or in the 5 MeV region of the injector, which is downstream of where most of
the significant interception of the beam on apertures occurs. This will eliminate the problem of the coupling
between helicity-correlated position differences and intensity asymmetries. If the corrector coils are
implemented in the Hall C line, then the calibration of the system should be much more constant and independent
of the accelerator tune. Finally, the current PZT system only really allows adjustment of helicity-correlated
position differences at the experimental target. A system based on correction coils can be used to
independently null both helicity-correlated position and angle differences at the $Q_{weak}\;${} target.
It is worth noting that the estimated sensitivity of the individual HAPPEx detectors to helicity-correlated beam
position differences is approximately the same (within a factor of two) as that for the individual $Q_{weak}\;$
detector elements. The symmetric cancellation between the left and right High Resolution Spectrometers used by
HAPPEx-II was imperfect, and led to only a factor of $\sim 5$ reduction in sensitivity to beam motion. In
addition, with only two independent detectors, it was difficult to demonstrate the precision of the final
correction to better than, approximately, the size of the correction itself. $Q_{weak}\;$, with an 8-fold symmetric
detector system, will be capable of more complete cross-checks of the applied corrections; a factor of 30
reduction in sensitivity to beam motion is expected by averaging over all 8 detector elements in $Q_{weak}\;$.
A more subtle advantage of $Q_{weak}\;$ over the HAPPEx-II effort lies in its comparatively longer running time. The
small averaged helicity-correlated position changes observed during HAPPEx-II were all consistent with the
statistical noise expected from the observed magnitude of beam jitter. That is, the approximately 1~nm observed
position difference represents an upper limit on the true systematic change in beam position under helicity
reversal. Assuming that the random jitter in beam trajectory is not larger for $Q_{weak}\;$ than for HAPPEx-II, the
longer $Q_{weak}\;$ running time will allow a measurement of systematic position differences, at the level specified
for systematic error control, early in the running period.
\paragraph*{Beam Size:}
The $Q_{weak}\;$ experiment requires that the beam spot size must not change by more than 0.7 $\mu$m ($\delta \sigma$)
upon helicity flip. While this effect was estimated to be negligible in previous measurements at Jefferson Lab,
it is potentially important for the high precision of the $Q_{weak}\;$ experiment. Studies of elements of the
polarized source optics made in preparation for the SLAC E-158 experiment have suggested that spot size
asymmetries larger than $10^{-3}$ (or 0.1 $\mu$m) are unlikely. Further studies are planned using a test bed
being developed at the University of Virginia, with the goal of placing a firm upper-limit on the possible
helicity-correlated spot size asymmetry which could be generated in the polarized electron source.
\paragraph*{Transverse Beam Polarization:}
The two photon exchange terms in the elastic scattering process produce a transverse beam spin asymmetry of the
order of $10^{-5}$ to $10^{-6}$ \cite{Carlson:2007sp,Afanasev:2004hp_v2}, comparable to the PV asymmetry. A residual
transverse component of the beam polarization will result in contamination of the PV asymmetry measurement with
the beam normal single spin asymmetry, which will largely cancel when averaged over the 8 independent main
detector elements. A reasonable limit for beam spin alignment for a precision PV measurement would be to limit
the transverse polarization component to 5\%, which corresponds to a beam spin alignment of about 3$^{\circ}$\ to
longitudinal.
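The quoted angle is just the geometry of the spin vector: a transverse polarization fraction of
$\sin\theta_{spin} \leq 5\%$ requires $\theta_{spin} \lesssim \arcsin(0.05) \approx 3^{\circ}$ from
longitudinal.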
To verify the beam spin alignment in Hall C, it will be necessary to conduct a mini-spin dance during the
commissioning period, after major accelerator reconfigurations, such as energy changes, and after extended
accelerator down periods. In a mini-spin dance, the longitudinal polarization of the beam in Hall C is measured
as a function of the angle setting of the Wien filter in the injector to determine the Wien filter setting to
maximize the longitudinal polarization. Typically data are taken at four or five Wien filter settings,
requiring about one shift of beam time.
It should be noted that azimuthal asymmetries measured with the $Q_{weak}\;$ luminosity monitors can serve as a
monitor of the transverse component of the beam spin during standard running periods. This technique was used
successfully in the G0 backward angle measurement. Because the luminosity monitors can accept a mixture of M\o
ller and $e-p$ electrons, the transverse asymmetry can be difficult to calculate; however, luminosity monitor
measurements taken during either a mini-spin dance, or during dedicated transverse asymmetry measurements, will
allow us to calibrate the luminosity monitor azimuthal asymmetry as a monitor of the transverse component of the
beam polarization. As part of the spin dance program, brief runs with fully vertical and fully horizontal transverse
polarization will be needed to cross-check our ability to measure azimuthal asymmetries.
\subsection{Recent Progress in the Polarized Source}
In addition to the reduction of helicity-correlated position differences, efforts have been invested in making
the Jefferson Lab polarized source increasingly robust in high-current operation. The strained-superlattice GaAs
cathodes are now considered standard photocathode material at CEBAF, with demonstrated rugged and reliable
performance at polarization $\sim$85\%. The modelocked Ti-Sapphire lasers, which could be balky and difficult to
maintain, have been replaced with reliable fiber-based drive lasers. The purchase of a powerful fiber amplifier
is planned for $Q_{weak}\;$ to provide more laser headroom for longer periods of uninterrupted operation. A new
``load-locked'' photogun has been installed at the CEBAF photoinjector during 2007 summer shutdown. This new
gun was commissioned at the Injector Test Cave and demonstrated improved high current performance compared to
the ``vent/bake'' guns that have been used since 1998. Besides improved vacuum, the gun design accommodates
rapid photocathode reactivation and replacement, to minimize accelerator downtime.
It has been standard practice at Jefferson Lab to flip the helicity state of the beam at a rate of 30~Hz.
Much more rapid helicity reversal, in the range of 125-500~Hz, is planned for $Q_{weak}\;$. A new Pockels cell high
voltage switch has been developed to provide 500~Hz (1 ms) helicity flipping. The Pockels cell voltage is flipped
using LED-driven opto-couplers placed directly on the cell, thereby eliminating the capacitance of a long cable.
Rise/fall times around 50~$\mu$sec at 500~Hz flip rate were measured in bench tests. This new switch was
installed at the CEBAF photoinjector, with beam tests to happen soon.
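If the bench-test transition times hold with beam, the associated dead time is modest: blanking the
$\approx$50~$\mu$s transitions out of each $\approx$1~ms helicity window at 500~Hz costs roughly 5\% of the
data-taking time, and proportionally less at lower reversal rates.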
In addition we anticipate that the newer available reversal rates of 125~Hz (up to 500~Hz) will improve many of
the helicity correlated properties of the beam. The higher reversal will also provide oversampling capability to
observe 60~Hz and multiples of the line frequency. Ideally, oversampling and real time analysis should allow us
to provide guidance to the source group as to when line noise is unacceptably large, as well as a capability to
reduce 60~Hz noise at the front end of the accelerator prior to our production run. These efforts are underway,
and significant progress has already been achieved in minimizing line correlated beam modulations.\\
\subsection{Low Current Operation During Calibration Running}
During the $Q^2$ calibration running with the tracking system, the beam current will be reduced to achieve
acceptable rates to run in pulse counting mode. The drift chambers that set the upper limit on the beam current
in order for the full tracking system to run are the region 2 horizontal drift chambers. Due to the high flux
of low energy M{\o}ller electrons, the beam current needs to be $\sim$0.15~nA in order for the count rate in
these chambers to be tolerable (approximately 400~kHz). In March 2007, the collaboration performed beam tests in
collaboration with the accelerator division to show that this low beam current could successfully be delivered
to and monitored in Hall C.
\newpage
A procedure has been developed for establishing low beam currents using the polarized source laser attenuator
and the injection region chopper slits. The procedure was robust enough that the accelerator operator on shift
was easily able to adjust the beam current for us over a large dynamic range (0.15 nA - 5 $\mu$A) on demand with
only minimal ($<$ 10 minute) wait times. The stability of the low current 0.15 nA beam in Hall C was measured
using a lucite {\v C}erenkov detector detecting scattered electrons at small angles from an aluminum target.
Figure~\ref{beamstability} shows a typical stability plot measured over 1000 seconds. The observed $\pm$10$\%$
stability is adequate for our purposes.
\begin{figure}[htbp]
\centerline{\includegraphics[width=5.5in]{beamstability.eps}} \caption{\em Rate in lucite detector versus time
for 0.15 nA beam current running.} \label{beamstability}
\end{figure}
Most important for the $Q^2$ determination is the stability of the beam position, angle, and size. This was
monitored by recording the count rate in the lucite detector as a superharp monitor with tungsten wires was
slowly scanned through the beam. Typical results of such a scan are shown in Figure~\ref{beamposition}. The
conclusion of several scans like this over several hours was that the beam positions varied at most by 0.3 mm
over that time period. This is more than adequate for our purposes. Simulations have shown that a 0.3 mm
position shift corresponds to a worst case of $<$0.18$\%$ shift in the measured value of $Q^2$.
\begin{figure}[h!]
\centerline{\includegraphics[width=5.5in]{beamposition.eps}} \caption{\em Rate in lucite detector versus wire
position during a slow superharp scan.} \label{beamposition}
\end{figure}
\section{Cryotarget}
\subsection{Specifications} The $Q_{weak}\;$ $LH_2$ target must be 35 cm long and exhibit azimuthal symmetry. Density
fluctuations must not contribute significantly to the asymmetry width in the experiment, even though the target
will be used with up to 180 $\mu$A of 1.165 GeV beam. Sufficient cooling power must be provided to remove 2.5 kW
of power. The target must provide up to $\pm$2" of horizontal motion as well as enough vertical motion to
accommodate a dummy target plus several solid and optics target configurations. A CAD model of the target is
shown in Fig.~\ref{targetcad}.
\begin{figure}[hp]
\begin{center}
\rotatebox{0.}{\resizebox{3.0in}{!}{\includegraphics{pic1.eps}}}
\end{center}
\caption{{\em CAD model of the target as it stands in fall 2007. The entire system is hung off the top plate,
which also supports the lifter motor, relief stack, and cooling connections. The target cell is located near the
bottom of the picture. Each leg of the target loop contains a heat exchanger, one for 4K and one for 15K Helium
coolant. The pump occupies one of the upper corners of the loop. }} \label{targetcad}
\end{figure}
\subsection{Cooling Power} In the fall of 2004, a scheme was worked out with laboratory management, the target
and cryo groups, which essentially guaranteed our experiment the cooling power it needs. Many options were
studied. Several of these were found to be viable. The default scheme agreed upon at these meetings was the one
which was easiest and cheapest to implement, and had the least impact on other halls and the FEL while still
meeting the minimum requirements of the experiment. This scheme requires us to design and build two independent
heat exchangers. One will remove heat from the target using 4K He coolant from the excess capacity of the CHL.
The other will make use of the more traditional 15K He coolant from the ESR. The 1.2 kW capacity of the ESR is
thus augmented by the CHL to meet the needs of the $Q_{weak}\;$ experiment. M\o ller operation is taken into account,
as well as low power experiments in Hall A. An improvement to the scheme was undertaken in FY07, to build a
(portable) heat exchanger external to the target which will recover the unused enthalpy in the returning CHL
coolant and supply it to the ESR.
\subsection{Cell Design} Computational Fluid Dynamics (CFD) codes were employed (for the first time in the design
of a cryotarget, as far as we are aware) to study various cell designs. Variations on both longitudinal (G0 and
SAMPLE-like) cells as well as transverse cells were studied. Temperature and density profiles of the $LH_2$
flowing through the cells were obtained for realistic mass flow (1.1 kg/s), beam power deposition (2.5 kW in a
4$\times$4 mm$^2$ raster area) and initial thermodynamic conditions (20 K \& 50 psia). Window temperatures were
tabulated for each design where the beam enters and exits the cell to characterize the unavoidable film boiling
in those regions. Monte Carlo calculations were undertaken to assess the impact of basic cell geometries on the
backgrounds in the experiment. While some work in this area remains to be done, the CFD calculations have
steered us to a basic transverse flow cell design (see Fig.~\ref{targetcfd}) consisting of a conical shape which
puts all the scattered electrons of interest out normal to the exit window of the cell. The input manifold will
direct flow across both the entrance and exit windows as well as across the middle of the cell. The exit
manifold is a simple slot along the length of the cell. The cell volume is about 5 liters, and the head loss is
less than 0.4 psi for the design flow of 15 liters/sec ($\simeq$1.1 kg/s mass flow).
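The quoted mass flow is consistent with the volume flow for liquid hydrogen near 20 K, taking $\rho \approx 71$
kg/m$^3$:
\[
\dot{m} = \rho\,\dot{V} \approx 71\ {\rm kg/m^3} \times 0.015\ {\rm m^3/s} \approx 1.1\ {\rm kg/s}.
\]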
\begin{figure}[htb]
\begin{center}
\rotatebox{0.}{\resizebox{4.0in}{!}{\includegraphics{Celldensity.eps}}}
\end{center}
\caption{{\em Computational fluid dynamics calculation showing the density profile of the $LH_2$ flowing in the
target (from the bottom to the top in the figure). Density units are kg/m$^3$.}} \label{targetcfd}
\end{figure}
\subsection{Heat Exchangers} The detailed heat exchanger design was approved by the cryo group as part of our
cooling power negotiations. All the parts for both the 4K and 15 K heat exchangers have been ordered and are on
site, including the pre-wound coils of 0.5" diameter finned Cu tubing (see Fig.~\ref{targethx}). The heat
exchangers were designed to provide a huge overhead in cooling power. We are now thinking about scaling back the
design to achieve a more modest cooling power overhead in order to reduce volume and with it $LH_2$ inventory.
Two schemes to do that are presently being considered, each of which makes use of the fin tube coils already on
hand. In parallel, a flow diagram is being prepared which describes the plumbing required for $Q_{weak}\;$ in the
hall.
\begin{figure}[htb]
\begin{center}
\rotatebox{0.}{\resizebox{3.3in}{!}{\includegraphics{hx.eps}}}
\end{center}
\caption{{\em Photo showing the main components of the heat exchangers.}} \label{targethx}
\end{figure}
\subsection{Pump} Considerable attention was given to calculating realistic specifications for the pump: volume
flow and head loss around the entire target loop. Together these parameters give rise to viscous heating in the
loop which can quickly become a show-stopper in terms of the available cooling power if not kept in check. We
eventually settled on 15 l/s volume flow and 2 psi head for the pump specifications. Our actual calculated head
loss was only 1.3 psi for the loop, but given the difficulty of this type of calculation, and the assumptions
that have to be made about geometries that are still in a bit of flux, we settled on 2 psi as a conservative
head specification for the pump. This keeps viscous heating below a few hundred Watts. The calculation was first
baselined to the G0 target, and satisfactory agreement with measured head loss and flow in that target was
obtained.
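Taking these specifications at face value, the viscous heat load is bounded by the product of the pump head and
the volume flow (2 psi $\approx$ 13.8 kPa):
\[
P_{visc} \leq \Delta p\,\dot{V} \approx 13.8\ {\rm kPa} \times 0.015\ {\rm m^3/s} \approx 210\ {\rm W},
\]
consistent with the ``few hundred Watts'' quoted above.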
The $Q_{weak}\;$ flow and head parameters point squarely to a centrifugal pump design. The requirement that the target
be able to move horizontally as well as vertically favors a submersible design over one with an external motor.
Quotes from commercial pump vendors for a pump meeting our specs exceed our budget. As a result, an
effort to build one in-house is just getting off the ground. Tests planned for the spring and summer 2008 will
tell us if our in-house effort has been successful or not. If not then we will still have time at that point to
go commercial, with a little help from the lab to push our budget envelope.
\subsection{Target Motion} The target motion systems have been designed and most of the parts have been procured.
A weight of 2000 lbs was assumed for the design of both systems. The horizontal motion system will provide
$\pm$2" of travel. The vertical lifter will provide 22" of travel, enough for the $LH_2$ target, dummy targets
for background subtraction, and several solid targets including optics foils, and a target out position. A
position repeatability of better than 13 $\mu$m can be achieved. As part of this effort, the scattering chamber
was also designed and most parts procured, as well as the $H_2$ relief system internal to the scattering
chamber.
The latter will consist of a 2 $\frac{7}{8}$" diameter cold, straight pipe inside a concentric heat shield,
which penetrates the top lid of the scattering chamber into a concentric bellows to accommodate the full range
of motion of the target lifter system. The straight pipe will tie in to the loop via a short length of 3" flex
hose at the top of the loop, which accommodates the small horizontal motion. The design further accommodates a
small diameter fill line connected to the opposite side of the pump. This small $\frac{1}{4}$" line, needed to
measure the pump head, will be situated inside the larger return pipe and thus will share the return line's
thermal shield and bellows.
Both a thick and a thin dummy target are planned. Both are being designed such that scattered electrons which
reach the quartz do not pass through any of the other targets. The optics targets will be used primarily to tune
the vertex reconstruction from the region 1 and 2 chambers. Separate vertical and horizontal wire grids will be
placed at several z locations, with the raster system set up to illuminate all wires. The solid targets
envisioned include a hole target for beam position and halo studies, a BeO viewer, and a C target for basic
tuneup operations.
\subsection{Heater}
The first of two heaters has been successfully built and characterized in LN2 at Mississippi State. The heater
is wound in four parallel sections from 0.057" diameter nichrome wire. The
resistance at 80K was 1.226 Ohms. Based on these actual measurements, two 60V, 50A heater power supplies were
purchased.
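As a simple sizing check (assuming the measured 1.226 $\Omega$ is the total parallel resistance seen by one
supply), the purchased supplies are well matched to the load:
\[
I = \frac{60\ {\rm V}}{1.226\ \Omega} \approx 49\ {\rm A}\ (<50\ {\rm A}), \qquad P = IV \approx 2.9\ {\rm kW},
\]
comfortably above the 2.5 kW of beam power the heater must be able to compensate.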
\subsection{Relief/Safety Calculations} This work has begun. The first step was to reproduce the calculations
that have already been performed for the existing Hall C standard pivot target. The code developed for that
check can now be considered a reliable template for the $Q_{weak}\;$ application. Our goal is to continue to move
forward on this such that we are in a position to defend our design at a design and safety review by March,
2008.
\subsection{Target Boiling Considerations}
In order to mitigate the effects of target boiling on the measurement, the $LH_2$ must flow as fast as possible
across the beam axis. Alternatively, the beam must move more quickly across the target fluid. An effort to
double the existing raster frequency was completed successfully on the bench and will now become part of our
default experimental configuration for $Q_{weak}\;$.
Likewise, increasing the helicity reversal frequency from the standard 30 Hz to 250 Hz should help mitigate
boiling in two ways. First, the noise spectrum at 250 Hz is quieter than at 30 Hz. Second, the target boiling
contribution can be about three times larger than at 30 Hz without increasing the experiment's running time,
because the statistical width per quartet is three times greater at 250 Hz. Tests were completed in the spring
of 2007 which demonstrated that 250 Hz helicity reversal can be delivered by the accelerator.
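The factor of three is just counting statistics: a helicity quartet spans $4/30$ s at 30 Hz but only $4/250$ s
at 250 Hz, so the per-quartet statistical width grows by $\sqrt{250/30} \approx 2.9$, leaving correspondingly
more room for a boiling contribution before it inflates the measured width.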
\section{Introduction} \label{intro}
The Standard Model (SM) has been extremely successful at describing a comprehensive range of phenomena in
nuclear and particle physics. After three decades of rigorous experimental testing, the only indication of a
shortcoming of the SM lies in the discovery of neutrino oscillations~\cite{Ahn:2002up}. That discovery has
renewed interest in identifying other places where physics beyond the Standard Model might
be observed. There
are two principal strategies in the search for new physics, and ultimately a more fundamental description of
nature. The first is to build increasingly energetic colliders, such as the Large Hadron Collider (LHC) at CERN,
which aim to excite matter into a new form. The second, more subtle approach is to perform precision
measurements at moderate energies, where any observed discrepancy with the Standard Model will reveal the
signature of these new forms of matter \cite{Erler:2004cx,RamseyMusolf:2006vr}. Results from the $Q_{weak}\;$
measurement at Jefferson Laboratory, in conjunction with existing measurements of parity-violating electron
scattering, will constrain the possibility of relevant physics beyond the Standard Model to the multi-TeV energy
scale and beyond.
\begin{figure}[h!]
\begin{center}
\rotatebox{0.}{\resizebox{5.5in}{4.0in}{\includegraphics{s2w_2004_4_new1.eps}}}
\end{center} \vspace*{-0.5cm}
\caption{\em Calculated running of the weak mixing angle in the
Standard Model, as defined in the modified minimal subtraction
scheme\cite{Erler:2004in}. The uncertainty in the predicted running
corresponds to the thickness of the blue curve. The black error bars show the current situation, while the
red error bar (with arbitrarily chosen vertical location) refers to
the proposed 4\% $Q_{weak}\;${} measurement. The existing measurements are
from atomic parity violation (APV)~\cite{Bennett:1999pd}, SLAC E-158~\cite{Anthony:2005pm}, deep inelastic
neutrino-nucleus scattering (NuTeV)~\cite{Zeller:2001hh}, and from $Z^{0}$ pole
asymmetries (LEP+SLC)~\cite{Yao:2006px}.} \label{RUNNINGTHETA}
\end{figure}
The $Q_{weak}\;$ collaboration proposes\footnote{This proposal and other documents are available
at the home page of the $Q_{weak}\;$ Collaboration:
``http://www.jlab.org/Qweak/''.} to carry out the first precision measurement of
the proton's weak charge:
\begin{equation}
Q^p_w =1-4\sin^{2}\theta_{W} \label{eq:qweak}
\end{equation}
at JLab, building on technical advances that have been made in the laboratory's world-leading parity-violation
program and using the results of earlier experiments to constrain hadronic corrections. The experiment is a high
precision measurement of the parity-violating asymmetry in elastic $ep$ scattering at $Q^{2} = 0.026\,{\mathrm
GeV}^2$ employing approximately 180 $\mu A$ of 85\% polarized beam on a 35 cm liquid hydrogen target. It will
determine the proton's weak charge with about $4\%$ combined statistical and systematic errors.
In the absence of physics beyond the Standard Model, our experiment will provide a $\simeq$0.3\% measurement of
$\sin^{2}\theta_{W}$, making this the most precise stand-alone measurement of the weak mixing angle at low
$Q^{2}$, and in combination with other parity measurements, a high precision determination of the weak charges
of the up and down quarks. Our proposed measurement of $Q_W^{p}\;$ will be performed with significantly smaller
statistical and systematic errors than existing low $Q^2$ data. Any significant deviation from the Standard
Model prediction at low $Q^{2}$ would be a signal of new physics, whereas agreement would place new and
significant constraints on possible Standard Model extensions.
The Standard Model makes a firm prediction for $Q^p_w$, based on the running of the weak mixing angle,
$\sin^{2}\theta_{W}$, from the $Z^{0}$ pole down to low energies, as shown in Figure~\ref{RUNNINGTHETA}. The
precise measurements near the $Z^{0}$ pole anchor the curve at one particular energy scale. The shape of the
curve away from this point is a prediction of the Standard Model, and to test this prediction one needs precise
off-peak measurements. Currently there are several precise off-peak determinations of $\sin^{2}\theta_{W}$: one
from atomic parity violation (APV)~\cite{Bennett:1999pd}; and another from E-158 at SLAC which measured
$\sin^{2}\theta_{W}$ from parity-violating $\vec{e}e$ (M\o ller) scattering at low $Q^2$~\cite{Anthony:2005pm}.
The result from deep inelastic neutrino-nucleus scattering~\cite{Zeller:2001hh} is less clearly interpretable.
It is worth noting that radiative corrections affect the proton and electron weak charges rather differently; in
addition to the effect from the running of $\sin^{2}{\hat\theta}_{W}(\mu^2)$, there is a relatively large WW box
graph contribution to the proton weak charge that does not appear in the case of the electron. This
contribution compensates numerically for nearly all of the effect of the running of the weak mixing angle, so
that the final Standard Model result for the proton's weak charge is close to what it would be at tree level,
which is not so for the electron.
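To make this compensation quantitative, consider the following rough numbers (our illustration, using
approximate values from Ref.~\cite{Erler:2004in}): with $\sin^2{\hat\theta}_W(M_Z)\approx 0.2312$ one would have
$1-4\sin^2{\hat\theta}_W \approx 0.075$ at tree level, while the running alone,
$\sin^2{\hat\theta}_W(0)\approx 0.2387$, would reduce this to $\approx 0.045$; the box-graph contributions
compensate most of this $\approx 0.030$ shift, restoring the full Standard Model prediction to
$Q_W^p \approx 0.0713$, close to the naive tree-level value.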
The $Q_{weak}\;$ experiment (E02-020) was initially approved at the 21st meeting of the Jefferson Laboratory Program
Advisory Committee in January, 2002, and was awarded an ``A'' scientific rating, which was reconfirmed in
January, 2005. Major equipment construction activities are underway at collaborating institutions and
commercial vendors. A schedule has been adopted for the experiment, with the aim of initial installation in
JLab's Hall C in 2009. This document is a review of the current status of the experiment, with emphasis on
critical systems requirements and performance, schedule, and a beam time request to complete the measurements to
the proposed accuracy in 2010-2012 at JLab.
\subsection{Extracting $Q_W^{p}\;$ from Experimental Data}
Electroweak theory can rigorously derive a low-energy effective interaction between the electron and the quarks
that can be used to predict low-energy electroweak observables. Assuming that conventional, Standard Model
effects arising from non-perturbative QCD or many-body interactions are under sufficient theoretical control,
any deviation from the predictions of that effective interaction is then an unambiguous signal of physics beyond
the Standard Model. The recent measurements of parity-violating electron scattering (PVES) on nuclear targets
have made possible a dramatic improvement in the accuracy with which we probe the weak neutral-current sector of
the Standard Model at low energy. The existence of this set of high-precision, internally-consistent PVES
measurements provides the critical key to the interpretation of the asymmetry to be measured by the proposed
$Q_{weak}\;$ experiment. Specifically, they provide direct experimental determination of the contribution of hadronic
form factors to our very low $Q^2$ asymmetry measurement.
For the purpose of this measurement, the relevant piece of the weak
force which characterizes the virtual-exchange of a $Z^0$-boson
between an electron and an up or down quark can be parameterized by
the constants, $C_{1u(d)}$, that are defined through the effective
four-point interaction by \cite{Yao:2006px}
\begin{equation}
{\cal L}_{\rm NC}^{eq}=-\frac{G_F}{\sqrt{2}}\bar{e}\gamma_\mu\gamma_5e \sum_q C_{1q}\bar{q}\gamma^\mu q\,.
\label{eq:LSM}
\end{equation}
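For orientation, the tree-level Standard Model values of these couplings (a standard result, quoted here for
completeness) are
\begin{equation*}
C_{1u}=-\tfrac{1}{2}+\tfrac{4}{3}\sin^2\theta_W\,,\qquad
C_{1d}=+\tfrac{1}{2}-\tfrac{2}{3}\sin^2\theta_W\,,
\end{equation*}
so that $Q_W^p=-2\,(2C_{1u}+C_{1d})=1-4\sin^2\theta_W$, recovering Eq.~(\ref{eq:qweak}).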
These effective couplings are known to high-precision within the Standard Model, from precision measurements at
the $Z$-pole \cite{ZPOLE:2005em} and evolution to the relevant low-energy scale
\cite{Marciano:1983ss,Erler:2003yk,Erler:2004in}. There are also parity-violating contributions arising from
the lepton vector-current coupling to the quark axial-vector-current, with couplings, $C_{2q}$, defined in a
similar manner. Although the PVES asymmetries are also dependent on the $C_{2q}$'s, they cannot be extracted
from these measurements without input from nonperturbative QCD computations.
As currently summarized by the Particle Data Group (PDG)~\cite{Yao:2006px}, existing data, particularly the
determination of the Cesium weak charge using atomic parity violation~\cite{Bennett:1999pd}, constrain the
combination of the up and down quark ``charges'', $(2Z+N)C_{1u} + (Z+2N)C_{1d}$. Since $Z=55$ and $N=78$ for Cesium, its
weak charge has comparable sensitivities to $C_{1u}$ and $C_{1d}$. The proton weak charge, in contrast, is more
strongly-dependent on $C_{1u}$: $Q_W^p=-2(2C_{1u}+C_{1d})$. Thus, knowledge of the two weak charges permits a
separate determination of $C_{1u}$ and $C_{1d}$. As illustrated in Figure \ref{fig:C1qNEW}, combining the $Q_W^{p}\;$
measurement with the previous experimental results will lead to a significant improvement in the allowed
range of values for $C_{1u}$ and $C_{1d}$. This constraint will be determined within the experimental
uncertainties of the electroweak structure of the proton. Assuming the Standard Model holds, the resulting new
limits on the values allowed for these fundamental constants will severely constrain relevant new physics to a
mass scale for new weakly coupled physics of $\sim$2--6~TeV.
During the past 15 years much of the experimental interest in precision PVES measurements on nuclear targets has
been focussed on the strange-quark content of the nucleon. Progress in revealing the strangeness form factors
has seen a dramatic improvement with experimental results being reported by SAMPLE at
MIT-Bates~\cite{Ito:2003mr,Spayde:2003nr}, PVA4 at Mainz~\cite{Maas:2004ta,Maas:2004dh} and the HAPPEX
\cite{Aniol:2005zf,Aniol:2005zg} and G0 \cite{Armstrong:2005hs} Collaborations at Jefferson Lab. Depending on
the target and kinematic configuration, these measurements are sensitive to different linear combinations of the
strangeness form factors, $G_E^s$ and $G_M^s$, and the effective axial form factor $G_A^e$ that receives
${\cal O}(\alpha)$ contributions from the nucleon anapole form factor
\cite{Haxton:1989ap,Musolf:1990ts,Zhu:2000gn}.
A global analysis \cite{Young:2006jc} of the present PVES data yields a determination of the strange-quark form
factors, namely $G_E^s=0.002\pm0.018$ and $G_M^s=-0.01\pm0.25$ at $Q^2$ =0.1 GeV$^2$ (correlation coefficient
$-0.96$). This fit does not include the value of the neutral current axial form factor determined from neutron
$\beta$-decay, Standard Model electroweak corrections to vector electron-axial vector quark couplings, $C_{2q}$,
and theoretical estimates of the anapole contribution obtained using chiral perturbation theory. Should one
further adopt the value of $G_A^e$ obtained from these inputs\cite{Zhu:2000gn}, these values shift by less than
one standard deviation (with $G_E^s=-0.011\pm0.016$ and $G_M^s=0.22\pm0.20$). Nevertheless, even with the fits
constrained by data alone, one can now ascertain that, at the 95\% confidence level (CL), strange quarks
contribute less than 5\% of the mean-square charge radius and less than 6\% of the magnetic moment of the
proton. This determination of the strangeness form factors intimately relies on the accurate knowledge of the
low-energy electroweak parameters of Eq.~\ref{eq:LSM}. Therefore, this potential uncertainty as it relates to
our $Q_W^{p}\;$ measurement has turned out to be of minimal significance and is absorbed into the general fitting
procedure to separate the hadronic background from the weak charge, as described below.
A global analysis of the PVES measurements can fit the world data with a systematic expansion of the relevant
form factors in powers of $Q^2$. In this way one can make the greatest use of the entire data set, including the
extensive study of the dependence on momentum transfer between $0.1$ and $0.3\,{\rm GeV}^2$ by the G0 experiment
\cite{Armstrong:2005hs}. By including the existing world PVES data and the anticipated results from the $Q_W^{p}\;$
measurement, the two coupling constants, $C_{1u}$ and $C_{1d}$, and the hadronic background term can be
determined by the data.

Most of the existing PVES data have been acquired with hydrogen targets. For small momentum transfer, in the
forward-scattering limit, the parity-violating asymmetry can be written as
\begin{equation}
\label{eq:alrq}
A_{LR}^p \simeq A_0 \left[ Q_{\rm weak}^p Q^2 + B_4 Q^4 +\ldots \right]\,,
\end{equation}
where the overall normalization is given by $A_0=-G_\mu/(4\pi\alpha\sqrt{2})$. The leading term in this
expansion directly probes the weak charge of the proton, related to the quark weak charges by $Q_{\rm
weak}^p=G_E^{Zp}(0)=-2(2C_{1u}+C_{1d})$. The next-to-leading order term, $B_4$, is the first place that
hadronic structure enters, with the dominant source of uncertainty coming from the neutral-weak, mean-square
electric radius and magnetic moment. Under the assumption of charge symmetry, this uncertainty translates to the
knowledge of the strangeness mean-square electric radius and magnetic moment. By considering different
phenomenological parameterizations of the elastic form factors, it has been confirmed that the potential
uncertainties from this source will have a small impact on our final result from the $Q_W^{p}\;$ measurement. Indeed,
the existing PVES data alone, taken over the range $0.1<Q^2<0.3\,{\rm GeV}^2$, allow a reliable extrapolation in $Q^2$
to extract the $B_4$ $Q^4$ term contribution to the measured asymmetry. Thus, we are confident that when the
$Q_W^{p}\;$ data become available, a clean extraction of the proton's weak charge will be possible.
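As a rough numerical orientation (our estimate, not a number taken from the proposal), the leading term of
Eq.~(\ref{eq:alrq}) sets the scale of the measured asymmetry:
\begin{equation*}
|A_{LR}^p| \sim \frac{G_\mu}{4\pi\alpha\sqrt{2}}\,Q_W^p\,Q^2
\approx (9\times 10^{-5}\ {\rm GeV}^{-2})\,(0.0713)\,(0.026\ {\rm GeV}^2)
\approx 2\times 10^{-7}\,,
\end{equation*}
i.e.\ a fraction of a part per million, which illustrates the precision demanded of the apparatus; the hadronic
terms in Eq.~(\ref{eq:alrq}) add to this at the tens-of-percent level.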
\begin{figure}[h!]
\begin{center}
\includegraphics[width=14cm,angle=0]{QweakExtrap1.eps}
\caption{{\em Normalized $ep$ parity-violating asymmetry measurements, extrapolated to the forward-angle limit
using
all current world data~\cite{Young:2007zs}. The extrapolation to $Q^2=0$
illustrates the methodology we plan to use to measure the proton's
weak charge after the JLab $Q_W^{p}\;$ results are obtained. The previous
experimental limit on $Q_W^{p}\;$ (within uncertainties on the neutron weak
charge) is shown by the triangular data point, and the Standard
Model prediction is indicated by the star. The solid curve and shaded region
indicate, respectively, the best fit and 1-$\sigma$ bound, based
upon a global fit to all electroweak data. The dotted curve shows
the resulting fit if one incorporates the theoretical
value of $G_A^e$, the effective axial vector form factor of the nucleon~\cite{Zhu:2000gn}.
With the inclusion of the anticipated data from the $Q_{weak}\;$ experiment at
$Q^2$ = 0.026 $GeV^2$, a new global analysis will be able to extract
the weak charges separated from hadronic form factor
contributions.} } \label{fig:extrap}
\end{center}
\end{figure}
Figure~\ref{fig:extrap} shows the various existing $ep$ asymmetry measurements, extrapolated to zero degrees as
explained below. The data are normalized as $\overline{A_{LR}^p} \equiv A_{LR}^p/(A_0 Q^2)$, such that the
intercept at $Q^2=0$ has the value $Q_W^{p}\;$. The fitted curve and uncertainty band are the result of the full global
fits, where helium, deuterium and all earlier relevant neutral-weak current measurements
\cite{Yao:2006px,Erler:2004cx} are also incorporated.
Because the existing PVES measurements have been performed at different scattering angles, the data points
displayed in Fig.~\ref{fig:extrap} have been rotated to the forward-angle limit using the global fit of this
analysis, with the outer error bar on the data points indicating the uncertainty arising from the $\theta\to 0$
extrapolation. The dominant source of uncertainty in this extrapolation lies in the determination of the
contribution of the effective axial vector form factor $G_A^e$. The experimentally-constrained uncertainty on
$G_A^e$ is relatively large compared to computations obtained using the value of $g_A$ from neutron
$\beta$-decay plus isospin symmetry, Standard Model electroweak radiative corrections to the $C_{2q}$ couplings,
and a chiral perturbation theory computation of the anapole contribution supplemented with a vector meson
dominance model estimate of the corresponding low-energy constants~\cite{Zhu:2000gn}. Further constraining our
fits to this theoretical value for $G_A^e$ yields the dotted curve in Fig.~\ref{fig:extrap}, where the
difference with the experimentally determined (less precise) fit is always less than one standard deviation;
this effect will have a small impact on the final weak charge extraction.
The resulting measurement of the proton's weak charge by the $Q_{weak}\;$ experiment provides an independent
constraint to combine with the precise atomic parity-violation measurement on Cesium
\cite{Bennett:1999pd,Ginges:2003qt}, which primarily constrains the isoscalar combination of the weak quark
charges. The preliminary combined analysis using only existing data is shown in Fig.~\ref{fig:C1qNEW}. This
analysis involves the simultaneous fitting of both the hadronic structure (strangeness and anapole) and
electroweak parameters ($C_{1u,d}$, $C_{2u,d}$) and demonstrates excellent agreement with the data.
\begin{figure}[hb!]
\begin{center}
\vspace*{0.2cm}
\includegraphics[width=12cm,angle=0]{C1qNEW.eps}
\caption{{\em Constraints on the neutral weak effective couplings of
Eq.~(\ref{eq:LSM}). The dotted contour displays the experimental
limits (95\% CL) reported in the PDG~\cite{Yao:2006px} together
with the prediction of the Standard Model (black star). The filled
ellipse denotes the current constraint provided by recent PVES
scattering measurements on hydrogen, deuterium and helium targets
(at 1 standard deviation), while the smaller solid contour (95\%
CL) indicates the full constraint obtained by combining all
existing results. The solid blue line indicates the anticipated
constraint from the planned $Q_W^{p}\;$ measurement, assuming the
SM value. All other experimental limits are
at 1 $\sigma$.}} \label{fig:C1qNEW}
\end{center}
\end{figure}
\newpage
Whatever the dynamical origin, new physics can be expressed in terms
of an effective contact interaction \cite{Erler:2003yk},
\begin{equation}
\label{eq:lnew}
{\cal L}_{\rm NP}^{eq}=\frac{g^2}{\Lambda^2}\bar{e}\gamma_\mu\gamma_5 e \sum_q h_V^q \bar{q}\gamma^\mu q\,.
\end{equation}
With the characteristic energy scale, $\Lambda$, and coupling
strength, $g$, the values of the effective couplings $h_V^q$ will vary depending on the particular new physics scenario leading to Eq.~(\ref{eq:lnew}) \cite{RamseyMusolf:1999qk}. In the case of a low-energy E$_6$ $Z'$ boson, for example, one could expect a non-zero value of $h_V^d$ (depending on the pattern of symmetry breaking) and a vanishing coupling $h_V^u$, whereas a right-handed $Z'$ boson would induce non-zero $h_V^u$ and $h_V^d$. More generally, in any given scenario, the values of the $h_V^q$ will determine the sensitivity of $Q_W^{p}\;$ to the mass-to-coupling ratio, $\Lambda/g$, which can be as large as several TeV in some cases. The reach of the $Q_{weak}\;$ experiment for different illustrative models is given in Table \ref{tab:newphysicsscale}.
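The quoted reach can be estimated parametrically (a back-of-envelope version, ours, of the analysis in
Ref.~\cite{Erler:2003yk}): comparing Eq.~(\ref{eq:lnew}) with Eq.~(\ref{eq:LSM}), a shift $|\Delta Q_W^p|$
corresponds, up to combinations of the $h_V^q$ of order unity, to
\begin{equation*}
\frac{\Lambda}{g} \sim \frac{1}{\sqrt{\sqrt{2}\,G_F\,|\Delta Q_W^p|}}
\approx \frac{1}{\sqrt{(1.65\times 10^{-5}\ {\rm GeV}^{-2})(0.003)}}
\approx 4.5\ {\rm TeV}
\end{equation*}
for a 4\% measurement, i.e.\ $|\Delta Q_W^p|\approx 0.003$, consistent with the multi-TeV entries in
Table~\ref{tab:newphysicsscale}.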
\begin{table}[h!]
\vspace*{-0.1cm} \caption{{\em The sensitivity of various current and future low energy precision measurements
to the new physics scale $\Lambda$ in different models. Also shown are the direct search limits from the
current colliders (LEP, CDF and HERA) and the indirect search limits from the current electroweak precision fit.
The various new physics scales presented here are the mass of $Z^\prime$ associated with an extra ${\rm
U}(1)_\chi$ group arising in $E_6$ models [$m({Z_\chi})$] or in left-right symmetric models [$m(Z_{LR})$]; the
mass of a leptoquark in the up quark sector [$m_{LQ}$(up)], or the down quark sector [$m_{LQ}$(down)]; the
compositeness scale for the $e-q$ or the $e-e$ compositeness interaction. Entries with ``--'' either do not
exist or do not apply. This Table is adapted from Ref.~\cite{RamseyMusolf:2006vr}.}}
\label{tab:newphysicsscale} \vspace*{0.4cm}
\begin{tabular}{|c|cc|cc|cc|}
\hline
&\multicolumn{2}{c|}{$Z^\prime$ models}&
\multicolumn{2}{c|}{leptoquark}&\multicolumn{2}{c|}{compositeness}\\
&$m(Z_\chi)$&$m(Z_{LR})$&$m_{LQ}$(up)&$m_{LQ}$(down)&$e-q$&$e-e$\\ \hline
Current direct search limits&0.69&0.63&0.3&0.3&--&--\\
Current electroweak fit&0.78&0.86&1.5&1.5&11$-$26&8$-$10\\
0.6\% $Q_W({\rm Cs})$&1.2&1.3&5.1&5.4&28&-- \\
13.1\% $Q_W(e)$&0.66&0.34&--&--&--&13\\
4\% $Q_W(p)$&0.95&0.45&6.5&4.6&28&--\\
\hline \hline
\end{tabular}
\end{table}
\vspace*{0.1cm}
The proposed analysis technique for the anticipated $Q_{weak}\;$ experiment's data in conjunction with the world's
existing data on PVES demonstrates that the effect of the hadronic form factors can be separated from the
low-energy, effective weak charges $C_{1u}$ and $C_{1d}$~\cite{Young:2007zs}. Combining the resulting constraint with that obtained
from the study of atomic parity violation data will result in an extremely tight range of allowed values for
{\bf both} $C_{1u}$ and $C_{1d}$, as illustrated in Fig.~\ref{fig:C1qNEW}. Even if the results of the $Q_W^{p}\;$
measurement are in agreement with the predictions of the Standard Model, the reduction in the range of allowed
values of $C_{1u}$ and $C_{1d}$ is such that it will severely limit the possibilities of relevant new physics
below a mass scale $\sim$1--6 TeV for weakly coupled theories (see Table \ref{tab:newphysicsscale}).
Of course, it is also possible that $Q_{weak}\;$ could discover a deviation from the Standard Model which would
constrain both the mass--coupling ratio and flavor dependence of the relevant new physics, such as a $Z^\prime$
or leptoquark. In the event of a discovery at the LHC, then experiments such as $Q_{weak}\;$ will play a key role in
determining the characteristics of the new interaction.
\subsection{$Q_W^{p}\;$: the Standard Model Prediction and Beyond}
The prospect of the $Q_{weak}\;$ experiment has stimulated considerable theoretical activity related to both the
interpretability of the measurement and its prospective implications for new physics. As indicated above, the
interpretability of the experiment depends on both the precision with which $Q_W^{p}\;$ can be extracted from the
measured asymmetry as well as the degree of theoretical confidence in the Standard Model prediction for the weak
charge. In the case of the $Q_W^{p}\;$ extraction, the issue is illustrated by Eq.~(\ref{eq:alrq}), indicating that one
must determine the hadronic \lq\lq B" term that describes the subleading $Q^2$-dependence of the asymmetry with
sufficient precision. The \lq\lq B"-term is constrained by the existing world data set of PV electron scattering
measurements performed at MIT-Bates, Jefferson Lab, and Mainz. Recently the authors of Ref.~\cite{Young:2007zs}
analyzed the implications of the world PVES data set in the range $0.1\ (\textrm{GeV}/c)^2\leq Q^2\leq 0.3\
(\textrm{GeV}/c)^2 $ and extrapolated the results to $Q^2=0$ to obtain the current PVES value for \begin{equation}
Q_W^p=0.055\pm 0.017 \end{equation} (present world average). Inclusion of this planned low-$Q^2$, high-statistics
measurement by the $Q_{weak}\;$ collaboration will reduce this extracted uncertainty to $0.003$. This error includes
both the impact of the experimental uncertainty in the hadronic ``B" term, as well as the anticipated uncertainty
in $A_{PV}$.
\begin{figure}[b]
\begin{center}
\epsfig{figure=qweak_boxfigs.eps,width=4.in} \caption{{\em Standard Model box graph contributions to the weak
charge of
the proton. Panel (a) gives the $Z\gamma$ corrections while panel
(b) gives the WW and ZZ box contributions. }}
\label{fig:box}
\end{center}
\end{figure}
The impact of a four percent determination of $Q_W^p$ depends on both the precision with which the Standard
Model value for the weak charge can be computed, as well as on its sensitivity to various possible sources of
new physics. Writing \begin{equation} Q_W^p=Q_W^p(\textrm{SM})+\Delta Q_W^p(\textrm{new}), \end{equation} the present theoretical
prediction in the SM gives\cite{Erler:2004in,Erler:2003yk}
\begin{equation}
Q_W^p(\textrm{SM})=0.0713\pm 0.0008
\end{equation}
where the uncertainty (1.1\%) is determined by combining several sources of theoretical uncertainty in
quadrature. The largest uncertainty arises from the value of the MS-bar weak mixing angle at the $Z^0$-pole:
$\Delta\sin{\hat\theta}_W(M_Z)$, followed by uncertainties in hadronic contributions to the $Z\gamma$ box graph
corrections [see Fig. \ref{fig:box}a], hadronic contributions to the \lq\lq running" of $\sin{\hat\theta}_W(Q)$
between $Q=M_Z$ and $Q\approx 0$, and higher order perturbative QCD contributions to the WW and ZZ box graphs
[see Fig. \ref{fig:box}b]. Charge symmetry violations rigorously vanish in the $Q^2=0$ limit and their effects
at non-vanishing $Q^2$ can be absorbed into the hadronic \lq\lq B" term that is experimentally constrained. Note
that the theoretical, hadronic physics uncertainties in $Q_W^p$ have been substantially reduced since the time
of the original $Q_{weak}\;$ proposal. These theoretical errors are summarized in Table \ref{tab:qwerror}.
The precision with which the $Q_W^{p}\;$ measurement can probe the effects of new physics in $\Delta
Q_W^p(\textrm{new})$ depends on the combined experimental error in $Q_W^p$ and the theoretical uncertainty in
$\Delta Q_W^p(\textrm{SM})$. Since the anticipated experimental error $\pm 4.1\%$ is much larger than the
theoretical uncertainty in $\Delta Q_W^p(\textrm{SM})$, the sensitivity to new physics is set by the $Q_{weak}\;$
experimental precision. A comprehensive study of contributions from various scenarios for new physics has been
outlined in Refs.~\cite{RamseyMusolf:1999qk,Erler:2003yk,RamseyMusolf:2006vr,Erler:2004cx,Kurylov:2003zh}. The
results of those studies indicate that the $Q_W^{p}\;$ measurement is highly complementary as a probe of new physics
when compared to other electroweak precision measurements as well as studies at the LHC.
As a semileptonic process, PV $ep$ scattering is a uniquely sensitive probe of leptoquark (LQ) interactions or
their supersymmetric analogs, R-parity violating interactions of supersymmetric particles with leptons and
quarks. Given the present constraints from the global set of direct and indirect searches, many LQ models could
lead to 10\% or larger shifts in $Q_W^{p}\;$ from its SM value, with larger corrections possible in some cases. LQ
interactions are particularly interesting in the context of grand unified theories that evade constraints from
searches for proton decay and that generate neutrino mass through the see-saw mechanism (see, {\em e.g.},
Ref.~\cite{Perez:2006hj} and references therein). Similarly, if TeV-scale R-parity violating (RPV) interactions
are present in supersymmetry, they would imply that neutrinos are Majorana particles and could generate
contributions to neutrinoless double beta-decay ($0\nu\beta\beta$) at an observable level in the next generation
of $0\nu\beta\beta$ searches. Given the present constraints on RPV interactions derived from both low- and
high-energy precision measurements, effects of up to $\sim 15\%$ in $Q_W^p$ could be generated by such
interactions\cite{RamseyMusolf:2006vr}.
\begin{table}[hb]
\vspace*{0.2cm} \caption{{\em Contributions to the uncertainty in $Q_W^p(\textrm{SM})$ \cite{Erler:2004in}.}}
\begin{center}
\begin{tabular}{|c|c|}
\hline Source & uncertainty\\ \hline
$\Delta\sin{\hat\theta}_W(M_Z)$ & $\pm 0.0006$\\
$Z\gamma$ box & $\pm 0.0005$\\
$\Delta\sin{\hat\theta}_W(Q)_\mathrm{hadronic}$ & $\pm 0.0003$ \\
$WW$, $ZZ$ box - pQCD & $\pm 0.0001$ \\
Charge sym & 0 \\
\hline
Total & $\pm 0.0008$ \\
\hline
\end{tabular}
\end{center}
\label{tab:qwerror}
\end{table}
\section{Tracking System Overview}
The parity-violating asymmetry at $Q_{weak}\;$ kinematics is directly
proportional to the momentum transfer $Q^2$; hence, it is essential that we make a precise determination of
$Q^2$. We need to determine the acceptance-weighted distribution of $Q^2$, weighted by the analog response of the
\v{C}erenkov detectors to within an accuracy of $\approx 1\%$. Recent simulations have shown that the anticipated
non-uniformity of light collection in the \v{C}erenkov detectors will shift the $Q^2$ by 2.5\%, demonstrating
the crucial need for a direct measurement. This is the primary motivation for the tracking system; an
additional motivation is the measurement of any non-elastic backgrounds contributing to the asymmetry
measurement, such as inelastic events from the target, scattering from the target windows, and general
background in the experimental hall. Finally, since the hadronic structure contribution to the measured
asymmetry goes with higher powers of $Q^2$, the tracking system will be used to determine the important higher
moments of the effective kinematics, needed to correct for the hadron structure dependent terms.
For elastic scattering,
$$
Q^2 = \frac{4E^2\sin^2(\theta/2)}{1+\frac{2E}{M}\sin^2(\theta/2)}
$$
where $E$ is the incident electron energy, $\theta$ the scattering angle and $M$ the proton mass. In principle,
a measurement of any two of $E$, $\theta$, or $E'$ (the scattered energy) yields $Q^2$. The absolute beam
energy will be known to $\leq 0.1$\% accuracy using the Hall C energy measurement system, corresponding to a
0.2\% error in $Q^2$. As the entrance collimator is designed to be the sole limiting aperture for elastically
scattered events, good knowledge of the collimator geometry and location with respect to the target and the beam
axis might seem to suffice for determining $Q^2$. The average radius of the as-built defining collimator will be
determined by CMM (Coordinate Measuring Machine) to better than 25 $\mu$m (0.01\% of the radius). The distance
from the target center to the defining aperture will be determined using redundant survey techniques to better
than 3 mm (0.1\% of the distance). The purely geometrical contribution to the $Q^2$ determination is therefore
$dQ^2/Q^2 = 2 d\theta/\theta = 2\sqrt{ (dR/R)^2 + (dL/L)^2} = 0.2\%$. The contributions from the beam energy
uncertainty and the angle uncertainty when combined in quadrature are therefore only 0.3\%. However, we expect
that contributions such as the uncertainties in the detailed corrections for ionization and radiative energy
loss, collimator transparency, the angular dependence of the e+p elastic cross section, {\em etc.}, may
ultimately limit the $Q^2$ measurement to 0.5\%. Our significant investment in tracking detectors is expected to
help us quantify such subtleties, as well as to confirm the predicted inelastic contribution to the detected
electron flux. Last, but not least, we need to weight the experimental $Q^2$ distribution with the analog
response of the \v{C}erenkov detector in order to determine the effective central $Q^2$.
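The error budget above is simple quadrature propagation; the short Python check below (ours, using only the
numbers quoted in the text) reproduces it.
\begin{verbatim}
import math

def in_quadrature(*terms):
    """Combine fractional uncertainties in quadrature."""
    return math.sqrt(sum(t * t for t in terms))

dR_over_R = 0.0001   # collimator radius known to 0.01%
dL_over_L = 0.001    # target-to-collimator distance known to 0.1%
dE_over_E = 0.001    # beam energy known to <= 0.1%

# Q^2 ~ E^2 * theta^2 at small angles, so fractional errors double:
geom  = 2.0 * in_quadrature(dR_over_R, dL_over_L)  # ~0.20% (geometry)
total = in_quadrature(geom, 2.0 * dE_over_E)       # ~0.28%, i.e. ~0.3%
print(f"geometry: {geom:.2%}, total: {total:.2%}")
\end{verbatim}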
Rather than rely solely on a simulation to account for all of these effects, we choose to {\em measure} them
with a dedicated tracking system. These measurements will be made in special calibration runs in which the beam
current is reduced to less than 1 nA, allowing the use of the tracking system. Recent tests have shown that at
currents as low as 100 picoamps the beam is still stable, and that adequate beam current and position
measurements can be made using special harp scans and a halo monitoring device. In this $Q^2$ measurement mode,
the \v{C}erenkov detectors will be read out in pulse mode and individual particles will be tracked through the
spectrometer system using a set of chambers (Region I, Region II, and Region III, described below). This
information will allow us to determine, on an event-by-event basis, the scattering angle, interaction vertex (to
correct $E$ for dE/dx and radiation in the target), $E'$ (to confirm elastic scattering) and location and
entrance angle of the electron on the \v{C}erenkov detectors.
The tracking system~\cite{Grimm:2005hf} consists of three regions of tracking chambers: the Region I vertex
chambers, based on GEM (Gas Electron Multiplier) technology, will have excellent position resolution and will be
located directly after the primary collimator. The Region II horizontal drift chamber (HDC) pair will be just
before the entrance of the spectrometer magnet; together with Region I, they will determine the scattering angle
to high accuracy. The Region III chambers, a pair of vertical drift chambers (VDC's), will be located just
upstream of the focal surface. They will allow momentum analysis to ensure that the detected events are true
elastic electrons and will characterize the particle trajectories entering the \v{C}erenkov detector (and so
allow us to map out its analog response). The tracking event trigger will be provided by plastic trigger
scintillators positioned between the VDC's and the \v{C}erenkov bars. Finally, a quartz ``scanner'' will be
mounted behind the \v{C}erenkov bars in one sector, to be used as a non-invasive monitor of the stability of the
$Q^2$ distribution during high-intensity production data-taking, and to verify that the distribution measured in
the low beam-current calibration runs is compatible with that measured at full beam intensity.
For each region, two sets of chambers are being constructed, which will be mounted on rotator devices to
cover two opposing octants, and which will allow them to be sequentially rotated to map, in turn, all octants of
the apparatus.
\subsection{Region I - Gas Electron Multiplier Chambers}
\begin{figure}[ht]
\subfigure[]{\includegraphics[width=0.55\textwidth]{Qweak_PrototypeGEM.eps}}
\subfigure[]{\includegraphics[width=0.38\textwidth]{Qweak_Rotator_atLatech_GG.eps}} \caption{{\em (a) Working
prototype GEM chamber. (b) Region I Rotator assembly. } \label{fig:GEMS}}
\end{figure}
The Region I tracking system is designed to track the scattered electrons less than 1 meter away from the target
and in opposite octants. The tracking system will be in a high radiation environment despite being behind the
first collimation element. This tracking system uses an ionization chamber equipped with Gas Electron Multiplier
(GEM) preamplifiers in order to handle these high rates as well as enable a measurement of an ionization event's
location within the chamber with a resolution of 100~$\mu$m. The system contains two ionization chambers located
180$^{\circ}$ apart on a rotatable mounting system which allows two opposing octants to be measured
simultaneously. The rotatable mounting system can move the chambers through a 180$^{\circ}$ angle such that
measurements may be made in all octants.
The ionization chamber final design has been completed, and the chambers are currently being constructed. The
GEM preamplifier foils have already been acquired, and the readout board is currently being manufactured at
CERN. The ionization chamber itself has been machined and will be assembled upon receipt of the readout boards,
which are currently scheduled for delivery in the first quarter of 2008. We anticipate that the detector will
have similar performance to the prototype detector (see Fig.~\ref{fig:GEMS}a) constructed previously.
The readout electronics for Region I use the VFAT board from CERN to digitize the analog signals on the readout
board and send them to a VME crate to be recorded. The control system (a ``gum-stick" microcomputer) for the
VFAT board has been acquired and tested. A signal junction box to transfer the control signals for 6 VFAT boards
to our control system has been designed. The 6 VFAT boards will digitize the analog output signals from a single
detector. The junction box will also collect digital detector signals and transfer them to a VME crate for
readout. A CAEN V1495 FPGA module has been purchased and is currently being programmed to transfer the digital
signals to CODA (the data acquisition).
The infrastructure to mount and rotate the chambers into position is 90\% complete (see Fig.~\ref{fig:GEMS}b).
The system uses four caster wheels to mount an aluminum ring that has teeth on its outer surface to mesh with a
worm gear in order to rotate the ring. One stepper motor is used on the worm gear to rotate the detector to
within 1 mrad. A stepper motor for each detector has been mounted on the ring itself in order to position each
detector radially using fixed stops. A controller for all the stepper motors has been programmed to move the
detector between octants. A GUI is currently under development which will be used in the counting house to
position the detectors.
\subsection{Region II - Horizontal Drift Chambers}
This second set of chambers will be located just upstream of the QTOR magnet. Their purpose is to determine the
position and direction cosines of the scattered electrons as they enter the magnet, and, along with the Region I
vertex detectors, to provide an accurate measurement of the target vertex and scattering angle. The Region II
drift chambers are horizontal drift chambers (HDC). We are building two sets of two chambers, each set being
separated by 0.4 m to provide angular resolution of $\sim$0.6 mrad for position resolutions of $ \sim 200\
\mu$m. Each chamber has an active area of 38 cm x 28 cm, wire pitch of 5.84 mm, and six planes (in an
$xuvx'u'v'$ configuration, where the stereo $u$ and $v$ planes are tilted at an angle of 53$^{\circ}$). There
are a total of 768 sense wires with a corresponding number of electronic channels. The sense wires are being
read out with commercially available Nanometrics N-277 preamp/discriminator cards.
We have all systems in place to construct and test the chambers. For construction, we have a wire stringing
area, wire scanning apparatus, and gas-tight high voltage test box. For testing we have a cosmic ray test stand
with a DAQ system instrumented with the same JLab F1 TDCs that we will use in the experiment. To date, we have
completed a prototype chamber and the first full production chamber. Photographs of that chamber along with a
typical drift time distribution for one of the wires is shown in Figure~\ref{HDCfigure}.
\begin{figure}[h!]
\vspace*{0.2cm} \centerline{\includegraphics[width=6.5in]{regionii_fig_mod.eps}} \caption{\em Upper left: Norman
Morgan uses the wire scanning apparatus to measure wire positions. Upper right: Undergraduate Elizabeth Bonnell
shows the first completed production chamber. Lower left: a typical drift time distribution from testing with
this chamber; it cuts off at about 120 nsec as expected for our drift cell size. Lower right: corresponding
drift distance to drift time correlation.} \label{HDCfigure}
\end{figure}
The chambers will be mounted on opposite sides of a rotator instrumented to be remotely rotated so that all
eight octants can be covered. The rotator will be designed so that the chambers can be placed in either an
``in-beam'' or a ``parked'' position. Design and procurement of that device will be done in collaboration with a Jefferson Lab
engineer.
\subsection{Region III - Vertical Drift Chambers}
The Region III vertical drift chamber design is based on and has evolved from the very successful VDC's used in
the Hall A High Resolution Spectrometers~\cite{Fissum:2001st}. Over the last three years, we have made
considerable progress on the Region III project. A large clean room was constructed and is in use. The large
Region III rotating arm assembly, on which the detectors will be mounted, has been designed and constructed, and
is on site at JLab, awaiting final assembly (see Fig.~\ref{rotator}b). The rotator is a gym-wheel construction
with welded extrusion as a radial rail system holding the VDC's (see Fig.~\ref{rotator}a). The dual VDC's will
be moved along the rails for locking them into either an IN position (for tracking runs) or an OUT position (for
production runs).
\begin{figure}[h!]
\vspace*{0.2cm} \subfigure[]{\includegraphics[width=0.47\textwidth]{rotator.eps}}
\subfigure[]{\includegraphics[width=0.52\textwidth]{gymwheel.eps}}
\caption{{\em (left) Design of the Region III support structure and rotator. (right) The completed ``gymwheel''
for the Region 3 rotator. }} \label{rotator}
\end{figure}
The tracking response of the VDC's has been modelled using the GARFIELD
simulation package~\cite{GARFIELD}. We studied the correlation between the vertical distance (above/below the
wire) and the arrival time of the first drift electron~\cite{WECHSLER,ERIN}. A fast, accurate method was
developed for reconstructing the drift distance as a function of drift time and track angle. The projected
intrinsic position resolution per wire plane, for a track hitting at least 4 drift cells, is $\Delta$x$\sim$50
$\mu m$ and $\Delta$y$\sim$75 $\mu m$, more than adequate for this application.
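For illustration, the reconstruction amounts to a fast two-dimensional lookup; the Python sketch below (ours;
the table values are placeholders, not GARFIELD output) shows one minimal way to tabulate and interpolate drift
distance versus drift time and track angle.
\begin{verbatim}
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical GARFIELD-derived table: drift distance (mm) on a grid
# of drift time (ns) and track angle (deg). Values are placeholders.
times  = np.linspace(0.0, 300.0, 31)            # ns
angles = np.linspace(45.0, 75.0, 7)             # deg
table  = np.outer(times, np.ones_like(angles)) * 0.02  # fake 0.02 mm/ns

lookup = RegularGridInterpolator((times, angles), table)

def drift_distance(t_ns, angle_deg):
    """Interpolated drift distance for one hit."""
    return float(lookup([[t_ns, angle_deg]])[0])

print(drift_distance(120.0, 60.0))  # -> 2.4 mm with the fake table
\end{verbatim}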
The chamber design was finalized, and all parts of the G10 frame assembly have been machined and are in-house.
Each of the G10 frames was machined in four separate pieces (due to their large size, very few shops could
handle the machining in one piece), which need to be epoxied together. The epoxying technique has been
prototyped, and a complete chamber's worth of frames has been assembled.
Various chamber assembly jigs and tools have been designed and built, including a frame gluing jig, a wire
positioning jig, a ``wire scanner" system, and a tension measuring device. These provide essential tools for
quality control and verification of the wire position and alignment, with a design precision of $\leq$50 $\mu
m$. The wire scanner (see Fig.~\ref{stringing}) moves a CCD camera over the drift chamber wire plane using a
precision translation stage. Wire scanner measurements of 70 wires test strung on the assembly jig have verified
that we can achieve this precision. The tension measuring device uses the same assembly and the classical
technique of finding the resonant frequency for current-carrying wires oscillating in a magnetic field.
Stringing and final assembly of the chambers are awaiting the delivery of the electronics daughter boards, needed
to mate our wires to the electronics readout cards. Assembly will begin in the coming weeks and will take about
six months to complete.
\begin{figure}[h!]
\vspace*{0.3cm} \centerline{\includegraphics[width=4.5in]{Stringingjig.eps}} \caption{{\em Region III wire
stringing jig and wire scanner system. }} \label{stringing}
\end{figure}
For the front-end electronics (preamp-discriminator) we have selected a system based on the MAD chip~\cite{MAD},
which was developed at CERN for modern wire chambers. We have tested these chips and they meet all our
specifications. They will be mounted on circuit boards of a proven design already in use at Jefferson
Laboratory; the boards were sent out for manufacturing in mid-September, 2007.
To save the significant expense of instrumenting each of the 2248 sense wires in the 4 VDC chambers with an
individual TDC channel, we have adopted a delay-line multiplexing scheme (see Fig.~\ref{delayline}). The signals
from many (18) wires from separated locations in a given chamber are ganged together and put onto two signal
paths, leading to two individual TDC channels. The drift time is decoded from the sum of the two TDC times, and
the wire that was hit is identified via the difference between the two times, thus saving a factor of 9 in
number (and cost) of TDC's. We will use the new JLab standard 64-channel F1 multihit TDC for the final
digitization.
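To illustrate the decoding (a minimal sketch, ours, under an assumed sign convention; the as-built firmware may
differ), take $N=18$ multiplexed wires and a per-tap delay $\tau$, so that a hit on wire $i$ with drift time
$t_{\rm drift}$ arrives at the two TDCs at $t_A=t_{\rm drift}+i\tau$ and $t_B=t_{\rm drift}+(N-1-i)\tau$:
\begin{verbatim}
TAU = 1.3          # ns per ECL delay chip (from the text)
N_WIRES = 18       # wires multiplexed onto one delay line

def decode(t_a, t_b):
    """Recover drift time and wire index from the two TDC times.

    Assumes t_a = t_drift + i*TAU and t_b = t_drift + (N-1-i)*TAU,
    an illustrative convention, not the as-built one.
    """
    t_drift = 0.5 * (t_a + t_b - (N_WIRES - 1) * TAU)
    i = round(0.5 * ((t_a - t_b) / TAU + (N_WIRES - 1)))
    return t_drift, i

# Example: a 60 ns drift on wire 4
print(decode(60 + 4 * TAU, 60 + 13 * TAU))   # -> (60.0, 4)
\end{verbatim}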
Our delay-line implementation adopts a novel technique: instead of the classical use of analog cable delay
(simple, but expensive and bulky) we will use a digital delay line, based on ECL gates as delay chips. The LVDS
(low-voltage differential signal) signals from the MAD chips will be converted to ECL and duplicated in custom
conversion boards; the ECL signals will then be fed to a string of ECL gates which provide the quantized delays
(1.3 ns per chip). The LVDS to ECL boards and the multiplexing boards have been completely designed by the JLab
electronics group in consultation with the W\&M group, and prototypes are under procurement, with testing
planned in the next few weeks.
\begin{figure}[h!]
\vspace*{1.0cm} \centerline{\includegraphics[width=6.2in]{DelayLine3.eps}}
\caption{{\em Region III Delay-line readout schematic. }} \label{delayline}
\end{figure}
\subsection{Trigger Scintillator}
\label{trig_scint}
Scintillation counters will provide the trigger and time reference for the calibration system. These are large
enough to shadow the quartz \v{C}erenkov bars, and tests with a prototype indicate that they have sufficient
energy resolution and timing capabilities to identify multiparticle events and veto neutrals. The scintillators
are long bars mounted between the \v{C}erenkov bars and the Region 3 chambers with a photomultiplier tube at
each end.
\begin{figure}[hbtp]
\centerline{\includegraphics[width=6.0in]{trig-scint-final.eps}}
\caption{\em{Schematic of trigger scintillator and lightguides.
}}
\label{fig_trig-scint}
\end{figure}
Each scintillator is made from BC408 (which has a $\sim$ 3 ns time constant)
and is 218.45
cm long, 30.48 cm high, and 1 cm thick.
To minimize loss of light from the scintillator corners, each light guide is made of a row of ``fingers'' that
couple to the 30.5 cm $\times $ 1 cm ends of the scintillator and overlap each other to form a squared off
circle that is inscribed within the PMT, as shown in Fig.~\ref{fig_trig-scint} and
Fig.~\ref{fig_trig-scint-pmt}, respectively. We will use Photonis XP4312B 3 inch PMTs, which have a high gain
($\sim 3 \times 10^7$) and a uniform response over their photocathode areas \cite{XP4312B}. These three inch
PMTs have photocathodes that are 6.8 cm in diameter; this corresponds to a photocathode area of 36.3 $\rm cm^2$,
which will accommodate the 30.5 $\rm cm^2$ scintillator ends. Tests with a scintillator and lightguide
prototype show that we can expect 70 to 210 electrons to be produced by the photocathode for every electron
going through the scintillator. Combining this with a high gain PMT ($\sim 10^7$) yields 100 to 300 pC of
charge in the 3 nsec of the signal.
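These charge figures follow directly from the quoted photoelectron yield and gain (a simple consistency check
we add here):
\begin{equation*}
Q = N_{pe}\,G\,e \approx (70\textrm{--}210)\times 10^{7}\times 1.6\times 10^{-19}\ {\rm C}
\approx 110\textrm{--}340\ {\rm pC}\,,
\end{equation*}
consistent with the 100 to 300 pC quoted above.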
\begin{figure}[hbtp]
\centerline{\includegraphics[width=5.5in]{trig-scint-pmt.eps}}
\caption{\em{Schematic of trigger scintillator lightguide coupling to PMT.
}}
\label{fig_trig-scint-pmt}
\end{figure}
Saint-Gobain made, assembled, and wrapped the scintillators and lightguides and then delivered them to George
Washington University (GW) in October 2007. GW did not have a working DAQ system until recently, when we
purchased a VME crate, VME-USB interface, VME modules, and downloaded the recommended software for this VME-USB
interface system from the National Superconducting Cyclotron Facility at Michigan State University. We can now
acquire and analyze ADC and TDC signals from our prototype detectors at GW.
Given the extreme thickness-to-length ratio of the scintillator counters and the weight of the PMTs, the
counters and PMTs must be supported by a frame which is then mountable to the rear VDC in the Region 3 package.
The support frames were designed and built by JLab staff and are ready for use. When the fall 2007 semester ends
we will move the support stands to GW, position the trigger scintillators on the stands, and verify that the
response characteristics of the trigger scintillators meet our design goals. The scintillators will then be
ready to be moved to JLab for the experiment.
\subsection{Focal-Plane Scanner}
The tracking system in $Q_{weak}\;$\ will be operated at low beam current in order to determine $\langle Q^2\rangle$ for
the light accepted by the main \v{C}erenkov detectors. The parity-violating asymmetry measurement, on the other
hand, will be conducted at high beam current, where the tracking system will be inoperable. Since our previous
Jeopardy Proposal, we have conceived of a new device, a focal-plane scanner. The scanner is a tracking device
with the important property that it is operable in counting mode at both low and high beam currents. The scanner
has been fully funded by NSERC (Canada) and is nearing completion of the construction phase at University of
Winnipeg.
The focal-plane scanner consists of a quartz \v{C}erenkov detector with small active area which is scanned in
the focal plane of the spectrometer, just behind the main detectors. A scan consists of moving the detector
across the fiducial area of the main \v{C}erenkov detectors to make rate measurements. The scanner extends the
tracking results to high beam currents as follows. First, tracking system results are compared with scanner
results at a low beam current acceptable to the Region III drift chambers. Then, scanner results are acquired
at high beam current. If the two scanner measurements agree, the tracking results can be trusted at high
current with high confidence.
A photograph of the scanner is shown in Fig.~\ref{fig:scanner}(a). In the scanner, two pieces of fused silica
(synthetic quartz) are placed one in front of the other, with a 1~$\times$~1~cm$^2$ active area.
Each piece of
quartz acts as a \v{C}erenkov radiator, and each is coupled to a photomultiplier tube (PMT) by an air-core light
pipe coated with specular, reflective Alzak (polished and chemically brightened anodized aluminum).
Fig.~\ref{fig:scanner}(b) shows the pulse-height distribution in one of the PMTs when the detector is calibrated
with cosmic-ray muons. A 2D linear motion assembly is used to scan the detector. The motion assembly consists
of two stainless-steel ball-screw driven tables, one mounted on the other, producing x-y motion. The linear
motion assembly is driven by servo-motors controlled remotely by a computer.
One complete scanner system is being constructed, to be mounted in one of the $Q_{weak}\;$\ octants. It would be
possible to move the scanner to other octants by hand; however, it is not envisioned that this would be pursued
unless a systematic effect arose that was octant-specific.
Similar quartz-radiator scanner devices were used successfully in both
the E158 and HAPPEX experiments. In E158, the device was found to be
particularly important, as it was the only means available to study
spectrometer optics and backgrounds. It is envisioned
that the $Q_{weak}\;$\ focal-plane scanner will likewise be used to address
questions of backgrounds, as well as spectrometer optics, particularly
the stability of such quantities with beam current during parity violation
measurements.
\begin{figure}[ht]
\subfigure[]{\includegraphics[width=0.45\textwidth]{scanner-annotated-11-6-7.eps}}
\subfigure[]{\includegraphics[width=0.54\textwidth]{jies-results.eps}} \caption{{\em (a) Photograph of the
focal-plane scanner system under assembly at U. Winnipeg. (b) Pulse-height distribution in photomultiplier tube
when testing scanner with cosmic rays shows sufficient light yield.\label{fig:scanner}}}
\end{figure}
\subsection{Track Reconstruction Software}
The $Q_{weak}\;$\ Track Reconstruction software (QTR) must utilize the full capability of each detector region. We
need to perform fast track reconstruction and to compile a statistics database, and the software must be
versatile so that it can be updated to perform other tasks, such as various detector calibrations. The $Q_{weak}\;$
track reconstruction software is adapted from that used in the HERMES experiment, as discussed below.
The HERMES experimental setup had many similarities to that of $Q_{weak}\;$\ \cite{WANDER}. In both experiments,
particles traverse two straight tracks separated by a curved track in a magnetic field. In the initial phase of
HERMES, as in $Q_{weak}\;$, no detectors were present within the magnetic field, which greatly simplifies the track
reconstruction. Additionally, the HERMES software was designed to use many of the best tracking techniques
available \cite{BOCK,BLUM}. Thus, the HERMES track reconstruction software is an ideal model for $Q_{weak}\;$. The
$Q_{weak}\;$ tracking group has rewritten the HERMES reconstruction package into C++ and altered it to be more
object-oriented. We are developing the QTR package to be as versatile as possible.
One of the core components of QTR is the use of pattern recognition. For Region III, patterns are generated for
a small subset of wires in one of the VDC planes. The set of patterns represents all possible straight line
tracks of interest. Using symmetry relationships, the pattern set can be applied to the entire detector and can
easily be searched to identify a track. The ability to compare a set of hits in a detector to a known
good track allows for fast track identification, powerful noise and background rejection, and the ability to
easily resolve left/right wire ambiguities in drift chambers. Additionally, initial track parameters can be
associated with the track to improve fit calculations. Region I and II will use a single pattern database to
identify track segments upstream of the magnetic field.
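A minimal sketch of such a pattern database (ours; the window size, wire counts and encoding are illustrative
simplifications, not the QTR implementation) is:
\begin{verbatim}
from itertools import product

N_PLANES = 4     # small window of wire planes (illustrative)
MAX_WIRE = 8     # wires per plane considered in one window

def make_pattern_db(max_slope=2):
    """Enumerate hit patterns of straight tracks through the window.

    Maps the tuple of struck wires (one per plane) to candidate
    track parameters (intercept, slope) -- a simplified stand-in
    for the QTR pattern database.
    """
    db = {}
    for w0, slope in product(range(MAX_WIRE),
                             range(-max_slope, max_slope + 1)):
        wires = tuple(w0 + slope * p for p in range(N_PLANES))
        if all(0 <= w < MAX_WIRE for w in wires):
            db[wires] = (w0, slope)
    return db

db = make_pattern_db()
hits = (3, 4, 5, 6)          # one struck wire per plane
print(db.get(hits))          # -> (3, 1): candidate track parameters
\end{verbatim}
Matching a set of hits is then a constant-time dictionary lookup, which is what makes the pattern-recognition
step fast and robust against noise.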
The pattern recognition utilities have been completed, and a first-round pattern database algorithm has been
written for the Region III chambers. Mock Region III data have been successfully fitted to track segments in
each plane and matched together. Currently, progress is being made on using the VDC straight-track segments to
compare to hits in the trigger scintillator and \v{C}erenkov bars which lie downstream of the VDC. The software
package is also being refined for an initial release which will be used in association with Monte Carlo
simulations.
\section{Introduction}
As in domains such as social networks or social tagging systems~\cite{lacic2014recommending,seitlinger2015attention,kowald2013social}, the personalization of online content has become one of the key drivers for news portals to increase user engagement and convince readers to become paying subscribers~\cite{garcin2013pen,garcin2014swissinfo,deSouza2018DNN}.
A natural way for news portals to do this is to provide their users with articles that are fresh and popular. This is typically achieved via simple most-popular news recommendations, especially since this approach has been shown to provide accurate recommendations in offline evaluation settings~\cite{kille2015overview}. However, such an approach could amplify popularity bias with respect to users' news consumption. This means that the equal representation of non-popular, but informative content in the recommendation lists is put into question, since articles from the ``long tail'' do not have the same chance of being represented and served to the user~\cite{abdollahpouri2019popularity}.
Since readers nowadays tend to consume news content on smaller user interface types (e.g., mobile devices)~\cite{KARIMI20181203,newman2015reuters}, the impact of popularity bias may be further amplified due to the reduced number of recommendations that can be shown~\cite{kim2015eye}.
In this paper, we therefore discuss the introduction of personalized, content-based news article recommendations on DiePresse, a popular Austrian news platform, focusing on two aspects: (i) user interface type, and (ii) popularity bias mitigation.
To do so, we performed a two-week online study that started in October 2020, in which we compared the impact of recommendations with respect to different user groups, i.e., anonymous and subscribed (logged-in and paying) users, as well as different user interface types, i.e., desktop, mobile and tablet devices (see Section~\ref{sec:exp_setup}). Specifically, we address two research questions:
\vspace{2mm}
\noindent
\textbf{RQ1:} How does the user interface type impact the performance of news recommendations?
\noindent
\textbf{RQ2:} Can we mitigate popularity bias by introducing personalized, content-based news recommendations?
\vspace{2mm}
\noindent
We investigate RQ1 in Section~\ref{sec:rq1} and RQ2 in Section~\ref{sec:rq2}. Additionally, we discuss the impact of two significant events, i.e., (i) the COVID-19 lockdown announcement in Austria, and (ii) the Vienna terror attack, on the consumption behavior of users. We hope that our findings will help other news platform providers assessing the impact of introducing personalized article recommendations.
\section{Experimental Setup} \label{sec:exp_setup}
\noindent
\textbf{Study Design.}
In order to answer our two research questions, we performed a two-week online user study, which started on the 27th of October 2020 and ended on the 9th of November 2020. Here, we focused on three user interface types, i.e., desktop, mobile and tablet devices, as well as investigated two user groups, i.e., anonymous and subscribed users. About $89\%$ of the traffic (i.e., $2,371,451$ user interactions) was produced by anonymous users, the majority of whom (i.e., $77.3\%$) read news articles on a mobile device.
Interestingly, subscribed users exhibited a more focused reading behavior and only interacted with a small subset of all articles that were read during our online study (i.e., around $18.7\%$ out of $17,372$ articles). Within the two-week period, two significant events happened: (i) the COVID-19 lockdown announcement in Austria on the 31st of October 2020, and (ii) the Vienna terror attack on the 2nd of November 2020. The articles related to these events were the most popular ones in our study.
\vspace{2mm}
\noindent
\textbf{Calculation of Recommendations.}
We follow a content-based approach to recommend news articles to users~\cite{lops2011content}. Specifically, we represent each news article using a 25-dimensional topic vector calculated using Latent Dirichlet Allocation (LDA)~\cite{blei2003latent}. Each user is also represented by a 25-dimensional topic vector, where the user's topic weights are calculated as the mean of the topic weights of the news articles read by the user. For subscribed users, the read articles comprise the entire user history; for anonymous users, they comprise the articles read in the current session.
Next, these topic vectors are used to match users and news articles using Cosine similarity in order to find top-$n$ news article recommendations for a given user. For our study, we set $n = 6$ recommended articles. For this step, only news articles are taken into account that have been published within the last 48 hours. Additionally, editors had the possibility to also include older (but relevant) articles into this recommendation pool (e.g., a more general article describing COVID-19 measurements).
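For illustration, the matching step can be sketched as follows; this is a simplified sketch with hypothetical variable names, assuming the LDA topic vectors and the 48-hour candidate pool have already been computed:
\begin{verbatim}
import numpy as np

def recommend(user_vec, article_vecs, article_ids, n=6):
    # rank candidate articles by cosine similarity to the user profile
    user_vec = user_vec / np.linalg.norm(user_vec)
    mat = article_vecs / np.linalg.norm(article_vecs, axis=1,
                                        keepdims=True)
    scores = mat @ user_vec              # cosine similarities
    top = np.argsort(-scores)[:n]        # indices of the n best matches
    return [article_ids[i] for i in top]

# user profile: mean of the topic vectors of the articles read so far
user_vec = read_article_vecs.mean(axis=0)
\end{verbatim}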
In total, we experimented with four variants of our content-based recommendation approach: (i) recommendations only including articles of the last 48 hours, (ii) recommendations also including the editors' choices, and (iii) and (iv) recommendations, where we also included a collaborative component by mixing the user's topic vector with the topic vectors of similar users for the variants (i) and (ii), respectively. Additionally, we also tested a most-popular approach, since this algorithm was already present in DiePresse before the user study started. However, we did not find any significant differences between these five approaches with respect to recommendation accuracy in our two-week study; therefore, we do not distinguish between the approaches and report the results for all calculated recommendations in the remainder of this paper.
\section{RQ1: User Interface Type} \label{sec:rq1}
Most studies focus on improving the accuracy of the recommendation algorithms, but recent research has shown that this has only a partial effect on the final user experience~\cite{knijnenburg2012explaining}.
The user interface is a key factor that impacts the usability, acceptance and selection behavior within a recommender system~\cite{felfernig2012preface}. Additionally, in news platforms, we can see a trend shifting from classical desktop devices to mobile ones.
Moreover, users are biased towards clicking on higher ranked results (i.e., position bias)~\cite{craswell2008experimental}. When evaluating personalized news recommendations, it becomes even more important to understand the user acceptance of recommendations for smaller user interface types, where it is much harder for the user to see all recommended options due to the limited size.
In our study, we therefore investigate to what extent the user interface type impacts the performance of news recommendations (RQ1). As mentioned, we differentiate between three different user interface types, i.e., interacting with articles on a (i) desktop, (ii) mobile, and (iii) tablet device. In order to measure the acceptance of recommendations shown via the chosen user interface type, we use the following two evaluation metrics~\cite{garcin2014swissinfo}:
\vspace{2mm}
\noindent
\textbf{Recommendation-Seen-Ratio (RSR)} is defined as the ratio between the number of times the user actually saw recommendations (i.e., scrolled to the corresponding recommendation section in the user interface) and the number of recommendations that were generated for a user.
\noindent
\textbf{Click-Through-Rate (CTR)} is measured by the ratio between the number of actually clicked recommendations and the number of seen recommendations.
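In formulas, writing $n_{\mathrm{gen}}$, $n_{\mathrm{seen}}$ and $n_{\mathrm{click}}$ for the numbers of generated, seen and clicked recommendations, respectively, the two metrics read
\[
\mathrm{RSR} = \frac{n_{\mathrm{seen}}}{n_{\mathrm{gen}}}, \qquad
\mathrm{CTR} = \frac{n_{\mathrm{click}}}{n_{\mathrm{seen}}}.
\]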
\vspace{2mm}
\noindent
As shown in Table~\ref{tab:ui_choice}, the smaller user interface size of a mobile device heavily impacts the probability of a user actually seeing the list of recommended articles. This may be due to the fact that reaching the position where the recommendations are displayed is harder in comparison to a larger desktop or tablet device, where the recommendation section can be reached without scrolling. Interestingly, once users have seen the list of recommended articles, those on mobile devices exhibit a much higher CTR.
Again, we hypothesize that if a user has put more effort into reaching the list of recommended articles, the user is more likely to accept the recommendation and interact with it.
When looking at Figure~\ref{fig:ui}, we can see a consistent trend during the two weeks of our study regarding the user interface types for both the RSR and CTR measures. Notable exceptions are the fluctuations of the evaluation measures around the two significant events that happened during the study period. For instance, the positive peak in the RSR and the negative peak in the CTR that can be spotted around the 31st of October were caused by the COVID-19 lockdown announcement in Austria. For the smaller user interfaces (i.e., mobile and tablet devices), this actually increased the likelihood of the recommendations being seen, since users invested more energy in engaging with the content of the news articles. On the contrary, we saw a drop in the CTR, which was mostly caused by anonymous users, since the content-based, personalized recommendations did not provide the articles that they expected at that moment (i.e., popular ones solely related to the event).
Another key event can be spotted on the 2nd of November, the day the Vienna terror attack happened. The corresponding article, containing a lot of attack-specific information, was by far the most read one during the period of the online study. Across all three user interface types, this caused a drop in the likelihood of a recommendation being seen at all. Interestingly, the CTR in this case does not seem to be influenced. We investigated this in more detail and noticed that a small drop was only visible for the relatively small number of subscribed users using a mobile device; thus, it does not influence the results shown in Figure~\ref{fig:ui}.
\def\arraystretch{1}%
\begin{table}[t!]
\caption{RQ1: Acceptance of recommended articles with respect to the user interface type. We find that the probability of a recommendation to be seen (i.e., RSR) is the highest for desktop devices, while the probability of interacting with recommendations (i.e., CTR) is the highest for mobile devices. Highest values are shown in bold.}
\begin{tabular}{r|C{2cm}|C{2cm}|C{2cm}}
\hline
Metric & Desktop & Mobile & Tablet \\ \hline
RSR: Recommendation-Seen-Ratio (\%) & \textbf{26.88} & 17.55 & 26.71 \\ \hline
CTR: Click-Through-Rate (\%) & 10.53 & \textbf{13.40} & 11.37 \\ \hline
\end{tabular}
\vspace{-3mm}
\label{tab:ui_choice}
\end{table}
\def\arraystretch{1}%
\begin{figure}[t]
\centering
\subfloat[Recommendation-Seen-Ratio.]{
\includegraphics[width=.49\textwidth]{rsr.pdf}}
~
\subfloat[Click-Through-Rate.]{
\includegraphics[width=.49\textwidth]{ctr.pdf}}
\caption{RQ1: Acceptance of recommended articles for the two weeks of our study with respect to (a) RSR, and (b) CTR. The size of the dots represents the number of reading events on a specific day for a specific user interface type.}
\vspace{-4mm}
\label{fig:ui}
\end{figure}
\section{RQ2: Mitigating Popularity Bias} \label{sec:rq2}
Many recommender systems are affected by popularity bias, which leads to an overrepresentation of popular items in the recommendation lists. One potential issue of this is that unpopular items (i.e., so-called long-tail items) are recommended rarely~\cite{kowald2021support,kowald2020unfairness}.
The news article domain is an example where ignoring popularity bias could have a significant societal effect. For example, a potentially controversial news article could easily impose a narrow ideology on a large population of readers~\cite{flaxman2016filter}. This effect could even be strengthened by providing unpersonalized, most-popular news recommendations, as is currently done by many online news platforms (including DiePresse), since these popularity-based approaches are easy to implement and also provide good offline recommendation performance~\cite{garcin2014swissinfo,KARIMI20181203}.
We hypothesize that the introduction of personalized, content-based recommendations (see Section~\ref{sec:exp_setup}) could lead to more balanced recommendation lists in contrast to most-popular recommendations. This way, long-tail news articles are also recommended, and thus popularity bias could be mitigated.
Additionally, we believe that this effect differs between different groups of users and therefore, we distinguish between anonymous and subscribed users.
We measure popularity bias in news article consumption by means of the skewness~\cite{bellogin2017statistical} of the article popularity distribution, i.e., the distribution of the number of reads per article. Skewness measures the asymmetry of a probability distribution, and thus a high, positive skewness value depicts a right-tailed distribution, which indicates biased news consumption with respect to article popularity. On the contrary, a small skewness value depicts a more balanced popularity distribution with respect to head and tail, and thus indicates that also non-popular articles are read. As another measure, we calculate the kurtosis of the popularity distribution, which measures the ``tailedness'' of a distribution. Again, higher values indicate a higher tendency for popularity bias.
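Concretely, with $r_a$ denoting the number of reads of article $a$ on a given day, $\bar{r}$ their mean and $N$ the number of articles, we use the standard moment-based estimators
\[
\mathrm{skewness} = \frac{\tfrac{1}{N}\sum_a (r_a-\bar{r})^3}{\big(\tfrac{1}{N}\sum_a (r_a-\bar{r})^2\big)^{3/2}}, \qquad
\mathrm{kurtosis} = \frac{\tfrac{1}{N}\sum_a (r_a-\bar{r})^4}{\big(\tfrac{1}{N}\sum_a (r_a-\bar{r})^2\big)^{2}},
\]
where the exact convention (e.g., subtracting~3 for excess kurtosis) does not affect the trends discussed below.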
For both metrics, we hypothesize that the values at the end of our two-week study are smaller than at the beginning, which would indicate that the personalized recommendations helped to mitigate popularity bias.
The plots in Figure~\ref{fig:pop_bias} show the results addressing RQ2. For both metrics, i.e., skewness and kurtosis, we see a large gap between anonymous users and subscribers at the beginning of the study (i.e., 27th of October 2020), where only most-popular recommendations were shown to the users. While anonymous users mainly read popular articles, subscribers were also interested in unpopular articles. This makes sense, since subscribed users typically visit news portals to consume articles within their area of interest, which also includes articles from the long tail, while anonymous users typically visit news portals to get a quick overview of recent events, which mainly includes popular articles. Based on this, a most-popular recommendation approach does not impact subscribers as much as it impacts anonymous users.
However, when looking at the last day of the study (i.e., 9th of November 2020), the difference between anonymous and subscribed users is considerably smaller. We also see that the values at the beginning and at the end of the study are nearly the same in case of subscribed users, which shows that these users are not prone to popularity bias, and thus personalized recommendations also do not affect their reading behavior in this respect.
With respect to RQ2, we find that the introduction of personalized recommendations can help to mitigate popularity bias in case of anonymous users. Furthermore, we see two significant peaks in the distributions that are in line with the COVID-19 lockdown announcement in Austria and the Vienna terror attack. Hence, in the case of significant events, subscribed users are also prone to popularity bias.
\begin{figure}[t]
\centering
\subfloat[Skewness.]{
\includegraphics[width=.49\textwidth]{skew.pdf}}
~
\subfloat[Kurtosis.]{
\includegraphics[width=.49\textwidth]{kurt.pdf}}
\caption{RQ2: Impact of personalized, content-based recommendations on the popularity bias in news article consumption measured by (a) skewness and (b) kurtosis based on the number of article reads for each day of our two-week study. Popularity bias can be mitigated by introducing personalized recommendations in case of anonymous users.}
\vspace{-4mm}
\label{fig:pop_bias}
\end{figure}
\section{Conclusion}
In this paper, we discussed the introduction of personalized, content-based news recommendations on DiePresse, a popular Austrian news platform, focusing on two specific aspects: user interface type (RQ1), and popularity bias mitigation (RQ2).
With respect to RQ1, we find that the probability of recommendations to be seen is the highest for desktop devices, while the probability of clicking the recommendations is the highest for mobile devices.
With respect to RQ2, we find that personalized, content-based news recommendations result in a more balanced distribution of news articles' readership popularity for anonymous users.
For future work, we plan to conduct a longer study, in which we also want to examine the impact of different recommendation algorithms (e.g., content-based vs. collaborative ones) on converting anonymous users into paying subscribers. Furthermore, we plan to investigate other evaluation metrics, such as recommendation diversity, serendipity and novelty.
\vspace{2mm}
\noindent
\textbf{Acknowledgements.} This work was funded by the H2020 projects TRUSTS (GA: 871481), TRIPLE (GA: 863420), and the FFG COMET program. The authors want to thank Aliz Budapest for supporting the study execution.
\bibliographystyle{splncs04}
\input{main.bbl}
\end{document}
\endinput
\section{Introduction and Motivation}
The minimal supergravity model (mSUGRA)~\cite{sugra} is commonly regarded as
the paradigm framework for phenomenological analyses of weak scale
supersymmetry. The visible sector is taken to consist of the particles
of the Minimal Supersymmetric Standard Model\cite{mssm}
(MSSM).
One posits, in addition, the existence of
``hidden sector'' field(s), which couple to ordinary matter fields
and their superpartners only
via gravity. The conservation of $R$-parity is assumed.
Supersymmetry is broken
in a hidden sector of the theory;
supersymmetry breaking is then communicated to the visible sector via
gravitational interactions. The technical
assumption of minimality implies
that kinetic terms for matter fields take the canonical form;
this assumption, which is equivalent to assuming an approximate global $U(n)$
symmetry between $n$ chiral multiplets, leads to a common mass squared
$m_0^2$ for all
scalar fields, and a common trilinear term $A_0$ for all $A$ parameters.
These parameters, which determine the sparticle-particle mass splitting
in the observable sector, are taken to be comparable to the weak scale,
$M_{weak}$.
In addition, motivated by the apparently successful gauge coupling
unification in the MSSM,
one usually adopts a common value $m_{1/2}$ for all gaugino masses at the
scale $M_{GUT}\simeq 2\times 10^{16}$ GeV. For simplicity, it is
commonly assumed that in fact the scalar masses and trilinear terms unify at
$M_{GUT}$ as well. The resulting effective theory, valid at energy scales
$E<M_{GUT}$, is then just the MSSM with the usual soft SUSY breaking terms,
which in this case are unified at $M_{GUT}$.
The soft SUSY breaking scalar and gaugino mases, the
trilinear $A$ terms and in addition a bilinear soft term $B$, the gauge and
Yukawa couplings and the supersymmetric $\mu$ term are all then evolved
from $M_{GUT}$ to some scale $M\simeq M_{weak}$ using renormalization
group equations (RGE's).
The large top quark Yukawa coupling causes the squared mass of one of the
Higgs fields to be driven negative, resulting in the
breakdown of electroweak symmetry; this determines the value
of $\mu^2$. Finally, it is customary to trade the parameter $B$ for
$\tan\beta$, the ratio of Higgs field vacuum expectation values.
The resulting weak scale spectrum of superpartners and their couplings can
thus be derived in terms of four continuous plus one discrete
parameters
\begin{equation}
m_0,\ m_{1/2},\ A_0,\ \tan\beta\ {\rm and}\ \mathop{\rm sgn}(\mu),
\end{equation}
in addition to the usual parameters of the standard model.
The consequences of the mSUGRA model have been investigated for collider
experiments at the CERN LEP2 $e^+e^-$ collider\cite{lep2}, the Fermilab
Tevatron $p\bar p$ collider\cite{bcpt,tevatron}, the CERN LHC $pp$
collider\cite{lhc} and a possible next linear $e^+e^-$ collider (NLC)
operating at $\sqrt{s}\simeq 500$ GeV\cite{nlc,NOJ}. In all
but the last of these studies (where the effect of the tau Yukawa
coupling on aspects of the phenomenology of the stau sector is carefully
examined), small to moderate values of the parameter $\tan\beta\sim
2-10$ have been adopted. This was due in part to the fact that event
generators such as ISAJET\cite{isajet} had not been constructed to
provide reliable calculations for large $\tan\beta$. In particular,
effects of tau and bottom Yukawa couplings,
\begin{equation}
f_b={g m_b\over\sqrt{2}M_W\cos\beta},
\ f_{\tau}={g m_{\tau}\over\sqrt{2}M_W\cos\beta}
\end{equation}
which become comparable to the electroweak gauge couplings and even to
the top Yukawa coupling $f_t=g m_t/(\sqrt{2}M_W\sin\beta)$ if
$\tan\beta$ is large, had not been completely included. The correct
inclusion of these couplings has a significant impact~\cite{drees,prl}
on the search for supersymmetry at colliders.
In the mSUGRA model, the parameter $\tan\beta$ can be as large as
$\tan\beta\sim {m_t/m_b}$, where the quark masses are evaluated at
a scale $\sim M_{weak}$; since the running $m_b$ is considerably smaller
than 5~GeV, $\tan\beta$ values up to 45-50 are possible. Such large
$\tan\beta$ values are indeed preferred in some $SO(10)$ GUT models with
Yukawa coupling unification. In practice, one finds that if $\tan\beta$
is chosen to be too large, $f_b$ diverges before $M_{GUT}$. A slightly
stronger upper limit on $\tan\beta$ is obtained from the requirement
that $m_A^2$, the mass of the pseudo-scalar Higgs boson, should be
positive. The precise value of the upper bound on $\tan\beta$
depends somewhat on the other mSUGRA parameters.
In a recent Letter\cite{prl},
we reported on an upgrade of the event generator ISAJET
that correctly incorporated the effects of $\tau$ and $b$ Yukawa interactions
so that it
would provide reliable predictions for
supersymmetry with large $\tan\beta$.
Novel phenomenological implications special to
large values of
$\tan\beta$ were pointed out: in particular, it was noted that while
Tevatron signals in multilepton ($e$ and $\mu$) channels were
greatly reduced, there could be new signals involving $b$-jets and
$\tau$-leptons via which
to search for SUSY.
In this paper, we focus our attention on the search for
supersymmetry at the Main Injector (MI) upgrade of
the Fermilab Tevatron $p\bar p$
collider,
($\sqrt{s}=2$ TeV, integrated luminosity $\int{\cal L}dt=2~$fb$^{-1}$)
and the proposed TeV33 upgrade
($\sqrt{s}=2$ TeV, integrated luminosity $\int{\cal L}dt=25~$fb$^{-1}$)
for the case where $\tan\beta$ is large.
\subsection{Sparticles masses at large $\tan\beta$}
Large $b$ and $\tau$ Yukawa couplings
significantly alter the mass spectra of the sparticles and Higgs bosons as
shown in Fig.~1. Here we plot various sparticle and Higgs boson
masses versus $\tan\beta$ for mSUGRA parameters
$m_{1/2}=150$ GeV, $A_0=0$ and {\it a}) $m_0=150$ GeV and {\it b})
$m_0=500$ GeV, for both signs of $\mu$. We fix the pole mass $m_t = 170$~GeV.
The $b$ and $\tau$ Yukawa couplings contribute negatively to the
renormalization group running of the sbottom and stau soft masses, driving
them to lower values than soft masses for the corresponding
first and second generation squarks and sleptons. In addition,
the off-diagonal terms in the sbottom and stau mass-squared matrices
$m_b(-A_b+\mu\tan\beta$) and $m_{\tau}(-A_{\tau}+\mu\tan\beta$)
can result in significant mixing between
left and right sbottom and stau gauge eigenstates,
and a possible further decrease in
the physical masses for the lighter of the two sbottom (and stau)
mass eigenstates $m_{\tilde b_1}$ and $m_{\tilde \tau_1}$.
If $\tan\beta$ is small, $\tilde \tau_1 \simeq \tilde \tau_R$, while (because of top
quark Yukawa interactions) $\tilde b_1 \simeq \tilde b_L$.
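Schematically, in the $(\tilde \tau_L,\tilde \tau_R)$ basis the stau mass-squared matrix takes the familiar form
\[
{\cal M}^2_{\tilde \tau}=\left(\begin{array}{cc}
m^2_{\tilde \tau_L}+m_{\tau}^2+D_L & m_{\tau}(-A_{\tau}+\mu\tan\beta) \\
m_{\tau}(-A_{\tau}+\mu\tan\beta) & m^2_{\tilde \tau_R}+m_{\tau}^2+D_R
\end{array}\right),
\]
where $D_{L,R}$ denote the usual $D$-term contributions; the sbottom matrix is analogous. For large $\tan\beta$, the off-diagonal entries grow linearly with $\tan\beta$, pushing the smaller eigenvalue down.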
The impact of bottom and tau Yukawa interactions can be seen
in Fig.~1: $m_{\tilde \tau_1}\simeq m_{\tilde e_R}$ at low $\tan\beta$, and
as $\tan\beta$ increases, $m_{\tilde \tau_1}$ decreases, while $m_{\tilde e_R}$
remains constant. Likewise, $m_{\tilde b_1}$ decreases with increasing
$\tan\beta$, while $m_{\tilde d_L}$ remains constant. In the case of
frame {\it a}), ultimately $m_{\tilde \tau_1}$ drops below $m_{\widetilde W_1}$ and $m_{\widetilde Z_2}$
so that the two body decays $\widetilde W_1\rightarrow \tilde \tau_1\nu_\tau$ and
$\widetilde Z_2\rightarrow\tilde \tau_1\tau$ become allowed, and dominate the
branching fractions.
It is well known that at low to moderate values of $\tan\beta$,
the large top Yukawa coupling drives the Higgs
mass $m_{H_2}^2$ to negative values, resulting in a breakdown of electroweak
symmetry. At large $\tan\beta$, the large $b$ and $\tau$ Yukawa couplings
drive the other soft Higgs mass-squared $m_{H_1}^2$ to small or negative values
as well. This results overall in a {\it decrease}
in mass for the pseudo-scalar
Higgs $m_A$ relative to its value at small $\tan\beta$. Since the values of the
heavy scalar and charged Higgs boson masses are related to $m_A$,
they decrease as well. This effect is also illustrated in Fig.~1, where
the mass $m_A$ decreases dramatically with increasing $\tan\beta$.
The curves are terminated at the value of $\tan\beta$ beyond which
$m_A^2
< 0$, and the correct pattern of electroweak symmetry breaking is not
obtained as already mentioned.
We found that the pseudoscalar mass $m_A$, obtained using the 1-loop
effective potential, is unstable, varying by up to a factor of two
under scale variations for relatively low values of the scale choice
$Q\sim M_Z$.
This instability would be presumably corrected by inclusion of
2-loop corrections.
We find the choice of scale
$Q\sim\sqrt{m_{\tilde t_L}m_{\tilde t_R}}$ to empirically yield stable predictions of
Higgs boson masses in the RG improved 1-loop effective potential
(where we include contributions from all third generation
particles and sparticles).
This scale choice effectively includes some
important two loop effects, and yields predictions for light scalar Higgs boson
masses $m_h$ in close accord with the results of Ref. \cite{carena}.
\subsection{Sparticle decays at large $\tan\beta$}
For large values of $\tan\beta$, $b$ and $\tau$ Yukawa couplings become
comparable in strength to the usual gauge interactions, so that Yukawa
interaction
contributions to sparticle decay rates are non-negligible and can even
dominate. This could manifest itself as lepton non-universality in SUSY
events. Also, because of the reduction of masses referred to
above, chargino and neutralino decays to stau, sbottom
and various Higgs bosons
may be allowed, even if the corresponding decays would be
kinematically forbidden for small $\tan\beta$ values.
The reduced stau, sbottom, and Higgs masses can also
increase sparticle branching ratios to third generation particles
via virtual effects. These enhanced decays to third generation
particles can radically alter
the expected SUSY signatures at colliders.
We have
re-calculated the branching fractions for the $\tilde g$, $\tilde b_i$, $\tilde t_i$,
$\tilde \tau_i$, $\tilde\nu_{\tau}$, $\widetilde W_i$, $\widetilde Z_i$, $h$, $H$, $A$ and $H^\pm$
particles and sparticles including sbottom and stau mixing as well as
effects of $b$ and $\tau$ Yukawa interactions.
For Higgs boson decays, we use the formulae in Ref. \cite{bisset}.
We have recalculated the decay widths for
$\tilde g\rightarrow tb\widetilde W_i$ and $\tilde g\rightarrow b\bar{b}\widetilde Z_i$. These have been
calculated previously
by Bartl {\it et al.}\cite{bartl}; our results agree with theirs if we
use pole fermion masses to calculate
the Yukawa couplings. In ISAJET, we use the
running Yukawa couplings evaluated at the scale $Q=m_{\tilde g}$ ($m_t$) to compute
decay rates for the gluino ($\widetilde W_i$,$\widetilde Z_i$). This seems a more
appropriate choice, and it significantly alters
the decay widths when effects of $f_b$ are important.
The $\widetilde Z_i\rightarrow \tau\bar{\tau}\widetilde Z_j$ and $\widetilde Z_i\rightarrow b\bar{b}\widetilde Z_j$
decays take place via eight diagrams ($\tilde f_{1,2}$,
$\bar{\tilde f}_{1,2}$, $Z$, $h$, $H$ and $A$ exchanges). In our
calculation of $\tilde g$ and $\widetilde Z_i$ decays,
we have neglected $b$ and $\tau$ masses except
in the Yukawa couplings and in the phase space integration.
We have also computed
the widths for decays $\widetilde W_i\rightarrow\widetilde Z_j \tau\nu$ which are mediated by
$W$, $\tilde \tau_{1,2}$, $\tilde\nu_{\tau}$ and $H^{\pm}$ exchanges; in these cases,
we retain $m_{\tau}$ effects only in the Yukawa couplings. Formulae for
these three-body decays are presented in the Appendix.
To illustrate the importance of the Yukawa coupling effects,
we show selected branching ratios of
$\widetilde W_1$ and $\widetilde Z_2$ in Fig.~2.
In all frames we take $\mu >0$.
Frames {\it a})
and {\it b}) are for the mSUGRA case
($m_0,\ m_{1/2},\ A_0 )=(150,150,0)$ GeV; frames {\it c}) and {\it d})
show the same branching fractions, but take $m_0=500$ GeV instead.
In frame {\it a}), for low $\tan\beta$
we see that the $\widetilde W_1\rightarrow e\nu\widetilde Z_1$ and $\widetilde W_1\rightarrow\tau\nu\widetilde Z_1$
branching ratios are very close in magnitude, reflecting the smallness
of $f_{\tau}$. For $\tan\beta \agt 10$, these
branchings begin to diverge, with the branching to $\tau$'s
becoming increasingly
dominant. For $\tan\beta >40$, the two body mode $\widetilde W_1\rightarrow \tilde \tau_1\nu$
opens up and quickly dominates. Since this decay
is followed by $\tilde \tau_1\rightarrow \tau\widetilde Z_1$, the end products of chargino
decays here are almost exclusively
tau leptons plus missing energy.
In frame {\it b}), we see at low $\tan\beta$ the $\widetilde Z_2\rightarrow
e\bar{e}\widetilde Z_1$ and $\widetilde Z_2\rightarrow \tau\bar{\tau}\widetilde Z_1$ branchings are large
($\sim 10\%$) and equal, again because of the smallness of the Yukawa
coupling. Except for parameter regions where the leptonic decays of
$\widetilde Z_2$ are strongly suppressed, $\widetilde W_1\widetilde Z_2$ production leads to the
well known $3\ell$ ($=e,\mu$) signature for the Tevatron
collider\cite{trilep}. As $\tan\beta$ increases beyond about 5, these
branchings again diverge, and increasingly $\widetilde Z_2\rightarrow\tau\bar{\tau}\widetilde Z_1$
dominates. Results of phenomenological analyses of trilepton signals for
$\tan\beta \sim 8-10$ obtained using older versions of ISAJET should,
therefore, be
interpreted with caution. For $\tan\beta >40$, $\widetilde Z_2\rightarrow \tau\tilde \tau_1$
opens up, and becomes quickly close to 100\%. Near the edge of parameter
space ($\tan\beta \sim 45$), the $\widetilde Z_2\rightarrow \widetilde Z_1 h$ decay opens up,
resulting in a reduction of the $\widetilde Z_2\rightarrow \tau\tilde \tau_1$ branching
fraction.
In frame {\it c}), the large value of $m_0=500$ GeV yields a large value
of $m_{\tilde \tau_1}$ (and other slepton masses) even if $\tan\beta$ is
large. In this case, the $\widetilde W_1$ branching fractions are dominated by
the virtual $W$ boson, so that $B(\widetilde W_1\rightarrow \widetilde Z_1 e\nu )$ and $B(\widetilde W_1\rightarrow
\widetilde Z_1 \tau\nu )$ are nearly equal over almost the entire range of
$\tan\beta$. The branching fractions of $\widetilde Z_2$ for $m_0=500$ GeV are
shown in frame {\it d}). As in frame {\it c}), the branching fraction of
$\widetilde Z_2$ to $\tau$'s and $e$'s is nearly the same except when
$\tan\beta \geq 35-40$. In this case, there is a steadily increasing branching
fraction of $\widetilde Z_2\rightarrow \widetilde Z_1 b\bar{b}$ (and to some extent, also of
$\widetilde Z_2 \rightarrow \widetilde Z_1 \tau\bar{\tau}$), which is mainly a reflection of
the increasing importance of virtual Higgs bosons in the $\widetilde Z_2$
three-body decays. We mention that for values of $\tan\beta$ somewhat below
the range where the decay $\widetilde Z_2 \rightarrow \widetilde Z_1 h$ becomes kinematically
allowed, contributions from {\it all} neutral Higgs bosons are important.
The above considerations motivated us to begin a systematic exploration
of how signals for supersymmetry may be altered if $\tan\beta$ indeed
turns out to be very large. To facilitate this analysis, we have
incorporated the above calculations into the computer program
ISAJET 7.32, so that realistic simulations of sparticle production and
decay can be made for large $\tan\beta$.
Another important effect at large $\tan\beta$ is that
tau Yukawa interactions can alter the mean polarization of the
$\tau$'s produced in chargino and neutralino decays. This, in turn, alters
the energy distribution of the visible decay products of the $\tau$. The
$\tau$ polarization information is saved in ISAJET and used to dictate the
energy distribution of the $\tau$ decay products.
The rest of this paper is organized as follows. In Sec. II, we describe
aspects of our event generation and analysis program for Tevatron experiments,
including a catalogue of some of the possible signals for supersymmetry
at large $\tan\beta$. In Sec. III, we present numerical results of our
generation of supersymmetric signals and SM backgrounds, and show the reach of
the Tevatron MI and TeV33 in the parameter space of the mSUGRA model.
In Sec. IV, we present a summary and conclusions from our work.
Some lengthy three-body decay formulae are included in the Appendix.
\section{Event simulation, signatures and cuts}
In several previous works\cite{bcpt}, a variety of signal
channels for the discovery of
supersymmetry at the Tevatron were investigated, and plots were shown for
the reach of the Tevatron MI and TeV33 in the parameter space of the mSUGRA
model. The simulation of SUSY signal events was restricted to parameter
space values of $\tan\beta =2$ and 10. The promising discovery channels that
were investigated included the following:
\begin{itemize}
\item multi-jet $+E\llap/_T$ events (veto hard, isolated leptons) (J0L),
\item events with a single isolated lepton plus jets $+E\llap/_T$ (J1L),
\item events with two opposite sign isolated leptons plus jets $+E\llap/_T$ (JOS),
\item events with two same sign isolated leptons plus jets $+E\llap/_T$ (JSS),
\item events with three isolated leptons plus jets $+E\llap/_T$ (J3L),
\item events with two isolated leptons $+E\llap/_T$ (no jets, clean) (COS),
\item events with three isolated leptons $+E\llap/_T$ (no jets, clean) (C3L).
\end{itemize}
In these samples, the number of leptons is {\it exactly} that indicated,
so that these samples are non-overlapping.
For Tevatron data samples on the order of 0.1 fb$^{-1}$, the J0L
signal generally gave the best reach for supersymmetry. It is the classic
signature for detecting gluinos and squarks at hadron colliders. For
larger data samples typical of those expected at the MI or TeV33,
the C3L signal usually gave the best reach. In the present paper, we will
extend these results to the large $\tan\beta$ region of mSUGRA parameter
space; we will also look for new signatures which may be indicative of
supersymmetry at large $\tan\beta$.
By examining the branching fractions in Fig.~\ref{nfig2}, we expect in
general at large $\tan\beta$ that there would be a reduction in
supersymmetric events containing isolated $e$'s or $\mu$'s. We also
expect for large $\tan\beta$ and small $m_0$ a more conspicuous
presence of isolated $\tau$ leptons (identified via hadronic one- or
three-prong jets as discussed below). For large $\tan\beta$ and large
$m_0$, we expect an increased presence of tagged $b$-jets (defined by
displaced decay vertices or by identification of a muon inside of a
jet). For these reasons, we have expanded the set of event topologies
via which to search for SUSY to
include, in addition:
\begin{itemize}
\item multi-jet $+E\llap/_T$ events which include at least one tagged $b$-jet
(J0LB),
\item multi-jet $+E\llap/_T$ events which include at least one tagged $\tau$-jet
(J0LT),
\item multi-jet $+E\llap/_T$ events which include at least
either a tagged $b$-jet or
a tagged $\tau$-jet (J0LBT),
\item opposite-sign isolated dilepton plus jet $+E\llap/_T$ events where
at least one of
the isolated leptons is actually a tagged $\tau$-jet (JOST),
\item same-sign isolated dilepton plus jet $+E\llap/_T$ events where at least one of
the isolated leptons is actually a tagged $\tau$-jet (JSST),
\item isolated trilepton plus jet $+E\llap/_T$ events where at least one of
the isolated leptons is actually a tagged $\tau$-jet (J3LT),
\item clean opposite-sign isolated dilepton $+E\llap/_T$ events where at least
one of the isolated leptons is actually a tagged $\tau$-jet (COST),
\item clean isolated trilepton $+E\llap/_T$ events where at least one of
the isolated leptons is actually a tagged $\tau$-jet (C3LT).
\end{itemize}
We note that some of these event samples are no longer non-overlapping;
for instance, the J0LB sample is a subset of the canonical $E\llap/_T$ (J0L)
sample. In the tau samples, the lepton multiplicity is again exactly that
indicated, except that at least one of the leptons is required to be
identified as a $\tau$.
To model the experimental conditions at the Tevatron, we use the toy
calorimeter simulation package ISAPLT. We simulate calorimetry covering
$-4<\eta <4$ with cell size $\Delta\eta\times\Delta\phi =0.1\times
0.0875$. We take the hadronic (electromagnetic) energy resolution to be
$70\% /\sqrt{E}$ ($15\% /\sqrt{E}$). Jets are defined as hadronic
clusters with $E_T > 15$~GeV within a cone with $\Delta
R=\sqrt{\Delta\eta^2 +\Delta\phi^2} =0.7$. We require that $|\eta_j|
\leq 3.5$. Muons and electrons are classified as isolated if they have
$p_T>5$~GeV, $|\eta (\ell )|<2.5$, and the visible activity within a
cone of $R=0.3$ about the lepton direction is less than $max(E_T(\ell
)/4,2\ {\rm GeV})$. For tagged $b$-jets, we require a jet (using the
above jet requirement) to have in addition $|\eta_j|<2$ and to contain a
$b$-hadron. Then the jet is identified as a $b$-jet with a 50\%
efficiency. To identify a $\tau$-jet, we require a jet with just 1 or 3
charged prongs with $p_T>1$ GeV within $10^\circ$ of the jet axis, and
no other charged prongs within $30^\circ$ of the jet axis. The invariant
mass of the 3-prong jets must be less than $m_{\tau}$, and the net
charge of the 3 prongs should be $\pm 1$. QCD jets with $p_T = 15$
($\geq 50$)~GeV are mis-identified as $\tau$ jets with a
probability~\cite{prob} of 0.5\% (0.1\%), with a linear interpolation in
between. In our analysis, we neglect multiple scattering effects,
non-physics backgrounds from photon or jet misidentification, and make
no attempt to explicitly simulate any particular detector.
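For concreteness, this mis-identification probability corresponds to the simple interpolation sketched below (schematic code, not our actual simulation source):
\begin{verbatim}
def tau_misid_prob(pt):
    # probability for a QCD jet with transverse momentum pt (GeV)
    # to be mis-identified as a tau-jet
    if pt <= 15.0:
        return 0.005      # 0.5% at pt = 15 GeV
    if pt >= 50.0:
        return 0.001      # 0.1% at pt >= 50 GeV
    # linear interpolation between (15, 0.005) and (50, 0.001)
    return 0.005 + (pt - 15.0) * (0.001 - 0.005) / (50.0 - 15.0)
\end{verbatim}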
We incorporate in our analysis the following trigger conditions:
\begin{enumerate}
\item one isolated lepton with $p_T(\ell) > 15$~GeV and $E\llap/_T >15$ GeV,
\item $E\llap/_T >35$ GeV,
\item two isolated leptons each with $E_T>10$ GeV and $E\llap/_T >10$ GeV,
\item one isolated lepton with $E_T>10$ GeV plus at least one jet plus
$E\llap/_T >15$ GeV,
\item at least four jets per event, each with $E_T>15$ GeV.
\end{enumerate}
Thus, every signal or background event must satisfy at least one of the
above conditions.
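Equivalently, the trigger requirement is a logical OR of these five conditions, as in the schematic helper below (hypothetical function; lepton $p_T$ and $E_T$ are treated interchangeably, all energies in GeV):
\begin{verbatim}
def passes_trigger(lep_et, jet_et, met):
    # lep_et: isolated-lepton E_T values; jet_et: jet E_T values
    lep = sorted(lep_et, reverse=True)
    c1 = len(lep) >= 1 and lep[0] > 15 and met > 15
    c2 = met > 35
    c3 = len(lep) >= 2 and lep[1] > 10 and met > 10
    c4 = (len(lep) >= 1 and lep[0] > 10
          and len(jet_et) >= 1 and met > 15)
    c5 = sum(1 for j in jet_et if j > 15) >= 4
    return c1 or c2 or c3 or c4 or c5
\end{verbatim}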
We have generated the following physics background processes using ISAJET:
$t\bar t$ production, $W+$jets, $Z+$jets, $WW$, $WZ$ and $ZZ$ production
and QCD (mainly from $b\bar b$ and $c\bar c$ production). Each
background subprocess was generated with final state particles
in $p_T$ bins of $25-50$ GeV, $50-100$ GeV, $100-200$ GeV, $200-400$ GeV
and $400-600$ GeV.
\section{The reach of the Fermilab Tevatron for mSUGRA}
We present our main results for the reach of the Tevatron for mSUGRA
at large $\tan\beta$ in the $m_0\ vs.\ m_{1/2}$ parameter space plane for
$A_0=0$ and for $\tan\beta =2,20,35$ and 45. Our results are shown for
$\mu >0$ only. For small $\tan\beta\sim 2$, the $\mu <0$ results differ
substantially from the $\mu >0$ results, and are shown in Ref. \cite{bcpt}.
As $\tan\beta$ increases, the positive and negative $\mu$ results become
increasingly indistinguishable.
In Fig.~\ref{nfig3} we show for orientation contours of constant
$m_{\tilde g}$ and $m_{\tilde q}$ in the $m_0\ vs.\ m_{1/2}$ plane. The bricked
regions are excluded by either lack of appropriate electroweak symmetry
breaking, or due to the $\tilde \tau_1$ or $\widetilde W_1$ being the LSP instead of
the $\widetilde Z_1$. The gray regions are excluded by previous experimental
sparticle searches, and the excluded region~\cite{lep2} is dominantly
formed by the LEP2 bound that $m_{\widetilde W_1}>80$ GeV~\cite{lepbnd}. The most
noticeable feature of Fig.~\ref{nfig3} is that the theoretically
excluded region increases significantly as $\tan\beta$ increases. In the
low $m_0$ region, this is due to the decrease in $\tilde \tau_1$ mass, making
it the LSP. The contours of $m_{\tilde g}$ and $m_{\tilde q}$, on the other
hand, are relatively constant and change little with $\tan\beta$. The
region to the left of the dotted lines denotes where the decay modes
$\widetilde W_1\rightarrow\tilde \tau_1\nu$ and $\widetilde Z_2\rightarrow \tilde \tau_1\tau$ become accessible.
As in our previous analysis of signals at low $\tan\beta$\cite{bcpt},
for channels involving jets, we require of all signals,
\begin{itemize}
\item jet multiplicity, $n_{jets}-n_{\tau -jets} \geq 2$,
\item $E\llap/_T > 40$~GeV, and
\item $E_T(j_1), \ E_T(j_2) \ > \ E_T^c$ and $E\llap/_T > E_T^c$,
\end{itemize}
where the parameter $E_T^c$ is taken to be
$E_T^c=15,40,60,80,100,120,140,160$ GeV. This requirement serves to give
some optimization of cuts for different masses of SUSY particles.
We generate signal events for each point on a 25~GeV~$\times$~25~GeV
grid in the $m_0-m_{1/2}$ plane. For an observable signal, we require at
least 5 signal events after all cuts (including those detailed below) are
imposed, with $N_{signal}$ exceeding $5\sqrt{N_{background}}$. Any
signal is considered observable if it meets the observability criteria
for at least {\it one} of the values of $E_T^c$. In addition, we
require the ratio of signal/background to exceed 0.2 for all
luminosities.
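In code form, the observability criterion applied at each grid point reads as follows (schematic sketch; sig and bkg are the expected numbers of signal and background events after cuts for one $E_T^c$ choice, and sig_list, bkg_list are hypothetical names for the lists over $E_T^c$ values):
\begin{verbatim}
import math

def observable(sig, bkg):
    # at least 5 signal events, significance above 5*sqrt(B),
    # and signal-to-background ratio above 0.2
    return (sig >= 5 and sig > 5 * math.sqrt(bkg)
            and (bkg == 0 or sig / bkg > 0.2))

# a point counts as visible if observable for at least one E_T^c
visible = any(observable(s, b) for s, b in zip(sig_list, bkg_list))
\end{verbatim}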
\subsection{Reach via the J0L channel}
As in Ref. \cite{bcpt}, for multijet$+E\llap/_T$ events (J0L),
we require in addition to the above,
\begin{itemize}
\item transverse sphericity $S_T > 0.2$, and
\item $\Delta\phi(\vec{E\llap/_T},\vec{E_{Tj}})>30^\circ$.
\end{itemize}
In Fig.~\ref{nfig4}, we show the Tevatron reach via the J0L channel.
Black squares denote parameter space points accessible to Tevatron
experiments with 0.1 fb$^{-1}$ of integrated luminosity (approximately
the Run I data sample); points denoted by gray squares are accessible
with 2 fb$^{-1}$ while those with open squares are accessible with 25
fb$^{-1}$. Points denoted by $\times$ are not visible at any of the
luminosity upgrade options considered. In frame {\it a}), no black
squares are visible; regions normally accessible to Tevatron experiments
with just 0.1
fb$^{-1}$ of integrated luminosity have been excluded by the negative
results of LEP2 searches for charginos. This is strictly valid only
within the model framework, and should not be regarded as a direct bound
on $m_{\tilde g}$. Regardless of the LEP2 bounds, Tevatron experiments should
directly probe this region via the independent search for strongly
interacting sparticles. Note that even within the mSUGRA framework, for
$\mu <0$ and $\tan\beta =2$, where $m_{\widetilde W_1}$ is considerably heavier
for the same $m_{1/2}$ values, there still exist parameter space points
accessible with only 0.1 fb$^{-1}$\cite{bcpt}. A significant number of gray
squares appear in frame {\it a}), denoting regions with $m_{\tilde g}\sim
400$ GeV that can be probed at the MI. As $\tan\beta$ increases, the
theoretically excluded region absorbs some of these points at low $m_0$,
while some of the high $m_0$ points become inaccessible. In the latter
case, much of the signal actually comes from $\widetilde W_1\overline{\widetilde W_1}$ and
$\widetilde W_1\widetilde Z_2$ production, and the branching fractions of these particles into
jetty final states decrease, so the J0L signal diminishes. Finally, for very large
$\tan\beta =45$, none of the parameter space in this channel is open to
MI searches. For TeV33, we see that $m_{1/2}\sim 175$ GeV ($m_{\tilde g}\sim
475$ GeV) can be probed in all of the frames {\it a})-{\it d}) as long
as $m_0$ is not too large. The
largest reach occurs when $E_T^c$ attains its largest value of
$E_T^c=160$ GeV.
\subsection{Reach via the J0LB channel}
In Fig.~\ref{nfig5}, we show the reach in the $E\llap/_T +$jets channel,
where in addition we require at least one tagged $b$-jet
(J0LB). Comparing with Fig.~\ref{nfig4}, we see that the requirement of
a tagged $b$-jet considerably reduces the reach of the MI. Furthermore,
the parameter space points with $m_{1/2}=175$ GeV are no longer
accessible to TeV33. In other words, a higher $E_T^c$ value is more
efficient in maximizing signal-to-background for large $m_{1/2}$ than
requiring an extra $b$-jet. However, for large $m_0$ and $m_{1/2}\sim
125-150$ GeV, the extra $b$-tag does somewhat increase the reach of
TeV33 for SUSY. Comparison of Fig.~\ref{nfig4} and \ref{nfig5} shows
three additional points accessible in frame {\it a}), two in frame {\it
b}), and one in frame {\it d}). We have also tried to extend the
parameter space reach by requiring an identified $\tau$-jet (J0LT) or
either a $\tau$ or $b$ jet (J0LBT) along with $E\llap/_T +$ jets. In both of
these cases, no additional reach was achieved beyond the results of Figs
\ref{nfig4} and \ref{nfig5}.
\subsection{Reach via the JOS and JSS channels}
The reach of Tevatron upgrades on the JOS channel is presented in
Fig.~\ref{nfig6}.
We require, in addition to the conditions at the beginning of this Section,
\begin{itemize}
\item events with exactly two opposite sign isolated leptons ($e$ and $\mu$),
with $E_T(\ell_1)> 10$~GeV and a veto of $\tau$-jets.
\end{itemize}
At the Tevatron at low $\tan\beta$, signals in this channel mainly come
from $\widetilde W_1\widetilde Z_2$ production, where $\widetilde Z_2$ decays leptonically, and
$\widetilde W_1$ decays hadronically, while top production is a major source of
SM background. There is significant reach by the Tevatron
MI and TeV33 in this channel at low $\tan\beta$, as seen in frame {\it
a}). As $\tan\beta$ increases, the $\widetilde Z_2$ leptonic branching fraction
decreases (see Fig.~\ref{nfig2}), so that the MI has no reach in this
channel for $\tan\beta \geq 20$. The reach of TeV33 is severely limited in
this channel at high $\tan\beta$ as well.
We have also examined the reach of the MI and TeV33 for same-sign dileptons
(JSS channel), where we require in addition
\begin{itemize}
\item events to contain exactly two same sign
isolated leptons, again with $E_T(\ell_1)> 10$~GeV and a veto of
$\tau$-jets.
\end{itemize}
The reach of Tevatron upgrades in this channel for mSUGRA
is not very promising. The signal should result mainly from $\tilde g\tg$ and
$\tilde g\tilde q$ production mechanisms, but these have only small cross sections
for parameter space points beyond the reach of LEP2. We found almost no
reach for mSUGRA in this channel beyond the LEP2 bounds for {\it any}
values of $\tan\beta$.
We have also studied the Tevatron reach in the dilepton plus jets channels
where we required in addition that at least one of the leptons be a tagged
$\tau$-jet: the JOST and JSST channels. In each of these cases,
a small increase in reach was obtained
for large values of $\tan\beta$ and low $m_0$
beyond the corresponding ``tau-less'' channels. Most of this additional
region can also be probed via the J3L channel discussed below,
so we do not show these results here.
\subsection{Reach via the J3L channel}
For small values of $\tan\beta$,
the J3L channel considerably increases the region of mSUGRA parameters
beyond what can be probed via the $E\llap/_T$ channel at a high luminosity
Tevatron. In addition to the generic cuts for all the signals involving
jets, we require the following analysis cuts for the J3L channel:
\begin{itemize}
\item events containing exactly three isolated leptons with
$E_T(\ell_1)>10$~GeV and a veto of $\tau$-jets, plus
\item we veto events with $|M(\ell^+\ell^-)-M_Z|<8$~GeV.
\end{itemize}
The reach in the J3L channel
after all cuts are imposed is shown in Fig.~\ref{nfig7}.
Since the signal almost always
involves a leptonically decaying $\widetilde Z_2$, it is not surprising to see
that the large reach at low $\tan\beta$
is gradually diminished until there is almost no reach for $\tan\beta\sim 45$.
We have also examined the Tevatron reach in the trilepton plus jets
channels where we required in addition that at least one of the leptons
be a tagged $\tau$-jet: the J3LT channel. As before, only a slight
additional reach was obtained at large $\tan\beta$ and low $m_0$ beyond
what could be probed via the ``tau-less'' J3L channel. Here, and in the
jetty dilepton channels mentioned above, this is presumably because
secondary leptons from tau decay tend to be soft, and fail to satisfy
the acceptance requirements. Again, we do not show these results here.
\subsection{Reach via the C3L and C3LT channels}
For small $\tan\beta \sim 2$, and a large enough integrated luminosity,
the maximum reach of the Tevatron was often achieved via the clean
trilepton channel from $\widetilde W_1\widetilde Z_2\rightarrow 3\ell+E\llap/_T$. For the C3L signal,
following our earlier analysis\cite{bcpt} we implement the following
cuts:
\begin{itemize}
\item we require 3 {\it isolated} leptons ($e$ and $\mu$)
within $|\eta_{\ell} |<2.5$
in each event, with $E_T(\ell_1)>20$
GeV, $E_T(\ell_2)>15$ GeV, and $E_T(\ell_3)>10$ GeV,
\item we require $E\llap/_T >25$ GeV,
\item we require that the
invariant mass of any opposite-sign, same flavor dilepton pair not reconstruct
the $Z$ mass, {\it i.e.} we require that
$|m(\ell\bar{\ell})-M_Z|\geq 8$~GeV,
\item we finally require the events to be {\it clean}, {\it i.e.} we veto
events with jets.
\end{itemize}
Our calculated background in this channel is 0.2 fb.
In Fig.~\ref{nfig8}, we show the reach in the C3L channel for the four
cases of $\tan\beta$. In frame {\it a}), we see at low $\tan\beta$ that
indeed there is no reach beyond the current LEP2 bound in the C3L
channel for 0.1 fb$^{-1}$. For the MI integrated luminosity, however,
there is considerable reach to values of $m_{1/2}\sim 225$ GeV, and for
TeV33, the reach extends to $m_{1/2}\sim 250$ GeV, corresponding to
$m_{\tilde g}\sim 700$ GeV! As $\tan\beta$ increases, the branching fractions
for leptonic decays
of $\widetilde Z_2$ and $\widetilde W_1$ decrease. In frame {\it b}), in fact, we
find {\it no} reach for SUSY via the C3L channel for MI and considerably
reduced reach for TeV33, except at large $m_0$.
For smaller values of $m_0$ a complicated interference between various
amplitudes reduces the leptonic decay width of $\widetilde Z_2$. As $\tan\beta$
increases even further to 35 and 45 as in frames {\it c}) and {\it d}),
the C3L reach is wiped out at low $m_0$. Some reach remains at large
$m_0$ in frame {\it c}), where the branching fraction $BF(\widetilde Z_2\rightarrow
\ell\bar{\ell}\widetilde Z_1)\sim BF(Z\rightarrow \ell\bar{\ell})$. In frame {\it d}),
most of this region also becomes
inaccessible because of the increased importance of (virtual) Higgs
boson mediated decays of $\widetilde Z_2$ which lead to a strong enhancement of
its decay to $b\bar{b}\widetilde Z_1$.
We have also examined the reach for clean trileptons, where one of the
leptons is actually an identified $\tau$-jet (C3LT). In this case, we
relax the additional $p_T$ requirements on the leptons. This increases
the chance of detecting the softer secondary leptons from the decay of tau(s).
Trigger 4 presumably plays an important role for this
class of events.
The reach via this channel is shown in Fig.~\ref{nfig9}.
In frames {\it b}), {\it c}) and {\it d}), significant additional reach
is gained in the low $m_0$ regions, beyond that shown in any of the
previous figures! Notice that the region where the signal is observable
is where chargino and neutralino decays to real $\tilde \tau_1$ are accessible
(see Fig.~\ref{nfig3}). The reach in the C3LT channel effectively
extends the reach of TeV33 to $m_{1/2}\sim 250$ GeV for at least some
value of $m_0$ for all the values of large $\tan\beta$ considered. We
remark that the gain in reach via channels involving taus is limited
because we require the presence of additional hard leptons ($e$ or
$\mu$), jets or $E\llap/_T$ in order to be able to trigger on the
event. Because secondary leptons from the decay of a tau tend to be
soft, the development of an efficient $\tau$ trigger may significantly
enhance the reach when $\tan\beta$ is large.
\subsection{Reach via the COS and COST channels}
In our previous studies \cite{bcpt} we had already noted that for small
values of $\tan\beta$, a study of the
clean opposite sign dilepton channel (COS) would allow a confirmation of
the signal in the C3L channel for a large range of mSUGRA parameters.
For the COS channel, we require
\begin{itemize}
\item exactly two {\it isolated} OS (either $e$ or $\mu$ ) leptons
in each event, with $E_T(\ell_1)>10$ GeV and $E_T(\ell_2)>7$ GeV, and
$|\eta (\ell ) |<2.5$.
In addition, we require {\it no} jets, which
effectively reduces most of the $t\bar t$ background.
\item We require $E\llap/_T >25$ GeV to remove
backgrounds from Drell-Yan dilepton production, and also
the bulk of the background from $\gamma^*, Z\rightarrow\tau\bar{\tau}$ decay.
\item We require $\phi (\ell\bar{\ell})<150^\circ$, to further reduce
$\gamma^*,Z\rightarrow\tau\bar{\tau}$ background.
\item We require the $Z$ mass cut: the invariant mass of any opposite-sign, same flavor dilepton pair
must not reconstruct
the $Z$ mass, {\it i.e.} $ \left| m(\ell\bar{\ell})- M_Z \right| > 8$ GeV.
\item Finally, we require $B=|\vec{E\llap/_T}|+|p_T(\ell_1)|+|p_T(\ell_2)|<100$ GeV.
\end{itemize}
Our calculated background in this case is 64 fb.
We have checked that while there is an observable signal at the MI
(TeV33) for $m_{1/2}\sim 150$~(175)~GeV, and if $m_0 \alt 100$~GeV,
there is no observable signal for any of the allowed regions of the
plane if $\tan\beta \geq 20$. We have also examined
this channel by requiring in addition that at least one of the leptons
be an identified $\tau$-jet (COST). In this case, no reach for mSUGRA was
found for any of the $\tan\beta$ values considered. We therefore do not
show these figures.
\section{Summary and Conclusions}
To summarize the reach of Tevatron upgrades for large and small
$\tan\beta$, we show in Fig.~\ref{nfig11} the SUSY reach via all of the
channels that were examined, for both the upgrade options of the
Tevatron. Thus, if a parameter space point is accessible via any
channel, we place an appropriate box, corresponding to the integrated
luminosity that is required. The cumulative reach shown in the figure
is completely established with
just four channels: J0L, J0LB, C3L and C3LT. For some points, the signal
may be observable in more than one of these or other channels
studied in this paper.
It is possible that
some additional reach may be gained by combining several channels to obtain a net
``$5\sigma$'' signal, even though the significance in each of these
channels is somewhat smaller. We do not consider this added detail
here.
We see from Fig.~\ref{nfig11} that as $\tan\beta$ increases, the SUSY
reach of Tevatron upgrades is significantly reduced. For the MI option,
there is no reach beyond current LEP2 bounds that can be established at
$\tan\beta =45$. The TeV33 option has some reach in all frames, but
clearly a much reduced reach for large $\tan\beta$. In particular, there
are parameter regions just beyond the current LEP2 bounds for which
there will be {\it no observable signal} even with the luminosity of
TeV33.
The reduction of the reach is mostly due to the depletion of leptonic
signals, especially the clean three lepton signal, in the region of large
$\tan\beta$. Note that the branching ratio for $\widetilde W_1$ and $\widetilde Z_2$
to decay into electrons and muons plus missing particles is actually quite
large if charginos and neutralinos dominantly decay into real or virtual
$\tilde{\tau}_1$. However, the secondary leptons produced in subsequent
$\tau$ decays are usually too soft to pass our trigger criteria or
acceptance cuts. It might be worthwhile to investigate whether these
cuts can be lowered without introducing unacceptably large backgrounds
({\it e.g.} from heavy flavors, where the lepton happens to be isolated
and the jet is lost, or from jets faking leptons) or via a development of a
special trilepton trigger.
Modes with identified (hadronically decaying) taus could only partly
compensate this loss of reach in the leptonic channels. Again the
problem seems to be that the hadronic decay products of the $\tau$
leptons are frequently too soft to pass the cut $E_T(\tau-{\rm jet})>15$
GeV. It might be worthwhile to study if this cut can be lowered, {\it
e.g.} by focussing only on one--prong $\tau$ decays, for which QCD
backgrounds are much smaller than in the three--prong channel. In
addition, the triggers adopted in our study are not very efficient for
events with rather soft leptons plus $\tau-$jets, as in our C3LT sample.
We therefore believe that the reach of future Tevatron runs could be
extended significantly in the region of large $\tan \beta$ if it is
possible to devise strategies to reliably identify, and perhaps even
trigger on taus with visible $p_T$ smaller than 15~GeV. We remark,
however, that even without such
developments, experiments at the LHC will probe the entire parameter
plane shown at least via the $E\llap/_T$ channel.
\acknowledgments
We thank Vernon Barger for reading the manuscript.
One of us (XT) is grateful for the hospitality of
the Asia-Pacific Centre for Theoretical
Physics where part of this work was
carried out. HB and XT thank the Aspen Center for Physics for
hospitality during the period that part of this work was
done.
This research was supported in part by the U.~S. Department of Energy
under contract numbers DE-FG05-87ER40319, DE-AC02-76CH00016, and
DE-FG-03-94ER40833.
\bigskip
\section{Introduction}
Data synthesis, as can be conveniently performed in graphics engines, provides valuable flexibility for the computer vision area~\cite{richter2016playing,sakaridis2018semantic,ruiz2019learning,tremblay2018training,sun2019dissecting}. One can synthesize a large amount of training data under various combinations of environmental factors even from a small number of 3D object/scene models. However, there exists a huge domain gap between synthetic data and real-world data~\cite{kar2019meta,ruiz2019learning}. In order to effectively alleviate such a domain gap, it should be addressed from two levels: \textbf{content level} and \textbf{appearance level}~\cite{kar2019meta}. While much existing work focuses on appearance-level domain adaptation~\cite{deng2018image,hoffman2018cycada,zhong2018camera}, we focus on the content level, \emph{i.e.,} learning to synthesize data with similar content to the real data, as different computer vision tasks require different image contents.
Our system is designed based on the following considerations. It is expensive to collect large-scale real-world datasets for multi-camera systems like re-ID. During annotation, one needs to associate an object across different cameras, which is a difficult and laborious process, as objects might exhibit very different appearances in different cameras. In addition, there has also been increasing concern over privacy and data security, which makes the collection of large real datasets difficult~\cite{ristani2016MTMC,yao2020information}. On the other hand, we can see that datasets can be very different in their content. Here, content means the object layout, illumination, and background in the image. For example, the VehicleID dataset~\cite{liu2016deep} consists mostly of car rears and car fronts, while vehicle viewpoints in the VeRi-776 dataset~\cite{liu2016large} cover a very diverse range. Though the VehicleID dataset has a large number of identities, which is useful for model training, this content-level domain gap might cause a model trained on VehicleID to have poor performance on VeRi. Most existing domain adaptation methods work on the pixel level or the feature level so as to allow the source and target domains to have similar appearance or feature distributions. However, these approaches are not capable of handling content differences, as can often be encountered when training on synthetic data and testing on real data.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1\linewidth]{system_flow.pdf}
\end{center}
\caption{
System workflow. (\textbf{Left:}) given a list of attributes and their values, we use a renderer (\emph{i.e.,} Unity) for vehicle simulation. We compute the Fr\'{e}chet Inception Distance (FID) between the synthetic and real vehicles to indicate their distribution difference. By updating the values of attributes using the proposed attribute descent algorithm, we can minimize FID along the training iterations. (\textbf{Right:}) we use the learned attributes values that minimize FID to generate synthetic data to be used for re-ID model training.
}
\label{fig:system_flow}
\end{figure}
Based on the above considerations, we aim to utilize a flexible 3D graphics engine to
1) scale up the real-world training data without labeling and privacy concerns, and 2) build synthetic data with \emph{less content domain gap} to real-world data. To this end, we make contributions from two aspects.
First, we introduce a large-scale synthetic dataset named VehicleX, which lays the foundation of our work. It contains 272 backbone models which, combined with differently colored textures, yield 1,362 different vehicles. Similar to many existing 3D synthetic datasets such as PersonX~\cite{sun2019dissecting} and ShapeNet~\cite{chang2015shapenet}, VehicleX has editable attributes and is able to generate a large training set by varying object and environment attributes.
Second, based on the VehicleX, we propose an attribute descent method which automatically configures the platform attributes, such that the synthetic data shares similar content distributions with the real data of interest. As shown in Figure~\ref{fig:system_flow}, specifically, we manipulate the range of five key attributes closely related to the real dataset content. To measure the distribution discrepancy between the synthetic and real data, we use the FID score and aim to minimize it. In each epoch, we optimize the values of attributes in a specific sequence.
We show the effectiveness of attribute descent by training with VehicleX only and joint training with real-world datasets. The synthetic training data with optimized attributes can improve re-ID accuracy under both settings.
Furthermore, under our joint training scheme, with VehicleX data, we achieve competitive re-ID accuracy with the state-of-the-art approaches, validating the effectiveness of learning from synthetic data. A subset of VehicleX has been used in the 4th AICITY challenge~\cite{naphade20204th}.\footnote{\url{https://www.aicitychallenge.org/}}
\section{Related Work}
\textbf{Vehicle re-identification} has received increasing attention in the past few years, and many effective systems have been proposed~\cite{khorramshahi2019dual,wang2017orientation,tang2019pamtri,zhou2018aware}, generally with specially designed or fine-tuned architectures.
In this paper, our baseline system is built with commonly used loss functions~\cite{zheng2016mars,hermans2017defense,szegedy2016rethinking} with no bells and whistles. Depending on the camera conditions, location and environment, existing vehicle re-ID datasets usually have their own distinct characteristics. For example, images in the VehicleID~\cite{liu2016deep} are either captured from the car front or the back. In comparison, the VeRi-776~\cite{liu2016large} includes a wider range of viewpoints.
The recently introduced CityFlow~\cite{tang2019cityflow} has distinct camera heights and backgrounds. Apart from dataset differences, there also exist huge differences between cameras within a single dataset~\cite{zhong2018camstyle}. For example, a camera filming a crossroad naturally captures more vehicle orientations than a camera on a straight road.
Because of these characteristics, within a specific dataset we learn attributes for each camera and simulate that filming environment in a 3D engine. As a result, our proposed data simulation approach makes synthetic data more similar to the real-world data in key attributes, and thus can effectively augment re-ID datasets through its strong ability in content adaptation.
\textbf{Appearance(style)-level domain adaptation.} Domain adaptation is often used to reduce the gap between the distributions of two datasets. To date, the majority of work in this field focuses on discrepancies in image style, such as real vs. synthetic~\cite{bak2018domain} and real vs. sketch~\cite{peng2019moment}. For example, some use the cycle generative adversarial network (CycleGAN) to reduce the style gap between two domains~\cite{hoffman2018cycada,shrivastava2017learning,deng2018image}, with various constraints exerted on the generative model so that useful properties are preserved.
While these works have been shown to be effective in reducing the style domain gap, a fundamental problem remains to be solved, \emph{i.e.}, the content difference.
\begin{figure}[t]
\centering
\begin{center}
\includegraphics[width=1\linewidth]{samples.pdf}
\caption{The VehicleX engine. (A) An illustration of the rendering platform. We adjust the vehicle orientation, light direction and intensity, camera height, and the distance between the camera and the vehicle. (B) 16 different vehicle identities are shown.
}
\label{fig: Platform}
\end{center}
\end{figure}
\textbf{Content-level domain adaptation}, to our knowledge, has been discussed by only a few existing works~\cite{kar2019meta,ruiz2019learning}. The main contribution of~\cite{ruiz2019learning} is a clever usage of the task loss to guide the domain adaptation procedure. For the re-ID task, however, we search attributes for each camera while the task loss is defined across the whole camera system; that is, the task loss can only be obtained once the attributes of all cameras are set. As a result, it is hard to optimize the attributes of a single camera using the loss from a cross-camera system. In~\cite{kar2019meta}, a Graph Convolutional Network (GCN) is used to optimize the probability grammar for scene generation (\emph{e.g.,} for the detection task). Their target is to model the relationships between multiple objects, but in the re-ID setting we only have one object (a car) to optimize. As these methods cannot be directly used for the re-ID task, we adopt their advantages and make new contributions. On the one hand, we adopt the idea of Ruiz \emph{et al.}~\cite{ruiz2019learning} of representing attributes with predefined distributions. We are also motivated by Kar \emph{et al.}~\cite{kar2019meta}, who suggest that some GAN evaluation metrics (\emph{e.g.,} KID~\cite{binkowski2018demystifying}) are potentially useful to measure content differences. In practice, we propose attribute descent, which does not involve random variables and has easy-to-configure step sizes.
\textbf{Learning from 3D simulation.} Due to low data acquisition costs, learning from the 3D world is an attractive way to increase training set scale. Unlike other synthetic data (\emph{e.g.,} images generated by GANs~\cite{zheng2017unlabeled}), 3D simulation provides more accurate data labeling, flexibility in content generation, and scalability in resolution, whereas GAN-generated images may suffer in these respects. In the 3D simulation area, many applications exist in areas such as semantic segmentation~\cite{hoffman2018cycada,gaidon2016virtual,xue2020learning}, navigation~\cite{kolve2017ai2}, detection~\cite{kar2019meta,hou2020multiview}, object re-identification~\cite{sun2019dissecting,tang2019pamtri}, \emph{etc}. Usually, prior knowledge is utilized during data synthesis, since we inevitably need to determine the distribution of attributes in the defined environment. Tremblay \emph{et al.} suggest that attribute randomness within a reasonable range is beneficial~\cite{tremblay2018training}; yet even with randomization, the ranges of the random variables must be specified in advance. Our work investigates and learns these attribute distributions for vehicle re-ID.
\section{VehicleX Engine}
We introduce a large-scale synthetic dataset generator named VehicleX that includes three components: (1) vehicles rendered using the graphics engine Unity, (2) a Python API that interacts with the Unity 3D engine, and (3) detailed labels including car type and color.
VehicleX has \textbf{a diverse range of realistic backbone models and textures}, allowing it to adapt to the variety of real-world datasets. It has
272 backbones that are hand-crafted by professional 3D modelers. The backbones cover mainstream vehicle types, including sedan, SUV, van, hatchback, MPV, pickup, bus, truck, estate, sportscar and RV. Each backbone represents a real-world model. From these backbones, we obtain 1,362 variants (\emph{i.e.,} identities) by adding various colored textures or accessories. A comparison of VehicleX with some existing vehicle re-ID datasets is presented in Table~\ref{table:Datasets}. VehicleX is three times larger than the synthetic PAMTRI dataset~\cite{tang2019pamtri} in identities, and can potentially render an unlimited number of images from various attributes.
In experiments, we will show that our VehicleX benefits real-world testing either when used alone or in conjunction with a real-world training set.
In this work, VehicleX can be set to a training mode and a testing mode. In training mode, VehicleX renders images with a black background, and these images are used for attribute descent (see Section~\ref{sec:attribute_descent}); in comparison, the testing mode uses random images (\emph{e.g.,} from CityFlow~\cite{tang2019cityflow}) as backgrounds and generates attribute-adjusted images. In addition, to increase randomness and diversity, the testing mode contains random street objects such as lamp posts, billboards and trash cans. Figure~\ref{fig: Platform} shows the simulation platform and some sample vehicle identities.
\begin{table}[t]\footnotesize
\caption{Comparison of some real-world and synthetic vehicle re-ID datasets. ``Attr'' denotes whether the dataset has attribute labels (\emph{e.g.,} orientation). Our identities are different 3D models and thus can potentially render an unlimited number of images under different environment and camera settings. VehicleX is released open source and can generate an unlimited number of images from an unlimited number of camera configurations.
\label{table:Datasets}}
\begin{center}
\setlength{\tabcolsep}{3.1mm}{
\begin{tabular}{c|l|c|c|c|c}
\Xhline{1.2pt}
\multicolumn{2}{c|}{Datasets} & \#IDs & \multicolumn{1}{c|}{\#Images} & \#Cameras & \# Attr \footnotesize \\
\hline
\multirow{4}{*}{real} & VehicleID~\cite{liu2016deep} &26,328 &222,629 &2 & \xmark \\
&CompCar~\cite{yang2015large} &4,701 &136,726 & - & \xmark \\
&VeRi-776~\cite{liu2016large} & 776 & 49,357 & 20 &\cmark \\
&CityFlow~\cite{tang2019cityflow} & 666 & 56,277 & 40 & \xmark \\
\hline
\multirow{2}{*}{{synthetic}} &PAMTRI~\cite{tang2019pamtri} & 402 & 41,000 & - & \cmark \\
& VehicleX & 1,362 & $\infty$ & $\infty$ & \cmark \\
\Xhline{1.2pt}
\end{tabular}}
\end{center}
\end{table}
We build the \textbf{Unity-Python interface} using the Unity ML-Agents toolkit~\cite{juliani2018unity}. It allows Python to modify the attributes of the environment and vehicles, and obtain the rendered images.
With this API, given the attributes needed, users can easily obtain rendered images without expert knowledge about Unity. The code of this API is released together with VehicleX.
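For illustration, a minimal usage sketch is shown below; the wrapper name \texttt{VehicleXEnv} and its method names are hypothetical placeholders for the released API, not its actual signatures.
\begin{verbatim}
# Hypothetical usage sketch of the Unity-Python interface.
# VehicleXEnv, set_attributes and render are illustrative
# placeholder names, not the actual API of the released code.
from vehiclex import VehicleXEnv   # hypothetical wrapper module

env = VehicleXEnv(mode="training")        # black-background mode
env.set_attributes(orientation=45.0,      # degrees
                   light_direction=120.0,
                   light_intensity=0.7,
                   camera_height=5.0,
                   camera_distance=9.0)
image = env.render(vehicle_id=17)         # RGB array from Unity
\end{verbatim}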
VehicleX is a large-scale public 3D vehicle dataset with real-world vehicle types. We focus on the vehicle re-ID task in this paper, but our proposed 3D vehicle models also have potential benefits for many other tasks, such as semantic segmentation, object detection, fine-grained classification, and 3D generation or reconstruction. It gives computer vision systems the flexibility to freely edit the content of the object, thus enabling new research in content-level image analysis.
\begin{figure}[t]
\centering
\begin{center}
\includegraphics[width=1\linewidth]{attribute_edit.pdf}
\caption{(\textbf{Left:}) Attribute editing. We rotate the vehicle, edit light direction and intensity, or change the camera height and distance.
Numbers in the bracket correspond to the attribute values in Unity. (\textbf{Right:}) We further add random backgrounds and distractors to the attribute-adjusted vehicles when they are used in the re-ID model.
}
\label{fig: editing}
\end{center}
\end{figure}
\section{Proposed Method}\label{sec:attribute_descent}
\subsection{Attribute Distribution Modeling}\label{sec:att_model}
\textbf{Important attributes.}\label{sec:attributes}
For vehicle re-ID, we consider the following attributes to be potentially influential on the training set simulation and testing accuracy. Figure~\ref{fig: editing} shows examples of the attribute editing process.
\begin{itemize}[noitemsep,topsep=0pt]
\item \textbf{Vehicle orientation} is the horizontal viewpoint of a vehicle and takes a value between 0\degree and 359\degree. In the real world, this attribute is important because the camera position is usually fixed and vehicles usually move along predefined trajectories. Therefore, the distribution of vehicle orientations in a real-world dataset is usually multimodal and tends to exhibit certain patterns under a given camera view.
\item \textbf{Light direction} simulates daylight as cars are generally presented in outdoor scenes.
Here, we assume directional parallel light, and the light direction is modeled from east to west, which is the movement trajectory of the sun.
\item \textbf{Light intensity} is usually considered a critical factor for re-ID tasks. Factors such as glass reflection and shadows seriously influence the results. We manually define a reasonable intensity range from dark to bright.
\item \textbf{Camera height} describes the vertical distance from the ground, and significantly influences viewpoints.
\item \textbf{Camera distance} determines the horizontal distance from vehicles. This factor has a strong effect on the vehicle resolution since the resolution of the entire image is predefined as 1920$\times$1080. Additionally, the distance has a slight impact on viewpoints.
\end{itemize}
\begin{figure}[t]
\centering
\begin{center}
\includegraphics[width=1\linewidth]{fid-map.pdf}
\caption{Attribute descent visualization on VehicleID~\cite{liu2016deep}. (A) The FID-mAP curve through the training iterations. The FID successively drops (lower is better) and the domain adaptation mAP successively increases (higher is better) during attribute descent. For illustration simplicity, we use ``light'' to denote light direction and intensity, and ``cam.'' to denote camera height and distance. (B) The synthetic vehicles in each iteration. We initialize the attributes by setting the orientation to right, the light intensity to dark, the light direction to west, the camera height equal to the vehicle height, and the camera distance to medium. The content of these images becomes more and more similar to (C) the target real images through the optimization procedure.
}
\label{fig:training}
\end{center}
\end{figure}
\textbf{Distribution modeling.}
We model the aforementioned attributes with single Gaussian distributions or a Gaussian Mixture Model (GMM). This modeling strategy is also used in Ruiz \emph{et al.}'s work~\cite{ruiz2019learning}.
We denote the attribute list as: $ \mathcal{A} = (a_1, a_2,...,a_N)$, where $N$ is the number of attributes considered in the system, and $a_i, i = 1,...,N$ is the random variable representing the $i$th attribute.
For the vehicle orientation, we use a GMM to capture its distribution. This is based on our prior knowledge that the field-of-view of a camera covers either a road or an intersection: if we do not consider vehicle turning, there are rarely more than four major directions at a crossroad. In this work, we use a GMM with 6 components.
For lighting conditions and camera attributes, we use four independent Gaussian distributions.
Therefore, given $N$ attributes, we optimize $M$ mean values of the Gaussians, where $M\geq N$.
We speculate that the means of the Gaussian distributions or components are more important than the standard deviations, because the means reflect how the majority of the vehicles look. Although our method could also optimize the variances, doing so would significantly increase the search space. As such, we predefine the values of the standard deviations
and only optimize the means of all the Gaussians $\bm{\mu}= (\mu_1, \mu_2, ..., \mu_M)$, where $\mu_i\in \mathbb{R}, i=1,...,M$ is the mean of the $i$th Gaussian. As a result, given the means $\bm{\mu}$ of the Gaussians, we can sample an attribute list as $\mathcal{A} \sim G(\bm{\mu})$, where $G$ is a function that generates a set of attributes given the means of the Gaussians.
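A minimal sketch of this sampling step $G$ is given below; equal weights for the GMM components and the orientation standard deviation are illustrative assumptions, since the predefined standard deviations are not listed here.
\begin{verbatim}
import numpy as np

def sample_attributes(gmm_means, mu, sigma, rng):
    """Draw one attribute list A ~ G(mu).
    gmm_means: the 6 component means for vehicle orientation;
    mu, sigma: means and fixed std-devs of the four Gaussian
    attributes (light direction/intensity, camera height/distance).
    Equal GMM component weights and a 10-degree orientation
    std-dev are assumed for illustration."""
    k = rng.integers(len(gmm_means))              # pick a component
    orientation = rng.normal(gmm_means[k], 10.0) % 360.0
    others = rng.normal(mu, sigma)                # element-wise draws
    return np.concatenate([[orientation], others])

rng = np.random.default_rng(0)
A = sample_attributes([0, 60, 120, 180, 240, 300],
                      mu=np.zeros(4), sigma=np.ones(4), rng=rng)
\end{verbatim}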
\subsection{Optimization}
The objective of our optimization is to generate a dataset whose content distribution is similar to that of a target real dataset.
\textbf{Measuring distribution difference.}
We need to precisely define the distribution difference before we apply any optimization algorithm. There potentially exist two directions: using the appearance difference, or using the task loss on a validation set. But as re-ID is a cross-camera task, it is indirect and difficult to optimize the attributes of a single camera using the loss from a cross-camera system. We therefore focus on the appearance difference. To quantify it,
we use the Fr\'{e}chet Inception Distance (FID)~\cite{heusel2017gans} to quantitatively measure the distribution difference between two datasets.
An adversarial loss is not used as the measurement directly: since there exists a huge appearance difference between synthetic and real data, a discriminator would easily latch onto specific low-level differences between real and generated images, rendering the loss uninformative.
Formally, we denote the sets of synthetic data and real data as $X_s$ and $X_r$ respectively,
where $X_s = \{ \mathcal{R}(\mathcal{A}_1), \cdots, \mathcal{R}(\mathcal{A}_K) | \mathcal{A}_k \sim G(\bm{\mu}) \}$, and $\mathcal{R}$
is our rendering function through the 3D graphics engine working on a given attribute list $\mathcal{A}$ that controls the environment. $K$ is the number of images in the synthetic dataset. For the FID calculation, we employ the Inception-V3 network~\cite{szegedy2016rethinking} to map an image into its feature space. We view the feature as a multivariate real-valued random variable and assume that it follows a Gaussian distribution.
To measure the distribution difference between two Gaussians, we resort to their means and covariance matrices. Under FID, the distribution difference between synthetic data and real data is written as,
\begin{equation}
\begin{split}
\mbox{FID}(X_s, X_r) = \left \| \bm{\mu}_s - \bm{\mu}_r \right \|^{2}_{2} +
Tr(\bm{\Sigma}_s + \bm{\Sigma}_r -2 (\bm{\Sigma}_s \bm{\Sigma}_r)^{\frac{1}{2}}),
\end{split}
\label{eq:fid}
\end{equation}
where $\bm{\mu}_s$ and $\bm{\Sigma}_s$ denote the mean and covariance matrix of the feature distribution of the synthetic data, and $\bm{\mu}_r$ and $\bm{\Sigma}_r$ are from the real data.
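For concreteness, Eq.~\ref{eq:fid} can be computed as in the following sketch, assuming the Inception-V3 feature means and covariance matrices of the two image sets have already been estimated:
\begin{verbatim}
import numpy as np
from scipy import linalg

def fid(mu_s, sigma_s, mu_r, sigma_r):
    """Frechet Inception Distance between two Gaussians fitted
    to Inception-V3 features of synthetic and real images."""
    diff = mu_s - mu_r
    covmean = linalg.sqrtm(sigma_s @ sigma_r)  # matrix square root
    if np.iscomplexobj(covmean):               # drop tiny imaginary
        covmean = covmean.real                 # parts from roundoff
    return diff @ diff + np.trace(sigma_s + sigma_r - 2.0 * covmean)
\end{verbatim}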
\textbf{Attribute descent.}
An important difficulty for attribute optimization is that the rendering function (through the 3D engine Unity) is not differentiable, so the widely used gradient-descent based methods cannot be readily used.
Under this situation, there exist several methods for gradient estimation, such as finite-difference~\cite{kar2019meta} and reinforcement learning~\cite{ruiz2019learning}.
However, these methods are developed in scenarios where there are many parameters to optimize. In comparison, our system only contains a few parameters, allowing us to design a more stable and efficient approach that is sufficiently effective in finding a close-to-global minimum.
We are motivated by coordinate descent, an optimization algorithm that can work in derivative-free contexts~\cite{wright2015coordinate}. The most commonly known algorithm that uses coordinate descent is $k$-means~\cite{lloyd1982least}.
Coordinate descent successively minimizes along coordinate directions to find a minimum of a function. The algorithm selects a coordinate to perform the search at each iteration. Compared with grid search, coordinate descent significantly reduces the search time, based on the hypothesis that each parameter is relatively independent. For our designed attributes, we study their independence in subsection~\ref{sec:single_cam}.
Using Eq.~\ref{eq:fid} as the objective function, we propose attribute descent to optimize each attribute in the attribute list. Specifically, we view each attribute as a coordinate in the coordinate descent algorithm. In each iteration, we successively change the value of an attribute to search for the minimum value of the objective function. Formally, for the parameters $\bm{\mu}$ of the attribute list $\mathcal{A}$, the objective is to find
\begin{equation}
\begin{split}
\bm{\mu} = \mathop{\arg\min}_{\bm{\mu}} \mbox{FID}(X_s, X_r),\qquad\\
X_s = \{ \mathcal{R}(\mathcal{A}_1), \cdots, \mathcal{R}(\mathcal{A}_K) | \mathcal{A}_k \sim G(\bm{\mu}) \}.
\end{split}
\end{equation}
We achieve this objective iteratively. Initially, we have
\begin{equation}
\bm{\mu}^{0} = (\mu_{1}^{0}, \cdots, \mu_{M}^{0}).
\end{equation}
At epoch $j$, we optimize a single variable $\mu_{i}^{j}$ in $\bm{\mu}$,
\begin{equation}
\begin{split}
\mu_{i}^{j} = \mathop{\arg\min}_{z \in S_{i}} \mbox{FID}(X_s, X_r),\qquad\\
X_s = \{ \mathcal{R}(\mathcal{A}_1), \cdots, \mathcal{R}(\mathcal{A}_K) | \mathcal{A}_k \sim G(\mu_1^{j}, \\
\cdots, \mu_{i-1}^{j}, z, \mu_{i+1}^{j-1}, \cdots, \mu_M^{j-1}) \},
\end{split}
\end{equation}
where
the $S_i, i=1,...,M$ define a specific search space for the mean variable $\mu_i$. For example, the search space for vehicle orientation is from $0^{\circ}$ to $330^{\circ}$ in $30^{\circ}$ increments; the search space for camera height is the editable range equally divided into 9 segments. $j = 1,\cdots, J$ are the training epochs. One epoch is defined as all attributes being updated once.
In this algorithm, we perform greedy search for the optimized value of an attribute in each iteration, and achieve a local minimum for each attribute when fixing the rest.
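The procedure reduces to the sketch below, in which \texttt{render\_fid} is an assumed helper that renders $K$ images from $G(\bm{\mu})$ and returns their FID against the real data; this illustrates the algorithm rather than reproducing the released implementation.
\begin{verbatim}
def attribute_descent(mu, search_spaces, render_fid, epochs=2):
    """Coordinate-wise greedy FID minimization.
    mu: list of M Gaussian means; search_spaces[i]: candidate
    set S_i for mu[i]; render_fid(mu): assumed helper rendering
    K images from G(mu) and returning their FID vs. real data."""
    for _ in range(epochs):           # one epoch = every attribute
        for i, candidates in enumerate(search_spaces):  # updated once
            def score(z, i=i):
                trial = list(mu)      # vary one coordinate only
                trial[i] = z
                return render_fid(trial)
            mu[i] = min(candidates, key=score)  # greedy line search
    return mu

# Example search space: orientation means in 30-degree steps.
orientation_space = range(0, 360, 30)
\end{verbatim}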
\textbf{Discussion.}
In Section~\ref{sec:single_cam} we show that attribute descent (non-gradient solution) is superior to our implementation of reinforcement learning (gradient-based solution). Attribute descent, inherited from the coordinate descent algorithm, is simple to implement and steadily leads to convergence. It is a new optimization tool in the learning to synthesize literature and avoids drawbacks such as difficulty in optimization and sensitivity to hyper-parameters.
That being said, we note that our method is effective in small-scale environments like vehicle bounding boxes where only a small number of attributes need to be optimized. In more complex environments, we suspect that reinforcement learning algorithms should also be effective.
\begin{figure}[t]
\begin{center}
\includegraphics[width=4.0in]{spgan.pdf}
\caption{Images w/ and w/o style domain adaptation. (A) Synthetic images without style domain adaptation. (B)(C)(D) We translate images in (A) to the style of VeRi, VehicleID and CityFlow, respectively, using SPGAN~\cite{deng2018image}.}
\label{figure:SPGAN}
\end{center}
\end{figure}
\section{Experiment}
\subsection{Datasets and Evaluation Protocol}
We use three real-world datasets for evaluation. \textbf{VehicleID}~\cite{liu2016deep} is a large-scale dataset containing 222,629 images of 26,328 identities. Half of the identities are used for training, and the other half for testing. Officially, there are 3 test splits.
The \textbf{VeRi-776} dataset~\cite{liu2016large} contains 49,357 images of 776 vehicles captured by 20 cameras. The vehicle viewpoints and illumination cover a diverse range. The training set has 37,778 images, corresponding to 576 identities; the test set has 11,579 images of 200 identities. There are 1,678 query images. The train / test sets share the same 20 cameras. \textbf{CityFlow}~\cite{tang2019cityflow} has more complex environments, and it has 40 cameras in a diverse environment where 34 are used in the training set. The dataset has in total 666 IDs where half are used for training and the rest for testing.
We use mean average precision (mAP) and Rank-1 accuracy to measure the re-ID performance.
\subsection{Implementation Details}
\textbf{Data generation.} For the VehicleID dataset,
we only optimize a single attribute list targeting the VehicleID training set.
But most re-ID datasets like VeRi-776 and CityFlow are naturally divided according to multiple camera views. Since a specific camera view usually has stable attribute features (\emph{e.g.,} viewpoint), we perform the proposed attribute descent algorithm on each individual camera, so as to simulate images with similar content to images from each camera. For example, we optimize 20 attribute lists using the VeRi-776 training set, which has 20 cameras. Attribute descent is performed for two epochs.
\begin{wraptable}{r}{4cm}
\caption{Re-ID accuracy (mAP) w/ and w/o style DA when training with synthetic data only. We clearly observe that style DA brings significant improvement and is thus necessary.}
\centering
\resizebox{3.4cm}{0.6cm}{
\begin{tabular}{c|cc}
\Xhline{1.2pt} StyleDA & VehicleID & VeRi \\
\hline
\xmark & 24.36 & 12.35 \\
\cmark & \textbf{35.33} & \textbf{21.29} \\
\Xhline{1.2pt}
\end{tabular}}
\label{table:styleDA}
\end{wraptable}
One epoch is defined as all attributes in the list being updated once.
\textbf{Image style transformation.} We apply SPGAN~\cite{deng2018image} for image style transformation, a state-of-the-art algorithm in style-level domain adaptive re-ID. Sample results are shown in Figure~\ref{figure:SPGAN}, and the influence on accuracy is shown in Table~\ref{table:styleDA}.
The image translation models are trained using 112,042 images with random attributes as the source domain and the training sets of the three vehicle datasets as target domains, separately. When applying SPGAN to data with learned attributes, we directly run inference on the learned-attribute images, based on the fact that the learned attributes are a subset of the random range.
\textbf{Baseline configuration.}
For VeRi and VehicleID, we use the ID-discriminative embedding (IDE)~\cite{zheng2016mars}. We adopt the strategy from~\cite{luo2019bag} which adds batch normalization and removes ReLU after the final feature layer.
We also use the part-based convolution baseline (PCB)~\cite{sun2018beyond} on VeRi for improved accuracy. In PCB, we horizontally divide the picture into six equal parts and perform classification on each part. For CityFlow training, we use the setting from~\cite{luo2019bag} using a combination of the cross-entropy loss and the triplet loss.
\setlength{\tabcolsep}{3.3mm}
\begin{table*}[t]
\small
\caption{Method comparison on VehicleID in data augmentation. Our method is built on IDE~\cite{zheng2016mars} with the cross-entropy (CE) loss. Attribute descent consistently improves over both the baseline and random attributes, and is competitive compared with the state-of-the-art. ``R'' means training with real data only. ``R+S'' denotes that both synthetic data and real data are used in training. ``Small'', ``Medium'' and ``Large'' refer to the number of vehicles in the VehicleID test set~\cite{liu2016deep}.}
\resizebox{1\textwidth}{!}
\begin{tabular}{p{2cm}|p{0.6cm}<{\centering}|ccc|ccc|ccc}
\Xhline{1.2pt}
\multirow{2}{*}{Method} & \multirow{2}{*}{Data} & & Small & & & Medium & & & Large & \\ \cline{3-11}
& & Rank-1 & Rank-5 & mAP & Rank-1 & Rank-5 & mAP & Rank-1 & Rank-5 & mAP \\ \hline
RAM~\cite{liu2018ram} & R & 75.2 & 91.5 & - & 72.3 & 87.0 & - & 67.7 & 84.5 & - \\
AAVER~\cite{khorramshahi2019dual} & R & 74.69 & 93.82 & - & 68.62 & 89.95 & - & 63.54 & 85.64 & - \\
GSTE~\cite{bai2018group} & R & 75.9 & 84.2 & 75.4 & 74.8 & 83.6 & 74.3 & 74.0 & 82.7 & 72.4 \\
\hline
IDE (CE loss) & R & 77.35 & 90.28 & 83.10 & 75.24 & 87.45 & 80.73 & 72.78 & 85.56 & 78.51 \\
Ran. Attr. & R+S & 80.2 & 93.98 & 85.95 & 76.94 & 90.84 & 82.67 & 73.45 & 88.66 & 80.55 \\
Attr. Desc. & R+S & \textbf{81.50} & \textbf{94.85} & \textbf{87.33} & \textbf{77.62} & \textbf{92.20} & \textbf{83.88} & \textbf{74.87} & \textbf{89.90} & \textbf{81.35}
\\\Xhline{1.2pt}
\end{tabular}}
\label{table:comparison-VID}
\end{table*}
\begin{table}[t]
\centering
\caption{FID values between the generated data and VehicleID after Epoch I and II (attribute descent is performed for two epochs). Different orders of attributes are tested.
`C', `O' and `L' refer to camera, orientation and lighting, respectively. After Epoch II, the FID values are generally similar, suggesting that the correlation among attributes is weak, and so they are mostly independent.}
\resizebox{1\textwidth}{!}{
\begin{tabular}{l|cccccc}
\Xhline{1.2pt}
& C $\rightarrow$ O $\rightarrow$ L & C $\rightarrow$ L $\rightarrow$ O & O $\rightarrow$ L $\rightarrow$ C & O $\rightarrow$ C $\rightarrow$ L & L $\rightarrow$ C $\rightarrow$ O & L $\rightarrow$ O $\rightarrow$ C \\
\hline
FID (Epoch I) & 98.38 & 99.57 & 78.67 & 80.94 & 104.84 & 81.20 \\
FID (Epoch II) & 78.42 & 77.18 & 77.96 & 79.54 & 78.48 & 77.06 \\ \hline
\Xhline{1.2pt}
\end{tabular}}
\label{table:attr_dep}
\end{table}
\textbf{Experiment protocol.} We evaluate our method under both VehicleX-only training and joint training settings. Under VehicleX-only training, we train our model on VehicleX and test on real-world data. Under joint training, we combine the VehicleX data and real-world data and perform two-stage training; testing is
\begin{wraptable}{r}{5.2cm}
\caption{Comparison of the re-ID accuracy (mAP) of the two training stages when performing joint training. We see a significant performance boost from Stage \uppercase\expandafter{\romannumeral1} to Stage \uppercase\expandafter{\romannumeral2}. }
\centering
\resizebox{5cm}{0.6cm}{
\begin{tabular}{p{1.2cm}|p{1cm}<{\centering}p{0.7cm}<{\centering}p{1.2cm}<{\centering}}
\Xhline{1.2pt} & VehicleID & VeRi & CityFlow \\
\hline
Stage \uppercase\expandafter{\romannumeral1} & 77.54 & 69.39 & 33.54 \\
Stage \uppercase\expandafter{\romannumeral2} & \textbf{81.35} & \textbf{70.62} & \textbf{37.16} \\
\Xhline{1.2pt}
\end{tabular}}
\label{table:two-stage}
\end{wraptable}
on the same real-world data.
\textbf{Two-stage training} is conducted in joint training with the three real-world datasets~\cite{zheng2019vehiclenet}. We mix the synthetic dataset and a real-world dataset in the first stage and fine-tune on the real-world dataset only in the second stage. Taking CityFlow as an example, in the first stage we train on both real and synthetic data, classifying vehicle images into one of 1,695 (333 real + 1,362 synthetic) identities. In the second stage, we replace the classification layer with a new classifier that is trained on the real dataset only (recognizing 333 classes). Table~\ref{table:two-stage} shows significant improvements with this method.
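A minimal PyTorch-style sketch of the two-stage scheme is given below; the ResNet-50 backbone and the elided training loops are illustrative assumptions rather than the exact training code.
\begin{verbatim}
import torch.nn as nn
from torchvision.models import resnet50

# Stage I: one classifier over real + synthetic identities.
model = resnet50(num_classes=1695)  # 333 real + 1,362 synthetic IDs
# ... train on the mixed real+synthetic loader (CE loss) ...

# Stage II: fresh classifier head, fine-tuned on real data only.
model.fc = nn.Linear(model.fc.in_features, 333)  # 333 real IDs
# ... fine-tune on the real-data loader only ...
\end{verbatim}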
\begin{table}[t]
\centering
\caption{Re-ID test accuracy (mAP) on VehicleID test set (large) using various training datasets with~\cite{luo2019bag}. The first four training sets are generated by random attributes, random search, LTS and attribute descent, respectively. The last two training sets are real-world ones. FID measures domain gap between the training sets and VehicleID.}
\resizebox{1\textwidth}{!}{
\begin{tabular}{l|cccccc}
\Xhline{1.2pt}
& Ran. Attr. & Ran. Sear. & LTS & Attr. Desc. & VeRi & Cityflow \\
\hline
FID & 134.75 & 109.94 & 95.27 & 77.96 & - & - \\
\hline
mAP & 22.00 & 26.35 & 32.21 & 35.33 & 38.59 & 45.57 \\
\Xhline{1.2pt}
\end{tabular}}
\label{table:method_comp}
\end{table}
\begin{table}[t]
\small
\caption{Method comparison when testing on VeRi-776. Both VehicleX-only training and joint training results are included. ``R'' means training with real data only, ``S'' means training with synthetic data only, and ``R+S'' denotes joint training. VID$\rightarrow$VeRi denotes training on VehicleID and testing on VeRi; Cityflow$\rightarrow$VeRi denotes training on CityFlow and testing on VeRi. In addition to some state-of-the-art methods, we summarize the results on top of two baselines, \emph{i.e.,} IDE~\cite{zheng2016mars} and PCB~\cite{sun2018beyond}.}
\resizebox{1\textwidth}{!}{
\begin{tabular}{l|r|c|ccc}
\Xhline{1.2pt}
Experiment & Method & Data & Rank-1 & Rank-5 & mAP \\
\hline
\multirow{5}{*}{VehicleX Training} & ImageNet & R & 30.57 & 47.85 & 8.19 \\
& VID $\rightarrow$VeRi & R & 59.24 & 71.16 & 20.32 \\
& Cityflow $\rightarrow$ VeRi & R & 69.96 & 81.35 & 26.71 \\
\cline{2-6}
& Ran. Attr. & S & 43.56 & 61.98 & 18.36 \\
& Attr. Desc. & S & 51.25 & 67.70 & 21.29 \\
\Xhline{1.2pt}
\multirow{7}{*}{Joint Training} & VANet ~\cite{chu2019vehicle} & R & 89.78 & 95.99 & 66.34 \\
& AAVER ~\cite{khorramshahi2019dual} & R & 90.17 & 94.34 & 66.35 \\
& PAMTRI ~\cite{tang2019pamtri} & R+S & 92.86 & 96.97 & 71.88 \\
\cline{2-6}
& IDE & R & 92.73 & 96.78 & 66.54 \\
& Ran. Attr. & R+S & 93.21 & 96.20 & 69.28 \\
& Attr. Desc. & R+S & 93.44 & 97.26 & 70.62 \\
\cline{2-6}
& Attr. Desc. (PCB) & R+S & \textbf{94.34} & \textbf{97.91} & \textbf{74.51} \\
\Xhline{1.2pt}
\end{tabular}}
\label{table:comparison-VeRi}
\end{table}
\subsection{Evaluation}\label{sec:single_cam}
\textbf{Analysis of attribute descent process.} Figure~\ref{fig:training} shows how the re-ID accuracy and FID change along the training iterations. We observe that the attributes are successively optimized as FID decreases and mAP increases.
Furthermore, from the slope of the FID curve we can see that
orientation has the largest impact on the distribution difference and mAP, with a huge FID drop from 147.85 to 91.14 and large mAP increase from 12.1\% to 21.94\%. Lighting is the second most impactful (-7.2 FID, +10.7\% mAP), and camera attributes are the third (-4.11 FID, +2.4\% mAP).
\textbf{Effectiveness of learned synthetic data.} Learned synthetic data can be used as a training set alone, or in conjunction with real training data for data augmentation. We show the results of both cases on the three datasets in Table~\ref{table:comparison-VID} (VehicleID), Table~\ref{table:comparison-VeRi} (VeRi) and Table~\ref{table:comparison-AIC} (CityFlow).
From these results we observe that when used as training data alone, learned attributes achieve much higher re-ID accuracy than random attributes. For example, on the VeRi-776 dataset, attribute descent has a +7.69\% improvement in Rank-1 accuracy over random attributes.
Moreover, attribute learning also benefits the data augmentation setting. For example, on CityFlow and VeRi, the improvement of learned attributes over random attributes is +1.49\% and +3.87\% in Rank-1 accuracy, respectively. Although this improvement looks small numerically, we show that it is statistically significant (Figure~\ref{figure: stas_analy}).
We note that the improvement of using synthetic data as a training set is more significant than for data augmentation. When the training set consists of only the synthetic data, a higher quality of attributes will have a more direct impact on the re-ID results.
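The significance analysis can be reproduced along the following lines; we sketch an independent two-sample $t$-test over the mAP of repeated training runs, which is one common choice (the numbers below are placeholders, not our measured values).
\begin{verbatim}
from scipy import stats

# mAP over repeated runs (placeholder numbers for illustration).
map_learned = [70.8, 70.5, 70.9, 70.4, 70.6]
map_random  = [69.4, 69.1, 69.5, 69.0, 69.3]

t, p = stats.ttest_ind(map_learned, map_random)
print(f"p = {p:.4f}")   # 0.01 < p < 0.05: significant (*)
                        # 0.001 < p < 0.01: very significant (**)
\end{verbatim}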
\textbf{Few dependencies between attributes.} We study attribute dependency by testing whether the order of attributes matters.
From Table~\ref{table:attr_dep} it is clear that the attribute order does not affect the downward trend; the only clear dependency is between orientation and camera. If we learn the camera attributes before orientation, the accuracy is affected, but this influence is eliminated by performing attribute descent for two epochs. These weak dependencies between attributes make it possible to use attribute descent rather than grid search, saving computation time.
\begin{figure}[t]
\centering
\includegraphics[width=0.72\textwidth]{stas_analy.pdf}
\caption{Performance comparison between learned attributes and random attributes in joint training. We present mAP on three datasets and use statistical significance analysis to show the training stability.
$*$ means {statistically significant} ($i.e., 0.01 < p$-value $< 0.05$) and $**$ denotes {statistically very significant} ($i.e., 0.001 < p$-value $< 0.01$).}
\label{figure: stas_analy}
\end{figure}
\textbf{Attribute descent performs better than multiple methods: 1) random attribute 2) random search 3) LTS~\cite{ruiz2019learning}.} For LTS, we follow their ideas but we replace the task loss with FID score,
\begin{SCtable}
\resizebox{7cm}{1.4cm}{
\begin{tabular}{p{2.2cm}<{\centering}|p{0.6cm}<{\centering}|p{0.6cm}<{\centering}p{0.8cm}<{\centering}p{0.6cm}<{\centering}}
\Xhline{1.2pt}
Method & Data & R-1 & R-20 & mAP \\
\hline
BA~\cite{kumar2019vehicle} & R & 49.62 & 80.04 & 25.61 \\
BS~\cite{kumar2019vehicle} & R & 49.05 & 78.80 & 25.57 \\
PAMTRI~\cite{tang2019pamtri} & R+S & 59.7& 80.13 & 33.81 \\\hline
IDE(CE+Tri.)& R & 56.75& 72.24 & 30.21 \\
Ran. Attr. & R+S & 63.59 & 82.60 & 35.96 \\
Attr. Desc. & R+S & \textbf{64.07} & \textbf{83.27} & \textbf{37.16} \\
\Xhline{1.2pt}
\end{tabular}}
\caption{Method comparison on CityFlow with joint training. Our baseline is built with a combination of the CE loss and the triplet loss~\cite{luo2019bag}. Rank-1, Rank-20 and the mAP are calculated by the online server.}\label{table:comparison-AIC}
\end{SCtable}
since the task loss does not generalize to the re-ID task. Our reproduced LTS uses the same distribution definition and initialization as attribute descent. In order to make a fair comparison, we report values from 200 iterations of training (\emph{i.e.,} computing the FID score 200 times). Random search is a strong baseline in hyper-parameter optimization~\cite{bergstra2012random}.
In practice, we randomly sample attribute values 200 times and choose the attribute list with the best FID score. The comparison is shown in Table~\ref{table:method_comp}. First, under the same task network, all learned attributes (\emph{i.e.,} random search, LTS and attribute descent) perform better than random attributes, in both FID and mAP, showing that learned attributes significantly improve the results and that content differences matter. Second, random search does not perform well within a limited search time. It has been shown that random search performs well when there exist many less important parameters~\cite{bergstra2012random}; but in our search space, all attributes contribute to the distribution differences, as shown in Figure~\ref{fig:training}, so random search has no advantage in finding important attributes. Third, LTS works, but it does not find a better FID score than attribute descent; LTS seems to fall into a local optimum and does not reach a global one.
An example of such a local optimum is that the LTS outputs contain either car fronts or car rears, whereas VehicleID contains both. With more hand-crafted design, a better-performing LTS framework could certainly be reached; but at this stage, attribute descent is a simply realized method that finds a better solution within few iterations, and it deserves to be a strong baseline in this field.
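For reference, the random-search baseline amounts to the following sketch, where \texttt{sample\_mu} (a draw from the predefined attribute ranges) and \texttt{render\_fid} are the same assumed helpers as above:
\begin{verbatim}
import numpy as np

def random_search(sample_mu, render_fid, budget=200, seed=0):
    """Draw `budget` random attribute lists and keep the one
    with the lowest FID (same budget as attribute descent)."""
    rng = np.random.default_rng(seed)
    best_mu, best_fid = None, np.inf
    for _ in range(budget):
        mu = sample_mu(rng)      # uniform draw over the ranges
        f = render_fid(mu)
        if f < best_fid:
            best_mu, best_fid = mu, f
    return best_mu, best_fid
\end{verbatim}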
\textbf{Comparison with the state-of-the-art.} When the synthetic data is used in conjunction with real-world training set, we achieve very competitive accuracy compared with the state-of-the-art (Table~\ref{table:comparison-VID}, Table~\ref{table:comparison-VeRi} and Table~\ref{table:comparison-AIC}). For example on VehicleID (Small), our method is +5.6\% higher than~\cite{bai2018group} in
Rank-1 accuracy. On CityFlow, our method is higher than~\cite{tang2019pamtri} by +7.32\% in Rank-1 accuracy.
\section{Conclusion}
This paper studies the domain gap problem
between synthetic data and real data
at the content level. That is, we automatically edit the source-domain image content in a graphics engine so as to reduce the content gap between the synthetic images and the real images. We use this idea to study the vehicle re-ID task, where the use of vehicle bounding boxes reduces the set of attributes to be optimized. The few attributes of interest and the low dependencies between them allow us to optimize them one by one using our proposed attribute descent approach. We show that the learned attributes bring improvements in re-ID accuracy with statistical significance. Moreover, our experiments reveal some important insights regarding the usage of synthetic data, \emph{e.g.,} that style DA brings significant improvement and that two-stage training is beneficial for joint training.
\section*{Acknowledgement}
Dr. Liang Zheng is the recipient of Australian Research Council Discovery Early Career Award (DE200101283) funded by the Australian Government.
\clearpage
\bibliographystyle{splncs04}
|
2,869,038,155,440 | arxiv | \section{Comparison of different approaches to emulate ATLAS \ensuremath{\mathrm{VV}\rightarrow\mathrm{JJ}}\xspace analysis}
\label{app-Avvjj}
\par The expected limits obtained in the emulation of the ATLAS \ensuremath{\mathrm{VV}\rightarrow\mathrm{JJ}}\xspace channel show a 40\% discrepancy with respect to the official results (see Sec. \ref{sec:JJ}). This is the largest discrepancy observed among all the channels considered in this study. We have considered alternative approaches in our strategy and carried out several cross-checks, which are summarised here:
\begin{itemize}
\item \textbf{Nominal background}: ATLAS publishes a background description with a total background uncertainty. This information can be used directly as an input to our analysis. The disadvantage of this approach is that it combines all systematic uncertainties into a single contribution, implying a correlation model that may not reflect the accuracy of the fit performed by the ATLAS collaboration.
\item \textbf{Pure fitting}: We have repeated the fit on the data distribution provided by the ATLAS collaboration. The fitting procedure naturally yields a covariance matrix for the shape parameters, which allows us to adopt a more realistic correlation model.
\item \textbf{Rescaling}: This is a mixed approach in which the fit is performed over the data distribution to obtain the covariance matrix of the fitting function parameters, but the resulting background prediction and the corresponding uncertainties are then rescaled to match those provided by ATLAS. In this approach, the official ATLAS background prediction is used and our fit is only used to model the uncertainties and their correlations.
\item \textbf{Sidebands}: In this case we repeat the fit procedure described above, after excluding the region of the largest deviation (1700--2300~GeV) from the fit range, in order to exclude the possibility that it could bias the fit.
\end{itemize}
Fig.~\ref{fig:comparisonATLASVVJJ} shows the ratio of the observed exclusion limits to the ones from the official ATLAS results for the different approaches summarised above. In all cases the differences are very small, which suggests that the observed discrepancy should be attributed to a factor other than the background determination procedure. The discrepancy is absorbed in the fudge factor which, when tuned to deliver the official expected exclusion limits, removes to a large extent the differences in the observed limits. One should note that the decision to employ these correction factors in our analysis (for this and other channels) does not qualitatively change the conclusions of this study. This can be seen, for example, in the middle plot of Fig. \ref{COMBO_VV_llJ_JJ}, where it is shown that the two approaches yield significances that typically differ by 0.5$\sigma$.
\begin{figure*}[htb]
\centering
\includegraphics[width=0.32\textwidth, angle =0 ]{WW_ratio_alternative.pdf}
\includegraphics[width=0.32\textwidth, angle =0 ]{WZ_ratio_alternative.pdf}
\includegraphics[width=0.32\textwidth, angle =0 ]{ZZ_ratio_alternative.pdf}
\caption{Emulation of ATLAS \ensuremath{\mathrm{VV}\rightarrow\mathrm{JJ}}\xspace search and comparison of the alternative approaches for the background prediction considered: Fudge factors as a function of the resonance mass $\ensuremath{m_\mathrm{X}}\xspace$, determined via the ratio of the expected limits obtained with different background estimation techniques (black: ``pure fitting'', red: ``nominal background'',
blue: ``rescaling'', magenta: ``sidebands'') over those in the official ATLAS result
for the $\ensuremath{\PW_\mathrm{L}\,\PW_\mathrm{L}}\xspace$ (left), $\ensuremath{\PW_\mathrm{L}\,\PZ_\mathrm{L}}\xspace$ (middle) and $\ensuremath{\PZ_\mathrm{L}\,\PZ_\mathrm{L}}\xspace$ channels (right). See text for details.
}
\label{fig:comparisonATLASVVJJ}
\end{figure*}
\section{$Z' \to WW$ signal}
\label{sec:zp}
We recast the $Z'\to WW$ signal and compare with the $WW$ signal from a bulk KK-graviton hypothesis.
In both benchmarks the final-state bosons are purely longitudinal. Due to the different tensor structure of the $XWW$ coupling, the $W$-bosons coming from the bulk KK-graviton hypothesis are slightly more central~\cite{eiko}.
The change in efficiency can be factored into two effects: the basic acceptance and the angular dependence of the V-boson tagging. Figure \ref{fig:cutflowsemi} shows the relative difference in efficiency between the two benchmarks in the CMS semi-leptonic channel, both in the negligible-width regime. {\bf [EXPLAIN THE TO-BE-DONE Figure]}
\begin{figure*}[h]\begin{center}
[FIGURE WITH EFFICIENCIES/CUT FLOW]
\caption{\small suggestion to paper \label{fig:cutflowsemi}}
\end{center}\end{figure*}
{\bf [PRACTICAL STRATEGY TO FULLY HADRONIC. AND ATLAS SEMI-LEP]}
\subsubsection{Description of the ATLAS analysis}
\par The ATLAS fully hadronic search analyses calorimetric dijet
events. The main irreducible background
is dijet production in QCD, which is dominated by $2 \to
2$ $t$-channel processes involving quarks and gluons. The contribution of
these processes is minimised by restricting the jet acceptance to $|\eta| <
2.0$ and the rapidity difference between those two jets to $|\Delta \eta| <
1.2$. The events are required to have low missing transverse momentum and a
rather symmetric dijet topology (similar \ensuremath{p_\mathrm{T}}\xspace{} for the two leading jets) to
reduce the detector noise. After this selection, the efficiency is
approximately 70-80\% for a heavy
vector boson signal, and above 80\% for a
\ensuremath{\mathrm{G}_\mathrm{bulk}}\xspace signal.
\par To further reduce the multijet background, two fat jets are
reconstructed using the Cambridge-Aachen algorithm
\cite{Dokshitzer:1997in,Wobisch:1998wt} with radius parameter $R = 1.2$. The
mass-drop filtering algorithm~\cite{Butterworth:2008iy} is applied to each
of these jets for the identification of the sub-jets and grooming.
Events are kept if each of the two leading jets satisfies the following
conditions: it has two sub-jets with similar transverse momentum,
fewer than 30 tracks matched to it, and a pruned mass within a
$\pm$13 GeV window either around 82.4 GeV (for \ensuremath{\mathrm{W}}\xspace{} tagging) or around
92.8 GeV (for \ensuremath{\mathrm{Z}}\xspace{}
tagging). The selection efficiency of the grooming algorithm for fat jets
from a \ensuremath{\mathrm{W^\prime}}\xspace{} resonance is between 30\% and 40\%.
The events are
subsequently classified into three
non-mutually-exclusive categories, based on the jet-mass values:
\ensuremath{\mathrm{W}}\xspace{}\ensuremath{\mathrm{W}}\xspace{}, \ensuremath{\mathrm{W}}\xspace{}\ensuremath{\mathrm{Z}}\xspace{} and \ensuremath{\mathrm{Z}}\xspace{}\ensuremath{\mathrm{Z}}\xspace{}. The overall product of the geometric
acceptance with the signal
efficiency for this analysis is typically 10-20\%.
\subsubsection{Statistical analysis}
\par The analysis uses the smoothness test (``bump search'')
approach: the background is approximated by a steeply falling function,
while the signal template is taken from simulation. The sum of the two components
is then fitted to the data. The background function used by the ATLAS collaboration is:
\begin{equation}
f(\ensuremath{m_\mathrm{JJ}}\xspace) = p_0 \, (1-\ensuremath{m_\mathrm{JJ}}\xspace)^{p_1 - \xi p_2} \, \ensuremath{m_\mathrm{JJ}}\xspace^{p_2}
\end{equation}
where $p_0$, $p_1$ and $p_2$ are free parameters, $\xi$ is a fixed constant, and
\ensuremath{m_\mathrm{JJ}}\xspace{} is the dijet invariant mass, normalised to the collision energy so that $0 < \ensuremath{m_\mathrm{JJ}}\xspace < 1$; ATLAS has also made
the signal templates used in the analysis public.
We employ the same function for the background description, but recalculate the
background uncertainties in order to better account for the large scale correlations in $\ensuremath{m_\mathrm{JJ}}\xspace$.
To this end, we refit the data in each of the three categories above using
the aforementioned background parametrisation. We diagonalise
the uncertainty matrix and obtain three uncertainty eigenvectors
($\sigma_{\lambda_i}$, with $i = 0,1,2$).
Our fit result produces a background estimate which agrees with the nominal
background within 10\%, which is well within the uncertainties (see Appendix \ref{app-Avvjj}).
This background is
subsequently used together with the associated uncertainties in our statistical
analysis (see Fig. \ref{fig:ATLASbkg}).
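The refitting step can be sketched as follows; the value of the constant $\xi$, the toy binned data, and the Poisson-style weights are illustrative stand-ins for the actual inputs of each tagging category.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

XI = 4.3   # illustrative value for the fixed constant xi

def dijet_bkg(m, p0, p1, p2):
    # Background parametrisation above; m = mJJ / sqrt(s)
    return p0 * (1.0 - m) ** (p1 - XI * p2) * m ** p2

# Toy stand-in for the binned mJJ data of one tagging category.
m_centres = np.linspace(0.13, 0.45, 30)
counts = dijet_bkg(m_centres, 2e4, 10.0, -2.0)
counts = np.random.default_rng(1).poisson(counts).astype(float)

popt, cov = curve_fit(dijet_bkg, m_centres, counts,
                      p0=[2e4, 10.0, -2.0],
                      sigma=np.sqrt(counts + 1.0))  # Poisson-style

# Diagonalise the covariance: columns of `shifts` give the three
# one-sigma uncertainty eigenvectors in parameter space.
eigvals, eigvecs = np.linalg.eigh(cov)
shifts = eigvecs * np.sqrt(eigvals)
\end{verbatim}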
We consider the following systematic uncertainties, treated as fully
correlated across \ensuremath{m_\mathrm{JJ}}\xspace histogram bins:
\begin{itemize}
\item \emph{Background uncertainty}, obtained as described above.
\item \emph{Signal normalisation uncertainty}, which is separated into
two further sub-categories: a common-across-channels systematic
uncertainty corresponding to the luminosity measurement (2.8\%), and an
additional term applicable to the $\ensuremath{\mathrm{JJ}}\xspace$ channel that covers \ensuremath{\mathrm{V}}\xspace-tagging
uncertainties as well as jet systematics.
\item \emph{Signal jet energy scale uncertainty}, which includes jet
transverse momentum and mass uncertainties (with a $\pm 2\%$ and $\pm
5\%$ impact on $\ensuremath{m_\mathrm{JJ}}\xspace$, respectively).
An additional jet energy resolution uncertainty is known to have a
negligible effect on the signal shape and is ignored in this study.
\end{itemize}
\par Our statistical analysis produces expected exclusion limits that are
typically
50\% more stringent than the ones publicly provided by ATLAS. This
discrepancy, discussed in detail in Appendix \ref{app-Avvjj}, is corrected
for with the introduction of a {\it fudge} factor, defined as the ratio of
the ATLAS expected exclusion
limits and the ones from this study obtained with the {\sc THETA}
statistical framework (see Fig.
\ref{fig:ATLASfudge}). With this correction, our calculated exclusion limits
are in good
agreement with the public ATLAS results (see Fig. \ref{fig:ATLASlimit}).
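Operationally, the correction is an element-wise rescaling of the limits at each mass point, as in the sketch below (the limit values are placeholders):
\begin{verbatim}
import numpy as np

# Illustrative limit arrays indexed by resonance mass points.
our_expected   = np.array([30., 12., 6.])   # placeholder values
atlas_expected = np.array([45., 18., 9.])   # placeholder values
our_observed   = np.array([28., 14., 5.])   # placeholder values

fudge = atlas_expected / our_expected       # per-mass fudge factor
corrected_observed = our_observed * fudge   # corrected limits
\end{verbatim}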
\begin{figure*}[htb]\begin{center}
\includegraphics[width=0.32\textwidth, angle =0 ]{ATLAS_WWJJ.pdf}
\includegraphics[width=0.32\textwidth, angle =0 ]{ATLAS_WZJJ.pdf}
\includegraphics[width=0.32\textwidth, angle =0 ]{ATLAS_ZZJJ.pdf}
\caption{\small ATLAS hadronic search: Comparison between the
official ATLAS fit (blue line) and the fit of this study with uncertainties as
described in the text (coloured bands), with the
overlaid data of the \ensuremath{m_\mathrm{JJ}}\xspace spectrum for the \ensuremath{\mathrm{W}}\xspace{}\ensuremath{\mathrm{W}}\xspace{} (left), \ensuremath{\mathrm{W}}\xspace{}\ensuremath{\mathrm{Z}}\xspace{} (middle) and
\ensuremath{\mathrm{Z}}\xspace{}\ensuremath{\mathrm{Z}}\xspace{} (right) tagging selections.}
\label{fig:ATLASbkg}
\end{center}\end{figure*}
\begin{figure*}[htb]\begin{center}
\includegraphics[width=0.39\textwidth, angle =0 ]{VV_ratio.pdf}
\caption{\small ATLAS hadronic search: Ratio of observed
exclusion limits obtained with this study to the ones of the
official ATLAS result, as a function of the mass $m_X$ of the exotic
resonance
for the \ensuremath{\mathrm{W}}\xspace{}\ensuremath{\mathrm{W}}\xspace{} (black), \ensuremath{\mathrm{Z}}\xspace{}\ensuremath{\mathrm{Z}}\xspace{} (red)
and \ensuremath{\mathrm{W}}\xspace{}\ensuremath{\mathrm{Z}}\xspace{} (magenta) tagging selections.
\label{fig:ATLASfudge}}
\end{center}\end{figure*}
\begin{figure*}[htb]\begin{center}
\includegraphics[width=0.3\textwidth, angle =0 ]{WW_less.pdf}
\includegraphics[width=0.3\textwidth, angle =0 ]{WZ_less.pdf}
\includegraphics[width=0.3\textwidth, angle =0 ]{ZZ_less.pdf}
\caption{\small ATLAS hadronic search: Observed exclusion
limits on exotic production cross section as a function of the resonance
mass $m_X$ obtained with this study, with (black) and without (red) the
correction discussed in the text (``fudge''), and comparison with the
official ATLAS results (grey)
for \ensuremath{\PGbulk \rightarrow \PW_L\PW_L}\xspace (left), \ensuremath{\PWp \rightarrow \PW_L\,\PZ_L}\xspace (middle) and \ensuremath{\PGbulk \rightarrow \PZ_L\PZ_L}\xspace (right) signal hypotheses
and tagging selections. The green and yellow bands represent the one and
two sigma variations around the median expected limits (dashed lines)
calculated with the same fudge factor.
\label{fig:ATLASlimit} }
\end{center}\end{figure*}
\subsubsection{Results with \ensuremath{\mathrm{WW}}\xspace, \ensuremath{\mathrm{WZ}}\xspace and \ensuremath{\mathrm{ZZ}}\xspace signal hypotheses}
As discussed above, due to the finite detector resolution, the
$\ensuremath{\mathrm{V}}\xspace$-tagging tool is not capable of differentiating between
fat jets originating from \ensuremath{\mathrm{W}}\xspace or \ensuremath{\mathrm{Z}}\xspace bosons. However, there is a
significant performance difference between \ensuremath{\mathrm{W}}\xspace and \ensuremath{\mathrm{Z}}\xspace tagging efficiencies
of up to $\approx 30\%$, mainly as a result of the different boson
masses. By using the mass distribution of longitudinal $\ensuremath{\mathrm{V}}\xspace$-jets, as
documented in Fig. 1 of Ref.~\cite{ATLASVV}, and by taking into account the
different \ensuremath{\mathrm{W}}\xspace and \ensuremath{\mathrm{Z}}\xspace efficiencies, we can calculate the efficiency of tagging
selections for different signal hypotheses (\ensuremath{\mathrm{WW}}\xspace, \ensuremath{\mathrm{WZ}}\xspace, \ensuremath{\mathrm{ZZ}}\xspace). The
comparison of the tagging selection efficiencies can be found
in Table~\ref{table:windows}.
\begin{table}[htb]
\centering
\topcaption{\small Relative efficiencies for \ensuremath{\mathrm{WW}}\xspace, \ensuremath{\mathrm{WZ}}\xspace, \ensuremath{\mathrm{ZZ}}\xspace signal
hypotheses for tagging selection using different mass windows.
\label{table:windows}}
\footnotesize{
\begin{tabular}{cccc}
\toprule
& \multicolumn{3}{c}{Signal hypothesis} \\ \cmidrule(lr){2-4}
Tagging selection & \ensuremath{\mathrm{WW}}\xspace & \ensuremath{\mathrm{WZ}}\xspace & \ensuremath{\mathrm{ZZ}}\xspace \\\midrule
\ensuremath{\mathrm{WW}}\xspace window & 1.00 & 0.65 & 0.42 \\
\ensuremath{\mathrm{WZ}}\xspace window & 0.84 & 1.00 & 0.65 \\
\ensuremath{\mathrm{ZZ}}\xspace window & 0.70 & 0.84 & 1.00 \\\bottomrule
\end{tabular}
}
\end{table}
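To illustrate how such relative efficiencies can be derived, the sketch below integrates toy stand-ins for the groomed-mass spectra over the two mass windows; the Gaussian widths are illustrative, with the real distributions to be read from Fig.~1 of Ref.~\cite{ATLASVV}.
\begin{verbatim}
import numpy as np

def window_eff(jet_masses, centre, half_width=13.0):
    """Fraction of V-jets with groomed mass inside the window."""
    return np.mean(np.abs(jet_masses - centre) < half_width)

# Toy stand-ins for the groomed-mass distributions; the 8 GeV
# width is chosen purely for illustration.
rng = np.random.default_rng(0)
w_masses = rng.normal(82.4, 8.0, 100_000)
z_masses = rng.normal(92.8, 8.0, 100_000)

# Relative efficiency of the WW window for a WZ signal,
# normalised to its own WZ window (centres 82.4 / 92.8 GeV).
eff_ww = window_eff(w_masses, 82.4) * window_eff(z_masses, 82.4)
eff_wz = window_eff(w_masses, 82.4) * window_eff(z_masses, 92.8)
print(eff_ww / eff_wz)
\end{verbatim}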
The effect of applying the different tagging selections to the \ensuremath{\mathrm{WW}}\xspace, \ensuremath{\mathrm{WZ}}\xspace and
\ensuremath{\mathrm{ZZ}}\xspace signal hypotheses as a function of the resonance mass is shown in
Fig.~\ref{fig:windows}. We assume that
the $\ensuremath{m_\mathrm{JJ}}\xspace$ spectrum is not affected by the mass window difference in
the tagging selections, \ie that the same distribution describes the three
tagging categories \ensuremath{\mathrm{WW}}\xspace, \ensuremath{\mathrm{WZ}}\xspace and \ensuremath{\mathrm{ZZ}}\xspace. Since the three categories have
common events, they cannot be combined as if they were statistically
independent. Instead, for each theoretical model under consideration we
choose the tagging category that gives the best expected exclusion
limits. For the \ensuremath{\mathrm{W^\prime}}\xspace model the \ensuremath{\mathrm{WZ}}\xspace tagging selection gives
the best result, whereas for the \ensuremath{\mathrm{G}_\mathrm{bulk}}\xspace graviton model in the $\ensuremath{\PW_\mathrm{L}\,\PW_\mathrm{L}}\xspace$ and
$\ensuremath{\PZ_\mathrm{L}\,\PZ_\mathrm{L}}\xspace$ final states the \ensuremath{\mathrm{ZZ}}\xspace tagging
selection has the best performance.
\begin{figure*}[htb]\begin{center}
\includegraphics[width=0.3\textwidth, angle =0 ]{WWinZZ_fudge.pdf}
\includegraphics[width=0.3\textwidth, angle =0 ]{WZinWW_fudge.pdf}
\includegraphics[width=0.3\textwidth, angle =0 ]{ZZinWW_fudge.pdf}
\caption{\small ATLAS hadronic search:
Expected exclusion limits for different tagging and mass-window
selections, as a function of the mass $m_X$ of the exotic resonance for
\ensuremath{\PGbulk \rightarrow \PW_L\PW_L}\xspace (left), \ensuremath{\PWp \rightarrow \PW_L\,\PZ_L}\xspace (middle) and \ensuremath{\PGbulk \rightarrow \PZ_L\PZ_L}\xspace (right) signal hypotheses. The
results have been obtained with the correction discussed in the
text. \label{fig:windows}}
\end{center}\end{figure*}
\subsubsection{Description of the ATLAS analysis}
The ATLAS semileptonic search considers both the case in which the two
quarks from the vector boson decay are reconstructed as a single merged jet
(boosted regime), and the case in which they are reconstructed as two
distinct jets (resolved regime). In this study, we focus on resonances
heavier than 1.5~TeV, for which the merged regime largely drives the
sensitivity. Thus we consider only the Merged Region (MR) categories of
Refs.~\cite{ATLASZV,ATLASWV}.
In both \ensuremath{\mathrm{ZV}\rightarrow\mathrm{\ell\ell J}}\xspace and \ensuremath{\mathrm{WV}\rightarrow\mathrm{\ell\nu J}}\xspace searches, the boosted jet is identified using the
mass-drop filtering algorithm (as in the $\ensuremath{\mathrm{VV}\rightarrow\mathrm{JJ}}\xspace$ search). In addition, two
same-flavour opposite-sign leptons, or one charged lepton and missing transverse
energy (MET) are required. The events are selected online by single- or
double-lepton based triggers. The detector coverage includes the tracker
volume ($|\eta| < 2.5$) and the fiducial region of the electromagnetic
calorimeter (for electrons) or the muon detector. The typical $\ensuremath{p_\mathrm{T}}\xspace$
threshold for the charged leptons and for MET is 25 GeV. The main
backgrounds are inclusive \ensuremath{\mathrm{V}}\xspace production (\ie \ensuremath{\mathrm{Z}}\xspace+jets for the \ensuremath{\mathrm{\ell \ell J}}\xspace channel
and \ensuremath{\mathrm{W}}\xspace+jets for the \ensuremath{\mathrm{\ell\nu J}}\xspace channel), as well as $t\bar{t}$ production.
\subsubsection{Statistical analysis}
We build the likelihood for the ATLAS semileptonic searches using the
information documented in the HEPDATA database. The ATLAS collaboration
estimates the background uncertainties separately for each lepton
category. The electron \ensuremath{p_\mathrm{T}}\xspace resolution is better than that of the muon in the high-\ensuremath{p_\mathrm{T}}\xspace{}
region.
The systematic uncertainties associated with different background
sources ($t\bar{t}$ and electroweak components) are
also treated separately. Nevertheless, the background distributions
documented in the HEPDATA database (see
Fig.~\ref{fig:check_atlas_vv_llj_lnuj}) are presented jointly for
electrons and muons.
\begin{figure*}[htb]\begin{center}
\includegraphics[width=0.40\textwidth, angle =0 ]{ATLAS_VV_llJ_MR.pdf}
\includegraphics[width=0.40\textwidth, angle =0 ]{ATLAS_VV_lnJ_MR.pdf}
\caption{\small ATLAS \ensuremath{\mathrm{ZV}\rightarrow\mathrm{\ell\ell J}}\xspace (left) and \ensuremath{\mathrm{WV}\rightarrow\mathrm{\ell\nu J}}\xspace (right)
searches: Comparison between the official ATLAS background (blue line) and its
uncertainties (purple band) with the overlaid data of the \ensuremath{m_\mathrm{JJ}}\xspace spectrum
for the Merged Region (of the vector boson hadronic reconstruction) category.
\label{fig:check_atlas_vv_llj_lnuj} }
\end{center}\end{figure*}
We model the signal distributions in the diboson mass spectrum with a Gaussian
function, centred at the assumed resonance mass and with a width reflecting
the experimental resolution. We assume a fixed value of 4\% resolution in
the \ensuremath{\mathrm{\ell \ell J}}\xspace channel for all mass values\,\footnote{The signal resolution for a
$\ensuremath{m_\mathrm{X}}\xspace = 2$ TeV resonance in the \ensuremath{\mathrm{\ell \ell J}}\xspace channel is 4\%, decreasing to 3\% for
lower masses \cite{ATLASZV}. We assume a fixed resolution to simplify the
analysis.}. Similarly, we assume a fixed value of 10\% resolution in the \ensuremath{\mathrm{\ell\nu J}}\xspace
channel for all mass values\,\footnote{In the case of the \ensuremath{\mathrm{\ell\nu J}}\xspace channel, the
reconstruction of the resonance mass requires an assumption on the
longitudinal momentum of the outgoing neutrino that is not detected. In
practice, this is estimated from the MET measurement combined with a
\ensuremath{\mathrm{W}}\xspace{} mass constraint. The diboson resonance mass is subsequently
computed using the jet, lepton and calculated neutrino momenta. The
mass resolution in this channel is degraded compared to
the \ensuremath{\mathrm{\ell \ell J}}\xspace channel.} (see Fig. 1 in Ref. \cite{ATLASWV}).
The signal distributions are normalised to the expected
yield, as calculated from the theoretical cross section and the selection
efficiency provided by the ATLAS collaboration.
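As an illustration, a minimal sketch of this signal-template
construction is given below; the cross section, luminosity and
efficiency values are placeholders, not the numbers used in the
analysis:
\begin{verbatim}
# Sketch: Gaussian signal template, normalised to the expected yield
# (cross section, luminosity and efficiency values are placeholders).
import numpy as np
from math import erf, sqrt

def signal_template(bin_edges, m_x, resolution, xsec_fb, lumi_ifb, eff):
    sigma = resolution * m_x
    cdf = lambda x: 0.5 * (1.0 + erf((x - m_x) / (sqrt(2.0) * sigma)))
    fracs = np.array([cdf(hi) - cdf(lo)
                      for lo, hi in zip(bin_edges[:-1], bin_edges[1:])])
    return xsec_fb * lumi_ifb * eff * fracs     # expected counts per bin

edges = np.arange(1000.0, 3100.0, 100.0)        # GeV, 100-GeV-wide bins
counts_llj = signal_template(edges, m_x=2000.0, resolution=0.04,
                             xsec_fb=10.0, lumi_ifb=20.3, eff=0.15)
counts_lnj = signal_template(edges, m_x=2000.0, resolution=0.10,
                             xsec_fb=10.0, lumi_ifb=20.3, eff=0.15)
\end{verbatim}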
We consider the following systematic uncertainties, treated as fully
correlated across \ensuremath{m_\mathrm{JJ}}\xspace histogram bins:
\begin{itemize}
\item {\em Background uncertainty}, provided by the ATLAS experiment (in HEPDATA).
\item {\em Signal normalisation uncertainty}, which is separated into two
further sub-categories: a
common-across-channels systematic uncertainty corresponding to the
luminosity measurement (2.8\%), and an additional term
accounting for all types of scale and efficiency systematic effects (10\%). The
latter is treated as uncorrelated between the $\ensuremath{\mathrm{\ell \ell J}}\xspace$ and $\ensuremath{\mathrm{\ell\nu J}}\xspace$ channels.
\end{itemize}
Given the approximations that we have introduced to model the signal, we do not expect
our statistical analysis to produce results matching with high accuracy the
public ATLAS results. Similarly to the procedure followed for the emulation
of the fully hadronic ATLAS search, we introduce a fudge factor to reduce this
discrepancy. The value of the fudge factor is chosen such that the expected
exclusion limits produced by this study agree
with the official limits by ATLAS. It is found to be between 0.8 and 1.2 in
the resonance mass range of interest, slowly decreasing for larger mass
values (Fig.~\ref{fig:Asemiratio}).
With this correction, our calculated exclusion limits are in good agreement
with the public ATLAS results (Fig. \ref{fig:ll_lnuJ}).
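Schematically, the correction amounts to interpolating the limit ratio
across the mass range, as in the following sketch (all numbers purely
illustrative):
\begin{verbatim}
# Sketch: fudge factor as the ratio of the limits obtained here to the
# official ones, interpolated in resonance mass (numbers illustrative).
import numpy as np

masses   = np.array([1500.0, 2000.0, 2500.0, 3000.0])   # GeV
ours     = np.array([12.0, 5.0, 2.6, 1.5])              # fb, this study
official = np.array([11.0, 5.2, 3.0, 1.9])              # fb, ATLAS

def fudge(m):
    return np.interp(m, masses, ours / official)
\end{verbatim}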
\begin{figure*}[htb]\begin{center}
\includegraphics[width=0.4\textwidth, angle =0 ]{ATLAS_VV_lvJ_semi_paper_fu.pdf}
\caption{\small ATLAS semileptonic searches: Fudge factor as a
function of the mass $m_X$ of the exotic resonance, calculated via the
ratio of observed exclusion limits obtained with this study to the ones of
the official ATLAS result, for the \ensuremath{\PWp \rightarrow \PW_L\,\PZ_L}\xspace (red) and \ensuremath{\PGbulk \rightarrow \PZ_L\PZ_L}\xspace (black) signal
hypotheses in the
\ensuremath{\mathrm{\ell \ell J}}\xspace channel, and for the \ensuremath{\PWp \rightarrow \PW_L\,\PZ_L}\xspace (magenta) and \ensuremath{\PGbulk \rightarrow \PW_L\PW_L}\xspace (orange) signal
hypotheses in the \ensuremath{\mathrm{\ell\nu J}}\xspace channel.
\label{fig:Asemiratio} }
\end{center}\end{figure*}
\begin{figure*}[htb]\begin{center}
\includegraphics[width=0.4\textwidth, angle =0 ]{ATLAS_VV_llJ_ZZ_our_nominal_paper_fu.pdf}
\includegraphics[width=0.4\textwidth, angle =0 ]{ATLAS_VV_llJ_WZ_our_nominal_paper_fu.pdf}
\includegraphics[width=0.4\textwidth, angle =0 ]{ATLAS_VV_lvJ_WW_our_nominal_paper_fu.pdf}
\includegraphics[width=0.4\textwidth, angle =0 ]{ATLAS_VV_lvJ_WZ_our_nominal_paper_fu.pdf}
\caption{\small ATLAS semileptonic searches: Expected
(dashed lines) and observed (continuous lines) exclusion limits on exotic
production cross sections as a function of the
resonance mass $m_X$ obtained with this study (black), and comparison
with the official ATLAS results (red) for \ensuremath{\PGbulk \rightarrow \PZ_L\PZ_L}\xspace (top left), \ensuremath{\PWp \rightarrow \PW_L\,\PZ_L}\xspace (top
right), \ensuremath{\PGbulk \rightarrow \PW_L\PW_L}\xspace (bottom left) and \ensuremath{\PWp \rightarrow \PW_L\,\PZ_L}\xspace (bottom right) signal hypotheses
in the \ensuremath{\mathrm{\ell \ell J}}\xspace
(top) and \ensuremath{\mathrm{\ell\nu J}}\xspace (bottom) channels. The green and yellow bands represent
the one and two sigma variations around the median expected limits
calculated in this study, with all the corrections described in the text included.
\label{fig:ll_lnuJ}}
\end{center}\end{figure*}
\subsubsection{Description of the CMS analysis}
The jet acceptance is restricted to $|\eta| < 2.5$ and $|\Delta \eta| <
1.3$ in order to reduce the contamination from multijet events. The detector
noise is removed by requiring tight quality criteria on the jets.
The pruning algorithm~\cite{Ellis:2009me} is used to clean up the jet from
soft and large-angle radiation. The mass of the resulting fat jet is
constrained to the $70 < m_J < 100$~GeV range. Finally, the
signal-to-background ratio is enhanced by exploiting the jet
\textit{N-subjettiness}~\cite{Thaler:2010tr, Thaler:2011gf, Stewart:2010tn}
variable $\tau_N$. This variable is used to quantify how well the jet
constituents can be arranged into $N$ sub-jets, \ie it provides a consistency
check with the hadronic \ensuremath{\mathrm{V}}\xspace boson hypothesis.
The ratio $\tau_{21} = \tau_2/\tau_1$ is computed for each of the two
leading jets: the smaller the ratio, the larger the probability that the
jet consists of two sub-jets. The analysis considers two categories: the
high purity (HP) one, defined by requiring $\tau_{21} < 0.5$ for both
jets, and the low purity (LP) one, defined by requiring
one jet with $\tau_{21} < 0.5$ and the other one with $0.5 < \tau_{21} <
0.75$. The HP category is characterised by a smaller background
contamination. The LP category
captures signal events with asymmetric decays of the vector-boson
candidates in the laboratory frame. Dividing the event sample into the LP
and HP categories improves the sensitivity of the analysis in the mass
range between 1~TeV and 2~TeV, while avoiding the inefficiency of a tight
$\tau_{21}$ selection at large jet momenta.
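The categorisation logic can be summarised by the following sketch (cut
values from the text; the event content is illustrative):
\begin{verbatim}
# Sketch of the HP/LP purity categorisation of a dijet event, based on
# the N-subjettiness ratio tau21 of each of the two leading jets.
def purity_category(tau21_j1, tau21_j2):
    tight = [t < 0.5 for t in (tau21_j1, tau21_j2)]
    loose = [0.5 < t < 0.75 for t in (tau21_j1, tau21_j2)]
    if all(tight):
        return "HP"          # both jets pass the tight requirement
    if (tight[0] and loose[1]) or (loose[0] and tight[1]):
        return "LP"          # one tight and one loose jet
    return None              # event rejected

assert purity_category(0.3, 0.4) == "HP"
assert purity_category(0.3, 0.6) == "LP"
assert purity_category(0.8, 0.3) is None
\end{verbatim}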
The product of the geometrical acceptance with the signal
efficiency is similar to the one in the ATLAS search, ranging between 10\%
and 20\%.
\subsubsection{Statistical analysis}
The CMS collaboration provides the binned data and background distributions
with the associated uncertainties in the HEPDATA database (see
Fig. \ref{fig:compareCMS}), as well as the signal distributions for three
different models along with their efficiencies~\cite{CMSVV}: \ensuremath{\PWp \rightarrow \PW_L\,\PZ_L}\xspace and
\ensuremath{\mathrm{G}_\mathrm{bulk}}\xspace decaying exclusively to \ensuremath{\PZ_\mathrm{L}\,\PZ_\mathrm{L}}\xspace or \ensuremath{\PW_\mathrm{L}\,\PW_\mathrm{L}}\xspace.
We consider the following systematic uncertainties:
\begin{itemize}
\item {\em Background uncertainty}, provided by CMS (in HEPDATA) and considered as
fully correlated across the bins of the \ensuremath{m_\mathrm{JJ}}\xspace distribution.
\item {\em Signal normalisation uncertainty}, which is separated
further into two sub-categories: a common-across-channels systematic uncertainty
corresponding to the luminosity measurement (2.2\%), and an additional
term applicable to the \ensuremath{\mathrm{JJ}}\xspace channel that covers \ensuremath{\mathrm{V}}\xspace-tagging
uncertainties, such as \ensuremath{p_\mathrm{T}}\xspace{}, pile-up and PDF dependencies (13\%). The
$\tau_{21}$ uncertainties are treated separately in the category
below.
\item {\em Signal purity category migration uncertainty}, which covers the
effects of events ``migrating'' from the HP to the LP category, or
vice-versa. This uncertainty amounts to 7.5\% and 54\%, respectively.
\item {\em Signal jet energy scale uncertainty}, which propagates to a
$\pm 1\%$ uncertainty on $\ensuremath{m_\mathrm{JJ}}\xspace$; it is treated in the same way as in the ATLAS case.
\end{itemize}
All systematic uncertainties are treated as fully correlated across different
\ensuremath{m_\mathrm{JJ}}\xspace bins. They are also considered as fully correlated between
the LP and the HP categories, with the exception of the ``purity
category migration'' uncertainty, which is treated as fully anti-correlated.
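The anti-correlation can be encoded with a single nuisance parameter
that moves the two category yields in opposite directions; a minimal
sketch, with the sizes quoted above and a sign convention that is our
assumption, is:
\begin{verbatim}
# Sketch: one nuisance parameter theta drives the purity migration,
# fully anti-correlated between categories (sizes from the text; the
# sign convention, HP down / LP up at +1 sigma, is an assumption).
def migrated_yields(s_hp, s_lp, theta):
    return s_hp * (1.0 - 0.075 * theta), s_lp * (1.0 + 0.54 * theta)
\end{verbatim}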
Our statistical analysis for \ensuremath{\PWp \rightarrow \PW_L\,\PZ_L}\xspace, \ensuremath{\PGbulk \rightarrow \PW_L\PW_L}\xspace and \ensuremath{\PGbulk \rightarrow \PZ_L\PZ_L}\xspace models produces
exclusion limits that are in very good agreement with the ones publicly
provided by CMS. An example of this agreement can be seen in the left plot of
Fig.~\ref{fig:checkCMS}.
The exclusion limits calculated in a few benchmark models can be seen in the
right plot of Fig. \ref{fig:checkCMS}. The most stringent limits are
obtained for the \ensuremath{\PGbulk \rightarrow \PZ_L\PZ_L}\xspace hypothesis, thanks to the higher \ensuremath{\mathrm{V}}\xspace-tagging
efficiency for \ensuremath{\mathrm{Z}}\xspace bosons.
\begin{figure*}[htb]\begin{center}
\includegraphics[width=0.40\textwidth, angle =0 ]{CMS_JJ_HPMassPlot.pdf}
\includegraphics[width=0.40\textwidth, angle =0 ]{CMS_JJ_LPMassPlot.pdf}
\caption{\small CMS hadronic search: \ensuremath{m_\mathrm{JJ}}\xspace data
distribution overlaid with the background fit employed in this study with
uncertainties
for High (left) and Low (right) Purity samples. See text for details.
\label{fig:compareCMS} }
\end{center}\end{figure*}
\begin{figure*}[htb]\begin{center}
\includegraphics[width=0.40\textwidth, angle =0 ]{WZ.pdf}
\includegraphics[width=0.40\textwidth, angle =0 ]{CMS_VV_JJ_dijetfit.pdf}
\caption{\small CMS hadronic search. {\bf Left:} Expected
(dashed lines) and observed (continuous lines) exclusion
limits on \ensuremath{\PWp \rightarrow \PW_L\,\PZ_L}\xspace production cross sections as a function of the
resonance mass $m_X$ obtained with this study (black), and comparison
with the official CMS results (red). The green and yellow bands represent
the one and two sigma variations around the median expected limits
calculated in this study; the dashed lines show the corresponding
variations of the official CMS result.
{\bf Right:} Expected (dashed lines) and
observed (continuous lines) exclusion limits on exotic production cross
section as a function of the resonance mass $m_X$ obtained with this
study for \ensuremath{\PWp \rightarrow \PW_L\,\PZ_L}\xspace (brown), \ensuremath{\PGbulk \rightarrow \PW_L\PW_L}\xspace (red)
and \ensuremath{\PGbulk \rightarrow \PZ_L\PZ_L}\xspace (black) signal hypotheses.
\label{fig:checkCMS} }
\end{center}\end{figure*}
\subsubsection{Description of the CMS analysis}
The CMS semileptonic analyses~\cite{CMSZVWV} are performed with data
collected by single-lepton triggers for the \ensuremath{\mathrm{\ell\nu J}}\xspace channel and double-lepton
triggers for the \ensuremath{\mathrm{\ell \ell J}}\xspace channel. Jets are identified as boosted vector bosons
using the same algorithm employed for the fully hadronic search (see
Section~\ref{sec:JJ}). Similarly to the strategy developed in the fully hadronic search,
LP and HP categories are introduced, based on the value of $\tau_{21}$, to
increase the analysis sensitivity.
The analysis is performed by using a \ensuremath{\mathrm{G}_\mathrm{bulk}}\xspace graviton as the benchmark
signal model. In order to facilitate the interpretation of the search results in
other theoretical models, the CMS collaboration provides the reconstruction efficiencies
of leptonic and hadronic $\ensuremath{\mathrm{W}}\xspace_L$ and $\ensuremath{\mathrm{Z}}\xspace_L$ in the HP category, as a
function of the boson's \ensuremath{p_\mathrm{T}}\xspace{} and $\eta$.
Those 2D efficiency maps include the effects of the pruned jet mass and
$\tau_{21}$ selections, as well as the resonance mass reconstruction.
\subsubsection{Statistical analysis}
The background model is extracted by fitting the \ensuremath{m_\mathrm{VV}}\xspace data distributions
for each lepton flavour with a levelled exponential
\begin{equation}
f(\ensuremath{m_\mathrm{VV}}\xspace) = N \, \exp\left [ -\frac{\ensuremath{m_\mathrm{VV}}\xspace}{\sigma + k \cdot \ensuremath{m_\mathrm{VV}}\xspace} \right ]
\end{equation}
where $N$, $k$ and $\sigma$ are free parameters. This function saturates
in the high $\ensuremath{m_\mathrm{VV}}\xspace$ region, and is meant to describe events where $\ensuremath{m_\mathrm{VV}}\xspace$ was
significantly mismeasured. For example, this may happen if a high \ensuremath{p_\mathrm{T}}\xspace{}
muon leaves a nearly straight track barely bent by the magnetic field, or
if the calculation of the neutrino momentum fails. In practice, this
function is used in Ref. \cite{CMSZVWV} to model the HP category
with $k$ as a free parameter, whereas for the LP category $k$ can be set to 0.
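For concreteness, this fit can be reproduced with standard tools as in
the following sketch, where the pseudo-data stand in for the published
spectra:
\begin{verbatim}
# Sketch: levelled-exponential fit to a binned m_VV spectrum
# (pseudo-data stand in for the published distributions).
import numpy as np
from scipy.optimize import curve_fit

def levelled_exp(m, N, sigma, k):
    return N * np.exp(-m / (sigma + k * m))

m_vv   = np.arange(750.0, 2550.0, 100.0)        # bin centres, GeV
counts = levelled_exp(m_vv, 5.0e3, 120.0, 0.03) # illustrative pseudo-data

# HP category: k is a free parameter
popt_hp, pcov_hp = curve_fit(levelled_exp, m_vv, counts,
                             p0=[1.0e3, 100.0, 0.0])
# LP category: k is fixed to 0, i.e. a pure falling exponential
popt_lp, pcov_lp = curve_fit(lambda m, N, s: levelled_exp(m, N, s, 0.0),
                             m_vv, counts, p0=[1.0e3, 100.0])
\end{verbatim}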
In the $\ensuremath{\mathrm{\ell \ell J}}\xspace$ channel we focus on the $\ensuremath{m_\mathrm{VV}}\xspace > 700$~GeV region, and we
merge the contents of the (publicly available) 50~GeV wide bins to obtain a
uniform, 100-GeV-wide binning for the \ensuremath{m_\mathrm{VV}}\xspace distribution.
We use the diagonalised uncertainties from the fit ($\sigma_{\lambda_i}$,
with $i = 0,1,2$) as
background uncertainties. Figs.~\ref{fig:CMSbkgsemiWW} and
\ref{fig:CMSbkgsemiZZ} show the comparison between the fits produced in this
study and the official CMS fits on the data distributions.
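The diagonalisation step is a standard eigen-decomposition of the fit
covariance; a short sketch, with an illustrative covariance matrix in
place of the fitted one, is:
\begin{verbatim}
# Sketch: diagonalising the (N, sigma, k) fit covariance to obtain
# independent nuisance parameters (matrix values are illustrative).
import numpy as np

pcov = np.array([[ 4.0e4, -1.0e2,  5.0e-1],
                 [-1.0e2,  9.0,   -2.0e-2],
                 [ 5.0e-1, -2.0e-2, 1.0e-4]])
eigval, eigvec = np.linalg.eigh(pcov)        # pcov is symmetric
sigma_lambda = np.sqrt(eigval)               # sigma_{lambda_i}, i = 0,1,2
# a +1 sigma shift of eigen-parameter i moves the fit parameters by
shift = lambda i: sigma_lambda[i] * eigvec[:, i]
\end{verbatim}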
We model the signal distributions in the diboson mass spectrum with a
Gaussian function. The HP signal yield is calculated from the theoretical
cross section and the selection efficiency obtained from the algorithm
described in Ref. \cite{CMSZVWV}. The first step in this process is the
generation of signal samples with the \texttt{Madgraph5} generator as
described in Sec.~\ref{sec:method}. We then apply
acceptance selections on the leptons and generator-level jets, and use the 2D
efficiency maps to emulate the \ensuremath{\mathrm{V}}\xspace-boson reconstruction and tagging processes. Finally,
we apply a 90\% correction to account for $b$-jet
veto inefficiencies. Considering the approximations made, this procedure is
expected to reproduce the official CMS results within a 10\% accuracy. The
HP-category efficiencies that we obtain
are consistent with the nominal \ensuremath{\PGbulk \rightarrow \PW_L\PW_L}\xspace efficiencies for $\ensuremath{m_\mathrm{X}}\xspace = 1.2$ TeV
within 6\%.
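A sketch of how such 2D maps can be applied to generator-level bosons is
given below; the grid and efficiency values are illustrative
placeholders for the published maps:
\begin{verbatim}
# Sketch: binned 2D tagging-efficiency map in (pT, |eta|); the grid and
# values are placeholders for the maps published by CMS.
import numpy as np

pt_edges  = np.array([200.0, 400.0, 600.0, 1000.0])   # GeV
eta_edges = np.array([0.0, 1.2, 2.4])
eff_map   = np.array([[0.55, 0.50],                   # eff[pt_bin, eta_bin]
                      [0.60, 0.55],
                      [0.58, 0.52]])

def tag_eff(pt, eta):
    i = np.clip(np.digitize(pt, pt_edges) - 1, 0, len(pt_edges) - 2)
    j = np.clip(np.digitize(abs(eta), eta_edges) - 1, 0, len(eta_edges) - 2)
    return eff_map[i, j]

# per-event weight: efficiencies of both bosons times the b-veto factor
weight = 0.90 * tag_eff(450.0, 0.8) * tag_eff(500.0, 1.5)
\end{verbatim}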
The LP category signal efficiencies are generally not provided, but
examples of the LP/HP efficiency ratios are given for a \ensuremath{\mathrm{G}_\mathrm{bulk}}\xspace signal with
$\ensuremath{m_\mathrm{X}}\xspace = 1.2$ TeV. The ratio is 0.47 (0.25) for
the \ensuremath{\mathrm{\ell \ell J}}\xspace (\ensuremath{\mathrm{\ell\nu J}}\xspace) channel. The reason for the efficiency difference between
the two cases lies in the different boosted jet selection applied in the
two channels. We make the assumption that we can use the same LP/HP ratio
for all mass points under consideration in this study, and use the values
above to estimate the expected signal yields in the LP category. Finally,
the $\tau_{21}$ categorisation is not
sensitive to the nature of the resonance\,\footnote{Provided that the
polarisation of the final state bosons is the same for both models.},
therefore we use the same LP/HP ratio also for the \ensuremath{\mathrm{W^\prime}}\xspace signal hypothesis.
\begin{figure*}[htb]
\centering
\includegraphics[width=0.4\textwidth, angle =0 ]{CMS_lnJ_ELEHP.pdf}
\includegraphics[width=0.4\textwidth, angle =0 ]{CMS_lnJ_ELELP.pdf}
\includegraphics[width=0.4\textwidth, angle =0 ]{CMS_lnJ_MUHP.pdf}
\includegraphics[width=0.4\textwidth, angle =0 ]{CMS_lnJ_MULP.pdf}
\caption{\small CMS \ensuremath{\mathrm{WV}\rightarrow\mathrm{\ell\nu J}}\xspace search: Comparison between the official CMS background
(blue line) and the background modelling with uncertainties employed by
this study (coloured bands), with the overlaid data of the \ensuremath{m_\mathrm{JJ}}\xspace spectrum for
the HP (left-hand side) and LP (right-hand side) categories, plotted
separately for the electron (top) and the muon (bottom) channels.}
\label{fig:CMSbkgsemiWW}
\end{figure*}
\begin{figure*}[htb]
\centering
\includegraphics[width=0.4\textwidth, angle =0 ]{CMS_llJ_ELEHP.pdf}
\includegraphics[width=0.4\textwidth, angle =0 ]{CMS_llJ_ELELP.pdf}
\includegraphics[width=0.4\textwidth, angle =0 ]{CMS_llJ_MUHP.pdf}
\includegraphics[width=0.4\textwidth, angle =0 ]{CMS_llJ_MULP.pdf}
\caption{\small
CMS \ensuremath{\mathrm{ZV}\rightarrow\mathrm{\ell\ell J}}\xspace search: Comparison between the official CMS background
(blue line) and the background modelling with uncertainties employed by
this study (coloured bands), with the overlaid data of the \ensuremath{m_\mathrm{JJ}}\xspace spectrum for
the HP (left-hand side) and LP (right-hand side) categories, plotted
separately for the electron (top) and the muon (bottom) channels.}
\label{fig:CMSbkgsemiZZ}
\end{figure*}
We consider the following systematic uncertainties, treated as fully
correlated across \ensuremath{m_\mathrm{JJ}}\xspace histogram bins:
\begin{itemize}
\item {\em Background uncertainty}, extracted from our fit to the data distributions.
\item {\em Signal normalisation uncertainty}, which is separated into two
further sub-categories: a
common-across-channels systematic uncertainty corresponding to the
luminosity measurement (2.2\%), and an additional uncertainty covering
all lepton-related uncertainties (3.7\%
for electrons, 3\% for muons), applied separately for the $\ensuremath{\mathrm{\ell \ell J}}\xspace$ and
$\ensuremath{\mathrm{\ell\nu J}}\xspace$ channels.
\item {\em Signal purity category migration uncertainty}, which covers the
effects of events ``migrating'' from the HP to the LP category, or
vice-versa. This uncertainty amounts to 9\% and 24\%, respectively.
\end{itemize}
As already discussed in previous sections, we apply a fudge factor to
account for differences between our background description and the one from
the public CMS result, as well as for the approximations introduced in the
signal modelling (Fig.~\ref{fig:CMSsemiNomA}). With this correction,
our calculated exclusion limits are in good agreement with the public CMS
results (Fig. \ref{fig:CMSsemiNomB}). The statistical uncertainties
(one- and two-sigma coverage bands) are $\approx 50\%$ smaller than
expected, as they have been calculated with the asymptotic CLs method,
which is known to underestimate uncertainties in tests with small
statistics.
\begin{figure*}[htb]
\centering
\includegraphics[width=0.5\textwidth, angle =0 ]{CMSsemi_paper_fudge.pdf}
\caption{\small CMS semileptonic searches: Fudge factor as a
function of the mass $m_X$ of the exotic resonance, calculated via the
ratio of observed exclusion limits obtained with this study to the ones of
the official CMS result for the \ensuremath{\PGbulk \rightarrow \PW_L\PW_L}\xspace (red) and \ensuremath{\PGbulk \rightarrow \PZ_L\PZ_L}\xspace (black) semileptonic analyses.
\label{fig:CMSsemiNomA}
}
\end{figure*}
\begin{figure*}[htb]
\centering
\includegraphics[width=0.45\textwidth, angle =0 ]{bulkWW_paper.pdf}
\includegraphics[width=0.45\textwidth, angle =0 ]{CMS_VV_llJ_BulkZZ_our_nominal_fu.pdf}
\caption{\small CMS semileptonic searches:
Expected (dashed lines) and observed (continuous lines) exclusion limits on exotic
production cross sections as a function of the
resonance mass $m_X$ obtained with this study (black), and comparison
with the official CMS results (red) for the \ensuremath{\PGbulk \rightarrow \PW_L\PW_L}\xspace search in the \ensuremath{\mathrm{\ell\nu J}}\xspace
channel (left) and the \ensuremath{\PGbulk \rightarrow \PZ_L\PZ_L}\xspace search in the
\ensuremath{\mathrm{\ell \ell J}}\xspace channel (right).
The green and yellow bands represent
the one and two sigma variations around the median expected limits
calculated in this study, with all the corrections described in the text included.
\label{fig:CMSsemiNomB}
}
\end{figure*}
We use the same procedure to recast the results in the context of a \ensuremath{\PWp \rightarrow \PW_L\,\PZ_L}\xspace
signal search, with the results presented in Fig.~\ref{fig:CMSsemiAlter}.
The jet mass selection for the \ensuremath{\mathrm{\ell \ell J}}\xspace channel is $70< m_J <110$~GeV, to be
compared with $65< m_J <105$~GeV for the \ensuremath{\mathrm{\ell\nu J}}\xspace analysis. This choice was
made in order to optimise the search for a neutral resonance (at the expense of
the search for a charged one). Since the \ensuremath{\mathrm{\ell\nu J}}\xspace channel mass window is shifted to a region with more background, the signal sensitivity for the \ensuremath{\mathrm{\ell\nu J}}\xspace channel is reduced.
\begin{figure*}[htb]
\centering
\includegraphics[width=0.4\textwidth, angle =0 ]{WZ_WW_fu.pdf}
\includegraphics[width=0.4\textwidth, angle =0 ]{CMS_VV_llJ_BulkZZ_WZ_paper.pdf}
\caption{\small CMS semileptonic searches:
Expected (dashed lines) and observed (continuous lines) exclusion limits on
exotic production cross section as a function of the resonance mass $m_X$
obtained with this study for the \ensuremath{\PGbulk \rightarrow \PW_L\PW_L}\xspace (red) and \ensuremath{\mathrm{W^\prime}}\xspace (black) signal
hypotheses in the \ensuremath{\mathrm{\ell\nu J}}\xspace channel (left) and for the \ensuremath{\PGbulk \rightarrow \PZ_L\PZ_L}\xspace (red) and \ensuremath{\mathrm{W^\prime}}\xspace
(black) signal hypotheses in the \ensuremath{\mathrm{\ell \ell J}}\xspace channel (right).
\label{fig:CMSsemiAlter}
}
\end{figure*}
\section{Narrow width approximation}
\label{sec:narrow}
The CMS collaboration assumes a signal with negligible width, whereas the ATLAS collaboration simulates signal distributions with a model-dependent width of $\approx 7\%$ of the resonance mass (see Table 1 of Ref.~\cite{ATLASVV}). In this appendix we estimate the effect of this difference in the final exclusion limits and provide a recipe for obtaining the ATLAS results in the narrow-width approximation.
The large width hypothesis used by the ATLAS collaboration impacts the limits through the modification of the signal shapes. In the $\ensuremath{\mathrm{JJ}}\xspace$ channel it widens the core of the signal distribution and creates a large left tail due to the interplay between the proton PDFs \cite{Harris:2011bh} and the natural width of the resonance, as one can see in the left plot of Fig. \ref{fig:check}. In practice, for a given total cross section we have events \textit{leaking} outside the $\pm 10\%$ window around $\ensuremath{m_\mathrm{X}}\xspace$. This value corresponds typically to the experimental resolution of this channel. The amount of this leakage, $f_l$, is provided in Ref.~\cite{ATLASVV} and is typically 15\% in the region under study in this paper.
We expect the events in the left tail to have no significant impact on the exclusion limits. A test was performed by truncating the signal to $\ensuremath{m_\mathrm{X}}\xspace \pm 200$ GeV and repeating the $\ensuremath{\mathrm{JJ}}\xspace$ limit-setting procedure for the \ensuremath{\mathrm{W^\prime}}\xspace hypothesis. As one can see in the right plot of Fig. \ref{fig:check}, the difference in the expected exclusion limits does not exceed 2\%.
To map the ATLAS limits into a narrow width hypothesis we make the following approximation: the main difference between the wide and narrow resonances is whether the leaking events end up in the tail or under the peak. Consequently, by multiplying the signal efficiency of ATLAS by $1/f_l$ we recover most of the properties of the narrow signal. In conclusion, we approximate the narrow signal hypothesis for ATLAS analyses by scaling the fully hadronic and semi-leptonic signals by a factor of 1.1 (\ie by increasing the signal yield by 10\%).
\begin{figure*}[htb]\begin{center}
\includegraphics[width=0.46\textwidth, angle =0 ]{ShapesComparisonWidth.pdf}
\includegraphics[width=0.43\textwidth, angle =0 ]{WZ_na_ratio.pdf}
\caption{\small Narrow-width approximation. {\bf Left:} Signal distribution in the diboson invariant mass for a 2 TeV \ensuremath{\mathrm{W^\prime}}\xspace signal. The hatched $\pm$200 GeV region around the signal represents the narrow-width approximation. {\bf Right:} Ratio of the expected (dashed lines) and observed (continuous lines) exclusion limits when constraining the signal width to 10\% of the resonance mass over those obtained with the default shape. \label{fig:check} }
\end{center}\end{figure*}
\section{Introduction}
\input{intro}
\section{General methodology}
\input{method}
\section{Fully hadronic searches: \ensuremath{\mathrm{VV}\rightarrow\mathrm{JJ}}\xspace}
\label{sec:JJ}
In this Section we discuss the analysis of the ATLAS and CMS searches in
the \ensuremath{\mathrm{VV}\rightarrow\mathrm{JJ}}\xspace channel. We first present the results of our analysis for the two
searches separately, followed by their combination and a summary of the
findings.
\subsection{Emulation of ATLAS search}
\input{atlasvvjj}
\subsection{Emulation of CMS search}
\input{cmsvvjj}
\subsection{Combined LHC results of hadronic searches}
\input{combovvjj}
\section{Semi-leptonic searches: \ensuremath{\mathrm{WV}\rightarrow\mathrm{\ell\nu J}}\xspace and \ensuremath{\mathrm{ZV}\rightarrow\mathrm{\ell\ell J}}\xspace}
\label{sec:Semilep}
In this Section we discuss the analysis of the ATLAS and CMS searches in
the \ensuremath{\mathrm{WV}\rightarrow\mathrm{\ell\nu J}}\xspace and \ensuremath{\mathrm{ZV}\rightarrow\mathrm{\ell\ell J}}\xspace channels. We follow the discussion pattern of the
fully hadronic section: we first present the results of our analysis for
the two searches separately, followed by their combination and a summary of our
findings.
\subsection{Emulation of ATLAS search}
\input{atlasvvjll_jlnu}
\subsection{Emulation of CMS search}
\input{cmsvvjll_jlnu}
\subsection{Combined LHC results of semi-leptonic searches}
\input{combovvjll}
\section{Combination of hadronic and semi-leptonic channels}
\label{sec:combo}
\input{combo}
\section{Conclusions}
\label{sec:conclusion}
\input{conclusion}
\acknowledgments
We would like to thank our colleagues at the ATLAS and CMS collaborations
for their exemplary work and publication of a large number of papers on
exotic searches. We thank Andreas Hinzmann for his precious help in the
implementation of the CMS search in the $X \to \ensuremath{\mathrm{VV}\rightarrow\mathrm{JJ}}\xspace$ channel. We also
thank Goran Senjanovi\'{c} and Andrea Wulzer for fruitful discussions and
valuable suggestions. A.O. thanks the CERN theory group for their hospitality.
This material is based upon work partially supported by the Cooperation Agreement
(SPRINT Program) between the S\~ao Paulo Research Foundation (FAPESP) and the
University of Edinburgh, under Grant No. 2014/50208-0. A.O. is
supported by the MIUR FIRB grant RBFR12H1MW. The work of F.D. and C.L. is
supported by the Science and Technology Facilities Council (STFC) in the UK.
\clearpage
\section{Introduction}
\label{section:introduction}
Much has been learned about the structure and formation of the Milky
Way Galaxy from studies of its globular cluster system. The key
historical development in this area was Shapley's use of globular
clusters to investigate the structure of the Galaxy and the location
of the Solar System within it (e.g., Shapley 1918). Many decades
later, \citet{sz78} made abundance estimates for a subset of Galactic
globular clusters and used them to infer a chaotic, hierarchical
scenario for the formation of the Galactic halo. \citet{zinn85} later
identified distinct ``halo'' and ``disk'' subpopulations of globular
clusters in the Milky Way, having different kinematics, spatial
distributions, and, by inference, different origins. To date,
$\sim$150 globular clusters have been identified in our Galaxy
\citep{harris96} and numerous studies have yielded estimates of their
distances, abundances, kinematics, and ages; together these provide
crucial information regarding the assembly history of the Galaxy.
Furthermore, the globular cluster system of the other massive spiral
galaxy in our neighborhood, Andromeda, has been surveyed fairly
completely in the past decade, so that we now have estimates of the
colors, metallicities, and kinematics of a substantial fraction of its
$\sim$450 globular clusters (e.g., Barmby et al.\ 2000; Perrett et
al.\ 2002).
The natural question that arises is whether what we have learned about
the globular cluster systems of the Milky Way and Andromeda is true
for other galaxies of similar mass, especially spiral galaxies. Are
the Milky Way and Andromeda representative of other spiral galaxies,
in terms of the total numbers, spatial distributions, colors, and
specific frequencies of their globular cluster populations? If the
Milky Way and Andromeda globular cluster systems are similar to (or
different from) those of other galaxies, what does that tell us about
how galaxies form and evolve?
Although studies of extragalactic globular cluster systems have
multiplied rapidly over the past decade (see the recent review by
Brodie \& Strader 2006), observational studies of spiral galaxy GC
systems are still comparatively rare. \citet{az98} put together a
comprehensive table of the existing data on galaxies' GC systems
(quantities such as total number, specific frequency, and mean
metallicity). The table included 82 galaxies, and only twelve were
spiral galaxies (Hubble type Sa $-$ Scd), including the Milky Way and
M31. Since that time, {\em Hubble Space Telescope (HST)} studies of
several more spiral galaxies have been published (e.g., Goudfrooij et
al.\ 2003; Chandar, Whitmore, \& Lee 2004). Although the high
resolution of $HST$ offers distinct advantages in terms of
distinguishing GCs from contaminants such as faint background
galaxies, its small field of view means that typically only a small
subset of the area around the galaxies is observed, which makes it
difficult to accurately determine the global properties (spatial and
color distributions, total numbers) of the GC systems. For example,
we showed in our wide-field imaging study of the GC system of the
spiral galaxy NGC~7814 that quantities like the total number and
specific frequency of GCs can be off by $\sim$20$-$75\% when one
extrapolates results from {\it HST} data, or small-format CCD data,
out to large radius (Rhode \& Zepf 2003 and references therein).
This paper presents results from wide-field CCD imaging of the
globular cluster systems of four Sb$-$Sc spiral galaxies: NGC~2683,
NGC~3556, NGC~4157, and NGC~7331. We also discuss observations of a
fifth galaxy, the Sc galaxy NGC~3044, which apparently is too distant
for us to have detected its GC system. Basic properties of these five
galaxies are given in Table~\ref{table:galaxy properties}. The data
presented here were acquired as part of a survey that uses
large-format and mosaic CCD imagers to study the global properties of
the GC systems of spiral, S0, and elliptical galaxies at distances of
$\sim$7$-$20~Mpc. A description of the survey, and results for the
first five galaxies analyzed, are given in Rhode \& Zepf (2001, 2003,
and 2004; hereafter RZ01, RZ03, RZ04). Because of difficulties in
quantifying the selection effects caused by intrinsic structure and
line-of-sight extinction in spiral galaxy disks, the only reliable way
to quantify the global properties of spiral galaxy GC populations is
to study galaxies that appear edge-on in the sky. Therefore the
spiral galaxy targets chosen for the survey have
$i \gtrsim 75^\circ$. We use techniques such as imaging in
multiple filters and analyzing archival $HST$ data to carefully reduce
contamination from non-GCs and estimate the amount of contamination
that remains in the samples. Our main goal is to accurately quantify
the spatial distribution of each galaxy's GC system over its full
radial range, in order to then calculate a reliable total number of
GCs for the system. We can then compare these total numbers to
predictions from galaxy formation models such as Ashman \& Zepf (1992;
hereafter AZ92), who suggested that elliptical galaxies and their GC
systems can be formed by the collision of spiral galaxies. Somewhat
more generally, we wish to compare the global properties of the GC
systems of the spiral galaxies in the survey to those of galaxies of
other morphological types (ellipticals and S0s). Making such a
comparison will help us determine the typical GC system properties for
galaxies of different types, and what that might tell us about galaxy
origins in general.
The paper is organized as follows. Section~\ref{section:reductions}
describes the observations and initial data reduction steps.
Section~\ref{section:analysis} explains our methods for detecting GCs
and analyzing the GC system properties. Section~\ref{section:results}
gives the results, and the final section is a summary of the study.
\section{Observations \& Initial Reductions}
\label{section:reductions}
Images of the targeted spiral galaxies were taken between 1999 October
and 2001 January with the 3.5-m WIYN telescope\footnote{The WIYN
Observatory is a joint facility of the University of Wisconsin,
Indiana University, Yale University, and the National Optical
Astronomy Observatories.} at Kitt Peak National Observatory. One of
two CCD detectors was used: either a single 2048~$\times$~2048-pixel CCD
(S2KB) with 0.196$\arcsec$ pixels and a 6.7$\arcmin$~$\times$~6.7$\arcmin$
field of view, or the Minimosaic Imager, which consists of two
2048~$\times$~4096-pixel CCD detectors with 0.14$\arcsec$ pixels and a total
field of view of 9.6$\arcmin$~$\times$~9.6$\arcmin$. For the four nearest
galaxies (with distances 7$-$15~Mpc), the galaxy was positioned
toward the edge of the detector, to maximize the radial coverage of
the GC system. NGC~3044 is substantially more distant, at
$>$20~Mpc, and so was positioned in the center of the detector.
A series of images was taken in three broadband filters ($BVR$).
Table~\ref{table:wiyn observations} specifies for each galaxy the
dates of the observations, the detector used, and the number of
exposures and integration times in each filter.
The images of NGC~3044, NGC~3556, NGC~4157, and NGC~7331 were taken
under photometric conditions, and calibrated with observations of
photometric standard stars \citep{land92} taken on the same nights as
the imaging data. The images of NGC~2683 were taken under
non-photometric sky conditions. In this case we took single, short
(400 $-$ 600~s) $BVR$ exposures of the galaxy on a subsequent,
photometric night during the same observing run (in January 2001). We
used these short exposures along with calibration frames taken on the
same night to post-calibrate the longer exposures of NGC~2683.
The photometric calibration data for the five galaxies discussed in
this paper were taken on five different nights, with one of two CCD
detectors (Minimosaic or S2KB). The color coefficients in the $V$
magnitude equation ranged from 0.02 to 0.08, with a typical formal
uncertainty of 0.01. The color coefficients in the $B-V$ color
equation ranged from 1.01 to 1.06, with a typical uncertainty of 0.01.
The color coefficients in the $V-R$ color equation ranged from 1.04 to
1.06, with an uncertainty of 0.01 to 0.03. The formal errors on the
zero points in the $V$, $B-V$, and $V-R$ calibration equations fell
between 0.003 and 0.01, indicating that it was in fact photometric on
the nights that the calibration data were taken.
Preliminary reductions (overscan and bias subtraction, flat-field
division) were done with standard reduction tasks in the
IRAF\footnote{IRAF is distributed by the National Optical Astronomy
Observatories, which are operated by the Association of Universities
for Research in Astronomy (AURA), Inc., under cooperative agreement
with the National Science Foundation.} packages CCDRED (for the S2KB
images) or MSCRED (for the Minimosaic images). The MSCRED tasks
MSCZERO, MSCCMATCH, and MSCIMAGE were used to convert the
multi-extension Minimosaic FITS images into single images. The images
taken of each galaxy target were aligned to each other, and then sky
subtraction was performed on each individual image. The individual
images taken with a given filter of a given galaxy target were then
scaled to a common flux level and combined, to create a deep,
cosmic-ray-free image of each galaxy in each of the three filters.
Finally, the sky background level was restored to each of the combined
images. The resolution (FWHM of the point spread function) of the
final combined images ranges from 0.6$\arcsec$ to 0.9$\arcsec$ for
NGC~2683; 0.7$\arcsec$ to 1.1$\arcsec$ for NGC~3044; 0.7$\arcsec$ to
1.0$\arcsec$ for NGC~3556; 0.9$\arcsec$ to 1.1$\arcsec$ for NGC~4157;
and 0.9$\arcsec$ to 1.0$\arcsec$ for NGC~7331.
\section{Detection \& Analysis of the Globular Cluster System}
\label{section:analysis}
\subsection{Source Detection and Matching}
\label{section:source detection}
Globular clusters at the distances of our galaxy targets will appear
unresolved in ground-based images. To detect GCs, we first removed
the diffuse galaxy light from the images. The final combined images
were smoothed with a ring median filter of diameter equal to 7 times
the mean FWHM of point sources in the image. The smoothed images were
then subtracted from the original versions to create a
galaxy-subtracted image. (We experimented with filters of varying
diameter for the smoothing step and found that the ring filter with
the specified diameter consistently removed the diffuse galaxy light
without removing any of the light from the point sources.) The
appropriate constant sky level was restored to the galaxy-subtracted
images and then the IRAF task DAOFIND was used to detect sources, with
a detection threshold set between 3.5 and 6 times the noise in the
background. We masked out the high-noise regions of the
galaxy-subtracted images where pointlike GC candidates could not
reliably be detected, such as the inner, dusty disks of the galaxies
and regions immediately surrounding saturated foreground stars or
large resolved background galaxies. We removed from the DAOFIND lists
any sources located within these masked regions, and then matched the
remaining sources to produce a final list of sources detected in all
three filters. The number of sources remaining after this step was
537 in NGC~2683, 522 in NGC~3044, 573 in NGC~3556, 387 in NGC~4157,
and 304 in NGC~7331.
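This detection procedure can be emulated with publicly available Python
tools; the sketch below uses a ring-shaped median-filter footprint and
photutils' DAOStarFinder as a stand-in for the IRAF DAOFIND task
(threshold and FWHM values are illustrative):
\begin{verbatim}
# Sketch of the detection procedure, with photutils' DAOStarFinder
# standing in for IRAF DAOFIND (threshold and FWHM values illustrative).
import numpy as np
from scipy.ndimage import median_filter
from photutils.detection import DAOStarFinder

def detect_point_sources(image, fwhm_pix, nsigma=4.0):
    r = 3.5 * fwhm_pix                    # ring diameter = 7 * FWHM
    n = int(r) + 2
    y, x = np.mgrid[-n:n + 1, -n:n + 1]
    ring = (np.hypot(x, y) > r - 1.0) & (np.hypot(x, y) <= r)
    smooth = median_filter(image, footprint=ring)
    residual = image - smooth             # galaxy-subtracted frame
    finder = DAOStarFinder(fwhm=fwhm_pix,
                           threshold=nsigma * np.std(residual))
    return finder(residual)               # table of detected sources
\end{verbatim}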
\subsection{Eliminating Extended Sources}
To remove extended objects (e.g., contaminating background galaxies)
from the source lists, we began by measuring the FWHM of each source
in the matched lists and plotting it versus its instrumental
magnitude. An example of such a plot is Figure~\ref{fig:fwhm mag},
which shows FWHM vs. magnitude for the 387 detected sources in the $V$
and $R$ images of NGC~4157. At bright magnitudes, point sources have
FWHM values that form a tight sequence around some mean value.
Extended objects have larger FWHM values, scattered over a larger
range. At fainter instrumental magnitudes, point sources still
scatter around the same mean value, but their FWHM values spread out;
consequently, the border between the FWHM values of point sources and
extended objects becomes less clear at faint magnitudes.
We created FWHM vs. instrumental magnitude plots for the final
combined images of each galaxy, and then eliminated extended objects
by selecting as GC candidates only those sources with FWHM values
close to the mean FWHM value of point sources for each image. We
visually examined the plots to determine the boundary between point
sources and extended objects, and then wrote a computer code to
implement the extended source cut. The range of acceptable FWHM
values gradually increases with increasing magnitude of the sources.
We used the FWHM information in the different filters independently
(i.e., if a source had a large FWHM value in {\it one} of the combined
images, it was removed from the GC candidate lists). For NGC~2683 and
NGC~7331, we used measurements in all three broadband filters to
determine whether a source was extended. For the other three galaxies
(NGC~3044, NGC~3556, and NGC~4157), we used measurements from the $V$
and $R$ images only, since they had better resolution than the
$B$-band images and the FWHM versus magnitude plots showed
significantly less scatter in those filters compared to the plot made
from the $B$ image.
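In code, the cut can be expressed as a magnitude-dependent acceptance
window around the stellar FWHM locus, as in this sketch (the boundary
parameters are illustrative; in practice the boundary was set by visual
inspection):
\begin{verbatim}
# Sketch of the extended-source cut: the acceptance window around the
# stellar FWHM locus widens toward faint magnitudes.
import numpy as np

def is_point_source(fwhm, mag, mean_fwhm, mag_bright,
                    base=0.3, slope=0.15):
    halfwidth = base + slope * np.maximum(0.0, mag - mag_bright)
    return np.abs(fwhm - mean_fwhm) <= halfwidth

fwhm_v = np.array([0.92, 1.60]); mag_v = np.array([-9.5, -7.0])
fwhm_r = np.array([0.88, 1.70]); mag_r = np.array([-9.3, -6.8])
# a source must look pointlike in every filter considered (here V and R)
keep = (is_point_source(fwhm_v, mag_v, 0.9, -9.0) &
        is_point_source(fwhm_r, mag_r, 0.9, -9.0))
\end{verbatim}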
Figure~\ref{fig:fwhm mag} shows typical results from this source
selection step; objects that are deemed point sources in the $V$ and
$R$ images of NGC~4157 are plotted with filled circles; objects deemed
extended are plotted with open squares. The number of sources
remaining after this step was 271 in NGC~2683, 179 in NGC~4157, 275 in
NGC~3556, 262 in NGC~3044, and 245 in NGC~7331.
\subsection{Photometry}
Before doing photometry of the objects in the source lists, we
computed individual aperture corrections for each image of each galaxy
by measuring the light from 10$-$20 bright stars
within a series of apertures from 1 to 6 times the average FWHM of the
image. The aperture corrections are listed in Table~\ref{table:aper
corr} and represent the mean difference between the total magnitude of
the bright stars and the magnitude within the aperture with radius one
FWHM. Photometry with an aperture of radius equal to the average FWHM
of the images was then performed for each of the sources that remained
after the extended source cut. Calibrated $BVR$ magnitudes were
derived for each source by taking the instrumental magnitude and
applying the appropriate aperture correction and photometric
calibration coefficients. Corrections for Galactic extinction,
derived from the reddening maps of \citet{schlegel98}, were also
applied to produce final $BVR$ magnitudes. Galactic extinction
corrections are listed in Table~\ref{table:ext corr}.
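The aperture-correction and calibration steps amount to the following
sketch, in which the array contents are placeholders for the measured
bright-star fluxes:
\begin{verbatim}
# Sketch: aperture correction from bright stars, and assembly of the
# final calibrated magnitude (zero point zp, extinction k_ext; color
# terms omitted; all numerical inputs are placeholders).
import numpy as np

def aperture_correction(flux_1fwhm, flux_total):
    # mean of m_total - m_1fwhm over the bright stars; a negative number
    return np.mean(-2.5 * np.log10(flux_total / flux_1fwhm))

def calibrated_mag(flux_1fwhm, apcor, zp, k_ext):
    return -2.5 * np.log10(flux_1fwhm) + apcor + zp - k_ext

stars_small = np.array([9.0e4, 4.5e4, 2.2e4])   # counts, 1-FWHM aperture
stars_big   = np.array([1.2e5, 6.1e4, 3.0e4])   # counts, 6-FWHM aperture
apcor = aperture_correction(stars_small, stars_big)
\end{verbatim}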
\subsection{Color Selection}
\label{section:color selection}
The last step in selecting GC candidates is to choose from the list of
point sources the objects with $V$ magnitudes and $BVR$ colors
consistent with their being GCs at the distance of the host galaxy.
This was executed following the same basic steps for all the
galaxies. First, objects with $M_V$ $<$ $-$11 (assuming the distance
moduli given in Table~\ref{table:galaxy properties}) were removed from
the lists. Then, if an object had a $B-V$ color and error that put it
in the range 0.56 $<$ $B-V$ $<$ 0.99, it was selected. (This $B-V$
range corresponds to [Fe/H] of $-$2.5 to 0.0 for Galactic GCs; Harris
1996.) Finally, if the objects had $V-R$ colors and errors that put
them within a specified distance from the relation between $B-V$ and
$V-R$ for Milky Way GCs, they were selected as GC candidates.
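Schematically, this selection reduces to a set of boolean cuts; a
sketch, with an assumed linear approximation to the Milky Way $V-R$
versus $B-V$ relation, is:
\begin{verbatim}
# Sketch of the BVR color selection; the Milky Way V-R versus B-V
# relation is approximated by an assumed straight line with scatter sig.
import numpy as np

def select_gc_candidates(V, BV, VR, dist_mod,
                         a=0.50, b=0.16, sig=0.05, nsig=3.0):
    not_too_bright = (V - dist_mod) > -11.0     # drop M_V < -11 objects
    bv_cut = (BV > 0.56) & (BV < 0.99)          # [Fe/H] ~ -2.5 to 0.0
    on_locus = np.abs(VR - (a * BV + b)) < nsig * sig
    return not_too_bright & bv_cut & on_locus

V  = np.array([21.5, 19.0]); BV = np.array([0.75, 0.60])
VR = np.array([0.52, 0.90])
mask = select_gc_candidates(V, BV, VR, dist_mod=30.0)  # -> [True, False]
\end{verbatim}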
In practice, small refinements to this basic set of selection criteria
were applied to produce the final list of GC candidates. When the
colors of the objects that pass the extended source cut are plotted in
the $BVR$ color-color plane, a marked overdensity of sources in the
region of the plane occupied by Galactic GCs is usually obvious. We
therefore adjusted the selection criteria slightly to ensure that all
the objects within these ``overdensities'' were selected as GC
candidates. Furthermore, because there are relatively few GC
candidates around the spiral galaxies (typically $<$100), we examined
each of the objects with magnitudes and colors anywhere close to those
of Galactic GCs, to confirm that we were not missing real GCs or
(conversely) including likely contaminants in the GC candidate
samples.
For NGC~2683, NGC~3044, and NGC~4157, we accepted all objects that
were within 3-$\sigma$ above or below the $V-R$ vs. $B-V$ line for
Milky Way GCs (where $\sigma$ is the scatter in the $V-R$ vs. $B-V$
relation). In addition, in NGC~4157, we accepted two sources that were
physically located very near the disk of the galaxy and had colors
that put them just outside the $BVR$ selection box, in the direction
of the reddening vector. For NGC~7331, we used a 2-$\sigma$ criterion
for the $BVR$ color selection in order to exclude several likely
contaminants, and we accepted three objects near the galaxy disk with
$BVR$ colors indicating they were probably reddened GCs. For
NGC~3556, the overdensity of point sources in the globular cluster
region of the $BVR$ color-color plane was weighted toward the blue
side of the $V-R$ vs. $B-V$ relation for Milky Way GCs. Therefore for
this galaxy, we selected objects that were 1.5-$\sigma$ above (redder
than) the $V-R$ vs. $B-V$ relation, and 3-$\sigma$ below (bluer than)
the relation. After examining the objects selected in the $BVR$
color-color plane, we decided to also apply a $V$ magnitude cut for
two of the galaxies, in order to eliminate what appeared to be a
significant number of faint background objects that had not been
removed in the extended source cut. We excluded objects with $V$
$>$ 23.0 in NGC~2683 and $V$ $>$ 23.5 in NGC~3556, which is,
respectively, approximately 0.9 mag and 0.4 mag past the peak of the
GC luminosity function in those galaxies. After all of these
magnitude and color selection criteria were applied, the final numbers
of GC candidates found in NGC~2683, NGC~3556, NGC~4157, and NGC~7331
were 41, 50, 37, and 37, respectively.
NGC~3044 was a special case. When we plotted the 262 point sources
detected around this galaxy in the $BVR$ color-color plane, it was
immediately apparent that there was no overdensity of objects in the
part of the plane occupied by GCs. We nevertheless applied a typical
set of color selection criteria --- i.e., $B-V$ in the usual range and
$V-R$ within 3$\sigma$ above or below the expected value for Galactic
GCs --- and created a list of 35 possible GC candidates, with
$V$~$=$~20.9$-$24.4. These objects were spread uniformly over the
field of view of the WIYN images, rather than being strongly clustered
around the galaxy, as is typical for the other spiral galaxy GC
systems we have surveyed. Only 2 of the 35 objects with GC-like
magnitudes and colors were located within a projected radius of 2.2
arc~minutes (15 kpc, assuming the 23~Mpc distance) from the galaxy
center. (In the Milky Way, more than 80\% of the catalogued GCs have
projected radial distances of 15~kpc or less; Harris 1996.) This
suggests that the GC system of NGC~3044 was not clearly detected with
our WIYN observations. Assuming that this galaxy's distance modulus
is 31.83 and that its GCLF peaks at $M_V$ $=$ $-$7.33 like the Milky
Way GCLF (Ashman \& Zepf 1998), the GCs in the luminous half of the
GCLF should have $V$ magnitudes in the range 20.8$-$24.5. The survey
images are typically 50\% complete at $B$, $V$, and $R$ magnitudes of
24$-$25 (see Section~\ref{section:completeness}), so many such
luminous GCs should be detectable in the images. It may be that this
galaxy has an $S_N$ significantly lower than those of the Milky Way and
the other spiral galaxies in our survey (and thus has very few
luminous GCs), that the galaxy actually lies somewhat further away
than 23~Mpc, and/or that the magnitude limits of the images and our
selection techniques prevent us from detecting the GCs that are there.
In any case, because no convincing GC candidates were detected in
NGC~3044, no further analysis steps were executed for this galaxy.
Figures~\ref{fig:bvr n2683} through \ref{fig:bvr n7331} illustrate the
results of the color selection; objects appearing as point sources in
the WIYN images are shown as open squares, and filled circles are the
final selected GC candidates. We include the $BVR$ color-color plots
for NGC~3044 for completeness. Note that because of the $V$ magnitude
criteria applied, some objects within the color selection boxes are
not selected as GC candidates. For illustrative purposes, the figures
show the expected locations of galaxies of different morphological
types, at different redshifts. The ``galaxy tracks'' were produced by
taking template galaxy spectra for early- to late-type galaxies,
shifting the spectra to simulate moving the galaxies to redshifts
between 0 and 0.7, and then calculating their $BVR$ colors (see RZ01
for details). The galaxy tracks simply show that late-type, low- to
moderate-redshift galaxies have $BVR$ colors similar to GCs, so not
every GC candidate in the samples at this stage is a real GC; some may
be background galaxies. (Section~\ref{section:contamination} details
our efforts to quantify the amount of contamination in the GC
candidate lists.)
Figure~\ref{fig:four cmds} shows color-magnitude diagrams for the GC
candidates in the four galaxies in which the GC system was detected.
The $V$ magnitudes of the GC candidates are plotted versus their $B-R$
colors. The final numbers of GC candidates found in the galaxies are
marked on the plots.
\subsection{Completeness Testing}
\label{section:completeness}
A series of completeness tests was done to determine the point-source
detection limits of the WIYN images of each galaxy. We began by
adding artificial point sources with magnitudes within 0.2~mag of a
particular mean value to each of the $B$, $V$, and $R$ images. The
number of artificial sources depended on the size of the images: 200
sources were added to the Minimosaic images and 50 were added to the
S2KB images (which cover one-fourth the area of the Minimosaic
frames). Next, the same detection steps performed on the original
images were performed on the images containing the artificial stars,
and the fraction of artificial stars recovered in the detection
process was recorded. The process was repeated 25$-$30 times for each
image, incrementing the mean magnitude of the artificial stars by
0.2~mag each time, so that the completeness was calculated over a
range of 5$-$6 magnitudes for each filter, for each galaxy.
Table~\ref{table:completeness} lists the 50\% completeness limits for
the three filters for each galaxy.
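The logic of these tests is summarized in the following sketch, where
inject\_stars and detect are hypothetical stand-ins for the
image-addition and detection steps described above:
\begin{verbatim}
# Sketch of the artificial-star completeness test; inject_stars and
# detect are stand-ins for the image-addition and DAOFIND-based
# detection steps described in the text.
import numpy as np

def completeness_curve(image, mean_mags, n_stars, inject_stars, detect):
    fractions = []
    for m in mean_mags:                   # stepped in 0.2-mag increments
        mags = np.random.uniform(m - 0.2, m + 0.2, n_stars)
        img, true_xy = inject_stars(image, mags)
        found_xy = detect(img)            # same pipeline as the real data
        n_rec = sum(any(np.hypot(x - fx, y - fy) < 2.0
                        for fx, fy in found_xy)
                    for x, y in true_xy)
        fractions.append(n_rec / n_stars)
    return np.array(fractions)
\end{verbatim}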
\subsection{Quantifying and Correcting for Contamination}
\label{section:contamination}
Some fraction of the objects chosen as GC candidates are actually
contaminating objects --- that is, foreground stars or background
galaxies that have $BVR$ magnitudes and colors like globular clusters.
We used a combination of techniques to estimate the amount of
contamination from non-GCs that existed in our samples, so that we
could correct for this contamination in subsequent analysis steps.
\subsubsection{Contamination Estimate Based on the Asymptotic Behavior of the Radial
Profile}
\label{section:asymptotic contam}
Our first step for this set of galaxies was to use the observed radial
distribution of GC candidates to help estimate the contamination
level. First, an initial radial profile of the GC system of each
galaxy was constructed by assigning the GC candidates to a series of
annuli, each 1$\ifmmode {' }\else $' $\fi$ in width. (More details about the construction
of the radial distributions are given in
Section~\ref{section:profiles}.) The effective area of each annulus
(the region where GCs could be detected) was determined and used to
calculate a surface density of GC candidates for the annulus. The
resultant surface density profile for each galaxy's GC system followed
the general shape expected for GC systems, but rather than going to
zero in the outer regions, the profiles decreased until reaching a
constant positive value in the last few annuli. We assumed this
constant surface density was due to contaminating objects (stars and
galaxies). We calculated the weighted mean surface density of objects
in these outer annuli and took this as an estimate of the
contamination level of the GC candidate lists.
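In practice this estimate is a weighted mean over the flat outer
annuli, as in the following sketch (the counts and effective areas are
illustrative, not the measured values):
\begin{verbatim}
# Sketch: contamination estimate from the flat outer annuli of the
# radial profile (counts and effective areas are illustrative).
import numpy as np

n_cand = np.array([7, 6, 5, 4, 5, 4])            # candidates per annulus
area   = np.array([26.0, 25.0, 23.0, 20.0, 17.0, 14.0])  # arcmin^2
dens   = n_cand / area
err    = np.sqrt(n_cand) / area                  # Poisson uncertainties
w      = 1.0 / err**2
mean_dens = np.sum(w * dens) / np.sum(w)         # weighted mean density
mean_err  = 1.0 / np.sqrt(np.sum(w))
\end{verbatim}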
For NGC~2683, the initial radial profile created at this step
flattened to a constant value in the outer six annuli (of a total of
nine annuli). The mean surface density of objects in these outer six
annuli is 0.226$\pm$0.026 arcmin$^{-2}$. For NGC~3556, a constant
surface density of GC candidates was present in the outer four of
eight annuli. Here the level was 0.093$\pm$0.003~arcmin$^{-2}$. For
NGC~4157, the situation was more complicated. The initial radial
distribution of GC candidates showed typical behavior in the inner few
annuli (beginning at some maximum surface density value and then
monotonically decreasing with increasing radius) but then showed a
``bump'' of increased surface density in two adjacent bins in the
outer profile, at $\sim$30~kpc from the galaxy center. This feature
in the radial profile is caused by a group of GC candidates in the
halo of the galaxy and is discussed in detail in
Section~\ref{section:profiles}. The objects may be {\it bona fide}
GCs in the galaxy's outer halo, or a distant, background cluster of
galaxies masquerading as GCs, or just a chance superposition of
several otherwise-unrelated GC candidates. In any case, to estimate
the surface density of contaminants, we removed the seven
closely-grouped GC candidates responsible for the inflated surface
density in those two outer bins, and calculated the mean surface
density of GC candidates in the outer five (of nine total) annuli.
The estimated surface density of contaminants from this analysis is
0.181$\pm$0.059~arcmin$^{-2}$. Finally, for NGC~7331, the images
covered less area around the galaxy, so the radial coverage of the GC
system was reduced. The initial radial profile of GC candidates
flattened in the outer two (of five) annuli and the average surface
density in these annuli is 0.690$\pm$0.220~arcmin$^{-2}$.
\subsubsection{Estimating Stellar Contamination from a Galactic Star
Counts Model}
\label{section:star contam}
We used the Galactic structure model code of \citet{mendez96} and
\citet{mendez00} to yield an independent estimate of the level of
contamination from Galactic stars in the GC candidate lists. Given
specific values for variables such as the Galactocentric distance of
the Sun and the proportions of stars in the Galaxy halo, disk, and
thick disk, the code outputs the predicted number of Galactic stars
with $V$ magnitudes and $B-V$ colors in a user-specified range, in a
given area of the sky. We ran the model for each of our galaxy
targets, adjusting the range of $V$ magnitudes of the Galactic stars
to match the $V$ range of the GC candidates. The model-predicted
surface densities of stars with $V$ magnitudes and $B-V$ colors like the
GC candidates in the directions of NGC~2683, NGC~3556, NGC~4157, and
NGC~7331 were 0.067~arcmin$^{-2}$, 0.040~arcmin$^{-2}$,
0.051~arcmin$^{-2}$, and 0.155~arcmin$^{-2}$, respectively. (The
stellar surface density in the direction of NGC~7331 is large compared
to the other galaxies because this galaxy is at $-$20 degrees Galactic
latitude, significantly closer to the Galactic plane than the other
galaxies, which have Galactic latitudes between $+$40 and $+$65
degrees.) The surface density values were relatively insensitive to
the choice of model parameters for the Galaxy. The surface density
values range from $\sim$20$-$40\% of the total surface density of
contaminants estimated from the asymptotic behavior of the radial
profile.
\subsubsection{Estimating the Contamination from Background Galaxies
with HST Data}
\label{section:hst contam}
Archival {\it HST} imaging data\footnote{Based on observations made
with the NASA/ESA {\it Hubble Space Telescope}, obtained from the data
archive at the Space Telescope Science Institute. STScI is operated
by AURA, under NASA contract NAS 5-26555.} were available for some of
the target spiral galaxies. Because $HST$ can resolve many extended
objects that appear as point sources in ground-based imaging,
determining whether the WIYN GC candidates are actually galaxies gives
us another estimate of the contamination in the WIYN data.
We downloaded all of the available archival $HST$ images taken in
broadband filters of the targeted galaxies; all of these images were
taken with the Wide Field and Planetary Camera~2 (WFPC2). We found
images that covered portions of the WIYN pointings of NGC~2683,
NGC~3556, and NGC~7331, but no data for NGC~4157. The data sets we
analyzed are summarized in Table~\ref{table:hst data}. The table
lists: proposal ID; target name (either the name of the galaxy or
``Any'', which indicates that the images were taken by WFPC2 while
another $HST$ instrument was being used for the primary science);
total exposure time; distance of the observation from the galaxy
center; and filter. ``On-the-fly'' calibration was applied to the
images before they were retrieved from the archive. The STSDAS task
GCOMBINE was used to combine individual exposures of the same
pointing. The WIYN GC candidates were then located in the WFPC2
images. We used the method of \citet{kundu99}, who measured the flux
from GC candidates in apertures of radius 0.5 pixels and 3 pixels and
then calculated the ratio of counts in the large and small apertures.
Objects that are extended (and therefore galaxies) have count ratios
much larger than those of point sources, since relatively more of
their light is contained within the 3-pixel aperture. We confirmed
the results from the count-ratio method with visual inspection.
For NGC~2683, we analyzed a pointing 0.7~arcmin from the galaxy
center, taken in the 606W filter, that turned out to contain none of
the WIYN GC candidates. We also analyzed a WFPC2 pointing in the 814W
filter, centered 1.9~arcmin from the center of the galaxy. Thirteen
of the WIYN GC candidates appear in the WFPC2 field and one of these
is an extended object rather than a real GC. The area covered by the
WFPC2 image is 5.527 arcmin$^{2}$, so one estimate of the number
density of background galaxies (from these admittedly small-number
statistics) is 0.181~arcmin$^{-2}$.
For NGC~3556, there was a WFPC2 pointing in the 606W filter, located
0.4~arcmin from the galaxy center. Five of the WIYN GC candidates
appear in the combined WFPC2 image; none is extended.
NGC~7331 had many WFPC2 images available in the HST archive, largely
because this galaxy was a target for the HST Cepheid Key Project
\citep{hughes98}. We made combined images from multiple observations
of two different pointings located 3.5$\arcmin$ and 5$\arcmin$ from
the galaxy (see Table~\ref{table:hst data}). Only one of these
pointings contained WIYN GC candidates, however; the other pointing
happened to coincide with a small area of the WIYN frames that
contained no GC candidates. Two of the WIYN GC candidates are located
within the WFPC2 pointing at 3.5~arcmin from the galaxy center;
neither object is extended.
\subsubsection{Final Contamination Correction}
\label{section:final contam}
For the spiral galaxies here,
we will adopt the contamination levels
estimated from the asymptotic behavior of the radial profile, and
take the Galactic star counts models and HST data analysis as checks
on these estimates. NGC~2683 is the one galaxy for which we have
independent estimates of the contamination from both stars (from the
star counts model) and galaxies (from $HST$ data). Adding these two
numbers together (0.181~arcmin$^{-2}$ $+$ 0.067~arcmin$^{-2}$) gives
the same number within the errors as the total contamination estimate
from the radial profile (0.226$\pm$0.026~arcmin$^{-2}$). Also, as
noted in Section~\ref{section:star contam}, the stellar contamination
from the Galactic star counts model was always lower than the total
contamination estimated from the radial profile, which makes sense if
one assumes that galaxies also contribute to the contamination level
in the GC candidate samples.
We took the number density of contaminating objects for each galaxy
given in Section~\ref{section:asymptotic contam} and used it to
calculate the expected fraction of contaminants at each annulus in the
radial profile, for use in subsequent steps. First we multiplied the
number density of contaminants by the effective area of the annulus.
Dividing this number by the total number of GC candidates in the
annulus then yielded the fraction of contaminating objects for that
annulus.
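As an illustration of this bookkeeping, the per-annulus correction can
be written in a few lines of Python (the array values below are
hypothetical placeholders, not our measurements):
\begin{verbatim}
import numpy as np

# Hypothetical inputs: contaminant surface density (arcmin^-2),
# effective annulus areas (arcmin^2), and raw candidate counts.
sigma_contam = 0.226
area_eff = np.array([2.1, 4.8, 7.2, 9.5])
n_cand = np.array([25, 18, 9, 4])

n_contam = sigma_contam * area_eff  # expected contaminants per annulus
f_contam = n_contam / n_cand        # contamination fraction per annulus
\end{verbatim}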
\subsection{Determining the GCLF Coverage}
The observed GC luminosity function (GCLF) was constructed for each of
the four galaxies by assigning the $V$ magnitudes of the GC candidates
to bins of width 0.3 mag. The radially-dependent correction described
in Section~\ref{section:final contam} was used to correct the LF data
for contamination. For example, if a GC candidate was located
2~arcmin from the galaxy center and the contamination fraction was
expected to be 20\% at that radius, then 0.8 was added to the total
number of objects in the appropriate $V$ magnitude bin. The LF was
also corrected for completeness, by computing the total completeness
of each $V$ bin (calculated by convolving the completenesses in all
three filters, as detailed in RZ01) and dividing the number of GCs in
that bin by the completeness fraction.
We assumed the intrinsic GCLF of the spiral galaxies was a Gaussian
function with a peak absolute magnitude like that of the Milky Way
GCLF, $M_V$ $=$ $-$7.3 \citep{az98}. If one applies the distance
moduli in Table~\ref{table:galaxy properties}, this $M_V$ translates
to peak apparent magnitudes of $V$ $=$ 22.1, 23.1, 23.5, and 23.3, for
NGC~2683, NGC~3556, NGC~4157, and NGC~7331, respectively. We fitted
Gaussian functions with the appropriate peak apparent magnitude and
dispersions of 1.2, 1.3, and 1.4 mag to the corrected LF data. Bins
with less than 45\% completeness were excluded from the fitting
process. In a few instances we also excluded one or more bins with
very low numbers (e.g., bins containing fewer than one GC candidate)
that caused the fitted normalization to be too low to match the
surrounding bins. The fraction of the theoretical GCLF covered by the
observed LF was calculated for each galaxy. The mean fractional
coverage (averaged for the three different dispersions) and error for
NGC~2683, NGC~3556, NGC~4157, and NGC~7331 were 0.64$\pm$0.02,
0.53$\pm$0.02, 0.513$\pm$0.001, and 0.51$\pm$0.02, respectively.
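In essence, the fractional coverage is the cumulative fraction of the
assumed Gaussian GCLF brighter than the effective faint-magnitude limit
of the usable LF bins. A minimal Python sketch (the limiting magnitude
below is a hypothetical value, not the measured one) is:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

V_peak = 22.1             # e.g., NGC 2683
V_lim = 23.0              # hypothetical faint limit of the usable bins
sigmas = [1.2, 1.3, 1.4]  # assumed GCLF dispersions (mag)

coverage = [norm.cdf(V_lim, loc=V_peak, scale=s) for s in sigmas]
print(np.mean(coverage), np.std(coverage))
\end{verbatim}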
Finally, we experimented with changing the bin size of the LF data and
quantified how this affected the final value of the fractional GCLF
coverage. We found that changing the bin size produced a change in
the mean fractional coverage of 4$-$7\%. This uncertainty is included
in the final errors (i.e., the error on specific frequency) on
quantities discussed in Section~\ref{section:total numbers}.
\section{Results}
\label{section:results}
\subsection{Radial Distributions of the GC Systems}
\label{section:profiles}
We constructed radial profiles of the galaxies' GC systems by binning
the GC candidates into a series of 1$\ifmmode {' }\else $' $\fi$-wide annuli according to
their projected radial distances from the galaxy centers. The inner
parts of the spiral galaxy disks had been masked out (see
Section~\ref{section:source detection}) because GCs could not be
reliably detected there; thus the positions of the radial bins were
adjusted so that the inner radius of the first annulus started just
outside this masked central region. An effective area --- the area in
which GCs could be detected, excluding the masked portions of the
galaxy, masked regions around saturated stars, and parts of the
annulus that extended off the image --- was computed for each annulus.
We corrected the number of GCs in each annulus for contamination (by
applying the radially dependent contamination correction described in
Section~\ref{section:final contam}) and for GCLF coverage. The final
radial distribution of the GC system was then produced by dividing the
corrected number of GCs in each annulus by the effective area of the
annulus. The errors on the GC surface density values for each annulus
include uncertainties on the number of GCs and contaminating objects.
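Schematically, each point of the radial profile reduces to the
following operations (Python sketch; the per-annulus values are
hypothetical, and the actual corrections are applied per object and per
magnitude bin as described above):
\begin{verbatim}
import numpy as np

n_cand = np.array([25., 18., 9., 4.])      # raw GC candidates
n_contam = np.array([0.5, 1.1, 1.6, 2.1])  # expected contaminants
area_eff = np.array([2.1, 4.8, 7.2, 9.5])  # effective areas (arcmin^2)
f_gclf = 0.64                              # fractional GCLF coverage

n_gc = (n_cand - n_contam) / f_gclf        # corrected numbers of GCs
sigma_gc = n_gc / area_eff                 # surface density (arcmin^-2)
sigma_err = np.sqrt(n_cand + n_contam) / (f_gclf * area_eff)
\end{verbatim}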
Tables~\ref{table:profile n2683} through \ref{table:profile n7331}
give the final radial distributions of the GC systems for the four
galaxies; the radial profiles are plotted in Figures~\ref{fig:profile
n2683} through \ref{fig:profile n7331}. The projected radii shown in
the tables and figures are the mean projected radii of the unmasked
pixels in each annulus. Note that because a contamination correction
has been applied to the surface density of GCs in each radial bin,
some of the outer bins have negative surface densities.
We fitted power laws of the form log~$\sigma_{\rm GC}$ $=$ $a_0$ $+$
$a_1$~log~$r$ and deVaucouleurs laws of the form log~$\sigma_{\rm GC}$
$=$ $a_0$ $+$ $a_1$~$r^{1/4}$ to the radial distributions. In all cases
the $\chi^2$ values were nearly the same for both the power law and
deVaucouleurs law fits, so we list the coefficients for both functions
in Table~\ref{table:profile coefficients}. The top panels of
Figures~\ref{fig:profile n2683} through \ref{fig:profile n7331} show
the surface density of GCs versus projected radius and the bottom
panels show the log of the surface density versus $r^{1/4}$, with the
best-fit deVaucouleurs law plotted as a dashed line.
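Both functional forms are linear in a transformed radial coordinate, so
they can be fitted with standard least-squares routines; an unweighted
Python sketch with hypothetical data (the fits above are $\chi^2$ fits
that include the measurement errors) is:
\begin{verbatim}
import numpy as np

r = np.array([1.4, 2.4, 3.4, 4.4])      # mean projected radii (arcmin)
sig = np.array([3.1, 1.2, 0.45, 0.15])  # GC surface density (arcmin^-2)

log_sig = np.log10(sig)
a1_pl, a0_pl = np.polyfit(np.log10(r), log_sig, 1)  # power law
a1_dv, a0_dv = np.polyfit(r**0.25, log_sig, 1)      # deVaucouleurs law
\end{verbatim}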
The GC system radial profiles of each of the galaxies have slightly
different appearances, but in all cases the GC surface density
decreases to zero within the errors before the last data point. This
suggests that we have observed the full radial extent of the galaxies'
GC systems, which is crucial for a reliable determination of the total
number of GCs in the system. For NGC~2683, the surface density is
consistent with zero by a radius of 4$\ifmmode {' }\else $' $\fi$, or $\sim$9~kpc. For
NGC~3556, which is a much more luminous spiral galaxy, the GC surface
density goes to zero at 5.5$\ifmmode {' }\else $' $\fi$, or $\sim$20~kpc.
NGC~7331 has the lowest inclination of any spiral galaxy in our
survey. (In addition to the current set of galaxies, the survey also
includes NGC~7814, published in RZ03, and NGC~891 and NGC~4013, which
are not yet published.) Because the galaxy has $i$ $\sim$75 degrees
(rather than the $i$ $\sim$ 80$-$90 degrees of the other target
galaxies), we had to mask out a relatively large portion of its inner
radial region, because we could not reliably detect GCs against the
background of the galaxy's dusty spiral disk. Consequently the
innermost point in the galaxy's radial profile (Fig.~\ref{fig:profile
n7331}) is centered at $>$7 kpc from the galaxy center. In general,
the spiral galaxy GC systems we have studied are fairly concentrated
toward the galaxy center (as is true for the Milky Way). This seems
to be the case for NGC~7331 as well: outside of the masked galaxy disk
region, we barely detect NGC~7331's GC system before the data points
in the radial profile flatten to a constant surface density (which we
take to be the contamination level in the data; see
Section~\ref{section:contamination}). In the final version of the
radial profile shown in Figure~\ref{fig:profile n7331}, the surface
densities in the first three radial bins are positive (although the
errors on the surface density at 3$\ifmmode {' }\else $' $\fi$ make its lower limit barely
consistent with zero). Then the GC surface density goes to zero
within the errors in the fourth and fifth radial bins. The fourth
radial bin is centered at 4.8$\ifmmode {' }\else $' $\fi$ (18~kpc), so we take this as the
approximate radial extent of this galaxy's GC system.
NGC~4157 was another special case. The radial profile shows the
expected behavior in the inner regions of the GC system: the surface
density of GCs decreases monotonically with increasing radius until it
is consistent with zero in the radial bins centered at 5$\ifmmode {' }\else $' $\fi$ and
6$\ifmmode {' }\else $' $\fi$. However the surface density then increases to positive
values in the 7$\ifmmode {' }\else $' $\fi$ and 8$\ifmmode {' }\else $' $\fi$ radial bins, before returning to
zero in the last radial bin at 9$\ifmmode {' }\else $' $\fi$. This ``bump'' in the outer
profile is caused by the presence of seven GC candidates with
projected distances of $\sim$30~kpc from the galaxy center. The seven
candidates appear in two groups: one close group of three objects at
$r$ $=$ 31.0$-$31.7~kpc, and a somewhat more spread-out group of four
objects at $r$ $=$ 27$-$32~kpc. These
grouped objects may be real GCs in NGC~4157's halo, or a distant
background group or cluster of galaxies. Alternatively, the objects
may just appear near each other by chance.
We thought it possible that some or all of the seven GC candidates in
question might be associated with a dwarf galaxy that is being
accreted into NGC~4157's halo. For this reason we obtained deep,
broadband images of NGC~4157 with the WIYN Minimosaic in March 2005.
The combined images have a total integration time of 7.5 hours and
reach a surface brightness level of $V$ $>$27, but we found no
evidence for a faint dwarf galaxy anywhere in the vicinity of those
seven GC candidates.
We fitted deVaucouleurs and power laws to both the original version of
the radial profile of NGC~4157's GC system and to a profile with the
seven GC candidates at $\sim$30~kpc removed. Both fits are shown in
Figure~\ref{fig:profile n4157} and listed in
Table~\ref{table:profile coefficients}. With the seven ``extra'' GC
candidates removed, the GC
surface density is consistent with zero within the errors by 5$\ifmmode {' }\else $' $\fi$,
or 20~kpc.
\subsection{Radial Extent of the GC Systems of the Survey Galaxies}
\label{section:extents}
With a total of nine galaxies from the wide-field survey now analyzed
(including these four spiral galaxies), we can look for trends in the
GC system properties of the overall sample. One quantity that we
derive from the radial profiles of the GC systems is an estimate of
the radial extent of the systems. For the survey, we take the radial
extent to be the point at which the surface density of GCs in the
final radial profile becomes consistent with zero (within the errors
on the surface density) and stays at zero out to the radial limit of
the data. Figure~\ref{fig:extent} shows the radial extent in
kiloparsecs of the GC systems of the nine survey galaxies analyzed to
date, plotted against the log of the host galaxy stellar mass. To
compute the galaxy masses, we combined $M^T_V$ for each galaxy with
the mass-to-light ratios given in \citet{za93}: $M/L_V$ $=$ 10 for
elliptical galaxies (NGC~3379, NGC~4406, and NGC~4472), $M/L_V$ $=$ 7.6
for S0 galaxies (NGC~4594), $M/L_V$ $=$ 6.1 for Sab$-$Sb galaxies
(NGC~2683, NGC~4157, NGC~7331, and NGC~7814), and $M/L_V$ $=$ 5.0 for
Sbc$-$Sc galaxies (NGC~3556). The errors on the radial extent values in
Figure~\ref{fig:extent} were calculated by taking into account the
errors on the distance modulus assumed for each galaxy, along with the
errors on the determination of the radial extent itself (which we took
to be equal to the width of one radial bin of the spatial profile).
Note also that for the spiral galaxy NGC~4157, we derived the radial
extent from the version of the radial profile with the seven ``extra''
GC candidates discussed in Section~\ref{section:profiles} removed. The
galaxy stellar masses and estimated radial extents are listed in
Table~\ref{table:extents}.
Figure~\ref{fig:extent} shows that, as might be expected, more massive
galaxies generally have more extended GC systems, although with a fair
amount of scatter in the relation. We fitted a line and a second-order
polynomial to the data in the figure; the best-fit line and curve are,
respectively:
\begin{equation}
y = ((57.7\pm3.7)~x) - (619\pm41)
\end{equation}
and
\begin{equation}
y = ((45.7\pm9.5)~x^2) - ((985\pm217)~x) + (5320\pm1240)
\end{equation}
\noindent where $x$ is $\log(Mass/M_{\odot})$ and $y$ is the radial
extent in kpc. One useful application of the data and
best-fit relations is determining how much radial coverage is needed
in order to observe all or most of the GC system of a particular
galaxy. For example, the field of view of the $HST$ Advanced Camera
for Surveys (ACS) is 3.37$\ifmmode {' }\else $' $\fi$ on a side, which means that the
maximum radial range observable with this instrument is 2.38$\ifmmode {' }\else $' $\fi$
(the half-diagonal of the field) if the galaxy is positioned at the
center of the detector. At the
approximate distance of the Virgo Cluster ($\sim$17~Mpc), this
translates to a maximum projected radial distance of $\sim$12~kpc.
Therefore if one intends to observe the full radial extent of the GC
system of a galaxy placed at the center of the HST ACS field,
Figure~\ref{fig:extent} indicates that one is limited to galaxies with
stellar mass $log(M/M_{\odot})$ $<$ 11 (which is less massive than the
Milky Way Galaxy). One ACS field would include $\sim$50\% of the
radial extent of a GC system of a Virgo Cluster galaxy with
$log(M/M_{\odot})$ in the range 11.1$-$11.2. As explained in
Section~\ref{section:introduction} of this paper and in other papers
from the survey (e.g., RZ01, RZ03, RZ04), deriving accurate global
values for the properties of a GC system requires that one observe
most or all of the radial extent of the system.
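As a worked example of this kind of estimate, the numbers quoted above
follow from simple geometry plus the best-fit line (Python):
\begin{verbatim}
import numpy as np

d_mpc = 17.0           # approximate Virgo Cluster distance
r_max_arcmin = 2.38    # ACS half-diagonal, galaxy centered
r_max_kpc = d_mpc * 1e3 * np.radians(r_max_arcmin / 60.0)

log_mass_limit = (r_max_kpc + 619.0) / 57.7  # invert the linear fit
print(r_max_kpc, log_mass_limit)             # ~11.8 kpc, ~10.9
\end{verbatim}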
\subsection{Total Number and Specific Frequency of GCs}
\label{section:total numbers}
The number of GCs in each galaxy can be derived by integrating the
deVaucouleurs profiles fitted to the radial distributions (see
Section~\ref{section:profiles}) out to some outer radial limit. Since
the radial distributions are corrected for magnitude incompleteness,
missing spatial coverage, and contamination from non-GCs, the result
is a final estimate of the total number of GCs in the system
($N_{GC}$). We chose the outer radius of integration to be the point
in the radial distribution at which the surface density of GCs equals
zero within the errors, and then remains consistent with zero for the
remainder of the data points. Note that for NGC~4157, we integrated
the deVaucouleurs profile that was fitted to the radial profile with
the seven ``extra'' GC candidates located in the 7$-$8$\ifmmode {' }\else $' $\fi$ bins
removed. The outer radius of integration in that case was 5$\ifmmode {' }\else $' $\fi$,
because with those seven objects removed, the GC surface density is
consistent with zero beginning with the 5$\ifmmode {' }\else $' $\fi$ bin out to the last
bin at 9$\ifmmode {' }\else $' $\fi$.
Given that we could not observe some portion of the inner galaxy ---
close to the spiral disk --- for all four target galaxies, we also had
to make assumptions about the number of GCs and/or the shape of the GC
radial profile within that region. The projected outer radial
boundary of the unobserved region ranged from 0.5$-$1.3$\ifmmode {' }\else $' $\fi$, which
translates to 1$-$5~kpc given the distances to the target galaxies.
We assumed four different possibilities for the behavior of the GC
systems in these inner regions: (1) that the same proportion of GCs
was located within the region as in the Milky Way GC system; (2) that
the proportion of GCs missing was like the GC system of NGC~7814
(which we did observe to small radius, with HST WFPC2; see RZ03); (3)
that the best-fit deVaucouleurs law profile continued all the way to
$r$~$=$~0; and (4) that the inner part of the GC radial distribution
was flat (i.e., the GC surface density in the unobserved region
equalled the value in the first radial bin of the observed profile).
Adding the number of GCs in the observed part of the system
(calculated from integrating the deVaucouleurs profile over the radial
range of the data) to the number of GCs in the inner region (given
these various assumptions) yielded a range of values for the total
number of GCs in each galaxy's system. We took the mean of this range
of values as the final estimate of $N_{GC}$.
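The integration step itself is straightforward; the sketch below
(Python, with hypothetical fit coefficients) illustrates the observed
part plus two of the four inner-region assumptions, namely (3) and (4):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

a0, a1 = 1.8, -2.1     # hypothetical deVaucouleurs coefficients
def sigma_gc(r):       # surface density, r in arcmin
    return 10.0**(a0 + a1 * r**0.25)

r_in, r_out = 1.0, 4.0  # inner mask edge, outer integration radius
n_obs, _ = quad(lambda r: 2*np.pi*r*sigma_gc(r), r_in, r_out)
n_in_dv, _ = quad(lambda r: 2*np.pi*r*sigma_gc(r), 0.0, r_in)   # (3)
n_in_flat = np.pi * r_in**2 * sigma_gc(r_in)                    # (4)
n_gc = n_obs + 0.5 * (n_in_dv + n_in_flat)
\end{verbatim}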
The luminosity- and mass-normalized numbers of GCs in a galaxy are
useful quantities to calculate and compare among galaxies. The
specific frequency, $S_N$, was defined by \citet{hvdb81} as
\begin{equation}
S_N \equiv N_{GC}\,10^{0.4(M_V+15)}
\end{equation}
The $M_V$ values assumed for the galaxies are those given in
Table~\ref{table:galaxy properties}. An alternative quantity, $T$,
was introduced by \citet{za93} and is sometimes preferred to $S_N$
because it normalizes the number of GCs by the stellar mass of the
galaxy ($M_G$) rather than $V$-band magnitude:
\begin{equation}
T \equiv \frac{N_{GC}}{M_G/10^9\ {\rm M_{\sun}}}
\end{equation}
\noindent To calculate $M_G$, we again combined $M^T_V$ for each
galaxy with mass-to-light ratios from \citet{za93}. The total
numbers, $S_N$ and $T$ values for each galaxy's GC system are given in
Table~\ref{table:total numbers}.
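For concreteness, both normalizations can be evaluated as follows
(Python; the inputs are illustrative, and a solar absolute magnitude
$M_{V,\odot}=4.83$ is assumed to convert magnitude to luminosity):
\begin{verbatim}
n_gc = 170.0    # illustrative total number of GCs
M_V = -20.5     # total V magnitude of the host galaxy
ML_V = 6.1      # mass-to-light ratio for an Sab-Sb galaxy

S_N = n_gc * 10.0**(0.4 * (M_V + 15.0))
L_V = 10.0**(0.4 * (4.83 - M_V))        # luminosity in solar units
T = n_gc / (ML_V * L_V / 1e9)
print(S_N, T)
\end{verbatim}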
To calculate the errors on $N_{GC}$ and the specific frequencies $S_N$
and $T$, we took into account the following sources of uncertainty:
(1) the variation in the total number of GCs, depending on what was
assumed for the spatial distribution of the unobserved inner portion
of the GC system; (2) the variation in the calculated coverage of the
GCLF, depending on the assumed intrinsic GCLF function and how the
luminosity function data were binned; and (3) Poisson errors on the
number of GCs and the number of contaminating objects. For NGC~4157,
we also estimated the uncertainty in $N_{GC}$ due to the group of
seven GC candidates in the galaxy's halo, and whether these were real
GCs (and should therefore be included in the total number) or
contaminants. Errors on the specific frequencies $S_N$ and $T$ also
include uncertainties in the total galaxy magnitudes: we assumed that
the internal extinction correction (and thus the galaxy magnitude)
could be uncertain by as much as 0.3 mag, which corresponds to 3$-$4
times the error on $V^0_T$ given in RC3 \citep{devauc91}. Individual
errors from the above sources were added in quadrature to calculate
the final errors on $N_{GC}$, $S_N$ and $T$; the errors are given in
Table~\ref{table:total numbers}.
The GC system of one of our targets, NGC~2683, was studied previously
by \citet{harris85}. They used photographic data from the
Canada-France-Hawaii telescope to identify $\sim$100 GC candidates and
estimated that the total number of GCs in NGC~2683 is 321$\pm$108.
Combining this number with our assumed $M_V^T$ value of $-$20.5 yields
a specific frequency $S_N$ of 2.0$\pm$0.7. This is 2.5 times larger
than our measured $S_N$ value (0.8$\pm$0.4). It is not unusual for our
survey data to yield smaller $N_{GC}$ and $S_N$ values than previous
studies; of six galaxies (including NGC~2683) with $S_N$ values
already in the literature, four have previously-published $S_N$ values
significantly larger than those we derive (RZ01, RZ03, RZ04). Our
smaller $S_N$ values are probably due to a combination of factors,
e.g., lower contamination levels (due to source selection in multiple
filters and higher resolution data) and more accurate radial
distributions yielding better-determined total numbers of GCs.
One of the objectives of our wide-field CCD survey was to determine
whether the two spiral galaxies with the most thoroughly studied GC
systems, the Milky Way and M31, are typical of their galaxy class in
terms of the properties of their GC systems. Figure~\ref{fig:spec
freq} addresses this question by comparing the luminosity- and
mass-normalized specific frequencies of the Milky Way and M31 with
those of the spiral galaxies from our survey. The specific
frequencies of the five galaxies we have analyzed are plotted with
filled circles. Besides the four spiral galaxies presented in this
paper, this includes the Sab galaxy NGC~7814 (RZ03). The five spiral
galaxies we have analyzed have morphological types of Sab (N=1
galaxy), Sb (N $=$3), and Sc (N$=$1) and stellar masses in the range
$log(M/M_{\odot})$ $=$ 10.9$-$11.4.
Open stars in Figure~\ref{fig:spec freq} indicate $S_N$ and $T$ for
the Milky Way (smaller error bars) and M31. The Milky Way has
$N_{GC}$ $\sim$ 180, $S_N$ = 0.6$\pm$0.1, and $T$ $=$ 1.3$\pm$0.2
\citep{az98}. M31 has $\sim$450~GCs, $S_N$ $=$ 0.9$\pm$0.2 and $T$
$=$ 1.6$\pm$0.4 (Ashman \& Zepf 1998; Barmby et al.\ 2000). The seven
spiral galaxies in the figure show a fairly small range of specific
frequency values, with modest scatter (note that $S_N$ can range from
nearly zero to $>$10 for giant galaxies; Ashman \& Zepf 1998). The
weighted mean $S_N$ and $T$ values for the GC systems of the five
spiral galaxies in the survey are 0.8$\pm$0.2 and 1.4$\pm$0.3,
respectively. These values fall between, and are consistent with, the
$S_N$ and $T$ values for the GC systems of the Milky Way and M31,
which suggests that the spiral galaxies we are most familiar with are
indeed representative of the GC systems of other spiral galaxies of
similar mass, at least in terms of the relative number of GCs. The
mean $N_{GC}$ of the five spiral galaxies we have surveyed to date is
170$\pm$40.
\citet{goud03} analyzed the GC systems of six nearly edge-on spiral
galaxies with HST WFPC2 optical imaging data. For five of the
galaxies, they had a single WFPC2 observation positioned near the
galaxy center; for the sixth galaxy, they had two WFPC2 fields on each
side of the disk. Given the distances to their target galaxies, the
WFPC2 pointings provided radial coverage of the GC systems out to
(typically) $\sim$5$-$15~kpc. To calculate total numbers and specific
frequencies of GCs in the target galaxies, they follow a technique
from \citet{kissler99} and compare the numbers of GCs detected in the
WFPC2 data with the numbers that would be detected at the same spatial
location in the Milky Way GC system if it were observed under the same
conditions (i.e., at the same distance and projected onto the sky in
the same manner). The mean $S_N$ value for the five Sab$-$Sc spiral
galaxies studied by Goudfrooij et al.\ is 0.96$\pm$0.26 and the mean
$T$ value is 2.0$\pm$0.5.
\citet{chandar04} used HST WFPC2 imaging to study the GC systems of
five low-inclination spiral galaxies. They had very limited spatial
coverage of the galaxies' GC systems: usually the data consisted of
$\sim$1$-$4 WFPC2 pointings located within the inner 5$\ifmmode {' }\else $' $\fi$
($<$14~kpc) of each galaxy's disk. To correct for their missing
spatial coverage, \citet{chandar04} used a technique similar to the
one used by \citet{kissler99} and \citet{goud03}: they compared the
locations of GCs in their observed fields to the analogous locations
and fields in the Milky Way GC system, if it were observed face-on.
They calculated a scale factor (equal to the ratio of the number of
GCs in the analogous Milky Way region to the number of GCs detected in
their observed fields) and applied it to the total number of GCs in
the target galaxy. As Goudfrooij et al.\ and Chandar et al.\ both
note, the implicit assumption in this method is that GC systems of
other spirals have the same spatial distributions as that of the Milky
Way. For the five spiral galaxies in the Chandar et al.\ study, the
average $S_N$ is 0.5$\pm$0.1 and average $T$ is 1.3$\pm$0.2.
The mean $S_N$ and $T$ values found by Goudfrooij et al.\ and Chandar
et al.\ are consistent, within the errors, with the average $S_N$ and
$T$ values we derive from observing the majority of the radial extent
of the GC systems. This is perhaps not entirely unexpected, because
our mean $S_N$ and $T$ values are in line with the corresponding
values for the Milky Way, and the {\it HST} studies necessarily had to
assume similarity with the Milky Way GC system in order to calculate
their total numbers and specific frequencies.
\subsection{Number of Blue (Metal-Poor) GCs Normalized by Galaxy Mass}
Another specific objective of the overall wide-field GC system survey
is to test a prediction of AZ92, who suggested that elliptical
galaxies and their GC populations can form from the merger of two or
more spiral galaxies. In the AZ92 model, the GCs associated with the
progenitor spiral galaxies form a metal-poor GC population in the
resultant elliptical, and a second, comparatively metal-rich
population of GCs is formed during the merger itself. For simple
stellar populations older than $\sim$1$-$2~Gyr, broadband colors
primarily trace metallicity, with bluer colors corresponding to lower
metallicities and red to higher metallicities (see, e.g., Ashman \&
Zepf 1998). AZ92 therefore predicted that giant elliptical galaxies
should show at least two peaks in their broadband color distributions,
due to the presence of the metal-poor (blue) GCs associated with the
original spiral galaxies and metal-rich (red) GCs formed in star
formation triggered by the merger. Bimodal GC color distributions have
subsequently been observed in many elliptical galaxies (e.g., Zepf \&
Ashman 1993, Kundu \& Whitmore 2001; Peng et al.\ 2006). (A detailed
discussion of the exact relationship between color and metallicity for
old stellar populations in different broadband colors --- and whether
the bimodal color distributions of elliptical galaxy GC systems really
are due to the presence of distinct GC subpopulations --- is beyond
the scope of this paper. Thorough discussion of these issues is given
in, e.g., \citet{zepf07}, \citet{strader07}, and \citet{kz07}.) A
consequence of the AZ92 scenario is that the mass-normalized specific
frequencies of blue, metal-poor GCs in spiral and elliptical galaxies
should be about the same. Our survey data allow us to calculate this
quantity, $T_{\rm blue}$, and compare it for galaxies of different
morphological types. We define $T_{\rm blue}$ as
\begin{equation}
T_{\rm blue} \equiv \frac{N_{GC}(\rm blue)}{M_G/10^9\ {\rm M_{\sun}}}
\end{equation}
\noindent where $N_{GC}(\rm blue)$ is the number of blue GCs and $M_G$
is the stellar mass of the host galaxy, calculated by combining
$M^T_V$ with $M/L_V$, as described in Section~\ref{section:extents}.
Estimating the proportion of blue GCs in the early-type galaxies from
the survey is fairly straightforward because of the large numbers
(hundreds to thousands) of GC candidates detected in these galaxies.
We make this estimate by first constructing a sample of GC candidates
that is at least 90\% complete in all three of our imaging filters
($B$, $V$, and $R$), creating a GC color distribution from the
complete sample, and then running a mixture-modeling code to fit
Gaussian functions and estimate the proportion of GCs in the blue and
red peaks. The code we use, called KMM \citep{abz94}, requires at
least 50 objects (given the typical color separation between
metal-poor and metal-rich GC systems) to produce reliable results.
Calculating $T_{\rm blue}$ for the spiral galaxies is more uncertain
because of poor statistics: we typically detect tens of GC candidates
in the galaxies, and end up with very few objects in the sample of
candidates that is complete in $B$, $V$, and $R$. For NGC~2683,
NGC~3556, NGC~4157, and NGC~7331, there were 38, 31, 7, and 26 objects
in the complete sample used to construct the GC color
distribution. Because we are interested in testing whether the blue GC
populations in elliptical galaxies could have originated in spiral
galaxies, we define as ``blue'' those GCs with $B-R$ $<$ 1.23, the
typical location of the separation between the blue and red GC
populations in elliptical galaxies (RZ01, RZ04). The percentage of GC
candidates with $B-R$ $<$ 1.23 in the complete samples constructed for
NGC~2683, NGC~3556, NGC~4157, and NGC~7331 were 63\%, 55\%, 57\%, and
31\%. We took these percentages to be lower limits on the percentage
of blue GCs for the overall system, since presumably some GCs we
detect might be reddened due to internal extinction. Since it seems
unlikely that {\it all} of the GC candidates in the complete sample
are reddened and belong in the blue category, we assumed that, at
most, 70\% of the candidates might be blue. We based this number on
the GC systems of the Milky Way and M31, which both show two peaks in
their color and metallicity distributions. Roughly 70\% of the Milky
Way GCs lie in the metal-poor peak \citep{harris96}. For M31, the
proportion is similar. \citet{barmby00} estimate from both
photometric and spectroscopic metallicities that $\sim$66\% of M31's
GCs lie in the metal-poor peak. \citet{perrett02} publish
spectroscopic estimates of [Fe/H] for $>$200 M31 GCs and estimate that
77\% are metal-poor.
We converted these lower- and upper-limit blue percentages into
$T_{\rm blue}$ values (with an associated error) for each galaxy by
multiplying them by the galaxy's total $T$ value and error. We then
averaged the lower- and upper-limit $T_{\rm blue}$ values, and their
errors, and took that as the final estimate of $T_{\rm blue}$ for each
galaxy. These values are given in Table~\ref{table:total numbers}.
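In code form, the adopted procedure amounts to the following (Python
sketch with illustrative numbers):
\begin{verbatim}
T_tot, T_err = 1.4, 0.3   # total T value and its error
f_lo = 0.63               # observed blue fraction (lower limit)
f_hi = 0.70               # assumed maximum blue fraction

Tb_lo, Tb_hi = f_lo * T_tot, f_hi * T_tot
T_blue = 0.5 * (Tb_lo + Tb_hi)              # average of the two limits
T_blue_err = 0.5 * (f_lo + f_hi) * T_err    # averaged errors
print(T_blue, T_blue_err)
\end{verbatim}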
We note that preliminary $T_{\rm blue}$ values for NGC~2683, NGC~3556,
and NGC~4157 were included in \citet{rzs05}; the values given in the
current paper are the same except that the final calculated errors are
slightly larger for NGC~2683 and NGC~4157.
The $T_{\rm blue}$ values for the nine galaxies analyzed so far as
part of this wide-field GC system imaging survey are shown in
Figure~\ref{fig:tblue}.
This is an updated version of a figure initially shown in
\citet{rzs05}. The figure now includes seven early-type galaxies and
seven spiral galaxies: four early-type galaxies and five spiral
galaxies from our survey (RZ01, RZ03, RZ04, and the current paper);
the Milky Way (Zinn 1985; Ashman \& Zepf 1998); M31 (Ashman \& Zepf
1998; Barmby et al.\ 2000; Perrett et al.\ 2002); and three elliptical
galaxies from the literature (NGC~1052 from Forbes et al.\ 2001;
NGC~4374 from Gomez \& Richtler 2004; NGC~5128 from Harris, Harris, \&
Geisler 2004) that meet our criteria for inclusion in the figure.
(Namely, that at least 50\% of the radial extent of the GC system is
observed, and enough information that the total number of GCs and blue
fraction could be estimated; see \citet{rzs05} for details.) In the
figure, circles denote cluster elliptical galaxies, squares mark field
E/S0 galaxies, and triangles are spiral galaxies in the field. Filled
symbols are used for our survey data, the Milky Way, and M31; data
from other studies are shown with open symbols. (The curves in the
figure are discussed below.)
One immediately sees from Figure~\ref{fig:tblue} that the $T_{\rm
blue}$ values for the spiral galaxies we surveyed are relatively
uncertain compared to the early-type galaxies, as a direct result of
the poor number statistics described earlier. Even with these
uncertainties, it is apparent that the typical $T_{\rm blue}$ value
for spiral galaxies is smaller than that of the more massive cluster
elliptical galaxies. The weighted mean $T_{\rm blue}$ for the
cluster ellipticals is 2.3$\pm$0.2, compared to 0.9$\pm$0.1 for the
spiral galaxies. This suggests that some other mechanism ---
besides the straightforward merging of spiral galaxies envisioned by
AZ92 --- is needed to create massive cluster elliptical galaxies and
their GC populations. On the other hand, the $T_{\rm blue}$ values
of three of the four field E/S0 galaxies in the figure are
comparable to those of the spiral galaxies, which suggests that
merging spiral galaxies and their GC systems together is sufficient
to account for the metal-poor GC populations of some field E/S0
galaxies. Similar conclusions were made in RZ03, RZ04, and
\citet{rzs05} and still hold here, with (now) finalized estimates of
$T_{\rm blue}$ for seven spiral galaxies. (We should note here that
previous authors have compared {\it total} GC specific frequency
values for elliptical and spiral galaxies, and reached basically the
same result. For example, \citet{harris81} compares $S_N$ for
elliptical galaxies in the Virgo cluster and the field to $S_N$ for
the Milky Way, M31, and two other spiral galaxies. \citet{harris81}
concludes that the merger of disk galaxies with $S_N$ $<$3 would
produce a new elliptical with $S_N$ in the range $\sim$1 to 3, which
is in line with $S_N$ for some field ellipticals, but much lower
than $S_N$ for Virgo cluster ellipticals.)
Also apparent in Fig.~\ref{fig:tblue} is a general trend of increasing
$T_{\rm blue}$ value with increasing host galaxy stellar mass: more
massive galaxies tend to have proportionally more metal-poor GCs. We
first discussed this trend in detail in \citet{rzs05} and noted there
that it is consistent with a biased, hierarchical galaxy formation
scenario such as that suggested by \citet{santos03}. In this picture,
the first generation of GCs form at high redshift during the initial
stages of galaxy assembly. The GCs are metal-poor because they form
from relatively unenriched gas. This first epoch of GC and baryonic
structure formation is then temporarily suppressed at $z$
$\sim$10$-$15; \citet{santos03} suggests that the suppression is
triggered by the reionization of the Universe. Massive galaxies in
high-density environments are associated with higher peaks in the
matter density distribution, and therefore began their collapse and
assembly process first. The result of such ``biasing'' is that
massive galaxies had assembled a larger fraction of their eventual
total mass by the suppression redshift. A larger fraction of their
baryonic mass could therefore participate in the formation of the
first generation of GCs and as a result, more massive galaxies end up
with relatively larger $T_{\rm blue}$ values compared to less massive
galaxies. \citet{santos03} assumes that the break from baryonic
structure formation is fairly short-lived ($<$1~Gyr). During this
period, stellar evolution continues to enrich the intergalactic
medium, so that any GCs formed after baryonic structure formation
resumes will be comparatively metal-rich.
The slope of the expected $T_{\rm blue}$ trend depends on the redshift
at which the first epoch of GC formation ended. Three curves shown in
Fig.~\ref{fig:tblue} illustrate this. The curves come from an
extended Press-Schechter calculation \citep{ps74,lc93} done by G.\
Bryan (private communication). This type of calculation can be used to
determine the fraction of mass that is in collapsed halos of a given
mass at some early redshift, and that will later end up inside a more
massive halo today, at $z$ $=$ 0. The specific calculation used to
create the curves in Fig.~\ref{fig:tblue} assumes that: galaxies are
formed via gravitational collapse and assembly of smaller halos that
collide and merge together over time; GCs can form within any one of
these smaller halos, as long as the halos have masses of at least
10$^8$~M$_{\odot}$; the number of metal-poor GCs that forms is
directly proportional to the fraction of a galaxy's mass that has
collapsed by a given redshift; and half the baryons within a given
galaxy halo will end up in the form of stars. A constant
mass-to-light ratio was used to convert the final total mass of each
halo to a stellar mass, for the figure. A $\Lambda$CDM cosmology was
assumed, with $\Omega_{m}$ = 0.3, $\Omega_{\Lambda}$ = 0.7, $\Omega_b
h^2$ $=$ 0.02, $h$ $=$ 0.65, and $\sigma_8$ $=$ 0.9. Given these
assumptions, the relative number of metal-poor GCs within a given
galaxy at $z$ $=$ 0 depends on the redshift at which the metal-poor
GCs ceased to form. Fig.~\ref{fig:tblue} shows the predicted $T_{\rm
blue}$ trend for three different truncation redshifts: $z$ $=$ 7, 11,
and 15. The observed trend seems to fall (very roughly) between the
predicted trends for $z_{\rm form} > 11$ and $z_{\rm form} > 15$. Much more
extensive simulations are needed in order to make rigorous predictions
for the relative numbers of metal-poor GCs --- and how these numbers
depend on the redshift of GC formation --- in galaxies over a range of
masses, environments, and merger/accretion histories.
Although the curves shown in the figure come from a fairly
straightforward calculation with several simplifying assumptions, they
show that in principle, a trend in $T_{\rm blue}$ such as we have
observed is generally consistent with a biased, hierarchical galaxy
formation scenario combined with the idea that the first generation of
GCs forms within a finite period in the early history of the Universe.
We should note here two factors that may influence the apparent
relationship between $T_{\rm blue}$ and galaxy stellar mass; these are
discussed in more detail in RZ04 and \citet{rzs05}. We used the
constant $M/L_V$ value from \citet{za93} to calculate the stellar mass
$M_G$ for the elliptical galaxies. (For the spiral galaxies, $M/L_V$
in \citet{za93} changes with morphological type.) In actuality,
$M/L_V$ may have a luminosity dependence as steep as $L^{0.10}$ (Zepf
\& Silk 1996 and references therein). As we concluded in RZ04 and
\citet{rzs05}, such a dependence is not sufficient to explain the
observed $T_{\rm blue}$ trend. Destruction of GCs through dynamical
effects (e.g., dynamical friction, evaporation, tidal shocks) may also
affect the observed $T_{\rm blue}$-galaxy mass relation. For example,
\citet{vesp00} find that destruction may be more efficient in
lower-mass galaxies, although this is dependent on the details of how
galaxy potentials vary as a function of galaxy mass (e.g., Fall \&
Zhang 2001). We note finally that relatively few moderate- and
high-luminosity galaxies are included in Fig.~\ref{fig:tblue} and that
filling in the sparsely-populated regions of the figure would be
useful for helping to determine the amount of ``biasing'' that may be
reflected in today's metal-poor GC populations. We plan to continue
our work on measuring global properties and total numbers of GC
systems of galaxies with a range of luminosities, morphologies, and
environments; with $T_{\rm blue}$ values for dozens of galaxies on a
figure like Fig.~\ref{fig:tblue}, we can more strongly constrain the
redshift of formation of the first generation of GCs and their host
galaxies.
\section{Summary}
\label{section:summary}
As part of a larger survey that uses wide-field CCD imaging to study
the GC systems of giant galaxies, we have acquired and analyzed WIYN
$BVR$ imaging data of five nearly edge-on spiral galaxies: NGC~2683,
NGC~3044, NGC~3556, NGC~4157, and NGC~7331. Our results are as
follows:
1. We unequivocally detect the GC systems of all the galaxy targets
except the Sc galaxy NGC~3044. Given the magnitude depth of our
images, our inability to detect NGC~3044's GC system may suggest that
the galaxy has a low $S_N$, or that it lies beyond the 23~Mpc distance
estimated from its recession velocity.
2. We observed the GC systems of the target galaxies to projected
radial distances of $\sim$6$-$9 arc minutes (corresponding to
20$-$40~kpc, depending on the distance to the galaxy) from the
galaxy centers. The GC surface density in our derived radial
distributions vanishes before the last data point, suggesting that
we have observed the full radial extent of the galaxies' GC
systems.
3. The projected radial extents of the GC systems of the target
spiral galaxies range from $\sim$10$-$20~kpc. Combining the
current data set with measurements for five other spiral,
elliptical, and S0 galaxies from the survey, we derive a coarse
relationship between host galaxy mass and radial extent of the GC
system. Such a relationship is valuable for planning observations
in which the aim is to observe all or most of the spatial extent
of a galaxy's GC system.
4. The estimated total numbers of GCs in the spiral galaxies analyzed
for this survey range from $\sim$80$-$290; the mean $N_{GC}$ is
170$\pm$40. One of the galaxies presented here, NGC~2683, had a
previously-published $S_N$ value that is 2.5 times larger than our
measured value. The weighted mean $S_N$ and $T$ values for the
five spiral galaxies in the survey are, respectively, 0.8$\pm$0.2
and 1.4$\pm$0.3. These values are consistent with the
corresponding values for the Milky Way and M31, which suggests that
the spiral galaxies with the most thoroughly studied GC systems are
representative of the GC systems of other giant spiral galaxies
with similar masses, at least in terms of their relative numbers of
GCs.
5. We estimate the galaxy-mass-normalized specific frequency of blue
(metal-poor) GCs ($T_{\rm blue}$) in each galaxy and then combine
these results with other data from the survey and the literature.
The data confirm our initial conclusion (based on fewer points)
that the metal-poor GC populations in luminous ellipticals are too
large to have formed via the straightforward merger of two or more
spiral galaxies and their associated metal-poor GC
populations. The data likewise confirm that $T_{\rm blue}$
generally increases with host galaxy mass. By comparing the
$T_{\rm blue}$ vs.\ galaxy mass data to results from a simple
model, we show that the observed trend is generally consistent
with the idea that the first generation of GCs formed in galaxies
over a finite period, prior to some truncation redshift.
\acknowledgments The research described in this paper was supported by
an NSF Astronomy and Astrophysics Postdoctoral Fellowship (award
AST-0302095) to KLR, NSF award AST 04-06891 to SEZ, and NASA Long Term
Space Astrophysics grant NAG5-12975 to AK. We are grateful to the
Wesleyan University Astronomy Department for funding ANL while he
analyzed the data for NGC~7331. We thank Greg Bryan for illuminating
discussions and for providing the model calculations shown in
Fig.~\ref{fig:tblue}. We thank the staff at the WIYN Observatory and
Kitt Peak National Observatory for their assistance at the telescope.
We also thank Enzo Branchini, who provided some estimated distances
for the target galaxies based on a model of the local velocity flow.
Finally, we thank the anonymous referee for valuable comments and
suggestions that improved the quality of the paper. This research has
made use of the NASA/IPAC Extragalactic Database (NED) which is
operated by the Jet Propulsion Laboratory, California Institute of
Technology, under contract with the National Aeronautics and Space
Administration.
\section{Introduction}
Phononic crystals have attracted increasing interest in recent years because of their potential applications to acoustic filters \cite{Romero}, the control of vibration isolation \cite{Hussein}, noise suppression, and the possibility of building new transducers \cite{Wu}; for a review see \cite{Page}. It is thus of interest to understand which properties of such structures are sensitive to inherent imperfections in their design and which are not. Besides, one can also ask whether disorder can give rise to new and interesting properties.
It is usual to characterize a random medium in terms of an effective, homogeneous medium. For a random perturbation of homogeneous free space, one finds that the dispersion relation $K(\omega)$ departs from the dispersion relation $k(\omega)$ of free space without disorder, and the imaginary part of the effective wavenumber $K$ quantifies the opacity induced by the disorder \cite{lintonmartin}.
In the case of photonic or phononic crystals, the band structure of the unperturbed medium is more complicated, with the wavenumber $Q$ of the Bloch-Floquet mode being either purely real (pass band) or complex (stop band). The addition of disorder modifies the band structure of these periodic-on-average systems \cite{maradudin,Deych98, Han2008, Maurel2008, Maurel2010,Izrailev2012,Maurel2013} and generally produces an increase in the band gap width \cite{Chang2003}. Among periodic media, the case of periodic arrays of resonant scatterers is very attractive, since the resonances inherent to the individual scatterers produce strong modifications of the wave propagation; these modifications of the wave properties may help in designing materials with unusual properties. Such arrays present band gaps around the resonance frequencies of an individual scatterer. Because the scatterers are periodically located, Bragg resonances are also produced, resulting in a complex band gap structure. The overlap of the two types of gaps, a resonant-scatterer gap and a Bragg gap, has been shown to produce interesting phenomena, such as the creation of a super-wide and strongly attenuating band gap used for structure isolation \cite{Croenne2011,Xiao2011,Sugimoto1995,Bradley1994} and slow-wave applications \cite{Theocharis2014}.
In this paper, we consider the propagation of an acoustic wave in a periodic array of Helmholtz resonators connected to a duct, in the plane-wave regime (the low-frequency regime with a single propagating mode in the duct).
The corresponding model describes the 1D propagation of the pressure field $p(x)$ through resonant point scatterers (Kronig-Penney system) \cite{Olivier,Richoux2009}
\begin{equation}
p''+k^2p = \sum_n V_n(k)\delta(x-nd)p(x),
\end{equation}
where $d$ is the periodicity of the array and $V_n(k)$ encapsulates the effect of the $n$th resonator of the array.
The disorder is introduced by varying the volume of the Helmholtz resonators. When an overlap between a Bragg gap and a resonance band gap is produced, a narrow transparency band appears within the resulting large band gap. Unexpectedly, we found that this transparency band is robust with respect to disorder.
Indeed, for small disorder the transmission first decreases, but increasing the disorder further induces an increase in the transmission.
We have carried out experiments whose results show this behavior qualitatively.
To obtain further information over a broader range of the disorder parameter, numerical calculations are presented that confirm the disorder-induced transparency. The paper is organized as follows: in Part \ref{part1}, the 1D model and the coherent potential approximation (CPA) result for the randomly perturbed system are discussed. The experimental results are presented in Part \ref{part2} and are completed by numerical calculations in Part \ref{part3}. Finally, a discussion is proposed in Part \ref{part4}.
\section{Propagation in 1D periodic and perturbed HR array}
\label{part1}
At low frequencies, when only one mode can propagate in the duct, the propagation of acoustic waves in an array of Helmholtz resonators periodically located with spacing $d$ (Fig. \ref{Fig_exp_setup}) can be described by
\begin{equation}
p''+k^2p = \sum_j V_j(k)\delta(x-jd)p(x),
\label{WEpotential}
\end{equation}
where $p$ is the pressure field and $k=\omega/c_0$ (the time dependence $e^{-i\omega t}$ is omitted; $\omega$ is the angular frequency and $c_0$ the sound velocity in free space).
The potential is
\begin{equation}
V_j(k)= - \frac{s}{S_w} \;k_n\; \frac{ \sin k_n\ell \cos k_c L+\alpha \cos k_n \ell \sin k_c L}{\cos k_n \ell \cos k_c L - \alpha \sin k_n \ell \sin k_cL},
\label{potentialV}
\end{equation}
with $\alpha= Sk_c/(sk_n)$, where $S_w$, $S$, and $s$ are the cross-sectional areas of the main waveguide, of the cavity, and of the neck, respectively.
$\ell$ and $L$ denote the lengths of the neck and of the cavity, respectively (see Fig. \ref{Fig_exp_setup}(b)). The wavenumbers are $k_m=k\left[ 1+ \beta \delta/R_m\right]$, with $m=w,c,n$ (waveguide, cavity and neck, respectively) and $R_m$ the corresponding radius, where $\beta=\left[1+(\gamma-1)Pr^{-1/2}\right](1+i)/\sqrt{2}$ and $\delta=\sqrt{\nu/\omega}$ is the viscous boundary layer depth ($\nu$ being the kinematic viscosity).
The term proportional to $\beta$ in the wavenumbers $k_m$ provides a good model of the viscous and thermal attenuation of sound in the duct. We can notice that, with $s \ll S_w$, the strength of the Helmholtz scatterer is small except at resonances.
Approximating $k_n$ and $k_c$ by $k$ (thus omitting the attenuation), the resonances correspond to a vanishing denominator $D(k)\equiv \cos k\ell \cos kL - \alpha \sin k\ell \sin kL$, and they are of two types: i) the typical Helmholtz resonance occurring at low frequency ($k\ell\to 0$), close to $k_H=1/\sqrt{\alpha \ell L}$, and ii) the resonances in the cavity (hereafter referred to as volume resonances), near $kL=n\pi$. For instance, for $n=1$,
\begin{equation}
k_V L=\pi+\frac{1}{\alpha \tan(\pi \ell/L)}.
\end{equation}
For a single resonator, these resonances produce a vanishing transmission.
When the resonators are organized in a perfect periodic array,
band gaps are created around the resonance frequencies, corresponding to the Bloch-Floquet wavenumber $Q$ becoming complex, with $Q$ given by
\begin{equation}
\cos Qd= \cos kd + \frac{V}{2k} \sin kd.
\end{equation}
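As an illustration, the band structure can be mapped numerically from
this relation. A minimal Python sketch in the lossless limit ($\beta=0$,
so that $\alpha=S/s$ and $V$ is real), using the geometry of the
experimental set-up of Part \ref{part2}, is:
\begin{verbatim}
import numpy as np

d, ell, L = 0.1, 0.02, 0.165          # spacing, neck, cavity (m)
Sw, S, s = 2.0e-3, 1.4e-3, 7.85e-5    # cross-sectional areas (m^2)
alpha = S / s                         # lossless limit, k_n = k_c = k

k = np.linspace(1.0, 80.0, 4000)      # wavenumbers (1/m)
num = np.sin(k*ell)*np.cos(k*L) + alpha*np.cos(k*ell)*np.sin(k*L)
den = np.cos(k*ell)*np.cos(k*L) - alpha*np.sin(k*ell)*np.sin(k*L)
V = -(s/Sw) * k * num / den           # diverges at the resonances

rhs = np.cos(k*d) + V/(2.0*k)*np.sin(k*d)
in_gap = np.abs(rhs) > 1.0            # |cos Qd| > 1: Q complex, band gap
\end{verbatim}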
When disorder is introduced in the cavity volumes by changing the length $L_n$ of the $n^{\rm th}$ cavity, with $L_n = L (1+\epsilon_n)$ and $\epsilon_n \in [-\epsilon/2;\epsilon/2]$, the new Bloch-Floquet wavenumber $K$ can be predicted using the CPA approach
\cite{Maurel2010}
\begin{equation}
\cos Kd= \cos kd + \frac{\langle V\rangle}{2k} \sin kd,
\end{equation}
where $\langle .\rangle$ denotes the ensemble average for all realizations of the $\left\{\epsilon_n\right\}_n$-values.
The resulting transmission coefficient is
\begin{equation}
T_N=\frac{e^{ikd}-{\cal B}^2e^{-ikd}}{e^{ikd-iKNd}-{\cal B}^2e^{-ikd+iKNd}},
\label{TN_CPA}
\end{equation}
where we have written $p(x\geq Nd)=T_Ne^{ik(x-Nd)}$ (the incident wave is $e^{ik x}$) and
with
\begin{equation}
{\cal B}\equiv \frac{1-e^{i(k-K)d}}{1-e^{i(k+K)d}}.
\end{equation}
Obviously, the above results obtained from CPA recover the perfect periodic case when $\epsilon=0$.
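A minimal numerical sketch of this CPA prediction (Python, lossless
limit, with the geometry of Part \ref{part2}; the wavenumber and
disorder strength below are illustrative) is:
\begin{verbatim}
import numpy as np

d, ell, L, N = 0.1, 0.02, 0.165, 60
Sw, S, s = 2.0e-3, 1.4e-3, 7.85e-5
alpha = S / s                          # lossless limit

def V(k, Lc):                          # single-resonator potential
    num = np.sin(k*ell)*np.cos(k*Lc) + alpha*np.cos(k*ell)*np.sin(k*Lc)
    den = np.cos(k*ell)*np.cos(k*Lc) - alpha*np.sin(k*ell)*np.sin(k*Lc)
    return -(s/Sw) * k * num / den

k, eps = 50.0, 0.1
rng = np.random.default_rng(0)
Vm = np.mean(V(k, L*(1.0 + rng.uniform(-eps/2, eps/2, 100000))))

K = np.emath.arccos(np.cos(k*d) + Vm/(2.0*k)*np.sin(k*d)) / d
if np.imag(K) < 0:
    K = -K                             # keep the decaying branch
B = (1.0 - np.exp(1j*(k - K)*d)) / (1.0 - np.exp(1j*(k + K)*d))
TN = (np.exp(1j*k*d) - B**2*np.exp(-1j*k*d)) / \
     (np.exp(1j*(k*d - K*N*d)) - B**2*np.exp(1j*(K*N*d - k*d)))
print(abs(TN))
\end{verbatim}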
In the following, we present the experimental set-up used to realize the lattice of Helmholtz resonators. Comparisons between the measured transmission and the above CPA result, Eq.~\eqref{TN_CPA}, are then presented.
\section{Experimental results}
\label{part2}
\subsection{Experimental set-up}
The experimental set-up (fig. \ref{Fig_exp_setup}) consists of an $8$ m long cylindrical waveguide with an inner area $S_w=2\times10^{-3}$ m$^2$ and a $0.5$ cm thick wall. This waveguide is connected to an array of $N=60$ Helmholtz resonators periodically distributed, with inter-distance $d=0.1$ m. Each resonator is composed of a neck (a cylindrical tube with an inner area $s=7.85\times10^{-5}$ m$^2$ and a length $\ell = 2$ cm) and a cavity with variable length. The cavity is a cylindrical tube with an inner area $S=1.4\times10^{-3}$ m$^2$ and a maximum length $L_{max}=16.5$ cm, see fig. \ref{Fig_exp_setup}(b).
\begin{figure}[h!]
\begin{center}
\includegraphics[width=8.cm,clip]{fig1}
\includegraphics[width=8.cm,clip]{fig2}
\end{center}
\caption{Picture of the experimental set-up (left panel). Schematic of the experimental set-up (right panel).}
\label{Fig_exp_setup}
\end{figure}
The sound source is connected to the input of the main tube.
The source is embedded in an impedance sensor \cite{MinutePub}, able to calculate the input impedance of the lattice $Z$, defined as the ratio of the acoustic pressure $p$ and the acoustic flow $u$ (the product of the velocity by the area cross section) at the entrance of the lattice, as described in \cite{Macaluso2011}.
This allows us to obtain the reflection coefficient $R$, defined by $p=(1+R)p^+$, owing to $u=u^++u^-$ with $u^+=p^+/Z_w$, $u^-=-p^-/Z_w$, where the superscripts $+$ and $-$ denote the components associated with right- and left-going waves:
\begin{equation}
R=\frac{Z-Z_w}{Z+Z_w}.
\end{equation}
At the output, an anechoic termination made of a $10$ m long waveguide partially filled with porous plastic foam suppresses the back-propagating waves. This ensures that the output impedance is close to the characteristic impedance $Z_w=\rho c /S_w$. Finally, a microphone is used to measure the pressure $p_e$ at the end of the lattice.
Using transfer-matrix theory, $(p,u)$ and $(p_e,u_e)$ are linked through
\begin{equation}
\left(
\begin{array}{c}
p\\ u
\end{array}
\right)
=\left(
\begin{array}{cc}
A & B \\ C & D\end{array}
\right)
\left(
\begin{array}{c}
p_e\\ u_e
\end{array}
\right)
\end{equation}
with $p=Zu$ ($Z$ being measured) and $u_e= p_e/Z_w$ (the acoustic flow is deduced from $p_e$ owing to the anechoic termination). Then, the transmission coefficient $T$, defined by
$p_e=Tp^+$, is calculated using $u=(1+R)p^+/Z$ (by definition of $R$) and, from the above, $u=[C+D/Z_w]p_e=[C+D/Z_w]Tp^+$, from which
\begin{equation}
T=\frac{2 Z_t}{Z+Z_w},
\end{equation}
where $Z_t\equiv [C+D/Z_w]^{-1}$ is deduced from the measured $(p_e, u)$-values.
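In practice, this post-processing reduces to a few lines; the sketch below (Python) assumes hypothetical measured arrays for $Z$, $p_e$ and $u$ at each frequency (the names are placeholders for the actual acquisition pipeline) and returns $R$ and $T$.
\begin{verbatim}
import numpy as np

def reflection_transmission(Z, pe, u, Zw):
    """R and T from the measured input impedance and end pressure.

    Z  : complex input impedance of the lattice (measured)
    pe : complex pressure at the end of the lattice (measured)
    u  : complex acoustic flow at the entrance (measured)
    Zw : characteristic impedance rho*c/Sw of the main tube
    """
    R = (Z - Zw) / (Z + Zw)
    Zt = pe / u            # since u = [C + D/Zw] pe, i.e. Zt = pe/u
    T = 2.0 * Zt / (Z + Zw)
    return R, T
\end{verbatim}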
When considered, the disorder in the lattice is introduced through the variable lengths $L_n$ of the cavities, $L_n = L (1+\epsilon_n)$, the $\epsilon_n$ being drawn independently for each resonator cavity and for each realization, with $\epsilon_n \in [-\epsilon/2;\epsilon/2]$, resulting in a variable scattering strength $V_n$ in Eq. \eqref{potentialV}.
The transmission coefficients are measured for ten different realizations of the disorder with the same standard deviation, and the mean value $\langle T\rangle$ is taken.
\subsection{Experimental observations}
The transmission coefficient $T$ in the perfectly periodic case is presented in Fig. \ref{Fig_ord} for a cavity length $L=0.165$ m. Four band gaps are visible: the first (labeled a), at low frequency, is associated with the Helmholtz resonance ($k_H$) for $kd/\pi \in [0.15;\ 0.25]$, corresponding to frequencies in $[300;\; 450]$ Hz. Two other band gaps (labeled b and d) are associated with the two first volume
resonances ($kL$ close to $\pi$ and $2\pi$); these are for $kd/\pi$ in $[0.64 ;\; 0.68 ]$ and $[1.22;\; 1.24]$ (corresponding frequency ranges $[1110;\;1170]$ Hz and $[2100;\;2150]$ Hz).
These three band gaps, associated with resonances of the scatterer, are often referred to as hybridization band gaps \cite{Croenne2011}.
Finally, the band gap labeled c is associated with the Bragg resonance, for $kd/\pi \in [1 ;\; 1.03 ]$ (frequency range $[1700;\; 1800]$ Hz). This band structure has been described in detail, including nonlinear aspects, in \cite{Olivier,Richoux2007, Richoux2009}.
The comparison between the experimental result (blue line) and the analytical expression, Eq. \eqref{TN_CPA} (red line), shows a good agreement. The discrepancy in the low-frequency regime may be attributable to the poor quality of the source in this frequency range.
Finally, the strong peaks appearing in the experimental measurements are due to imperfections in the anechoic termination, resulting in interferences between forward and backward waves in the main tube.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=12.cm,clip]{fig3}
\end{center}
\caption{\label{Fig_ord} Transmission coefficient for an ordered lattice with a cavity length $L=0.165$ m and lattice spacing $d=0.1$ m. Blue line corresponds to the experimental measurement and red line to the analytical prediction, Eq. \eqref{TN_CPA}.}
\end{figure}
Fig. \ref{Fig_desordre}(a) shows the transmission in the perfectly periodic case for $L =0.1$ m. With $L=d$, the volume resonance $k_V$, with $k_V\sim \pi/L$, and the Bragg wavenumber $k_B=\pi/d$ are very close, resulting in an almost perfect overlap of the two corresponding band gaps, previously labeled b and c, visible here in the range $kd/\pi \in [0.98;\; 1.12]$ (frequency range $[1600;\; 1800]$ Hz). The first band gap, associated with the Helmholtz resonance $k_H$, is almost unaffected by the change in $L$, while the volume resonance with $k_VL\simeq 2\pi$ (previously labeled d) is shifted to higher frequencies (not visible in our plot).
A noticeable feature is the existence of a small transparency band inside the large stop band near $kd=\pi$, a feature already observed in other systems where such an overlap is realized \cite{Sugimoto1995,Bradley1994,Theocharis2014}. This feature, in addition to the main behavior of $T$, is accurately captured by our analytical expression, Eq. \eqref{TN_CPA}, in the perfectly periodic case, thus with constant unperturbed potential $V$ (and $K=Q$).
\begin{figure}[h!]
\begin{center}
\includegraphics[width=12.cm,clip]{fig4}
\end{center}
\caption{\label{Fig_desordre} (a) Transmission coefficient of an ordered lattice for a cavity length $L=0.1$ m and lattice spacing $d=0.1$ m. (b) Mean value of the transmission coefficient for a disordered lattice with $\epsilon=0.08$. (c) Mean value of the transmission coefficient for a disordered lattice with $\epsilon=0.1$. (d) Mean value of the transmission coefficient for a disordered lattice with $\epsilon=0.18$. The blue line corresponds to the experimental case obtained with $10$ averages and the red line to our analytical prediction with $100$ averages (except for (a), without disorder).}
\label{Fig_exp_result1}
\end{figure}
We now consider several amplitudes $\epsilon$ of disorder in the scattering strength of the resonators, as previously described.
The measured transmission coefficients $|\langle T\rangle|$ are reported in Fig. \ref{Fig_exp_result1}(b,c,d) for $\epsilon=0.08, \; 0.1, \; 0.18$, respectively.
As expected, the most visible effect of the disorder is to strengthen the opacity of the medium. This is associated with the fact that, in the former pass bands of the perfectly ordered case, the wavenumber $K$ of the effective Bloch mode acquires an imaginary part due to the disorder (in addition to the attenuation). Conversely, in the former stop bands of the perfectly ordered case, the
imaginary part of the wavenumber decreases, resulting in an increase of the transmission \cite{maradudin}.
An interesting behavior, although only qualitative at this stage, can be noticed inside the second band gap, around $kd=\pi$: the small transparency band remains visible, as we observe a
peak of transmission robust to disorder. This trend is confirmed by the analytical model (red curves in Fig. \ref{Fig_exp_result1}).
In the following section, we use numerical calculations to get further insight into this induced transparency near $kd=\pi$.
\section{Numerical inspection of the induced transparency}
\label{part3}
We now present results from numerical experiments on the propagation in the array of Helmholtz resonators. This is done by solving Eq. \eqref{WEpotential}, with variable ${V}_n$ values. The disorder is introduced by using $L_n=L(1+\epsilon_n)$ in Eq. \eqref{potentialV}.
To calculate $p(x)$, we implement a method based on the impedance, as described in \cite{Maurel2008}.
For each frequency, $M=10^4$ realizations of the disorder with the same amplitude $\epsilon$ are performed. The effective transmission $\langle T\rangle$ is calculated by averaging the transmission coefficients, $\langle T\rangle=(1/M) \sum_j T_j$, where the $T_j$ are the transmission coefficients of the individual realizations.
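This ensemble averaging can be reproduced with a simple Monte Carlo loop. The sketch below (Python) uses a standard transfer-matrix product over the cells as a stand-in for the impedance-based solver of \cite{Maurel2008}; the cell matrix uses a placeholder scattering strength that should be replaced by Eq. \eqref{potentialV}, but the structure of the averaging is the one described above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N_cells, M_real, eps = 60, 10_000, 0.1
d, L = 0.1, 0.1

def cell_matrix(k, Ln):
    # free propagation over d followed by a point scatterer of
    # strength V(k, Ln); replace V with Eq. (potentialV)
    V = 5.0 * np.sin(k * Ln)                     # illustrative only
    P = np.array([[np.exp(1j*k*d), 0], [0, np.exp(-1j*k*d)]])
    S = np.eye(2) + (V/(2j*k)) * np.array([[1, 1], [-1, -1]])
    return P @ S

def transmission(k, Ls):
    M = np.eye(2, dtype=complex)
    for Ln in Ls:
        M = cell_matrix(k, Ln) @ M
    return 1.0 / M[1, 1]          # det(M) = 1, so t = 1/M_22

k = 30.0                          # a single frequency (1/m)
T = [transmission(k, L*(1 + rng.uniform(-eps/2, eps/2, N_cells)))
     for _ in range(M_real)]
print("|<T>| =", abs(np.mean(T)))
\end{verbatim}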
The main result is presented in Fig. \ref{Fig_main_num}: in Fig. \ref{Fig_main_num}(a), $|\langle T\rangle |$ is shown in a 2D plot as a function of $kd$ and $\epsilon$, and Fig. \ref{Fig_main_num}(b) shows several transmission curves for given $\epsilon$-values.
Clearly, with $10^4$ realizations of the disorder, the ensemble average has converged.
The transparency robust to disorder is quantitatively confirmed: for the largest values of disorder, the transmission near $kd=\pi$ increases with the disorder.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=12.cm,clip]{fig5}
\end{center}
\caption{(a) Mean value of the transmission coefficient as a function of the disorder. (b) Mean value of the transmission coefficient for $\epsilon = 0.08$ (blue), $\epsilon=0.1$ (red), $\epsilon=0.18$ (black) and $\epsilon=0.3$ (green).}
\label{Fig_main_num}
\end{figure}
\section{Discussion}
\label{part4}
The robustness of the transparency to disorder could appear counterintuitive with regard to the usual influence of disorder on wave propagation. Indeed, the presence of disorder is known to inhibit wave propagation and to suppress transmission. In this study, the robustness of the transparency results from the combination of two different physical phenomena: (1) the non-exact overlap of the Bragg and hybridization band gaps, which generates, in the periodic case, a narrow pass band located inside a band gap, and (2) the presence of disorder in the potential, which prevents wave propagation inside the medium. In the periodic case, one of the edges of the transparency band due to the overlap is located at $kd/\pi = 1$, which corresponds to the Bragg frequency \cite{Theocharis2014,Sugimoto1995}. Because the disorder is injected in the potential (the resonance frequency of the Helmholtz cavity), this edge of the transparency band is not affected \cite{Maurel2013}. As a consequence, the narrow pass band is very sensitive to disorder and disappears from its upper edge, the lower edge remaining unchanged, creating a peak of transparency at $kd/\pi = 1$. On the contrary, with no overlap, the Bragg band gap widens with disorder only from its upper edge; the lower edge belongs to a pass band, and in this case there is no peak of transparency. This configuration can be used for filtering applications, a very narrow filter being tuned by the injection of disorder into the system.
\section{Conclusion}
In this paper, we reported an experimental and numerical characterization of a periodic-on-average disordered system.
The usual widening of the band gaps of disordered arrays is observed.
On the other hand, when a nearly perfect overlap between the Bragg and the scatterer resonance frequencies is realized,
evidence of a transparency robust to disorder has been shown.
\vspace{0.5cm}
\noindent
{\bf Acknowledgments}. This study has been supported by the Agence Nationale de la Recherche through the grant ANR ProCoMedia, project ANR-10-INTB-0914.
V.P. thanks the support of Agence Nationale de la Recherche
through the grant ANR Platon, project ANR-12-BS09-0003-03.
\section*{Abstract}
We theoretically study the role of excitatory and inhibitory interactions
in the aggregations of male frogs.
In most frogs, males produce sounds to attract conspecific females,
which activates the calling behavior of other males and results in collective choruses.
While the calling behavior is quite effective for mate attraction, it requires high energy consumption.
In contrast, satellite behavior is an alternative mating strategy
in which males deliberately stay silent in the vicinity of a calling male
and attempt to intercept the female attracted to the caller,
allowing the satellite males to drastically reduce their energy consumption
while having a chance of mating.
Here we propose a hybrid dynamical model
in which male frogs autonomously switch among three behavioral states
(i.e., calling state, resting state, and satellite state)
due to the excitatory and inhibitory interactions.
Numerical simulation of the proposed model demonstrated that
(1) both collective choruses and satellite behavior can be reproduced
and (2) the satellite males can prolong the energy depletion time of the whole aggregation
while they split the maximum chorus activity into two levels over the whole chorusing period.
This study theoretically highlights the trade-off between energy efficiency and chorus activity
in the aggregations of male frogs driven by the multiple types of interactions.
\begin{flushleft}
{\bf Key words:} Nonlinear dynamics, Acoustic communication, Satellite behavior, Hybrid dynamical model, Spatial structure
\end{flushleft}
\clearpage
\section{Introduction}
Animals aggregate for various purposes such as foraging and mating.
In the aggregations, energy efficient behavior is observed.
For instance, an aggregation of ants consists of active and inactive individuals \cite{Bonabeau_1996},
and the switching between the two modes
likely improves the long-term performance of the whole aggregation \cite{Hasegawa_2016};
bats emit ultrasounds to locate surrounding objects by hearing the returning echoes \cite{Griffin_1958}.
Indoor experiments demonstrated that flying bats eavesdrop on the echoes of the preceding individual \cite{Chiu_2008},
allowing the follower to reduce energy invested in sound production.
Thus, energy-efficient behavior is likely common in animal aggregations,
but this phenomenon has not been studied in detail for many species.
Further empirical and theoretical studies are required to analyze
how energy efficient behaviors of individual animals contribute to the performance of the aggregation as a whole.
In this study, we focus on both energy-consuming and energy-efficient behavior
in the aggregations of male frogs (Figure \ref{fig:concept}A).
Calling behavior is an energy-consuming behavior in which
males produce successive sounds by inflating and deflating a large vocal sac
to attract conspecific females \cite{Gerhardt_2002, Wells_2007}.
Because calling males lose much weight in one night \cite{Gerhardt_2002, MacNally_1981, Cherry_1993, Murphy_1994},
this behavior requires males to consume much energy.
The calls of male frogs generally activate the calling behavior of other males
and elicit a collective structure called a unison bout
in which males almost synchronize the onset and end of their calling bouts
\cite{Gerhardt_2002, Wells_2007, Whitney_1975, Schwarts_1994, Jones_2014, Aihara_2019}
(Figure \ref{fig:concept}B).
In this study, we refer to the interaction activating calling behavior of other males as {\it excitatory interaction}.
Contrarily, satellite behavior is an energy-efficient behavior
in which males deliberately stay silent in the vicinity of a calling male (Figure \ref{fig:concept}C)
and attempt to intercept a female attracted to the caller \cite{Gerhardt_2002}.
This behavior allows the males to reduce energy consumption
because satellite males do not produce any call.
In this study, we refer to the interaction inducing satellite behavior of neighbors as {\it inhibitory interaction}.
Empirical studies showed that calling behavior is dominant in low-density aggregations
while satellite behavior becomes common in high-density aggregations \cite{Gerhardt_2002},
suggesting the dynamical choice of behavioral types depending on surrounding conditions.
This study aims to theoretically examine how the coexistence of excitatory and inhibitory interactions
contributes to chorus activity and energy efficiency in the aggregations of male frogs.
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{Figure1.eps}
\end{center}
\caption{
Excitatory and inhibitory interactions in an aggregation of male frogs.
(A) Schematic diagram of the aggregation.
In this study, we focus on excitatory interaction inducing calling behavior of distant males
as well as inhibitory interaction inducing satellite behavior of neighboring males.
(B) Audio data of male frogs ({\it Hyla japonica})
obtained from our previous studies (Aihara et al., 2011 and 2019).
In general the calls of male frogs activate the calling behavior of other males,
which elicits a collective structure called a unison bout
in which males almost synchronize the onset and end of their calling bouts.
We refer to the interaction as {\it excitatory interaction} among male frogs.
(C) Satellite behavior in male frogs.
As an alternative mating strategy, a male deliberately stays silent in the vicinity of a calling male
to intercept a female attracted to the caller.
We refer to the relationship as {\it inhibitory interaction} among male frogs.
}
\label{fig:concept}
\end{figure*}
\section{Mathematical Modeling}
We propose a mathematical model incorporating both excitatory and inhibitory acoustic interactions among male frogs
as an extension of our study \cite{Aihara_2019}.
Specifically, we model a calling state, a resting state, and a satellite state as separate deterministic models,
and then formulate the transitions among the three states as stochastic processes (Figure \ref{fig:framework}).
A calling state describes the behavior in which a male frog produces successive sounds
to attract conspecific females.
The production of sounds requires the vigorous inflation and deflation of a large vocal sac at a high repetition rate
\cite{Gerhardt_2002, Wells_2007},
and calling males lose much weight in one night \cite{Gerhardt_2002, MacNally_1981, Cherry_1993, Murphy_1994}.
Based on these features, we assume that (1) the energy of a calling male decreases
while his physical fatigue increases,
and that (2) the male stops calling when he gets tired.
A resting state describes the behavior in which a male frog stays silent
without producing any call during an interval between calling states
\cite{Gerhardt_2002, Wells_2007, Whitney_1975, Schwarts_1994, Jones_2014, Aihara_2019}
(Figure \ref{fig:concept}B).
Because of the lower activity, we assume that
(1) the energy of a resting male remains constant
while his physical fatigue decreases,
and that (2-1) the male starts calling when he has rested long enough and is activated by the calls of other males
or (2-2) the male frog transits to the satellite state
when the attractiveness of his neighbor is superior to his own.
A satellite state describes the behavior in which a male frog stays silent in the vicinity of a calling male
to intercept a female attracted to the caller \cite{Gerhardt_2002}.
Because of the lower activity and mating strategy,
we assume that
(1) the energy of a satellite male remains constant
while his physical fatigue decreases, and
(2) he transits to a resting state and starts calling again
when the attractiveness of his neighbor is inferior to his own.
This section is organized as follows.
In Sec. \ref{sec:Definition of Behavioral States},
we introduce the deterministic models of the calling state, the resting state and the satellite state, respectively.
In Sec. \ref{sec:Definition of Transitions},
we formulate the transitions among the three states.
In Sec. \ref{sec:Parameter values},
we fix the parameters of the proposed model
based on the behavioral features of male frogs.
\subsection{Formulation of Behavioral States}
\label{sec:Definition of Behavioral States}
We formulate three behavioral states (the calling state, the resting state and the satellite state)
as separate deterministic models.
It should be noted that the models of the calling state and the resting state were already proposed
in our previous study \cite{Aihara_2019}.
Here we newly introduce the model of the satellite state
based on the model of the resting state.
First, we describe the calling state of the $n$th frog as follows \cite{Aihara_2019}:
\begin{eqnarray}
\frac{d\theta_{n}}{dt} &=& \omega_{n} + \sum_{m \mathrm{~for~} r_{nm} < r_{\mathrm{acoustic}}}^{N} \delta(\theta_{m}) \Gamma_{nm}(\theta_{n}-\theta_{m}),
\label{eq:call_theta}\\
\frac{dT_{n}}{dt} &=& \delta(\theta_{n}),
\label{eq:call_T}\\
\frac{dE_{n}}{dt} &=& -\delta(\theta_{n}),
\label{eq:call_E}
\end{eqnarray}
where
\begin{eqnarray}
\Gamma_{nm}(\theta_{n}-\theta_{m}) &=& K_{nm} \big[ \sin(\theta_{n}-\theta_{m}) - k \sin(2(\theta_{n}-\theta_{m})) \big].
\label{eq:interaction_term}
\end{eqnarray}
Equation (\ref{eq:call_theta}) is based on a mathematical model called a phase oscillator model
that is derived from simple assumptions about periodicity and interaction \cite{Kuramoto_1984}.
The phase oscillator model can qualitatively reproduce various synchronization phenomena in biological systems \cite{Strogatz, Nenkin, BZ, Walk}
including frog choruses \cite{Aihara_2011, Aihara_2014, Aihara_2019, Aihara_2020}.
We utilize the model to describe periodic calling behavior of male frogs
and also their acoustic interaction.
In Equation (\ref{eq:call_theta}), $\theta_{n}$ is a variable ranging from $0$ to $2\pi$ \cite{Kuramoto_1984}
and represents the phase of calls produced by the $n$th frog \cite{Aihara_2011, Aihara_2014, Aihara_2019}.
Specifically, the $n$th frog is assumed to produce a call when the phase $\theta_{n}$ hits $0$.
$\omega_{n}$ is a positive parameter that determines an intrinsic inter-call interval of this frog
\cite{Aihara_2011, Aihara_2014, Aihara_2019}.
In the second term in the right-hand side of Equation (\ref{eq:call_theta}),
$\delta(\theta_{m})$ is a delta function satisfying the conditions $\delta(\theta_{m})=\infty$ at $\theta_{m}=0$,
$\delta(\theta_{m})=0$ otherwise, and $\int_{t_{m, i}-\epsilon}^{t_{m, i}+\epsilon} \delta(\theta_{m}(t)) dt = 1$
($i$ represents the index of the calls produced by the $m$th frog,
and $\epsilon$ is a positive parameter that is much smaller than the inter-call interval)
\cite{Aihara_2019}.
$\Gamma_{nm}(\theta_{n}-\theta_{m})$ is a $2\pi$-periodic function of the phase difference $\theta_{n}-\theta_{m}$
\cite{Kuramoto_1984}.
In addition, $\Gamma_{nm}(\theta_{n}-\theta_{m})$ is given by Equation (\ref{eq:interaction_term})
with two kinds of coupling strength $K_{nm}$ and $k$
because this function can qualitatively reproduce alternating chorus patterns of male frogs
\cite{Aihara_2011, Aihara_2019}
that are generally observed within each chorusing bout
\cite{Gerhardt_2002, Wells_2007, Brush_1989, Jones_2014}.
$r_{nm}$ is the distance between the $n$th and $m$th frogs,
and $r_{\mathrm{acoustic}}$ is a threshold within which male frogs can acoustically interact with each other.
Specifically, we assume that the $n$th frog pays attention to the first and second nearest callers
based on the previous studies reporting that male frogs acoustically interact with a few neighbors in an aggregation
\cite{Gerhardt_2002, Wells_2007, Brush_1989, Aihara_2016, Jones_2014}.
Consequently, the second term in the right-hand side of Equation (\ref{eq:call_theta})
describes an instantaneous selective interaction between the $n$th and $m$th frogs
(In other words, when the $m$th frog produces a call at $\theta_{m}=0$,
the phase of the $n$th frog that selectively pays its attention to the $m$th frog is instantaneously affected by the call).
In Equations (\ref{eq:call_T}) and (\ref{eq:call_E}), $T_{n}$ and $E_{n}$ describe
the physical fatigue and energy of the $n$th frog
and are restricted to the ranges of $0 \le T_{n} \le T_{\mathrm{max}}$ and $0 \le E_{n} \le E_{\mathrm{max}}$,
respectively \cite{Aihara_2019}.
Because of the definition of $\delta(\theta_{n})$,
the physical fatigue $T_{n}$ is incremented by $1$
and the energy $E_{n}$ is decremented by $1$
when the $n$th frog produces a call at $\theta_{n}=0$ \cite{Aihara_2019}.
As explained at the beginning of this section,
our model assumes that males in the calling state can transit to the resting state
(Figure \ref{fig:framework}).
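To make the instantaneous coupling concrete, a minimal event-driven integration of Equations (\ref{eq:call_theta})--(\ref{eq:call_E}) is sketched below (Python), for an aggregation in which all frogs are currently in the calling state. The delta-function coupling is implemented by applying the phase shift $\Gamma_{nm}$ to each listener at the moment a caller's phase crosses $2\pi$ (identified with $0$); the coupling constants and the attention lists are placeholders.
\begin{verbatim}
import numpy as np

def step_calling(theta, T, E, omega, Knm, k2, listeners, dt):
    """One Euler step of Eqs. (call_theta)-(call_E).

    listeners[m] : indices of the frogs that attend to caller m
    Knm, k2      : coupling strengths of Eq. (interaction_term)
    """
    theta = theta + omega * dt
    fired = theta >= 2.0 * np.pi    # a call occurs at theta = 0 (mod 2 pi)
    theta[fired] -= 2.0 * np.pi
    T[fired] += 1.0                 # fatigue grows by one per call
    E[fired] -= 1.0                 # energy drops by one per call
    for m in np.flatnonzero(fired): # instantaneous pulse coupling
        for n in listeners[m]:
            dphi = theta[n] - theta[m]
            theta[n] += Knm * (np.sin(dphi) - k2 * np.sin(2.0 * dphi))
    return theta, T, E
\end{verbatim}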
Next, we describe the resting state and the satellite state as follows \cite{Aihara_2019}:
\begin{eqnarray}
\frac{d\theta_{n}}{dt} &=& 0,
\label{eq:sleep_theta}\\
\frac{dT_{n}}{dt} &=& -\alpha,
\label{eq:sleep_T}\\
\frac{dE_{n}}{dt} &=& 0.
\label{eq:sleep_E}
\end{eqnarray}
Here we utilize the same framework
because a male frog is assumed to stay silent during both states.
Equation (\ref{eq:sleep_theta}) means that the phase $\theta_{n}$ remains constant without hitting $0$
and therefore a model frog does not produce any call.
Equations (\ref{eq:sleep_T}) and (\ref{eq:sleep_E})
describe the time evolution of the physical fatigue $T_{n}$ and the energy $E_n$, respectively.
These equations mean that the physical fatigue $T_{n}$ decreases
and the energy $E_n$ remains constant,
which is consistent with the assumed behavioral features of both states.
As explained at the beginning of this section,
model frogs in the resting state can transit to the calling state or the satellite state
while the frogs in the satellite state can transit only to the resting state
(Figure \ref{fig:framework}).
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{Figure2.eps}
\end{center}
\caption{
A mathematical model incorporating excitatory and inhibitory interactions among male frogs.
Each frog is assumed to be in a calling state, a resting state, or a satellite state, and transit among the three states.
The transition between the calling state and the resting state is described by a framework proposed by Aihara et al., 2019:
the resting males with low physical fatigue and high energy have a high probability to be activated by the calls of neighboring males
and start calling (corresponding to {\it excitatory interaction})
while calling males attempt to continue calling until physical fatigue exceeds a threshold value.
We newly formulate the transition between the satellite state and the resting state.
Specifically, male frogs are assumed to transit from the resting state to the satellite state at high probability
when the attractiveness of their calls is moderately inferior to that of a neighboring male
(corresponding to {\it inhibitory interaction}).
Inversely, male frogs transit from the satellite state to the resting state
when the attractiveness of their calls is moderately superior to that of a neighboring male.
}
\label{fig:framework}
\end{figure*}
\subsection{Formulation of Transitions}
\label{sec:Definition of Transitions}
Here we formulate the transitions among three behavioral states
(i.e., the calling state, the resting state and the satellite state).
First, we utilize an integer $s_{n}$ to discriminate among the three states:
namely, the calling state of the $n$th frog is described as $s_{n}=1$,
the resting state is described as $s_{n}=0$,
and the satellite state is described as $s_{n}=-1$.
Second, we propose a stochastic mathematical model in which each male spontaneously switches its state
depending on his current condition and also the interaction with other males.
The transition between the calling state and the resting state is formulated
according to our previous study \cite{Aihara_2019}.
Specifically, our previous model \cite{Aihara_2019} assumes that
(1) a male frog in the calling state has a high probability to continue calling when his physical fatigue is low,
and
(2) a male frog in the resting state has a high probability to be activated by the calls of neighboring males when his energy is high and his physical fatigue is low.
Based on the first assumption, the probability of the transition from the calling state to the resting state
is given as follows \cite{Aihara_2019}:
\begin{eqnarray}
P^{\mathrm{call \to rest}}_{n} &=& G_{1}(T_{n}),
\label{eq:p_call-rest}
\end{eqnarray}
where
\begin{eqnarray}
G_{1}(T_{n}) &=& \frac{1}{\exp(-\beta_{\mathrm{fatigue}} (T_{n} - \Delta T))+1}.
\label{eq:G_1}
\end{eqnarray}
$G_{1}(T_{n})$ is a logistic function that increases depending on a parameter $\beta_{\mathrm{fatigue}}$.
Another parameter $\Delta T$ represents the inflection point of the logistic function.
Equations (\ref{eq:p_call-rest}) and (\ref{eq:G_1}) mean
that the probability to stop calling becomes much higher
when the physical fatigue $T_{n}$ exceeds the parameter $\Delta T$ (Figure \ref{fig:framework}).
Next, based on the second assumption,
the probability of the transition from the resting state to the calling state is given as follows \cite{Aihara_2019}:
\begin{eqnarray}
P^{\mathrm{rest \to call}}_{n} &=& G_{2}(T_{n}) G_{3}(E_{n}) G_{4}(\vec{s}_{n}^{\mathrm{neighbor}}),
\label{eq:p_rest-call}
\end{eqnarray}
where
\begin{eqnarray}
G_{2}(T_{n}) &=& \frac{1}{\exp(\beta_{\mathrm{fatigue}} (T_{n} - (T_{\mathrm{max}} - \Delta T)))+1},
\label{eq:G_2}\\
G_{3}(E_{n}) &=& -\frac{2}{\exp(\beta_{\mathrm{energy}} E_{n})+1} + 1,
\label{eq:F}
\end{eqnarray}
\begin{flalign}
G_{4}(\vec{s}_{n}^{\mathrm{neighbor}}) &=
\begin{cases}
p_{\mathrm{high}} & \text{(If a vector $\vec{s}_{n}^{\mathrm{neighbor}}$ has one or more elements of $1$)},\\
p_{\mathrm{low}} & \text{(If a vector $\vec{s}_{n}^{\mathrm{neighbor}}$ has no element of $1$)}.
\end{cases}
\label{eq:H}
\end{flalign}
$G_{2}(T_{n})$ and $G_{3}(E_{n})$ represent the effects of the physical fatigue $T_{n}$ and the energy $E_{n}$ on the transition.
Specifically, $G_{2}(T_{n})$ is a logistic function that increases depending on a parameter $\beta_{\mathrm{fatigue}}$
when the physical fatigue $T_{n}$ decreases,
and $G_{3}(E_{n})$ is another logistic function that decreases depending on a parameter $\beta_{\mathrm{energy}}$
when the energy $E_{n}$ decreases (Figure \ref{fig:framework}).
In Equation (\ref{eq:H}), $\vec{s}_{n}^{\mathrm{neighbor}}$ represents the states of males
that are closer to the focal frog than a threshold value of $r_{\mathrm{acoustic}}$;
$p_{\mathrm{high}}$ and $p_{\mathrm{low}}$ are positive parameters
that satisfy the relationship $p_{\mathrm{high}} \gg p_{\mathrm{low}} > 0$.
Subsequently, $G_{4}(\vec{s}_{n}^{\mathrm{neighbor}})$ is a discrete function
that takes $p_{\mathrm{high}}$ when one or more males are calling in his interaction range
(or it takes $p_{\mathrm{low}}$ when no males are calling).
Equations (\ref{eq:p_rest-call})--(\ref{eq:H}) assume that
male frogs with lower physical fatigue and higher energy
have a high probability to be activated by the calls of other males
and then transit to the calling state,
corresponding to {\it excitatory interaction} that is a focus of this study.
Next, we formulate the transition between the resting state and the satellite state
as another stochastic process
by focusing on relative attractiveness between neighboring males.
In frog choruses, temporal traits of calls vary among individual males
and work as an important indicator to determine their attractiveness towards conspecific females
\cite{Gerhardt_2002, Wells_2007}.
In particular, the number of calls dominantly affects the attractiveness.
Playback experiments using various frog species showed
that loudspeakers broadcasting more calls per unit time
have a higher probability of attracting females
than loudspeakers broadcasting fewer calls \cite{Ryan_1992}.
Based on this feature, we treat the number of calls as a representative indicator of the attractiveness
and formulate the probability of the transition from the resting state to the satellite state as follows:
\begin{eqnarray}
P^{\mathrm{rest \to satellite}}_{n} &=& F_{1}(N_{k}-N_{n}),
\label{eq:p_rest-satellite}
\end{eqnarray}
where
\begin{eqnarray}
F_{1}(N_{k}-N_{n}) &=& \frac{1}{\exp(-\beta_{\mathrm{satellite}}(N_{k}-N_{n} - \Delta N))+1}.
\label{eq:F_1}
\end{eqnarray}
Empirical studies demonstrated that
males intermittently start and stop calling in various frog species
(Figure \ref{fig:concept}(B) for the case of {\it Hyla japonica})
\cite{Gerhardt_2002, Wells_2007, Whitney_1975, Schwarts_1994, Jones_2014, Aihara_2019}.
To precisely capture this temporal feature as well as take the effect of the call number into consideration,
we describe the number of calls included in the most recent calling bout as an integer $N_{n}$
and utilize it as the current attractiveness of the $n$th frog.
$F_{1}(N_{k}-N_{n})$ is a logistic function with a steepness $\beta_{\mathrm{satellite}}$
that takes larger value
when the difference of the call number ($N_{k}-N_{n}$) is larger than a threshold value $\Delta N$.
Here we choose a specific male as the $k$th frog in Equations (\ref{eq:p_rest-satellite}) and (\ref{eq:F_1})
if he is positioned within a threshold distance $r_{\mathrm{satellite}}$
and is in the calling state or the resting state.
These assumptions mean that the focal model frog (the $n$th frog) is a satellite of a neighboring male (the $k$th frog)
and attempts to intercept a female attracted to the neighbor.
Subsequently, a model frog switches from the resting state to the satellite state
when his attractiveness is inferior to that of the neighboring male,
corresponding to {\it inhibitory interaction} that is a focus of this study.
Next, we formulate the probability of the transition from the satellite state to the resting state as follows:
\begin{eqnarray}
P^{\mathrm{satellite \to rest}}_{n} &=& F_{2}(N_{k}-N_{n}),
\label{eq:p_satellite-rest}
\end{eqnarray}
where
\begin{eqnarray}
F_{2}(N_{k}-N_{n}) &=& \frac{1}{\exp(\beta_{\mathrm{satellite}}(N_{k}-N_{n} + \Delta N))+1}.
\label{eq:F_2}
\end{eqnarray}
$F_{2}(N_{k}-N_{n})$ is a logistic function with the steepness $\beta_{\mathrm{satellite}}$
and becomes larger when the difference of the call number ($N_{k}-N_{n}$) is lower than the threshold $\Delta N$.
Equations (\ref{eq:p_satellite-rest}) and (\ref{eq:F_2}) assume that
a model frog switches from the satellite state to the resting state
when his attractiveness is superior to that of the neighbor.
There are two choices for the transition from the resting state
(i.e., the transition from the resting state to the calling state or the satellite state; see also Figure \ref{fig:framework}).
When determining to which state each model frog transits,
we compare the probabilities of the two transitions
(i.e., $P^{\mathrm{rest \to call}}_{n}$ and $P^{\mathrm{rest \to satellite}}_{n}$)
and choose the transition with the higher probability of happening.
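A compact implementation of these transition rules is sketched below (Python); the logistic functions follow Equations (\ref{eq:p_call-rest})--(\ref{eq:F_2}), and the final branching implements the rule of choosing the likelier of the two transitions out of the resting state. Parameter names mirror the text, and $N_k$ should be the call number of the candidate satellite target (with the satellite transition disabled when no such neighbor exists).
\begin{verbatim}
import numpy as np

def logistic(x, beta):
    return 1.0 / (np.exp(-beta * x) + 1.0)

def next_state(state, T, E, N_self, N_k, any_caller, p):
    """Draw the next state (1: calling, 0: resting, -1: satellite)."""
    if state == 1:                                   # Eq. (p_call-rest)
        P = logistic(T - p["dT"], p["b_fat"])
        return 0 if np.random.rand() < P else 1
    if state == 0:
        G2 = logistic(-(T - (p["Tmax"] - p["dT"])), p["b_fat"])
        G3 = 1.0 - 2.0 / (np.exp(p["b_en"] * E) + 1.0)
        G4 = p["p_high"] if any_caller else p["p_low"]
        P_call = G2 * G3 * G4                        # Eq. (p_rest-call)
        P_sat = logistic(N_k - N_self - p["dN"], p["b_sat"])
        P, target = max((P_call, 1), (P_sat, -1))    # likelier transition
        return target if np.random.rand() < P else 0
    P = logistic(-(N_k - N_self + p["dN"]), p["b_sat"])
    return 0 if np.random.rand() < P else -1         # Eq. (p_satellite-rest)
\end{verbatim}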
\subsection{Parameter values}
\label{sec:Parameter values}
As explained in Sec. \ref{sec:Definition of Behavioral States} and \ref{sec:Definition of Transitions},
we previously proposed a mathematical model on the transition between the calling state and the resting state
and succeeded in reproducing the occurrence of the collective choruses \cite{Aihara_2019}.
Because this study also focuses on the collective choruses,
we utilize the same parameter values \cite{Aihara_2019} for the corresponding parts of the proposed model
(i.e., the deterministic model of the calling state
(Equations (\ref{eq:call_theta})--(\ref{eq:call_E})),
the deterministic model of the resting state
(Equations (\ref{eq:sleep_theta})--(\ref{eq:sleep_E})),
and the stochastic model of the transition between the calling state and the resting state
(Equations (\ref{eq:p_call-rest})--(\ref{eq:H}))).
Next, we fix or vary the parameters included
in the stochastic model of the transition between the satellite state and the resting state
(Equations (\ref{eq:p_rest-satellite})--(\ref{eq:F_2})).
Because a male frog in the satellite state attempts to intercept a conspecific female attracted to his calling neighbor,
he needs to stay in the vicinity of the neighbor.
In our model, this feature is captured by the threshold distance $r_{\mathrm{satellite}}$
within which a model frog can transit to the satellite state (Sec.\ref{sec:Definition of Transitions}).
Therefore, we fix $r_{\mathrm{satellite}}$ at a small value of $0.4$ m
that allows the satellite male to immediately catch a female attracted to the neighbor.
Then, we vary the remaining parameters of $\beta_{\mathrm{satellite}}$ and $\Delta N$
(see Equations (\ref{eq:p_rest-satellite})--(\ref{eq:F_2}))
that are related to the transition between the satellite state and the resting state.
The values and meanings of all the parameters are summarized
in Tables S1 -- S3 in Supplementary Information.
\section{Numerical simulation}
\label{sec:Numerical simulation}
We performed numerical simulations of the proposed model
to analyze how the coexistence of excitatory and inhibitory interactions affects
chorus activity and energy efficiency in aggregations of male frogs.
First, to simply test the validity of the proposed model,
we simulated an aggregation of three model frogs that is the minimum unit
exhibiting both the collective chorus and satellite behavior
(Sec. \ref{sec:Small chorus}).
Second, we simulated the aggregations of $10$--$20$ model frogs
that more accurately imitate the aggregations of actual male frogs (Sec. \ref{sec:Larger chorus}).
\subsection{Small aggregation of male frogs}
\label{sec:Small chorus}
For the numerical simulation of a small aggregation,
we assume that three model frogs are distributed at different inter-frog distances
along a line (Figure \ref{fig:Small_aggregation}A).
Frogs 1 and 2 are positioned at a close distance of $0.1$ m apart,
and the pair of Frogs 2 and 3 are positioned at a long distance of $1.0$ m apart.
Because the parameter $r_{\mathrm{satellite}}$ was set at $0.4$ m
(see Sec. \ref{sec:Parameter values}),
the inhibitory interaction can occur only in the closest pair (Frogs 1 and 2)
and the excitatory interaction can occur in the other pairs (Frogs 1 and 3 and Frogs 2 and 3).
Numerical simulation of the proposed model demonstrated the occurrence of collective choruses and satellite behavior.
Figure \ref{fig:Small_aggregation}B and C shows the time evolution of the physical fatigue $T_{n}$ and the state variable $s_{n}$
with the parameters $\Delta N = 15$ and $\beta_{\mathrm{satellite}}=0.5$,
indicating that Frogs 1 and 3 exhibit collective choruses by synchronously switching
between the resting state ($s_{n}=0$) and the calling state ($s_{n}=1$)
while Frog 2 engages in the satellite state ($s_{n}=-1$).
To examine the temporal structure within each chorus, we calculated the phase difference between Frogs 1 and 3.
Figure \ref{fig:Small_aggregation}D shows that the phase difference ($\theta_{1}-\theta_{3}$) converged to $\pi$ in each chorus,
which corresponds to anti-phase synchronization of the two frogs.
Note that the anti-phase synchronization within choruses
can be generally observed in frog choruses \cite{Gerhardt_2002, Wells_2007, Brush_1989, Jones_2014}
and likely prevents the males from interfering with each other's calls
\cite{Schwartz_1987, Bee, Henry_2019}.
Thus, the proposed model can reproduce three behaviors of actual male frogs,
i.e., satellite behavior, collective choruses and also anti-phase synchronization within choruses.
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{Figure3.eps}
\end{center}
\caption{
Numerical simulation on the small aggregation of three male frogs.
(A) Spatial structure of the frogs.
Two of the three frogs are assumed to be positioned at a close distance within which satellite behavior can be induced.
(B) Time evolution of the physical fatigue $T_{n}$.
(C) Time evolution of the state variable $s_{n}$.
Calling state, resting state and satellite state are described by $s_{n}=1$, $0$ and $-1$, respectively.
(D) Time evolution of the phase difference ($\theta_{1}-\theta_{3}$) in each chorus.
The phase $\theta_{n}$ describes the call timing of the $n$th frog.
While Frog 2 engages in the satellite state, Frogs 1 and 3 exhibit collective choruses
within which they call alternately in anti-phase.
}
\label{fig:Small_aggregation}
\end{figure*}
Next, we analyzed the detailed trait of mode switching over a long time scale.
Figure \ref{fig:SmallAggre_LongSim}A and B shows the time evolution of the state variable $s_{n}$ and that of the energy $E_{n}$, respectively.
The model frogs can switch between the collective choruses (pink region) and the satellite behavior (light blue region).
Specifically, the satellite role spontaneously switches between the members of the closest pair, Frogs 1 and 2 (Figure \ref{fig:SmallAggre_LongSim}A),
and the pace of the energy consumption is slowed down (Figure \ref{fig:SmallAggre_LongSim}B).
To further examine the generality of these results,
we varied the parameters $\Delta N$ and $\beta_{\mathrm{satellite}}$ that dominantly affect
the probability of the transition to the satellite state
(see Equations (\ref{eq:p_rest-satellite})--(\ref{eq:F_2})),
and repeated the simulation $1000$ times at each parameter value with different initial conditions.
Figure \ref{fig:SmallAggre_ParaDep} shows that
(1) the frequency of the transition decreases as the parameter $\Delta N$ increases
and (2) the duration of the satellite state increases as the parameter $\Delta N$ increases.
Thus, the simulation of the proposed model indicates the dynamical switching
between the collective choruses and satellite behavior
while the temporal feature varies especially depending on the parameter $\Delta N$.
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{Figure4.eps}
\end{center}
\caption{
Numerical simulation on the small aggregation over a longer time scale.
(A) Time evolution of the state variable $s_{n}$.
While one of the closest pair (Frog 1 or Frog 2) engages in satellite behavior (light blue region),
the remaining males join in collective choruses (pink region).
(B) Time evolution of the energy $E_{n}$.
The pace of energy consumption is slowed down in the closest pair
because of intermittent transitions into the satellite state.
}
\label{fig:SmallAggre_LongSim}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{Figure5.eps}
\end{center}
\caption{
Numerical simulation on the parameter dependency of satellite behavior.
Parameters $\Delta N$ and $\beta_{\mathrm{satellite}}$
dominantly affect the probability of the transition between the satellite state and the resting state
(see Equations (\ref{eq:p_rest-satellite})--(\ref{eq:F_2})).
(A) The frequency of the transition into the satellite state.
(B) The duration of the satellite state.
We repeated the simulation $1000$ times at each parameter set with randomized initial conditions.
Plots and bars represent the mean and standard deviation of each quantity.
While the frequency of the transition decreases as the parameter $\Delta N$ increases,
the duration of the satellite state increases as the parameter $\Delta N$ increases.
}
\label{fig:SmallAggre_ParaDep}
\end{figure*}
\subsection{Large aggregations of male frogs}
\label{sec:Larger chorus}
For the numerical simulation of large aggregations,
we assume that $10$--$20$ frogs are distributed around a breeding site
(Figure \ref{fig:LargeAggre_concept}A).
In the wild the positions of male frogs change every night,
but it is common that they are distributed along the edge of a water body
\cite{Aihara_2014, Aihara_2016, Aihara_2021, Bando_2016, Aihara_2017}.
To simply imitate the spatial distribution as well as the variance among nights,
we positioned the model frogs along the edge of a circular water body
and then randomized their positions in each trial of the simulation.
Numerical simulation demonstrated the occurrence of collective choruses and satellite behavior in a large aggregation (Figure \ref{fig:LargeAggre_concept}A).
Figure \ref{fig:LargeAggre_concept}B and C shows the time evolution of the physical fatigue $T_{n}$
and that of the number of satellite males, respectively.
While the majority of the model frogs exhibit collective choruses
by almost synchronizing the dynamics of the physical fatigue $T_{n}$,
the remaining frogs engage in the satellite behavior.
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{Figure6.eps}
\end{center}
\caption{
Numerical simulation on a large aggregation of 15 male frogs.
(A) Schematic diagram of the positions of male frogs.
In the simulation we assume that model frogs are distributed along the edge of a breeding water body,
which is a common spatial distribution of male frogs in the wild.
(B) Time evolution of the physical fatigue $T_{n}$.
(C) Time evolution of the number of satellite males.
While the majority of the model frogs exhibit collective choruses
by almost synchronizing the dynamics of the physical fatigue $T_{n}$,
the remaining frogs engage in satellite behavior.
}
\label{fig:LargeAggre_concept}
\end{figure*}
Next, we numerically evaluated the performance of large aggregations.
Figure \ref{fig:LargeAggre_Quality}A shows the entire time evolution of the energy $E_{n}$
and that of the number of calling males.
While the pace of energy consumption is slowed down due to intermittent transitions into the satellite state
(the top panel of Figure \ref{fig:LargeAggre_Quality}A),
several individuals consistently join in collective choruses
(the bottom panel of Figure \ref{fig:LargeAggre_Quality}A).
Given that male frogs aggregate and produce sounds to attract conspecific females,
the duration and size of the choruses likely determine the mate attraction performance of the whole aggregation
(see the Discussion for details).
We quantify this feature by using {\it energy depletion time} and {\it chorus size}.
Specifically, {\it energy depletion time} is defined
as the duration until the energies of all the frogs have become less than $1\%$ of $E_{\mathrm{max}}$,
and {\it chorus size} is defined as the maximum number of calling frogs in each chorus.
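Both statistics are straightforward to compute from the simulation output; the sketch below (Python) assumes hypothetical arrays \texttt{E\_hist} (time $\times$ frogs energies) and \texttt{s\_hist} (time $\times$ frogs states, with $1$ denoting calling), together with a list of chorus intervals, all placeholder names.
\begin{verbatim}
import numpy as np

def energy_depletion_time(E_hist, t, Emax):
    """First time at which every frog's energy is below 1% of Emax."""
    depleted = np.all(E_hist < 0.01 * Emax, axis=1)
    idx = np.flatnonzero(depleted)
    return t[idx[0]] if idx.size else np.inf

def chorus_sizes(s_hist, chorus_intervals):
    """Maximum number of simultaneously calling frogs in each chorus."""
    n_calling = (s_hist == 1).sum(axis=1)
    return [n_calling[a:b].max() for a, b in chorus_intervals]
\end{verbatim}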
Then, we varied the aggregation size between $10$ and $20$ model frogs
and examined how the two factors depend on the number of the satellite males.
Note that the number of the satellite males could be different in each trial of the simulations
because we randomized the positions of the males.
Figure \ref{fig:LargeAggre_Quality}B-D shows the {\it energy depletion time} and {\it chorus size}
in the aggregations of $10$ frogs, $15$ frogs and $20$ frogs, respectively.
While the energy depletion time is much longer in the aggregations with satellite males
than in the aggregations with no satellite males
(the top panels of Figure \ref{fig:LargeAggre_Quality}B-D),
the chorus size divides into two levels depending on the number of satellite males
(the bottom panels of Figure \ref{fig:LargeAggre_Quality}B-D).
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{Figure7.eps}
\end{center}
\caption{
Numerical simulation on the performance of large aggregations.
(A) Schematic diagram of {\it energy depletion time} and {\it chorus size}.
(B)--(D) The dependency of {\it energy depletion time} and {\it chorus size}
on the number of satellite males
in the aggregations of $10$ frogs, $15$ frogs and $20$ frogs, respectively.
We repeated the simulation $200$ times in each aggregation with randomized initial conditions,
and overlaid translucent plots in each graph.
While the energy depletion time is drastically prolonged
in the aggregations with satellite males (the top panels),
the chorus size splits into two levels depending on the number of satellite males (the bottom panels).
}
\label{fig:LargeAggre_Quality}
\end{figure*}
\clearpage
\section{Discussion}
\label{sec:discussion}
The roles of excitatory and inhibitory interactions in aggregations of male frogs were theoretically studied
from the viewpoint of energy efficiency and chorus activity.
First, we proposed a hybrid dynamical model in which male frogs switch
their behavioral state
based on their internal condition and the interactions with other males.
In particular, the effect activating the calling behavior of other males
was treated as an excitatory interaction
while the effect inducing the satellite behavior of neighbors
was treated as an inhibitory interaction.
Second, we performed numerical simulations on the assumption of different aggregation sizes.
The simulation of a small aggregation (three frogs) reproduced
both the collective chorus and satellite behavior
that are observed in the aggregations of actual male frogs.
The simulation of large aggregations ($10$-$20$ frogs) demonstrated that
(1) the energy depletion time is prolonged due to the existence of satellite males
and (2) the chorus activity is divided into two levels over the whole chorusing period.
Consequently, this theoretical study indicates that
satellite males can contribute to the energy efficiency of the frog aggregations
by splitting the maximum chorus activity over the whole period.
In the proposed model, the parameter $\Delta N$ describes the threshold of the difference of call number
within which male frogs transit to the satellite state.
Numerical simulation of the small aggregation demonstrated that the occurrence of satellite behavior depends on this parameter
(Figure \ref{fig:SmallAggre_ParaDep}).
Namely, model frogs often transit to the satellite state when the parameter $\Delta N$ is small,
but they seldom transit to the satellite state when $\Delta N$ is large.
Combined with the fact that the features of satellite behavior vary among frog species \cite{Gerhardt_2002},
this parameter is essential to further understand the origin of the behavioral variance.
For instance, a small value of $\Delta N$ corresponds to species
that are sensitive to the difference of call number with their competitor,
resulting in frequent transitions into the satellite state;
a large value of $\Delta N$ corresponds to species
that are insensitive to the difference of call number,
resulting in rare transitions into the satellite state.
Thus, this study using the mathematical model allows us to infer the mechanisms of satellite behavior
by comparing the numerical simulations with empirical studies.
However, playback experiments using loudspeakers demonstrated that
other call traits such as loudness, frequency and call complexity
also affect the preference of female frogs \cite{Ryan_1992}.
Further extension of our model is necessary to more precisely formulate the mechanisms of satellite behavior
although call number is one of the most dominant factors determining the attractiveness of the callers.
Numerical simulations of large aggregations indicate that
satellite males can prolong the energy depletion time of the entire aggregation
by splitting the chorus activity into two levels (Figure \ref{fig:LargeAggre_Quality}).
Because the calls of male frogs attract conspecific females that are spreading and moving around the breeding site,
the long chorus realized by the prolonged energy depletion time
likely increases the total number of females attracted to the breeding site.
On the other hand, the division of the chorus activity can be either an advantage or a disadvantage.
Playback experiments using loudspeakers demonstrated that various chorus sizes differently attract females
\cite{Ryan_1981}.
If the two activity levels realized by the satellite males are both effective,
the mate attraction performance is likely optimized.
However, the effective chorus size can vary among species,
and, therefore, further analyses combining numerical simulations and empirical studies are required
to comprehensively evaluate the performance per aggregation and also per individual.
Our numerical simulation assumed a specific spatial structure,
i.e. the linear distribution of male frogs along a water body
(Figure \ref{fig:LargeAggre_concept}A).
While this is a common distribution of male frogs as observed in {\it Hyla japonica} \cite{Aihara_2014, Aihara_2016},
{\it Rhacophorus schlegelii} \cite{Aihara_2021, Bando_2016} and {\it Litoria chloris} \cite{Aihara_2017},
more scattered patterns are possible \cite{Gerhardt_2002}.
To examine the robustness of our results,
we performed additional simulations on the assumption of other distributions:
(1) an almost linear distribution that was slightly scattered along a circular field
and (2) a distribution that was scattered within a circular field.
Both simulations demonstrated that the energy depletion time is prolonged in the aggregation with satellite males
while the chorus activity is split into two levels
(Figures S1 and S2 in Supplementary Information).
Thus, the results of the simulations hold in these distributions,
indicating the robustness of the results.
Future directions of this study include the application of the proposed model
to engineering fields.
For instance, we previously claimed that the mathematical model of frog choruses
can be applied to the autonomous distributed control of wireless sensor networks (WSNs) \cite{Aihara_2019}.
WSNs are known as a sensing system in which
many sensor nodes deliver a data packet to a specific node via multi-hop communication \cite{Dressler_2007, Rawat_2014}.
In general, WSNs are constructed as follows:
(1) many nodes are spatially distributed with non-uniform density to monitor a wide area,
(2) each node senses the surrounding condition and sends the information as a data packet to neighboring nodes,
and (3) the nodes deliver the data packet to a sink node (a node that collects all the necessary information)
via autonomous, multi-hop communication among neighboring nodes.
Because each node is usually driven by a limited amount of battery,
there are two factors essential to increase the performance of WSNs:
{\it energy efficiency} and {\it reliable communication}.
We showed that the mathematical model reproducing the collective frog choruses
allows us to reduce energy consumption by collectively switching the active and inactive states of the sensor nodes,
and allows us to increase the reliability of communication by avoiding the collision of the data packets among neighboring nodes \cite{Aihara_2019}.
To further improve the energy efficiency of WSNs, we need to focus on the spatial distribution.
Namely, the deployed nodes can be locally dense because of their non-uniform distribution.
In such a locally dense area
only a single node needs to be active for the sensing and delivery of the data packet.
Therefore, its neighboring nodes are redundant and can be powered off.
The inhibitory interaction of the proposed model induces the inactive state of close neighbors
(Figures \ref{fig:Small_aggregation} and \ref{fig:LargeAggre_concept}),
which can further improve the energy efficiency of WSNs by powering off the redundant nodes.
Note that a similar idea based on the satellite behavior of frogs was already proposed \cite{Mutazono_2012}.
The novelty and advantage of our methodology is to establish the inactive state in a locally dense area
as well as the synchronous switching between the active and inactive states over the whole network.
\section*{Acknowledgements}
We are grateful to T. Ishizuki for drawing a picture of satellite behavior (Figure \ref{fig:concept}C).
\section*{Author Contributions}
IA, DC, YH and MM designed the research; IA performed numerical simulations;
IA, DC and MM wrote the paper.
\section*{Competing Interests}
We declare we have no competing interests.
\section*{Funding}
This study was supported by JSPS Grant-in-Aid for Young Scientists (No. 18K18005)
and Grant-in-Aid for Scientific Research (B) (No. 20H04144).
\section{Introduction}
\label{sec:introduction}
In recent years, the complexity of data used to make decisions has increased dramatically. A prime example of this is the use of online reviews to decide whether to purchase a product or visit a local business; we refer to the objects being reviewed as \emph{items}. Consider data provided by \texttt{Yelp!}\ (see, \url{http://www.yelp.com/}), which allows users to rate items, such as restaurants, convenience stores, and so forth, on a discrete scale from one to five ``stars.'' Additional features of the businesses are also known, such as the spatial location and type of business.
Datasets of this type are typically very large and exhibit complex dependencies.
As an example of this complexity, users of \texttt{Yelp!}\ effectively determine their own standards when rating a local business. We refer to the particular standards a user applies as a \emph{rubric}. We might imagine a latent variable \(Y_{iu}\) representing the \emph{utility}, or benefit, user \(u\) obtained from item \(i\). For a given level of utility, however, different users may still give different ratings due to having different standards for the ratings; for example, one user may rate a restaurant 5 stars as long as it provides a non-offensive experience, a second user might require an exceptional experience to rate the same restaurant 5 stars, and a third user may rate all items with \(1\) star in order to ``troll'' the website. Each of these users are applying different rubrics in translating their utility to a rating for the restaurant. In addition we also expect user-specific selection bias in the sense that some users may rate every restaurant they attend, while other users may only rate restaurants that they feel strongly about.
This article makes several contributions. First, we develop a semiparametric Bayesian model which accounts for the existence of multiple rubrics for ratings data that are observed over multiple locations. To do this, we use a spatial cumulative probit model \citep[e.g., see][]{Higgs, BerretProbit,schliep2015data} in which the break-points are modeled as user-specific random effects. This requires a flexible model for the distribution \(F\) of the random effects, which we model as a discrete mixture. A by-product of our approach is that we obtain a clustering of users according to the rubrics they are using.
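To fix ideas, the generative mechanism can be sketched as follows (Python): each user draws a rubric, a vector of break-points, from a discrete mixture \(F\), and the rating is the bin into which the latent utility \(Y_{iu}\) falls under that user's break-points. The mixture weights, break-points, and utility model below are illustrative placeholders rather than fitted values.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
# a discrete mixture F over rubrics; each rubric is the interior
# break-points (g1, g2, g3, g4) on the latent-utility scale
rubrics = np.array([[-1.5, -0.5, 0.5, 1.5],    # "balanced" rater
                    [-3.0, -2.0, -1.0, 0.0],   # easy-to-please rater
                    [ 0.0,  1.0,  2.0, 3.0]])  # hard-to-please rater
weights = np.array([0.6, 0.25, 0.15])

def rate(mu_item, n_users):
    """Simulate 1-5 star ratings of one item under the model."""
    z = rng.choice(len(rubrics), size=n_users, p=weights)
    Y = mu_item + rng.standard_normal(n_users)  # latent utilities
    # rating = number of break-points below Y, shifted to 1..5
    return 1 + (Y[:, None] > rubrics[z]).sum(axis=1)

print(np.bincount(rate(mu_item=0.3, n_users=1000), minlength=6)[1:])
\end{verbatim}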
Second, we use the multi-rubric model to address novel inferential questions. For example, ratings provided to a user might be adjusted to match that user's rubric, or to provide a distribution for the rating that a user would provide conditional on having a particular rubric. Utilizing this user-specific standardization of ratings may provide users with better intuition for the overall quality of an item.
This adjustment of restaurant quality for the rubrics is similar to, but distinct from, the task of predicting a user's ratings. Good predictive performance is required for \emph{filtering}, which refers to the task of processing the rating history of a user and producing a list of recommendations \citep[for a review, see][]{bobadilla2013recommender}. As a third contribution, we show that allowing for multiple rubrics improves predictions.
The model proposed here also has interesting statistical features. A useful feature of our model is that it allows for more accurate comparisons across items. For example, if a user rates all items with \(1\) star, then the model discounts this user's ratings. This behavior is desirable for two reasons. First, if a user genuinely rates all items with \(1\) star, then their rating is unhelpful. Second, it down-weights the ratings of users who are exhibiting selection bias and only rating items which they feel strongly about, which is desirable as comparisons across items will be more indicative of true quality if they are based on individuals who are not exhibiting large degrees of selection bias.
Additionally, the rubrics themselves may be of intrinsic interest. We demonstrate that the rubrics learned by our model are highly interpretable. For example, when analyzing the \texttt{Yelp!}\ dataset in Section~\ref{sec:data-analysis}, we obtain Figure~\ref{fig:rubric-props} which displays the ratings observed for users assigned to a discrete collection of rubrics and reveals several distinct rating patterns displayed by users.
Other features of our model are also of potentially independent interest. The multi-rubric model can be interpreted as a novel semiparametric random-effects model for ordinal data, even for problems in which the intuition behind the multi-rubric model in terms of latent utility does not hold. Other study designs in which the multi-rubric analogy may be useful include longitudinal survey studies, or more general ordinal repeated-measures designs. Additionally, the cumulative probit model we use to model latent user preferences includes a spatial process to account for spatial dependencies across local businesses. Recovering an underlying spatial process allows for recommending entire regions to visit, rather than singular items. The development of low-rank spatial methodology for large-scale dependent ordinal data is of interest within the spatial literature, as the current spatial literature for ordinal data does not typically address large datasets of a similar order to the \texttt{Yelp!}\ dataset \citep[e.g., see][among others]{de2004simple, de2000bayesian, Chen2000, Cargnoni, KnorrHeld, CarlinDiscreteCat, Higgs, BerretProbit, rainfallCat}. We model the underlying spatial process using a low-rank approximation \citep{johan} to a desired Gaussian process \citep{banerjee,bradleyMSTM}.
Starting from \citet{koren2011ordrec}, several works in the recommender systems literature have considered ordinal matrix factorization (OMF) procedures which are similar in many respects to our model (see also \citealt*{paquet2012hierarchical} and \citealt*{houlsby2014cold}). Our work differs non-trivially from these works in that the multi-rubric model treats the break-points as user-specific random effects, with a nonparametric prior used for the random effects distribution \(F\). For the \texttt{Yelp!}\ dataset, this extra flexibility leads to improved predictive performance. Additionally, our focus in this work extends to inferential goals beyond prediction; for example, depending on the distribution of the rubrics of users who rate a given item, the estimate of overall quality for that item can be shrunk to a variety of different centers, producing novel multiple-shrinkage effects. Several works in the Bayesian nonparametric literature have also considered flexible models for random effects in multivariate ordinal models \citep{kottas2005, deyoreo2014bayesian, bao2015bayesian}, but do not treat the break-points themselves as random effects.
The paper is organized as follows. In Section~\ref{sec:multi-rubric}, we develop the multi-rubric model, with an eye towards the \texttt{Yelp!}\ dataset, and provide implementation details. In Section~\ref{sec:simulation}, we illustrate the methodology on synthetic data designed to mirror features of the \texttt{Yelp!}\ dataset, and demonstrate that we can accurately recover the number and structure of the rubrics when the model holds, as well as effectively estimate the underlying latent utility field. In Section \ref{sec:data-analysis}, we illustrate the methodology on the \texttt{Yelp!}\ dataset. We conclude with a discussion in Section~\ref{sec:discussion}. In supplementary material, we present simulation experiments which demonstrate identifiability of key components of the model.
\section{The Multi-rubric model}
\label{sec:multi-rubric}
\subsection{Preliminary notation}
We consider ordinal response variables \(Z_{iu}\) taking values in \(\{1, \ldots, K\}\). In the context of online ratings data, \(Z_{iu}\) represents the rating that user \(u\) provides for item \(i\). In the context of survey data, on the other hand, \(Z_{iu}\) might represent the response subject \(u\) gives to question \(i\). We do not assume that \(Z_{iu}\) is observed for all \((i,u)\) pairs, but instead we observe \((i,u) \in \mathcal S \subseteq \{1, \ldots, I\} \times \{1, \ldots, U\}\), where \(U\) is the total number of subjects and \(I\) is the total number of items. For fixed \(i\) we let \(\mathcal U_i = \{u : (i,u) \in \mathcal S\}\) be the set of users that rate item \(i\), and similarly for fixed \(u\) we let \(\mathcal I_u = \{i: (i,u) \in \mathcal S\}\) be the set of items that user \(u\) rates.
\subsection{Review of Cumulative Probit Models}
\label{sec:review-probit}
Cumulative probit models \citep{chib-93,albert1997bayesian} provide a convenient framework for modeling ordinal rating data. Consider the univariate setting, with ordinal observations \(\{Z_i : 1 \le i \le N\}\) taking values in \(\{1,\ldots,K\}\). We assume that \(Z_i\) is a rounded version of a latent variable \(Y_i\) such that \(Z_i = k\) if \(\theta_{k-1} \le Y_i < \theta_k\). Here, \(-\infty = \theta_0 \le \theta_1 \le \cdots \le \theta_K = \infty\) are unknown break-points. When \(Y_i\) has the Gaussian distribution \(Y_i \sim \operatorname{Gau}(x_i^{\top}\gamma, 1)\) this leads to the ordinal probit model, where \(\Pr(Z_i = k \mid \theta, \gamma) = \Phi(\theta_k - x_i^{\top}\gamma) - \Phi(\theta_{k-1} - x_i^{\top}\gamma)\).
We assume $\operatorname{Var}(Y_i) = 1$, as the variance of \(Y_i\) is confounded with the break-points \(\theta = (\theta_1, \ldots, \theta_{K-1})\). Any global intercept term is also confounded with the \(\theta\)'s; there are two resolutions to this issue. The first is to fix one of the \(\theta_k\)'s, e.g., \(\theta_1 \equiv 0\). The second is to exclude an intercept term from \(x_i\). While the former approach is often taken \citep{albert1997bayesian, Higgs}, it is more convenient in the multi-rubric setting to use the latter approach to avoid placing asymmetric restrictions on the break-points.
The ordinal probit model is convenient for Bayesian inference in part because it admits a simple data augmentation algorithm which iterates between sampling \(Y_i \stackrel{\textnormal{indep}}{\sim} \operatorname{TruncGau}(x_i^{\top}\gamma, 1, \theta_{Z_i-1}, \theta_{Z_i})\) for \(1 \le i \le N\) and, assuming a flat prior for \(\gamma\), sampling
\begin{math}
\gamma
\sim
\operatorname{Gau}\{(X^{\top} X)^{-1}X^{\top} \bm Y, (X^{\top} X)^{-1}\},
\end{math}
where \(X\) has \(i^{\text{th}}\) row \(x_i^{\top}\) and \(\bm Y = (Y_1, \ldots,
Y_N)\). Here, \(\operatorname{TruncGau}(\mu, \sigma^2, a, b)\) denotes the Gaussian
distribution truncated to the interval \((a,b)\). Additionally, an update for
\(\theta\) is
needed.
Efficient updates for \(\theta\) can be implemented by using a Metropolis-within-Gibbs step to update \(\theta\) as a block (for details, see \citealp{albert1997bayesian}, as well as \citealp{cowles1996accelerating} for alternative MCMC schemes).
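To make the scheme above concrete, the following Python sketch (our own illustrative code, assuming only \texttt{numpy} and \texttt{scipy}; it is not part of any package) carries out one sweep of the data augmentation sampler for fixed break-points:
\begin{verbatim}
import numpy as np
from scipy.stats import truncnorm

def gibbs_sweep(Z, X, gamma, theta, rng):
    # theta has length K+1 with theta[0] = -inf and theta[K] = +inf;
    # Z takes values in {1, ..., K}.
    mu = X @ gamma
    # Y_i | Z_i ~ TruncGau(mu_i, 1, theta_{Z_i - 1}, theta_{Z_i});
    # truncnorm takes bounds standardized to the Gau(0, 1) scale.
    Y = mu + truncnorm.rvs(theta[Z - 1] - mu, theta[Z] - mu,
                           random_state=rng)
    # gamma | Y ~ Gau((X'X)^{-1} X'Y, (X'X)^{-1}) under a flat prior.
    XtX_inv = np.linalg.inv(X.T @ X)
    gamma = rng.multivariate_normal(XtX_inv @ (X.T @ Y), XtX_inv)
    return gamma, Y
\end{verbatim}
A complete sampler would alternate this sweep with the Metropolis-within-Gibbs block update of \(\theta\) mentioned above.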
\subsection{Description of the proposed model}
\label{sec:description}
\subsubsection{The multi-rubric model}
We develop an extension of the cumulative probit model to generic repeated-measures ordinal data \(\{Z_{iu} : (i,u) \in \mathcal S\}\). Following \citet{albert1997bayesian} we introduce latent utilities \(Y_{iu}\), but specify a generic ANOVA model
\begin{align}
\label{eq:anova}
Y_{iu} = f_{iu} + \nu_u + \xi_i + \epsilon_{iu}, \qquad \epsilon_{iu} \stackrel{\textnormal{iid}}{\sim} \operatorname{Gau}(0, 1),
\end{align}
where \(\nu_u\) and \(\xi_i\) are main effects and \(f_{iu}\) is an interaction effect. The multi-rubric model modifies the cumulative probit model by replacing the break-point parameter \(\theta\) with \(u\)-specific random effects \(\theta_u = (\theta_{u0}, \ldots, \theta_{uK})\) with \([\theta_u \mid F] \stackrel{\textnormal{indep}}{\sim} F\) for some unknown \(F\). As before, we let \(Z_{iu} = k\) if \(\theta_{u(k-1)} \le Y_{iu} < \theta_{uk}\).
For concreteness, we take \(F\) to be a finite mixture \(F = \sum_{m = 1}^M \omega_m \delta_{\theta^{(m)}}\) for some large \(M\), with \(\theta^{(m)} \stackrel{\textnormal{iid}}{\sim} H\) and \(\omega \sim \operatorname{Dirichlet}(a, \ldots, a)\), where \(\delta_{\theta^{(m)}}\) is a point-mass distribution at \(\theta^{(m)}\). We note that it is also straight-forward to use a nonparametric prior for \(F\) such as a Dirichlet process \citep{escobarwest1995, ferguson1973}. We refer to the random effects \(\theta^{(1)}, \ldots, \theta^{(M)}\) as \emph{rubrics}. Note that for each subject \(u\) there exists a latent class \(m\) such that \(\theta_u = \theta^{(m)}\).
Figure~\ref{fig:multi-rubric-illustration} displays the essential idea for the model. Viewing \(Y\) as a latent utility, the rubric that the user is associated to leads to different values of the observed rating \(Z\). In this example, the second rubric is associated to users who rate many items with a \(3\), while the first rubric is associated to users who do not rate many items with a \(3\).
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{./Figures/multi-rubric-illustration-1}
\caption{Visualization of the multi-rubric model. The point on the density
indicates the realized value of \(Y\).}
\label{fig:multi-rubric-illustration}
\end{figure}
Treating the break-points as random effects has several benefits. First, it offers additional flexibility over approaches for ordinal data which incorporate a random intercept \citep{gill2009nonparametric}. Due to the fact that the \(\theta_u\)'s are confounded with both the location and scale of \(Y_{iu}\), treating the break-points as random effects is at least as flexible as treating the location and scale of the distribution of the \(Y_{iu}\)'s as random effects. We require this additional flexibility, as merely treating the location and scale of the \(Y_{iu}\)'s as random effects does not allow for the variety of rating behaviors exhibited by users. By treating the break-points as random effects, we are able to capture \emph{any} distribution of ratings in a given rubric (see, e.g., Figure~\ref{fig:rubric-props}). In addition to flexibility, specifying \(F\) as a discrete mixture induces a clustering of users into latent classes. To each user \(u\) we associate a latent variable \(C_u\) such that \(C_u = m\) if \(\theta_u = \theta^{(m)}\). As will be demonstrated in Section~\ref{sec:data-analysis}, the latent classes of users discovered in this way are highly interpretable.
\subsubsection{Model for the \texttt{Yelp!}\ data}
Our model for the \texttt{Yelp!}\ data takes \(Y_{iu} \sim \operatorname{Gau}(\mu_{iu}, 1)\) where
\begin{align*}
\mu_{iu} =
x_i^{\top}\gamma +
\alpha_u^{\top}\beta_i +
W_i +
b_i,
\qquad
W_i = \psi(s_i)^{\top}\eta.
\end{align*}
This is model \eqref{eq:anova} with \(f_{iu} = \alpha_u^{\top}\beta_i\), \(\xi_i = x_i^{\top}\gamma + W_i + b_i\), and \(\nu_u\) removed. This model can be motivated as a combination of the fixed-rank kriging approach of \citet{johan} with the probabilistic matrix factorization approach of \citet{salakhutdinov2007probabilistic}. The terms \(x_i^{\top}\gamma\), \(W_i\), and \(b_i\) are used to account for heterogeneity in the items. The term \(x_i^{\top}\gamma\) accounts for known covariates \(x_i \in \mathbb R^p\) associated to each item. The term \(W_i\) is used to capture spatial structure, and is modeled with a basis function expansion \(W_i = \psi(s_i)^{\top}\eta\) where \(s_i\) denotes the longitude-latitude coordinates associated to the item and \(\psi(s) = (\psi_1(s) \ldots, \psi_r(s))^{\top}\) is a vector of basis functions. We note that it is straight-forward to replace our low-rank approach for \(W_i\) with more elaborate approaches such as the full-scale approach of \citet{sang2012full}. The term \(b_i\) is an item-specific random effect which is used to capture item heterogeneity which cannot be accounted for by the covariates or the low-rank spatial structure.
The vectors \(\alpha_u\) and \(\beta_i\) intuitively correspond to unmeasured user-specific and item-specific latent features. The term \(\alpha_u^{\top}\beta_i\) is large/positive when \(\alpha_u\) and \(\beta_i\) point in the same direction (i.e., the user's preferences align with the item's characteristics), and is large/negative when \(\alpha_u\) and \(\beta_i\) point in opposite directions. This allows the model to account not only for user-specific biases (\(\theta_u\)) and item-specific biases \((x_i, W_i, b_i)\), but also interaction effects.
The multi-rubric model can be summarized by the following hierarchical model. For each model, we implicitly assume the statements hold conditionally on all variables in the models below, and that conditional independence holds within each model unless otherwise stated.
\begin{description}
\item[Response model:] \(Z_{iu} = k\) with probability \(w_{iuk} = \Phi(\theta_{uk} - \mu_{iu}) - \Phi(\theta_{u(k-1)} - \mu_{iu})\) and \(\mu_{iu} = x_i^{\top}\gamma + \alpha_u^{\top}\beta_i + W_i + b_i\).
\item[Random effect model:] \(\theta_u \stackrel{\textnormal{iid}}{\sim} F\), \(\alpha_{u} \sim \operatorname{Gau}(0,\sigma^2_\alpha \operatorname{I})\), \(\beta_{i} \sim \operatorname{Gau}(0,\sigma^2_\beta \operatorname{I})\), and \(b_i \sim \operatorname{Gau}(0, \sigma^2_b)\).
\item[Spatial process model:] \(W_i = \psi(s_i)^{\top} \eta\) where \(\eta \sim \operatorname{Gau}(0, \Sigma_\eta)\).
\item[Parameter model:] \(\gamma \sim \operatorname{Flat}\) and \(F = \sum_{m=1}^M \omega_m \delta_{\theta^{(m)}}\) where \(\omega \sim \operatorname{Dirichlet}(a,\ldots,a)\) and \(\theta^{(m)} \stackrel{\textnormal{iid}}{\sim} H\).
\end{description}
To complete the model we must specify values for the hyperparameters \(\sigma_\alpha, \sigma_\beta, \sigma_b, \Sigma_\eta, a\), and \(H\), as well as the number of rubrics \(M\) and the number of latent factors \(L\). In our illustrations we place half-Gaussian priors for the scale parameters, with \((\sigma_\beta, \sigma_b) \stackrel{\textnormal{iid}}{\sim} \operatorname{Gau}_+(0, 1)\), and \(\sigma_\alpha \equiv 1\). We let \(\Sigma_\eta = \operatorname{diag}(\sigma^2_\eta, \ldots, \sigma^2_\eta)\) and set \(\sigma_\eta \sim \operatorname{Gau}_+(0,1)\). Here, \(\operatorname{Gau}_+(0,1)\) denotes a standard Gaussian distribution truncated to the positive reals. For a discussion of prior specification for variance parameters, see \citet{gelman2006prior} and \citet{simpson2017penalising}.
In our illustrations we use \(M = 20\). For the \texttt{Yelp!}\ dataset, the choice of \(M = 20\) rubrics is conservative, and by setting \(a = \kappa / M\) for some fixed \(\kappa > 0\), we encourage \(\omega\) to be nearly-sparse \citep{ishwaran2002dirichlet, linero2016bayesian}. This strategy effectively lets the data determine how many rubrics are needed, as the prior encourages \(\omega_m \approx 0\) if rubric \(m\) is not needed. The prior \(H\) for \(\theta^{(1)}, \ldots, \theta^{(M)}\) is chosen to have density \(h(\theta) \propto \prod_{k=1}^{K-1} \operatorname{Gau}(\theta_k \mid 0, \sigma_\theta^2) I(\theta_1 \le \cdots \le \theta_{K-1})\) so that \(\theta^{(m)}\) has the distribution of the order statistics of \(K-1\) independent \(\operatorname{Gau}(0, \sigma^2_\theta)\) variables.
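As a small illustration of this parameter model, the following Python sketch (illustrative only; the function name and hyperparameter values are ours) draws \(\omega\), the rubrics \(\theta^{(1)}, \ldots, \theta^{(M)}\), and the latent classes from the prior:
\begin{verbatim}
import numpy as np

def draw_prior(M, K, U, kappa, sigma_theta, rng):
    # omega ~ Dirichlet(kappa/M, ..., kappa/M): nearly-sparse weights.
    omega = rng.dirichlet(np.full(M, kappa / M))
    # theta^(m) ~ H: order statistics of K-1 iid Gau(0, sigma_theta^2).
    theta = np.sort(rng.normal(0.0, sigma_theta, size=(M, K - 1)), axis=1)
    # Latent class C_u for each user; theta_u = theta^(C_u).
    C = rng.choice(M, size=U, p=omega)
    return omega, theta, C

omega, theta, C = draw_prior(M=20, K=5, U=1000, kappa=1.0,
                             sigma_theta=1.0, rng=np.random.default_rng(0))
\end{verbatim}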
\subsection{Evaluating item quality}
A commonly used measure of item quality is the expected rating from a user drawn at random from the population, \(\lambda_i = E(Z_{iu} \mid x_i, \phi_i, \gamma)\), where \(\phi_i = (\beta_i, b_i, W_i)\). This quantity is given by
\begin{align*}
\lambda_i
&= \sum_{k = 1}^K k \cdot \Pr(Z_{iu} = k \mid x_i, \phi_i, \gamma)
\\&= \sum_{k=1}^K
\sum_{m=1}^M k \cdot \omega_m \cdot
\int \Pr(Z_{iu} = k \mid x_i, \phi_i, \alpha_u, \gamma, C_u = m)
\, \operatorname{Gau}(\alpha \mid 0, \sigma^2_\alpha \operatorname{I}) \ d\alpha.
\end{align*}
Using properties of the Gaussian distribution, and recalling that \(\sigma^2_\alpha = 1\), it can be shown that
\begin{align}
\label{eq:lambda}
\lambda_i
=
\sum_{k=1}^K \sum_{m = 1}^M k \cdot \omega_m \cdot
\left\lbrace
\Phi\left(\frac{\theta^{(m)}_k - \xi_i}{\sqrt{1 + \|\beta_i\|^2}}\right) -
\Phi\left(\frac{\theta^{(m)}_{k-1} - \xi_i}{\sqrt{1 + \|\beta_i\|^2}}\right)
\right\rbrace,
\end{align}
where \(\xi_i = x_i^{\top}\gamma + b_i + W_i\). In Section~\ref{sec:data-analysis}, we demonstrate that the particular users who rated item \(i\) exert a strong influence on the \(\lambda_i\)'s, particularly for restaurants with few ratings.
Rather than focusing on an omnibus measure of overall quality, we can also adjust the overall quality of an item to be rubric-specific. This amounts to calculating
\begin{math}
\lambda_{im} = E(Z_{iu} \mid x_i, \phi_i, \gamma, C_u = m),
\end{math}
which represents the average rating of item \(i\) if all users applied rubric \(m\). Similar to \eqref{eq:lambda}, this quantity can be computed as
\begin{align}
\label{eq:lambda-k}
\lambda_{im}
=
\sum_{k=1}^K k \cdot
\left\lbrace
\Phi\left(\frac{\theta^{(m)}_k - \xi_i}{\sqrt{1 + \|\beta_i\|^2}}\right) -
\Phi\left(\frac{\theta^{(m)}_{k-1} - \xi_i}{\sqrt{1 + \|\beta_i\|^2}}\right)
\right\rbrace.
\end{align}
In Section~\ref{sec:data-analysis}, we use both \eqref{eq:lambda} and \eqref{eq:lambda-k} to understand the statistical features of the multi-rubric model.
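For a single posterior draw, both quantities reduce to a few lines of code; the following Python sketch (ours, purely illustrative) evaluates them, and posterior summaries are then obtained by applying it across MCMC draws:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def item_quality(theta, omega, xi_i, beta_i):
    # theta: (M, K-1) rubric break-points; omega: (M,) rubric weights;
    # xi_i and beta_i are the item-level quantities for a single draw.
    M, Km1 = theta.shape
    K = Km1 + 1
    pad = np.hstack([np.full((M, 1), -np.inf), theta,
                     np.full((M, 1), np.inf)])      # theta_0, ..., theta_K
    s = np.sqrt(1.0 + beta_i @ beta_i)
    cdf = norm.cdf((pad - xi_i) / s)                # Phi terms, per rubric
    probs = np.diff(cdf, axis=1)                    # Pr(Z = k | C_u = m)
    k = np.arange(1, K + 1)
    lambda_im = probs @ k                           # eq. (lambda-k)
    lambda_i = omega @ lambda_im                    # eq. (lambda)
    return lambda_i, lambda_im
\end{verbatim}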
\subsection{Implementation Details}
We use the reduced rank model \(W = \Psi\eta + b\) where \(\Psi\in\mathbb R^{I \times r}\) has \(i^{\text{th}}\) row given by \(\psi(s_i)^{\top}\). We choose \(\Psi\) so that \(\operatorname{Cov}(\Psi\eta)\) is an optimal low-rank approximation to \(\sigma^2_\eta \Xi\) where \(\Xi\) is associated to a target positive semi-definite covariogram. This is accomplished by taking \(\Psi\) composed of the first \(r\) columns of \(\Gamma D^{1/2}\) where \(\Xi = \Gamma D \Gamma^{\top}\) is the spectral decomposition of \(\Xi\). The Eckart-Young-Mirsky theorem states that this approximation is optimal with respect to both the operator norm and Frobenius norm \citep[see, e.g.,][Chapter 8]{rasmussen2005gaussian}. A similar strategy is used by \citet{bradleyPCOS,bradleyMSTM}, who use an optimal low-rank approximation of a target covariance structure \(\Xi \approx \Psi \Sigma_\eta \Psi^{\top}\) where the basis \(\Psi\) is held fixed but \(\Sigma_{\eta}\) is allowed to vary over all positive-definite \(r \times r\) matrices. In our illustrations, we use the squared-exponential covariance, i.e., $\Xi_{ij} = \exp({-{\rho}\|s_i - s_j\|^2})$ \citep{cressie2015statistics}.
To complete the specification of the model, we must specify the bandwidth \(\rho\), the number of latent factors \(L\), and the number of basis functions \(r\). We regard \(L\) as a tuning parameter, which can be selected by assessing prediction performance on a held-out subset of the data. In principle, a prior can be placed on \(\rho\), however this results in a large computational burden; we instead evaluate several fixed values of \(\rho\) chosen according to some rules-of-thumb and select the value with the best performance. For the \texttt{Yelp!}\ dataset, we selected \(\rho = 1000\), which corresponds to undersmoothing the spatial field relative to Scott's rule \citep[see, e.g.,][]{hardle2000multivariate} by roughly a factor of two, and remark that substantively similar results are obtained with other bandwidths. Finally, \(r\) can be selected so that the proportion of the variance \(\sum_{d=1}^r D_{dd}^2 / \sum_{d=1}^n D_{dd}^2\) in $\Xi$ accounted for by the low-rank approximation exceeds some preset threshold; for the \texttt{Yelp!}\ dataset, we chose \(r = 500\) to account for 99\% of the variance in \(\Xi\).
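A minimal Python sketch of this construction (ours, and suitable only for moderate \(I\); a large application would use a sparse or randomized eigensolver) is:
\begin{verbatim}
import numpy as np

def low_rank_basis(S, rho, r):
    # S: (I, 2) longitude-latitude coordinates of the items.
    # Squared-exponential covariogram Xi_ij = exp(-rho ||s_i - s_j||^2).
    d2 = ((S[:, None, :] - S[None, :, :]) ** 2).sum(-1)
    Xi = np.exp(-rho * d2)
    evals, evecs = np.linalg.eigh(Xi)            # spectral decomposition
    idx = np.argsort(evals)[::-1][:r]            # r leading eigenpairs
    # First r columns of Gamma D^{1/2}; clip guards against roundoff.
    Psi = evecs[:, idx] * np.sqrt(np.maximum(evals[idx], 0.0))
    # Proportion of variance retained, per the criterion above.
    var_explained = (evals[idx] ** 2).sum() / (evals ** 2).sum()
    return Psi, var_explained
\end{verbatim}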
When specifying the number of rubrics \(M\), we have found that the model is most reliable when \(M\) is chosen large and \(a = \kappa / M\) for some \(\kappa > 0\); under these conditions, the prior for \(F\) is approximately a Dirichlet process with concentration \(\kappa\) and base measure \(H\) (see, e.g., \citealp{teh2006hierarchical}). We recommend choosing \(M\) to be conservatively large and allowing the model to remove unneeded rubrics through the sparsity-inducing prior on \(\omega\). We have found that taking \(M\) large is necessary for good performance even in simulations in which the true number of rubrics is small and known.
We use Markov chain Monte Carlo to approximately sample from the posterior distribution of the parameters. A description of the sampler is given in the appendix.
\subsection{A note on selection bias}
Let \(\Delta_{iu} =1\) if \((i,u) \in \mathcal S\), and \(\Delta_{iu} = 0\) otherwise. In not modeling the distribution of \(\Delta_{iu}\), we are implicitly modeling the distribution of the \(Z_{iu}\)'s conditional on \(\Delta_{iu} = 1\). When selection bias is present, this may be quite different than the marginal distribution of \(Z_{iu}\)'s. Experiments due to \citet{marlin2009collaborative} provide evidence that selection bias may be present in practice.
A useful feature of the approach presented here is that it naturally down-weights users who are exhibiting selection bias. For example, if user \(u\) only rates items they feel negatively about, they will be assigned to a rubric \(m\) for which \(\theta^{(m)}_{1}\) is very large; this has the effect of ignoring their ratings, as there will be effectively no information in the data about their latent utility. As a result, when estimating overall item quality, the model naturally filters out users who are exhibiting extreme selection bias, which may be desirable.
In the context of prediction, the predictive distribution for \(Z_{iu}\) should be understood as being conditional on the event \(\Delta_{iu} = 1\); that is, the prediction is made with the additional information that user \(u\) chose to rate item \(i\). This is the case for nearly all collaborative filtering methods, as correcting for the selection bias necessitates collecting \(Z_{iu}\)'s for which \(\Delta_{iu} = 0\) would have occurred naturally; for example, as done by \citet{marlin2009collaborative}, we might assess selection bias by conducting a pilot study which forces users to rate items they would not have normally rated. With the understanding that all methods are predicting ratings conditional on \(\Delta_{iu} = 1\), the results in Section~\ref{sec:data-analysis} show that the multi-rubric model leads to increased predictive performance.
Selection bias should also be taken into account when interpreting the latent rubrics produced by our model. Our model naturally provides a clustering of users into latent classes, which we presented as representing differing standards in user ratings; however, we expect that the model is also detecting differences in selection bias across users. We emphasize that our goal is to identify and account for heterogeneity in rating patterns, and we avoid speculating on whether heterogeneity is caused by different rating standards or selection bias. For example, a user who rates items with only one-star or five-stars might be either (i) using a rubric which results in extreme behavior, with most of the break-points very close together; or (ii) actively choosing to rate items which they feel strongly about.
\section{Simulation Study}
\label{sec:simulation}
The goal of this simulation is to illustrate that we can accurately learn the existence of multiple rubrics in settings where one would expect it would be difficult to detect them. We consider a situation where the data is generated according to two rubrics that are similar to each other. This allows us to assess the robustness of our model to various ``degrees'' of the multi-rubric assumption. The performance of our multi-rubric model is assessed relative to the single-rubric model, which is the standard assumption made in the ordinal data literature.
We calibrate components of the simulation model towards the \texttt{Yelp!}\ dataset to produce realistic simulated data. Specifically, we set $\eta$ and $\sigma^2_b$ equal to the posterior means obtained from fitting the model to the \texttt{Yelp!}\ dataset in Section~\ref{sec:data-analysis}. We set $\Sigma_\eta = 0.5\operatorname{I}$, corresponding to a much stronger spatial effect than what was observed in the data, and for simplicity we removed the latent-factor aspect of the model by fixing $\sigma^2_\beta \equiv 0$. A two-rubric model is used with $\omega_1 = \omega_2 = 0.5$. We also use the same spatial basis functions and observed values of $(i,u)$ as in the \texttt{Yelp!}\ analysis in Section~\ref{sec:data-analysis}.
We now describe how the two rubrics $\theta_1$ and $\theta_2$ were chosen. First, $\theta_1$ was selected so that $\{Z_{iu} : C_u = 1, i = 1, \ldots, I\}$ was evenly distributed among the five responses. Associated to $\theta_1$ is a probability vector $p_1 = (0.2, 0.2, 0.2, 0.2, 0.2)$. To specify $\theta_2$, we use the same approach with a different choice of $p$. Let $p_2 = (0, 0.25, 0.5, 0.25, 0)$. Then $\theta_2^{(\tau)}$ is associated to $\tau p_1 + (1 - \tau) p_2$ in the same manner as $\theta_1$ is associated to $p_1$. Here, $\tau$ indexes the similarity of $\theta_1$ and $\theta_2$, and it can be shown that the total variation distance between the empirical distribution of $\{Z_{iu} : C_u = 1\}$ and $\{Z_{iu} : C_u = 2\}$ is $0.8(1 - \tau)$. Thus, values of $\tau$ near $1$ imply that the rubrics are similar, while values of $\tau$ near $0$ imply that they are dissimilar. Figure~\ref{fig:sim-rubrics} presents the distribution of the $Z_{iu}$'s with $C_u = 2$ when $\tau = 0, 0.8$, and $1$.
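One way to realize this association in code is to choose the break-points as quantiles: assuming a sample of latent utilities from the calibrated model is available, the following Python sketch (ours, illustrative) returns break-points whose induced ratings match a target vector $p$ empirically:
\begin{verbatim}
import numpy as np

def breakpoints_from_probs(Y_sim, p):
    # Y_sim: sample of simulated latent utilities Y_iu;
    # p: target rating probabilities summing to 1.
    cum = np.cumsum(p)[:-1]          # interior cumulative probabilities
    return np.quantile(Y_sim, cum)   # theta_1 <= ... <= theta_{K-1}

p1 = np.array([0.2, 0.2, 0.2, 0.2, 0.2])
p2 = np.array([0.0, 0.25, 0.5, 0.25, 0.0])
tau = 0.8
p_tau = tau * p1 + (1 - tau) * p2    # target for rubric theta_2^(tau)
\end{verbatim}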
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{"./Figures/tau-figure-1"}
\caption{Empirical distribution of the $Z_{iu}$'s in the simulation model,
for $\theta_1$, $\theta^{(0.8)}_2$, and $\theta_2^{(0)}$.}
\label{fig:sim-rubrics}
\end{figure}
We fit a $10$-rubric and single-rubric model for $\tau = 0.0, 0.1, \ldots, 1.0$. Figure~\ref{fig:apurav-sims} displays the proportion of individuals assigned to each rubric for a given value of $\tau$. If the model is accurately recovering the underlying rubric structure, we expect to see half of the observations assigned to one rubric and half to another; due to permutation invariance, which of the 10 rubrics are associated to $\theta_1$ and $\theta_2^{(\tau)}$ varies by simulation. Up to $\tau = 0.9$, the model is capable of accurately recovering the existence of two rubrics. We also see that, even at $\tau = 0.8$, the model accurately recovers the empirical distribution of the $Z_{iu}$'s associated to each rubric.
\begin{figure}[!ht]
\centering
\includegraphics[width = .8\textwidth]{"./Figures/apurva-sims-1"}
\includegraphics[width = .8\textwidth]{"./Figures/rubric-recover-1"}
\caption{
Top: proportion of individuals assigned to each rubric at the last iteration of the Markov chain. Bottom: The empirical distribution of $Z_{iu}$ for the two rubrics associated to $C_u = 1$ and $C_u = 2$ when \(\tau = 0.8\); compare with the left and middle plots in Figure~\ref{fig:sim-rubrics}.}
\label{fig:apurav-sims}
\end{figure}
Next, we assess the benefit of using the multi-rubric model to predict missing values. For each value of $\tau$, we fit a single-rubric and multi-rubric model. Using the same train-test split as in our real-data illustration, we compute the log likelihood on the held-out data
\begin{math}
\text{loglik}_{\text{test}} = \sum_{(i,u) \in \mathcal{S}_{\text{test}}} \log
\Pr(Z_{iu} \mid D),
\end{math}
which is discussed in further detail in Section~\ref{sec:data-analysis}. Figure~\ref{fig:lambda-loss} shows the difference in held-out log likelihood for the single-rubric and multi-rubric model as a function of $\tau$. Up to $\tau = 0.8$, there is a meaningful increase in the held-out log-likelihood obtained from using the multi-rubric model. The case where $\tau = 1$ is also particularly interesting, as this implies that the data were generated from the single-rubric model. Here the predictive performance of our model at missing values appears to be robust to the case when the multi-rubric assumption is incorrect.
\begin{figure}[!ht]
\centering
\includegraphics[width = 0.75\textwidth]{"./Figures/lambda-loss-1"}
\caption{
Difference in $\text{loglik}_{\text{test}}$ for the single-rubric and multi-rubric model obtained in the simulation study, as a function of $\tau$. Above each point, we provide the proportion of users whose most likely rubric assignment matched their true rubric.}
\label{fig:lambda-loss}
\end{figure}
Displayed above each point in Figure~\ref{fig:lambda-loss} is the proportion of observations which are assigned to the correct rubric, where each observation is assigned to their most likely rubric. When the rubrics are far apart the model is capable of accurately assigning observations to rubrics. As the rubrics get closer together, the task of assigning observations to rubrics becomes much more difficult.
This simulation study suggests that the model specified here is able to disentangle the two-rubric structure, even when the rubrics are only subtly different. This leads to clear improvements in predictive performance for small and moderate values for $\tau$. Additionally, when the multi-rubric assumption is negligible, or even incorrect, our model performs as well as the single-rubric model.
\section{Analysis of Yelp data}
\label{sec:data-analysis}
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{"./Figures/W_process"}
\caption{Estimate of the underlying spatial field \(W(s) = \psi(s)^{\top}\eta\) at each realized restaurant location using its posterior mean.
}
\label{fig:w-process}
\end{figure}
We now apply the multi-rubric model to the \texttt{Yelp!}\ dataset, which is publicly available at \url{https://www.yelp.com/dataset_challenge}. We begin by preprocessing the data to include reviews only between January \(1^{\text{st}}\), 2013 and December \(31^{\text{st}}\), 2016, and restrict attention to restaurants in Phoenix and its surrounding areas. We further narrow the data to include only users who rated at least 10 restaurants; this filtering is done in an attempt to minimize selection bias, as we believe that ``frequent raters'' should be less influenced by selection bias.
We first evaluate the performance of the single-rubric and multi-rubric models for various values of the latent factor dimension \(L\). We set \(M = 20\) and induce sparsity in \(\omega\) by setting \(\omega \sim \operatorname{Dirichlet}(1/20, \ldots, 1/20)\). We divide the indices \((i,u) \in \mathcal S\) into a training set \(\mathcal S_{\text{train}}\) and testing set \(\mathcal S_{\text{test}}\) of equal sizes by randomly allocating half of the indices to the training set. We evaluate predictions using a held-out log-likelihood criteria
\begin{align}
\label{eq:criteria}
\begin{split}
\text{loglik}_{\text{test}} &= |\mathcal S_{\text{test}}|^{-1}\sum_{(i,u) \in \mathcal S_{\text{test}}} \log
\Pr(Z_{iu} \mid \mathcal D), \\&
\approx |\mathcal S_{\text{test}}|^{-1}\sum_{(i,u) \in \mathcal S_{\text{test}}} \log
T^{-1} \sum_{t=1}^T \Pr(Z_{iu} \mid C_u^{(t)}, \theta^{(t)}, \mu_{iu}^{(t)})
\end{split}
\end{align}
where \(\mathcal D = \{Z_{iu} : (i,u) \in \mathcal S_{\text{train}}\}\), \(\Pr(Z_{iu} \mid \mathcal D)\) denotes the posterior predictive distribution of \(Z_{iu}\), and \(t = 1, \ldots, T\) indexes the approximate draws from the posterior obtained by MCMC. Results for the values \(L = 1, 3\), and 5, over 10 splits into training and test data, are given in Figure~\ref{fig:heldout-results}. We also compare our methodology to ordinal matrix factorization \citep{paquet2012hierarchical} with learned breakpoints and spatial smoothing, and the mixture multinomial model \citep{marlin2009collaborative} with \(10\) mixture components. The multi-rubric model leads to an increase in the held-out data log-likelihood \eqref{eq:criteria} of roughly \(5\%\) over ordinal matrix factorization and \(8\%\) over the mixture multinomial model. Additionally, we note that the holdout log-likelihood was very stable over replications. The single-rubric model is essentially equivalent to ordinal matrix factorization.
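Computationally, the criterion \eqref{eq:criteria} is the log of a Monte Carlo average, which is best evaluated in log space; a Python sketch (ours, not the implementation used for the results reported here) is:
\begin{verbatim}
import numpy as np

def heldout_loglik(prob_draws):
    # prob_draws[t, j] = Pr(Z_iu | C_u^(t), theta^(t), mu_iu^(t)) for the
    # j-th held-out pair (i, u) and the t-th MCMC draw.
    T = prob_draws.shape[0]
    # Log of the Monte Carlo average, computed stably via log-sum-exp.
    log_pred = np.logaddexp.reduce(np.log(prob_draws), axis=0) - np.log(T)
    return log_pred.mean()
\end{verbatim}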
The dimension of the latent factors \(\alpha_u\) and \(\beta_i\) has little effect on the quality of the model. We attribute this to the fact that \(|\mathcal U_i|\) and \(|\mathcal I_u|\) are typically small, making it difficult for the model to recover the latent factors. On other datasets where this is not the case, such as the Netflix challenge dataset, latent-factor models represent the state of the art and are likely essential for the multi-rubric model. In the supplementary material we show in simulation experiments that the \(\alpha_u\)'s, \(\beta_i\)'s, and \(L\) are identified.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{./Figures/eval-1}
\caption{Boxplots of \(-2.0 \cdot \text{loglik}_{\text{test}}\) for the mixture multinomial model (MMM, which does not have latent factors), ordinal matrix factorization (OMF), the single rubric model (Single) and the multi-rubric model (Multi), for 10 splits into training and testing data.}
\label{fig:heldout-results}
\end{figure}
Figure~\ref{fig:w-process} displays the learned spatial field \(\widehat W(s) = \psi(s)^{\top}\widehat\eta\) where \(\widehat \eta\) is the posterior mean of \(\eta\).
The results suggest that the downtown Phoenix business district and the area surrounding the affluent Paradise Valley possess a higher concentration of highly-rated restaurants than the rest of the Phoenix area. More sparsely populated areas such as Litchfield Park, or areas with lower income such as Guadalupe, seem to have fewer highly-rated restaurants.
\begin{figure}[!t]
\centering
\includegraphics[width = .9\textwidth]{"./Figures/rubric-props-all-1"}
\includegraphics[width=.9\textwidth]{"./Figures/rubric-ratings-all-1"}
\caption{
Top: bar chart giving the number of users assigned to each rubric, where users are assigned to rubrics by minimizing Binder's loss function. Bottom: bar charts giving the proportions of the observed ratings \(Z_{iu}\) for each item-user pair for the top 9 most common rubrics.}
\label{fig:rubric-props}
\end{figure}
We now examine the individual rubrics. First, we obtain a clustering of users into their rubrics by minimizing Binder's loss function \citep{binder1978bayesian} $L(\bm c) = \sum_{u,u'} |\delta_{c_u, c_{u'}} - \Pi_{u,u'}|$, where $\delta_{ij} = I(i = j)$ is the Kronecker delta, $\bm c = (c_1, \ldots, c_U)$ is an assignment of users to rubrics, and $\Pi_{u,u'}$ is the posterior probability of $C_u = C_{u'}$. See \citet{fritsch2009improved} for additional approaches to clustering objects using samples of the $C_u$'s.
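In practice the minimization is often restricted to the partitions visited by the sampler; a Python sketch of this common shortcut (ours, illustrative, and quadratic in $U$) is:
\begin{verbatim}
import numpy as np

def binder_best(C_draws):
    # C_draws: (T, U) array of sampled rubric assignments C_u^(t).
    T, U = C_draws.shape
    Pi = np.zeros((U, U))
    for C in C_draws:                 # co-clustering probabilities Pi_{u,u'}
        Pi += (C[:, None] == C[None, :])
    Pi /= T
    # Score each sampled partition by Binder's loss; keep the best one.
    losses = [np.abs((C[:, None] == C[None, :]) - Pi).sum() for C in C_draws]
    return C_draws[int(np.argmin(losses))]
\end{verbatim}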
The multi-rubric model produces interesting effects on the overall estimate of restaurant quality. Consider the rubric corresponding to \(m = 7\) in Figure~\ref{fig:rubric-props}. Users assigned to this rubric give the majority of restaurants a rating of five stars. As a result, a rating of 5 stars for the \(m = 7\) rubric is less valuable to a restaurant than a rating of 5 stars from a user with the \(m = 6\) rubric. Similarly, a rating of \(3\) stars from the \(m = 7\) rubric is more damaging to the estimate of a restaurant's quality than a rating of \(3\) stars from the \(m = 6\) rubric.
For restaurants with a large number of reviews, the effect mentioned above is negligible, as the restaurants typically have a good mix of users from different rubrics. The effect on restaurants with a small number of reviews, however, can be much more pronounced. To illustrate this effect, Figure~\ref{fig:rating-posterior} displays the posterior distribution of the quantity \(\lambda_i\) defined in \eqref{eq:lambda} for the restaurants with \(i \in \{ 3356, 3809, 9\}\). Each of these businesses has \(4\) reviews total, with empirically averaged ratings of 4.25, 3.75, and 3 stars. For \(i = 3809\) and \(i = 9\), the users are predominantly from the rubric with \(m = 7\); as a consequence, the fact that these restaurants do not have an average rating closer to five stars is quite damaging to the estimate of the restaurant quality. In the case of \(i = 3809\), the effect is strong enough that what was ostensibly an above-average restaurant is actually estimated to be below average by the multi-rubric model. Conversely, item \(i = 3356\) has ratings of \(4, 5, 5\), and \(3\) stars, but one of the \(5\)-star ratings comes from a user assigned to the rubric \(m = 2\) which gave a \(5\)-star rating to only 8\% of businesses. As a result, the \(5\)-star ratings are weighted more heavily than they would otherwise be, causing the distribution of \(\lambda_i\) to be shifted slightly upwards.
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{"./Figures/rating-posterior-1"}
\caption{Posterior density of \(\lambda_i\) for \(i \in \{9, 3356, 3809\}\).
The dashed line is the empirical average rating of item \(i\); the dotted
line is the overall average of all ratings. Error bars are centered at the
posterior mean with a radius of one standard deviation.}
\label{fig:rating-posterior}
\end{figure}
Lastly, we consider rescaling the average ratings according to a specific rubric. This may be of interest, for example, if one is interested in standardizing the ratings to match a rubric which disperses ratings evenly across the possible stars. To do this, we examine the rubric-adjusted average ratings \(\lambda_{im}\) given by \eqref{eq:lambda-k}. Figure~\ref{fig:rubric-specific} displays the posterior density of \(\lambda_{im}\) for \(i = 24\) and \(i = 44\), for the \(9\) most common rubrics. These two restaurants have over 100 reviews, and so the overall quality can be estimated accurately. We see some expected features; for example, the quality of each restaurant has been adjusted downwards for users of the \(m = 10\) rubric, and upwards for the \(m = 7\) rubric. The multi-rubric model allows for more nuanced behavior of the adjusted ratings than simple upward/downward shifts. For example, for the mediocre item \(i = 44\), we see that little adjustment is made for the \(m = 13\) rubric, while for the high-quality item \(i = 24\) a substantial downward adjustment is made. This occurs because the model interprets the users with \(m = 13\) as requiring a relatively large amount of utility to rate an item 5 stars, so that a downward adjustment is made for the high-quality item; on the other hand, users with \(m = 13\) tend to rate things near a \(3.5\), so little adjustment needs to be made for the mediocre item.
\begin{figure}
\centering
\includegraphics[width=.8\textwidth]{"./Figures/rubric-specific-1"}
\caption{Posterior density of \(\lambda_{im}\) for \(i = 44, 24\). Horizontal
lines display the empirical average rating for each item. Rubrics are
organized by the average rating assigned to \(i = 44\) for visualization
purposes.}
\label{fig:rubric-specific}
\end{figure}
\section{Discussion}
\label{sec:discussion}
In this paper we introduced the multi-rubric model for the analysis of rating data and applied it to public data from the website \texttt{Yelp!}. We found that the multi-rubric model yields improved predictions and induces sophisticated shrinkage effects on the estimated quality of the items. We also showed how the model developed can be used to partition the users into interpretable latent classes.
There are several exciting areas for future work. First, while Markov chain Monte Carlo works well for the \texttt{Yelp!}\ dataset (it took 90 minutes to fit the model of Section~\ref{sec:data-analysis}), it would be desirable to develop a more scalable procedure, such as stochastic variational inference \citep{hoffman2013stochastic}. Second, the model described here features limited modeling of the users. Information regarding which items the users have rated has been shown in other settings to improve predictive performance; temporal heterogeneity may also be present in users.
The latent class model described here can also be extended to allow for more flexible models for the \(\alpha_u\)'s and \(\beta_i\)'s. For example, a referee pointed out the possibility of inferring about how controversial an item is across latent classes, which could be accomplished naturally by using a mixture model for the \(\alpha_u\)'s.
A fruitful area for future research is the development of methodology for when the missing-at-random (MAR) assumption fails. One possibility for future work is to extend the model to also model the missing data indicators \(\Delta_{iu}\). This is complicated by the fact that, while \(\{Y_{iu} : 1 \le i \le I, 1 \le u \le U\}\) is not completely observed, \(\{\Delta_{iu} : 1 \le i \le I, 1 \le u \le U\}\) is. As a result, the data becomes much larger when modeling the \(\Delta_{iu}\)'s.
\ifblinded
\else
\section*{Acknowledgements}
The authors thank Eric Chicken for helpful discussions. This work was partially supported by the Office of the Secretary of Defense under research program \#SOT-FSU-FATs-06 and NSF grants NSF-SES \#1132031 and NSF-DMS \#1712870.
\fi
|
2,869,038,155,445 | arxiv | \section{Introduction}
A honeycomb monolayer of carbon atoms, known as {\it graphene} \cite{N, Cas, Sar, Kot}, has played an important role in condensed matter research. The undoped system has conical valence \cite{W, Feff1, Feff3} and conduction bands meeting at two different Fermi points, also called the Dirac points, and the dispersion relation of the quasi-particle closely resembles that of massless Dirac fermions in $2+1$ dimensions. The peculiar Fermi surface has been shown to be the origin of a number of remarkable effects, such as the anomalous integer Hall effect. Theoretically, this system can be described by the $2D$ Hubbard model \cite{Hubb, Lieb} on the honeycomb lattice (also called the honeycomb Hubbard model) at half-filling with weak local interactions, whose rigorous construction has been achieved in \cite{GM}. The geometry of the Fermi surface as well as the physical properties change drastically with doping \cite{Link, Mc, Ros}.
In this series of papers we study the doped honeycomb Hubbard model in which the value of the renormalized chemical potential $\mu$ is equal to the hopping parameter $t$ (which is set to be $1$). The non-interacting Fermi surface ${\cal F}_0$ is a collection of exact triangles and van Hove singularities appear. In the first two papers we present the first rigorous construction of this model and we also prove that this model is {\it not} a Fermi liquid in the mathematically precise sense of Salmhofer. In the current paper, we establish the power counting theorem for the $2p$-point Schwinger's functions, $p\ge1$, and prove that the perturbation series for the two-point Schwinger's functions as well as the self-energy function have positive radius of convergence when the temperature $T$ is greater than a value ${T_0\sim \exp{(-C|\lambda|^{-1/2})}}$, where $|\lambda|\ll1$ is the bare coupling constant, $C$ is a constant which depends on the physical parameters of the model such as the electron mass, the lattice structure, etc., but not on the temperature. The upper bounds for the second derivatives of the self-energy w.r.t. the external momentum have been established in this paper. The estimation of the lower bounds for these quantities, which requires different techniques, has been carried out in a companion paper \cite{RW2}.
We believe that this paper and the companion one are important as they provide the first rigorous results on the non-Fermi liquid behaviors in the doped graphene system.
Non-Fermi liquid behaviors have also been proved in the Hubbard model on the square lattice at half-filling \cite{AMR1, AMR2}. There are important differences between the two models. First of all, the model studied in \cite{AMR1} is at half-filling, in which there are no quantum corrections to the chemical potential and the dispersion relations, due to the particle-hole symmetry, which make the renormalization much simpler. Secondly, the Schwinger's functions as well as the self-energy in the current model are matrix valued functions, due to the lattice structure, which are harder to study than in \cite{AMR2}. The fact that the Fermi surface in the current model is triangle-like rather than square-like also makes the analysis more involved.
The fact that both models exhibit non-Fermi liquid behaviors may indicate some universal structure in models with van Hove singularities.
The main results of this paper will be proved with the Fermionic cluster expansions and rigorous renormalization group analysis \cite{BG, FT, M2}. One major difficulty in the proof is that the non-interacting Fermi surface ${\cal F}_0$ is deformed by the interaction, and the resulting interacting Fermi surface ${\cal F}$ moves when the temperature changes \cite{FST1}. This shift of the Fermi surface may cause divergence of many coefficients in the naive perturbation expansions. In order to solve this problem, we introduce a counter-term to the interaction potential, in such a way that the interacting Fermi surface for the {\it new model} is fixed and coincides with ${\cal F}_0$. The inversion problem, which concerns the uniqueness of the counter-term given a bare dispersion relation, is not addressed in this paper. Sector analysis, the BKAR jungle formulas \cite{BK, AR} and the multi-arch expansions are the main tools that we shall employ to establish the upper bounds for the Schwinger's functions and the self-energy functions.
\section{The Model and Main results}
\subsection{The honeycomb lattice and the non-interacting Fermi surface}
Let $\Lambda_A=\{\xx\ \vert\ \xx=n_1\bl_1+n_2\bl_2, n_1, n_2\in\ZZZ\}\subset\RRR^2$ be the infinite triangular lattice generated by the basis vectors
${\bl}_1=\frac12(3,\sqrt{3})$, ${\bl_2}=\frac12(3,-\sqrt{3})$. Let $\Lambda_B=\Lambda_A+{\bf d}_i$ be the shifted triangular lattice of $\Lambda_A$ by one of the three vectors: ${{\bf d}_1}=(1,0)$, ${{\bf d}_2}=\frac12
(-1,\sqrt{3})$ or ${{\bf d}_3}=\frac12(-1,-\sqrt{3})$. Due to the $\ZZZ_3$ symmetry, shifting $\Lambda_A$ by any vector ${\bf d}_i$, $i=1,\cdots,3$ gives the same lattice. So we choose $\Lambda_B=\Lambda_A+{\bf d}_1$, for simplicity.
The infinite honeycomb
lattice is defined as $\L=\L_A\cup \L_B$. For a fixed $L\in\mathbb{N}_+$, define the finite honeycomb lattice as the torus $\Lambda_L=\L/L\L=\Lambda_{L,A}\cup \Lambda_{L,B}$, which is the superposition of the two sub-lattices $\Lambda_{L,A}:=\Lambda_A/L\Lambda_A$, $\Lambda_{L,B}=\Lambda_B/L\Lambda_B$, with metric $d_L:=\vert\xx-\yy\vert_{\L_L}=\min_{(n_1, n_2)\in\ZZZ^2}\vert \xx-\yy+n_1L\bl_1+n_2L\bl_2\vert$.
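For concreteness, the following Python sketch (ours, purely illustrative) generates the sites of $\Lambda_{L,A}$ and $\Lambda_{L,B}$ and evaluates the torus metric $d_L$:
\begin{verbatim}
import numpy as np

l1 = 0.5 * np.array([3.0,  np.sqrt(3.0)])   # basis vectors of Lambda_A
l2 = 0.5 * np.array([3.0, -np.sqrt(3.0)])
d1 = np.array([1.0, 0.0])                    # shift defining Lambda_B

def lattice(L):
    # Sites of Lambda_{L,A} and Lambda_{L,B} = Lambda_{L,A} + d1.
    n1, n2 = np.meshgrid(np.arange(L), np.arange(L))
    A = n1.reshape(-1, 1) * l1 + n2.reshape(-1, 1) * l2
    return A, A + d1

def d_L(x, y, L):
    # Torus metric; for points in one fundamental domain it suffices
    # to minimize over the images with n1, n2 in {-1, 0, 1}.
    return min(np.linalg.norm(x - y + n1 * L * l1 + n2 * L * l2)
               for n1 in (-1, 0, 1) for n2 in (-1, 0, 1))
\end{verbatim}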
\begin{figure}[htp]
\centering
\includegraphics[width=.7\textwidth]{honeycomb1.pdf}
\caption{\label{lattice}
A portion of the honeycomb lattice $\L$. The white and black dots correspond
to the sites of the $\L_A$ and $\L_B$ triangular sub-lattices, respectively.}
\end{figure}
The Fock space for the many-fermion system is constructed as follows:
let ${\cal H}_L=\CCC^{L^2}\otimes\CCC^2\otimes\CCC^2$ be a single-particle Hilbert space of functions {$\Psi_{\xx, \t,\a}:\L_L\times\{\uparrow,\downarrow\}\times \{A,B\}\rightarrow\CCC$}, in which $\t\in\{\uparrow,\downarrow\}$ labels the spin index of the quasi-particle, $\a\in\{A, B\}$ distinguishes the two kinds of sub-lattices and $\xx\in \L_L$ labels the lattice point. The
normalization condition is $\Vert\Psi\Vert^2_2=\sum_{\xx,\t,\a}\vert\Psi_{\xx,\t,\a}\vert^2=1$. The Fermionic Fock space $\FFF_L$ is defined as:
\begin{equation}
{\FFF_L=\CCC\oplus\bigoplus_{N=1}^{L^2}\FFF_L^{(N)},\quad \FFF_L^{(N)}=\bigwedge^N {\cal H}_L},
\end{equation}
where $\bigwedge^N {\cal H}_L$ is the $N$-th anti-symmetric tensor product of ${\cal H}_L$. Let $\xi_i=(\xx_i,\t_i,\a_i)$, $i=1,\cdots,N$. Define the Fermionic operators ${\bf a}^\pm_{\xx,\t,\a}$ ({see, e.g., \cite{BR1}, Page 10, Example 5.2.1}) on $\FFF_L$ by:
\begin{eqnarray}
&&({\bf a}^+_{\xx,\t,\a}\Psi)^{(N)}(\xi_1,\cdots, \xi_N)\nonumber\\
&&\quad\quad\quad:=\frac{1}{\sqrt{N}}\sum_{j=1}^N(-1)^j\delta_{\xx,\xx_j}\delta_{\a,\a_j}\delta_{\t,\t_j} \Psi^{(N-1)}(\xi_1,\cdots ,\xi_{j-1},\xi_{j+1},\cdots,\xi_{N}),\\
&&({\bf a}^-_{\xx,\t,\a}\Psi)^{(N)}(\xi_1,\cdots, \xi_N):= \sqrt{N+1}\ \Psi^{(N+1)}(\xx,\t,\a;\ \xi_1,\cdots,\xi_{n}),
\end{eqnarray}
where $\delta_{.,.}$ is the Kronecker delta function. It is easy to find that the Fermionic operators satisfy the canonical anti-commutation relations (CAR):
\begin{equation}\{{\bf a}^+_{\xx,\t,\a}, {\bf a}^-_{\xx',\t',\a'}\}=\delta_{\xx,\xx'}\delta_{\a,\a'}\delta_{\t,\t'},\quad \{{\bf a}^+_{\xx,\t,\a}, {\bf a}^+_{\xx',\t',\a'}\}=\{{\bf a}^-_{\xx,\t,\a},
{\bf a}^-_{\xx',\t',\a'}\}=0.\end{equation}
We impose the periodic boundary conditions on these Fermionic operators: ${\bf a}^\pm_{\xx+n_1L\bl_1+n_2L\bl_2,\t,\a}={\bf a}^\pm_{\xx,\t,\a},\ \forall \xx\in\Lambda_L$.
The operators ${\bf a}^\pm_{\xx,\t,A}$ and ${\bf a}^\pm_{\xx,\t,B}$ are called the Fermionic operators of type $A$ and type $B$, respectively.
The second quantized grand-canonical Hamiltonian on $\L_L$ is defined by
\begin{equation}
H_{L}(\lambda)=H^0_{L}+ V(\lambda)_{L},
\end{equation}
in which
\begin{eqnarray}
H^0_{L}&=&-t\sum_{\substack{\xx\in \L_{L,A}\\i=1,\cdots,3}}\sum_{\t=\uparrow\downarrow} \Big(\
{\bf a}^+_{\xx,\t,A} {\bf a}^-_{\xx+{\bf d}_i, \t,B} +{\bf a}^+_{\xx+{\bf d}_i,\t,B} {\bf a}^-_{\xx,\t,A}\ \Big)\nonumber\\
&&\quad\quad\quad-\mu\sum_{\substack{\xx\in \L_{L,A}}}\sum_{\t=\uparrow\downarrow}\Big(\
{\bf a}^+_{\xx,\t,A}{\bf a}^-_{\xx,\t,A}+{\bf a}^+_{\xx+ {\bf d}_1,\t,B}{\bf a}^-_{\xx+{\bf d}_1,\t,B}\ \Big),\label{hamil0}
\end{eqnarray}
is the non-interacting Hamiltonian; $t\in\RRR_+$ is the nearest neighbor hopping parameter and $\mu\in\RRR $ is called the {\it renormalized} chemical potential. $V(\lambda)_{L}$ is the interaction potential, to be defined later. We fix $t=1$ for the rest of this paper.
Let ${\Lambda}_L^*$ be the dual lattice of $\Lambda_L$ with basis vectors ${\bf G}_1=\frac{2\pi}{3}(1,\sqrt{3})$, ${\bf G}_2=\frac{2\pi}{3}(1,-\sqrt{3})$, the first Brillouin zone is defined as
\begin{equation}\label{bril}
{\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S}_L:=\RRR^2/\Lambda^*_L=\big\{\bk\in\RRR^2\ \vert\ \bk=\frac{n_1}{L}{\bf G}_1+\frac{n_2}{L}{\bf G}_2, {n_{1,2}\in[-\frac{L}{2}, \frac{L}{2}-1]\cap\ZZZ}\big\}
.\end{equation}
The Fourier transform for the Fermionic operators are:
\begin{equation}
{\bf a}^\pm_{\xx,\t,A}=\frac{1}{|\L_L|}\sum_{\bk\in{\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S}_L}e^{\pm i\kk\cdot\xx}\hat {\bf a}^\pm_{\bk,\t,A},\ {\bf a}^\pm_{\xx+{\bf d}_1,\t,B}=\frac{1}{|\L_L|}\sum_{\bk\in{\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S}_L}e^{\pm i\kk\cdot\xx}\hat {\bf a}^\pm_{\bk,\t,B},
\end{equation}
in which $|\L_L|$ is the volume of $\L_L$. The inverse Fourier transform are given by:
$\hat {\bf a}^\pm_{\bk,\t,A}=\sum_{\xx\in\Lambda_L}e^{\mp i\bk\cdot\xx} {\bf a}^\pm_{\xx,\t, A}$,
$\hat {\bf a}^\pm_{\bk,\t, B}=\sum_{\xx\in\Lambda_L}e^{\mp i\bk\cdot\xx} {\bf a}^\pm_{\xx+{\bf d}_1,\t,B}$.
The periodicity of ${\bf a}^\pm_{\xx,\t,\a}$ implies
$\hat {\bf a}^\pm_{\bk+n_1{\bf G}_1+n_2{\bf G}_2,\t,\a}=\hat {\bf a}^\pm_{\bk,\t,\a}$, and the commutation relations become:
\begin{eqnarray}\{\hat{\bf a}^+_{\bk,\t,\a}, \hat{\bf a}^-_{\bk',\t',\a'}\}=|\L_L|\delta_{\bk,\bk'}\ \delta_{\a,\a'}\ \delta_{\t,\t'},
\quad \{\hat{\bf a}^+_{\bk,\t,\a}, \hat{\bf a}^+_{\bk',\t',\a'}\}=\{\hat{\bf a}^-_{\bk,\t,\a}, \hat{\bf a}^-_{\bk',\t', \a'}\}=0.
\end{eqnarray}
It is useful to relabel the Fermionic operators of type $A$, i.e., those with $\a=A$, by $\a=1$ and the Fermionic operators of type $B$ by $\a=2$, and organize these operators into vectors. Then we can rewrite the non-interacting Hamiltonian as
\begin{equation}\label{qua2}
H^0_L=\frac{1}{|\Lambda_L|}\sum_{\kk\in\cD_L,\tau=\uparrow\downarrow}\sum_{\a,\a'=1,2}\hat {\bf a}^+_{\bk,\tau,\a}[\hat H_0(\bk)]_{\a,\a'}\hat {\bf a}^-_{\bk,\tau,\a'},
\end{equation}
with matrix kernel: \begin{equation}
\hat H_0(\bk)=\begin{pmatrix}\ -\mu&-\Omega^*(\bk)\\-\Omega(\bk)&-\mu\
\end{pmatrix},
\end{equation}
in which $\O({\bk})=\sum_{i=1}^3
e^{i({\bf d}_i-{\bf d}_1) \bk}=1+2
e^{-i \frac32 k_1}\cos(\frac{\sqrt{3}}2 k_2)$
is called the {\it non-interacting complex dispersion relation}, and $\Omega^*(\bk)$ is the complex conjugate of $\Omega(\bk)$.
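Numerically, the band structure is immediate to inspect: the matrix $\hat H_0(\bk)$ is Hermitian with eigenvalues $-\mu\pm|\Omega(\bk)|$, and $|\Omega|$ vanishes at the Dirac points recalled below. A short Python check (ours, illustrative) is:
\begin{verbatim}
import numpy as np

def Omega(k1, k2):
    # Non-interacting complex dispersion relation.
    return 1.0 + 2.0 * np.exp(-1.5j * k1) * np.cos(0.5 * np.sqrt(3.0) * k2)

def H0(k1, k2, mu):
    Om = Omega(k1, k2)
    return np.array([[-mu, -np.conj(Om)], [-Om, -mu]])

kF = (2 * np.pi / 3, 2 * np.pi / (3 * np.sqrt(3.0)))
print(abs(Omega(*kF)))                        # ~ 0: a Dirac point
print(np.linalg.eigvalsh(H0(0.0, 0.0, 1.0)))  # bands -mu -+ |Omega|: -4, 2
\end{verbatim}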
\begin{lemma}[See also \cite{GM}, Lemma 1]\label{inv0}
The Hamiltonian \eqref{qua2} is invariant under the following symmetries.
\begin{itemize}
\item (a) discrete spatial rotations: ${\bf a}^\pm_{\ \bk,\t,\a}\rightarrow e^{\mp i\bk\cdot({\bf d}_3-{\bf d}_1)(\a-1)}{\bf a}^\pm_{ R_{2\pi/3}(\bk),\t,\a}$, $R_\theta\in SO(2)$ is the rotation operator with $\theta\in[0,2\pi]$ independent of $\bk$.
\item (b) vertical reflections: ${\bf a}^\pm_{\ (k_1,k_2),\t,\a}\rightarrow {\bf a}^\pm_{\ (k_1,-k_2),\t,\a}$.
\item (c) interchange of particles: ${\bf a}^\pm_{\ \bk,\t,\a}\leftrightarrow {\bf a}^\pm_{\ -\bk,\t,\a'}$, for $\a\neq\a'$.
\end{itemize}
\end{lemma}
\begin{proof}
It is enough to prove that
\begin{eqnarray}
&&\sum_{\a,\a'=1,2}\sum_{\bk\in\cD_L}\hat {\bf a}^+_{\bk,\tau,\a}[\hat H_0(\bk)]_{\a,\a'}\hat {\bf a}_{\bk,\tau,\a'}\\
&&=-\mu\sum_{\bk\in\cD_L}\big({\bf a}^+_{\bk,\tau,1}{\bf a}_{\bk,\tau,1}+{\bf a}^+_{\bk,\tau,2}{\bf a}_{\bk,\tau,2}\big)
-\sum_{\bk\in\cD_L}\Big[{\bf a}^+_{\bk,\tau,1}\Omega^*(\bk){\bf a}_{\bk,\tau,2}+{\bf a}^+_{\bk,\tau,2}\Omega(\bk){\bf a}_{\bk,\tau,1}\Big]\nonumber
\end{eqnarray}
is invariant under the transformations in $(a)-(c)$.
Since $\Omega(\bk)$ is an even function of $k_2$ and since the domain $\cD_L$ is invariant under the transformation $k_2\rightarrow -k_2$, the conclusion of $(b)$ follows. The rotation operator in $(a)$ is $R_{2\pi/3}=\begin{pmatrix}-1/2&\sqrt3/2\\-\sqrt3/2&-1/2\end{pmatrix}$. We have
$\Omega(R^{-1}_{2\pi/3}(\bk))=e^{i({\bf d}_1-{\bf d}_2)\cdot\bk}\Omega(\bk)$. Since $\cD_L$ is invariant under the rotation $R_{2\pi/3}$, we proved $(a)$. Finally, using the fact that $\Omega(\bk)=\Omega^*(-\bk)$, the conclusion of $(c)$ follows.
\end{proof}
Let $T$ be the temperature of the system and $\beta=1/T$; the Gibbs states associated with $H_{L}$ are defined by:
\begin{equation}\label{gibbs}
\langle\cdot\rangle=\mathrm{Tr}_{\FFF_L}\ [\ \cdot\ e^{-\beta H_{L}}]/Z_{\beta,\L_L},\end{equation}
in which $Z_{\beta,\L_L}=\mathrm{Tr}_{\FFF_L}e^{-\beta H_{L}}$ is the partition function and the trace is taken w.r.t. vectors in the Fock space ${\cal F}_L$. Define $\Lambda_{\beta, \Lambda_L}:=[-\b,\b)\times\L_L$. For $x_0\in[-\b,\b)$, the imaginary-time evolution of the Fermionic operators is defined as ${\bf a}^\pm_{x}=e^{x^0H_L}{\bf a}^\pm_{\ \xx} e^{-x^0H_L}$,
in which $x=(x_0,\xx)\in\Lambda_{\beta, \Lambda_L}$.
The $2p$-point Schwinger functions, $p\ge0$, are (formally) defined as:
\begin{eqnarray}\label{nptsch}
&&S_{2p, \beta, L}(x_1,\e_1,\t_1,\a_1;\cdots x_{2p} ,\e_{2p},\t_{2p},\a_{2p};\lambda):=
\langle{\bf T}\ {\bf a}^{\e_1}_{x_1,\t_1\a_1}\cdots {\bf a}^{\e_{2p}}_{x_{2p},\t_{2p},\a_{2p}}\rangle_{\beta,L}\nonumber\\
&&\quad\quad\quad:=\frac{1}{Z_{\beta,\Lambda_L}}\mathrm{Tr}_{\FFF_L} e^{-\beta H_L}{\bf T}\{{\bf a}^{\e_1}_{({x_1^0},\xx_1),\t_1, \a_1}\cdots {\bf a}^{\e_{2p}}_{({x_{2p}^0},\xx_{2p}),\t_{2p},\a_{2p}}\},
\end{eqnarray}
where ${\bf T}$ is the Fermionic time-ordering operator, defined as
\begin{eqnarray}
&&{\bf T}\ {\bf a}^{\e_1}_{(\xx_1, {x_1^0}),\t_1,\a_1}\cdots {\bf a}^{\e_{2p}}_{(\xx_{2p}, {x_{2p}^0}),\t_{2p},\a_{2p}}\\
&&\quad\quad\quad\quad={\rm sgn} (\pi)\ {\bf a}^{\e_{\pi(1)}}_{(\xx_{\pi(1)}, {x}_{\pi(1)}^0),\t_{\pi(1)},\a_{\pi(1)}}\cdots {\bf a}^{\e_{\pi({2p})}}_{(\xx_{\pi({2p})}, {x}_{\pi(2p)}^0),\t_{\pi({2p})},\a_{\pi({2p})}},\nonumber
\end{eqnarray}
such that ${x}_{\pi(1)}^0\ge{x}_{\pi(2)}^0\ge\cdots\ge{x}_{\pi(2p)}^0$, in which $\pi$ is a permutation of $\{1,\cdots,2p\}$. If some operators are evaluated at equal times, the ambiguity is resolved by normal-ordering these operators: putting ${\bf a}^-_{x_i,\t_i,\a_i}$ on the right of ${\bf a}^+_{x_i,\t_i,\a_i}$.
\subsubsection{The non-interacting Fermi surface}
The non-interacting two-point Schwinger's function (also called the free propagator) is given by:
\begin{eqnarray}
C_{\b}(x-y)&:=&\lim_{L\rightarrow\infty}S_{2,\b,L}(x,y;0)\\
&=&\lim_{L\rightarrow\infty} \frac{1}{\b|\L_L|}
\sum_{k=(k_0, \kk)\in{\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S}_{\b, L}}e^{ik_0\cdot(x_0-y_0)+i\bk\cdot(\xx-\yy)}\hat C(k_0,\bk)\label{free2pt},
\end{eqnarray}
in which
\begin{eqnarray}\label{2ptk}
\hat C(k_0,\bk)=\big[-i k_0 \mI+E(\kk,\mu)\big]^{-1}=\frac{1}{-i k_0 \mI+E(\kk,\mu)},
\end{eqnarray}
is the free propagator in the momentum space. The summation over the momentum $k=(k_0,\bk)$
runs over the domain ${\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S}_{\b, L}:=\{(2n+1)\pi T,\ n\in\ZZZ\}\times\cD_L$, in which $k_0=(2n+1)\pi/\beta=(2n+1)\pi T$, $n\in\ZZZ$, are called the Matsubara frequencies. $\mI$ is the $2\times2$ identity matrix and \begin{equation}
E(\kk,\mu)=\begin{pmatrix}-\m & -\O^*(\bk) \\ -\O(\bk) &
-\m\end{pmatrix}\end{equation} is called the {\it band matrix}, which is closely related to the band structure of the electrons. Inverting the matrix $-ik_0\mI+E(\kk,\mu)$, we obtain:
\begin{eqnarray}\label{2ptkb}
\hat C(k_0,\bk)=\frac{1}{k_0^2+e(\bk,\mu)-2i\mu k_0} \begin{pmatrix}i k_0+\m &-\O^*(\bk) \\ -\O(\bk) &
ik_0+\m\end{pmatrix},
\end{eqnarray}
in which
\begin{eqnarray}\label{band1}
e(\bk,\mu):=-\det\big[ E(\kk,\mu)\big]=4\cos(3k_1/2)\cos(\sqrt{3} k_2/2)+
4\cos^2(\sqrt{3} k_2/2)+1-\mu^2.
\end{eqnarray}
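As a consistency check between \eqref{2ptk} and \eqref{2ptkb}, note that
\begin{equation}
\det\big[-ik_0\mI+E(\kk,\mu)\big]=(ik_0+\mu)^2-\vert\Omega(\bk)\vert^2=-\big(k_0^2+e(\bk,\mu)-2i\mu k_0\big),
\end{equation}
since $\vert\Omega(\bk)\vert^2=e(\bk,\mu)+\mu^2$; inverting the $2\times2$ matrix by the cofactor formula then yields exactly \eqref{2ptkb}.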
\begin{definition}
The non-interacting Fermi surface (F.S.) is defined as:
\begin{equation}{\cal F}_0:=
\{\bk=(k_1, k_2)\in \RRR^2\vert\ e(\bk,\mu)=0\}.\label{freefs}\end{equation}
It is generically a one-dimensional closed curve in $\RRR^2$. A Fermi surface may have several connected components, each of which is called a Fermi curve (F.C.).
\end{definition}
The geometry of the Fermi surface depends crucially on the value of $\mu$:
when $\mu=0$, the solution set of the equation $e(\bk,0)=0$ consists of isolated points, called the {\it Fermi points} or the {\it Dirac points} (cf., e.g., \cite{GM}), among which the pair $\bk^F_1=(\frac{2\pi}{3}, \frac{2\pi}{3\sqrt3})$ and $\bk^F_2=(\frac{2\pi}{3}, -\frac{2\pi}{3\sqrt3})$ are considered as the fundamental ones. In this case the interacting Schwinger functions are proved to be analytic up to zero temperature \cite{GM}.
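As a simple illustration, at the Fermi point $\bk^F_1$ one has
\begin{equation}
\Omega(\bk^F_1)=1+2e^{-i\frac{3}{2}\cdot\frac{2\pi}{3}}\cos\Big(\frac{\sqrt3}{2}\cdot\frac{2\pi}{3\sqrt3}\Big)=1+2e^{-i\pi}\cos\frac{\pi}{3}=1-2\cdot\frac12=0,
\end{equation}
so that $e(\bk^F_1,0)=\vert\Omega(\bk^F_1)\vert^2=0$; the same computation applies to $\bk^F_2$, since $\Omega(\bk)$ is even in $k_2$.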
When $\mu=1$, the solutions to $e(\bk,1)=0$ are the following set of inequivalent lines:
\begin{eqnarray}\label{sol0}
L_1&=&\{(k_1, k_2)\in\RRR^2:k_2={\sqrt{3}} k_1-\frac{4n+2}{\sqrt3}\pi,\
n\in\ZZZ\}, \nonumber\\
L_2&=&\{(k_1, k_2)\in\RRR^2: k_2=-{\sqrt{3}} k_1+\frac{4n+2}{\sqrt3}\pi,\
n\in\ZZZ\},\nonumber\\
L_3&=&\{(k_1,k_2)\in\RRR^2:k_2=\pm\frac{(2n+1)\pi}{\sqrt3},\ n\in\ZZZ\},
\end{eqnarray}
which form a set of perfect triangles, also called the Fermi triangles. The Fermi surface is the union of these Fermi triangles (see Figure \ref{fpt} for an illustration of a Fermi surface composed of $6$ Fermi triangles). The following two Fermi triangles
\begin{eqnarray}
{\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}_0^+&=&\{k_2={\sqrt{3}} k_1-\frac{2\pi}{\sqrt3},\ k_1\in[\frac{2\pi}{3},\pi]\}\cup
\{ k_2=-{\sqrt{3}} k_1+\frac{2\pi}{\sqrt3},\ k_1\in[\frac{\pi}{3},\frac{2\pi}{3}]\}\nonumber\\
&&\quad\quad\cup \{k_2=\frac{\pi}{\sqrt3},\ k_1\in[\frac{\pi}{3},\pi]\},\\
{\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}_0^-&=&\{k_2={\sqrt{3}} k_1-\frac{2\pi}{\sqrt3},\ k_1\in[\frac{\pi}{3},\frac{2\pi}{3}]\}\cup
\{ k_2=-{\sqrt{3}} k_1+\frac{2\pi}{\sqrt3},\ k_1\in[\frac{2\pi}{3},\pi]\}\nonumber\\
&&\quad\quad\cup \{k_2=-\frac{\pi}{\sqrt3},\ k_1\in[\frac{\pi}{3},\pi]\},
\end{eqnarray}
centered around the Fermi points $\bk^F_1$ and $\bk^F_2$, respectively, are called {\it the fundamental Fermi triangles}. All the other Fermi triangles are considered as translations of ${\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}_0^+$ and ${\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}_0^-$. The vertices of the Fermi triangles are called the {\it van Hove singularities}. Lifshitz phase transitions \cite{Lifshitz} may happen when the chemical potential crosses $\mu=1$, at which the topology of the Fermi surface changes: when $0<\mu<1$, the Fermi surface is a set of closed convex curves centered around the Fermi points and bordered by the Fermi triangles; when $\mu>1$ the Fermi surface becomes concave and has more complicated geometrical properties.
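As a consistency check, the lines in \eqref{sol0} indeed solve $e(\bk,1)=0$: on $L_1$ with $n=0$ one has $k_2=\sqrt3 k_1-\frac{2\pi}{\sqrt3}$, hence $\frac{\sqrt3 k_2}{2}=\frac{3k_1}{2}-\pi$ and $\cos(\frac{\sqrt3 k_2}{2})=-\cos(\frac{3k_1}{2})$, so that by \eqref{band1}
\begin{equation}
e(\bk,1)=-4\cos^2\Big(\frac{3k_1}{2}\Big)+4\cos^2\Big(\frac{3k_1}{2}\Big)+1-1=0;
\end{equation}
on $L_3$ one has $\cos(\frac{\sqrt3 k_2}{2})=\cos\big(\frac{(2n+1)\pi}{2}\big)=0$ and again $e(\bk,1)=0$.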
\begin{figure}[!htb]
\centering
\includegraphics[scale=.35]{tfs2a.pdf}
\caption{An illustration of the Fermi triangles, among which the shaded triangles are the fundamental ones. The vertices of the Fermi triangles are the van Hove singularities. The centers of the Fermi triangles, $\kk_1^F,\cdots,\kk_6^F$, are the Fermi points. The rhombus generated by ${\bf b}_1$ and ${\bf b}_2$ is the first Brillouin zone.}\label{fpt}
\end{figure}
\subsection{The interaction potential and the moving Fermi surface}
The many-body interaction potential for the honeycomb Hubbard model is defined as:
\begin{eqnarray}
V_{L}(\lambda)&=&\lambda\sum_{\substack{\xx\in \L_{L,A}\\i=1,\cdots,3}}
\Big(\ {\bf a}^+_{\xx,\uparrow,1}{\bf a}^-_{\xx,\uparrow,1}{\bf a}^+_{\xx,\downarrow,1}{\bf a}^-_{\xx,\downarrow,1}\nonumber\\
&&\quad\quad\quad\quad\quad+{\bf a}^+_{\xx+{\bf d}_i,\uparrow,2}{\bf a}^-_{\xx+{\bf d}_i,\uparrow,2}{\bf a}^+_{\xx+{\bf d}_i,\downarrow,2}{\bf a}^-_{\xx+{\bf d}_i,\downarrow,2}\ \Big),
\label{hamil1}\end{eqnarray}
in which $\lambda\in\RRR$ is called the {\it bare coupling constant}. It is easy to prove that $V_L$
is invariant under the transformations introduced in Lemma \ref{inv0}.
If we choose the grand-canonical Hamiltonian to be $\tilde H_L=H^0_L+V_L$, the interacting 2-point Schwinger's function $\tilde S^{int}_2(\lambda)$ (also called the interacting propagator) is defined as:
\begin{eqnarray}\label{sfe1}
[\tilde S^{int}_2(\lambda,p)]_{\a\a',\tau\tau'}&:=&\langle{\bf T}\{{\bf a^-}_{p_0,{\bf p},\t,\a}{\bf a^+}_{k_0,{\bf k},\t',\a'}\}\rangle\\
&=&\delta_{\t,\t'}\delta(p-k)\Bigg[\frac{1}{-ik_0\mI+E(\bk,\mu)+{\cal E}}\def\si{{\sigma}}\def\ep{{\epsilon}} \def\cD{{\cal D}}\def\cG{{\cal G}((k_0,\kk),\lambda)}\Bigg]_{\a,\a'},\nonumber
\end{eqnarray}
in which
\begin{equation}\label{sf1}
{\cal E}}\def\si{{\sigma}}\def\ep{{\epsilon}} \def\cD{{\cal D}}\def\cG{{\cal G}((k_0,\kk),\lambda)=\begin{pmatrix}\tilde\Sigma_{11}((k_0,\kk),\lambda)&\hat\Sigma_{12}((k_0,\kk),\lambda)\\
\hat\Sigma_{21}((k_0,\kk),\lambda)&\tilde\Sigma_{22}((k_0,\kk),\lambda)\end{pmatrix}
\end{equation}
is called {\it the self-energy matrix}.
Since the interaction potential $V_L(\lambda)$ is invariant under the transformations of Lemma \ref{inv0}, we have \begin{equation}\label{inv1}
\tilde\Sigma_{11}((k_0,\kk),\lambda)=\tilde\Sigma_{22}((k_0,\kk),\lambda),\quad \hat\Sigma_{12}((k_0,\kk),\lambda)=\hat\Sigma_{21}((k_0,-\kk),\lambda).\end{equation}
The diagonal elements of \eqref{sf1} can be further
decomposed as: $
\tilde\Sigma_{11}((k_0,\kk),\lambda)=T(\lambda)+\hat\Sigma_{11}((k_0,\kk),\lambda)$,
in which $T(\lambda)$ is independent of the external momentum $\bk$ and is called {\it the tadpole term}, and $\hat\Sigma_{11}$ is the non-local part.
Notice that the interacting propagator $\tilde S^{int}_2(\lambda)$ could be singular as $k_0\rightarrow0$.
\begin{definition}\label{intfs}
The interacting Fermi surface is defined by
\begin{equation}\label{fint1}
{\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}=\{\bk\vert \det (E(\bk,1)+{\cal E}}\def\si{{\sigma}}\def\ep{{\epsilon}} \def\cD{{\cal D}}\def\cG{{\cal G}((0,\bk),\lambda))=0\}.
\end{equation}
\end{definition}
Using \eqref{sf1}, we have:
\begin{equation}
{\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}=\Big\{\bk\vert\det \begin{pmatrix}-1+T(\lambda)+\hat\Sigma_{11}((0,\kk),\lambda)&-\Omega^*(\bk)+\hat\Sigma_{12}((0,\kk),\lambda)\\
-\Omega(\bk)+\hat\Sigma_{12}((0,-\kk),\lambda)&-1+T(\lambda)+\hat\Sigma_{11}((0,\kk),\lambda)\end{pmatrix}=0\Big\}.
\end{equation}
Since ${\cal E}}\def\si{{\sigma}}\def\ep{{\epsilon}} \def\cD{{\cal D}}\def\cG{{\cal G}(k,\lambda)$ is not known a priori, ${\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}$ is also not known and changes as ${\cal E}}\def\si{{\sigma}}\def\ep{{\epsilon}} \def\cD{{\cal D}}\def\cG{{\cal G}$ changes. One possible way of solving this problem is to fix the interacting Fermi surface by introducing the following (nonlocal) counter-terms:
\begin{eqnarray}
N_{L}(\lambda)&=&-\frac{1}{|\Lambda_L|}\sum_{\bk\in\cD_L}\sum_{\t\in \{\uparrow,\downarrow\}}\Big[\ \delta\mu(\lambda)\sum_{\a=1,2}\ {\bf a}^+_{\bk,\t,\a}{\bf a}_{\bk,\t,\a}+\sum_{\a,\a'=1,2}
\hat\nu(\bk,\lambda)_{\a\a'}{\bf a}^+_{\bk,\t,\a}{\bf a}_{\bk,\t,\a'}\Big]
\nonumber\\
&&:=-\frac{1}{|\Lambda_L|}\sum_{\bk\in\cD_L}\sum_{\t\in \{\uparrow,\downarrow\}}\sum_{\a,\a'=1,2}{\bf a}^+_{\bk,\t,\a}[\delta E(\bk,\lambda)]_{\a\a'}{\bf a}_{\bk,\t,\a'},
\end{eqnarray}
in which
\begin{equation}
\delta E(\bk,\lambda)=\begin{pmatrix}\delta\mu(\lambda)+\hat\nu_{11}(\bk,\lambda)&\hat\nu_{12}(\bk,\lambda)\\
\hat\nu_{21}(\bk,\lambda)&\delta\mu(\lambda)+\hat\nu_{22}(\bk,\lambda)\end{pmatrix},
\end{equation}
whose matrix elements satisfy $\hat\nu_{11}(\bk,\lambda)=\hat\nu_{22}(\bk,\lambda)$, $\hat\nu_{12}(\bk,\lambda)=\hat\nu_{21}(-\bk,\lambda)$, and the conditions:
\begin{equation}\label{rncd1a}
\delta\mu(0)=0,\quad \hat\nu_{\a\a'}(\bk,0)=0\ {\rm for}\ \a,\a'=1,2.
\end{equation}
The (new) grand-canonical Hamiltonian is defined by:
\begin{equation}
H_L=H_L^0+V_{L}+N_{L},
\end{equation}
and the interacting propagator is:
\begin{eqnarray}\label{newintp}
\frac{1}{-ik_0\mI+E(\bk,1)+\delta E(\bk,\lambda)+{\cal E}}\def\si{{\sigma}}\def\ep{{\epsilon}} \def\cD{{\cal D}}\def\cG{{\cal G}((k_0,\kk),\delta E,\lambda)}.
\end{eqnarray}
With the introduction of the counter-terms, the singularities of the new interacting propagator \eqref{newintp} are required to coincide with the non-interacting Fermi surface ${\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}_0$, which puts constraints on the counter-terms, also called {\it the renormalization conditions}, which can be formulated as follows. We can formally expand the interacting propagator as:
\begin{equation}\label{intpr0}
\sum_{n=0}^\infty\frac{1}{-ik_0\mI+E(\bk,1)}\Bigg[\ \frac{\delta E(\bk,\lambda)+{\cal E}}\def\si{{\sigma}}\def\ep{{\epsilon}} \def\cD{{\cal D}}\def\cG{{\cal G}((k_0,\kk),\delta E,\lambda)}{-ik_0\mI+E(\bk,1)}\ \Bigg]^n.
\end{equation}
In order that the singularities of \eqref{intpr0} coincide with ${\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}_0$, it is enough to require that:
\begin{definition}[The renormalization condition]\label{conj1}
\begin{itemize}
\item (a) The numerator in \eqref{intpr0} vanishes on the Fermi surface:
\begin{equation}\label{rncd1}
\delta E(\bk,\lambda)+{\cal E}}\def\si{{\sigma}}\def\ep{{\epsilon}} \def\cD{{\cal D}}\def\cG{{\cal G}((0,\kk),\lambda)=0,\ {\forall}\ \bk\ {\rm with}\ e(\bk,1)=0,
\end{equation}
\item (b) the ratio
\begin{equation}\label{rncd2}
\frac{\delta E(\bk,\lambda)+{\cal E}}\def\si{{\sigma}}\def\ep{{\epsilon}} \def\cD{{\cal D}}\def\cG{{\cal G}((k_0,\kk),\delta E,\lambda)}{-ik_0\mI+E(\bk,1)}
\end{equation}
is locally bounded for all $(k_0,\bk)\in\cD_{\b,L}$, up to a zero-measure set.
\end{itemize}
\end{definition}
Define the projection operator $P_F$ which maps each $\bk\in\cD_L$ to a unique $P_F\bk\in{\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}_0$; then, using the explicit expressions for $\delta E$ and ${\cal E}}\def\si{{\sigma}}\def\ep{{\epsilon}} \def\cD{{\cal D}}\def\cG{{\cal G}$, Formula \eqref{rncd1} can be written as:
\begin{eqnarray}
&&\delta\mu(\lambda)+T(\lambda)=0,\ {\rm and}\label{rncd3}\\
&&\hat\nu_{\a\a'}(P_F\bk,\lambda)+\hat\Sigma_{\a\a'}((0,P_F\bk),\delta E,\lambda )=0,\ \a,\a'=1,2.\label{rncd4}
\end{eqnarray}
Due to the symmetric properties of the counter-terms, Formula \eqref{rncd4} reduces to:
\begin{equation}\label{rncd5}
\hat\nu_{11}(P_F\bk,\lambda)+\hat\Sigma_{11}((0,P_F\bk),\delta E,\lambda )=0,\quad \hat\nu_{12}(P_F\bk,\lambda)+\hat\Sigma_{12}((0,P_F\bk),\delta E,\lambda )=0.
\end{equation}
\begin{remark}
It is important to remark that, since the set of counter-terms $\delta E$ that satisfy condition $(a)$ may be highly non-trivial, we have indeed defined a class of honeycomb Hubbard models whose interacting Fermi surfaces are fixed and coincide with ${\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}_0$.
\end{remark}
We have the following theorem concerning the counter-terms:
\begin{theorem}\label{conj2}
There exists a counter-term matrix $\delta E(\bk,\lambda)$ such that the renormalization conditions introduced in Definition \ref{conj1} are satisfied. The tadpole counter-term $\delta\mu(\lambda)$ is a bounded function of $\lambda$. The counter-terms $\hat\nu_{\a\a'}(\bk,\lambda)$, $\a,\a'=1,2$, are bounded and $C^{1+\epsilon}$ differentiable in the external momentum $\bk$.
\end{theorem}
Notice that we can combine the quadratic terms in $H^0_L$ and $N_L$:
\begin{eqnarray}
-\frac{1}{|\Lambda_L|}\sum_{\bk\in\cD_L}\sum_{\t\in \{\uparrow,\downarrow\}}\sum_{\a,\a'=1,2}{\bf a}^+_{\bk,\t,\a}[ E_{bare}(\bk,\lambda)]_{\a\a'}{\bf a}_{\bk,\t,\a'},
\end{eqnarray}
in which the kernel matrix
\begin{equation}\label{bme}
E_{bare}(\bk,\lambda)=\begin{pmatrix}-1+\delta\mu(\lambda)+\hat\nu_{11}(\bk,\lambda)&-\Omega^*(\bk)+\hat\nu_{12}(\bk,\lambda)\\
-\Omega(\bk)+\hat\nu_{12}(-\bk,\lambda)&-1+\delta\mu(\lambda)+\hat\nu_{11}(\bk,\lambda)\end{pmatrix}
\end{equation}
is called {\it the bare band matrix}, $\mu_{bare}=1-\delta\mu(\lambda)$ is also called the bare chemical potential. The Hamiltonian for the {\it new model} can be considered as
the one with band matrix $E_{bare}$ and interaction potential $V_L(\lambda)$.
\subsection{The main result}
The most interesting quantities are the connected Schwinger's functions $S^c_{2p,\beta}(\lambda)$, $p\ge0$, and the self-energy function $\Sigma(\lambda)$ in the {\it thermodynamic limit} $L\rightarrow\infty$ or $\L_L\rightarrow\L$. A fundamental mathematical problem is whether such quantities are well defined.
In this paper we study the analytic properties of the connected Schwinger's function $S^c_{2p,\beta}(\lambda)$ for $p\ge1$ and the self-energy function. The main results are summarized in the following theorem (see also Theorem \ref{tpc}, Theorem \ref{cth1}, Theorem \ref{mqua}, Theorem \ref{maina} and Theorem \ref{mainb} for the precise presentation of the main results).
\begin{theorem}\label{mainthm}
Consider the doped honeycomb Hubbard model with renormalized chemical potential $\mu=1$ at positive temperature $0<T\ll1$. There exists a counter-term matrix $\delta E$ obeying the renormalization conditions of Definition \ref{conj1}, such that, after taking the thermodynamic limit $L\rightarrow\infty$, the perturbation series of the connected $2p$-point Schwinger's functions, $p\ge1$, as well as that of the self-energy, have positive radius of convergence in the domain ${\cal R}}\def\cL{{\cal L}} \def\JJ{{\cal J}} \def\OO{{\cal O}_T:=\{\lambda\in\RRR\ |\ |\lambda|<c/|\log T|^2\}$, in which $0<c<0.01$ is some constant independent of $T$ and $\lambda$. The second derivatives of the self-energy w.r.t. the external momentum are not uniformly bounded but are divergent as $T\rightarrow0$.
\end{theorem}
These results will be proved with rigorous renormalization group analysis. In the first step, we shall express the Schwinger's functions in terms of Grassmann functional integrations, which are more suitable for the multi-scale analysis.
\section{The Multi-scale Analysis}
\subsection{The fermionic functional integrals}
\begin{notation}
Instead of using the labeling $A$ and $B$ for the two types of quasi-particles, we relabel them as $A=1$ and $B=2$.
\end{notation}
The Grassmann integrals are linear functionals on the Grassmann algebra ${\bf Gra}$, generated by the Grassmann variables $\{\hat\psi^\e_{k,\t,\a}\}^{\t=\uparrow\downarrow;\a=1,2;\epsilon=\pm}_{k\in{\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S}_{\b,L}}$,
which satisfy the periodic condition in the momentum variables: $\hat\psi^\e_{k_0,\bk+n_1{\bf G}_1+n_2{\bf G}_2,\t,\a}=\hat\psi^\e_{k_0,\bk,\t,\a}$, but anti-periodic condition in the frequency variable: $\hat\psi^\e_{k_0+\beta,\bk,\t,\a}=-\hat\psi^\e_{k_0,\bk,\t,\a}$.
The product of ${\bf Gra}$ is defined by: $\hat\psi^\e_{k,\t,\a}\hat\psi^{\e'}_{k',\t',\a'}=-\hat\psi^{\e'}_{k',\t',\a'}\hat\psi^\e_{k,\t,\a}$, for $(\e,\t,\a,k)\neq(\e',\t',\a',k')$ and $(\hat\psi^\e_{k,\t,\a})^2=0$. Let $D\psi=\prod_{\substack{k\in{\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S}_{\b, L},\\ \t=\uparrow\downarrow,\ \a=1,2}}d\hat\psi_{k,\t,\a}^+
d\hat\psi_{k,\t, \a}^-$ be the Grassmann Lebesgue measure, and let
$Q( \hat\psi^-, \hat\psi^+)$ be a monomial function of $\hat\psi_{k,\t,\a}^-, \hat\psi_{k,\t,\a}^+$. The Grassmann integral $\int Q D\psi$ is defined to be $1$ for $Q( \hat\psi^-, \hat\psi^+)=\prod_{\substack{k\in{\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S}_{\b, L},\\ \t=\uparrow\downarrow,\ \a=1,2}} \hat\psi^-_{k,\t,\a} \hat\psi^+_{k,\t,\a}$, up to the sign of the permutation of the variables, and $0$ otherwise. The Grassmann differentiation is defined by
${\partial_{ \hat\psi^\e_{k,\t,\a}}}{ \hat\psi^{\e'}_{k',\t',\a'}}=\delta_{k,k'}\delta_{\t,\t'}\delta_{\a,\a'}\delta_{\e,\e'}$.
The {\it Grassmann Gaussian measure} $P(d\psi)$ with covariance $\hat C(k)$ is defined as:
\begin{equation}
P(d\psi) = (\det \NN)^{-1} D\psi \cdot\;\exp \Bigg\{-\frac{1}{\b|\L_L|} \sum_{k\in{\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S}_{\b, L},\t={\uparrow\downarrow}, \a=1, 2 }
\hat\psi^{+}_{k,\t, \a}{\hat C({k})}^{-1}\hat\psi^{-}_{k,\t,\a}\Bigg\}\;,
\label{ggauss}\end{equation}
where
\begin{equation}
\NN=\prod_{k\in{\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S}_{\b, L},\t={\uparrow\downarrow}}{\frac{1}{\b|\L_L|}}
\begin{pmatrix}-i k_0-1 & -\O^*(\bk) \\ -\O(\bk) &
-ik_0-1\end{pmatrix},\label{norma}
\end{equation}
is the normalization factor. The Grassmann fields are defined as:
\begin{equation}
\psi^\pm_{x,\t,\a}=\sum_{k\in{\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S}_{\b, L}}
e^{\pm ikx}\hat\psi^\pm_{k,\t,\a},\ \ x\in\Lambda_{\beta,L}.
\end{equation}
The interaction potential becomes:
\begin{eqnarray}
&&\VV_L(\psi,\lambda)=
\lambda\sum_{\a,\a'=1,2}\ \int_{\Lambda_{\beta,L}} d^3x \ \psi^+_{x,\uparrow,\a}
\psi^-_{x,\uparrow,\a'}\psi^+_{x,\downarrow,\a}
\psi^-_{x,\downarrow,\a'}\label{potx}\\
&&\quad\quad+\frac{1}{\b|\Lambda_L|}\sum_{k\in{\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S}_{\b, L}}\sum_{\a,\a'=1,2}\sum_{\tau=\uparrow, \downarrow}[\delta\mu(\lambda)\delta_{\a\a'}+\hat\nu_{\a\a'}(\kk,\lambda)]
\hat\psi^+_{k,\t,\a}\hat\psi^-_{k,\t,\a'}\nonumber
\end{eqnarray}
where $\int_{\Lambda_{\beta,L}} d^3x:=\int_{-\beta}^\beta dx_0\ \sum_{\xx\in\L_L}$ is a shorthand notation for the integration and sum. Define the non-Gaussian measure $P^I_L(d\psi):=P(d\psi)e^{-\VV_L(\psi)}$ over ${\bf Gra}$ and let $P^I(d\psi)=\lim_{L\rightarrow\infty}P^I_L(d\psi)$ be the limit of the sequence of measures indexed by $L$ (in the topology of weak convergence of measures); then we can easily prove that these non-Gaussian measures are invariant under the transformations introduced in Lemma \ref{inv0}.
The Schwinger functions are defined as the moments of the measure $P^I(d\psi)$:
\begin{eqnarray}
&&S_{2p,\b}(x_1,\t_1,\e_1,\a_1,\cdots,x_{2p},\t_{2p},\e_{2p},\a_{2p};\lambda)\nonumber\\
&&\quad\quad\quad\quad\quad\quad\quad\quad\quad=\lim_{L\rightarrow\infty}\frac{\int\psi^{\epsilon_1}_{x_1,\t_1,\a_1}\cdots \psi^{\epsilon_{2p}}_{x_{2p},\t_{2p},\a_{2p}} P(d\psi)e^{-\VV_L(\psi,\lambda)}}{\int P(d\psi)e^{-\VV_L(\psi,\lambda)}}\nonumber\\
&&\quad\quad\quad\quad\quad\quad\quad\quad\quad=:\frac{\int\psi^{\epsilon_1}_{x_1,\t_1,\a_1}\cdots \psi^{\epsilon_{2p}}_{x_{2p},\t_{2p},\a_{2p}} P(d\psi)e^{-\VV(\psi,\lambda)}}{\int P(d\psi)e^{-\VV(\psi,\lambda)}}.
\end{eqnarray}
We assume in the rest of this paper that the thermodynamic limit has already been taken and we shall drop the parameter $L$.
\subsection{Scale Analysis}
The lattice structure plays the role of the short-distance cutoff for the spatial momentum, so that the ultraviolet behaviors of the Schwinger's functions are rather trivial. The two-point Schwinger's function
is not divergent but has a discontinuity at $x_0=0$, $\xx=0$ \cite{BG1}. Although the summation over all scales of the tadpole terms is not absolutely convergent as $k_0\rightarrow\infty$, this sum can be controlled by using the explicit expression of the single scale propagator.\footnote{Remark that the amplitude of the tadpole term is not ultraviolet divergent, due to the lattice structure of the model, which serves as an ultraviolet cutoff. Since we are mainly interested in the infrared problems, which are more relevant in condensed matter systems, we shall leave the UV analysis to the interested readers, as an exercise.} So we shall omit the ultraviolet analysis and consider only the infrared behaviors, which correspond to the cases of $T\ll1$ and the momenta getting close to the Fermi surface. It is most convenient to choose the infrared cutoff functions as Gevrey class functions, defined as follows.
\begin{definition}\label{gev}
Given ${\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S}\subset\RRR^d$ and $h>1$, the Gevrey class $G^h({\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S})$ of functions of index $h$ is defined as the set of smooth functions $\phi\in{\cal C}^\infty({\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S})$ such that for every compact subset ${\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S}^c\subset{\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S}$, there exist two positive constants $A$ and $\g$, both depending on $\phi$ and the compact set ${\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S}^c$, satisfying:
\begin{eqnarray}
\max_{x\in{\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S}^c}|\partial^\alpha \phi(x)|\le A\g^{-|\alpha|}(\alpha!)^h,\ \alpha\in\ZZZ^d_+,\ |\alpha|=\alpha_1+\cdots+\alpha_d,\ \alpha!=\alpha_1!\cdots\alpha_d!.
\end{eqnarray}
The Gevrey class of functions with compact support is defined as:
$G_0^h({\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S})=G^h({\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S})\cap C^\infty_0({\cal D}}\def\AAA{{\cal A}}\def\GG{{\cal G}} \def\SS{{\cal S})$.
The Fourier transform of any $\phi\in G_0^h$ satisfies
\begin{equation}
\max_{k\in\RRR^d}|\hat \phi(k)|\le Ae^{-h(\frac{\g}{\sqrt{d}}|k|)^{1/h}}.
\end{equation}
\end{definition}
The infrared cutoff function $\chi\in G^h_0(\RRR)$ is defined by:
\begin{equation}
\chi(t)=\chi(-t)=
\begin{cases}
=0\ ,&\quad {\rm for}\quad |t|>2,\\
\in(0,1)\ ,&\quad {\rm for}\quad 1<|t|\le2,\\
=1,\ &\quad {\rm for}\quad |t|\le 1.
\end{cases}\label{support}
\end{equation}
Given any fixed constant $\gamma\ge10$, define the following partition of unity:
\begin{eqnarray}\label{part1}
1&=&\sum_{j=0}^{\infty}\chi_j(t),\ \ \forall t\neq 0;\\
\chi_0(t)&=&1-\chi(t),\
\chi_j(t)=\chi(\gamma^{2j-2}t)-\chi(\gamma^{2j}t),\ {\rm for}\ j\ge1.\nonumber
\end{eqnarray}
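One easily checks that \eqref{part1} is indeed a partition of unity: the sum telescopes, $\sum_{j=0}^{J}\chi_j(t)=1-\chi(\gamma^{2J}t)$, and for any fixed $t\neq0$ the last term vanishes once $\gamma^{2J}|t|>2$. Moreover, by \eqref{support}, each $\chi_j(t)$ with $j\ge1$ is supported in the region $\gamma^{-2j}\le |t|\le 2\gamma^{-2j+2}$.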
We are mainly interested in the infrared properties of the free propagator, which correspond to $k_0\rightarrow0$, $|e(\kk,1)|\rightarrow0$ and $|e(\kk,1)|\gg k_0^2$, or equivalently, to the case of $j\gg1$. So we can ignore $k_0^2$ in the denominator of \eqref{redprop} below. Define the infrared propagators as follows:
\begin{definition}
Let $j_0\gg1$ be a positive integer, the infrared free propagators are defined as:
\begin{eqnarray}\label{irprop}
&&\hat C^{ir}(k)_{\a\a'}=\sum_{j=j_0}^\infty \ \hat C_j(k)_{\a\a'},\ \a,\a'=1,2,\\
&&\hat C_j(k)_{\a\a'}=\hat C(k)_{\a\a'}\cdot \chi_j[4k_0^2+e^2(\kk,1)].\label{irprop1}
\end{eqnarray}
The number $j_0$ is also called the infrared threshold.
\end{definition}
It is also important to consider the sliced propagators $\hat C_j(k)$ with $0\le j\le j_0$. We have the following proposition:
\begin{proposition}
The $p$-th power of the sliced propagator, $[\hat C_j]^p$, is integrable for $0\le j\le j_0$ and $p\ge1$.
\end{proposition}
\begin{proof}
Since the integration domain of $[\hat C_j(k)]^p$ is bounded, and the denominator of $\tilde C_j(k)$ is strictly bounded away from zero on the support of $\chi_j$ for $0\le j\le j_0$, the conclusion follows.
\end{proof}
This proposition implies that the interacting Schwinger's functions are also well defined for $0\le j\le j_0$. So it is more convenient to define the infrared propagator as: $\hat C(k)_{\a\a'}=\sum_{j=0}^\infty\hat C_j(k)_{\a\a'},\ \a,\a'=1,2$. It is easy to find that $\chi_j[4k_0^2+e^2(\kk,1)]$ vanishes for $j\ge j_{max}:=\EEE(\tilde j_{max})$, where $\tilde j_{max}$ is the solution to $\gamma^{\tilde j_{max}-1}= \frac{1}{\sqrt{2}\pi T}$ and
$\EEE(\tilde j_{max})$ is the integer part of $\tilde j_{max}$.
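The value of $j_{max}$ can be read off directly from the Matsubara frequencies: since $\vert k_0\vert\ge\pi T$, on the support of $\chi_j$ one must have $4\pi^2T^2\le 4k_0^2+e^2(\kk,1)\le 2\g^{-2j}$, so that $\chi_j$ vanishes identically as soon as $\g^{j}>\frac{1}{\sqrt2\pi T}$, i.e. $j_{max}\sim \log_\g\frac{1}{\sqrt2\pi T}$.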
It is useful to rewrite the propagator \eqref{2ptk} as $\hat C(k)= \tilde C(k) A(k)$, in which
\begin{equation}\label{redprop}\tilde C(k):=\frac{1}{-2ik_0+e(\kk,1)+k^2_0},
\end{equation}
and
\begin{equation}\label{freep3}
A(k)=\begin{pmatrix}i k_0+1&-\O^*(\bk) \\ -\O(\bk) &
ik_0+1\end{pmatrix}.
\end{equation}
\begin{proposition}\label{mat0}
Let $A_{\a\a'}$, $\a,\a'=1,2$, be the matrix elements of $A$. Let $j\in[j_0, j_{max}]$ be any scaling index, let $\gamma\ge10$ be a fixed constant, and suppose that $k$ satisfies $\g^{-2j-2}\le 4k_0^2+e^2(\kk,1)\le 2\g^{-2j}$. Then there exist two constants $K$, $K'$, independent of the scaling index $j$, satisfying $0.9\le K<K'\le 2$, such that
\begin{equation}\label{matele2}
K\le\vert A_{\a\a'}\vert\le K',\ \a,\a'=1,2\ .
\end{equation}
\end{proposition}
\begin{proof}
It is easy to find that the support of the cutoff function $\chi_j$ at $j\ge j_0$ is:
\begin{equation}\label{multi1}
\cD_j=\Big\{k=(k_0,\kk) \vert\g^{-2j-2}\le 4k_0^2+e^2(\kk,1)\le 2\g^{-2j},\ e^2(\kk,1)\le k^2_0\Big\},\end{equation}
which implies that
\begin{equation}\frac14 \g^{-2j-2}\le k_0^2\le \frac12\g^{-2j},\label{cond1}\end{equation}
\begin{equation}-\frac{\sqrt2}{2}\g^{-j}\le e(\kk,1)\le -\frac12\g^{-j-1},\ {\rm or}\ \
\frac12\g^{-j-1}\le e(\kk,1)\le \frac{\sqrt2}{2}\g^{-j}
\label{cond0}.
\end{equation}
Consider first the elements $\vert A_{11}\vert=\vert A_{22}\vert=\vert ik_0+1\vert=(1+k_0^2)^{1/2}$. Choosing $j_0=1$ and by \eqref{cond1}, we have $1<(1+k_0^2)^{1/2}<2$. Now we consider the elements $\vert A_{12}\vert=\vert A_{21}\vert=\vert \O(\kk)\vert=(1+e(\kk,1))^{1/2}$ (cf. \eqref{band1}). By \eqref{cond0}, we can easily find that $0.9\le(1+e(\kk,1))^{1/2}\le2$. So we can always choose $K$ and $K'$ which satisfy \eqref{matele2}.
\end{proof}
\begin{lemma}\label{bdsp}
Let $\hat C_j(k)_{\a\a'}$, $\a,\a'=1,2$, be any matrix element of the momentum space free propagator at slice $j$. There exists a positive constant $K$, which is independent of the scaling index, such that
\begin{equation}\label{tad1}
\sup_{k\in \cD_j}\Vert\hat C_j(k)_{\a\a'}\Vert\le K\g^{j}.
\end{equation}
\end{lemma}
\begin{proof}
Using the definition of the support function $\chi_j$, and by \eqref{irprop1}, we have
\begin{equation}
\sup_{k\in \cD_j}\vert \tilde C_j(k)\vert\le K'\g^{j}\ ,
\end{equation}
for certain positive constant $K'$ independent of the scaling index. By Proposition \ref{mat0}, the result follows easily.
\end{proof}
\subsection{Sectors and angular analysis}
Due to the $\ZZZ_3$ symmetry of the Fermi triangle (see Figure \ref{figsec}), it is convenient to introduce a new basis $(e_+, e_-)$:
\begin{equation}
e_+=\frac{\pi}{3}(1, \sqrt3),\quad e_-=\frac{\pi}{3}(-1, {\sqrt3}),
\end{equation}
which are neither orthogonal nor normalized.
Let $(k_+,k_-)$ be the coordinates in the new basis, i.e. $\bk=k_+e_+ + k_-e_-$; then Formula \eqref{band1} can be rewritten as:
\begin{equation}\label{band3}
e(\bk,1)=8\cos\frac{\pi(k_++k_-)}{2} \cos\frac{\pi k_+}{2} \cos\frac{\pi k_-}{2}.
\end{equation}
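The rewriting is elementary: in the new basis $k_1=\frac{\pi}{3}(k_+-k_-)$ and $k_2=\frac{\pi}{\sqrt3}(k_++k_-)$, so that $\frac{3k_1}{2}=\frac{\pi(k_+-k_-)}{2}$ and $\frac{\sqrt3 k_2}{2}=\frac{\pi(k_++k_-)}{2}$; inserting this into \eqref{band1} with $\mu=1$ and using the identity $\cos A+\cos B=2\cos\frac{A+B}{2}\cos\frac{A-B}{2}$, one finds
\begin{equation}
e(\bk,1)=4\cos\frac{\pi(k_++k_-)}{2}\Big[\cos\frac{\pi(k_+-k_-)}{2}+\cos\frac{\pi(k_++k_-)}{2}\Big]=8\cos\frac{\pi(k_++k_-)}{2} \cos\frac{\pi k_+}{2} \cos\frac{\pi k_-}{2}.
\end{equation}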
The edges of the Fermi triangles are given by the equations $k_+=\pm1$, $k_-=\pm1$ and $k_++k_-=\pm1$, respectively. Define also the quasi-momentum $q_\pm$ by
\begin{eqnarray}\label{qa}
q_\pm=
\begin{cases}
k_\pm-1,&{\rm for}\quad k_\pm\ge0,\\
k_\pm+1,& {\rm for}\quad k_\pm\le0,
\end{cases}
\end{eqnarray}
\eqref{band3} becomes
\begin{equation}\label{band4}
e({\bf q},1)=-8\cos\frac{\pi(q_++q_-)}{2} \sin\frac{\pi q_+}{2} \sin\frac{\pi q_-}{2}.
\end{equation}
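For instance, on the branch $k_+,k_-\ge0$ one has $q_\pm=k_\pm-1$, hence $\cos\frac{\pi k_\pm}{2}=\cos\frac{\pi(q_\pm+1)}{2}=-\sin\frac{\pi q_\pm}{2}$ and $\cos\frac{\pi(k_++k_-)}{2}=\cos\big(\frac{\pi(q_++q_-)}{2}+\pi\big)=-\cos\frac{\pi(q_++q_-)}{2}$; the three sign flips combine into the overall minus sign in \eqref{band4}, and the other branches are treated in the same way.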
The first Brillouin zone in the new coordinates $(q_+,q_-)$ is a lozenge centered around $(0,0)$ but with rescaled coordinates, see Figure \ref{fpt1} for an illustration. The fundamental Fermi triangles are defined by the equations $q_+=0$, $q_-=0$ and $q_++q_-=\pm1$.
\begin{figure}[!htb]
\centering
\includegraphics[scale=.35]{tfs2e.pdf}
\caption{An illustration of the first Brillouin zone in the basis $(e_+,e_-)$, in which the rescaled coordinates are also given. The vertices of the Fermi triangles are the van Hove singularities. }\label{fpt1}
\end{figure}
By Formula \eqref{multi1}, the size of $4k_0^2+e^2(\bk,1)$ is bounded by
$O(1)\gamma^{-2j}$ at any scaling index $j\ge1$. But the size of $e^2(\bk,1)$, which can be of order $\gamma^{-2i}$ with $i\ge j$, is not fixed. In order to obtain the optimal decaying bounds for the direct-space propagators, we need to control the size of $e^2(\bk,1)$ by further introducing cutoff functions for the spatial momentum.
\begin{definition}
Define the factors $\{t^{(a)}\}$, $a=1,2,3$, by
\begin{equation}\label{threefactors}
t^{(1)}=\cos^2\frac{\pi k_+}{2},\ t^{(2)}=\cos^2\frac{\pi k_-}{2},\ t^{(3)}=\cos^2\frac{\pi(k_++k_-)}{2},
\end{equation}
or in terms of coordinates $q^{(a)}$, by
\begin{equation}\label{threefactorsq}
t^{(1)}=\sin^2\frac{\pi q_+}{2},\ t^{(2)}=\sin^2\frac{\pi q_-}{2},\ t^{(3)}=\cos^2\frac{\pi(q_++q_-)}{2}.
\end{equation}
Correspondingly, define the three local coordinates $\{k^{(a)}\}$, $a=1,2,3$, by
\begin{equation}
k^{(1)}=k_+,\ k^{(2)}=k_{-},\ k^{(3)}=k_++k_-,
\end{equation}
and the coordinates of the quasi-momentum $\{q^{(a)}\}$, $a=1,2,3$, by
\begin{equation}
q^{(1)}=k_+\pm1,\ q^{(2)}=k_-\pm1,\ q^{(3)}=q_++q_-.
\end{equation}
\end{definition}
Since the factors in \eqref{threefactors} or \eqref{threefactorsq} are highly nonlinear in $k^{(a)}$
or $q^{(a)}$, for $k^{(a)}$ not very close to $\pm1$, or for $q^{(a)}$ not very close to $0$, instead of slicing the momentum $k^{(a)}$ or the quasi-momentum $q^{(a)}$ directly, we choose to slice the functions in \eqref{threefactors} or \eqref{threefactorsq}. The supports of the cutoff functions are called the {\it sectors} \cite{FMRT}.
\begin{definition}[Sectors]
To each factor $t^{(a)}$ of \eqref{threefactors} or \eqref{threefactorsq}, we introduce a set of indices, $s^{(a)}=0,1,\cdots, j$, called the sector indices. Then we define the following functions of partition of unity:
\begin{eqnarray}\label{secf}
1=\sum_{s^{(a)}=0}^{j}v_{s^{(a)}}(t^{(a)}),\ \ \begin{cases} v_0(t^{(a)})=1-\chi(\g^2t^{(a)}),\\
v_{s^{(a)}}(t^{(a)})=\chi_{s^{(a)}+1}(t^{(a)}),&{\rm for}\ 1\le s^{(a)}\le j-1,\\
v_j(t^{(a)})=\chi(\g^{2j}t^{(a)}),
\end{cases}
\quad a=1,2,3.
\end{eqnarray}
\end{definition}
With the introduction of sector indices, we can define the sectorized propagators, as follows.
\begin{definition}
Let $t^{(a)}$ and $t^{(b)}$, $a,b=1,2,3$, $a\neq b$, be the factors defined by \eqref{threefactors} or \eqref{threefactorsq}, whose values are close to zero. Let $s^{(a)}$ and $s^{(b)}$ be the corresponding sector indices, the free propagator at any slice $j$ can be decomposed as:
\begin{equation}\label{sec0}
\hat C_j(k)=\sum_{\s=(s^{(a)}, s^{(b)})}\hat C_{j,\s}(k),
\end{equation}
where
\begin{equation}
\hat C_{j,\s}(k)=\hat C_j(k)\cdot v_{s^{(a)}}[t^{(a)}]\ v_{s^{(b)}}[t^{(b)}]\label{sec1a}
\end{equation}
is called a sectorized propagator. The support of $\hat C_{j,\s}(k)$ is called a sector with index $(j, s^{(a)},s^{(b)})$, and is denoted by $\Delta^j_{s^{(a)},s^{(b)}}$.
\end{definition}
Notice that the three cosine functions in \eqref{band3} are not independent of one another. We have:
\begin{lemma}\label{bdcos}
Let $s\ge2$ be a sector index. If $\vert\cos\frac{\pi k_+}{2}|\le\gamma^{-s}$ and $|\cos\frac{\pi k_-}{2}|\le \sqrt{2}/2$, then there exists some strictly positive constant $K'>0$ such that
\begin{equation}\label{bd3f}
K'\le\vert\cos\frac{\pi(k_++k_-)}{2}\vert\le1.
\end{equation}
\end{lemma}
\begin{proof}
By the trigonometric formula
$\cos\frac{\pi(k_++k_-)}{2}=\cos\frac{\pi k_+}{2} \cos\frac{\pi k_-}{2}-\sin\frac{\pi k_+}{2} \sin\frac{\pi k_-}{2}$,
we have
\begin{eqnarray}
|\cos\frac{\pi(k_++k_-)}{2}|&\ge& \frac{\sqrt2}{2}[1-\frac{\gamma^{-2s}}{2}-\g^{-s}+O(\g^{-4s})]\nonumber\\
&\ge& \frac{\sqrt2}{2}(1-2\g^{-2}),
\end{eqnarray}
for $\g\ge10$.
Choosing $K'=\frac{\sqrt2}{2}(1-2\g^{-2})$, which is strictly greater than zero, we prove this lemma. Similar results hold for the three factors in \eqref{threefactorsq}.
\end{proof}
This lemma states that, if any two factors among the three, say, $t^{(1)}$ and $t^{(2)}$, are close to zero, then the third factor $t^{(3)}$ must be strictly bounded away from zero. Therefore, we only need to introduce two sector indices $(s^{(1)}, s^{(2)})$ to control the size of $e(\bk,1)$. If it were another two factors, say, $t^{(2)}$ and $t^{(3)}$, that are close to zero, then $t^{(1)}$ is bounded away from zero and we only need to introduce the sector indices $s^{(2)}$ and $s^{(3)}$. Remark that the constant $\sqrt{2}/2$ in Lemma \ref{bdcos} can be replaced by some other positive constant strictly smaller than one, at the price of choosing a different value of $\gamma$.
\subsection{Constraints on the sector indices}
In this part we consider the possible constraints on the sector indices. The first one is the following:
\begin{lemma}\label{spm}
Let $j$ be any scaling index. Let $t^{(a)}$ and $t^{(b)}$, $a,b=1,2,3$, $a\neq b$, be the factors that are close to zero and
$s^{(a)}$, $s^{(b)}$ be the corresponding sector indices, then the possible values of the sector indices $s^{(a)}$ and $s^{(b)}$ must satisfy:
\begin{equation}
s^{(a)}+s^{(b)}\ge j-2.
\end{equation}
\end{lemma}
\begin{proof}
By Formulae \eqref{secf} and \eqref{bd3f}, and recalling from \eqref{band3} that $e^2(\kk,1)=64\,t^{(1)} t^{(2)} t^{(3)}$, we have
\begin{equation}\label{compare1}
O(1)\,\g^{-2s^{(a)}}\g^{-2s^{(b)}}\ge e^2(\kk,1)\ge 64\,c'^2\gamma^{-2s^{(a)}-2}\gamma^{-2s^{(b)}-2},
\end{equation}
in which $c'=\frac{\sqrt2}{2}(1-2\g^{-2})$.
In order that the sliced propagator is non-vanishing, $e^2(\kk,1)$ must obey Formulae \eqref{multi1} and \eqref{cond0}, so we have
\begin{equation}\label{compare2}
\frac12\g^{-2j}\ge e^2(\kk,1)\ge\frac14\g^{-2j-2}.
\end{equation}
In order that \eqref{compare1} be consistent with \eqref{compare2}, we must have
\begin{equation}
64\,c'^2\gamma^{-2s^{(a)}-2}\gamma^{-2s^{(b)}-2}\le \frac12\g^{-2j},
\end{equation}
so we obtain:
\begin{equation}
s^{(a)}+s^{(b)}\ge j-2+\log_\gamma \big[8(1-2\g^{-2})\big].
\end{equation}
Since the sector indices are integers and $0<\log_\gamma \big[8(1-2\g^{-2})\big]<1$ for $\g\ge10$, we have
\begin{equation}
s^{(a)}+s^{(b)}\ge j-2.
\end{equation}
\end{proof}
This lemma also puts constraints on the shapes of the sectors. In order to better understand the geometry of the sectors, we introduce the following definitions (see Figure \ref{figsec} for an illustration):
\begin{definition}
A face $f^{(a)}$, $a=1,2,3$, is defined as the region close to the Fermi triangle, in which the factor $t^{(a)}$ takes values in the neighborhood of zero; a corner $I^{(ab)}$, $a,b=1,2,3$, is defined as the region close to the Fermi triangle, in which two factors $t^{(a)}$ and $t^{(b)}$ take values in a neighborhood of zero.
E.g., the face $f^{(1)}$ is the region for which $t^{(1)}$ is close to zero and the corner $I^{(23)}$ is the region for which both $t^{(2)}$ and $t^{(3)}$ are close to zero.
We introduce also the following notions for the sectors:
\begin{itemize}
\item {the sectors with indices $(s,j)$ and $(j,s)$, with $j>s$, are called the face sectors, in particular, the sectors with indices $(0,j)$ and $(j,0)$ are called the middle-face sectors.}
\item the sectors $(j,j)$ are called the corner sectors.
\item the sectors $(s,s)$, with $(j-2)/2\le s<j$, are called the diagonal sectors.
\item other sectors are called the general sectors.
\end{itemize}
\end{definition}
\begin{figure}[htp]
\centering
\includegraphics[width=.7\textwidth]{sectors.pdf}
\caption{\label{figsec}An illustration of the various sectors.
}
\end{figure}
Now we consider the possible constraints on the sector indices placed by conservation of momentum. Let $\s=(s^{(a)},s^{(b)})$, $a\neq b$, $a,b=1,2,3$, be the sector indices of the momentum $\bk$. In order that a sliced propagator $\hat C_{j,\s}(k)\neq0$, the momentum $k=(k_0,\bk)$ must satisfy the following bounds:
\begin{equation}
\frac{1}{4}\gamma^{-2j-2}\le k^2_0\le \frac12 \g^{-2j},
\end{equation}
and
\begin{eqnarray}\label{supp1}
\begin{cases}
\g^{-2}\le t^{(a)}\le 1,&{\rm for}\ s^{(a)}=0,\\
\g^{-2s^{(a)}-2}\le t^{(a)}\le 2\g^{-2s^{(a)}},& {\rm for}\ 1\le s^{(a)}\le j-1,\\
t^{(a)}\le 2\g^{-2j},&{\rm for}\ s^{(a)}=j.
\end{cases}
\end{eqnarray}
When $k^{(a)}$, $a=1,\cdots,3$, is close to $\pm1$, or equivalently when the quasi-momentum $q^{(a)}$ is close to zero, the constraints in \eqref{supp1} can be formulated as:
\begin{eqnarray}
\begin{cases}
2/{\pi \g}\le\vert q^{(a)}\vert\le 1,\quad\quad\quad\quad\quad\quad\quad\quad {\rm for}\quad s^{(a)}=0,\\
2{\g^{-s^{(a)}-1}}/{\pi }\le\vert q^{(a)}\vert\le\frac{2\sqrt2}{\pi} \g^{-s^{(a)}},\quad\quad\ {\rm for}\quad 1\le s^{(a)}\le j-1,\\
\vert q^{(a)}\vert\le\frac{2\sqrt2}{\pi} \g^{-j},\quad\quad\quad\quad\quad\quad\quad\quad\quad {\rm for}\quad s^{(a)}=j,
\end{cases}\label{supp2}
\end{eqnarray}
for $a=1,2,3$.
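The passage from \eqref{supp1} to \eqref{supp2} is a small-$q^{(a)}$ expansion: for $a=1,2$ one has, by \eqref{threefactorsq}, $t^{(a)}=\sin^2\frac{\pi q^{(a)}}{2}\simeq\frac{\pi^2}{4}(q^{(a)})^2$ to leading order, so that the condition $\g^{-2s^{(a)}-2}\le t^{(a)}\le2\g^{-2s^{(a)}}$ translates into $\frac{2}{\pi}\g^{-s^{(a)}-1}\le\vert q^{(a)}\vert\le\frac{2\sqrt2}{\pi}\g^{-s^{(a)}}$; the corrections to this approximation only affect the constants, which can be absorbed into the choice of $\gamma$. The case $a=3$ is treated in the same way, expanding the factor $\cos^2\frac{\pi(q_++q_-)}{2}$ around its zeros.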
\vskip.5cm
Let $k_i=(k_{i,0},\bk_i)$, $i=1,\cdots,4$ be the four momenta entering or exiting the vertex $v$. By the conservation of momentum and the periodic boundary condition (cf. Formula \eqref{bril}), we have:
\begin{eqnarray}\label{com1}
\sum_{i=1}^4k_{i,0}=0,\quad \sum_{i=1}^4k_{i,1}=\frac{4\pi}{3}n_1,\quad \sum_{i=1}^4k_{i,2}=\frac{4\pi}{\sqrt3}n_2,\ n_1, n_2\in\ZZZ,
\end{eqnarray}
in which the last two equations can be rewritten in the new coordinate system as:
\begin{equation}\label{com11}
\sum_{i=1}^4k_{i,+}=2n_+,\quad \sum_{i=1}^4k_{i,-}=2n_-,
\end{equation}
where $n_+=n_1+n_2$ and $n_-=-n_1+n_2$, so that $n_+$ and $n_-$ have the same parity. Adding up the two equations of \eqref{com11}, we obtain:
\begin{equation}\label{com33}
\sum_{i=1}^4k_{i}^{(3)}=2n_0,
\end{equation}
in which $n_0=n_++n_-$. Since $n_+$ and $n_-$ have the same parity, $n_0$ is always an even integer.
In terms of the quasi-momenta $q_\pm$ defined in \eqref{qa}, \eqref{com1}-\eqref{com33} can be written as
\begin{equation}\label{coq1}
\sum_{i=1}^4q_{i,+}=2m_+,\quad \sum_{i=1}^4q_{i,-}=2m_-,\quad \sum_{i=1}^4q_{i}^{(3)}=2m_0.
\end{equation}
Since each $q_{i,\pm}$ differs from $k_{i,\pm}$ by $\pm1$, and a sum of four $\pm1$'s is even, the integers $m_+$ and $m_-$ still have the same parity, and $m_0=m_++m_-$ is an even integer.
We have the following lemma concerning the sector indices:
\begin{lemma}
Let $i=1,\cdots,4$ be the labeling indices of the four momenta $k^{(a)}_i$ entering or exiting the vertex $v$ and let $s^{(a)}_{i}$ be the corresponding sector indices. Then for $\g>\frac{6\sqrt2}{\pi}$ the integers $m^{(a)}$, $a=1,\cdots,3$, are non-vanishing only when $s^{(a)}_{i}=0$ for two or more labeling indices.
\end{lemma}
\begin{proof}
We prove first the case of $a=1$, in which $m^{(1)}=m_+$, using \eqref{coq1}; the proof for the case of $m^{(2)}=m_-$ is exactly the same. Since $\vert q_{i,+}\vert\le1$, with $i=1,\cdots,4$, we have $\vert m_+\vert\le2$. When
$\vert m_+\vert=2$, we have $\vert q_{i,+}\vert=1$ for all $i$, which implies that $s_{i,+}=0$ for all $i$. Now we consider the case of $\vert m_+\vert=1$. Suppose that $s_{i,+}\neq0$ for the three labelings $i=1,2,3$ and $s_{4,+}=0$. By Formula \eqref{supp2} we have $\vert q_{i,+}|\le\frac{ 2\sqrt2}{\pi}\g^{-1}$ for $i=1,2,3$, so that
\begin{equation}\label{qplus}
q_{1,+}+q_{2,+}+q_{3,+}+q_{4,+}\le \frac{6\sqrt2}{\pi}\g^{-1}+1<2,\quad {\rm for}\quad \g> \frac{6\sqrt2}{\pi},
\end{equation}
which contradicts \eqref{coq1}. So we proved that $m_+\neq0$ only when $s_{i,+}=0$ for at least two values of the labeling index $i$. With exactly the same method we can prove the case of $m_-$. Using the fact that $q^{(3)}_i=q_{i,+}+q_{i,-}$, the constraint \eqref{qplus} is also valid for $q^{(3)}_i$, with $i=1,\cdots,4$. This concludes the proof.
\end{proof}
Now we consider the case of $m^{(a)}=0$.
\begin{lemma}\label{secmain}
Let $q_{i}^{(a)}$, $i=1,\cdots,4$, $a=1,2$ or $3$ be the quasi-momenta that we are going to slice and $j_i$, $s_{i}^{(a)}$, be the scaling indices and sector indices, respectively. Let
$s_{1}^{(a)}, s_{2}^{(a)}$ be the two indices with smallest values among all the sector indices such that $s_{1}^{(a)}\le s_{2}^{(a)}$. For $\g\ge10$, we have the following constraints concerning the possible values of $s_{1}^{(a)}$ and $s_{2}^{(a)}$: either $|s_{1}^{(a)}-s_{2}^{(a)}|\le1$, or $s_{1}^{(a)}=j_1$, in which $j_1$ is strictly smaller than $j_2, j_3$ or $j_4$. We have exactly the same results for the sector indices $s_{i}^{(b)}$, $i=1,\cdots,4$, $b=1,\cdots,3$, $b\neq a$.
\end{lemma}
\begin{proof}
First of all we consider the possible constraints for the quasi-momenta $q_{i}^{(1)}=q_{i,+}$ and $q_{i}^{(2)}=q_{i,-}$, $i=1,\cdots,4$. We can always arrange the sector indices in the order $s_{1,+}\le s_{2,+}\le s_{3,+}\le s_{4,+}$ and the scaling indices in the order $j_{1}\le j_{2}\le j_{3}\le j_{4}$. Then either $s_{1,+}<j_1$ or $s_{1,+}=j_1$. In both cases we have (cf. Formula \eqref{supp2}) $|q_{i,+}|\le\frac{2\sqrt2}{\pi}\g^{-s_{2,+}}$, for $i=2,3,4$, and $|q_{1,+}|\ge2{\g^{-s_{1,+}-1}}/{\pi }$.
In order that the equation $q_{1,+}+q_{2,+}+q_{3,+}+q_{4,+}=0$ holds, we must have
\begin{equation}
2{\g^{-s_{1,+}-1}}/{\pi }\le \frac{6\sqrt2}{\pi}\g^{-s_{2,+}},
\end{equation}
which implies
\begin{equation}
s_{2,+}\le s_{1,+}+1+\log_\g (3\sqrt2).
\end{equation}
For any $\g\ge 10$, we have $0<\log_\g (3\sqrt2)<1$. Since $s_{1,+}$ and $s_{2,+}$ are integers, we have $|s_{2,+}-s_{1,+}|\le1$.
Following the same arguments we can prove the same result for the sectors in the ``$-$'' direction.
Using the fact that $q_i^{(3)}=q_{i,+} +q_{i,-}$ and the fact that these constraints are valid for both $q_{i,+}$ and $q_{i,-}$, they are also valid for $q_i^{(3)}$. Hence we conclude the lemma.
\end{proof}
\subsection{Decaying properties of the sectorized propagators in the direct space}
In this section we study the decaying behaviors of the free propagators in the direct space.
First of all, define the dual coordinates to the momenta $k^{(a)}$, $a=1,2,3$, as:
\begin{equation}
x^{(1)}:=x_+=\pi(x_1/3+x_2/\sqrt3),\quad x^{(2)}:=x_-=\pi(-x_1/3+x_2/\sqrt3),\quad x^{(3)}=x_++x_-,
\end{equation}
so that $\bk\cdot\xx=k_+x_++k_-x_-$. Only two coordinates among the three are independent.
\begin{notation}
In the following we shall use $(k^{(a)},k^{(b)})$ and $(q^{(a)},q^{(b)})$ to indicate the momentum and quasi momentum whose corresponding factors $t^{(a)}$ and $t^{(b)}$ are the ones that we will slice. The dual variables to this pair of variables are $(x^{(a)},x^{(b)})$ and the corresponding sector indices are $s^{(a)}$ and $s^{(b)}$, respectively. The convention for the values of the indices is always $a,b=1,2,3$ and $a\neq b$.
\end{notation}
Let $(j, \sigma)=(j,s^{(a)},s^{(b)})$ be the scaling index and sector indices for a sector. It is useful to introduce a new index, $l(\s)$, which describes the distance of a sector with indices $(j,\s)$ to the Fermi surface, by $l(j,\s)=s^{(a)}+s^{(b)}-j+2$. It is also called the {\it depth} of a sector
and denoted by $l$ when the scaling indices and the sector indices are clear from the context.
\begin{lemma}\label{bdx1}
Let $[C_{j,\sigma}(x-y)]_{\a\a'}$ be the Fourier transform of $[\hat C_{j,\sigma}(k_0,k^{(a),(b)})]_{\a\a'}$, (c.f. Eq. \eqref{sec1a}). There exists a constant $K$, which depends on the model but independent of the scaling indices and the sector indices, such that:
\begin{equation}\label{decay1}
\Vert[C_{j,\sigma}(x-y)]_{\a\a'}\Vert_{L^\infty}\le K \g^{-j-l}\ e^{-c[d_{j,\s}(x,y)]^\a},
\end{equation}
where
\begin{equation}\label{dist0}
d_{j,\s}(x,y)=\g^{-j}\vert x_0-y_0\vert+\g^{-s^{(a)}}\vert x^{(a)}-y^{(a)}\vert+\g^{-s^{(b)}}\vert x^{(b)}-y^{(b)}\vert,
\end{equation}
$\a=1/h\in(0,1)$ is the index characterizing the Gevrey class of functions (cf. Definition \ref{gev}).\footnote{The interested readers who are familiar with the sectors for strictly convex Fermi surfaces (cf. \cite{DR1}, \cite{BGM2}, \cite{FKT}) are invited to compare the different decaying properties.}
\end{lemma}
\begin{proof}
By Proposition \ref{mat0} we know that each matrix element of the sliced propagator is dominated by $\tilde C(k)$ times a uniform constant, which can be ignored for the moment. Let $\tilde C_{j,\sigma}(x-y)$ be the Fourier transform of $\tilde C_{j,\sigma}(k)$, then it is enough to prove that:
\begin{equation}\label{decay2}
\Vert\tilde C_{j,\sigma}(x-y)\Vert_{L^\infty}\le K \g^{-j-l}\ e^{-c[d_{j,\s}(x,y)]^\a}.
\end{equation}
This is essentially Fourier analysis and integration by parts. Using the fact that $\tilde C_{j,\sigma}(k)=\tilde C_{j,\sigma}(k_0,q^{(a),(b)}\pm1)$, the integral $\int dk_0dq^{(a)}dq^{(b)}$ constrained to a sector is bounded by $\g^{-j}\cdot \g^{-s^{(a)}}\cdot\g^{-s^{(b)}}$, while the integrand is bounded by $\gamma^{j}$ (cf. Lemma \ref{bdsp}). So we obtain the pre-factor of \eqref{decay2}. Let $\frac{\partial}{\partial k_0}f=(1/2\pi T)[f(k_0+2\pi T)-f(k_0)]$ be the difference operator. To prove the decaying behavior of \eqref{decay2}, it is enough to prove that
\begin{eqnarray}\label{decay4}
\Vert\frac{\partial^{n_0}}{\partial k_0^{n_0}}\frac{\partial^{n^{(a)}}}{\partial (q^{(a)})^{n^{(a)}}}\frac{\partial^{n^{(b)}}}{\partial (q^{(b)})^{n^{(b)}}} \tilde C_{j,\s}(k_0, q^{(a)},q^{(b)})\Vert_{L^\infty}
\le K^n\g^{jn_0}\g^{s^{(a)}n^{(a)}}\g^{s^{(b)}n^{(b)}}(n!)^{1/\alpha},
\end{eqnarray}
where $n=n_0+n^{(a)}+n^{(b)}$ and $\Vert\cdot\Vert$ is the sup norm. By \eqref{supp2}, we can easily prove that there exists some constant $K_1$ such that $\Vert\frac{\partial}{\partial q^{(b)}}v_{s^{(b)}}[\cos^2(q^{(b)}\pm1)\pi/2]\Vert\le K_1\g^{s^{(b)}}$; when the operator $\frac{\partial}{\partial q^{(b)}}$ acts on $\chi_j[4k_0^2+e^2(q^{(a)},q^{(b)},1)]$, the resulting term is simply bounded by $\g^{2j-2s^{(a)}-s^{(b)}}$; when $\frac{\partial}{\partial q^{(b)}}$ acts on $[-2ik_0-e(q^{(a)},q^{(b)},1)+k_0^2]^{-1}$, the resulting term is bounded by $\g^{j-s^{(a)}}$. Using the constraint $s^{(a)}+s^{(b)}\ge j-2$, we find that each of the three factors is bounded by $K_2\g^{s^{(b)}}$, for some positive constant $K_2$. When $\frac{\partial}{\partial q^{(b)}}$ acts on a factor $\cos(q^{(b)}\pm1)\pi/2$ that was generated by the previous derivatives, it costs a factor $K_3\g^{s^{(b)}}$. Similarly, each operator $\frac{\partial}{\partial q^{(a)}}$ acting on the various terms of $\tilde C_{j,\s}(k_0,q^{(a)},q^{(b)})$ is bounded by $K\g^{s^{(a)}}$. Finally, each derivative $\frac{\partial}{\partial k_0}$ of the propagator gives a factor $\g^j$.
The factor $(n!)^{1/\alpha}$ comes from the derivatives acting on the compactly supported cutoff functions, which are Gevrey functions of index $h=1/\alpha$. When $j=j_{max}$, the propagator decays only in the $x_0$ direction but not in the $x^{(a)}$ or $x^{(b)}$ directions. Letting $K$ be the maximum of the constants $K_1$, $K_2$ and $K_3$, the result of this lemma follows.
\end{proof}
Then we have the following lemma:
\begin{lemma}
The $L^1$ norm of $[C_{j,\sigma}(x)]_{\a\a'}$, $x\in\Lambda_{\beta}$, $\a,\a'=1,2$ is bounded as follows:
\begin{equation}\label{tad2}
\Big\Vert\ [C_{j,\sigma}(x)]_{\a\a'}\ \Big\Vert_{L^1}\le O(1)\g^{j}.
\end{equation}
\end{lemma}
\begin{proof}
This lemma can be proved straightforwardly using Lemma \ref{bdx1}. We have
\begin{eqnarray}
&&\Big\Vert\ [C_{j,\sigma}(x)]_{\a\a'}\ \Big\Vert_{L^1}\quad\le O(1)\ \Big|\ \int_{\Lambda_{\beta, L}} dx_0 dx^{(a)} dx^{(b)} \ \tilde C_{j,\sigma}(x)\ \Big|\nonumber\\
&&\quad\le O(1)\g^{-j-l}\g^{(j+s^{(a)}+s^{(b)})}\le O(1)\g^{j}.
\end{eqnarray}
\end{proof}
Remark that, compared to the $L^\infty$ norm of a sliced propagator $[C_{j,\sigma}(x)]_{\a\a'}$,
a factor $\g^{2j+l}$ is lost when taking the $L^1$ norm. It is convenient to define a new scaling index as follows.
\begin{definition}\label{indexr}
Define the new scaling index $r=\EEE(j+l/2)$, where $\EEE(\cdot)$ denotes the integer part of its argument. We have $r\ge0$ and $r_{max}(T):=\EEE(1+\frac{3}{2}\ j_{max}(T))$.
Correspondingly, we have the following decomposition for the propagator:
\begin{equation}
\hat C(k_0,q^{(a)},q^{(b)})=\sum_{r=0}^{r_{max}(T)}\sum_\s\hat C_{r, \s}(k_0,q^{(a)},q^{(b)}).
\end{equation}
In terms of the scaling index $r$, the sector $\Delta^j_{s^{(a)},s^{(b)}}$ is also denoted by $\Delta^r_{s^{(a)},s^{(b)}}$.
\end{definition}
Since $|x-\EEE(x)|\le1$, $\forall x\in\RRR$, we shall simply forget the integer part $\EEE(\cdot)$ in the future sections. The four indices $j$, $s^{(a)}$, $s^{(b)}$ and $r$ are related by the relation $r=j+\frac l2=\frac{j+s^{(a)}+s^{(b)}}{2}+1.$
With the new index $r$, the constraints for the sector indices
$s^{(a)}+s^{(b)}\ge j-2$, $0\le s^{(a)},s^{(b)}\le j$, $0\le j\le j_{max}$
become $s^{(a)}+s^{(b)}\ge r-2$, $0\le s^{(a),(b)}\le r$ and $0\le r\le r_{max}=\frac32 j_{max}+1$, and
the depth index can be expressed as $l=2(s^{(a)}+s^{(b)}-r+2)$.
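As an illustration of these index relations, a corner sector at scale $j$ has $s^{(a)}=s^{(b)}=j$, hence depth $l=j+2$ and $r=\frac{3j}{2}+1$, which saturates $r_{max}$ at $j=j_{max}$; a middle-face sector $(0,j)$ has depth $l=2$ and $r=j+1$.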
\section{The perturbation expansion}
\subsection{The BKAR jungle formula and the power-counting theorem}
In this section we study the perturbation expansions for the Schwinger's functions. It is most convenient to label the perturbation terms by graphs \cite{RW1}. Before proceeding, let us recall some notations in graph theory.
\begin{definition}(cf., e.g., \cite{Tutte})\label{defgraph}
Let $n\ge 1$ be an integer, $I_n=\{1,\cdots,n\}$, $\cP_n=\{\ell=(i,j), i, j\in I_n, i\neq j\}$ be the set of unordered pairs in $I_n$. A graph $G=\{V_G, E_G\}$ of order $n$ is defined as a set of
vertices $V_G=I_n$ and of edges $E_G\subset\cP_n$, whose cardinalities are denoted by $|V_G|$ and $|E_G|$, respectively. A graph $G'=\{V_{G'}, E_{G'}\}\subset G$ is called a subgraph of $G$ if $V_{G'}\subset V_G$ and $E_{G'}\subset E_G$. It is called a connected component if $G'$ is connected, i.e., any pair of vertices of $G'$ is connected by a non-empty path of edges in $E_{G'}$. A half-edge, also called an external field, denoted by $(i,\cdot)$, is an object such that each pair of them forms an edge: $[(i,\cdot), (j, \cdot)]=(i,j)$, for $i,j\in I_n$. A graph $G$ with half-edges is also called a decorated graph or an extended graph.
\end{definition}
\begin{definition}
A forest ${\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}$ is a graph which contains no loops, i.e. no subset $L=\{(i_1,i_2), (i_2, i_3), \cdots, (i_k, i_1)\}\subset E_{{\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}}$ with $k\ge3$. An edge in a forest is also called a tree line and an edge in $L$ is called a loop line. A maximal connected component of ${\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}$ is called a tree, denoted by $\cT$. A tree with a single vertex is allowed. $\cT$ is called a spanning tree if it is the only connected component of ${\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}$.
\end{definition}
Recall that a general $2p$-point Schwinger function at inverse temperature $\b=1/T$ is defined as:
\begin{eqnarray}
&&S_{2p,\b}(\lambda; x_1,\t_1,\a_1;\cdots,x_{2p},\t_{2p},\a_{2p})\\
&&=\frac{1}{Z}\int d\mu_C(\bar\psi,\psi)\Big[ \prod_{i=1}^p\psi^{\e_i}_{\tau_i, \a_i}(x_i)\ \Big]\ \Big[
\prod_{i=p+1}^{2p}\psi^{\e_i}_{\tau_i,\a_i}(x_i)\Big]e^{- {\VV}(\bar\psi,\psi)},\nonumber
\end{eqnarray}
where
\begin{equation}
Z=\int d\mu_C(\bar\psi,\psi)e^{-\VV(\psi,\bar\psi)}
\end{equation}
is the partition function.
\begin{remark}\label{rmkindex}
Remark that, since each matrix element of the propagator is bounded from below by a strictly positive constant, we can simply replace each matrix element $\hat C_{\a\a'}$ by $\tilde C$ times a constant. In order to estimate the upper bound for $S_{2p,\b}$, $p\ge1$ and the self-energy function $\Sigma(\lambda)$, it is enough to estimate the upper bound of any matrix element $[S_{2p,\b}]_{\a\a'}$ and $[\Sigma_2(\lambda)]_{\a\a'}$, say $[S_{2p,\b}]_{11}$ and $[\Sigma_2(\lambda)]_{11}$. In order to simplify the notations, in the rest of this paper we shall forget the matrix indices of the Schwinger's functions and the self-energy functions by writing $[S_{2p,\b}]_{11}$ as $S_{2p,\b}$ and $[\Sigma_2(\lambda)]_{11}$ as $\Sigma_2(\lambda)$.
\end{remark}
Let $\{\xi^x_i=(x_{i},\t_{i},\a_{i})\}$, $\{\xi^y_i=(y_{i},\t_{i},\a_{i})\}$ and $\{\xi^z_i=(z_{i},\t_{i},\a_{i})\}$ be the sets of indices associated with the Grassmann variables $\{\psi^\e_{\tau_i,\a_i}(x_i)\}$, $\{\psi^\e_{\tau_i,\a_i}(y_i)\}$ and $\{\psi^\e_{\tau_i,\a_i}(z_i)\}$, respectively. Expanding the exponential into a power series and performing the Grassmann integrals, we have
\begin{eqnarray}\label{part2p}
&&S_{2p}(\lambda; x_1,\t_1,\a_1;\cdots,x_{2p},\t_{2p},\a_{2p})\\
&=&\sum_{N=1}^{\infty}\sum_{n+n_1+n_2=N}\frac{\lambda^n}{n!}\frac{(\delta\mu(\lambda))^{n_1}}{n_1!}\frac{1}{n_2!}\int_{{(\L_{\beta,L})}^{n+n_1}}d^3y_1\cdots d^3y_{n+n_1}\nonumber\\
&&\int_{{(\L_{\beta,L})}^{2n_2}} d^3z_{1}\cdots d^3z_{2n_2}\prod_{i,j=1}^{2n_2}\nu(\zz_i-\zz_j)\delta(z_{i,0}- z_{j,0})\nonumber\\
&&\sum_{\underline{\t},\underline{\a}}\Bigg\{\begin{matrix}&\xi^x_1,&\cdots, \xi^x_p,&\xi^z_1,&\cdots, \xi^z_{n_2},&\xi^y_1,\cdots,\xi^y_n\\
&\xi^x_{p+1},&\cdots, \xi^x_{2p},&\xi^z_{n_2+1},&\cdots,\xi^z_{2n_2},&\xi^y_1,\cdots, \xi^y_n
\end{matrix}\Bigg\},\nonumber
\end{eqnarray}
where $N=n+n_1+n_2$ is the total number of vertices, in which $n$ is the number of interaction vertices, to each of which is associated a coupling constant $\lambda$, $n_1$ is the number of two-point vertices, each of which is associated with a bare chemical potential counter-term $\delta\mu(\lambda)$, and $n_2$ is the number of non-local counter-terms $\nu(\lambda)$. The two-point vertices are also called {\it the counter-term vertices}. We have used Cayley's notation (cf. \cite{Rivbook}) for determinants:
\begin{eqnarray}
\Bigg\{\begin{matrix}\xi^x_i\\
\xi^y_j \end{matrix}\Bigg\}=
\Bigg\{\begin{matrix}
x_{i,\t_i,\a_i}\\ y_{j,\t_j,\a_j}
\end{matrix}\Bigg\}=\det\Big[\ \delta_{\t_i\t_j}[C_{j,\t}(x_i-y_{j})]_{\a_i,\a_j}\ \Big].
\end{eqnarray}
Remark that the perturbation series would be divergent if we fully expanded the determinant \cite{RW1}. Instead, we only partially expand the determinant, such that the expanded terms are labeled by forest graphs, which do not proliferate too fast \cite{RW1}. In order to make the partial expansions consistent with the multi-slice analysis, to each tree line in the forest, which corresponds to a sliced free propagator, we associate a scaling index $r=j+l/2$, and we arrange the set of tree lines according to the increasing order of $r$. The set of forests with this labeling is called a jungle. The canonical way of generating the jungle graphs in perturbation theory is the BKAR jungle formula (see \cite{AR}, Theorem $IV.3$):
\begin{theorem}[The BKAR jungle Formula.]\label{ar1}
Let $n\ge1$ be an integer, $I_n=\{1,\cdots, n\}$ be an index set and $\cP_n=\{\ell=(i,j), i, j\in I_n, i\neq j\}$. Let ${\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}$ be a forest of order $n$ and $\cS$ be the set of smooth functions from $\RRR^{\cP_n}$ to an arbitrary Banach space. Let ${\bf x}=(x_\ell)_{\ell\in\cP_n}$ be an arbitrary element of $\RRR^{\cP_n}$ and ${\bf 1}\in \RRR^{\cP_n}$ be the vector with every entry equals $1$. Then for any $f\in \cS$, we have:
\begin{equation}\label{BKAR}
f({\bf 1})=\sum_{\substack{\cJ=({\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}_0\subset{\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}_1\subset\cdots\subset{\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}_{m})\\ m{\rm -jungle}}}\Big(\int_0^1\prod_{\ell\in{\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}_m} dw_\ell\Big)\Bigg(\prod_{k=1}^m\Big(\prod_{\ell\in{\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}_k\setminus{\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}_{k-1}}\frac{\partial}{\partial {x_\ell}}\ \Big)\Bigg)\ f[X^{\cal F}} \def\cT{{\cal T}}\def\cS{{\cal S}}\def\cQ{{\cal Q}(w_\ell)],
\end{equation}
where the sum runs over all jungles $\cJ=({{\cal F}}_0\subset{{\cal F}}_1\subset\cdots\subset{{\cal F}}_{r_{max}}={{\cal F}})$, i.e. layered objects of forests $\{{\cal F}_0,\cdots,{\cal F}_{r_{max}}\}$; the underlying forests range over all forests with $n$ vertices, including the empty one, which has no edges. The last forest ${\cal F}={\cal F}_{r_{max}}$ is a spanning forest of the fully expanded graph $G$ containing $n$ vertices and $2p$ external edges. ${\cal F}_{0}:={\bf V_n}$ is the completely disconnected forest of $n$ connected components, each of which corresponds to an interaction vertex $\VV(\psi,\lambda)$ (cf. Formula \eqref{potx}). $X^{\cal F}(w_\ell)$ is a vector ${(x_\ell)}_{\ell\in \cP_n}$ with elements $x_\ell= x_{ij}^{\cal F}(w_\ell)$, which are defined as follows:
\begin{itemize}
\item $x_{ij}^{\cal F}=1$ if $i=j$,
\item $x_{ij}^{\cal F}=0$ if $i$ and $j$ are not connected by ${\cal F}_k$,
\item $x_{ij}^{\cal F}=\inf_{\ell\in P^{{\cal F}_k}_{ij}}w_\ell$, if $i$ and $j$ are connected by the forest ${\cal F}_k$ but not by ${\cal F}_{k-1}$, where $P^{{\cal F}_k}_{ij}$ is the unique path in the forest that connects $i$ and $j$,
\item $x_{ij}^{\cal F}=1$ if $i$ and $j$ are connected by ${\cal F}_{k-1}$.
\end{itemize}
\end{theorem}
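As a minimal illustration of \eqref{BKAR}, take $n=2$ and a single level ($m=1$): the set $\cP_2$ contains only the pair $\ell=(1,2)$, the possible forests are the empty one and the single edge $\{\ell\}$, and the formula reduces to the fundamental theorem of calculus:
\begin{equation}
f({\bf 1})=f({\bf 0})+\int_0^1 dw_\ell\ \frac{\partial f}{\partial x_\ell}(w_\ell).
\end{equation}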
We obtain:
\begin{eqnarray}\label{rexp1}
S_{2p,\b}(\lambda)&=&\sum_{N=1}^{\infty} S_{2p,N,\b},\\
S_{2p,N,\b}&=&\sum_{n+n_1+n_2=N}\frac{1}{n!n_1!n_2!}\sum_{\{\underline{\tau}\}}\sum_{\cJ}\e(\cJ)\prod_v\int_{\Lambda_{L,\beta}} d^3x_v\,\lambda^n (\delta\mu(\lambda))^{n_1} (\nu)^{n_2}\nonumber\\
&&\quad\quad\cdot\prod_{\ell\in{\cal F}}\int dw_\ell\
C_{r,\sigma_\ell}(x_\ell,x'_\ell)\,{\det}[ C_{r,\sigma}(w)\ ]_{left}\ .\label{rexp11}
\end{eqnarray}
where the sum runs over the jungles $\cJ=({{\cal F}}_0\subset{{\cal F}}_1\subset\cdots\subset{{\cal F}}_{r_{max}})$, and $\e(\cJ)$ is a product of factors $\pm1$ along the jungle. The term
$\det[C_{r,\sigma}(w)]_{left}$ is the determinant of the remaining $2(n+1)\times 2(n+1)$-dimensional square matrix, which has the same form as the determinant in \eqref{part2p} but whose entries are multiplied by the interpolation parameters $\{w\}$; it is therefore still a Gram matrix. This matrix contains all the unexpanded fields and anti-fields, i.e. those that do not form tree propagators. Let $r_f$ be the index of a field or anti-field $f$; then the $(f,g)$ entry of the determinant reads:
\begin{eqnarray}\label{intc1}
C_{r}(w)_{f,g}=\delta_{\t(f)\t'(g)}\sum_{v=1}^n\sum_{v'=1}^n\chi(f,v)\chi(g,v') x^{{\cal F},r_f}_{v,v'}(\{w\})C_{r,\t(f),\s(f)}(x_v,x_{v'}),
\end{eqnarray}
where $[x^{{\cal F},r_f}_{v,v'}(\{w\})]$ is an $n\times n$ positive matrix, whose elements are defined in the same way as in \eqref{BKAR}:
\begin{itemize}
\item If the vertices $v$ and $v'$ are not connected by ${\cal F}_r$, then $x^{{\cal F},r_f}_{v,v'}(\{w\})=0$,
\item If the vertices $v$ and $v'$ are connected by ${\cal F}_{r-1}$, then $x^{{\cal F},r_f}_{v,v'}(\{w\})=1$,
\item If the vertices $v$ and $v'$ are connected by ${\cal F}_r$ but not by ${\cal F}_{r-1}$, then $x^{{\cal F},r_f}_{v,v'}(\{w\})$ is equal to the infimum of the parameters $w_\ell$, $\ell\in{\cal F}_r\setminus{\cal F}_{r-1}$, lying in the unique path connecting the two vertices. The natural conventions are that ${\cal F}_{-1}=\emptyset$ and that $x^{{\cal F},r_f}_{v,v}(\{w\})=1$.
\end{itemize}
Taking the logarithm on $S_{2p,\b}$, we obtain the {\it connected} $2p$-point Schwinger function $S^c_{2p,\b}$:
\begin{eqnarray}\label{rexp2}
S^c_{2p,\b}&=&\sum_{N=1}^{\infty}S^c_{2p,N,\b},\\
S^c_{2p,N,\b}&=&\sum_{n+n_1+n_2=N}\frac{1}{n!n_1!n_2!}\sum_{\{\underline{\tau}\}}\sum_{\cJ'}\e(\cJ')\prod_v\int_{\L_{\b,L}} d^3x_v\,\lambda^n (\delta\mu)^{n_1}\,\nu^{n_2}\nonumber\\
&&\quad\cdot\prod_{\ell\in\cT}\int dw_\ell C_{r,\sigma_\ell}(x_\ell,x'_\ell){\det}[ C_{r,\sigma}(w)]_{left}\ ,\label{rexp3}
\end{eqnarray}
in which $S^c_{2p,\b}$ has almost the same structure as $S_{2p,\b}$ in \eqref{rexp11}, except that the summation over jungles is restricted to those $\cJ'=({{\cal F}}_0\subset{{\cal F}}_1\subset\cdots\subset{{\cal F}}_{r_{max}}={\cT})$ in which the final forest is a spanning tree $\cT$ connecting all the $n$ vertices.
Without loss of generality, suppose that a forest ${\cal F}_r$ contains $c(r)\ge1$ trees, denoted by $\cT_r^k$, $k=1,\cdots, c(r)$, and denote a line of $\cT_r^k$ by $\ell(T)$. To each $\cT_r^k$ we associate an extended graph (cf. Definition \ref{defgraph}) $G^k_r$, which contains $\cT_r^k$ as a spanning tree together with a set of half-edges, $e(G^k_r)$, whose cardinality, denoted by $|e(G^k_r)|$, is an even number. By construction, the scaling index $r_f$ of any external field is greater than $r_{\ell(T)}$. Thus the graphical structure of a component $G^k_r$ is highly nontrivial: besides the tree structure $\cT_r^k$, it also contains a set of internal fields (which would form loop lines if they were fully expanded) that remain inside a determinant. Each connected component $G^k_r$ is contained in a unique connected component with a larger scaling index. The inclusion relations of the graphs $G^k_r$, $r=0,\cdots,r_{max}$, have a tree structure, called the Gallavotti-Nicol\`o tree.
\begin{definition} [cf. \cite{GN}]
A Gallavotti-Nicol\`o tree (GN tree for short) $\cG$ is an abstract tree graph whose vertices, also called the nodes, correspond to the extended graphs $G^k_r$, $r\in[0,r_{max}]$, $k=1,2,\cdots, c(r)$, and whose edges are the inclusion relations between these nodes. The node $G_{r_{max}}$, which corresponds to the full Feynman graph $G$, is called the root of $\cG$; obviously each GN tree has a unique root. The bare nodes of the GN tree, which form the set ${\bf V_N}={\cal F}_0$, are also called the leaves. The cardinality $|{\bf V_N}|$ of the set of bare nodes is called the order of $\cG$. A GN tree of order $N$ is also denoted by $\cG_N$.
\end{definition}
An illustration of a GN tree with $16$ nodes and $8$ bare nodes is shown in Figure \ref{gn1}, and Figure \ref{gn2} illustrates the grouping of the subgraphs of a Feynman graph into the corresponding GN tree.
\begin{figure}[htp]
\centering
\includegraphics[width=0.53\textwidth]{gnt1.pdf}
\caption{\label{gn1}A Gallavotti-Nicol\`o tree. The round dots
are the nodes and the bare vertices, the big square is the root, and the thin lines are the external fields. The dashed lines indicate the inclusion relations.
}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=0.45\textwidth]{gnt2.pdf}
\caption{\label{gn2}An illustration of the grouping of a quadruped Feynman graph into the corresponding GN tree.
}
\end{figure}
The readers who are not familiar with the GN trees are invited to consult \cite{BG1} or \cite{Rivbook} for more details. Now we consider the amplitudes of the connected Schwinger functions.
\begin{remark}\label{ctbd}
Remark that, since the counter-terms are quadratic in the external fields, they only
contribute the factors $|\delta\mu|^{n_1}$ and $|\nu_{\a\a'}|^{n_2}$ (even though the latter term is non-local) to the amplitudes of the correlation functions. As will be proved in Theorems \ref{flowmu} and \ref{mainc}, these counter-terms are bounded by absolute positive constants, hence are not essential for the power counting. So we will drop these counter-terms in this section, for simplicity, and reinstate them in later sections when we study the renormalizations.
\end{remark}
The $2p$-point Schwinger function can be written as:
\begin{eqnarray}\label{s2pa}
S^c_{2p,\b} &=&\sum_{n}S^c_{2p,n,\b}\lambda^n,\\
S^c_{2p,n,\b}&=&{ \frac{1}{n!}}{\frac{1}{n_1!}}{\frac{1}{n_2!}}\sum_{ \{\underline{\t}\},\cG, \cT} \
\sum_{\cJ',\{\sigma\}}
\epsilon (\cJ') \prod_{j=1}^{n} \int d^3x_{j} \delta(x_1)
\prod_{\ell\in \cT} \int_{0}^{1} dw_{\ell}
C_{\t_{\ell},\si_{\ell}}
(x_{\ell}, \bar x_{\ell}) \nonumber\\
&&\quad \prod_{i=1}^{n}
\chi_{i}(\si)
[\det C_{j}(w)]_{{ left}} \ .\label{form}
\end{eqnarray}
Notice that each matrix element \eqref{intc1} of $\det[C_{r,\sigma}(w)]_{left}$ can be written as an inner product of two vectors:
\begin{equation}
C_{r}(w)_{f,g;\tau,\tau'}=( e_\tau\otimes A_f(x_v,\cdot),\ e_{\tau'}\otimes B_g(x_{v'},\cdot))\ ,
\end{equation}
in which the unit vectors $e_{\uparrow}=(1,0)$, $e_{\downarrow}=(0,1)$ are the spin variables and
\begin{eqnarray}
A_f&=&\frac{1}{\beta|\Lambda|}\sum_{k\in{\cal D}_{\beta, \L}}\sum_{v=1}^n\chi(f,v)[x^{{\cal F},r_f}_{v,v'}(\{w\})]^{1/2}e^{-ik\cdot x_v}\cdot\\
&&\quad\quad\quad\cdot\Big[\ \chi_j[k_0^2+e^2(\kk,1)]\cdot
v_{s^{(a)}}[\cos^2(k^{(a)}/2)]\cdot v_{s^{(b)}}[\cos^2(k^{(b)}/2)]\ \Big]^{1/2},\nonumber
\end{eqnarray}
\begin{eqnarray}
B_g&=&\frac{1}{\beta|\Lambda|}\sum_{k\in{\cal D}_{\beta, \L}}\sum_{v'=1}^n\chi(g,v')[x^{{\cal F},r_f}_{v,v'}(\{w\})]^{1/2}e^{-ik\cdot x_{v'}}\hat C(k)_{\tau_\ell,\sigma_\ell}\\
&&\quad\quad\quad\cdot\Big[\ \chi_j[k_0^2+e^2(\kk,1)]\cdot
v_{s^{(a)}}[\cos^2(k^{(a)}/2)]\cdot v_{s^{(b)}}[\cos^2(k^{(b)}/2)]\ \Big]^{1/2}\nonumber
\end{eqnarray}
are vectors in the Hilbert space such that
\begin{eqnarray}
||A_f||_{L^\infty}\le O(1)\g^{-j_f-s_{f,+}-s_{f,-}},\quad ||B_f||_{L^\infty}\le O(1)\g^{-s_{f,+}-s_{f,-}}.
\end{eqnarray}
By the Gram-Hadamard inequality \cite{Le, GK} and using the constraint $l_f=s_{f}^{(a)}+s_{f}^{(b)}-j_f+2$, $a\neq b$,
we have
\begin{equation}
|\det(A_f, B_g)|\le\prod_{f}||A_f||_{L^\infty}\cdot ||B_f||_{L^\infty}\le O(1)\g^{-j_f-l_f/2},
\end{equation}
and the determinant is bounded by
\begin{equation}
\big|{\det}\nolimits_{{ left}}\big|\le K^n\prod_{f\ left}\g^{-(j_f+l_f)/2}= K^n\prod_{f\ left}\g^{-r_f/2-l_f/4},
\end{equation}
for some positive constant $K$ that is independent of $r$ and $l$; here we have used $r_f=j_f+l_f/2$. We can express the $\delta$ functions in \eqref{form}, which encode the constraints from the conservation of momentum, as Fourier transforms of oscillating factors. If we then naively bounded the delta functions with the Gram-Hadamard inequality, the Fourier oscillation factors would simply be bounded by one and the constraints from the conservation of momentum would be lost. In order to solve this problem, we introduce an indicator function $\chi_{i}(\{\si\})= \chi (\sigma^1_i, \sigma^2_i, \sigma^3_i, \sigma^4_i)$ at each vertex of the graph, defined as follows: $\chi_{i}(\{\si\})$ equals $1$ if the sector indices $\{\si\}$ satisfy the constraints in Lemma \ref{secmain}, and equals $0$ otherwise. The constraints placed on the sector indices by the GN tree structure also need to be taken into account. Integrating over all position variables except the fixed one,
$x_1$, we obtain the following bound:
\begin{equation} | S^c_{2p,n,\b} | \le {\frac{K^n}{n!}}
\sum_{\{\underline\tau\}, \cG, \cT}\ \sum'_{\{\si \}}
\prod_{i=1}^{n} \chi_{i}(\si)
\prod_{\ell \in \cT} \g^{2r_{\ell}}
\prod_{f} \g^{-r_f/2-l_f/4}, \label{absol1}
\end{equation}
in which the last product runs over all the $4n$ fields
and anti-fields, and the summation $\sum'$ means that we have taken into account the constraints on the sector indices among the different connected components in the GN tree. We have the following lemma concerning the last two terms in \eqref{absol1}:
\begin{lemma}\label{indmain}
Let $c(r)$ be the number of connected components at level $r$ in the GN tree, we have the following inductive formulas:
\begin{eqnarray} &&\prod_{f} \g^{-r_{f}/2}= \prod_{r=0}^{r_{max}}\ \prod_{k=1}^{c(r)} \g^{-|e(G_r^k)|/2}\ ,
\label{induc1}\\
&&\prod_{\ell \in \cT} \g^{2r_{\ell}}=\g^{-2r_{max}-2}
\prod_{r=0}^{r_{max}}\ \prod_{k=1}^{c(r)} \g^{2}\ .
\label{induc2}
\end{eqnarray}
\end{lemma}
\begin{proof}
Both formulas can be proved by induction. We prove \eqref{induc1} first. Consider a graph $G$ with $n$ vertices. Let $N_f$ be the set of fields (half-edges) in $G$ and $n(f,r)$ be the number of fields such that $r_f=r$. Then the l.h.s. of \eqref{induc1} is equal to
$\g^{-\frac12\sum_{f\in N_f} r_{f}}=\g^{-\frac12\sum_{r=0}^{r_{max}}\ r\cdot n(f,r) }$.
Let $e(G_r):=\cup_{k=1}^{c(r)}e(G_r^k)$ be the set of external fields of the connected components of scaling index $r$ in the GN tree; obviously $|e(G_r)|=\sum_{k=1}^{c(r)}\ |e(G_r^k)|$. Then
the r.h.s. of \eqref{induc1} is equal to $\g^{-\frac12\sum_{r=0}^{r_{max}}\ |e(G_r)|}$.
Fix an external field $f_e$ of a connected component in $\cG$ such that $r_{f_e}=r\ge1$. Then $f_e$ is an external field of the components at levels $0,\cdots,r-1$, but an internal field at level $r$, hence $f_e\in e(G_0)\cap\cdots\cap e(G_{r-1})$, so that $f_e$ is counted exactly $r$ times in $\sum_{r=0}^{r_{max}}|e(G_r)|$. Therefore
$ \sum_{r=0}^{r_{max}} |e(G_r)|= \sum_{r=0}^{r_{max}}\ r\cdot n(f,r)$, and
\begin{equation}
\prod_{r=0}^{r_{max}}\ \prod_{k=1}^{c(r)} \g^{-|e(G_r^k)|/2}=\g^{-\sum_{r=0}^{r_{max}}\ r\cdot n(f,r)/2}=\prod_{f} \g^{-r_{f}/2}\ .
\end{equation}
Now we consider \eqref{induc2}. Suppose that the spanning tree $\cT$ has $k$ edges $\ell_1, \ell_2,\cdots, \ell_k$, each assigned a scaling index $r_{\ell_i}$, $i=1,\cdots,k$; we can always order the tree lines so that $r_{\ell_1}\le r_{\ell_2}\le\cdots\le r_{\ell_k}\le r_{max}$. Let $n(\ell,r)$ be the number of tree lines in $\cT$ whose scaling index equals $r$; then the l.h.s. of \eqref{induc2} is equal to
$\g^{2\sum_{\ell\in\cT}r_\ell}=\g^{2\sum_{r=0}^{r_{max}}\ r\cdot n(\ell,r)}$, and the r.h.s. of \eqref{induc2} is equal to
\begin{eqnarray}\label{induc2r}
&&\g^{-2r_{max}-2}\
\prod_{r=0}^{r_{max}}\ \g^{2c(r)}=
\g^{2 \sum_{r=0}^{r_{max}}\ \big[c(r)-1\big]}\ .
\end{eqnarray}
Since $c(r_{max})=1$ and each tree line of scaling index $i$ merges two connected components, so that $n(\ell,i)=c(i-1)-c(i)$, we have
\begin{equation}
c(r)-1=n(\ell,r+1)+n(\ell,r+2)+\cdots +n(\ell,r_{max}),
\end{equation}
and
\begin{equation}
\sum_{r=0}^{r_{max}}\ \big[c(r)-1\big]=1\cdot n(\ell,1)+2\cdot n(\ell,2)+\cdots +r_{max}\cdot n(\ell,r_{max}),
\end{equation}
which means that
\begin{equation}
\g^{2 \sum_{r=0}^{r_{max}}\ \big[c(r)-1\big]}=\g^{2 \sum_{r=0}^{r_{max}}\ r\cdot n(\ell,r)}=\prod_{\ell\in\cT}\ \g^{2r_\ell}\ .
\end{equation}
This concludes the proof of the lemma.
\end{proof}
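As a small check of \eqref{induc2}, consider $n=3$ vertices and a spanning tree with two lines of scaling indices $r_{\ell_1}=1$ and $r_{\ell_2}=2=r_{max}$, so that $c(0)=3$, $c(1)=2$ and $c(2)=1$; both sides then agree:
\begin{equation}
\prod_{\ell\in\cT}\g^{2r_\ell}=\g^{2(1+2)}=\g^{6},\qquad
\g^{-2r_{max}-2}\prod_{r=0}^{2}\g^{2c(r)}=\g^{-6}\,\g^{2(3+2+1)}=\g^{6}.
\end{equation}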
\begin{theorem}[The power counting theorem]\label{tpc}
There exists a positive constant $K$, which may depend on the model but is independent of the scaling indices, such that the connected $2p$-point Schwinger functions, with $p\ge0$, satisfy the following bound:
\begin{equation}\label{pc1} | S^c_{2p,n,\b} | \le {\frac{K^n}{n!}}
\sum_{\{\underline\tau\}, \cG, \cT}\ \sum'_{\{\si \}}\prod_{i=1}^{n}\ \big[\chi_{i}(\si)\g^{-(l_i^1 + l_i^2 + l_i^3 + l_i^4)/4}\big]\
\prod_{r=0}^{r_{max}}\prod_{k=1}^{c(r)} \g^{2-|e(G_r^k)|/2}\ .
\end{equation}
So the two-point functions are relevant, the four-point functions are marginal, and the Schwinger functions with $2p\ge6$ external legs are irrelevant.
\end{theorem}
\begin{proof}
Writing the product $\prod_f\g^{-l_f/4}$ in Formula \eqref{absol1} as $\prod_{i=1}^{n}\g^{-(l_i^1 + l_i^2 + l_i^3 + l_i^4)/4}$, taking into account the conservation of momentum at each vertex $i$ and using Lemma \ref{indmain}, the result follows.
\end{proof}
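The classification stated in the theorem can be read off directly from the factor $\g^{2-|e(G_r^k)|/2}$ in \eqref{pc1}:
\begin{equation}
|e(G_r^k)|=2:\ \g^{+1}\ {\rm (relevant)};\qquad
|e(G_r^k)|=4:\ \g^{0}\ {\rm (marginal)};\qquad
|e(G_r^k)|\ge6:\ \g^{2-|e(G_r^k)|/2}\le\g^{-1}\ {\rm (irrelevant)}.
\end{equation}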
Next we need to consider the summation over the sector indices, which can easily become unbounded if we do not take into account the constraints placed by the conservation of momentum. This is the so-called sector counting problem, which will be studied in the next subsection.
\subsection{The sector counting lemma}
\begin{lemma}[Sector counting lemma for a single bare vertex]\label{sec1}
Let the four half-fields attached to a vertex be $f_1,\cdots, f_4$, with scaling indices $j_1,\cdots, j_4$ and sector indices $\si_1=(s_{1}^{(a)}, s_{1}^{(b)}),\cdots,\si_4=(s_{4}^{(a)}, s_{4}^{(b)})$. Let the generalized scaling indices $r_1,\cdots, r_4$ associated to the four fields be such that
$r_{f_1}=r_{f_2}=r_{f_3}=r$ and $r_{f_4}>r$. Then there exists a positive constant $K$, which is independent of the scaling indices and the sector indices, such that for fixed $\si_4$, we have
\begin{equation}\label{ss1}
\sum_{\si_1, \si_2, \si_3} \chi (\si_1, \si_2, \si_3, \si_4)
\gamma^{-(l_1+l_2+l_3 )/4} \le K\cdot r\ .\end{equation}
\end{lemma}
\begin{proof}
This lemma has been proved in \cite{Riv} in a similar setting. Here we present a simpler proof, for the reader's convenience. Let the four fields hooked to a vertex $i$ in a node $G_r^k$ be
$f_1,\cdots, f_4$, whose sector indices are $\s_1,\cdots,\s_4$ and depth indices are $l_i^1,\cdots, l_i^4$. Remark that, among the four fields, we can always choose one, say, $f_4$, as the {\it root} field. Then the index $r_4$ is greater than all the other indices $r_1=r_2=r_3=r$. We can always organize the sector indices $\s_1=(s_{1}^{(a)},s_{1}^{(b)}),\cdots, \s_3=(s_{3}^{(a)},s_{3}^{(b)})$ such that $s_{1}^{(a)}\le s_{2}^{(a)}\le s_{3}^{(a)}$ and $s_{1}^{(b)}\le s_{2}^{(b)}\le s_{3}^{(b)}$. Then, by Lemma \ref{secmain}, either $\s_1$ collapses with $\s_2$, or one has $s_{1}^{(a)}=s_{1}^{(b)}=j_1$, such that $j_1<\min\{j_2,\cdots,j_4\}$.
So we only need to consider the following possibilities:
\begin{itemize}
\item if $\sigma_1\simeq\sigma_2$, we have $s_{2}^{(a)}= s_{1}^{(a)}\pm1$ and $s_{2}^{(b)}= s_{1}^{(b)}\pm1$. The depth indices are arranged as $l_1\le l_2\le l_3$. Then the l.h.s. of \eqref{ss1} is bounded by
\begin{equation}
\sum_{\si_1, \si_3}
\g^{-(2l_1+l_3 )/4}= \sum_{\si_1}\g^{-l_1/2}\sum_{\si_3}
\g^{-l_3 /4}\ ,
\end{equation}
Using the fact that $r_k=j_k+l_k/2$ and $l_k=s_{k}^{(a)}+s_{k}^{(b)}-j_k+2$, we have
$l_k=2(s_{k}^{(a)}+s_{k}^{(b)}-r_k+2)$, for $k=1,2,3$.
For fixed $s_1=(s_{1}^{(a)}, s_{1}^{(b)})$, the sum over $\s_3=(s_{3}^{(a)}, s_{3}^{(b)})$ can be easily bounded:
\begin{eqnarray}
&&\sum_{\si_3=(s_{3}^{(a)},s_{3}^{(b)})}\g^{-l_3 /4} = \sum_{\si_3=(s_{3}^{(a)},s_{3}^{(b)})} \g^{-(l_3-l_1) /4}
\g^{-l_1/4}\\
&&\le \sum_{s_{3}^{(a)}\ge s_{1}^{(a)}}\g^{-(s_{3}^{(a)}-s_{1}^{(a)})/2}\sum_{s_{3}^{(b)}\ge s_{1}^{(b)}}\g^{-(s_{3}^{(b)}-s_{1}^{(b)}) /2}\g^{-l_1/4} \le K_1\cdot \g^{-l_1/4}\ ,\nonumber
\end{eqnarray}
for some positive constant $K_1$ which is independent of the scaling indices.
Now we consider the summation over $\s_1$. Using the constraint $s_{1}^{(a)}+s_{1}^{(b)}\ge r-2$ and taking into account the factor $\g^{-l_1/4}$ from the above formula, we have:
\begin{eqnarray}
\sum_{\si_1}\g^{-l_1/2}\cdot \g^{-l_1/4}&\le&\g^{3r/2} \sum_{s_{1}^{(a)}=0}^{r}\g^{-3s_{1}^{(a)}/2}\sum_{s_{1}^{(b)}=r-2-s_{1}^{(a)}}^{r}\g^{-3s_{1}^{(b)}/2}\\
&&\le \g^{3r/2}\sum_{s_{1}^{(a)}=0}^{r}\g^{-3s_{1}^{(a)}/2}\g^{-3r/2+3s_{1}^{(a)}/2}
\le K_2\cdot r\ ,\nonumber
\end{eqnarray}
in which $K_2$ is another positive constant. Choosing $K=K_1\cdot K_2$ proves the lemma in this case.
\item if $j_1=s_{1}^{(a)}=s_{1}^{(b)}$, which is the smallest among the four scaling indices, then $l_1=j_1+2$.
The sum over $\s_1$ is then bounded by $\g^{-j_1/4}$, the sum over $\s_3\ge \s_2$ gives a constant, and finally the sum over $\s_2$ gives the factor $r$. So there exists a positive constant $K$, again independent of all the scaling indices, such that the l.h.s. of \eqref{ss1} is bounded by $K\cdot\g^{-j_1/4}\,r\le K\cdot r$.
\end{itemize}
\end{proof}
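Schematically, the linear factor $r$ in \eqref{ss1} comes from a marginal sum: after the geometric sums over the larger sector indices are performed, the remaining exponential factors compensate exactly, as in the second case of the proof above,
\begin{equation}
\g^{3r/2}\sum_{s_{1}^{(a)}=0}^{r}\g^{-3s_{1}^{(a)}/2}\cdot\g^{-3r/2+3s_{1}^{(a)}/2}=\sum_{s_{1}^{(a)}=0}^{r}1=r+1\le 2r,\qquad r\ge1.
\end{equation}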
\section{The convergent contributions to the Schwinger's functions}
\subsection{More notations about the Gallavotti-Nicol\`o trees}
Before proceeding, let us introduce the following notations concerning some specific Gallavotti-Nicol\`o trees.
\begin{definition}[Biped trees]
Let $\cG$ be a Gallavotti-Nicol\`o tree and $G_r^k$ be a node in $\cG$. Let $e(G_r^k)$ be the set of external fields of $G_r^k$, whose cardinality is denoted by $|e(G_r^k)|$. A biped $b$ is a node in $\cG$ such that $|e(G_r^k)|=2$. The set of all bipeds is denoted by $\cB:=\{G_r^k,\ r=1,\cdots, r_{max};\ k=1,\cdots, c(r)\ \big|\ |e(G_r^k)|=2\}$. A biped tree $\cG_\cB$ is defined as a subgraph of a Gallavotti-Nicol\`o tree in which the set of nodes, denoted by $V(\cG_\cB)$, consists of the following elements: i) the bare nodes $\VV$ of $\cG$, ii) the bipeds $b$, and iii) the root node, which corresponds to the complete graph $G$. The edges of $\cG_\cB$ are the natural inclusion relations between the nodes in $V(\cG_\cB)$.
\end{definition}
\begin{definition}
Let $b\in\cB$ be a biped. The set of external fields of $b$ is denoted by $e_b=\{\bar\psi_b,\psi_b\}$, and the set of external fields of $\cG_\cB$ is denoted by ${\cal EB}$. We have ${\cal EB}:=\big(\cup_{b\in \cB}\ e_b\big)\setminus e(G)$, where $e(G)$ is the set of external fields of the complete graph $G$.
\end{definition}
Similarly, define the quadruped Gallavotti-Nicol\`o trees as:
\begin{definition}
A quadruped $Q$ is a node of a Gallavotti-Nicol\`o tree $\cG$ which has four external fields. The set of all quadrupeds in $\cG$ is denoted by $\cQ$. A quadruped tree $\cG_\cQ$ is defined as a subgraph of $\cG$ whose set of nodes, denoted by $V(\cG_\cQ)$, consists of the following elements: the bare nodes of $\cG$, the quadrupeds in $\cQ$, and the root of $\cG$, which corresponds to the complete graph $G$. The edges of $\cG_\cQ$ are the inclusion relations of its nodes. The set of external fields associated to a quadruped $Q$ is denoted by $e_Q$, and the set of external fields of $\cG_\cQ$ is denoted by ${\cal EQ}$. We have ${\cal EQ}=(\cup_{Q\in\cQ}e_Q)\setminus e(G)$.
\end{definition}
Remark that both the bare vertices and the root can be considered as quadrupeds.
\begin{definition}\label{gn3}
A convergent Gallavotti-Nicol\`o tree $\cG_{{\cal C}}$ is a subgraph of a Gallavotti-Nicol\`o tree which contains neither biped nodes nor quadruped nodes. In other words, the set of nodes in $\cG_{{\cal C}}$ is given by $V({\cal C}):=\big\{ G_{r}^{k},\ r=1,\cdots, r_{max},\ k=1\cdots c(r)\big\vert\ |e( G_{r}^{k})|\ge 6\big\}$ and the edges are the natural inclusion relations between the nodes.
\end{definition}
Correspondingly, we have the following definitions for the Schwinger functions.
\begin{definition}
Let $\{\cG_{{\cal C}}\}$ be the set of convergent GN trees. The corresponding connected Schwinger functions, denoted by $S^c_{{\cal C},2p,\b}$, with $p\ge3$, are called the convergent Schwinger functions; they are the contributions to the Schwinger functions $S_{2p}$ from the convergent graphs. Similarly, define $S^c_{\cQ,\b}$, the quadruped Schwinger functions, as those whose GN trees are the quadruped trees $\{\cG_\cQ\}$. Finally, define $S^c_{\cB,\b}$, the biped Schwinger functions, as those corresponding to the biped GN trees $\{\cG_{\cB}\}$.
\end{definition}
In the rest of this section, we study the analytic properties of the $2p$-point Schwinger functions for $p\ge2$, which include the convergent Schwinger functions and the quadruped ones. The biped Schwinger functions will be studied in the next section.
\subsection{The $2p$-point Schwinger functions with $p\ge3$}
The perturbation series of $S^c_{{\cal C},2p}$ can be written as
\begin{equation}
S^c_{{\cal C},2p,\b}(\lambda)=\sum_n\lambda^n S_{{\cal C},2p,n},\end{equation}
\begin{equation}\label{conv1}
S_{{\cal C},2p,n} = {\frac{1}{n!}}\sum_{\substack{{\cal B} = \emptyset,\ {\cal Q}=\emptyset\\ \{\underline\tau\},\ \cJ'}}
\sum'_{\{\si \}} \ep (\cJ')\prod_{v} \int_{\Lambda_{\beta}} dx_{v}
\prod_{\ell\in \cT} \int_{0}^{1} dw_{\ell}
C_{j,\si_{\ell}} (x_{\ell}, y_{\ell})
[\det C(w)]_{{ left}}.
\end{equation}
We have the following theorem:
\begin{theorem}[The convergent contributions (see also \cite{Riv})]\label{cth1}
There exists a uniform positive constant $c_1$ independent of the scaling index such that the connected Schwinger functions $S^c_{{\cal C}, 2p,\b}(\lambda)$, $p\ge3$, are analytic functions of $\lambda$, for $|\lambda\log T|\le c_1 $.
\end{theorem}
\begin{proof}
The proof follows \cite{Riv} closely; here we try to make it simpler and more pedagogical.
Formula \eqref{conv1} can be written as:
\begin{eqnarray}\label{conv2}
S_{{\cal C},2p,n} &=& {\frac{1}{n!}}\sum_{\{G^k_r, {r=0},\cdots,r_{max}; {k=1},\cdots,c(r)\}, { {\cal B} = \emptyset,
{\cal Q}=\emptyset}}\sum_{\underline\tau}
\sum_{\{\si \}}' \ep (\cJ)\nonumber\\
&&\quad\quad \prod_{v} \int_{\Lambda_{\beta}} dx_{v}
\prod_{\ell\in \cT} \int_{0}^{1} dw_{\ell}
C_{j,\si_{\ell}} (x_{\ell}, y_{\ell})
[\det C(w)]_{left},
\end{eqnarray}
in which we make the sum over $\cJ$ more explicit.
By Theorem \ref{tpc}, we have:
\begin{eqnarray}\label{conv3}
|S_{{\cal C},2p,n}| &\le&
{\frac{K^n}{n!}}
\sum_{\{G^k_r, {r=0},\cdots,r_{max}; {k=1},\cdots,c(r)\},{ {\cal B} = \emptyset,
{\cal Q}=\emptyset}}\sum_{\underline\tau,\cT} \sum'_{\{\si \}}\prod_{i=1}^{n}\ \Big[\ \chi_{i}(\si)\g^{-(l_i^1 + l_i^2 + l_i^3 + l_i^4)/4}\ \Big]\nonumber\\
&&\quad\cdot\prod_{i=1}^n \g^{-[r_i^1+r_i^2+r_i^3+r_i^4]/6}
\label{cpt1},
\end{eqnarray}
in which we have used the fact that $2-|e(G^k_r)|/2\le-|e(G^k_r)|/6$, for $|e(G^k_r)|\ge 6$, and the fact that $$\prod_{r=0}^{r_{max}}\prod_{k=1}^{c(r)}\g^{-|e(G^k_r)|/6}=\prod_{i=1}^n \g^{-[r_i^1+r_i^2+r_i^3+r_i^4]/6}.$$
One important step is to sum over the sector indices in:
\begin{equation}\label{secsumcov}
\sum'_{\{\si \}}\prod_{i=1}^{n}\ \Big[\ \chi_{i}(\si)\g^{-(l_i^1 + l_i^2 + l_i^3 + l_i^4)/4}\ \Big]\prod_{i=1}^n \g^{-[r_i^1+r_i^2+r_i^3+r_i^4]/6}.\end{equation}
Remark that, since the scaling indices of the four fields are not necessarily the same, we cannot apply Lemma \ref{sec1} directly. Let us fix a field with maximal index $r$, which eventually goes to the root; without loss of generality, this field can be chosen as $f_4$, with scaling index $r_4$. The constraint from conservation of momentum implies that either the two smallest sector indices among the four, chosen as $s_{1}^{(a)}$ and $s_{2}^{(a)}$, are equal (modulo $\pm1$), or the smallest sector index $s_{1}^{(a)}$ is equal to $j_1$, the smallest scaling index.
Then summing over the sector
indices $(s_{1}^{(a)},s_{1}^{(b)})$ results in the factor $\max \{r_i^1, r_i^2\}$. Now we consider the summation over the sectors $(s_{3}^{(a)},s_{3}^{(b)})$, for which we obtain (see the proof of Lemma \ref{sec1})
\begin{equation}
\sum_{s_{3}^{(a)},s_{3}^{(b)}}\g^{-l_i^3/4}\le K_1\cdot r_i^3,
\end{equation}
for some positive constant $K_1$. In total we lose a factor $K_1 \bar r\cdot r_i^3$, where $\bar r=\max\{r_i^1, r_i^2\}$, at any vertex $i=1,\cdots, n$. Summation over the sectors which are not the root sectors is bounded by
\begin{equation}
\prod_{i=1}^n\sum_{r_i^1,\cdots, r_i^{4}=0}^{r_{max}}[\bar r \g^{-r_i^1/6}\g^{-r_i^2/6}\cdot r_i^3\g^{-r_i^3/6}\g^{-r_i^4/6}]\le K_2.
\end{equation}
Finally, we have to sum over the sector indices of the root fields, one for each vertex $i$. Each such summation is bounded by $r_{max}=3j_{max}/2=3|\log T|/2$. Summing over all the GN trees and
spanning trees costs a factor $K_3^n n!$, where $K_3$ is a positive constant (see \cite{Riv} for a detailed proof of this combinatorial result). Choosing the positive constant $C=K\cdot K_1\cdot K_2\cdot K_3$, we have
\begin{equation}
|S_{{\cal C},2p,n}|\le C^n|\log T|^n,\qquad {\rm hence}\qquad |S^c_{{\cal C},2p,\b}(\lambda)|\le \sum_{n=0}^\infty C^n|\lambda|^n|\log T|^n.
\end{equation}
The series on the right-hand side is convergent for $C|\lambda| |\log T|<1$. Choosing $c_1\le1/C$,
we conclude the proof of the theorem.
\end{proof}
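For instance, with the choice $c_1=1/(2C)$ one obtains a bound which is uniform in the temperature: for $|\lambda\log T|\le c_1$,
\begin{equation}
\sum_{n=0}^\infty C^n|\lambda|^n|\log T|^n\le\sum_{n=0}^\infty 2^{-n}=2.
\end{equation}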
\subsection{The quadruped Schwinger functions}
In this part we consider the connected quadruped Schwinger functions $S^c_{\cQ,\b}$. Since both a
bare vertex $i$ and a general quadruped $Q$ have four external fields, we introduce for each bare vertex an indicator function $\chi_i(\{\sigma\})$ and for each quadruped $Q$ an indicator function $\chi_Q(\{\sigma\})$. The latter indicator function is defined as follows: $\chi_Q(\{\sigma\})$ is equal to $1$ if the sector indices $\{\sigma\}=\{\s_Q^1,\cdots,\s_Q^4\}$ of the external fields of $Q$ satisfy the constraints in Lemma \ref{secmain}, and is equal to $0$ otherwise. Then the quadruped Schwinger functions can be written as
\begin{eqnarray}\label{conv2q}
S^c_{\cQ,\b}(\lambda)&=&\sum_{n=0}^\infty \lambda^n S_{\cQ,n},\\
S_{\cQ,n} &=& {\frac{1} {n!}}\sum_{\cG_\cQ,{\cal EQ}}\sum_{\underline\tau}
\sum_{\{\si \}}' \ep (\cJ)\prod_{i=1}^n\chi_i(\{\sigma\})\prod_{Q\in\cQ}\chi_Q(\{\sigma\})\nonumber\\
&&\quad\quad \prod_{v} \int_{\Lambda_{\beta}} dx_{v}
\prod_{\ell\in \cT} \int_{0}^{1} dw_{\ell}
C_{r_\ell,\si_{\ell}} (x_{\ell}, y_{\ell})
[\det C(w)]_{left}.
\end{eqnarray}
We have the following theorem:
\begin{theorem}\label{mqua}
Let $T=1/\b>0$ be the temperature and $S^c_{\cQ,\b}(\lambda)$ be the quadruped Schwinger function. There exists a constant $c$, which may depend on the model but is independent of the scaling indices, such that the perturbation series for the quadruped Schwinger function is convergent in the domain $\{\lambda\in\RRR\ \vert\ |\lambda|<c/|\log T|^2\}$.
\end{theorem}
Remark that a theorem similar to this one has already been proved in \cite{Riv}, in a different setting. We present here a more pedagogical proof, for the reader's convenience.
Before proceeding, we introduce the following definitions.
\begin{definition}[The maximal sub-quadruped \cite{Riv}]
Let $Q\in\cG_\cQ$ be a quadruped which is not a leaf. By the Gallavotti-Nicol\`o tree structure of $\cG_\cQ$, $Q$ must be linked directly to a set of quadrupeds $\{Q'_1,\cdots,Q'_{d_Q}\}$, $d_Q\ge1$, called the maximal sub-quadrupeds of $Q$. These sub-quadrupeds could be either the bare vertices or some general quadrupeds.
\end{definition}
\begin{remark}
Remark that a maximal sub-quadruped $Q'$ of $Q$ may still contain some sub-quadrupeds $Q''_1,\cdots, Q''_{d(Q')}$. The inclusion relation between $Q$ and $Q''$ is not an edge of the quadruped Gallavotti-Nicol\`o tree.
\end{remark}
Now we consider summation over sector indices for a quadruped, for which we have the following lemma.
\begin{lemma}[Sector counting lemma for quadrupeds]\label{secqua}
Let $Q$ be a quadruped of scaling index $r$, which is linked to $d_Q$ maximal sub-quadrupeds $Q'_1,\cdots,Q'_{d_Q}$. Let the external fields of $Q$ be $f_Q^1,\cdots,f_Q^4$, with scaling indices $r_Q^1,\cdots,r_Q^4$ and sector indices $\s_Q^1,\cdots,\s_Q^4$, respectively. Let the external fields of $Q'_v$, $v=1,\cdots, d_Q$, be $f_v^1,\cdots,f_v^4$, with scaling indices $r_v^1,\cdots,r_v^4$ and sector indices $\s_v^1=(s_{v,1}^{(a)},s_{v,1}^{(b)}),\cdots,\s_v^4=(s_{v,4}^{(a)},s_{v,4}^{(b)})$, respectively. Let $\chi_v(\{\sigma_v\})$ be the indicator function at the sub-quadruped $Q'_v$, $v=1,\cdots, d_Q$. Then we have
\begin{eqnarray}
\sum_{\{\sigma_1\},\cdots,\{\sigma_{d_Q}\}} \prod_{v=1}^{d_Q}\chi_v(\{\sigma_v\})\chi_Q(\{\sigma\})\g^{-[l_v^1+l_v^2+l_v^3+l_v^4]/4}\le K_1^{d_Q-1} {r}^{d_Q-1},
\end{eqnarray}
for some positive constant $K_1$.
\end{lemma}
\begin{proof}
Let $Q$ be a quadruped and $\cT$ be a spanning tree of $G$; then $\cT_Q=\cT\cap Q$ is the set of tree lines in $Q$. Among all the internal fields contained in $Q$, we fix the root field with the highest scaling index $r_Q$, which we denote by $f_{r_Q}$. Since a root field also belongs to some maximal sub-quadruped of $Q$, we also fix a root field for each of the $d_Q$ sub-quadrupeds. Define the external vertices of $Q$ as the maximal sub-quadrupeds $Q'$ to which the external fields of $Q$ are hooked; there can be at most four of them. We call a field (half-edge) a tree field if, by contraction with another field (half-edge), a tree line of $\cT_Q$ can be formed. We consider the constraints on the sector indices of the maximal sub-quadrupeds, starting from an external sub-quadruped $Q'_1$, which contains at least one external field, and moving to the next maximal sub-quadruped.
By conservation of momentum, whenever two tree fields of an external quadruped are fixed, the last field in that external quadruped is also determined. In this way we find that the number of pairs of sector indices to be determined is equal to the number of tree lines in $Q$ connecting the maximal sub-quadrupeds, which is $d_Q-1$.
Since the summation over each pair of sector indices of a root field is bounded by $\sum_{(s_{v}^{(a)},s_{v}^{(b)})}\g^{-l/4 }\le K_1\cdot r_v$ (cf. Lemma \ref{sec1}),
in which $r_v\le r_Q\le r$, we obtain
\begin{eqnarray}
\sum_{\{\sigma_1\},\cdots,\{\sigma_{d_Q}\}} \prod_{v=1}^{d_Q}\chi_v(\{\sigma_v\})\chi_Q(\{\sigma\})\g^{-[l_v^1+l_v^2+l_v^3+l_v^4]/4}\le
(K_1\cdot r_v)^{d_Q-1}\le K_1^{d_Q-1} {r}^{d_Q-1}.
\end{eqnarray}
This concludes the proof of the lemma.
\end{proof}
\begin{proof}[Proof of Theorem \ref{mqua}]
In order to sum over all the quadruped trees, it is useful to keep the GN tree structure explicit and write a quadruped tree as $\cG_\cQ=\{Q_r^k, r=0,\cdots, r_{max}, k=1,\cdots, c(r)\}$. The quadruped Schwinger's functions satisfy the following bound (cf. Formula \eqref{cpt1}):
\begin{eqnarray}
|S_{\cQ,n}| &\le&
{\frac{K^n}{n!}}
\sum_{\{Q^k_r, {r=0},\cdots,r_{max}; {k=1},\cdots,c(r)\},{ {\cal B} = \emptyset}}\sum_{\underline\tau,\cT} \sum'_{\{\si \}}\prod_{Q\in\cQ}\chi_Q(\{\sigma\})\prod_{i=1}^{n}\ \Big[\chi_{i}(\si)\g^{-[l_i^1 + l_i^2 + l_i^3 + l_i^4]/4}\Big]\nonumber\\
&&\quad\cdot\prod_{r=0}^{r_{max}}\prod_{k=1}^{c(r)}\g^{2-|e(Q^k_r)|/2}\ .\label{convq}
\end{eqnarray}
Now we sum over all the sector indices, from the leaves of a quadruped tree to the root. The first quadruped $Q_1$ that we encounter contains only bare vertices as its maximal sub-quadrupeds. The second quadruped $Q_2$ contains the quadruped $Q_1$, possibly other quadrupeds at the same scaling index as $Q_1$, and bare vertices. More quadrupeds are encountered as we move towards the root. In this process we apply Lemma \ref{secqua} to each quadruped $Q$ that we meet, until we arrive at the root node of $\cG_\cQ$. Then there exists a constant $K_2$ independent of the scaling index such that:
\begin{eqnarray}
|S_{\cQ,n}| &\le& \prod_{Q\in\cQ}\Big[K_2^{d_Q} \sum_{r=0}^{r_{max}} {r}^{d_Q-1}\Big]
\le \prod_{Q\in\cQ}K_3^{d_Q}|\log T|^{d_Q},
\label{convq2}
\end{eqnarray}
in which $K_3=3 K_2\cdot K/2$ is another positive constant, hence also independent of the scaling index. The sum over scaling indices in $[\cdots]$ means that we sum over the root scaling index of each quadruped $Q$, from $0$ to $r_{max}=3|\log T|/2$. We have also used the fact that the number of quadruped trees with $n$ vertices is bounded by $c^n n!$ (see \cite{Riv}), a variant of Cayley's theorem on the number of labeled spanning trees with a fixed vertex set.
Using the following well-known inductive formula
\begin{equation}\label{ind4p}\sum_{Q\in \cQ} d_Q=|\cQ|+n-1\le 2n-2,
\end{equation}
where $| \cQ|$ is the cardinality of the set of quadrupeds $\cQ$, for which we have $| \cQ|\le n-1$,
we can prove the following bound:
\begin{equation}
|S_{\cQ,n}|\le\prod_{Q\in \cQ}K_3^{d_Q}|\log T|^{d_Q}\le K_3^{2n-2}|\log T|^{2n-2},
\end{equation}
and
\begin{equation}
|S^c_{\cQ,\b}(\lambda)|\le\sum_{n=0}^\infty K_3^{2n-2}\cdot|\lambda|^n\cdot|\log T|^{2n-2}.
\end{equation}
Let $0<c\le1/K_3^2$ be some constant, and define
\begin{equation}\label{adoq}
{\cal R}^\cQ_T:=\{\lambda\ \vert\ |\lambda\log^2T|<c \},
\end{equation}
then the perturbation series of $S^c_{\cQ,\b}(\lambda)$ is convergent for $\lambda\in{\cal R}^\cQ_T$.
This concludes the proof of the theorem.
\end{proof}
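As a small check of \eqref{ind4p}, take $n=3$ bare vertices, one quadruped $Q_1$ containing two of them, and the root containing $Q_1$ and the remaining bare vertex, so that $\cQ=\{Q_1,\ root\}$ (recall that the root counts as a quadruped) and $d_{Q_1}=d_{root}=2$; then indeed
\begin{equation}
\sum_{Q\in\cQ}d_Q=2+2=4=|\cQ|+n-1=2+3-1.
\end{equation}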
\begin{remark}\label{rmtad}
Obviously ${\cal R}^{\cQ}_T\subset{\cal R}^{c}_T$. This fact also sets a constraint on the analytic domain of the biped Schwinger functions. Denoting by ${\cal R}_T$ the analytic domain of the two-point Schwinger functions, we have
\begin{equation}
{\cal R}_T\subseteq{\cal R}^{c}_T\cap {\cal R}^{\cQ}_T\subseteq{\cal R}^{\cQ}_T.
\end{equation}
Therefore, in order for the perturbation series of the $2p$-point Schwinger functions, $p\ge1$, to be convergent, the coupling constant $\lambda$ should satisfy
\begin{equation}
|\lambda|<c/{j^2_{max}}\sim c/{|\log T|^2}.
\end{equation}
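For illustration, with the constants $c$ and $c_1$ introduced above, if $j_{max}=10$ (so that $|\log T|\sim10$), then
\begin{equation}
|\lambda|<c/100\ \ ({\rm quadruped\ condition}),\qquad |\lambda|\le c_1/10\ \ ({\rm convergent\text{-}graph\ condition}),
\end{equation}
so it is the quadruped condition that dictates the final radius of convergence.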
\end{remark}
\section{The $2$-point Functions}
In this section we study the connected $2$-point Schwinger functions and the self-energy functions. While the perturbation series for the former are labeled by connected graphs, those for the latter are labeled by the one-particle irreducible (1PI for short) graphs, i.e. the graphs that cannot be disconnected by deleting a single edge.
Let $S^c_{2,\b}(y,z)$ be a connected $2$-point Schwinger function, in which $y$ and $z$ are the coordinates of the two external fields. Using the BKAR tree formula (\cite{RW1}, Theorem $4.1$) and organizing the perturbation terms according to the GN trees, we can write $S^c_{2,\b}(y,z)$ as:
\begin{eqnarray}\label{consch}
S^c_{2,\b}(y,z)&=&\sum_{n=0}^\infty\frac{\lambda^{n+2}}{n!}\int_{({\Lambda_{\beta}})^n} d^3x_1\cdots d^3x_n\sum_{\cG}\sum_{\cG_\cB}\sum_{{\cal EB}}\sum_{\{\sigma\}}\sum_{\cT, \underline\tau}\Big(\prod_{\ell\in\cT}\int_{0}^1 dw_\ell\Big)\nonumber\\
&&\quad\quad\cdot \Big[\prod_{\ell\in\cT}C(f_\ell,g_\ell)\Big]\cdot\det\Big(C(f,g,\{w_\ell\})\Big)_{left},
\end{eqnarray}
where $\cG_\cB$ is a biped GN tree (cf. \cite{RW1}, Definition $5.1$) and $\cT$ is a spanning tree in the root graph of $\cG_\cB$.
It is well known (see e.g. \cite{Iz}, page 290) that the self-energy $\Sigma(y,z)$ can be obtained by a Legendre transform of the generating functional for $S^c_{2,\b}$. In terms of Feynman graphs, this corresponds to replacing the connected graphs labeling the connected functions by the 1PI graphs (also called the $2$-edge-connected graphs in graph theory). Let $\{\Gamma\}$ be the set of 1PI graphs over the $n+2$ vertices; the self-energy is defined by:
\begin{eqnarray}\label{selfeng}
&&\Sigma_2(y,z,\lambda)=\sum_{n=0}^\infty\frac{\lambda^{n+2}}{n!}\int_{({\Lambda_{\beta}})^n} d^3x_1\cdots d^3x_n\sum_{\cG}\sum_{\cG_\cB}\sum_{{\cal EB}}\sum_{\{\sigma\}, \underline\tau}\sum_{\{\cT\}}\sum_{\{\Gamma\}}\\
&&\quad \Big(\prod_{\ell\in\cT}\int_{0}^1 dw_\ell\Big)\cdot \Big[\prod_{\ell\in\cT}C(f_\ell,g_\ell)\Big]\Big[\prod_{\ell\in\Gamma\setminus\cT}C(f_\ell,g_\ell)\Big]
\cdot\det\Big(C(f,g,\{w_\ell\})\Big)_{left,\Gamma}.\nonumber
\end{eqnarray}
Through the Fourier transform
\begin{equation}
\Sigma_2(y,z,\lambda)=\int dk\ \hat\Sigma(k,\lambda)e^{ik(y-z)},
\end{equation}
the self-energy function in momentum space, $\hat\Sigma(k,\lambda)$, is defined.
Remark that the above expressions are still formal, as the summation over the 1PI graphs could be unbounded. The canonical way of generating the 1PI graphs without divergent combinatorial factors is the multi-arch expansion, which will be introduced shortly. The construction of the $2$-point Schwinger functions and of the self-energy requires renormalization, which is introduced in the next subsection.
Before proceeding, let us recall the Salmhofer's criterion on the Fermi liquid \cite{Salm} at equilibrium:
\begin{definition}[Salmhofer's criterion]\label{salmc}
A $2$-dimensional many-fermion system at positive temperature is a Fermi liquid if the thermodynamic limit of the momentum space Green's functions exists for $|\lambda|<\lambda_0(T)$ and if there are constants $C_0, C_1, C_2>0$ independent of $T$
and $\lambda$ such that the following holds. (a) The perturbation expansion for the momentum space self-energy $\hat\Sigma(k,\lambda)$ converges for all $(\lambda,T)$ with $|\lambda\log T|<C_0$. (b) The self-energy $\hat\Sigma(k, \lambda)$ satisfies the following regularity conditions:
\begin{itemize}
\item $\hat\Sigma(k, \lambda)$ is twice differentiable in $k_0$, $k_+,k_-$ and
\begin{equation}
\max_{\beta=2}\Vert\partial_{k_{\a}}^\beta\hat\Sigma(k, \lambda)\Vert_{\infty}\le C_1,\ \a=0,\pm.
\end{equation}
\item The restriction of the self-energy on the Fermi surface is $C^{\beta_0}$ differentiable w.r.t. the momentum, in which $\beta_0>2$, and
\begin{equation}
\max_{\beta=\beta_0}\Vert\partial_{k_{\a}}^\beta\Sigma(k, \lambda)\Vert_{\infty}\le C_2,\ \a=0,\pm.
\end{equation}
\end{itemize}
\end{definition}
\vskip.3cm
\subsection{Localization of the two-point functions}
The localization of the two-point Schwinger function is naturally defined in momentum space.
Let $\hat S_2(p)$ be a two-point function with external momentum $p$.
Suppose that the internal momentum of the lowest scale belongs to the sector $(j_r, s^{(a)}_{j_r}, s^{(b)}_{j_r})$ while the external momentum $p$ belongs to the sector $(j_e, s^{(a)}_{j_e}, s^{(b)}_{j_e})$. The localization operator is defined as:
\begin{equation}
\tau\hat S_2(p)=\sum_{j=1}^\infty\sum_{\sigma=(s^{(a)},s^{(b)})}\chi_j(4p_0^2+e^2(\bp))\cdot v_{s^{(a)}}[t^{(a)}(\bp)]
\cdot v_{s^{(b)}}[t^{(b)}(\bp)]\cdot\hat S_2 (2\pi T, \bk_F),
\end{equation}
in which $\kk_F=P_F(\bp)$. Notice that $\hat S_2(2\pi T,\kk_F)$ is not a constant on ${\cal F}_0$ but depends non-trivially on $\kk_F$. In order to establish the non-perturbative bound, it is important to perform the localization in the direct space. The corresponding localization operation, denoted by $\tau^*$, is defined via the Fourier transform as follows. Consider the integral
\begin{eqnarray}\label{rn2pt0}
I=\int_{\cD_{\beta,L}\times\cD_{\beta,L}} dp dk\ \hat S_{2}
(p)\hat C(k)\hat R(p,k,P_e),
\end{eqnarray}
in which $\hat C(k)=\sum_j\sum_{\sigma=(s^{(a)},s^{(b)})}\hat C_{j,\sigma}(k)$ is the sectorized free propagator and $\hat R(p,k,P_e)=\bar R(p,P_e)\delta(p-k)$, in which $\bar R(p,P_e)$ is a function of $p$ and of the external momentum $P_e$. Define the localization operator $\tau$ on $I$ by:
\begin{equation}
\tau I=\int dp\, dk\ \hat S_{2}
(2\pi T,\kk_F)\hat C(k) \hat R(p,k,P_e),
\end{equation}
and the remainder term is defined as
\begin{equation}
\hat R I=(1-\tau)I,
\end{equation}
in which $\hat R:=(1-\tau)$ is also called the remainder operator. The direct space representation of $I$ is given by:
\begin{eqnarray}\label{rn2pt1}
\tilde I=\int dy dz\ S_{2} (x,y)\ C(y,z)R(z,x,P_e),
\end{eqnarray}
which is indeed independent of $x$, due to translational invariance. Then the operators $\tau$ and $\hat R:=(1-\tau)$ induce the actions $\tau^*$ and $\hat R^*:=(1-\tau^*)$ in the direct space. The localized term is
\begin{equation}
\tau^*\tilde I=\int dy dz\ S_{2}(x,y)[e^{ik^0_F(x_0-y_0)+i\kk_F\cdot (\xx-\yy)} C(x,z)]R(z,x,P_e).
\end{equation}
Comparing with \eqref{rn2pt1} we find that the localization operator moves the starting point $y$ of the free propagator to the localization point $x$, with the compensation of a phase factor:
\begin{equation}
\tau^*C(y,z)=e^{i2\pi T(x_0-y_0)+i\kk_F\cdot (\xx-\yy)} C(x,z).
\end{equation}
The remainder term is:
\begin{eqnarray}
\hat R^* I=\int dy dz\ S_{2}(x,y)[C(y,z)-e^{ik^0_F(x_0-y_0)+i\kk_F\cdot (\xx-\yy)} C(x,z)]R(z,x,P_e),
\end{eqnarray}
in which
\begin{eqnarray}
&&C(y,z)-e^{ik^0_F(x_0-y_0)+i\kk_F\cdot (\xx-\yy)} C(x,z)\\
&&=\int_0^1 dt(y_0-x_0)\frac{\partial}{\partial x_0}C((ty_0+(1-t)x_0,\yy),z)\nonumber\\
&&+\frac12\sum_{a,b=1\cdots3}(y^{(a)}-x^{(a)})(y^{(b)}-x^{(b)})\partial_{x^{(a)}}\partial_{{x}^{(b)}}
C((x_0,\xx),z)\nonumber\\
&&+\int_0^1 dt(1-t)\sum_{a,b=1\cdots3}(y^{(a)}-x^{(a)})(y^{(b)}-x^{(b)})\partial_{y^{(a)}}\partial_{{y}^{(b)}}
C((x_0, t\yy+(1-t)\xx),z)\nonumber\\
&&+C((x_0,\xx),z)[1-e^{ik^0_F(x_0-y_0)+i\kk_F\cdot (\xx-\yy)}]\nonumber.
\end{eqnarray}
The term in the last line means that an additional propagator $C(x,z)$ is attached to the biped graph, so that the new graph has three external lines. It is therefore no longer linearly divergent, and we gain a convergent factor in the power counting. Now we consider the other terms.
Suppose the internal momentum of the two-point function $S_{2}(x,y)$ of the lowest scale belongs to the sector $\Delta^{j_r}_{{s^{(a)}_{j_r},s^{(b)}_{j_r}}}$, the sector with scaling index ${j_r}$ and sector indices $({{s^{(a)}_{j_r},s^{(b)}_{j_r}}})$, while the external momentum belongs
to the sector $\Delta^{j_e}_{{s^{(a)}_{j_e},s^{(b)}_{j_e}}}$,
then there exist constants $K_1$ and $K_2$ such that
\begin{eqnarray}\label{rmdx}
&&|y_0-x_0|\le O(1)\gamma^{j_r},\
|\partial_{x_0}C((x_0,\yy),z)|\le K_1\g^{-j_e}|C((x_0,\yy),z)|,\nonumber\\
&&|y^{(a)}-x^{(a)}|\cdot|y^{(b)}-x^{(b)}|\le\g^{s_{j_r}^{(a)}+s_{j_r}^{(b)}}, |\partial_{x^{(a)}}\partial_{{x}^{(b)}}C((x_0,\xx),z)|\le K_2\g^{-s_{j_e}^{(a)}-s_{j_e}^{(b)}}.\nonumber
\end{eqnarray}
Since the perturbation terms are organized according to the Gallavotti-Nicol\`o tree structure, we have $j_r\le j_e$. As will be proved in Section $7.2$, we can always choose the optimal internal propagators (the ring propagators) such that $s_{j_e}^{(a)}+s_{j_e}^{(b)}\ge s_{j_r}^{(a)}+s_{j_r}^{(b)}$. Hence we gain the convergent factors
$\g^{-(j_e-j_r)}$ and $\g^{-[(s_{j_e}^{(a)}+s_{j_e}^{(b)})-(s_{j_r}^{(a)}+s_{j_r}^{(b)})]}$.
Now we consider the localization for the self-energy $\hat \Sigma(p_0,\bp)$. Again, we suppose that the internal momentum of the lowest scale belongs to the sector $\Delta^{j_r}_{{s^{(a)}_{j_r},s^{(b)}_{j_r}}}$ while the external momentum belongs to the sector $\Delta^{j_e}_{{s^{(a)}_{j_e},s^{(b)}_{j_e}}}$. We have:
\begin{equation}
\tau\Sigma(p_0,\bp)=\sum_{j=1}^{j_{max}}\sum_{\sigma=(s^{(a)},s^{(b)})}\chi_j(4p_0^2+e^2(\bp))\cdot v_{s^{(a)}}[t^{(a)}(\bp)]
\cdot v_{s^{(b)}}[t^{(b)}(\bp)]\cdot\Sigma (2\pi T, \bp_F),
\end{equation}
and
\begin{eqnarray}\label{rmd11}
&&\hat R\hat \Sigma(p_0,\bp)_{s^{(a)},s^{(b)}}:=(1-\tau)\hat \Sigma(p_0,\bp)_{s^{(a)},s^{(b)}}\\
&=&\hat \Sigma(p_0,\bp)_{s^{(a)},s^{(b)}}-\hat \Sigma(k_F^0,\bp)_{s^{(a)},s^{(b)}}+\hat \Sigma(k_F^0,\bp)_{s^{(a)},s^{(b)}}-\hat \Sigma(k_F^0,\bk_F)_{s^{(a)},s^{(b)}}\nonumber\\
&=&\int_0^1 dt (p_0-k_F^0)\frac{\partial}{\partial p_0(t)}\hat \Sigma(k_F^0+t(p_0-k_F^0),\bp)_{s^{(a)},s^{(b)}}\nonumber\\
&+&\int_0^1 dt(1-t)(p^{(a)}-k_{F}^{(a)})(p^{(b)}-k_{F}^{(b)})\frac{\partial^2}{\partial p^{(a)}\partial p^{(b)}}\hat \Sigma(k_F^0,\bk_F+t(\bp-\bk_F))_{s^{(a)},s^{(b)}}\nonumber\ ,
\end{eqnarray}
where $p_0(t)=k_F^0+t(p_0-k_F^0)$ and $p^{(a)}(t)=k_F^{(a)}+t(p^{(a)}-k^{(a)}_{F})$.
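The first line of \eqref{rmd11} is just the first-order Taylor formula with integral remainder; for a smooth function $f$ of one variable it reads
\begin{equation}
f(p)-f(k)=\int_0^1 dt\ (p-k)\, f'\big(k+t(p-k)\big),
\end{equation}
applied here in the $p_0$ variable at fixed $\bp$, and at second order in the spatial variables.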
We have $|p_0-k_F^0|\sim \gamma^{-j_e}$ and $\Vert\partial_{p_0}\hat \Sigma(k_F^0,\bp)_{s^{(a)},s^{(b)}}\Vert\sim\gamma^{j_r}\Vert\hat \Sigma(k_F^0,\bp)_{s^{(a)},s^{(b)}}\Vert$,
by the GN tree structure. Then we obtain
\begin{equation}\label{rmd12}
\Vert(p_0-k_F^0)\frac{\partial}{\partial p_0}\hat \Sigma(k_F^0,\bp)_{s^{(a)},s^{(b)}}\Vert_{L^\infty}\le K_1\gamma^{-(j_e-j_r)}\Vert\hat \Sigma(k_F^0,\bp)_{s^{(a)},s^{(b)}}\Vert_{L^\infty},
\end{equation}
and
\begin{eqnarray}\label{rmd13}
&&\Vert(p^{(a)}-k_{F}^{(a)})(p^{(b)}-k_{F}^{(b)})\frac{\partial^2}{\partial p^{(a)}\partial p^{(b)}}\hat \Sigma(k_F^0,\bp)_{s^{(a)},s^{(b)}}\Vert_{L^\infty}\\
&&\quad\quad\quad\le
K_2\g^{-[(s_{j_e}^{(a)}+s_{j_e}^{(b)})-(s_{j_r}^{(a)}+s_{j_r}^{(b)})]}\Vert\Sigma(k_F^0,\bp)_{s^{(a)},s^{(b)}}\Vert_{L^\infty}\nonumber,
\end{eqnarray}
for some positive constants $K_1$ and $K_2$. So we gain the convergent factors $\g^{-(j_e-j_r)}$ and $\g^{-[(s_{j_e}^{(a)}+s_{j_e}^{(b)})-(s_{j_r}^{(a)}+s_{j_r}^{(b)})]}$.
\subsection{The renormalization}
The renormalization analysis is performed in the multi-scale representation. In this procedure, at each scale $r$ we move the counter-terms from the interaction to the covariance,
so that the tadpoles as well as the self-energy at that scale can
be compensated by the counter-terms, while the renormalized band function $E$ remains fixed.
Recall that the renormalization conditions are:
\begin{eqnarray}
\delta\mu(\lambda)+T(\lambda)=0,\ \hat\nu_{\a\a'}(P_F\bk,\lambda)+\hat\Sigma_{\a\a'}((0,P_F\bk),\hat\nu,\lambda )=0,\ \a,\a'=1,2 .\label{rncd5a}
\end{eqnarray}
For the second equation, it is enough to consider the single renormalization condition
\begin{equation}\label{rncd01}
\hat\nu_{11}(P_F\bk,\lambda)+\hat\Sigma_{11}((0,P_F\bk),\hat\nu,\lambda )=0,
\end{equation}
by Remark \ref{rmkindex}. We can then drop the matrix indices and rewrite this condition as:
\begin{equation}\label{rncd02}
\hat\nu(P_F\bk,\lambda)+\hat\Sigma((0,P_F\bk),\hat\nu,\lambda )=0.
\end{equation}
\subsubsection{Renormalization of the bare chemical potential}
In this part we consider the renormalization of the bare chemical potential $\mu_{bare}=\mu+\delta\mu$, realized by the compensation of the tadpoles $T$ with the counter-term $\delta\mu$. Define
\begin{equation}
\delta\mu(\lambda)=\delta\mu^{\le r_{max}}(\lambda)=\sum_{r=0}^{r_{max}} \delta\mu^r(\lambda),\ T(\lambda)=T^{\le r_{max}}(\lambda)= \sum_{r=0}^{r_{max}} T^r(\lambda),
\end{equation}
in which $T^r(\lambda)\in\RRR$ is the sliced tadpole with the {\it internal momentum} constrained to the $r$-th shell.
At each scale $r$, the chemical potential counter-term $\delta\mu^r$ compensates the tadpole term $T^r$, so that the renormalized chemical potential $\mu$ remains fixed. By locality, the compensations are {\it exact}. Before proceeding, it is useful to calculate explicitly the amplitude of a tadpole term.
\begin{lemma}\label{tadmain1}
Let $T^j$ be the amplitude of a tadpole at slice $j$, and let $\lambda\in{\cal R}_T\subseteq{\cal R}^\cQ_T$ be the coupling constant. There exist two positive constants $c_1$ and $c_2$, with $c_1<c_2$, which may depend on the model but are independent of $j$, such that:
\begin{equation}\label{tad01}
c_1|\lambda|j\g^{-j}\le\vert T^j\vert\le c_2|\lambda|j\g^{-j}.
\end{equation}
\end{lemma}
\begin{proof}
For any scaling index $0\le j\le j_{max}$, we have:
\begin{equation}
\vert T^j\vert=|\lambda|\sum_{\s=(s^{(a)},s^{(b)})}\ \Big|\int dk_0 dk^{(a)}dk^{(b)}\tilde C_{j,\sigma}(k_0,k^{(a)},k^{(b)})\ \Big|,
\end{equation}
in which $0\le s^{(a)}, s^{(b)}\le j$, and each sector integral is bounded above and below by positive constants, uniform in $j$ and $\s$, times $\g^{-s^{(a)}-s^{(b)}}$. Using the constraint $s^{(a)}+s^{(b)}\ge j-2$, we have
\begin{equation}
\sum_{(s^{(a)},s^{(b)})}\g^{-s^{(a)}-s^{(b)}}=\sum_{s^{(a)}=0}^{j}\g^{-s^{(a)}}\sum_{s^{(b)}=j-2-s^{(a)}}^{j} \g^{-s^{(b)}}.
\end{equation}
Now using the fact that
\begin{equation}
\g^{-j+2+s^{(a)}}\le\sum_{s^{(b)}=j-2-s^{(a)}}^{j} \g^{-s^{(b)}}\le \g^{-j+2+s^{(a)}}\,\frac{1}{1-\g^{-1}},
\end{equation}
we can easily prove that there exist two positive constants $c_1<c_2$, independent of $j$, such that
\begin{eqnarray}\label{bdtj}
c_1|\lambda|j\g^{-j}\le\vert T^j\vert\le c_2|\lambda|j\g^{-j}.
\end{eqnarray}
\end{proof}
Summing over all slices, we obtain the following lemma.
\begin{lemma}\label{tad05}
Let $T=\sum_{j=0}^{j_{max}}T^j$ be the full amplitude of a tadpole. There always exist two positive constants $c_1'$ and $c_2'$, with $c_1'<c_2'$, such that:
\begin{equation}
c_1'|\lambda|\le\vert T\vert\le c_2'|\lambda|.
\end{equation}
\end{lemma}
\begin{proof}
Since $|T|=\sum_{j=0}^{j_{max}}|T^j|$, the lemma follows directly by summing \eqref{bdtj} over the indices $j$.
\end{proof}
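Indeed, the upper bound follows from the elementary summation
\begin{equation}
\sum_{j=0}^{j_{max}} j\,\g^{-j}\le\sum_{j=0}^{\infty} j\,\g^{-j}=\frac{\g^{-1}}{(1-\g^{-1})^{2}}<\infty,
\end{equation}
while the lower bound is obtained by keeping a single term of the sum.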
In order for Equation \eqref{rncd3} to hold, we require:
\begin{equation}\label{rnc}
T^r+\delta\mu^r(\lambda)=0,\ {\rm for}\ r=0,\cdots, r_{max},
\end{equation}
which implies that:
\begin{equation}\delta\mu^{\le r}(\lambda)+ T^{\le r}=0,\ {\rm for}\ r=0,\cdots, r_{max}.\end{equation}
These cancellations take place between pairs of GN trees. Let $F_{2,n}=F'_{2,n-1}\, T^r$ be the amplitude of a graph with $n$ vertices which contains a tadpole $T^r$, and let $F'_{2,n}=F'_{2,n-1}\,\delta\mu^r$ be the amplitude of another graph which contains the counter-term, located at the same position in the GN tree as the tadpole. Then $F_{2,n}+F'_{2,n}=F'_{2,n-1}(T^r+\delta\mu^r)=0$. See Figure \ref{rtad} for an illustration of the cancellation.
\begin{figure}[htp]
\centering
\includegraphics[width=.6\textwidth]{rentad1.pdf}
\caption{\label{rtad}
Cancellation of a tadpole with the corresponding counter-term (the blue dot). }
\end{figure}
In this way we can fix the renormalized chemical potential at all scales.
We have the following theorem concerning the coefficient $\delta\mu(\lambda)$ of the tadpole counter-terms.
\begin{theorem}\label{flowmu}
There exists a positive constant $K$ independent of the scaling indices such that
the tadpole counter-term can be bounded as follows:
\begin{equation}\label{bdbare}
|\delta\mu(\lambda)|:=|\delta\mu^{\le r_{max}}(\lambda)|\le K|\lambda|,\ {\rm for}\ \lambda\in{\cal R}_T.
\end{equation}
\end{theorem}
This theorem states that the counter-terms are bounded and will not cause problems in the analysis of the two-point functions. So we can replace $\delta\mu$ by a constant in the rest of this paper, except in Section \ref{secflow}, in which we prove the upper bound for $\delta\mu(\lambda)$.
\subsubsection{Renormalization of the non-local part of the bare band function}
Now we consider the renormalization of the non-local part of the self-energy. Since the cancellation between tadpoles and counter-terms is exact, we may assume that the self-energy function $\hat\Sigma$ is tadpole free. Rewrite also the counter-term in the multi-scale representation as
\begin{equation}
\hat\nu(\lambda,\bk)=\hat\nu^{\le r_{max}}(\lambda,\bk):=\sum_{r=0}^{r_{max}}\sum_{\sigma=(s^{(a)},s^{(b)})}\hat\nu_{s^{(a)},s^{(b)}}^r(\lambda,\bk),
\end{equation}
in which
\begin{equation}
\hat\nu_{s^{(a)},s^{(b)}}^r(\lambda,\bk)=\hat\nu(\lambda,\bk)\chi_j(4k_0^2+e^2(\bk))\cdot v_{s^{(a)}}[t^{(a)}(\bk)]
\cdot v_{s^{(b)}}[t^{(b)}(\bk)].
\end{equation}
The multi-scale representation for the self-energy function is:
\begin{eqnarray}
\hat\Sigma(k,\hat\nu,\lambda )=\hat\Sigma^{\le r_{max}}(k,\hat\nu^{\le r_{max}-1},\lambda ):=\sum_{r=0}^{r_{max}}\sum_{\sigma=(s^{(a)},s^{(b)})}\hat\Sigma_{s^{(a)},s^{(b)}}^r(k,\hat\nu^{\le (r-1)},\lambda ),
\end{eqnarray}
in which
\begin{equation}
\hat\Sigma_{s^{(a)},s^{(b)}}^r(k,\hat\nu^{\le (r-1)},\lambda )=\hat\Sigma(k,\hat\nu^{\le (r-1)},\lambda )\chi_j(4k_0^2+e^2(\bk))\cdot v_{s^{(a)}}[t^{(a)}(\bk)]
\cdot v_{s^{(b)}}[t^{(b)}(\bk)].
\end{equation}
The localization for the self-energy is defined as:
\begin{equation}\tau \hat\Sigma_{s^{(a)},s^{(b)}}^{r}\big[k,\hat\nu^{\le r-1},\lambda \big]=
\hat\Sigma_{s^{(a)},s^{(b)}}^{r}\big[(2\pi T,P_F(\bk)_{s^{(a)},s^{(b)}}),\hat\nu^{\le r-1},\lambda \big],
\end{equation}
in which $P_F(\bk)_{s^{(a)},s^{(b)}}\in{\cal F}_0\cap \Delta^r_{{s^{(a)},s^{(b)}}}$ is the projection of the vector $\bk\in\Delta^r_{{s^{(a)},s^{(b)}}}$ onto the Fermi surface.
To perform the renormalizations, at each scale $r$ we move the counter-term $\hat\nu^r$ from the interaction potential to the covariance, so that the localized self-energy term $\hat\Sigma^r$ can be compensated. The renormalization condition becomes:
\begin{equation}\label{rs1}
\hat\Sigma_{s^{(a)},s^{(b)}}^{r}\big[(2\pi T,P_F(\bk))_{s^{(a)},s^{(b)}},\hat\nu^{\le (r-1)},\lambda \big]+
\hat\nu^{r}_{s^{(a)},s^{(b)}}(P_F(\bk)_{s^{(a)},s^{(b)}},\lambda)=0.
\end{equation}
When the external momentum is not restricted to the Fermi surface, the compensation between the two terms need not be exact, due to the non-locality of the proper self-energy $\hat\Sigma^r(k)$ and of the counter-term $\hat \nu^r(\bk)$. Then the renormalization is defined as:
\begin{equation}\label{rs2}
\hat\Sigma_{s^{(a)},s^{(b)}}^r((k_0,\bk),\hat\nu^{\le (r-1)},\lambda )+\hat\nu^{r}_{s^{(a)},s^{(b)}}(\bk,\lambda)=\hat R\hat\Sigma_{s^{(a)},s^{(b)}}^{r}((k_0,\bk),\hat\nu^{\le (r)},\lambda ),
\end{equation}
in which $\hat R\hat\Sigma_{s^{(a)},s^{(b)}}^{r}((k_0,\bk),\hat\nu^{\le (r)},\lambda )$ is the remainder term. By Formulas \eqref{rmd11}-\eqref{rmd13}, we know that the remainder term is bounded
by $\g^{-\delta^r}\Vert \hat\Sigma_{s^{(a)},s^{(b)}}^{r+1}(k,\hat\nu^{\le (r)},\lambda )\Vert_{L^\infty}$,
in which
\begin{equation}\label{rmd14}
\g^{-\delta^r}=\max \{\g^{-(r_e-r_r)}, \g^{-[(s_{j_e}^{(a)}+s_{j_e}^{(b)})-(s_{j_r}^{(a)}+s_{j_r}^{(b)})]}\}<1.\end{equation}
From the renormalization conditions \eqref{rs1} and \eqref{rs2} we have:
\begin{equation}\label{rmd15}
\Vert\hat\nu(\bk,\lambda)\Vert_{L^\infty}:=\sup_{\bk}|\hat\nu(\bk,\lambda)|\le \sup_{\bk}\sum_{r=0}^{r_{max}}|(1+\hat R)\hat\Sigma^r(\bk,\lambda)|\le 2\sup_{\bk}\sum_{r=0}^{r_{max}}|\hat\Sigma^r(\bk,\lambda)|.
\end{equation}
We have the following theorem concerning the bound for the counter-term:
\begin{theorem}\label{flownu}
There exists a positive constant $K$ independent of the scaling indices such that
the counter-term $\nu(\bk)$ satisfies the following bound:
\begin{equation}\label{bdbarenu}
\Vert\nu^{\le r_{max}}(\bk,\lambda)\Vert_{L^\infty}\le K|\lambda|,\ \forall\ \lambda\in{\cal R}_T.
\end{equation}
\end{theorem}
By \eqref{rmd15} we find that, in order to prove this theorem, it is enough to prove that there exists a positive constant $K$ such that $\Vert\hat\Sigma^{\le r_{max}}\Vert_{L^\infty}\le\frac{K|\lambda|}{2}$. This result will be proved in Section \ref{multiarch}.
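In other words (a one-line sketch of the implication), once this bound on $\hat\Sigma^{\le r_{max}}$ is established, \eqref{rmd15} immediately yields
\begin{equation*}
\Vert\hat\nu(\bk,\lambda)\Vert_{L^\infty}\le 2\sup_{\bk}\sum_{r=0}^{r_{max}}|\hat\Sigma^r(\bk,\lambda)|\le 2\cdot\frac{K|\lambda|}{2}=K|\lambda| .
\end{equation*}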
\begin{remark}
Remark that, in order to obtain the bounds for the $2$-point Schwinger functions, we have to consider not only the biped trees $\cG_\cB$, but also the full Gallavotti-Nicol\`o tree structure. However, since the contributions from the quadruped graphs and the convergent graphs are convergent and only constrain the analytic domain of the coupling constant, we can safely forget them and use only the fact that the analytic domain for the biped Schwinger functions can't be larger than the one for the quadruped Schwinger functions.
\end{remark}
\section{Construction of the Self-energy function}\label{multiarch}
In this section we shall establish the optimal upper bounds for the self-energy function and its derivatives w.r.t. the external momenta. The main tool for constructing the self-energy function is the multi-arch expansion for the determinant, from which we can obtain perturbation terms labeled by 1PI graphs between any {\it two external vertices} of a graph, starting from perturbation terms labeled by tree graphs, without generating any divergent combinatorial factor. In order to obtain the optimal bounds for the self-energy function, we still have to perform a second multi-arch expansion on top of the first one. The resulting graphs are then {\it two-particle irreducible} between the two external vertices, which means that one can't disconnect such a graph by deleting any two lines. Then by Menger's theorem (cf. e.g. \cite{Bon}), there exist {\it three} line-disjoint paths joining the two external vertices. Among the three paths, one can choose two line- and vertex-disjoint paths, from which we can obtain the optimal power counting. The union of the two paths is called a {\it ring} and the propagators in the ring are called the {\it ring propagators}. The optimal upper bound for the self-energy is then obtained by integrating out the ring propagators, together with a careful analysis of the corresponding scaling and sector indices.
This part follows closely \cite{AMR1}. Some technical details may be omitted if they can be found in \cite{AMR1} for a similar setting.
\subsection{The multi-arch expansions}
Let $\cT$ be a spanning tree connecting the $n+2$ vertices $\{y,z,x_1,\cdots,x_n\}$. The integrand of the connected two-point function labeled by $\cT$ is
\begin{equation}\label{main0}
F(\{C_\ell\}_\cT, \{C(f_i,g_j)\})=\Big[\prod_{\ell\in\cT}C_{\s(\ell)}(f(\ell),g(\ell))\Big]\ \det(\{C(f_i,g_j)\})_{left,\cT}.
\end{equation}
Let $P(y,z,\cT)$ be the unique path in $\cT$ such that $y$ and $z$ are the two ends of the path. Suppose that there are $p$ vertices in the path $P(y,z,\cT)$, $p\le n+2$; then we can label each vertex in the path by an integer, starting from the label $1$ for the coordinate $y$ and increasing towards $p$, which is the label for the coordinate $z$. Let $\mathfrak{B}_i$ be the branch in $\cT$ at the vertex $i$, $1\le i\le p$, which is defined as the
subtree in $\cT$ whose root is the vertex $i$. See Figure \ref{mtarch} for an illustration of a tree graph with $4$ branches rooted at the four vertices $y, x_1,x_2$ and $z$.
We fix two half-lines, each attached to one of the end vertices; the two half-lines are also called the external fields. Since each tree line of $\cT$ contracts $2$ fields, there remain $2(n+2)$ fields to be contracted from the determinant $\det_{left}$. We call these fields the {\it remaining fields}, and denote their set by $\mathfrak{F}_{left}$. A packet $\mathfrak{F}_i$ is defined as the set of the remaining fields restricted to the branch $\mathfrak{B}_i$. By definition we have $\mathfrak{F}_i\cap\mathfrak{F}_j=\emptyset$ for $i\neq j$, and $\mathfrak{F}_{left,\cT}=\mathfrak{F}_1\sqcup\cdots\sqcup\mathfrak{F}_p$, in which $\sqcup$ means disjoint union. Among all pairs of fields and anti-fields, we test whether there is a contraction between an element of ${\mathfrak{F}}_1$ and an element of $\sqcup_{k=2}^{p}{\mathfrak{F}}_k$, through an explicit Taylor expansion with interpolating parameter $s_1$, as follows. Let $\{C(f_i,g_j)\}$ be the elements of the remaining determinant for the loop lines $\{(f_i,g_j)\}$, and define \begin{eqnarray}
C(f_i,g_j)(s_1)&:=&s_1C(f_i,g_j)\quad {\rm if}\ f_{i}\in {\mathfrak{F}}_1,\ g_{j}\notin {\mathfrak{F}}_1,\\
&:=&C(f_i,g_j)\quad\quad {\rm otherwise},
\end{eqnarray}
we have
\begin{eqnarray}
&&\det(\{C(f_i,g_j)\})_{left,\cT}=\det(\{C(f_i,g_j)(s_1)\})_{left,\cT}\ \big|_{s_1=1}\\
&&=\det(\{C(f_i,g_j)(s_1)\})_{left,\cT}\ \big|_{s_1=0}
+\int_0^1 ds_1\frac{d}{ds_1}\det(\{C(f_i,g_j)(s_1)\})_{left,\cT}\nonumber,
\end{eqnarray}
in which the first term means that there is no loop line connecting ${\mathfrak{F}}_1$ to its complement, and the second term means that there is a contraction between a half-line $f_1\in{\mathfrak{F}}_1$ and a half-line $g_1\in{\mathfrak{F}}_{k_1}$, with $2\le k_1\le p$. Graphically this means that we add to $\cT$ an explicit line $\ell_1=(f_1, g_1)$, which joins the packet ${\mathfrak{F}}_1$ to ${\mathfrak{F}}_{k_1}$. The newly added line $\ell_1$ is called a loop line or an arch. We also call ${\mathfrak{F}}_1$ the starting packet of the contraction, and the index $1$ is called the starting index of $\ell_1$. Similarly ${\mathfrak{F}}_{k_1}$ is called the arriving packet of the contraction and $k_1$ is called the arriving index of $\ell_1$. These definitions generalize to an arbitrary contraction between pairs of packets and the associated arches. The new graph $\cT\cup\{\ell_1\}$ is 1PI between the vertices $y$ and $x_{k_1}$. If $k_1=p$, then the whole 1PI graph is generated and we are done. Otherwise we test whether there is a contraction between an element of $\sqcup_{k=1}^{k_1}{\mathfrak{F}}_k$ and its complement, by introducing a second interpolation parameter $s_2$ into the propagator. Define the interpolated propagator as:
\begin{eqnarray}\label{mul1}
C(f_i,g_j)(s_1, s_2)&:=&s_2C(f_i,g_j)(s_1)\quad {\rm if}\quad f_{i}\in\sqcup_1^{k_1} {\mathfrak{F}}_{k},\ g_{j}\in\sqcup_{k_1+1}^{p}{\mathfrak{F}}_k\ ,\\
&:=&C(f_i,g_j)(s_1)\quad\quad {\rm otherwise}\ .
\end{eqnarray}
We have:
\begin{eqnarray}
&&\det(\{C(f_i,g_j)(s_1)\})_{1,left,\cT}=\det(\{C(f_i,g_j)(s_1,s_2)\})_{1,left,\cT}\,\big|_{s_2=1}\nonumber\\
&&=\det(\{C(f_i,g_j)(s_1,s_2)\})_{1,left,\cT}\,\big|_{s_2=0}\nonumber\\
&&\quad\quad\quad+\int_0^1 ds_2\frac{d}{ds_2}\det(\{C(f_i,g_j)(s_1,s_2)\})_{1,left,\cT}.
\end{eqnarray}
Again, the first term means that the block $\sqcup_{i=1}^{k_1}{\mathfrak{F}}_{k}$ is not linked to its complement by any arch. The second term can be written as
\begin{eqnarray}\label{link1}
&&\int_0^1 ds_2\frac{d}{ds_2}\det(\{C(f_i,g_j)(s_1,s_2)\})_{1,left,\cT}\\
&&\quad=
\int_0^1 ds_2\ \frac{\partial}{\partial s_2}C(f_{i_1},g_{j_1})(s_1,s_2)\cdot \frac{\partial}{\partial C(f_{i_1},g_{j_1})}\det(\{C(f_i,g_j)(s_1,s_2)\})_{1,left,\cT},\nonumber
\end{eqnarray}
in which $f_{i_1}\in \sqcup_1^{k_1} {\mathfrak{F}}_{k}$, $g_{j_1}\in\sqcup_{k_1+1}^{p}{\mathfrak{F}}_k$, and we have
\begin{eqnarray}\label{rprop0}
\frac{\partial}{\partial s_2}C(f_{i_1},g_{j_1})(s_1,s_2)&=&C(f_{i_1},g_{j_1})\quad\quad{\rm if}\quad f_{i_1}\in\sqcup_{k=2}^{k_1}{\mathfrak{F}}_{k}\nonumber\\
&=&s_1C(f_{i_1},g_{j_1})\quad{\rm if}\quad f_{i_1}\in{\mathfrak{F}}_{1},
\end{eqnarray}
whose graphical meaning is that there is a contraction between the loop fields $f_{i_1}$ and $g_{j_1}$; hence we add to $\cT$ another line $\ell_2=(f_{i_1},g_{j_1})$ joining the two packets. The new graph $\cT\cup\{\ell_1,\ell_2\}$ is 1PI between the vertices $y$ and $x_{k_2}$. Similarly, for an arch ending at the packet ${\mathfrak{F}}_{k_u}$, the corresponding interpolated propagator is defined as
\begin{equation}\label{rprop}
C(f_{u}, g_u)(s_1,\cdots, s_u)=s_uC(f_u,g_u)(s_1,\cdots, s_{u-1}).
\end{equation}
We continue this interpolation process until the graph becomes 1PI in the $y$-$z$ channel.
Suppose that we have added $m$ arches to the tree to form a 1PI graph; then the set of arches
\begin{eqnarray}
&&\Big\{ \ell_1=(f_1,g_1),\cdots,\ell_m=(f_m,g_m)\ \Big|\ f_1\in{\mathfrak{F}}_1, g_1\in{\mathfrak{F}}_{k_1};\ f_2\in\sqcup_{u=1}^{k_1}{\mathfrak{F}}_u, g_2\in\sqcup_{u=k_{1}+1}^{k_2}{{\mathfrak{F}}_u};\nonumber\\
&&\quad\quad\quad \cdots ;\
f_m\in\sqcup_{u=1}^{k_{m-1}}{\mathfrak{F}}_u,\ g_m\in{\mathfrak{F}}_{k_m}={\mathfrak{F}}_{p};\ k_1\le\cdots\le k_m;\ m\le p\ \Big\}.
\end{eqnarray}
is called an {\it $m$-arch system}.
\begin{figure}[htp]
\centering
\includegraphics[width=.8\textwidth]{multiarch2.pdf}
\caption{\label{mtarch}
An illustration of the multi-arch expansion. The graph on the l.h.s. is a tree graph with $4$ branches, in which the dashed lines are the half-lines, corresponding to the unexpanded fields in the determinant. The graph on the r.h.s. is the one-particle irreducible graph constructed from the tree graph, by extracting two arches (the dotted lines) from the determinant.
\end{figure}
Finally, we have the following expression for the determinant:
\begin{eqnarray}\label{main1}
&&\det\big(\{C(f_i,g_j)\}\big)_{left,\cT}=\sum_{\substack{{\rm m-arch-systems}\\(f_1,g_1),\cdots, (f_m,g_m)\\ {\rm with}\ m\le p}}\ \int_0^1 ds_1\cdots \int_0^1 ds_m \nonumber\\
&&\Bigg[\frac{\partial}{\partial{s_1}}C(f_1,g_1)(s_1)\cdot\frac{\partial}{\partial{s_2}}C(f_2,g_2)(s_1,s_2)\cdots\frac{\partial}{\partial{s_m}}C(f_m,g_m)(s_1,s_2,\cdots,s_m)\cdot\nonumber\\
&&\quad\quad\cdot\frac{\partial^m {\det}_{left,\cT}}{\prod_{u=1}^m \partial C(f_u, g_u)}\Big(\{s_u\}\Big)\Bigg]\ ,
\end{eqnarray}
where the sum runs over all the $m$-arch systems with $p$ vertices. It is useful to have a more explicit expression for the second line of the above formula. We have:
\begin{proposition}\label{prodpg}
Let $\ell_u$ be a loop line in an $m$-arch system introduced above. Let $q_u$ be the number of loop lines that fly over $\ell_u$, namely those loop lines whose starting indices are smaller than or equal to that of $\ell_u$ while whose arriving indices are greater than that of $\ell_u$. Let
$\prod_{u=1}^m C(f_u,g_u)(s_1,\cdots, s_{u-1})$ be the compact form corresponding to the second line of formula
\eqref{main1}, then we have
\begin{eqnarray}\label{indu0}
&&\prod_{u=1}^m C(f_u,g_u)(s_1,\cdots, s_{u-1}):=\prod_{u=1}^m\partial_{s_u}C(f_u,g_u)(s_1,\cdots,s_u)\nonumber\\
&&\quad=\big[\ \prod_{u=1}^m C(f_u,g_u)\ \big]\cdot \big[\ \prod_{u=1}^m
s_u^{q_u}\ \big]\ .
\end{eqnarray}
\end{proposition}
\begin{proof}
This proposition can be proved by induction, using the definition of the interpolated propagators (cf. \eqref{rprop}, \eqref{rprop0}). Indeed, if the indices of the successive loop lines in an $m$-arch system are strictly increasing, then no interpolation parameter appears in the product
$\prod_{u=1}^m\partial_{s_u}C(f_u,g_u)(s_1,\cdots,s_u)$; a factor $s_u^{q_u}$ is generated when there are exactly $q_u$ loop lines which completely fly over $\ell_u$. This proves the proposition.
\end{proof}
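As a simple illustration of Proposition \ref{prodpg} (a toy case, not needed in the sequel), take $m=2$ with $\ell_1$ starting at the index $1$ and arriving at the index $2$, and $\ell_2$ starting at $1$ and arriving at $4$, so that $\ell_2$ flies over $\ell_1$ and $q_1=1$, $q_2=0$. Then $C(\ell_1)(s_1)=s_1C(\ell_1)$ and $C(\ell_2)(s_1,s_2)=s_1s_2\,C(\ell_2)$, so that
\begin{equation*}
\partial_{s_1}C(\ell_1)(s_1)\cdot\partial_{s_2}C(\ell_2)(s_1,s_2)=C(\ell_1)\,C(\ell_2)\cdot s_1=\big[C(\ell_1)C(\ell_2)\big]\,s_1^{q_1}s_2^{q_2},
\end{equation*}
in agreement with \eqref{indu0}.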
Remark that, since each interpolation is performed between a subset of packets and its complement, the final interpolated covariance is a convex combination of block-diagonal covariances with positive coefficients. Hence the remaining matrix in \eqref{main1} is still positive and its determinant can be bounded by the Gram inequality.
Now we prove that no factorials are generated in the multi-arch expansions, which is not trivial: while the arriving index of successive arches is strictly increasing, a factorial might be generated when choosing the departure fields once the arriving field is fixed. However, it has been proved in \cite{DR1,AMR1} that this factorial is compensated by the integration over the interpolation parameters. We don't repeat the proof here but only collect some basic notions and results of \cite{DR1,AMR1}, for the reader's convenience. First of all, let us introduce some more notations concerning the multi-arch graphs.
\begin{definition}
Let $\cL_n=\{\ell_1,\cdots,\ell_n\}$ be a set of $n$ loop lines, $n\le m$, in an $m$-arch system such that the arriving indices of these lines are put in increasing order. The set $\cL_n$ is said to form a nesting system if the starting index of the last line $\ell_n$ is the lowest one among all the starting indices of the loop lines in $\cL_n$. In this case the loop lines $\{\ell_1,\cdots,\ell_{n-1}\}$ are said to be useless, in that the graph remains
1PI if these lines are deleted.
\end{definition}
By Proposition \ref{prodpg}, only the useless loop lines contribute to the interpolating factors in \eqref{indu0}. Let the number of loop lines that completely fly over a loop line $\ell_u$ be $q_u$; then the sum over all $m$-arch systems over $p$ packets, which may result in combinatorial factors, should be weighted by the integral $\Big(\prod_{u=1}^m \int_0^1 ds_u\Big)\
\prod_{u=1}^m s_u^{q_{u}}$. We have:
\begin{lemma}[cf. \cite{AMR1}, Lemma VI.1]
Let $n\ge1$ be the number of vertices of the 1PI graph formed in the $m$-arch expansions. There exist numerical constants $c_2$ and $K$, which are independent of the scaling indices, such that:
\begin{equation}
\sum_{m=1}^p\sum_{\substack{{\rm m-arch-systems}\\(f_1,g_1),\cdots, (f_m,g_m)\\ {\rm with}\ m\le p}}\ \Big(\prod_{u=1}^m \int_0^1 ds_u\Big)\
\prod_{u=1}^m s_u^{q_{u}}\le c_2 K^n.
\end{equation}
\end{lemma}
Remark that one can always optimize an $m$-arch system by choosing a {\it minimal} $m$-arch subsystem, which contains the minimal number of loop lines and no useless arches; this doesn't change the combinatorial properties. So we always assume that the $m$-arch systems generated are minimal.
\begin{lemma}
The amplitude of the self-energy is given by:
\begin{eqnarray}\label{sfe}
&&\Sigma^{\le r_{max}} (y,z)= \sum_{n=2}^\infty\sum_{n_1+n_2+n_3=n} \frac{\lambda^{n_1}\delta\mu^{n_2}\hat\nu^{n_3}}{n_1!n_2!n_3!} \int_{\Lambda^n} d^3x_1 ... d^3x_n\sum_{\{ \underline\t \}}\sum_{\cG_\cB} \sum_{\substack{\text{external fields}\\\mathcal{EB}}
}\nonumber\\
&&\sum_{\text{spanning trees} \mathcal{T} }\sum_{\{ \sigma \}}
\sum_{\substack{{m-{\rm arch\ systems}} \\ \big( (f_1,g_1),...,(f_m,g_m))\big) \\
{\rm with} \ m \leq p }}
\left( \prod_{\ell \in \mathcal{T}} \int_0^1 dw_\ell \right) \left( \prod_{r = 1}^m \int_0^1 ds_r \right)
\left( \prod_{\ell \in \mathcal{T}} C_{\sigma(\ell)} (f_\ell,g_\ell)\right)\nonumber
\\
&&\quad\left( \prod_{u=1}^m C(f_u,g_u) (s_1,...,s_{u-1})\right)
\frac{\partial^m \det_{\text{left}, \mathcal{T}}}{\prod_{u=1}^m \partial C(f_u,g_u)}
\big( \{ w_\ell\}, \{ s_u\}\big) \ .
\end{eqnarray}
\end{lemma}
\begin{remark}
Following Remark \ref{ctbd}, in the rest of this section we simply replace all the counter-terms by constants, except in the parts concerning the renormalizations.
\end{remark}
In order to obtain the optimal bounds for the self-energy, we introduce a second multi-arch expansion, which completes the 1PI graphs into 2PI graphs (graphs that remain connected after deleting any two lines) which are also one-vertex irreducible (graphs that remain connected after deleting any one vertex). We have:
\begin{lemma}
The amplitude for the 2PI biped graphs reads
\begin{eqnarray}
&&\Sigma (y,z)=\sum_{n=2}^\infty \frac{\lambda^{n}}{n!} \int_{(\Lambda_\b)^n} d^3x_1 ... d^3x_n
\sum_{\{ \underline\t \}} \sum_{\substack{\text{biped structures}\\ \mathcal{B}}
}\sum_{ \mathcal{EB}}\nonumber\\
&&\quad\quad\sum_{\cG_\cB} \sum_\mathcal{T} \sum_{\{\sigma \}}\sum_{\substack{ m-{\rm arch\ systems}\\ \bigl( (f_1,g_1), ... (f_m,g_m) \bigr)}}
\sum_{\substack{ m'-{\rm arch\ systems}\\ \bigl( (f'_1,g'_1), ... (f'_{m'},g'_{m'}) \bigr)}}
\nonumber\\
&&\quad\quad\left( \prod_{\ell \in \mathcal{T}} \int_0^1 dw_\ell \right) \left( \prod_{\ell \in \mathcal{T}} C_{\sigma(\ell)} (f_\ell,g_\ell)\right)\left( \prod_{u=1}^m \int_0^1 ds_u \right)\left( \prod_{u'=1}^{m' }\int_0^1 ds'_{u'} \right)\nonumber \\
&&\quad\quad\left( \prod_{u=1}^m C(f_u,g_u) (s_1,...,s_{u-1})\right) \left( \prod_{u' = 1}^{m'} C({f'}_{u'},{g'}_{u'}) (s'_1,...,s'_{u'-1})\right)\nonumber\\
&&\quad\quad\frac{\partial^{m+m'} \det_{\text{left}, \mathcal{T}}}{\prod_{u=1}^m \partial C(f_u,g_u)\prod_{u'=1}^{m'} \partial C(f'_{u'},g'_{u'}) }\big( \{ w_\ell\}, \{ s_u\} , \{ s'_{u'} \} \big),
\end{eqnarray}
where we have summed over all the first multi-arch systems with $m$ loop lines and
the second multi-arch systems with $m'$ loop lines. The underlying graphs are two-line irreducible as well as one-vertex irreducible.
\end{lemma}
\begin{proof}
The construction of the 2PI perturbation series from the 1PI one is analogous to the construction of the 1PI terms from the connected ones, except that the total ordering in the tree graph is lost; one has to consider the {\it partial ordering} of the various branches. This construction has been discussed in great detail in \cite{AMR1}, so we don't repeat it here. Remark that the second multi-arch expansion again respects positivity of the interpolated propagator at any stage, so that the remaining determinant still obeys the Gram inequality.
\end{proof}
\subsection{The ring sectors and the power-counting}
Let $G=\cT\cup\cL$ be a 2PI graph generated by the two-level multi-arch expansions. Menger's theorem ensures that any such graph $G$ has three line-disjoint paths and two internally vertex-disjoint paths joining the two external vertices of $G$ \cite{AMR1}. Since different propagators in these paths may have different scaling properties, in order to obtain the optimal bounds for the self-energy we need to choose the optimal integration paths, from which we obtain the best convergent scaling factors. The optimal paths form what is called the ring structure. See Figure \ref{rin} for an illustration.
\begin{definition}
A ring $R$ is a set of two paths $P_{R,1}$, $P_{R,2}$ in $\cL\cup\cT$ connecting the two vertices $y$ and $z$ and satisfying the following conditions. Firstly, the two paths in $R$ don't intersect, either on lines or on vertices, except at the two external vertices $y$ and $z$. Secondly, for any node $b$ in the biped tree $\cG_\cB$, at least two external fields of $b$ are not contained in the ring.
\end{definition}
\begin{figure}[htp]
\centering
\includegraphics[width=.42\textwidth]{ring.pdf}
\caption{\label{rin}
A ring structure in a $2$PI graph which connects the two end vertices $y$ and $z$. The thick lines are the ring propagators and the dashed lines form the third path in the $2$PI graph.
\end{figure}
We define the scaling indices and the sector indices for the ring propagators as follows. Let $P_{R,1}$ and $P_{R,2}$ be the two paths in the ring and let $k$ label the propagators in the two paths. Let $r_T$ be the first scale at which $y$ and $z$ fall into a common connected component in the GN tree, and let $r_R$ be the first scale at which the ring connects $y$ and $z$, called the scaling index of the ring propagators. We have:
\begin{equation}
r_R=\min_{j=1,2}\max_{k\in P_{R,j}}r(k),
\end{equation}
and $r_T\le r_R\le r_{max}$. In the same way, we can define the sector indices for the ring propagators:
\begin{equation}
s_{+,R}:=\min_{j=1,2}s_{+,R,j},\quad s_{+,R,j}:=\max_{k\in P_{R,j}}s_+(k),\ \ {\rm and}\ \ s_{-,R}:=\min_{j=1,2}s_{-,R,j},\quad s_{-,R,j}:=\max_{k\in P_{R,j}}s_-(k),
\end{equation}
which are greater than the sector indices of the tree propagators. The corresponding sectors are called the {\it ring} sectors.
We can also define the scaling index $j_{R,\cT}$ for the ring propagators, as follows. Let $j_\cT=\max_{k\in P(y,z,\cT)}j_k$, in which $P(y,z,\cT)$ is the unique path in $\cT$ connecting $y$ and $z$, and let
$j_R=\min_{j=1,2}\max_{k\in P_{R,j}}j(k)$; then define $j_{R,\cT}=\min\{j_R, j_\cT\}$. Finally define
\begin{equation}
r_{R,\cT}=\frac{j_{R,\cT}+s_{+,R}+s_{-,R}}{2},
\end{equation}
which is the $r$-index for the ring sectors.
With all these preparations, we can prove the following upper bound for the self-energy function:
\begin{theorem}\label{maina}
Let $\Sigma_2(y,z)^{\le r}$ be the biped self-energy function with root scaling index $r$, and let the corresponding Gallavotti-Nicol\`o tree be denoted by $\cG_r$. Let $\lambda\in{\cal R}_T$ and let $\sigma_{\cG_r}$ be the set of sector indices compatible with the Gallavotti-Nicol\`o tree $\cG_r$. Then there exists a constant $O(1)$, independent of the scaling index, such that:
\begin{equation}\label{x2pta}
|\Sigma_2(y,z)^{\le r}|\le \lambda^2 r^2\sup_{\sigma\in\sigma_\cG} O(1)\g^{-3r(\sigma)}e^{-c[d_{j,\s}(y,z)]^\a},
\end{equation}
\begin{equation}\label{x2pt}
|z^{(a)}-y^{(a)}||z^{(b)}-y^{(b)}||\Sigma_2(y,z)^{\le r}|\le \lambda^2 r^2\sup_{\sigma\in\sigma_\cG} O(1)\g^{-2r(\sigma)}e^{-c[d_{j,\s}(y,z)]^\a},
\end{equation}
\begin{equation}\label{x2ptc}
|y_0-z_0||\Sigma_2(y,z)^{\le r}|\le \lambda^2 r^2\sup_{\sigma\in\sigma_\cG} O(1)\g^{-2r(\sigma)}e^{-c[d_{j,\s}(y,z)]^\a},
\end{equation}
where $[d_{j,\s}(y,z)]^\a$ (cf. \eqref{dist0}) characterizes the decay behavior of the propagator in position space. The upper bounds presented in \eqref{x2pta}, \eqref{x2pt} and \eqref{x2ptc} are optimal.
\end{theorem}
Before proving this theorem, we consider the following lemma concerning sector counting for the biped graphs.
\begin{lemma}[Sector counting lemma for bipeds]\label{secbi}
Let $b_r$ be a 2PI biped with root scaling index $r\in[1, r_{max}]$ which contains $n+2$ vertices. There exists a positive constant $C_1$, independent of the scaling index $r$, such that the summation over all the sector indices is bounded by $C_1^{n+2}r^{2n+1}$.
\end{lemma}
\begin{proof}
Let $b_r$ be a 2PI biped and let $\cT_b=b_{r}\cap\cT$ be the set of tree lines in $b_r$. The two external fields are fixed; they have identical sector indices, by conservation of momentum. We choose a root field at each vertex, and among all the root fields we choose the one with maximal scaling index as the root field for the whole biped. By conservation of momentum there are at most $2n+1$ independent sector sums to perform. Since the summation over each pair of sector indices is bounded by $r$, the total summation is bounded by $C_1^{n+2} r^{2n+1}$, for some positive constant $C_1$. This concludes the lemma.
\end{proof}
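As a minimal check of this lemma, for the lowest order 2PI biped (the sunshine graph, $n=0$) there is $2n+1=1$ independent sector sum, bounded by $r$; combined with the factor $r$ coming from the sum over the root scale $\sum_{r'=0}^{r}$ in \eqref{finalsum} below, this reproduces the overall factor $r^2$ appearing in Theorem \ref{maina}.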
\begin{proof}[Proof of Theorem \ref{maina}]
A version of this theorem in a similar setting has been proved in \cite{AMR1}, Section $VIII$. We only sketch the main idea of the proof and refer the interested reader to \cite{AMR1} for more details.
First of all, by integrating out the weakening factors for the tree expansions and multi-arch expansions, we can write the amplitude of the self-energy as:
\begin{eqnarray}
|\Sigma^{\le r}(y,z)|\le\sum_{n=2}^\infty\frac{(K\lambda)^{n}}{n!}\sum_{\{\underline\tau\}}\sum_{\cG_{\cB}}
\sum_{ \mathcal{EB}} \sum_{ \mathcal{T} } \sum_{R}\sum_{\{ \sigma \}}\prod_p\chi_p(\sigma)\cI_{1,n}(y,z)\cI_{2,n}(y,z,x_{p,\pm}),
\end{eqnarray}
in which $K$ is some positive constant,
\begin{equation}
\cI_{2,n}(y,z,x_{p,\pm})=\int\prod_{v\in R,v\neq y,z} dx_{v,0}\prod_{v\notin R}d^3x_v
\prod_{f\notin R}\g^{-r_f/2-l_f/4}\prod_{p\in \cL}e^{-\frac{c}{2}[d_{j,\s(p)}(y,z)]^\a}
\end{equation}
is the factor in which we keep the positions $y$ and $z$ and the spatial positions of the ring vertices $x_{p,\pm}$ fixed and integrate out all the remaining positions (cf. Formula $VIII.86$ of \cite{AMR1}). A fraction (one half) of the decay factor from every loop line in $\cL$ and from the remaining determinant has also been put here to compensate possible divergences from the integrations. And the factor
\begin{equation}
\cI_{1,n}=\int\prod_{i=1}^p dx_{i,+}dx_{i,-}\prod_{k\in R}\g^{-(r+l/2)(k)}\prod_{p\in\cL}e^{-\frac{c}{2}[d_{j,\s(p)}(y,z)]^\a},
\end{equation}
in which $x_i$ are the internal vertices other than $y$ and $z$ in the ring, contains the remaining terms and integrations. Then, using techniques similar to those of \cite{RW1}, one can prove (cf. Lemmas $VIII.1$ and $VIII.2$ of \cite{AMR1}) that:
\begin{equation}
\cI_{2,n}\le K_1^n \g^{-j_{\cT}}
\end{equation}
and
\begin{equation}
\cI_{1,n}\le K_2^p \g^{-s_{+,R,1}-s_{+,R,2}-s_{-,R,1}-s_{-,R,2}}e^{-\frac{c}{4}[d_{j,\s(p)}(y,z)]^\a}.
\end{equation}
Combining these two factors and summing over the tree structures $\cT$, the ring structures $R$ as well as the GN trees, we have
\begin{equation}
|\Sigma^{\le r}(y,z)|\le \sum_n \sum_{r'=0}^r\sum_{\{\sigma\}} C^n\lambda^{n} \g^{-j_{\cT}}\g^{-s_{+,R,1}-s_{+,R,2}-s_{-,R,1}-s_{-,R,2}}e^{-c'[d_{j,\s(p)}(y,z)]^\a},
\end{equation}
for some positive constant $C$. For the sector indices in the exponential, we have:
\begin{equation}
\g^{-j_{\cT}}\g^{-s_{+,R,1}-s_{+,R,2}-s_{-,R,1}-s_{-,R,2}}\le\g^{-\frac32(s_{+,R}+s_{-,R}+j_{R,\cT})}\le \g^{-3r_{R,\cT}}.
\end{equation}
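This inequality can be justified as follows (a short sketch). By definition $s_{+,R,1}+s_{+,R,2}\ge 2s_{+,R}$, similarly for $s_{-}$, and $j_\cT\ge j_{R,\cT}$; moreover the sector indices satisfy $s_{+,R}+s_{-,R}\ge j_{R,\cT}$ (cf. the relations $j+s^{(a)}+s^{(b)}=2r$ and $s^{(a)}+s^{(b)}\ge r$ recalled in Section \ref{secflow}). Hence
\begin{equation*}
j_\cT+\sum_{j=1,2}\big(s_{+,R,j}+s_{-,R,j}\big)\ \ge\ j_{R,\cT}+2\big(s_{+,R}+s_{-,R}\big)\ \ge\ \frac32\big(j_{R,\cT}+s_{+,R}+s_{-,R}\big)=3r_{R,\cT}.
\end{equation*}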
Now we consider the summation over the sector indices. Let $n=N+2$; using Lemma \ref{secbi}, we obtain
\begin{eqnarray}\label{finalsum}
\sum_{N=0}^{\infty} \sum_{r'=0}^r\sum_{\{\sigma\}} C^{N+2}\lambda^{N+2} \le
\sum_{N=0}^{\infty} \sum_{r'=0}^rC^{N+2}C_1^{N}\lambda^{N+2} r'^{2N+1}\le
\sum_{N=0}^{\infty} C_2^N(\lambda r^2)^N \lambda^2r^2,
\end{eqnarray}
for some positive constant $C_2$ depending on $C$ and $C_1$.
Since $|\lambda r^2|\le |\lambda\log^2 T|\le c$ for $\lambda\in{\cal R}_T$, the summation over $N$
in \eqref{finalsum} is convergent provided that $c\cdot C_2<1$; this can always be ensured by taking $c$ small enough.
So we obtain:
\begin{eqnarray}\label{taylorm1}
|\Sigma^{\le r}(y,z)|&\le& \sum_{N=0}^{\infty} C_2^N(\lambda r^2)^N \lambda^2r^2\sup_{\sigma\in\sigma_\cG}\g^{-3r(\sigma)}e^{-c'[d_{j,\s}(y,z)]^\a}\\
&\le&
K_3\lambda^2 r^2\sup_{\sigma\in\sigma_\cG}\g^{-3r(\sigma)}e^{-c'[d_{j,\s}(y,z)]^\a},\nonumber
\end{eqnarray}
for certain positive constants $K_3$ and $c'$. Since, by the choice of the ring structure, the convergent factors obtained from the integrations over the spatial coordinates in $\cI_{1,n}$ and $\cI_{2,n}$ are optimal, the upper bound obtained for the self-energy is optimal.
Following exactly the same analysis for the derivatives of the self-energy (see also \cite{AMR1}, pages 437-442), we can prove Formulas \eqref{x2pt} and \eqref{x2ptc}. Hence we conclude Theorem \ref{maina}.
\end{proof}
We can also formulate Theorem \ref{maina} in Fourier space:
\begin{theorem}[Bounds for the self-energy in the momentum space.]\label{mainb}
Let $\hat\Si^{r}(\lambda,q)$ be the self-energy for a biped of scaling index $r$ in the momentum space representation and $\lambda\in{\cal R}}\def\cL{{\cal L}} \def\JJ{{\cal J}} \def\OO{{\cal O}_T$. There exists a positive constant $K$, which depends on the model but is independent of the scaling index, such that:
\begin{equation} \sup_q|\hat\Si^{r}(\lambda,q)|\le K\lambda^2 r^2\g^{-r},\label{spa}
\end{equation}
\begin{equation} \sup_q\vert \frac{\partial}{\partial q_\mu } \hat\Si^{ r} (\lambda,q) \vert\le K\lambda^2 r^2,\label{spb}
\end{equation}
\begin{equation}\label{spc} \sup_q| \frac{\partial^2}{\partial q_\mu \partial q_\nu} \hat\Si^{r} (\lambda,q) | \le K\lambda^2 r^2 \g^{r}.
\end{equation}
These upper bounds are optimal.
\end{theorem}
This theorem states that the self-energy is uniformly $C^1$ in the external momentum for $|\lambda|<c/|\log T|^2$, for some positive constant $c<1$; this analyticity domain is smaller than the one required by Salmhofer's criterion, which is $|\lambda|<c_1/|\log T|$. Moreover, for $r=r_{max}$, with $\g^{r_{max}}\sim \frac1T$, there exists some positive constant $C$, independent of the temperature, such that
\begin{equation}\sup_q|\frac{\partial^2}{\partial q_\mu \partial q_\nu} \hat\Si^{r} (\lambda,q) |\le\frac{C}{T},\end{equation}
which is not uniform in $T$. Since the upper bound we obtained is optimal, this suggests that Salmhofer's criterion is violated and the ground state is not a Fermi liquid. In a companion paper \cite{RW2} we shall establish the following lower bound:
\begin{equation}\label{lbd}\sup_q|\frac{\partial^2}{\partial q_\mu \partial q_\nu} \hat\Si^{r} (\lambda,q) |\ge\frac{C'}{T},\end{equation}
for some positive constant $C'$ that depends on the model but is independent of $T$. With this lower bound we may conclude that the ground state of this model is not a Fermi liquid.
\begin{remark}\label{rmkmain}
The self-energy function $\hat\Sigma(q,\lambda)$ is an analytic function for $\lambda\in{\cal R}_T=\{\lambda\ \vert\ |\lambda|\log^2T\le c\}$. We can always choose $c$ such that $\hat\Sigma(q,\lambda)$ can be Taylor expanded into the following convergent perturbation series:
\begin{equation}\label{taylor2}
|\hat\Sigma(q,\lambda)|\le\sum_{n=2}^\infty |\lambda|^n \tilde K^n,\ {\rm with}\ \lambda\in{\cal R}_T,\ |\lambda \tilde K|<0.1. \nonumber
\end{equation}
Then from elementary analysis (summing the geometric series: $\sum_{n=3}^\infty |\lambda\tilde K|^n=\frac{|\lambda\tilde K|^3}{1-|\lambda\tilde K|}\le|\lambda\tilde K|^2$ for $|\lambda\tilde K|<0.1$) we know that
\begin{equation}
\sum_{n=3}^\infty |\lambda|^n \tilde K^n\le \lambda^2\tilde K^2,
\end{equation}
in which $\lambda^2\tilde K^2$ is the amplitude of the lowest order perturbation term (corresponding to $n=2$), namely the amplitude of the sunshine graph. If we want to study the upper bound or the lower bound for the self-energy and its derivatives, it is therefore enough to study the corresponding quantities for the sunshine graph.
\end{remark}
\begin{proof}[Proof of Theorem \ref{mainb}]
Remark that the expressions for the self-energy function in \eqref{x2pta} and \eqref{spa} are simply Fourier dual to each other, so we just need to show that
formulas \eqref{spa}-\eqref{spc} can be derived from \eqref{x2pta}.
Let $\hat\Sigma^r(q)$ be the Fourier transform of $\Sigma^r(y,z)$. Then
\begin{equation}
\sup_q|\hat\Sigma^r(q)|=\sup_q\Big|\int d^3y\ \Sigma^r(y,z)\, e^{-iq(y-z)}\Big|\le C_1\g^{2r}\sup_{y,z}\Big(e^{c[d_{j,\s}(y,z)]^\a}|\Sigma^r(y,z)|\Big),
\end{equation}
for some positive constant $C_1$, since the integration of the decay factor $e^{-c[d_{j,\s}(y,z)]^\a}$ over $y$ at scale $r$ is bounded by $C_1\g^{2r}$. Since, by \eqref{x2pta},
\begin{equation}
|\Sigma^r(y,z)|\le C\lambda^2 r^2 \g^{-3r}e^{-c[d_{j,\s}(y,z)]^\a},
\end{equation}
we have
\begin{equation}
\sup_q|\hat\Sigma^r(q)|\le C'\lambda^2 r^2 \g^{-r}.
\end{equation}
So we have obtained \eqref{spa}. Since the momentum $q$ is bounded by $\g^{-r}$ at scaling index $r$, each differentiation w.r.t. $q$ costs at most a factor $\g^{r}$; hence the first order derivative of the r.h.s. of \eqref{spa} gives the bound in \eqref{spb} and the second order derivative gives the bound in \eqref{spc}.
The fact that these bounds are optimal also follows from the fact that the bounds in \eqref{x2pta}-\eqref{x2ptc} are optimal.
\end{proof}
It is important to remember that what we have proved are the upper bounds for $\hat\Sigma_{11}$ (cf. Formulas \eqref{rncd01}, \eqref{rncd02} and Remark \ref{rmkindex}). By Remark \ref{rmkindex} the other matrix elements of $[\Sigma(y,z)]_{\a\a'}$ or $[\hat\Sigma(k)]_{\a\a'}$, $\a,\a'=1,2$, satisfy exactly the same upper bounds as $\Sigma(y,z)$ and $\hat\Sigma(\bk)$.
As a corollary of Theorem \ref{mainb}, we have the following result:
\begin{theorem}\label{mainc}
There exists a constant $C$ which may depend on the model but is independent of the scaling index $r$ and of $\lambda$, such that the counter-term $\nu^{\le r}(\bk,\lambda)$ satisfies the following bound:
\begin{equation}\label{cte1}
\sup_{\bk}|\nu^{\le r}(\bk,\lambda)|\le C\lambda^2 r^2\g^{-r},
\end{equation}
hence
\begin{equation}\label{cte2}
\sup_{\bk}|\nu(\bk,\lambda)|=\sup_{\bk}|\nu^{\le r_{max}}(\bk,\lambda)| \le C'|\log T|^2\lambda^2,
\end{equation}
where $C'$ is another positive constant independent of $\lambda$ and $T$.
\end{theorem}
\begin{proof}
By the renormalization conditions we have
$\hat\nu^{r}(\bk,\lambda)=-\tau\hat\Sigma^{r}(k_0,P_F(\bk))$, hence \begin{equation}\sup_\bk|\hat\nu^{r}(\bk,\lambda)|\le \sup_k|\hat\Sigma^r(k)|.\end{equation}
Then we can prove this theorem using the bounds for the self-energy (cf. Equation \eqref{spa}).
\end{proof}
We have the following theorem concerning the $2$-point Schwinger function:
\begin{theorem}\label{maine}
Let $\hat S_2(k,\lambda)$ be the two-point Schwinger function, not necessarily the connected one. Then for any $\lambda$ such that $|\lambda\log^2T|\le K_1$, with $K_1$ some positive constant independent of $T$ and $\lambda$, we have
\begin{equation}
\hat S_2(k,\lambda)=\hat C(k)[1+\hat R(\lambda, k)],
\end{equation}
in which $\Vert \hat R(\lambda, k)\Vert_{L^\infty}\le K|\lambda|$ for some positive constant $K$ which is independent of $k$ and $\lambda$.
\end{theorem}
\begin{proof}
By definition, the interacting 2-point Schwinger function can be written as the following geometric series:
\begin{equation}
\hat S_2(k,\lambda)=\hat C(k)+\hat C(k)\hat\Sigma(k)\hat C(k)+\cdots+\hat C(k)[\hat\Sigma(k)\hat C(k)]^n+\cdots.
\end{equation}
Consider the case in which the external momentum lies at scale $r\le r_{max}\sim \log\frac1T$; then we have
\begin{equation}
\hat S^r_2(k,\lambda)=\hat C^r(k)\big(1+\hat\Sigma^r(k)\hat C^r(k)+\cdots+[\hat\Sigma^r(k)\hat C^r(k)]^n+\cdots\big).
\end{equation}
By Theorem \ref{mainb}, we have
\begin{equation}
\Vert \hat\Sigma^r(k)\hat C^r(k)\Vert_{L^\infty}\le C\lambda^2 r^2 \g^{-r}\cdot K_2\g^{r}
\le K_3|\lambda|,
\end{equation}
in which we have used the fact that $\Vert\hat C^r(k)\Vert_{L^\infty}\le K_2\g^{r}$, for some positive constant $K_2$ (cf. Lemma \ref{bdsp}), the fact that $|\lambda r^2|\le |\lambda r_{max}^2|\le K_1$, and set $K_3=C\cdot K_1\cdot K_2$.
So we have
\begin{equation}
\Vert \hat R(\lambda, k)\Vert_{L^\infty}:= \Vert\hat\Sigma^r(k)\hat C^r(k)+\cdots+[\hat\Sigma^r(k)\hat C^r(k)]^n+\cdots\Vert_{L^\infty}\le\sum_{n=1}^\infty (K_3|\lambda|)^n\le 2K_3|\lambda|,
\end{equation}
provided that $K_3|\lambda|\le\frac12$. Choosing $K=2K_3$ and summing over $r$, the conclusion follows.
\end{proof}
\subsection{Proof of Theorem \ref{conj2}}
With all these preparations, we are ready to prove Theorem \ref{conj2}.
\begin{proof}\label{prfconj}
In order to prove this conjecture, it is enough to verify item (2) of the renormalization conditions (cf. \eqref{rncd2}) at any
slice $r\le r_{max}$ and to prove that the counter-term $\nu(\bk,\lambda)$ is $C^{1+\epsilon}$ in $\bk$. First of all, by the multi-slice renormalization condition \eqref{rs1}, the ratio in \eqref{rncd2} at any slice $r$ is bounded by
\begin{eqnarray}
&&\Vert(\hat\nu^{r}_{s^{(a)},s^{(b)}}+\Sigma^{r}_{s^{(a)},s^{(b)}})\hat C_r(k)\Vert_{L^\infty}\le K\cdot
\frac{\Vert\Sigma^{r+1}_{s^{(a)},s^{(b)}}\Vert_{L^\infty}}{\g^{-r}}\nonumber\\
&&\quad\le K_1\lambda^2\frac{(r+1)^2\g^{-r-1}}{\g^{-r}}=\g^{-1}K_1\lambda^2(r+1)^2
\le K_2\lambda^2\log^2 T,
\end{eqnarray}
in which $K$, $K_1$ and $K_2$ are positive constants, and we have used the relation $\gamma^{r_{max}}\sim\frac1T$.
Then we have
\begin{equation}
\Vert(\hat\nu^{r}_{s^{(a)},s^{(b)}}+\Sigma^{r}_{s^{(a)},s^{(b)}})\hat C_r(k)\Vert_{L^\infty}
\le K_3|\lambda|,
\end{equation}
which is bounded for $\lambda\in{\cal R}_T$. By construction, the counter-term $\hat\nu^{r}_{s^{(a)},s^{(b)}}(\bk)$ has the same regularity as the self-energy function $\Sigma^{r}_{s^{(a)},s^{(b)}}(\bk)$. By Theorem \ref{mainb} we know that
the self-energy is $C^{1+\epsilon}$ (by Formula \eqref{spb}) but not $C^2$ (by Formula \eqref{spc}) w.r.t. the external momentum, and so is the counter-term. This proves Theorem \ref{conj2}.
\end{proof}
\subsection{Proof of Theorem \ref{flowmu}.}\label{secflow}
In this section we study the upper bound for the counter-term $\delta\mu(\lambda)$.
There are three kinds of terms that contribute to $\delta\mu(\lambda)$: the tadpole term; the integral of the self-energy function:
\begin{equation}
\Sigma^{\le r_{max}}(x)=\sum_{r=0}^{r_{max}}\int d^3y\ \Sigma^r(x, y),
\end{equation}
and the {\it generalized tadpole term}, which is a tadpole whose internal lines are decorated by 1PI bipeds. See Figure \ref{gtad1} for an illustration.
\begin{figure}[htp]
\centering
\includegraphics[width=.45\textwidth]{gtad.pdf}
\caption{\label{gtad1}
A generalized tadpole. Each big dot corresponds to the 2PI bipeds.}
\end{figure}
First of all, by Lemma \ref{tad05}, the amplitude of a tadpole is bounded by:
\begin{equation}
\Vert T\Vert_{L^\infty}\le K_1|\lambda|,
\end{equation}
where $K_1$ is a positive constant and $|\lambda|<C/|\log T|^2$. So the amplitude of a tadpole satisfies the bound in Theorem \ref{flowmu}.
Secondly, we have:
\begin{equation}\label{localmu1}
\sup_x|\Sigma_2^{\le r_{max}}(x)|\le \sum_{r=0}^{r_{max}}\int\ d^3y\ |\Sigma_2^r(x, y)|.
\end{equation}
Since the integrand is bounded by (cf. Theorem \ref{maina}, Formula \eqref{x2pta} )
\begin{equation}
|\Sigma^r_2(x,y)|\le K_1|\lambda|^2 r^2 \sup_{\sigma\in\sigma_{\cG}}\g^{-3r(\sigma)}e^{-cd^\alpha_\sigma(x,y)},
\end{equation}
for some positive constant $K_1$, we can perform the integration in \eqref{localmu1} along the spanning tree of the 1PI graph; the spatial integration is then bounded by
\begin{equation}\Big|\int d^3y\ e^{-cd^\alpha_\sigma(x,y)}\Big|\le K_2\,\gamma^{2r_\cT},
\end{equation}
where $r_\cT$ is the maximal scaling index among the tree propagators. Combining the above two bounds, we find that there exists another positive constant $K_3$, independent of the scaling indices, such that
\begin{equation}\label{local3}
\sum_r\int\ dy_0 dy^{(a)}dy^{(b)}\ |\Sigma_2^r(x, y)|\le K_3 |\lambda|^2.
\end{equation}
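Indeed (a one-line check), since $r_\cT\le r(\sigma)$, the scaling factors combine as $\g^{2r_\cT}\g^{-3r(\sigma)}\le\g^{-r(\sigma)}$, and the sum $\sum_{r\ge0}r^2\g^{-r}$ is finite, which yields \eqref{local3}.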
Since $|\lambda|^2\ll|\lambda|$ for $\lambda\in{\cal R}_T$, the amplitude of the localized term is also bounded by $K_3|\lambda|$.
Now we consider the amplitude of a generalized tadpole, which is formed by contracting a chain of bipeds with a bare vertex. Let $T^g_n$ be a generalized tadpole which contains $n$ irreducible and renormalized bipeds and $n+1$ propagators connecting these bipeds. Let the scaling index of an external propagator be $r^e$ and the lowest scaling index of the propagators in a biped be $r^i$. Then we can consider the generalized tadpole as a string of more elementary graphs, each corresponding
to the part contained in the dashed square in Figure \ref{gtad1}, whose amplitude is
\begin{equation} T^\Sigma(r^e,r^i,\lambda,x)= \int d^3x_1\int d^3y_1\ C^{r^e}(x,x_1)(1-\tau)\Sigma^{r^i}(x_1,y_1).
\end{equation}
Recall that the contribution of the remainder term $(1-\tau)\Sigma^{r^i}(x_1,y_1)$ to this amplitude is bounded by the sum of
\begin{equation}
\int dx_1 dy_1|x_{1,0}-y_{1,0}|\cdot|\Sigma^{r^i}(x_1,y_1)|\cdot|\frac{\partial}{\partial{x_{1,0}}} C^{r^e}(x,x_1)|,
\end{equation}
and
\begin{equation}
\int dx_1 dy_1|x_1^{(a)}-y_1^{(a)}||x_1^{(b)}-y_1^{(b)}||\Sigma^{r^i}(x_1,y_1)|\cdot|
\frac{\partial}{\partial{x_1^{(a)}}}\frac{\partial}{\partial{x_1^{(b)}}} C^{r^e}(x,x_1)|.
\end{equation}
Using the fact that $j^e+[s^{(a)}]^e+[s^{(b)}]^e=2r^e$ and $[s^{(a)}]^e+[s^{(b)}]^e\ge r^e$, one can prove that there exist positive constants $K_4$ and $K_5$ such that
$$\Big|\frac{\partial}{\partial{x_{1,0}}} C^{r^e}(x,x_1)\Big|\le K_4\g^{-2r^e}e^{-cd^\alpha(x,x_1)},$$ and
$$\Big|\frac{\partial}{\partial{x_1^{(a)}}}\frac{\partial}{\partial{x_1^{(b)}}}C^{r^e}(x,x_1)\Big|\le K_5\g^{-2r^e}e^{-cd^\alpha(x,x_1)}.$$
By Theorem \ref{maina}, Formulas \eqref{x2pt} and \eqref{x2ptc}, which state that
$$|x_{1,0}-y_{1,0}||\Sigma^{r^i}(x_1,y_1)|\le K_4\g^{-2r^i}e^{-d^\alpha(x_1,y_1)},$$
and
$$|x_1^{(a)}-y_1^{(a)}||x_1^{(b)}-y_1^{(b)}||\Sigma^{r^i}(x_1,y_1)|\le K_5\g^{-2r^i}e^{-d^\alpha(x_1,y_1)},$$
we easily find that, after performing the integrations over $x_1$ and $y_1$ (which produce the factors $\g^{2r^e}$ and $\g^{2r^i}$ respectively), each of the above terms is bounded by
$$K_5\g^{2r^e}\g^{2r^i}\g^{-2r^e}\g^{-2r^i}\le K_5 .$$
Since there is a propagator in $T^g_n$ whose position coordinates are not integrated,
we have
\begin{equation}
\Vert T^g_n\Vert_{L^\infty}\le\prod_{i=1}^n\Vert\sum_{r^i} T^\Sigma_i(r^e,r^i,\lambda,x)\Vert_{L^\infty}\cdot\Vert C^{r^e}(x_n,x)\Vert_{L^\infty}\le K^n |\lambda|^{2n+1} |\log T|^n\g^{-r^e}.
\end{equation}
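A short check of the convergence of the sum over $n$ (a sketch, for $\lambda\in{\cal R}_T$): since
\begin{equation*}
K|\lambda|^{2}|\log T|=K|\lambda|\cdot|\lambda|\,|\log T|\le K|\lambda|\,\frac{c}{|\log T|}\ll 1,
\end{equation*}
the series $\sum_n\Vert T^g_n\Vert_{L^\infty}$ is dominated by a convergent geometric series with ratio $K|\lambda|^{2}|\log T|$.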
Summing over the index $r^e$ and using the fact that $|\lambda\log^2 T|<c$, we easily see that there exists a positive constant $K_6$, independent of the scaling index, such that
\begin{equation}
\sum_{n=0}^\infty |T^g_n|\le 2 K_6 |\lambda|.
\end{equation}
Summing up all the local terms and letting $K'=K_1+K_3+2K_6$, the amplitudes of all the local terms are bounded by $K'|\lambda|$. Hence we proved Theorem \ref{flowmu}.
\section{Conclusions and Perspectives}
In this paper we have constructed the $2p$-point Schwinger functions of the $2$-dimensional honeycomb Hubbard model with renormalized chemical potential $\mu=1$ and established the upper bounds for the self-energy as well as its second derivatives in this model. In the companion paper \cite{RW2} we establish the lower bound for the second derivative of the self-energy in momentum space, hence completing the proof that this model is not a Fermi liquid in the mathematically precise sense of Salmhofer. In \cite{GM} the authors studied the honeycomb Hubbard model at half-filling, in which the $2$-point Schwinger functions were proved to be analytic down to zero temperature. It is therefore important to study the crossover between the cases $\mu=0$ and $\mu=1$ and to consider the honeycomb Hubbard model with $0<\mu<1$, in which the Fermi surfaces are no longer points or exact triangles but convex curves with $\ZZZ_3$ symmetry, for which the sector counting lemma introduced in this paper may not be valid, so that a different way of counting sectors \cite{Wang20} is required. It is also important to notice that Lifshitz phase transitions may happen at the van Hove filling \cite{Rosen}, which also deserves a rigorous study.
\medskip
\noindent{\bf Acknowledgments}
Zhituo Wang is very grateful to Horst Kn\"orrer for useful discussions and encouragement, and to Alessandro Giuliani and Vieri Mastropietro for useful discussions. Part of this work was completed during Zhituo Wang's visit to the Institute of Mathematics, University of Zurich; he is very grateful to Benjamin Schlein for the invitation and hospitality. Zhituo Wang is supported by NNSFC No.12071099 and No.11701121. Vincent Rivasseau is supported by Paris-Saclay University and the IJCLab of the CNRS.
\thebibliography{0}
\bibitem{AR}
A.~Abdesselam and V.~Rivasseau,
``Trees, forests and jungles: A botanical garden for cluster expansions,'' in Constructive Physics, Lecture Notes in Physics 446, Springer Verlag, 1995.
\bibitem{AMR1} S. Afchain, J. Magnen and V. Rivasseau: {\it
Renormalization of the 2-Point Function of the Hubbard Model at Half-Filling},
Ann. Henri Poincar\'e {\bf 6}, 399-448 (2005).
\bibitem{AMR2} S. Afchain, J. Magnen and V. Rivasseau:{\it
The Two Dimensional Hubbard Model at Half-Filling, part III: The Lower Bound on the Self-Energy},
Ann. Henri Poincar\'e {\bf 6}, 449-483 (2005).
\bibitem{And} P. W. Anderson: {\it "Luttinger-Liquid" behavior of the normal metallic state of the 2D Hubbard model}, Phys. Rev. Lett. {\bf 64}, 1839-1841 (1990).
\bibitem{BG} G. Benfatto and G. Gallavotti: {\it
Perturbation theory of the Fermi surface in a quantum liquid.
A general quasiparticle formalism and one dimensional systems},
Jour. Stat. Phys. {\bf 59}, 541-664 (1990).
\bibitem{BG1} G. Benfatto and G. Gallavotti: {\it
Renormalization Group},
Physics Notes, Vol. 1, Princeton University Press (1995).
\bibitem{BFM} G. Benfatto, P. Falco and V. Mastropietro: {\it Universality of one-dimensional Fermi systems, II. The Luttinger liquid structure}, Comm. Math. Phys. {\bf 330}, 217-282 (2014).
\bibitem{BGM2} G. Benfatto, A. Giuliani and V. Mastropietro: {\it
Fermi liquid behavior in the 2D Hubbard model at low temperatures},
Ann. Henri Poincar\'e {\bf 7}, 809-898 (2006).
\bibitem{BM} G. Benfatto and V. Mastropietro: {\it
Ward identities and chiral anomaly in the Luttinger liquid},
Comm. Math. Phys. {\bf 258}, 609-655 (2005).
\bibitem{Bon} J.A. Bondy and U.S.R. Murty: {\it
Graph theory with applications}, North-Holland (1976).
\bibitem{BK} D. Brydges and T. Kennedy:
{\it Mayer expansions and the Hamilton-Jacobi equation}, J.
Statistical Phys. {\bf 48}, 19 (1987).
\bibitem{BR1} O. Bratteli, D. W. Robinson: {\it Operator Algebras and Quantum Statistical Mechanics 2}, second edition, Springer-Verlag, 2002.
\bibitem{Lieb} D. Baeriswyl, D. Campbell, J. Carmelo, F. Guinea, E. Louis,
{\it The Hubbard Model}, Nato ASI series, V. 343,
Springer Science+Business Media New York, 1995
\bibitem{Cas} A. H. Castro Neto, F. Guinea, N. M. R. Peres, K. S. Novoselov,
A. K. Geim: {\it The electronic properties of graphene}, Rev. Mod. Phys. {\bf 81}, 109-162 (2009)
\bibitem{Sar} S. Das Sarma, S. Adam, E. H. Hwang, E. Rossi
: {\it Electronic transport in two-dimensional graphene}, Rev. Mod. Phys. {\bf 83}, 407-470 (2011)
\bibitem{DR1} M. Disertori and V. Rivasseau: {\it Interacting Fermi liquid in
two dimensions at finite temperature, Part I - Convergent attributions} and
{\it Part II - Renormalization},
Comm. Math. Phys. {\bf 215}, 251-290 (2000) and
291-341 (2000).
\bibitem{Feff1} C. Fefferman and M. Weinstein: {\it Honeycomb lattice potentials and Dirac points}, J. Amer. Math. Soc. {\bf 25}, 1169–1220 (2012).
\bibitem{Feff3} C. Fefferman, J.P. Lee-Thorp, M. Weinstein: {\it Honeycomb Schr\"odinger operators in the strong binding regime}, Comm. Pure Appl. Math. {\bf 71}, 1178–1270 (2018).
\bibitem{FMRT} J. Feldman, J. Magnen, V. Rivasseau and E. Trubowitz:
{\it An infinite volume expansion for many fermion Green's functions},
Helv. Phys. Acta {\bf 65}, 679-721 (1992).
\bibitem{FKT} J. Feldman, H. Kn\"orrer and E. Trubowitz: {\it
A Two Dimensional Fermi Liquid},
Comm. Math. Phys {\bf 247}, 1-319 (2004).
\bibitem{FST1} J. Feldman, M. Salmhofer and E. Trubowitz: {\it
Perturbation Theory Around Nonnested Fermi Surfaces.
I. Keeping the Fermi Surface Fixed}, Journal of
Statistical Physics, {\bf 84}, 1209-1336 (1996).
{\it An inversion theorem in Fermi surface theory}, Comm. Pure Appl. Math. {\bf 53}, 1350-1384 (2000).
\bibitem{FT} J. Feldman, E. Trubowitz: {\it
Perturbation theory for many fermion systems},
Helv. Phys. Acta {\bf 63}, 156-260 (1990).
\bibitem{GN} G. Gallavotti and F. Nicol\`o: {\it
Renormalization theory for four dimensional scalar fields. Part I},
{\it II}, Comm. Math. Phys. {\bf 100}, 545-590 (1985),
{\bf 101}, 471-562 (1985).
\bibitem{GK} K. Gawedzki and A. Kupiainen: {\it Gross-Neveu model through
convergent perturbation expansions}, Comm. Math. Phys. {\bf 102}, 1-30 (1985).
\bibitem{GM} A. Giuliani and V. Mastropietro: {\it The two-dimensional
Hubbard model on the honeycomb lattice}, Comm. Math. Phys. {\bf 293}, 301-346
(2010).
\bibitem{Hubb} J. Hubbard, {\it Electron correlations in narrow energy bands}, Proc. Roy. Soc. (London), {\bf A276}, 238-257 (1963).
\bibitem{Iz} C. Itzykson and J.-B. Zuber: {\it Quantum Field Theory}, Dover publications, 2005
\bibitem{Kot} V. N. Kotov, B. Uchoa, V. M. Pereira, F. Guinea and A. H. Castro Neto
: {\it Electron-Electron Interactions in Graphene: Current Status and Perspectives}, Rev. Mod. Phys. {\bf 84}, 1067-1125 (2012)
\bibitem{Lifshitz} I. M. Lifshitz: {\it Anomalies of Electron Characteristics of a Metal in the High Pressure}, Sov. Phys. JETP {\bf 11}, 1130 (1960).
\bibitem{Link} S. Link et al.: {\it Introducing strong correlation effects into graphene by gadolinium intercalation}, Phys. Rev. B {\bf 100}, 121407(R)
(2019).
\bibitem{Tutte} T.~Krajewski, V.~Rivasseau, A.~Tanasa and Zhituo~Wang,
``Topological Graph Polynomials and Quantum Field Theory, Part I: Heat Kernel
Theories,''
J.\ Noncommut.\ Geom.\ {\bf 4} (2010) 29
\bibitem{Le} A. Lesniewski: {\it Effective action
for the Yukawa$_2$ quantum field theory}, Comm. Math. Phys. {\bf 108},
437-467 (1987).
\bibitem{Lu} J. M. Luttinger: {\it An exactly soluble model of a many fermions
system}, J. Math. Phys. {\bf 4}, 1154-1162 (1963).
\bibitem{M2} V. Mastropietro: {\it Non-Perturbative
Renormalization}, World Scientific (2008).
\bibitem{N} K. S. Novoselov, A. K. Geim, S. V. Morozov,
D. Jiang, Y. Zhang, S. V. Dubonos, I. V. Grigorieva and
A. A. Firsov: {\it Electric Field Effect in Atomically Thin Carbon Films},
Science {\bf 306}, 666 (2004).
\bibitem{Riv} V. Rivasseau: {\it The Two Dimensional Hubbard Model at Half-Filling. I. Convergent Contributions}, J. Statistical Phys. {\bf 106}, 693-722 (2002).
\bibitem{Rivbook} V. Rivasseau: {\it From Perturbative to Constructive Renormalization}, Princeton University Press (1991).
\bibitem{RW2} V. Rivasseau and Z.~Wang: {\it Hubbard Model on the Honeycomb Lattice at van Hove Filling, Part II: The Lower Bound on the Self-Energy}.
\bibitem{Mc} J. L. McChesney et al.: {\it
Extended van Hove Singularity and Superconducting Instability in Doped Graphene}, Phys. Rev. Lett. {\bf 104}, 136803 (2010)
\bibitem{Ros} P. Rosenberg et al.: {\it
Tuning the doping level of graphene in the vicinity of the Van Hove singularity
via ytterbium intercalation}, Phys. Rev. B {\bf 100}, 035445 (2019)
\bibitem{Rosen} P. Rosenberg et al.: {\it
Overdoping Graphene beyond the van Hove Singularity}, Phys. Rev. Lett. {\bf 125}, 176403 (2020)
\bibitem{RW1}
V.~Rivasseau and Z.~Wang,
{\it How to Resum Feynman Graphs},
Annales Henri Poincare {\bf 15} (2014) 11, 2069
\bibitem{Salm} M. Salmhofer: {\it Continuous Renormalization for Fermions and Fermi Liquid Theory}, Comm. Math. Phys. {\bf 194}, 249-295 (1998).
\bibitem{To} S. Tomonaga: {\it Remarks on Bloch's methods of sound waves
applied to many fermion systems}, Progr. Theo. Phys. {\bf 5}, 544-569 (1950).
\bibitem{W} P. R. Wallace: {\it The Band Theory of Graphite}, Phys. Rev. {\bf 71},
622-634 (1947).
\bibitem{Wang20} Zhituo Wang: {\it On the Sector Counting Lemma}, preprint.
\endthebibliography
\end{document}
\section{Introduction}
Theoretical treatment of many-particle systems is connected with a number of comp\-li\-ca\-ti\-ons. Some of them can be resolved by introducing the concept of a quasiparticle or ``composite particle'', if this is possible. However, in this way we generally en\-co\-un\-ter various factors of the internal structure which cannot be completely encapsulated into internal degrees of freedom of a composite particle: the nontrivial com\-mu\-ta\-ti\-on relations, the interaction of the constituents among themselves and with other particles, etc. It is desirable to have an equivalent description of many-(composite-)par\-ti\-cle systems, almost as simple as the description of an ideal/point-like particle system, but taking into account the mentioned factors. Deformed bosons or deformed oscillators, see e.g. the review \cite{Bona}, provide possible means for the realization of such an intention. In such a case the basic characteristics of the factors connected with the internal structure would be encoded in one or more deformation parameters.
A particular realization of the mentioned idea to describe quasibosons~\cite{Per} (boson-like composites) in terms of deformed Heisenberg algebra was demonstrated by Avancini and Krein in \cite{Avan} who utilized the quonic \cite{Green} version of the deformed boson al\-geb\-ra. Note that if two or more copies (modes) are involved, different modes of quons do not commute \cite{Avan,Green}. Unlike quons, the deformed oscillators of Arik-Coon type are in\-de\-pen\-dent \cite{Arik,Jag-etal}, that is, the operators corresponding to different copies, mutually commute.
Regardless of their intrinsic origin and physical motivation, diverse models of deformed oscillators have received much attention during the 1990s and till now. Among the best known and extensively studied deformed oscillators one encounters the $q$-deformed Arik-Coon (AC) \cite{Arik} or Biedenharn-Macfarlane (BM) \cite{Biedenharn,Macfarlane} ones, the $q$-deformed Tamm-Dancoff oscillator~\cite{TD,TD2,GR1}, and also the two-parameter $p,\!q$-deformed oscillator \cite{p-q,Arik-Fibo}. On the other hand, the so-called $\mu$-deformed oscillator is much less studied. Introduced in \cite{Jan} almost two decades ago, this deformed oscillator essentially differs from the models we have already mentioned and exhibits rather unusual properties~\cite{GKR,GR2}. Note that there exists a general approach to the description of deformed oscillators based on the concept of the deformation structure function (DSF) given in~\cite{Melja,Bona}. As the extension of the standard quantum harmonic oscillator, deformed oscillators find diverse applications in describing miscellaneous physical systems involving essential nonlinearities, from say quantum optics and the Landau problem to high energy particle phenomenology and modern quantum field theory, see e.g. \cite{Man'ko,Rego2,Rego3,AGI1,AGI1-2,AGI2,Avan2003,AG,gavrS,Rego1}.
Although a great variety of models of deformed oscillators exists as mentioned above, the detailed analysis of possible realizations, on their base, of composite particles along with the interpretation of deformation parameters in terms of the internal structure as far as we know is lacking. To fill this gap, in our preceding paper \cite{GKM-1} some steps in that direction were undertaken and first results were obtained. Namely, we carried out the detailed analysis for quasibosons consisting of two ordinary fermions with the ansatz $A^{\dag}_{\alpha}=\Phi^{\mu\nu}_{\alpha}a^{\dag}_{\mu}b^{\dag}_{\nu}$ for the quasiboson creation operator in the $\alpha$-th mode, meaning the bilinear combination of the constituents' creation operators of the general form. The analysis implies the realization of quasibosons by deformed oscillators characterized by the most general DSF $\phi(N)$ which unambiguously determines \cite{Melja,Bona} the deformed algebra within one mode. Our present study further extends the results obtained in \cite{GKM-1} by using, instead of the usual fermions, {\it their $q$-deformed analog} for the constituents' operators.
The paper is organized as follows. Section~\ref{sec2}, which serves as the base for our analysis, concerns the case of quasibosons whose constituents are ordinary fermions (the particular $q=1$ case of $q$-fermions). Here, after introducing the creation and annihilation operators for composite quasibosons, we recapitulate the main facts and results from \cite{GKM-1} (note that some of these results, only sketched in \cite{GKM-1}, are presented here in full detail: in particular, this concerns the extended treatment given in subsection~\ref{sec5}). We establish important relations for
quasibosons' operators that include necessary conditions for the representation of quasibosons in terms of deformed bosons to hold. Those conditions are partially solved in subsection~\ref{sec4}, yielding the DSFs~$\phi(N)$ of the effective deformation, and completely solved in subsection~\ref{sec5}. There we obtain explicitly all possible internal structures for quasibosons, with the corresponding matrices $\Phi_{\alpha}^{\mu\nu}$. In Section~\ref{sec7}, presenting the further development of the ideas and results of \cite{GKM-1}, for the constituents' operators we take, instead of usual fermions, their $q$-deformed analogs. The corresponding treatment is performed: the admissible (for the realization in question) structure function~$\phi(n)$ and matrices $\Phi_{\alpha}^{\mu\nu}$ are found as the solution of the necessary conditions for the validity of the realization.
\section{System of quasibosons composed of two fermions}\label{sec2}
The general task of representing the quasibosons consisting of $q$-fermions can be divided into two particular situations: i) the constituents are pure fermions ($q=1$); ii) the constituents are essentially deformed $q$-fermions ($q\neq 1$). This section is devoted to the first case: similarly to~\cite{GKM-1}, we deal with the system of composite boson-like particles ({\it quasibosons} \cite{Per}) such that each copy (mode) of them is built from two fermions. We study the realization of quasibosons in terms of a set of {\it independent} identical copies of deformed oscillators of the general form (for some examples of mode-independent systems see~\cite{Jag-etal}).
Let us denote the creation and annihilation operators of the two (mutually anticommuting) sets of usual fermions by $a^{\dag}_{\mu}$,
$b^{\dag}_{\nu}$, $a_{\mu}$, $b_{\nu}$ respectively, with their standard anticommutation relations, namely
\begin{equation}
\eqalign{
\{a_{\mu},a^{\dag}_{\mu'}\}\equiv
a_{\mu}a^{\dag}_{\mu'}+a^{\dag}_{\mu'}a_{\mu}=\delta_{\mu\mu'},\qquad&\{a_{\mu},a_{\nu}\}=0,\cr
\{b_{\nu},b^{\dag}_{\nu'}\}\equiv
b_{\nu}b^{\dag}_{\nu'}+b^{\dag}_{\nu'}b_{\nu}=\delta_{\nu\nu'},\qquad&\{b_{\mu},b_{\nu}\}=0.
}
\end{equation}
Besides, each of $a^{\dag}_{\mu}$, $a_{\mu}$ anticommutes with each of $b^{\dag}_{\nu}$, $b_{\nu}$. So, we use these fermions to construct quasibosons. Then, the corresponding quasibosonic creation and annihilation operators $A^{\dag}_{\alpha},\ A_{\alpha}$ (where $\alpha$ labels the particular quasiboson and denotes the whole set of its quantum numbers) are given as
\begin{equation} \label{anzats}
A^{\dag}_{\alpha}=\sum\limits_{\mu\nu}\Phi^{\mu\nu}_{\alpha}a^{\dag}_{\mu}b^{\dag}_{\nu},\quad
A_{\alpha}=\sum\limits_{\mu\nu}\overline{\Phi}^{\mu\nu}_{\alpha}b_{\nu}a_{\mu} \,.
\end{equation}
For the matrices $\Phi_{\alpha}$ we assume the following normalization condition:
\[
\sum\limits_{\mu\nu}\Phi^{\mu\nu}_{\alpha}\overline{\Phi}^{\mu\nu}_{\beta}
\ \equiv \Tr \Phi_{\alpha}\Phi^{\dag}_{\beta}=\delta_{\alpha\beta}
\,.
\]
One can easily check that
\begin{equation}
[A_{\alpha},A_{\beta}]=[A^{\dag}_{\alpha},A^{\dag}_{\beta}]=0 .
\label{2_2}
\end{equation}
For the remaining commutator one finds~\cite{Avan}
\begin{equation} \label{2_3}
[A_{\alpha},A^{\dag}_{\beta}] =
\sum\limits_{\mu\nu\mu'\nu'}\overline{\Phi}^{\mu\nu}_{\alpha}\Phi^{\mu'\nu'}_{\beta}
\left([a_{\mu},a^{\dag}_{\mu'}]b_{\nu}b^{\dag}_{\nu'} +
a^{\dag}_{\mu'}a_{\mu} [b_{\nu},b^{\dag}_{\nu'}]\right)
= \delta_{\alpha\beta} - \Delta_{\alpha\beta}
\end{equation}
where
\begin{equation*}
\Delta_{\alpha\beta} \equiv
\sum\limits_{\mu\nu\mu'}\overline{\Phi}^{\mu\nu}_{\alpha}
\Phi^{\mu'\nu}_{\beta}a^{\dag}_{\mu'}a_{\mu}
+\sum\limits_{\mu\nu\nu'}\overline{\Phi}^{\mu\nu}_{\alpha}
\Phi^{\mu\nu'}_{\beta}b^{\dag}_{\nu'}b_{\nu}.
\end{equation*}
The entity $\Delta_{\alpha\beta}$ in (\ref{2_3}) measures the deviation from the pure bosonic canonical relation. Note that demanding $\Delta_{\alpha\beta}=0$ identically would force ${\Phi}^{\mu\nu}_{\alpha}=0$, i.e. strictly canonical bosonic behavior is incompatible with the composite structure.
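To illustrate, consider the simplest ansatz with a single nonzero entry, $\Phi^{\mu\nu}_{\alpha}=\delta_{\mu\mu_0}\delta_{\nu\nu_0}$ (consistent with the normalization above); then a direct computation gives
\begin{equation*}
\Delta_{\alpha\alpha}=a^{\dag}_{\mu_0}a_{\mu_0}+b^{\dag}_{\nu_0}b_{\nu_0},\qquad
[A_{\alpha},A^{\dag}_{\alpha}]=1-a^{\dag}_{\mu_0}a_{\mu_0}-b^{\dag}_{\nu_0}b_{\nu_0},
\end{equation*}
i.e. the deviation from canonical bosonic behavior is just the occupation of the constituent modes $(\mu_0,\nu_0)$.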
Remark that, unlike the realization of quasibosonic operators using the quonic version of the deformed oscillator algebra, as was done in~\cite{Avan}, in all our analysis we consider a set of completely independent copies of deformed oscillators. That is, we assume the validity of (\ref{2_2}) and also require $[A_{\alpha},A^{\dag}_{\beta}]=0$ for $\alpha\neq\beta$.
The simplest type of deformed oscillator is the Arik-Coon $q$-deformation \cite{Arik}, so it is of interest, first, to try to use this set of $q$-deformed bosons for representing the system of independent quasibosons. However, as was shown in \cite{GKM-1}, the representation of quasibosons by an {\it independent} system of $q$-deformed bosons of the Arik-Coon type leads to inconsistency. For that reason we set the goal to examine other deformed oscillators in the general form given by their structure function~$\phi(N)$.
\paragraph{Necessary conventions.}
Our goal is to operate with $A_{\alpha}$, $A^{\dag}_{\alpha}$ and $N_{\alpha}$ constructed from $a^{\dag}_{\mu},a_{\mu},b^{\dag}_{\nu},b_{\nu}$ ($N_{\alpha}$ is some effective number operator for composite particles) as with the elements (operators) of some deformed oscillator algebra,
``forgetting'' about their internal structure. It means that we are looking for subalgebras of the enveloping algebra
$\mathfrak{A}\{A_{\alpha},A^{\dag}_{\alpha},N_{\alpha}\}$, generated by $A_{\alpha}$, $A^{\dag}_{\alpha}$, $N_{\alpha}$, isomorphic to
some deformed oscillator algebras $\mathfrak{A}\{\mathcal{A}_{\alpha},\mathcal{A}^{\dag}_{\alpha},\mathcal{N}_{\alpha}\}$, generated by $\mathcal{A}_{\alpha}$, $\mathcal{A}^{\dag}_{\alpha}$, $\mathcal{N}_{\alpha}$:
\[
\mathfrak{A}\{A_{\alpha},A^{\dag}_{\alpha},N_{\alpha}\}\simeq
\mathfrak{A}\{\mathcal{A}_{\alpha},\mathcal{A}^{\dag}_{\alpha},\mathcal{N}_{\alpha}\}.
\]
We will establish necessary and sufficient conditions for the existence of such an isomorphism. We also require the isomorphism of the representation spaces of the mentioned algebras:
\begin{equation}
L\{(a^{\dag}_{\mu})^r(b^{\dag}_{\nu})^s\ldots|O\rangle\}\supset H\simeq
\mathcal{H}=L\{\mathcal{A}^{\dag}_{\gamma_1}\ldots\mathcal{A}^{\dag}_{\gamma_n}|O\rangle\},
\end{equation}
where $L\{...\}$ denotes a linear span. Thus, if the algebra of deformed oscillator operators is given by the relations
\begin{eqnarray}
G_i(\mathcal{A}_{\alpha},\mathcal{A}^{\dag}_{\alpha},\mathcal{N}_{\alpha})=0
\quad\Leftrightarrow\quad
G_i(\mathcal{A}_{\alpha},\mathcal{A}^{\dag}_{\alpha},
\mathcal{N}_{\alpha})\mathcal{A}^{\dag}_{\gamma_1}\ldots
\mathcal{A}^{\dag}_{\gamma_n}|O\rangle=0, \label{rels}\\
\nonumber \hspace{60mm} n=0,1,2,...
\end{eqnarray}
then necessary and sufficient conditions for the isomorphism to exist can be written as
\begin{equation}\label{isom_cond}
G_i(A_{\alpha},A^{\dag}_{\alpha},N_{\alpha})\cong0
\quad\mathop{\Longleftrightarrow}^{def}\quad
G_i(A_{\alpha},A^{\dag}_{\alpha},N_{\alpha})
A^{\dag}_{\gamma_1}\ldots A^{\dag}_{\gamma_n}|O\rangle=0.
\end{equation}
Here the symbol $\cong$ of weak equality is introduced, which means equality on all the $n$-(quasi)boson states. Next, we observe that
\begin{equation}\nonumber
G_i A^{\dag}_{\gamma_1}|O\rangle=0 \quad\Leftrightarrow\quad
[G_i,A^{\dag}_{\gamma_1}]|O\rangle=0
\end{equation}
and, by induction,
\begin{equation} \nonumber
G_i A^{\dag}_{\gamma_1}...A^{\dag}_{\gamma_n}|O\rangle=0
\quad\Leftrightarrow\quad
[...[G_i,A^{\dag}_{\gamma_1}]...,A^{\dag}_{\gamma_n}]|O\rangle=0.
\end{equation}
For a general deformed oscillator defined using the structure function $\phi(N)$, see e.g. \cite{Bona}, relation (\ref{rels}) takes the form
\begin{equation}\label{system1_0}
\left\{ \eqalign{ \mathcal{A}^{\dag}_{\alpha}\mathcal{A}_{\alpha} =
\phi(\mathcal{N}_{\alpha}),\cr
[\mathcal{A}_{\alpha},\mathcal{A}^{\dag}_{\alpha}] =
\phi(\mathcal{N}_{\alpha}+1)-\phi(\mathcal{N}_{\alpha}),\cr
[\mathcal{N}_{\alpha},\mathcal{A}^{\dag}_{\alpha}] =
\mathcal{A}^{\dag}_{\alpha},\, \ \ \
[\mathcal{N}_{\alpha},\mathcal{A}_{\alpha}] = -\mathcal{A}_{\alpha}.
} \right.
\end{equation}
Here the expressions for $[\mathcal{A}_{\alpha},\mathcal{A}^{\dag}_{\beta}],\ \alpha\neq\beta$, if any, may be added. Thus, the set of functions $G_i$ applicable in this case reads as follows:
\begin{eqnarray*}
G_0(A_{\alpha},A^{\dag}_{\alpha},N_{\alpha}) =
A^{\dag}_{\alpha}A_{\alpha} -
\phi(N_{\alpha}),\\
G_1(A_{\alpha},A^{\dag}_{\alpha},N_{\alpha}) =
[A_{\alpha},A^{\dag}_{\alpha}] - \bigl(\phi(N_{\alpha}+1)-\phi(N_{\alpha})\bigr),\\
G_2(A^{\dag}_{\alpha},N_{\alpha}) =
[N_{\alpha},A^{\dag}_{\alpha}] - A^{\dag}_{\alpha}, \quad {\rm and\ possibly\ some\ others.}
\end{eqnarray*}
Such functions $G_i$ are determined by the structure function of deformation $\phi(N_{\alpha})$. So, relations (\ref{isom_cond}) can be used for deducing the connection between matrices $\Phi^{\mu\nu}_{\alpha}$, which determine the operators $A^{\dag}_{\alpha}$, and the DSF $\phi(N_{\alpha})$.
\subsection{Necessary conditions on $\Phi_{\alpha}^{\mu\nu}$ and
$\phi(n)$}\label{sec4}
In the subsequent analysis we study the independent quasibosons' system realized by deformed oscillators, without specifying a particular model of deformation. The aim of this subsection is to obtain necessary conditions for such a realization in terms of the matrices $\Phi_{\alpha}$. Note that the results of this subsection are not sensitive to the form of the definition of $N_{\alpha}(\cdot)$ as a function of $A_{\alpha}$, $A^{\dag}_{\alpha}$.
Using relations (\ref{isom_cond})-(\ref{system1_0}) and taking into account the independence of modes, we arrive at the following weak equalities for the commutators:
\begin{equation}
\left\{
\eqalign{
[A_{\alpha},A^{\dag}_{\beta}]\cong 0 \quad{\rm for}\quad \alpha\neq\beta,\cr
[N_{\alpha},A^{\dag}_{\alpha}]\cong A^{\dag}_{\alpha},\quad [N_{\alpha},A_{\alpha}]\cong -A_{\alpha},\cr
[A_{\alpha},A^{\dag}_{\alpha}] \cong \phi(N_{\alpha}+1)-\phi(N_{\alpha}).
}
\right.\label{system1}
\end{equation}
\paragraph{Treatment of mode independence.}
From the first relation in (\ref{system1}) we derive the equivalent requirements of independence in terms of matrices $\Phi$:
\begin{equation}
\sum\limits_{\mu'\nu'}\left(\Phi^{\mu\nu'}_{\beta}
\overline{\Phi}^{\mu'\nu'}_{\alpha}\Phi^{\mu'\nu}_{\gamma}
+ \Phi^{\mu\nu'}_{\gamma}\overline{\Phi}^{\mu'\nu'}_{\alpha}\Phi^{\mu'\nu}_{\beta}\right)
=0,\quad \alpha\neq\beta,\label{uslnez}
\end{equation}
which can be rewritten in the matrix form
\begin{equation}
\Phi_{\beta}\Phi^{\dag}_{\alpha}\Phi_{\gamma}+
\Phi_{\gamma}\Phi^{\dag}_{\alpha}\Phi_{\beta}=0,\quad
\alpha\neq\beta.\label{req1}
\end{equation}
\paragraph{Conditions on $\Phi_{\alpha}^{\mu\nu}$ within one mode $\alpha$.}
Since $A^{\dag}_{\alpha}A_{\alpha}\cong\phi(N_{\alpha})$ and $A_{\alpha}A^{\dag}_{\alpha}\cong\phi(N_{\alpha}\!+1)$, we have
\begin{equation} \nonumber
[A^{\dag}_{\alpha}A_{\alpha},A_{\alpha}A^{\dag}_{\alpha}]\cong 0 \qquad {\rm and} \qquad
[\Delta_{\alpha\alpha},N_{\alpha}]\cong 0.
\end{equation}
The first equality can equivalently be rewritten as
\[
[A^{\dag}_{\alpha}A_{\alpha},\Delta_{\alpha\alpha}]= [A^{\dag}_{\alpha}A_{\alpha},\sum\limits_{\mu\nu\mu'}\overline{\Phi}^{\mu\nu}_{\alpha}
\Phi^{\mu'\nu}_{\alpha}a^{\dag}_{\mu'}a_{\mu} +\sum\limits_{\mu\nu\nu'}\overline{\Phi}^{\mu\nu}_{\alpha}
\Phi^{\mu\nu'}_{\alpha}b^{\dag}_{\nu'}b_{\nu}]\cong 0.
\]
The calculation of this commutator gives
\begin{eqnarray}
[A^{\dag}_{\alpha}A_{\alpha},\Delta_{\alpha\alpha}]=
2A^{\dag}_{\alpha}\sum\limits_{\mu\nu}\left(\Psi_{\alpha}^{\dag}\right)^{\nu\mu}b_{\nu}a_{\mu}
-2\sum\limits_{\mu'\nu'}\Psi_{\alpha}^{\mu'\nu'}a^{\dag}_{\mu'}b^{\dag}_{\nu'}
A_{\alpha}\cong 0, \label{comm2} \\
\nonumber
\Psi_{\alpha} \equiv \Phi_{\alpha}\Phi^{\dag}_{\alpha}\Phi_{\alpha}.
\end{eqnarray}
Taking (\ref{anzats}) into account, one can see that the validity of (\ref{comm2}) on the one-quasiboson state requires the vanishing of the commutator with the creation operator acting on the vacuum:
\begin{eqnarray*}
\left[(\overline{\Psi}^{\mu\nu}_{\alpha}\Phi^{\mu'\nu'}_{\alpha} -
\overline{\Phi}^{\mu\nu}_{\alpha}\Psi^{\mu'\nu'}_{\alpha})
a^{\dag}_{\mu'}b^{\dag}_{\nu'}b_{\nu}a_{\mu},\Phi_{\alpha}^{\lambda\rho}
a^{\dag}_{\lambda}b^{\dag}_{\rho}\right]|O\rangle= \\
=\!\left(\!\Phi_{\alpha}^{\mu'\nu'}\!a^{\dag}_{\mu'}b^{\dag}_{\nu'} \!\cdot\!
\overline{\Psi}_{\alpha}^{\mu\nu}\Phi_{\alpha}^{\mu\nu} \!-\!
\Phi_{\alpha}^{\mu'\nu'}\!a^{\dag}_{\mu'}b^{\dag}_{\nu'}
\Delta[\Psi,\Phi] \!-\!\Psi_{\alpha}^{\mu'\nu'}\!a^{\dag}_{\mu'}b^{\dag}_{\nu'} \!+\!
\Psi_{\alpha}^{\mu'\nu'}\!a^{\dag}_{\mu'}b^{\dag}_{\nu'} \!\cdot\!
\Delta_{\alpha\alpha}\!\right)\!|O\rangle \\
=\left(\Phi_{\alpha}^{\mu'\nu'}\cdot
\Tr(\Psi_{\alpha}^{\dag}\Phi_{\alpha}) -
\Psi_{\alpha}^{\mu'\nu'}\right)a^{\dag}_{\mu'}b^{\dag}_{\nu'}|O\rangle=0
\end{eqnarray*}
(summation over repeated indices is implied). From this we obtain the requirement
\begin{equation}
\Phi_{\alpha}\Phi^{\dag}_{\alpha}\Phi_{\alpha} =
\Tr(\Phi^{\dag}_{\alpha}\Phi_{\alpha}\Phi^{\dag}_{\alpha}\Phi_{\alpha})\cdot
\Phi_{\alpha}\, , \label{req2}
\end{equation}
which is also sufficient. This requirement guarantees not only the weak equality as in (\ref{comm2}) but also the corresponding strong (operator) equality.
Thus, we have two independent requirements (\ref{req1}) and (\ref{req2}) for the matrices $\Phi_{\alpha}.$
\paragraph{Relating $\Phi_{\alpha}$ to the structure function $\phi(n)$.}
Let us derive the relations that involve the DSF $\phi$. Directly from the system (\ref{system1}) we obtain the initial values for the DSF $\phi$:
\begin{eqnarray*}
\phi(N_{\alpha})\cong A^{\dag}_{\alpha}A_{\alpha} \quad\quad\ \ \Rightarrow\quad\phi(0)=0,\qquad\qquad\\
\phi(N_{\alpha}+1)\cong A_{\alpha}A^{\dag}_{\alpha} \quad\Rightarrow\quad\phi(1)=1.\qquad\qquad
\end{eqnarray*}
From (\ref{2_3}) and the third relation in (\ref{system1}) we have
\begin{equation*}
[A_{\alpha},A^{\dag}_{\alpha}] = 1-\Delta_{\alpha\alpha} \cong
\phi(N_{\alpha}+1)-\phi(N_{\alpha}),
\end{equation*}
or, equivalently,
\begin{equation*}
F_{\alpha\alpha} \equiv \Delta_{\alpha\alpha} - 1 +
\phi(N_{\alpha}+1)-\phi(N_{\alpha}) \cong 0.
\end{equation*}
If the conditions (see (\ref{system1}))
\begin{equation}
[N_{\alpha},A^{\dag}_{\alpha}]\cong
A^{\dag}_{\alpha},\quad [N_{\alpha},A_{\alpha}]\cong
-A_{\alpha}\label{n_commut}
\end{equation}
do hold (meaning that these relations require a subsequent verification), then
\begin{equation*}
\phi(N_{\alpha})A^{\dag}_{\alpha}\!\cong\! A^{\dag}_{\alpha}
\phi(N_{\alpha}\!+\!1)\ \ \Rightarrow \ \
[\phi(N_{\alpha}),A^{\dag}_{\alpha}]\!\cong\! A^{\dag}_{\alpha}
\bigl(\phi(N_{\alpha}\!+\!1)\!-\!\phi(N_{\alpha})\bigr).
\end{equation*}
As a result, we come to
\begin{equation}\label{5_5}
[F_{\alpha\alpha},A^{\dag}_{\alpha}] \!\cong\!
2(\Phi_{\alpha}\Phi^{\dag}_{\alpha}
\Phi_{\alpha})^{\mu\nu}a^{\dag}_{\mu}b^{\dag}_{\nu}\!+\!A^{\dag}_{\alpha}
\Bigl(\phi(N_{\alpha}\!+\!2)\!-\!2\phi(N_{\alpha}\!+\!1)\!+\!
\phi(N_{\alpha})\Bigr).
\end{equation}
Requiring that this commutator vanishes on the vacuum and taking into account that $\phi(0)=0$, $\phi(1)=1$ we obtain
\begin{equation*}
\Phi_{\alpha}\Phi^{\dag}_{\alpha}\Phi_{\alpha} = \Bigl(1-\frac12
\phi(2)\Bigr)\Phi_{\alpha} = \frac{f}{2} \Phi_{\alpha}
\end{equation*}
where the deformation parameter $f$ does appear:
\[
\frac{f}{2}\equiv 1-\frac12 \phi(2)=
\Tr(\Phi^{\dag}_{\alpha}\Phi_{\alpha}\Phi^{\dag}_{\alpha}\Phi_{\alpha})
\ \ \ {\rm for\ all}\ \alpha.
\]
\paragraph{Finding admissible $\phi(n)$ explicitly.}
Equality (\ref{5_5}) can be rewritten as
\begin{equation*}
[F_{\alpha\alpha},A^{\dag}_{\alpha}] \cong
\bigl(2-\phi(2)\bigr)A^{\dag}_{\alpha} + A^{\dag}_{\alpha}
\bigl(\phi(N_{\alpha}+2) - 2\phi(N_{\alpha}+1) +
\phi(N_{\alpha})\bigr).
\end{equation*}
By induction, the equality for the $n$-th commutator can be proven:
\begin{equation*}
[\ldots[F_{\alpha\alpha},A^{\dag}_{\alpha}]\ldots A^{\dag}_{\alpha}]
\cong (A^{\dag}_{\alpha})^n \biggl\{\sum\limits_{k=0}^{n+1}
(-1)^{n+1-k} C^k_{n+1}\phi(N_{\alpha}+k) \biggr\}
\end{equation*}
(here $C_n^k$ denotes binomial coefficients). The requirement that the $n$-th commutator vanishes on the vacuum leads to the recurrence relation
\begin{equation}
\phi(n+1) = \sum\limits_{k=0}^{n} (-1)^{n-k} C^k_{n+1}\phi(k), \quad n\geq
2.\label{recurr1}
\end{equation}
As can be seen, all the values $\phi(n)$ for $n\geq 3$ are determined unambiguously by the two values $\phi(1)$ and $\phi(2)$, which may in general depend on one or more deformation parameters. Taking into account the equality \cite{Korn}
\begin{equation*}
\sum\limits_{k=0}^{n} (-1)^{n-k} k^m C_n^k = \left\{
\eqalign{
0,\quad m<n,\cr
n!,\quad m = n,
}
\right.
\end{equation*}
we find that the only independent solutions of (\ref{recurr1}) are $n$ and $n^2$; their linear combination satisfying the initial conditions $\phi(0)=0$, $\phi(1)=1$ reads
\begin{equation}
\phi(n)=\left(1+\frac{f}{2}\right)n - \frac{f}{2}n^2.\label{solution1}
\end{equation}
This structure function satisfies both the initial conditions and the recurrence relations in (\ref{recurr1}).
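As an explicit check of the lowest nontrivial case: for $n=2$ the recurrence (\ref{recurr1}) gives $\phi(3)=\phi(0)-3\phi(1)+3\phi(2)=3-3f$, where $\phi(0)=0$, $\phi(1)=1$ and $\phi(2)=2-f$ were used, in agreement with (\ref{solution1}) which yields $\phi(3)=3\bigl(1+\frac{f}{2}\bigr)-\frac{9f}{2}=3-3f$.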
\begin{remark}\label{rem1}
In view of the uniqueness of the solution with fixed initial conditions, formula (\ref{solution1}) gives the general solution of
(\ref{recurr1}).
\end{remark}
\begin{remark}\label{rem2}
If we take the Hamiltonian in the form $H=\frac12 \bigl(\phi(N)+\phi(N+1)\bigr)$ then using the obtained results it is not difficult to derive the three-term recurrence relations for both the deformation structure function and energy eigenspectrum:
\begin{eqnarray*}
\phi(n+1)=\frac{2(n+1)}{n}\phi(n)-\frac{n+1}{n-1}\phi(n-1),\\
E_{n+1}=\frac{4n^2+4n-4}{2n^2-1}E_{n}-\frac{2n^2+4n+1}{2n^2-1}E_{n-1}.
\end{eqnarray*}
The latter equality has a typical form of the so-called quasi-Fibonacci \cite{GKR} relation for the eigenenergies. Note that the general case of deformed oscillators with polynomial structure functions $\phi(N)$ (these are quasi-Fibonacci as well) was studied in \cite{GR5}.
\end{remark}
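Purely as an arithmetic check of the latter recurrence, take formally $f=0$, i.e. $\phi(n)=n$ and $E_n=n+\frac12$; then
\begin{equation*}
\frac{4n^2+4n-4}{2n^2-1}\Bigl(n+\frac12\Bigr)-\frac{2n^2+4n+1}{2n^2-1}\Bigl(n-\frac12\Bigr)=
\frac{2n^3+3n^2-n-\frac32}{2n^2-1}=n+\frac32=E_{n+1},
\end{equation*}
as it should be for the usual harmonic oscillator spectrum.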
\subsection{Treatment of the quasiboson number operator}
The quasiboson number operator $N_{\alpha}$ can be introduced in different ways. Its definition is dictated by the requirements $G_0\cong 0$, $G_1\cong 0$ (recall that $G_0$ and $G_1$ are defined just after (\ref{system1_0})) and also by the self-consistency of the realization. A possible definition could be given by the relation $N_{\alpha}\mathop{=}\limits^{def} \! \phi^{-1}\!(A^{\dag}_{\alpha}A_{\alpha})$, or by $N_{\alpha}\mathop{=}\limits^{def}\!\phi^{-1}\!(A_{\alpha}A^{\dag}_{\alpha})-\!1$. We will not single out either of these two definitions, but consider the general possibility:
\begin{equation*}
N_{\alpha}\mathop{=}\limits^{def} \chi(A^{\dag}_{\alpha}A_{\alpha},\varepsilon_{\alpha}),\quad {\rm where}\quad \varepsilon_{\alpha}\equiv 1-\Delta_{\alpha\alpha}= [A_{\alpha},A^{\dag}_{\alpha}].
\end{equation*}
As we have mentioned above, it remains to satisfy relations (\ref{n_commut}), which enable us to define the function $\chi$. Note that the second of them follows from the first one by conjugation:
\begin{equation} \label{usl2}
[N_{\alpha},A^{\dag}_{\alpha}]\cong
A^{\dag}_{\alpha}.
\end{equation}
Since we assume the {\it independence} of different modes, see (\ref{system1}), we consider the case $\gamma_1=\gamma_2=\ldots=\alpha$ in the definition~(\ref{isom_cond}).
It is useful to denote by $L_n$ the operators
\begin{equation}
L_0=N_{\alpha},\quad L_{n+1}=[L_n,A^{\dag}_{\alpha}]=
[\ldots[N_{\alpha},A^{\dag}_{\alpha}]\ldots,A^{\dag}_{\alpha}], \ \
\ n\geq 0 \,. \label{5_22}
\end{equation}
Taking this into account, condition (\ref{usl2}) can be written as
\begin{equation}
L_1|O\rangle=A^{\dag}_{\alpha}|O\rangle,\quad
L_n|O\rangle=0,\quad n>1.\label{q3}
\end{equation}
Now consider three useful statements.
\begin{proposition}\label{prop1}
The following relations are true:
\begin{eqnarray*}
[\Delta_{\alpha\alpha},A^{\dag}_{\alpha}]=f A^{\dag}_{\alpha},\quad [\Delta_{\alpha\alpha},A_{\alpha}]=
-\overline{f}A_{\alpha},\quad f=
2\Tr(\Phi^{\dag}_{\alpha}\Phi_{\alpha}\Phi^{\dag}_{\alpha}\Phi_{\alpha}),\cr
[\varepsilon_{\alpha},A^{\dag}_{\alpha}]=-f A^{\dag}_{\alpha},\quad [\Delta_{\alpha\alpha},N_{\alpha}]\cong
0,\quad \ \ \Delta_{\alpha\alpha}=\Delta_{\alpha\alpha}^{\dag}.
\end{eqnarray*}
\end{proposition}
\noindent
This statement is proven straightforwardly.
\begin{proposition}\label{prop2}
For each $n\geq 0$ we have the equalities:
\begin{eqnarray}
\left[(A^{\dag}_{\alpha}A_{\alpha})^n,A^{\dag}_{\alpha}\right]=A^{\dag}_{\alpha}\left[(A^{\dag}_{\alpha}A_{\alpha}+
\varepsilon_{\alpha})^n-(A^{\dag}_{\alpha}A_{\alpha})^n\right],\label{e1}\\
\left[\varepsilon_{\alpha}^n,A^{\dag}_{\alpha}\right]=A^{\dag}_{\alpha}[(-f+\varepsilon_{\alpha})^n-\varepsilon_{\alpha}^n]. \label{e2}
\end{eqnarray}
\end{proposition}
\noindent
Using propositions~\ref{prop1} and~\ref{prop2} and the fact that $A^{\dag}_{\alpha}A_{\alpha}$ and $\varepsilon_{\alpha}$ commute exactly (as operators), we come to
the following
\begin{proposition}\label{prop3}
For $N_{\alpha}$ defined as $N_{\alpha}=\chi(A^{\dag}_{\alpha}A_{\alpha},\varepsilon_{\alpha})$, and $n\geq 0$, there is the following equality for the $n$-fold commutator (\ref{5_22}):
\begin{eqnarray*}
L_n=(A^{\dag}_{\alpha})^n\chi(A^{\dag}_{\alpha}A_{\alpha}+n\varepsilon_{\alpha}-\sigma_n
f, \varepsilon_{\alpha} - n f) -
\sum\limits_{k=0}^{n-1}C_n^k (A^{\dag}_{\alpha})^{n-k}L_k,\\
\qquad\qquad\sigma_n=\frac{n(n-1)}{2}.
\end{eqnarray*}
\end{proposition}
\noindent The proofs of propositions \ref{prop2} and \ref{prop3} are given in Appendices A and B.
Then conditions (\ref{q3}) turn into equalities
\begin{equation*}
\left\{ \eqalign{ A^{\dag}_{\alpha}\chi(A^{\dag}_{\alpha}A_{\alpha}+
\varepsilon_{\alpha},\varepsilon_{\alpha} - f)|O\rangle=
A^{\dag}_{\alpha}|O\rangle,\cr
(A^{\dag}_{\alpha})^n\chi(A^{\dag}_{\alpha}A_{\alpha}+
n\varepsilon_{\alpha}-\sigma_n f, \varepsilon_{\alpha} - n
f)|O\rangle =\cr \qquad\qquad =C_n^1
(A^{\dag}_{\alpha})^{n-1}L_1|O\rangle =
n(A^{\dag}_{\alpha})^{n}|O\rangle,\quad n>1.
}
\right.
\end{equation*}
To satisfy these, it is necessary that
\begin{equation}\label{def_req}
\chi(n-\sigma_n f,1-n f) = n,\quad n\geq 1.
\end{equation}
So, the condition (\ref{def_req}) guarantees the validity of the commutation relations (\ref{n_commut}), and therefore the consistency of the whole representation of quasibosons by deformed bosons. As one can see, both definitions $N_{\alpha}\mathop{=}\limits^{def}
\phi^{-1}(A^{\dag}_{\alpha}A_{\alpha})$ and $N_{\alpha}\mathop{=}\limits^{def}\phi^{-1}(A_{\alpha}A^{\dag}_{\alpha})-1$ satisfy (\ref{def_req}). There are also other definitions, like $N_{\alpha}\mathop{=}\limits^{def} (1-p) \phi^{-1}(A^{\dag}_{\alpha}A_{\alpha}) + p (\phi^{-1}(A_{\alpha}A^{\dag}_{\alpha})-1)$, $0<p<1$, which satisfy (\ref{def_req}) and lead, as can be checked, to a self-consistent representation of quasibosons.
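For instance, for the first of these definitions, $\chi(x,\varepsilon_{\alpha})=\phi^{-1}(x)$, condition (\ref{def_req}) reads $\phi^{-1}(n-\sigma_n f)=n$, i.e. $\phi(n)=n-\frac{f}{2}n(n-1)=\bigl(1+\frac{f}{2}\bigr)n-\frac{f}{2}n^2$, which is precisely the DSF (\ref{solution1}); this makes the self-consistency of the scheme explicit.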
\subsection{General solution for matrices
$\Phi_{\alpha}$}\label{sec5}
In this subsection we describe how to find admissible $d_a\times d_b$ matrices $\Phi_{\alpha}$. These should satisfy the system
\begin{equation} \label{system2}
\left\{
\eqalign{
\Tr(\Phi_{\alpha}\Phi_{\beta}^{\dag})=\delta_{\alpha\beta},\cr
\Phi_{\alpha}\Phi^{\dag}_{\alpha}\Phi_{\alpha}=\frac{f}{2}\Phi_{\alpha},\cr
\Phi_{\beta}\Phi^{\dag}_{\alpha}\Phi_{\gamma}+\Phi_{\gamma}\Phi^{\dag}_{\alpha}\Phi_{\beta}=0.
}
\right.
\end{equation}
{\it Consider first the case $f\ne 0$}. If the matrix $\Phi_\alpha$ is nondegenerate (meaning $d_a=d_b$ and $\det \Phi_{\alpha}\neq 0$) for some $\alpha$, the second relation yields $\Phi_{\alpha}\Phi^{\dag}_{\alpha}=\frac{f}{2}\mathds{1}$. From the third relation at $\gamma=\alpha$ we obtain $\Phi_{\beta}=0$ for all $\beta\neq\alpha$; hence only one value of $\alpha$ is possible for which $\det \Phi_{\alpha}\neq 0$. In that case $\Phi_{\alpha}=\sqrt{f/2}\,U$ with $U$ an arbitrary unitary matrix, where the normalization $\Tr(\Phi_{\alpha}\Phi^{\dag}_{\alpha})=1$ fixes $f/2=1/d_a$, while all the rest $\Phi_{\beta}=0,\ \beta\neq\alpha$. That gives the partial nondegenerate solution of the system. Note that all other solutions will be degenerate for all~$\alpha$.
Let us go over to the analysis of degenerate solutions. At $\gamma=\alpha$ the last equation in (\ref{system2}) reduces to $\Phi_{\beta} \Phi^{\dag}_{\alpha} \Phi_{\alpha} + \Phi_{\alpha} \Phi^{\dag}_{\alpha}\Phi_{\beta}=0$; multiplying it by $\Phi_{\alpha}^{\dag}$ and utilizing the second relation (note that $f$ is real) we infer
\begin{equation}\label{aa4}
K\Phi_{\beta}\Phi_{\alpha}^{\dag} \equiv
\left(\Phi_{\alpha}\Phi^{\dag}_{\alpha}+\frac{f}{2}\mathds{1}\right)\Phi_{\beta}\Phi_{\alpha}^{\dag}=0,
\quad \ K\equiv
\Phi_{\alpha}\Phi^{\dag}_{\alpha}+\frac{f}{2}\mathds{1}.
\end{equation}
From the second relation of the system (\ref{system2}) we also obtain:
\begin{equation*}
\forall x\in \Im\Phi_{\alpha}:\quad \Phi_{\alpha}\Phi^{\dag}_{\alpha}x=\frac{f}{2}x\quad\Rightarrow\quad
\dim \Im\Phi_{\alpha}\Phi^{\dag}_{\alpha}\geq \dim \Im\Phi_{\alpha}.
\end{equation*}
Taking into account the latter and the fact that $\Im\Phi_{\alpha}\Phi^{\dag}_{\alpha}\subseteq\Im\Phi_{\alpha}$ we find
\begin{equation}\label{aa3}
\Im\Phi_{\alpha}\Phi^{\dag}_{\alpha}=\Im\Phi_{\alpha}.
\end{equation}
Applying the Fredholm theorem first to $\Phi_{\alpha}$ and then to $\Phi_{\alpha}\Phi^{\dag}_{\alpha}$ and using (\ref{aa3}) we arrive at the decompositions
\begin{eqnarray*}
\forall \alpha:\quad \mathds{C}^{d_a}=\Im\Phi_{\alpha}\oplus\mathop{\mathrm{Ker}}\nolimits\Phi^{\dag}_{\alpha}=
\Im\Phi_{\alpha}\Phi^{\dag}_{\alpha}\oplus\mathop{\mathrm{Ker}}\nolimits\Phi_{\alpha}\Phi^{\dag}_{\alpha},\\
\qquad\quad \mathds{C}^{d_a}=\Im\Phi_{\alpha}\oplus\mathop{\mathrm{Ker}}\nolimits\,\Phi_{\alpha}\Phi^{\dag}_{\alpha}.
\end{eqnarray*}
On each of subspaces $\Im\Phi_{\alpha}$ and $\mathop{\mathrm{Ker}}\nolimits\,\Phi_{\alpha}\Phi^{\dag}_{\alpha}$, which are eigenspaces for $K$, the operator $K$ is nondegenerate:
\begin{equation*}
\forall x\in \Im\Phi_{\alpha}:\quad Kx=fx,\quad{\rm and}\quad
\forall y\in\mathop{\mathrm{Ker}}\nolimits\Phi_{\alpha}\Phi^{\dag}_{\alpha}:\quad
Ky=\frac{f}{2}y.
\end{equation*}
Consequently, the operator $K$ is nondegenerate on the whole $\mathds{C}^{d_a}$. Using (\ref{aa4}) we find
\begin{equation*}
\forall \alpha\neq\beta:\quad
\Phi_{\beta}\Phi_{\alpha}^{\dag}=0\quad {\rm or,\ equivalently,} \quad
\Phi_{\alpha}\Phi_{\beta}^{\dag}=0.
\end{equation*}
As a result, we arrive at the system which is equivalent to the initial one (\ref{system2}) and to the respective (for each of the
equations) implications ($\alpha\neq \beta$):
\begin{equation*}
\left\{ \eqalign{ \Tr(\Phi_{\alpha}\Phi^{\dag}_{\alpha})\!=\!1,\cr
\Phi_{\alpha}\Phi^{\dag}_{\alpha}\!\cdot\!\Phi_{\alpha}\!=\!(f/2)\!\cdot\!\Phi_{\alpha},\cr
\Phi_{\alpha}\Phi^{\dag}_{\alpha}\!\cdot\!\Phi_{\beta}\!=\!0,\cr
\Phi_{\alpha}\Phi_{\beta}^{\dag}\!=\!0. } \right. \eqalign{
\Rightarrow\ \ \dim\Im\Phi_{\alpha}\Phi^{\dag}_{\alpha}\!=\!{\rm
rank}\, \Phi_{\alpha}\!=\!2/f\equiv m,\,\ {\rm for\ all}\ \alpha,\cr
\Rightarrow\ \ \Im\Phi_{\alpha}{\rm\ -\ eigensubspace\
of}\,\Phi_{\alpha}\Phi^{\dag}_{\alpha},\cr \Rightarrow\ \
\forall\beta\neq\alpha\, \Im\Phi_{\beta}\subset
\mathop{\mathrm{Ker}}\nolimits\Phi_{\alpha}\Phi^{\dag}_{\alpha}=\mathop{\mathrm{Ker}}\nolimits\Phi_{\alpha}^{\dag},\cr
\Rightarrow\ \ \Im\Phi^{\dag}_{\beta}\subset \mathop{\mathrm{Ker}}\nolimits\Phi_{\alpha}. }
\end{equation*}
So, the deformation parameter $f$ has a discrete range of values determined by $m$:
\begin{equation*}
f=\frac2m.
\end{equation*}
The set of the solutions depends on the relation between $\sum_{\alpha}m$ and $\min(d_a,d_b)$. If $\sum_{\alpha}m>\min(d_a,d_b)$, the set of solutions is empty. If $\sum_{\alpha}m\leq\min(d_a,d_b)$, then, according to the relations
\begin{equation*}
\mathds{C}^{d_a}=\Im\Phi_{\alpha}\oplus\mathop{\mathrm{Ker}}\nolimits\Phi^{\dag}_{\alpha},\quad
\Im\Phi_{\beta}\subset
\mathop{\mathrm{Ker}}\nolimits\Phi^{\dag}_{\alpha},\quad\forall\beta\neq\alpha,
\end{equation*}
the space $\mathds{C}^{d_a}$ ($\mathds{C}^{d_b}$) decomposes into the direct sum of linearly independent subspaces:
\begin{eqnarray*}
\mathds{C}^{d_a}=\biggl(\bigoplus\limits_{\alpha}
\Im\Phi_{\alpha}\biggr)\oplus R,\quad \dim
R=d_a-\sum\limits_{\alpha}m,\quad\Phi^{\dag}_{\alpha}R=0;\\
\mathds{C}^{d_b}=\biggl(\bigoplus\limits_{\alpha}
\Im\Phi^{\dag}_{\alpha}\biggr)\oplus \tilde{R},\quad \dim
\tilde{R}=d_b-\sum\limits_{\alpha}m, \quad \Phi_{\alpha}\tilde{R}=0.
\end{eqnarray*}
Let $\{e_{1\alpha},\ldots,e_{m\alpha}\}$ be the orthonormal basis in the space $\Im\Phi_{\alpha}$, and $U_1(d_a)$ be the corresponding transition matrix to these bases from the initial one of $\mathds{C}^{d_a}$. Likewise, let $\{f_{1\alpha},\ldots,f_{m\alpha}\}$ be the orthonormal basis in the
space $\Im\Phi^{\dag}_{\alpha}$, and $U_2(d_b)$ the corresponding transition matrix from the initial basis in $\mathds{C}^{d_b}$. In the new bases, the matrix $\Phi_{\alpha}$ takes block-diagonal form:
\begin{equation*}
U^{\dag}_1(d_a)\Phi_{\alpha}U_2(d_b)= \left(
\begin{array}{ccc}
0 & 0 & 0 \\
0 & \ \tilde{\Phi}_\alpha & 0 \\
0 & 0 & 0 \\
\end{array}
\right) .
\end{equation*}
The $m\times m$ matrix $\tilde{\Phi}_{\alpha}$ satisfies the equation $\tilde{\Phi}_{\alpha}\tilde{\Phi}^{\dag}_{\alpha}=\frac{f}{2} \mathds{1}_m$. Its general solution is given in terms of a unitary matrix: $\tilde{\Phi}_{\alpha}=\sqrt{f/2}\ U_{\alpha}(m)$. Thus the general solution of the initial system (\ref{system2}) takes the form
\begin{equation}
\Phi_{\alpha}= U_1(d_a)
\mathop{\mathrm{diag}}\nolimits\biggl\{0,\sqrt{\frac{f}{2}}U_{\alpha}(m),0\biggr\}U^{\dag}_2(d_b).\label{gen_solution}
\end{equation}
In this formula, for every matrix $\Phi_{\alpha}$, the block $\sqrt{\frac{f}{2}}U_{\alpha}(m)$ occupies its $\alpha$-th place and does not intersect with the corresponding block of any other matrix $\Phi_{\beta}$ with $\beta\neq \alpha$. To conclude: we have obtained all possible quasibosonic composite operators, expressed by (\ref{anzats}) and (\ref{gen_solution}), which can be realized by the algebra of deformed oscillators.
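For illustration, the simplest instance of (\ref{gen_solution}) is $m=1$, i.e. $f=2$: then, up to the unitary rotations $U_1$, $U_2$ of the constituents' modes, $A^{\dag}_{\alpha}=e^{i\theta_{\alpha}}\tilde{a}^{\dag}_{\alpha}\tilde{b}^{\dag}_{\alpha}$ is a single product of (rotated) fermionic creation operators, and (\ref{solution1}) becomes $\phi(n)=2n-n^2$. In particular $\phi(2)=0$, so the two-quasiboson state within one mode has zero norm: the Pauli principle for the constituents is faithfully encoded in the effective deformation.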
{\it The case $f=0$ in (\ref{system2})}. It can be shown that $\Phi_{\alpha}$ must vanish for such $f$: this follows by applying the singular value decomposition to each of the matrices in the equation $\Phi_{\alpha}\Phi^{\dag}_{\alpha}\Phi_{\alpha}=0$. The fact that $\Phi_{\alpha}=0$ means, see (\ref{anzats}) and the normalization just after it, that the pure boson, being the special $f=0$ case of the deformed boson with the DSF (\ref{solution1}), is unsuitable for the realization of the two-fermion composite quasiboson.
\section{Quasibosons with $q$-deformed constituent fermions}\label{sec7}
Now let us go over to the $q$-generalization of the model considered above. Namely, we adopt a nontrivial $q$-deformation for the constituents, the other assumptions being left as above. So, we start from a set of $q$-fermions, see~\cite{Viswanthan}, independent in the fermionic sense:
\begin{eqnarray}
a_{\mu}a^{\dag}_{\mu'}+q^{\delta_{\mu\mu'}}a^{\dag}_{\mu'}a_{\mu}=\delta_{\mu\mu'},\qquad
b_{\nu}b^{\dag}_{\nu'}+q^{\delta_{\nu\nu'}}b^{\dag}_{\nu'}b_{\nu}=\delta_{\nu\nu'},\label{q-commut}\\
a_{\mu}a_{\mu'}+a_{\mu'}a_{\mu}=0,\ \ \mu\neq\mu',\quad
b_{\nu}b_{\nu'}+b_{\nu'}b_{\nu}=0,\ \ \nu\neq\nu'.\label{q-nez}
\end{eqnarray}
The commutation relations~(\ref{q-commut}) within one mode, i.e. for $\mu=\mu'$ and $\nu=\nu'$, completely determine the set of admissible values of the parameter $q$ and the presence or absence (and the order) of nilpotency of the operators $a^{\dag}_{\mu}$ and $b^{\dag}_{\nu}$, depending on $q$. More precisely, this is reflected in the following statement.
\begin{lemma}\label{lemma_1}
For the positivity of the norm of $q$-fermion states it is necessary to put $q\in \mathds{R}$ and $q\le 1$. If $q=1$ then $a^\dag_{\mu}$ and $b^\dag_{\nu}$ are nilpotent of second order; otherwise, if $q<1$ the operators $a^\dag_{\mu}$ and $b^\dag_{\nu}$ are not nilpotent of any order:
\begin{eqnarray}
q=1 \quad\Rightarrow\quad (a^\dag_{\mu})^2 = 0, \quad (b^\dag_{\nu})^2 = 0;\cr
q<1 \quad\Rightarrow\quad (a^\dag_{\mu})^k\neq 0,\quad (b^\dag_{\nu})^k\neq 0, \quad k\ge2.\label{q4}
\end{eqnarray}
\end{lemma}
\begin{proof}
The Lemma follows from the expression for the norm of the vector $x\!=\!(a^{\dag}_{\mu})^k |0\rangle$:
\begin{eqnarray*}
\fl ||x||^2=\langle0| a_{\mu}^k (a_{\mu}^{\dag})^k|0\rangle = \langle0|
a_{\mu}^{k-1}[n^a_\mu+1]_{-q}(a_{\mu}^{\dag})^{k-1}|0\rangle =
\langle0| a_{\mu}^{k-1}(a_{\mu}^{\dag})^{k-1}[n^a_\mu+k]_{-q}|0\rangle =\\
= [k]_{-q} \langle0| a_{\mu}^{k-1}(a_{\mu}^{\dag})^{k-1}|0\rangle =
\ldots = [k]_{-q}[k-1]_{-q}\cdot...\cdot[1]_{-q},
\end{eqnarray*}
where the notation $[n]_{-q}\equiv \bigl((-q)^n-1\bigr)/\bigl((-q)-1\bigr)$ is nothing but the deformation structure function for the $q$-fermions; $n^a_\mu$ is the number operator for $q$-fermions of $a$ type. The same considerations apply to the operators $b^\dag_{\nu}$. This ends the proof.
\end{proof}
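In particular, for $k=2$ this gives $||(a^{\dag}_{\mu})^2|0\rangle||^2=[2]_{-q}[1]_{-q}=1-q$, which is positive for $q<1$ and vanishes exactly at $q=1$, reproducing the second order nilpotency of usual fermions.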
The $q=1$ case (i.e., usual fermions with the well-known nilpotency of their creation/annihilation operators) was completely analyzed in the preceding section (and also in~\cite{GKM-1}). Here we restrict ourselves to the case $q<1$, for which the non-nilpotency in (\ref{q4}) holds for any~$k$.
The composite quasibosons' creation and annihilation operators are defined as
\begin{equation*}
A^{\dag}_{\alpha}=\sum\limits_{\mu\nu}\Phi^{\mu\nu}_{\alpha}a^{\dag}_{\mu}b^{\dag}_{\nu},
\quad
A_{\alpha}=\sum\limits_{\mu\nu}\overline{\Phi}^{\mu\nu}_{\alpha}b_{\nu}a_{\mu},
\end{equation*}
that is, like in (\ref{anzats}). The requirements of the self-consistency of the realization (by deformed bosons) remain intact, see (\ref{system1_0}) and~(\ref{system1}):
\begin{eqnarray}
\label{treb1} A^{\dag}_{\alpha}A_{\alpha}\cong \phi(N_{\alpha}),\quad A_{\alpha}A^{\dag}_{\alpha}\cong \phi(N_{\alpha}+1),\\
\label{treb2} [A^{\dag}_{\alpha},A^{\dag}_{\beta}]\cong 0\ \Leftrightarrow\ [A_{\alpha},A_{\beta}]\cong 0,\quad [A_{\alpha},A^{\dag}_{\beta}]\cong0,\ \alpha\neq\beta,\\
\label{treb3} [N_{\alpha},A^{\dag}_{\alpha}]\cong A^{\dag}_{\alpha},\quad [N_{\alpha},A_{\alpha}]\cong -A_{\alpha}.
\end{eqnarray}
In this case the requirement of independence $[A^{\dag}_{\alpha},A^{\dag}_{\beta}]\cong 0$, as one can easily check, leads to the following condition on matrices $\Phi_{\alpha}$:
\begin{equation}\label{nez1}
\Phi^{\mu\nu}_{\alpha}\Phi^{\mu\nu'}_{\beta} = \Phi^{\mu\nu'}_{\alpha}\Phi^{\mu\nu}_{\beta},\quad \Phi^{\mu\nu}_{\alpha}\Phi^{\mu'\nu}_{\beta} = \Phi^{\mu'\nu}_{\alpha}\Phi^{\mu\nu}_{\beta}.
\end{equation}
The second relation in (\ref{treb1}) implies that there should be
\begin{equation}
A_{\alpha}(A^{\dag}_{\alpha})^n |O\rangle =
\phi(N_{\alpha}+1)(A^{\dag}_{\alpha})^{n-1}|O\rangle,\qquad \
n=1,2,3...\,.\label{system3}
\end{equation}
Using (\ref{treb3}) we obtain:
\[
\phi(N_{\alpha}+1)(A^{\dag}_{\alpha})^{n-1}|O\rangle = (A^{\dag}_{\alpha})^{n-1}\phi(N_{\alpha}+n)|O\rangle.
\]
As a result, we arrive at
\begin{equation}
A_{\alpha}(A^{\dag}_{\alpha})^n |O\rangle = \phi(n) (A^{\dag}_{\alpha})^{n-1}|O\rangle,\quad n=1,2,3...\,
.\label{system3-2}
\end{equation}
It can be checked by induction that
\begin{eqnarray*}
A_{\alpha}(A^{\dag}_{\alpha})^n =
(-1)^{\left[\frac{n-1}{2}\right]}\overline {\Phi_{\alpha}^{\mu\nu}}
\prod_{l=1}^n \Phi_{\alpha}^{\mu_l\nu_l}\cdot\\
\cdot\biggl[\sum\limits_{i=1}^n (-1)^{i-1}
\delta_{\mu\mu_i}q^{\sum\limits_{s=1}^{i-1}\delta_{\mu\mu_s}}
\prod_{\scriptstyle r=1\atop\scriptstyle r\neq i}^n a^{\dag}_{\mu_r}
+ (-1)^n q^{\sum\limits_{s=1}^n \delta_{\mu\mu_s}} \prod_{r=1}^n
a^{\dag}_{\mu_r}\cdot a_{\mu}\biggr]\cdot\\
\cdot\biggl[\sum\limits_{k=1}^n (-1)^{k-1}
\delta_{\nu\nu_k}q^{\sum\limits_{s=1}^{k-1} \delta_{\nu\nu_s}}
\prod_{\scriptstyle r=1\atop\scriptstyle r\neq k}^n b^{\dag}_{\nu_r}
+ (-1)^n q^{\sum\limits_{s=1}^n \delta_{\nu\nu_s}} \prod_{r=1}^n
b^{\dag}_{\nu_r}\cdot b_{\nu}\biggr].
\end{eqnarray*}
Then, using equation (\ref{system3-2}) we arrive at
\begin{eqnarray}
\phi(n)\prod_{l=1}^{n-1} \Phi_{\alpha}^{\mu_l\nu_l}
a^{\dag}_{\mu_l}b^{\dag}_{\nu_l}|O\rangle =
(-1)^{\left[\frac{n-1}{2}\right]}\overline{\Phi_{\alpha}^{\mu\nu}}
\prod_{l=1}^n \Phi_{\alpha}^{\mu_l\nu_l}\cdot\nonumber\\
\cdot\biggl[\sum\limits_{i=1}^n (-1)^{i-1}
\delta_{\mu\mu_i}q^{\sum\limits_{s=1}^ {i-1} \delta_{\mu\mu_s}}
\prod_{\scriptstyle r=1\atop\scriptstyle r\neq i}^n a^{\dag}_{\mu_r}
+ (-1)^n q^{\sum\limits_{s=1}^n \delta_{\mu\mu_s}} \prod_{r=1}^n a^{\dag}_{\mu_r}\cdot a_{\mu}\biggr]\cdot\nonumber\\
\cdot\biggl[\sum\limits_{k=1}^n (-1)^{k-1}
\delta_{\nu\nu_k}q^{\sum\limits_{s=1}^{k-1} \delta_{\nu\nu_s}}
\prod_{\scriptstyle r=1\atop\scriptstyle r\neq k}^n b^{\dag}_{\nu_r}
+ (-1)^n q^{\sum\limits_{s=1}^n \delta_{\nu\nu_s}} \prod_{r=1}^n
b^{\dag}_{\nu_r}\cdot b_{\nu}\biggr]|O\rangle.\label{vspm}
\end{eqnarray}
Note that if (\ref{vspm}) holds on the vacuum, the following equality holds on any state:
\begin{eqnarray}
(-1)^{\left[\frac{n-1}{2}\right]}\overline {\Phi_{\alpha}^{\mu\nu}}
\prod_{l=1}^n \Phi_{\alpha}^{\mu_l\nu_l}\cdot
\biggl[\sum\limits_{i=1}^n (-1)^{i-1}
\delta_{\mu\mu_i}q^{\sum\limits_{s=1}^{i-1} \delta_{\mu\mu_s}}
\prod_{\scriptstyle r=1\atop\scriptstyle r\neq i}^n a^{\dag}_{\mu_r}
\biggr]\cdot\nonumber\\
\cdot\biggl[\sum\limits_{k=1}^n (-1)^{k-1}
\delta_{\nu\nu_k}q^{\sum\limits_{s=1}^{k-1} \delta_{\nu\nu_s}}
\prod_{\scriptstyle r=1\atop\scriptstyle r\neq k}^n
b^{\dag}_{\nu_r}\biggr]=\phi(n)\prod_{l=1}^{n-1}
\Phi_{\alpha}^{\mu_l\nu_l}a^{\dag}_{\mu_l}b^{\dag}_{\nu_l}\,.\label{vspm2}
\end{eqnarray}
As a recursive step, let us consider the following relation valid for $n+1$:
\begin{eqnarray*}
\fl A_{\alpha}(A^{\dag}_{\alpha})^{n+1} =
(-1)^{\left[\frac{n}{2}\right]}
\overline{\Phi_{\alpha}^{\mu\nu}}\prod_{l=1}^n
\Phi_{\alpha}^{\mu_l\nu_l} \cdot\biggl[\sum\limits_{i=1}^{n+1}
(-1)^{i-1} \delta_{\mu\mu_i}q^{\sum\limits_{s=1}^{i-1}
\delta_{\mu\mu_s}} \prod_{\scriptstyle r=1\atop\scriptstyle r\neq
i}^{n+1} a^{\dag}_{\mu_r}\biggr]\cdot\\
\cdot\biggl[\sum\limits_{k=1}^{n+1} (-1\!)^{k-1}
\delta_{\nu\nu_k}q^{\sum\limits_{s\!=\!1}^{k\!-\!1} \!\delta_{\nu\nu_s}}
\!\!\prod_{\scriptstyle r=1\atop\scriptstyle r\neq k}\!
b^{\dag}_{\nu_r}\biggr] \Phi_{\alpha}^{\mu_{n\!+\!1}\nu_{n\!+\!1}} \!+\!
(-1\!)^{\left[\frac{n-1}{2}\right]} \overline{\Phi_{\alpha}^{\mu\nu}}
\!\prod_{l=1}^n\! \Phi_{\alpha}^{\mu_l\nu_l}\!\cdot \\
\biggl[\biggl(\sum\limits_{i=1}^{n+1} (-1)^{i-1}
\delta_{\mu\mu_i}q^{\sum\limits_{s=1}^{i-1} \delta_{\mu\mu_s}}
\prod_{\scriptstyle r=1\atop\scriptstyle r\neq i}
a^{\dag}_{\mu_r}\biggr)\cdot (-1)^n q^{\sum\limits_{s=1}^{n+1}
\delta_{\nu\nu_s}} \prod_{r=1}^n b^{\dag}_{\nu_r}\cdot b_{\nu} + \\
+ (-1)^n q^{\sum\limits_{s=1}^{n+1} \delta_{\mu\mu_s}} \prod_{r=1}^n
a^{\dag}_{\mu_r}\cdot a_{\mu} \biggl(\sum\limits_{k=1}^{n+1}
(-1)^{k-1}
\delta_{\nu\nu_k}q^{\sum\limits_{s=1}^{k-1}\delta_{\nu\nu_s}}
\prod_{\scriptstyle r=1\atop\scriptstyle r\neq k}^{n}
b^{\dag}_{\nu_r}\biggr)+\\
+ q^{\sum\limits_{s=1}^{n+1} \delta_{\mu\mu_s}+\delta_{\nu\nu_s}}
\prod_{r=1}^n a^{\dag}_{\mu_r} \cdot a_{\mu} \prod_{r=1}^n
b^{\dag}_{\nu_r}\cdot b_{\nu}\biggr]
\Phi_{\alpha}^{\mu_{n+1}\nu_{n+1}}a^{\dag}_{\mu_{n+1}}b^{\dag}_{\nu_{n+1}}
\mathop{=}\limits^{(\ref{vspm2})}\\
\mathop{=}\limits^{(\ref{vspm2})}
\phi(n)\!\prod_{l=1}^n\! \Phi_{\alpha}^{\mu_l\nu_l}
a^{\dag}_{\mu_l}b^{\dag}_{\nu_l} \!+\!
(-1\!)^{\left[\frac{n}{2}\right]}\overline{\Phi_{\alpha}^{\mu\nu}}
\!\prod_{l=1}^{n+1}\! \Phi_{\alpha}^{\mu_l\nu_l}\cdot\\
\Biggl[(-1\!)^n\!\biggl(\!\sum\limits_{i=1}^n (-1\!)^{i\!-\!1} \delta_{\mu\mu_i}
q^{\sum\limits_{s\!=\!1}^{i\!-\!1} \!\delta_{\mu\mu_s}} \!\!\prod_{\scriptstyle
r=1\atop\scriptstyle r\neq i}\! a^{\dag}_{\mu_r}\!\biggr)
\Bigl(\!\delta_{\nu\nu_{n\!+\!1}}q^{\sum\limits_{s\!=\!1}^n\!
\delta_{\nu\nu_s}} \!\prod_{r=1}^n\! b^{\dag}_{\nu_r} \!\!-\! q^{\sum\limits_{s\!=\!1}^{n\!+\!1} \!\delta_{\nu\nu_s}}
\!\prod_{r=1}^{n+1}\! b^{\dag}_{\nu_r}\!\!\cdot\! b_{\nu}\!\Bigr) \!+\\
+ (-1\!)^n\!\Bigl(\!\delta_{\mu\mu_{n\!+\!1}}q^{\sum\limits_{s\!=\!1}^n \!\delta_{\mu\mu_s}}
\!\!\!\prod_{r=1}^n a^{\dag}_{\mu_r}
\!\!-\! q^{\sum\limits_{s\!=\!1}^{n\!+\!1} \!\delta_{\mu\mu_s}}
\!\!\!\prod_{r=1}^{n+1}\! a^{\dag}_{\mu_r}\!\!\cdot\! a_{\mu}\!\Bigr)
\biggl(\!\sum\limits_{k=1}^n (-1\!)^{k\!-\!1}
\delta_{\nu\nu_k}q^{\sum\limits_{s\!=\!1}^{k\!-\!1} \!\delta_{\nu\nu_s}}
\!\!\!\prod_{\scriptstyle r=1\atop\scriptstyle r\neq k}^{n+1}\! b^{\dag}_{\nu_r}\!\biggr)\!+\\
+ \Bigl(\!\delta_{\mu\mu_{n\!+\!1}}q^{\sum\limits_{s\!=\!1}^n\!
\delta_{\mu\mu_s}} \!\!\!\prod_{r=1}^n\! a^{\dag}_{\mu_r} \!\!-\!
q^{\sum\limits_{s\!=\!1}^{n\!+\!1} \!\delta_{\mu\mu_s}} \!\!\!\prod_{r=1}^{n+1}\!
a^{\dag}_{\mu_r}\!\!\cdot\! a_{\mu}\!\Bigr)
\Bigl(\!\delta_{\nu\nu_{n\!+\!1}}q^{\sum\limits_{s\!=\!1}^n
\!\delta_{\nu\nu_s}} \!\!\!\prod_{r=1}^n b^{\dag}_{\nu_r}\!\! -\!
q^{\sum\limits_{s\!=\!1}^{n\!+\!1} \!\delta_{\nu\nu_s}} \!\!\!\prod_{r=1}^{n+1}\!
b^{\dag}_{\nu_r}\!\!\cdot\! b_{\nu}\!\Bigr)\Biggr]
\end{eqnarray*}
where at the last stage we have used (\ref{vspm2}). Substituting the last expression for $A_{\alpha}(A^{\dag}_{\alpha})^{n+1}$ into (\ref{system3-2}) rewritten for $n\!\rightarrow\! n\!+\!1$ we deduce the following relation that involves the linear combination:
\vspace{-1.5mm}
\begin{equation}
\sum_{\mu_1...\mu_n,\nu_1...\nu_n} B^{\mu_1...\mu_n,\nu_1...\nu_n}(\Phi_{\alpha},q) \cdot
e_{\mu_1...\mu_n,\nu_1...\nu_n} = 0,\label{rav3}
\end{equation}
\vspace{-0.5mm}
where the coefficients are
\vspace{-0.5mm}
\begin{eqnarray*}
\fl B^{\mu_1...\mu_n,\nu_1...\nu_n}(\Phi_{\alpha},q) = -\sum\limits_{i=1}^{n} q^{\sum\limits_{s=1}^{i-1}
(\delta_{\mu\mu_s}+\delta_{\nu\nu_s})}
\Phi_{\alpha}^{\mu_n\nu}\overline{\Phi_{\alpha}^{\mu\nu}}\Phi_{\alpha}^{\mu\nu_n} \prod_{l=1}^{n-1}
\Phi_{\alpha}^{\mu_l\nu_l}\cdot\\
\Bigl((-1)^{\sum\limits_{r=i}^{n-1} \delta_{\nu_r\nu_{r+1}}}
q^{\sum\limits_{s=i}^n \delta_{\nu\nu_s}}
+ (-1)^{\sum\limits_{r=i}^{n-1} \delta_{\mu_r\mu_{r+1}}}
q^{\sum\limits_{s=i}^n \delta_{\mu\mu_s}}\Bigr) +\\
+q^{\sum\limits_{s=1}^n
(\delta_{\mu\mu_s}+\delta_{\nu\nu_s})}\overline{\Phi_{\alpha}^{\mu\nu}}
\Phi_{\alpha}^{\mu\nu} \prod_{l=1}^n \Phi_{\alpha}^{\mu_l\nu_l} -
\left[\phi(n+1)- \phi(n)\right]\prod_{l=1}^n
\Phi_{\alpha}^{\mu_l\nu_l}
\end{eqnarray*}
and the basis elements are
\begin{equation*}
e_{\mu_1...\mu_n,\nu_1...\nu_n} = a^{\dag}_{\mu_1}b^{\dag}_{\nu_1}... a^{\dag}_{\mu_n}b^{\dag}_{\nu_n}|O\rangle.
\end{equation*}
These basis elements are independent for differing sets of indices $\mu_1...\mu_n$ and $\nu_1...\nu_n$ regardless of any permutations within each set. So let us extract in (\ref{rav3}) the terms with $\mu_1=\ldots=\mu_n$ and $\nu_1=\ldots=\nu_n$; using their linear independence from the others, we infer $B^{\mu_1...\mu_1,\nu_1...\nu_1}(\Phi_{\alpha},q)=0$, that can be rewritten in the following form:
\begin{eqnarray*}
\fl \sum\limits_{i=1}^{n} (\!-1\!)^{n\!+i\!-\!1}\!\Bigl(\!2\!+\!(\delta_{\mu\mu_1}\!\!+\!
\delta_{\nu\nu_1})(q^n\!\!-\!q^{i\!-\!1}\!\!-\!2)\!+\!2\delta_{\mu\mu_1}\!\delta_{\nu\nu_1}(q^{n}\!\!-\!1)(q^{i\!-\!1}\!\!-\!1)\!\Bigr)\!
\Phi_{\alpha}^{\mu_1\nu}\overline{\Phi_{\alpha}^{\mu\nu}}\!
\Phi_{\alpha}^{\mu\nu_1}\!(\!\Phi_{\alpha}^{\mu_1\nu_1}\!)^{n\!-\!1} \!\!+\! \\
+\Bigl(1+(\delta_{\mu\mu_1}+\delta_{\nu\nu_1})(q^n-1)+
\delta_{\mu\mu_1}\delta_{\nu\nu_1}(q^n-1)^2\Bigr)
\overline{\Phi_{\alpha}^{\mu\nu}}\Phi_{\alpha}^{\mu\nu}(\Phi_{\alpha}^{\mu_1\nu_1})^{n}=\\
\qquad\qquad=[\phi(n+1)-\phi(n)](\Phi_{\alpha}^{\mu_1\nu_1})^{n} .
\end{eqnarray*}
Performing the summation over $i$, $\mu$, $\nu$ on the left-hand side, we find
\begin{eqnarray}
\fl ((-1)^n\!-\!1)\left(\Phi_{\alpha}\Phi_{\alpha}^{\dag}\Phi_{\alpha}\right)^{\mu_1\nu_1}\!\!(\Phi_{\alpha}^{\mu_1\nu_1})^{n-1}\!
+\Bigl(\frac12 (-q)^n + \frac{q\!-\!1}{2(q\!+\!1)}q^n - \frac{q}{q\!+\!1}(-1)^n\Bigr)\!\bigl[(\Phi_{\alpha}^{\dag}\Phi_{\alpha})^{\nu_1\nu_1} \!\!+\nonumber\\
+ (\Phi_{\alpha}\Phi_{\alpha}^{\dag})^{\mu_1\mu_1}\bigr](\Phi_{\alpha}^{\mu_1\nu_1})^{n}
+\frac{q-1}{q+1}(q^n-1)\left(q^n-(-1)^n\right)|\Phi_{\alpha}^{\mu_1\nu_1}|^2(\Phi_{\alpha}^{\mu_1\nu_1})^{n}=\nonumber\\
\qquad\qquad=[\phi(n+1)-\phi(n)-1](\Phi_{\alpha}^{\mu_1\nu_1})^{n}.
\label{usl5}
\end{eqnarray}
For all the indices $(\mu_1,\nu_1)$ for which $\Phi_{\alpha}^{\mu_1\nu_1}\neq 0$, the last equation can be divided by $(\Phi_{\alpha}^{\mu_1\nu_1})^{n}$. Summing (\ref{usl5}) over $n$ from $n=1$ to $n=s$ and then replacing in the resulting equality $s\rightarrow n-1$ we obtain:
\begin{eqnarray*}
\fl \Bigl(\frac{1-(-1)^{n}}{2}-n\Bigr)
\frac{(\Phi_{\alpha}\Phi^{\dag}_{\alpha}
\Phi_{\alpha})^{\mu_1\nu_1}}{\Phi_{\alpha}^{\mu_1\nu_1}}+
\Bigl([n]_{-q}-\frac{1-(-1)^n}{2}\Bigr)^2 \cdot|\Phi_{\alpha}^{\mu_1\nu_1}|^2+\\
+\frac{1-(-1)^n}{2}\bigl([n]_{-q}-1\bigr)
\left[(\Phi_{\alpha}^{\dag}\Phi_{\alpha})^{\nu_1\nu_1} +
(\Phi_{\alpha}\Phi_{\alpha}^{\dag})^{\mu_1\mu_1}\right] =
\phi(n)-n,\ n\geq 2.
\end{eqnarray*}
Note that the functions $\left(\frac{1-(-1)^{n}}{2}-n\right)$, $\left([n]_{-q}-\frac{1-(-1)^n}{2}\right)^2$ and $\frac{1-(-1)^n}{2}\bigl([n]_{-q}-1\bigr)$, considered as functions of $n$, are linearly independent for the admissible values of $q$. Hence
$(\Phi_{\alpha}\Phi^{\dag}_{\alpha}\Phi_{\alpha})^{\mu_1\nu_1}/\Phi_{\alpha}^{\mu_1\nu_1}$,
$|\Phi_{\alpha}^{\mu_1\nu_1}|^2$ and
$[(\Phi_{\alpha}^{\dag}\Phi_{\alpha})^{\nu_1\nu_1} +
(\Phi_{\alpha}\Phi_{\alpha}^{\dag})^{\mu_1\mu_1}]$ do not depend on
$(\mu_1,\nu_1)$ if $\Phi_{\alpha}^{\mu_1\nu_1}\neq 0$:
\begin{eqnarray*}
{(\Phi_{\alpha}\Phi^{\dag}_{\alpha}\Phi_{\alpha})^{\mu_1\nu_1}}/{\Phi_{\alpha}^{\mu_1\nu_1}}=p_1,\\
|\Phi_{\alpha}^{\mu_1\nu_1}|^2=p_2,\\
(\Phi_{\alpha}^{\dag}\Phi_{\alpha})^{\nu_1\nu_1} +
(\Phi_{\alpha}\Phi_{\alpha}^{\dag})^{\mu_1\mu_1}=p_3,
\end{eqnarray*}
where $p_1$, $p_2$ and $p_3$ are some numerical parameters. Thus, we obtain
\begin{eqnarray}
\fl \phi(n)=n-\Bigl(n\!-\!\frac{1\!-\!(-1)^{n}}{2}\Bigr)p_1+\Bigl([n]_{-q}\!-\!\frac{1\!-\!(-1)^n}{2}\Bigr)^2p_2 + \frac{1\!-\!(-1)^n}{2}\bigl([n]_{-q}\!-\!1\bigr)p_3.\label{phi2}
\end{eqnarray}
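Note that at $n=1$ all three structures multiplying $p_1$, $p_2$, $p_3$ in (\ref{phi2}) vanish, since $1-\frac{1-(-1)}{2}=0$ and $[1]_{-q}=1$; hence $\phi(1)=1$ holds automatically for arbitrary $p_1$, $p_2$, $p_3$, consistently with the initial conditions.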
\par
Let us now consider the terms in equation (\ref{rav3}) with $n$ equated indices $\mu_1=\ldots=\mu_n$ and with $n-1$ equated indices in the set ($\nu_1,\ldots,\nu_n$), the remaining one being different. Denote the $n-1$ equal indices by $\nu_1$, and the differing one (suppose it occupies the $k$th position) by $\nu_2$. Due to the linear independence of the mentioned terms from the others we obtain the equation
\begin{equation}
\fl \sum_{k=1}^n B^{\mu_1...\mu_1,\nu_1..\nu_k..\nu_1} e_{\mu_1...\mu_1,\nu_1..\nu_k..\nu_1}|_{\nu_k\rightarrow \nu_2} = 0\ \ {\rm i.e.}\ \sum_{k=1}^n (-1)^k B^{\mu_1...\mu_1,\nu_1..\nu_k..\nu_1}|_{\nu_k\rightarrow \nu_2}=0.\label{eq1}
\end{equation}
Introducing auxiliary notations
\begin{equation*} \left\{
\eqalign{
X=\Phi_{\alpha}^{\mu_1\nu_1}\Phi_{\alpha}^{\mu_1\nu_2},\\
Y=\Phi_{\alpha}^{\mu_1\nu_1}\Phi_{\alpha}^{\mu_1\nu_2}
(\Phi_{\alpha}\Phi_{\alpha}^{\dag})^{\mu_1\mu_1},\\
Z=(\Phi_{\alpha}^{\mu_1\nu_1})^2(\Phi_{\alpha}^{\dag}\Phi_{\alpha})^{\nu_1\nu_2},
}
\right.
\end{equation*}
after performing all the summations in (\ref{eq1}) we obtain:
\begin{eqnarray*}
\eqalign{
[Xp_2]
(-1)^nq^{2n}+\\
[(-q^3\!+\!2q^2\!-\!3q\!+\!4)p_2X]
q^{2n}+\\
[((q^2\!-\!2\!-\!q)p_2+2p_3)X+(-\!2\!-\!q)Y]
nq^n+\\
[((-\!3q^3\!+\!17q^2\!+\!q^4\!-\!26\!-\!5q)p_2\!+\!(-\!4q\!+\!2q^2\!+\!2)p_3)X\!+\!(6\!+\!5q\!-\!q^3\!-\!2q^2)Y\!+\\
\qquad\qquad\qquad+(4q\!-\!10q^2\!+\!6)Z]q^n+\\
[((-\!q^3\!+\!q\!+\!2q^2\!-\!2)p_2+(2\!-\!2q)p_3)X+(q^2\!+\!3q\!-\!2)Y+(-\!2\!-\!2q)Z]
(-q)^n+\\
[((q^2\!+\!3\!-\!4q)p_2+(-\!4q^2\!+\!2q\!+\!2)p_3)X+(4q^2\!-\!q\!-\!5)Y+(3q^2\!+\!1)Z]
(-1)^n+\\
[(2p_1+(-\!3q\!+\!5)p_2-2p_3)X+Y+(3q\!-\!3)Z]
n+\\
[((8\!-\!8q^2)p_1+(23\!-\!3q\!-\!19q^2\!+\!7q^3)p_2+(8q\!-\!4q^3\!+\!2q^2\!-\!6)p_3)X+\\
\qquad\qquad\qquad+(4q^3\!-\!5\!-\!12q\!+\!5q^2)Y+(-\!3q^3\!-\!11\!+\!3q\!+\!11q^2)Z]=0.
}
\end{eqnarray*}
Extracting the coefficients of this system at the linearly independent functions $(-1)^nq^{2n}$, $q^{2n}$, $nq^n$, $q^n$, $(-q)^n, (-1)^n, n, 1$ (considered as the elements of the vector space of functions of $n$), we arrive at the following system:
\[
\left\{
\eqalign{
Xp_2=0,\cr
[-q^3+2q^2-3q+4]p_2X=0,\cr
[(q^2\!-\!2\!-\!q)p_2+2p_3]X+[-2-q]Y=0,\cr
[(-\!3q^3\!+\!17q^2\!+\!q^4\!-\!26\!-\!5q)p_2\!+\!(-\!4q\!+\!2q^2\!+\!2)p_3]X\!+\![6\!+\!5q\!-\!q^3\!-\!2q^2]Y\!+\cr
\qquad\qquad\qquad+[4q\!-\!10q^2\!+\!6]Z=0,\cr
[(-\!q^3\!+\!q\!+\!2q^2\!-\!2)p_2+(2\!-\!2q)p_3]X+[q^2\!+\!3q\!-\!2]Y+[-\!2\!-\!2q]Z=0,\cr
[(q^2\!+\!3\!-\!4q)p_2+(-\!4q^2\!+\!2q\!+\!2)p_3]X+[4q^2\!-\!q\!-\!5]Y+[3q^2\!+\!1]Z=0,\cr
[2p_1+(-\!3q\!+\!5)p_2\!-\!2p_3]X+Y+[3q\!-\!3]Z=0,\cr
[(8\!-\!8q^2)p_1\!+\!(23\!-\!3q\!-\!19q^2\!+\!7q^3)p_2\!+\!(8q\!-\!4q^3\!+\!2q^2\!-\!6)p_3]X+\cr
\,\qquad\qquad\qquad+[4q^3\!-\!5\!-\!12q\!+\!5q^2]Y+[-\!3q^3\!-\!11\!+\!3q\!+\!11q^2]Z=0.
}
\right.
\]
The solution of this system is ($q\neq 1$):
\begin{equation*}
\left\{
\eqalign{
X=\Phi_{\alpha}^{\mu_1\nu_1}\Phi_{\alpha}^{\mu_1\nu_2}=0,\cr
Y=\Phi_{\alpha}^{\mu_1\nu_1}\Phi_{\alpha}^{\mu_1\nu_2}(\Phi_{\alpha}\Phi_{\alpha}^{\dag})^{\mu_1\mu_1}=0,\cr
Z=(\Phi_{\alpha}^{\mu_1\nu_1})^2(\Phi_{\alpha}^{\dag}\Phi_{\alpha})^{\nu_1\nu_2}=0.
}
\right.
\end{equation*}
This set of conditions is equivalent to
\begin{equation}\label{aa1}
\Phi_{\alpha}^{\mu_1\nu_1}\Phi_{\alpha}^{\mu_1\nu_2}=0,
\end{equation}
which means that the matrix $\Phi_{\alpha}$ cannot contain two nonzero elements in any one row.
In a similar way we can derive the condition
\begin{equation}\label{aa2}
\Phi_{\alpha}^{\mu_1\nu_1}\Phi_{\alpha}^{\mu_2\nu_1}=0,
\end{equation}
implying that the matrix $\Phi_{\alpha}$ cannot contain two nonzero elements in any one column.
Next, the same analysis as in the previous two paragraphs is performed for those terms in~(\ref{rav3}) for which in the set ($\mu_1,\ldots,\mu_n$) there is only one index (denoted by $\mu_2$) different from the other $(n-1)$ equal ones (all denoted by $\mu_1$), and likewise for the $\nu$-indices: in the set ($\nu_1,\ldots,\nu_n$) there is only one index (denoted by $\nu_2$) different from the other, equal ones (all denoted by $\nu_1$). As a result, we derive
\begin{equation}
\Phi_{\alpha}^{\mu_1\nu_1}\Phi_{\alpha}^{\mu_2\nu_2}=0. \label{aa5}
\end{equation}
That is, the matrix $\Phi_{\alpha}$ cannot have two nonzero elements in differing rows and columns. Combining this with the previous conditions (\ref{aa1}) and (\ref{aa2}), we obtain that the matrix $\Phi_{\alpha}$ cannot contain more than one nonzero element. As a consequence, we obtain the following values for the parameters $p_1,p_2,p_3$:
\[
p_1=p_2=1,\quad p_3=2.
\]
Then the following expression for the deformation structure function results from (\ref{phi2}):
\begin{equation}\label{str_f2}
\phi(n) = \left([n]_{-q}\right)^2.
\end{equation}
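It is straightforward to verify this: substituting $p_1=p_2=1$, $p_3=2$ into (\ref{phi2}) at $n=2$ gives $\phi(2)=2-2+\bigl([2]_{-q}\bigr)^2=(1-q)^2=\bigl([2]_{-q}\bigr)^2$, and similarly for higher $n$.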
The mode-independence conditions contained in (\ref{nez1}) and the equalities (\ref{aa1}), (\ref{aa2}) and (\ref{aa5}) enable us to determine the solution for $\Phi_{\alpha}$: each matrix $\Phi_{\alpha}$ has a single nonzero element and, for $\beta\neq\alpha$, the nonzero elements of $\Phi_{\alpha}$ and $\Phi_{\beta}$ are situated in different rows and different columns:
\begin{equation}
\Phi_{\alpha}^{\mu\nu}=\Phi_{\alpha}^{\mu_0(\alpha)\nu_0(\alpha)}\delta_{\mu\mu_0(\alpha)}\delta_{\nu\nu_0(\alpha)},\qquad |\Phi_{\alpha}^{\mu_0(\alpha)\nu_0(\alpha)}|=1.\label{matrPhi2}
\end{equation}
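This result can be cross-checked directly. With (\ref{matrPhi2}) one has $A^{\dag}_{\alpha}=\Phi_{\alpha}^{\mu_0\nu_0}a^{\dag}_{\mu_0}b^{\dag}_{\nu_0}$, abbreviating $\mu_0\equiv\mu_0(\alpha)$, $\nu_0\equiv\nu_0(\alpha)$, so that, using the norms computed in the proof of Lemma~\ref{lemma_1},
\begin{equation*}
||(A^{\dag}_{\alpha})^n|O\rangle||^2=||(a^{\dag}_{\mu_0})^n(b^{\dag}_{\nu_0})^n|O\rangle||^2=
\bigl([n]_{-q}!\bigr)^2=\phi(1)\phi(2)\cdots\phi(n),
\end{equation*}
where $[n]_{-q}!\equiv[1]_{-q}[2]_{-q}\cdots[n]_{-q}$, which is exactly the norm pattern of a deformed oscillator with the DSF~(\ref{str_f2}).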
For illustrative purposes, a more detailed treatment of two particular examples, including the omitted steps of the derivation above, is provided in~\ref{ap3}. The first example concerns the case with only one possible value $\mu,\nu=1$ of the constituent $q$-fermion modes. The second example concerns the case of two-mode constituents, i.e. of two possible values $\mu,\nu=\overline{1,2}$.
It remains to satisfy the commutation relations~(\ref{treb3}) by means of a correct definition of $N_{\alpha}$. Let $N_{\alpha}$ be defined as $N_{\alpha} \mathop{=}\limits^{def} \chi(A^{\dag}_{\alpha}A_{\alpha}, A_{\alpha}A^{\dag}_{\alpha})$, with the matrices $\Phi_{\alpha}$ being those already found in~(\ref{matrPhi2}). Taking the latter into account we have
\begin{equation*}
A_{\alpha}A^{\dag}_{\alpha}\!\cdot\! (A^{\dag}_{\alpha})^n |O\rangle = [n\!+\!1]^2_{-q} (A^{\dag}_{\alpha})^n |O\rangle,\quad A^{\dag}_{\alpha}A_{\alpha}\!\cdot\! (A^{\dag}_{\alpha})^n |O\rangle = [n]^2_{-q} (A^{\dag}_{\alpha})^n |O\rangle.
\end{equation*}
Then~(\ref{treb3}) is equivalent to
\begin{eqnarray*}
\fl \chi(A^{\dag}_{\alpha}A_{\alpha}, A_{\alpha}A^{\dag}_{\alpha})(A^{\dag}_{\alpha})^{n+1}|O\rangle - A^{\dag}_{\alpha} \chi(A^{\dag}_{\alpha}A_{\alpha}, A_{\alpha}A^{\dag}_{\alpha}) (A^{\dag}_{\alpha})^n|O\rangle = A^{\dag}_{\alpha} (A^{\dag}_{\alpha})^n|O\rangle \Leftrightarrow\\
\fl \Leftrightarrow \chi(A^{\dag}_{\alpha}A_{\alpha}, [n+2]^2_{-q})(A^{\dag}_{\alpha})^{n+1}|O\rangle - A^{\dag}_{\alpha} \chi(A^{\dag}_{\alpha}A_{\alpha}, [n+1]^2_{-q}) (A^{\dag}_{\alpha})^n|O\rangle = (A^{\dag}_{\alpha})^{n+1}|O\rangle \Leftrightarrow\\
\fl \Leftrightarrow\chi([n+1]^2_{-q}, [n+2]^2_{-q})(A^{\dag}_{\alpha})^{n+1}|O\rangle - \chi([n]^2_{-q}, [n+1]^2_{-q}) (A^{\dag}_{\alpha})^{n+1}|O\rangle = (A^{\dag}_{\alpha})^{n+1}|O\rangle \Leftrightarrow\\
\Leftrightarrow \chi([n+1]^2_{-q}, [n+2]^2_{-q}) - \chi([n]^2_{-q}, [n+1]^2_{-q}) = 1,\ \ n\ge0.
\end{eqnarray*}
Thus the condition $\chi\bigl([n]_{-q}^{\,2},[n+1]_{-q}^{\,2}\bigr)\bigr|_n^{n+1}\equiv \chi([n+1]^2_{-q}, [n+2]^2_{-q}) - \chi([n]^2_{-q}, [n+1]^2_{-q}) =1$, $n=0,1,...$, is necessary and sufficient for~(\ref{treb3}) to hold.
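For instance, the analog of the interpolating definition used in the $q=1$ case, $\chi(x,y)=(1-p)\phi^{-1}(x)+p\bigl(\phi^{-1}(y)-1\bigr)$ with $\phi(n)=[n]^2_{-q}$, gives $\chi([n]^2_{-q},[n+1]^2_{-q})=(1-p)n+p\,n=n$, so that the above difference condition is satisfied identically.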
\begin{remark}\label{rem3}
Expression~(\ref{str_f2}) for the structure function is valid only when $q\ne 1$ i.e. when $a_{\mu}^\dag$, $a_{\nu}^\dag$ are not nilpotent of any order. If $q=1$, it is the DSF~(\ref{solution1}) which provides the realization. Thus, the unifying formula for the deformation structure function (of those deformed oscillators that give realization) for quasibosons composed of two $q$-fermions can be written as
\begin{equation}\label{gen_DSF}
\phi(n) = \left\{
\eqalign{
\left([n]_{-q}\right)^2=\Bigl(\frac{1-(-q)^{n}}{1+q}\Bigr)^2,\quad q<1;\cr
\Bigl(1+\frac1m\Bigr)n - \frac1m n^2,\qquad q=1,\quad m\in\mathds{N}.
}
\right.
\end{equation}
The absence of a continuous limit from (\ref{str_f2}) to (\ref{solution1}) when $q\rightarrow 1$, or in other words the discontinuity of (\ref{gen_DSF}) at the point $q=1$, is explained as follows. If $q\ne 1$ then there is an infinite number of basis elements \{$(a_1^\dag)^{k_1}...(a_{d_a}^\dag)^{k_{d_a}}(b_1^\dag)^{l_1}...(b_{d_b}^\dag)^{l_{d_b}} |O\rangle$ $\bigr|$ $k_i,l_j\ge 0$, $\sum_{i=1}^{d_a} k_i = \sum_{j=1}^{d_b} l_j = n$, $n=0,1,2,...$\} of the subspace of composite bosons' states. The latter results in an infinite number of requirements~(\ref{rav3}), thus imposing a considerable restriction on $\Phi_{\alpha}^{\mu\nu}$. On the other hand, if $q=1$, then there is only a finite number, equal to $\sum_{k=0}^{\min(d_a,d_b)} C_{d_a}^kC_{d_b}^k = C_{d_a+d_b}^{\max(d_a,d_b)}$, of basis elements: $|O\rangle$, $a^{\dag}_{\mu}b^{\dag}_{\nu}|O\rangle$,\,\,...\,\,, $a_1^\dag ...a_{\min(d_a,d_b)}^\dag b_1^\dag... b_{\min(d_a,d_b)}^\dag|O\rangle$, which leads to a finite number of requirements~(\ref{rav3}). Moreover, in this case only a few requirements among them are independent, see~(\ref{system2}).
\end{remark}
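The discontinuity can also be seen at the level of the DSF values: as $q\rightarrow 1^-$ one has $[n]_{-q}\rightarrow\frac{1-(-1)^n}{2}$, so that (\ref{str_f2}) tends to the alternating sequence $\phi(1)=1$, $\phi(2)=0$, $\phi(3)=1,\ldots$, whereas (\ref{solution1}) gives, e.g., $\phi(3)=3-6/m$ (equal to $-3$ for $m=1$); the two expressions thus differ already at $n=3$.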
\section{Conclusions and outlook}
As shown in our preceding paper~\cite{GKM-1} and in Section~\ref{sec2} above, the problem of realization of ``fermion+fermion'' quasibosons by means of deformed oscillators has nontrivial solutions. In the case of pure fermions as constituents, the structure function $\phi$ of the relevant deformation is found in the form (\ref{solution1}), quadratic in the number operator $N$, with a discrete-valued deformation parameter $f=2/m$. This is the only
DSF for which the realization (isomorphism) is possible. In addition, necessary and sufficient conditions on the matrices $\Phi_{\alpha}$ in the construction (\ref{anzats}) of quasibosons, for such representation to be self-consistent, are obtained and expression (\ref{gen_solution}) gives their general solution.
In this paper a novel generalization was carried out, as presented in Section~\ref{sec7}: the case of quasibosons made up of two constituents which are $q$-deformed fermions~(\ref{q-commut})-(\ref{q-nez}). For this generalization, again, we have derived the relations for the defining matrices $\Phi_{\alpha}$ and solved them. The detailed analysis led us, at $q\ne 1$, to the resulting structure function (\ref{str_f2}) of the deformed oscillator which provides the exact realization of the quasibosons made up of two $q$-fermions. The principal distinction of the situation treated herein from the case considered in Section~\ref{sec2} (following~\cite{GKM-1}) is that, while the pure fermions are nilpotent, the $q$-deformed fermions for $q\ne 1$ are not nilpotent of any order, see (\ref{q4}). Since the second order nilpotency of usual fermions (as the no-deformation limit of $q$-fermions) appears abruptly at $q=1$ according to Lemma~\ref{lemma_1}, there is no direct transition from the DSF~(\ref{str_f2}) to the DSF~(\ref{solution1}) in the continuous $q\to 1$ limit. See also Remark~\ref{rem3}, including~(\ref{gen_DSF}), on this issue.
The general strategy of the developed approach is to employ deformed bosons as tools for the realization of quasibosons, which should yield a considerable simplification (in the algebraic sense) in subsequent applications, since the algebra representing the initial system of composite particles is then treated as the algebra of some deformed oscillator. The obtained results and the developed approach have potential applications to: various problems in (sub)nuclear physics (with such composite particles as hadrons and nucleon complexes), like the study of pairing in nuclei~\cite{Sviratcheva}; bipartite entangled composites~\cite{GM_Entang} in quantum information theory (where the role of quasibosons can be played e.g. by biphotons~\cite{Shih}); Bose-Einstein condensation of composite bosons~\cite{Avan2003} and other thermodynamic questions, including the equation of state for systems of many composite bosons. Also, the developed formalism can be applied to modeling physical particles or quasiparticles such as excitons, biphonons and cooperons in the corresponding branches of condensed matter physics. Concerning excitons, there already exists~\cite{Combescot_Exc} a description of interacting excitons using infinite series in their creation operators. Besides, excitons were modeled~\cite{Bagheri_Exc,Liu} by a $q$-deformed version of bosons, however without taking their compositeness into account.
As the next steps, we intend to study more complicated situations, e.g., the case of quasibosons composed of two (deformed) bosons, or of two generally deformed fermions. Our nearest plans also include the analysis of composite (quasi-)fermions. Yet another path of extension is to treat quasi-independent quasibosons, i.e., those with noncommuting different modes.
\ack
This research was partially supported
by the Special Program of the Division of Physics and Astronomy of NAS of Ukraine.
\section{Introduction}
Relativistic heavy-ion collisions are well suited to produce hot and
dense matter in the laboratory. Whereas low-energy collisions
create nuclear matter at high baryon chemical potential and
moderate temperature, high-energy collisions at the Relativistic
Heavy-Ion Collider (RHIC) or the Large Hadron Collider (LHC) produce
a dominantly partonic matter at high temperature and almost
vanishing baryon chemical potential. The latter
is controlled by lattice quantum chromodynamics (lQCD) which
shows that the phase transition between the quark-gluon plasma (QGP)
and the hadronic system is a crossover at low baryon chemical
potential~\cite{Bernard:2004je,Aoki:2006we,Bazavov:2011nk}.
Since the partonic matter in relativistic heavy-ion collisions
survives only for a couple of fm/c within a finite volume, it is
quite challenging to investigate its properties. In this context
hard probes (heavy flavor or jets) and penetrating probes (photons
or dileptons) are of particular interest. Dileptons have the
advantage of an additional degree of freedom compared to
photons, i.e. their invariant mass, which allows one to roughly separate
hadronic and partonic contributions by appropriate mass cuts \cite{rapp5}. For
example, dileptons with invariant mass less than 1.2 GeV dominantly
stem from hadronic decays while those with invariant masses between
1.2 GeV and 3 GeV stem from partonic interactions and correlated
semileptonic decays of heavy flavor hadrons. In the first case it is
possible to study the modification of hadron properties such as a
$\rho$ meson broadening or a mass shift in nuclear matter~
\cite{ChSym,PHSDreview}. On the other hand the dileptons with
intermediate masses provide information on the properties of
partonic matter once the background from semileptonic heavy flavor
decays is subtracted. This background overshines the partonic
contribution at RHIC and LHC energies and is subleading only at
collision energies per nucleon below about $\sqrt{s_{NN}}$ = 10
GeV~\cite{Song:2018xca}.
Recently, dielectrons in Au+Au collisions at $\sqrt{s_{\rm NN}}=$
200 GeV have been measured as a function of transverse
momentum~\cite{yang2018} for different centralities. It turned out
as a surprise that the yield of dielectrons is largely enhanced at
low transverse momentum - compared to expected hadronic decays - in
particular in peripheral collisions of 60-80\% centrality.
If the low $p_T$ peak were measured in ultra-peripheral
collisions \cite{Adams:2004rz} -
for impact parameters larger than roughly twice the radius of the nuclei -
one could attribute it to a coherent source from the strong electromagnetic
fields generated by the charged spectators \cite{yang2018}.
However, an interesting point is that the low $p_T$ enhancement is observed
in peripheral collisions with dominant hadronic reaction channels,
which are expected to be under control by independent $p+p$ measurements.
This raises severe doubts about a coherent nature of the observed phenomenon.
These surprising observations come up as a puzzle, and in this work we will
investigate the question of whether hadronic and partonic in-medium effects might
be the origin for the anomalous enhancement of dielectrons at low
transverse momentum in peripheral collisions.
We will employ the microscopic parton-hadron-string dynamics (PHSD)
transport approach where quarks and gluons in the quark-gluon plasma
are off-shell massive strongly interacting quasi-particles. The
masses of quarks and gluons are assigned from their spectral
functions at finite temperature whose pole positions and widths are,
respectively, given by the real and imaginary parts of partonic
self-energies~\cite{PHSDreview}. The PHSD approach has successfully
described experimental data in relativistic heavy-ion collisions for
a wide range of collision energies from the SchwerIonen-Synchrotron (SIS)
to the LHC for many hadronic as well as
electromagnetic observables \cite{PHSDreview,Volo,Eduard}.
The production channels for dileptons in relativistic heavy-ion
collisions may be separated into three different classes: i)
hadronic production channels, ii) partonic production channels and
iii) the contribution from the semileptonic decay of heavy-flavor
pairs. The production of dileptons in the hadronic phase includes
the following steps: First, a resonance $R$ is produced either in
nucleon-nucleon (NN) or meson-nucleon (mN) collisions.
The produced resonance $R$ may produce dileptons directly through Dalitz decay,
for example, $\Delta \to e^+e^-N$, or the resonance $R$ decays to a meson
which produces dielectrons through direct decay ($\rho, \omega,
\phi$) or Dalitz decay ($\pi^0, \eta, \omega$).
Additionally, the resonance $R$ may decay to another resonance
$R^\prime$ which then produces dileptons through Dalitz decay. In
the PHSD we also take into account dilepton production by two-body
scattering such as $\pi+\rho$, $\pi+\omega$, $\rho+\rho$, $\pi+a_1$
\cite{Olenasps}, although these contributions are
subleading. An important point is the modification of the
vector-meson spectral functions ($\rho, \omega, \phi$): the
collisional broadening of the vector-meson widths in nuclear matter
is incorporated in the PHSD (by default) \cite{Brat08dil}, which leads to results
consistent with the experimental data on dileptons from
SIS to LHC energies~\cite{Brat08dil,Olenasps,PHSDreview}.
In partonic matter dileptons are produced through the channels
$q\bar q\to\gamma^*$, $q\bar q\to\gamma^*g$ and $qg\to\gamma^*q$
($\bar q g\to\gamma^* \bar q$) where the virtual photon $\gamma ^*$
decays into $e^+e^-$ or $\mu^+ \mu^-$ pair. We note that
$q(\bar{q})$ and $g$ in the above processes stand for off-shell
partons and the effective propagators for quarks and gluons from the
Dynamical QuasiParticle Model (DQPM) \cite{PHSDreview} have been employed for the
calculation of the differential cross sections in
Refs.~\cite{Song:2018xca,olena2010}. We recall that the dileptons
from the QGP are produced in the early stage of heavy-ion collisions
and have a relatively large invariant mass and high effective
temperature.
The production of dileptons from heavy-flavor pairs is different
from the other channels since the lepton and anti-lepton are
produced in separate semi-leptonic decays. However, since heavy
flavor is always produced by pairs, it contributes to dilepton
production with the probability that both heavy flavor and
anti-heavy flavor have semi-leptonic decays. Furthermore, the heavy
flavor pairs - produced very early in heavy-ion collisions - suffer
from strong interactions with the partonic or hadronic medium and
thus the kinematics of the pair change in time. E.g., the
heavy flavor quarks are suppressed at high transverse momentum due
to the energy loss in partonic matter, while slow heavy flavor
quarks are shifted to larger momenta due to collective flow. These
modifications of heavy flavor pairs in heavy-ion collisions affect
the spectrum of dileptons as demonstrated in Ref.
\cite{Song:2018xca}.
\begin{figure} [h]
\centerline{
\includegraphics[width=8.6 cm]{ddbar.eps}}
\caption{Invariant mass spectra of dileptons from $D\bar{D}$ pairs
with and without partonic and hadronic interactions in 10-40, 40-60,
and 60-80 \% central Au+Au collisions at $\sqrt{s_{\rm NN}}=$ 200
GeV.} \label{ddbar}
\end{figure}
To show these effects quantitatively we compare in Fig. \ref{ddbar}
the invariant mass spectra of dileptons with transverse momenta
less than 0.15 GeV/c from heavy-flavor pairs with and without partonic
and hadronic interactions in 10-40, 40-60, and 60-80 \% central
Au+Au collisions at $\sqrt{s_{\rm NN}}=$ 200 GeV. The figure shows
that the interactions of heavy flavors soften the invariant mass
spectra of dileptons especially in central collisions while the
effect is hardly visible in very peripheral collisions. This change
in slope is due to energy loss for high momentum heavy flavors by
interactions which randomizes the correlation angle between charm
and anti-charm quarks \cite{Song:2018xca}. The softening of the mass
spectrum becomes weaker with decreasing centrality since there are
less and less interactions of charm quarks.
Summarizing, there are three different medium modifications on
dileptons in relativistic heavy-ion collisions, which cannot be
described by hadronic cocktails: 1) the broadening of the
vector-meson spectral functions in nuclear matter, 2) dilepton
production from partonic interactions, and 3) the modification of
the dilepton spectra from heavy-flavor pairs due to strong charm or
beauty scatterings in particular in the partonic phase. We will now
explore these effects on the momentum and mass spectra for dileptons
in Au+Au collisions at $\sqrt{s_{\rm NN}}=$ 200 GeV for different
centrality classes.
Since dileptons have the invariant mass as an additional degree of
freedom compared to photons or other hadronic probes in heavy-ion
collisions, they potentially provide more information on the matter produced in
these collisions.
\begin{figure} [h]
\centerline{
\includegraphics[width=8.6 cm]{rhic.eps}}
\caption{Invariant mass spectrum of dileptons from the PHSD in
minimum-bias Au+Au collisions at $\sqrt{s_{\rm NN}}=$ 200 GeV in
comparison to the experimental data from the STAR
collaboration~\cite{Adamczyk:2015lme}. The different channels are specified in the legend.} \label{rhic}
\end{figure}
This is demonstrated in Fig. \ref{rhic} where the invariant mass
spectrum of dielectrons in minimum-bias Au+Au collisions at
$\sqrt{s_{\rm NN}}=$ 200 GeV is shown under the constraint that the
transverse momenta of both the electron and the positron are larger
than 0.2 GeV/c and that each rapidity is smaller than unity, i.e. $|y_e|
\leq 1$.
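As a purely illustrative aside (not part of the PHSD code; function names and track values are hypothetical), the pair observables and acceptance cuts used here can be reconstructed from single-track kinematics as follows:
\begin{verbatim}
import math

def pair_mass_pt(pt1, eta1, phi1, pt2, eta2, phi2, m=0.000511):
    """Invariant mass M and pair pT of an e+e- pair from single-track
    (pT [GeV/c], pseudorapidity, azimuth); m is the electron mass."""
    def four_vector(pt, eta, phi):
        px, py = pt * math.cos(phi), pt * math.sin(phi)
        pz = pt * math.sinh(eta)
        return math.sqrt(px**2 + py**2 + pz**2 + m**2), px, py, pz
    e1, px1, py1, pz1 = four_vector(pt1, eta1, phi1)
    e2, px2, py2, pz2 = four_vector(pt2, eta2, phi2)
    e, px, py, pz = e1 + e2, px1 + px2, py1 + py2, pz1 + pz2
    m2 = max(e**2 - px**2 - py**2 - pz**2, 0.0)
    return math.sqrt(m2), math.hypot(px, py)

def in_acceptance(pt, eta):
    """Single-track cuts: pT > 0.2 GeV/c and |eta| < 1 (pseudorapidity
    used in place of y_e, a good approximation for electrons)."""
    return pt > 0.2 and abs(eta) < 1.0
\end{verbatim}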
We note that dielectron Bremsstrahlung from both partonic and hadronic collisions - as suggested long ago
\cite{add1,add2,add3,add4,add5,add6} - is added to our previous study~\cite{Song:2018xca}, although the contributions are subleading. For an estimate of the order of magnitude the differential Bremsstrahlung cross section is evaluated in the soft-photon approximation:
\begin{eqnarray}
E\frac{d^2\sigma(e^+e^-)}{dMd^3 p}=\frac{\alpha^2}{6\pi^3M}\frac{|\epsilon\cdot J|^2}{e^2} \frac{\lambda^{1/2}(s_2,m_3,m_4)}{\lambda^{1/2}(s,m_3,m_4)}\sigma_{el},
\end{eqnarray}
where $\epsilon_\mu$ is the polarization vector of the virtual photon and $J_\mu$ the electromagnetic current of the incoming and outgoing particles in the reaction $1+2\rightarrow 3+4+\gamma^*$. Furthermore, $\sigma_{el}$ is the elastic scattering cross section and $\lambda^{1/2}(s_2,m_3,m_4)$ is the three-momentum of particle 3 or 4 in their center-of-mass frame at the invariant energy $s_2=(p_3+p_4)^2$~\cite{PHSDreview}.
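For orientation, the kinematic factor in the above cross section can be evaluated numerically as in the following sketch (our own illustration, not taken from PHSD; it assumes the standard convention in which the center-of-mass three-momentum equals $\lambda^{1/2}(s,m_3^2,m_4^2)/(2\sqrt{s})$, and all masses and energies are illustrative):
\begin{verbatim}
import math

def kallen(x, y, z):
    """Kaellen triangle function lambda(x, y, z)."""
    return x**2 + y**2 + z**2 - 2.0 * (x*y + y*z + z*x)

def cm_momentum(s, m3, m4):
    """Three-momentum of particles 3 and 4 in their common rest frame
    at invariant energy squared s (units: GeV and GeV^2)."""
    return math.sqrt(max(kallen(s, m3**2, m4**2), 0.0)) / (2.0 * math.sqrt(s))

# hypothetical pi-N elastic scattering emitting a virtual photon:
m3, m4 = 0.14, 0.94                     # pion and nucleon masses [GeV]
s = 2.0**2                              # invariant energy squared [GeV^2]
M, E = 0.3, 0.5                         # gamma* mass and energy [GeV]
s2 = s + M**2 - 2.0 * math.sqrt(s) * E  # invariant energy squared of 3+4
factor = cm_momentum(s2, m3, m4) / cm_momentum(s, m3, m4)
print(f"phase-space suppression factor: {factor:.2f}")
\end{verbatim}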
One can see from Fig. \ref{rhic} that many hadronic sources contribute to the
low-mass dilepton spectrum while the intermediate-mass range is
dominated by the contribution from heavy-flavor pairs and that from
partonic interactions. We note that the $\rho$ meson considerably
broadens and that the contribution from charmonia is not included in
the PHSD calculations which explains the missing peak in the data
from the STAR collaboration~\cite{Adamczyk:2015lme} at about 3.1 GeV
of invariant mass. Nevertheless, the description of the inclusive
dilepton spectra within PHSD is very good for lower invariant masses.
In view of the completely different contributions for low-mass
dileptons and for intermediate-mass dileptons, it is helpful to
separate them for studying transverse-momentum spectra.
\begin{figure} [h]
\centerline{
\includegraphics[width=8.6 cm]{lowM-pt.eps}}
\centerline{
\includegraphics[width=8.6 cm]{midM-pt.eps}}
\caption{Transverse momentum spectra of (a) low-mass and (b)
intermediate-mass dileptons in 10-40, 40-60, and 60-80 \% central
Au+Au collisions at $\sqrt{s_{\rm NN}}=$ 200 GeV in comparison to
the experimental data (for 60-80 \% central
collisions) \cite{yang2018}.} \label{spec1}
\end{figure}
We show in Fig. \ref{spec1} the transverse momentum spectra of
low-mass ($0.4 < M < 0.79$ GeV) and intermediate mass ($1.2 < M <
2.6$ GeV) dileptons for the same acceptance cuts as in Fig.
\ref{rhic} for 10-40, 40-60, and 60-80 \% central Au+Au collisions
at $\sqrt{s_{\rm NN}}=$ 200 GeV. The yields of low-mass dileptons
within the acceptance cuts are $4.85 \times 10^{-3}$, $1.05 \times
10^{-3}$, and $2.1 \times 10^{-4}$ for 10-40, 40-60, and 60-80 \%
central collisions, respectively. For the intermediate-mass
dileptons they are, respectively, $1.0 \times 10^{-3}$, $2.2 \times
10^{-4}$, and $4.5 \times 10^{-5}$. Comparing the low-mass and
intermediate-mass dileptons, the ratio of the dilepton yields in
10-40 \% central collisions to that in 40-60 \% or 60-80 \% central
collisions is very similar. This demonstrates that the dependence of
the dilepton yield on invariant mass is not so sensitive to the
centrality in heavy-ion collisions, if the collision energy is the same.
The shape of the transverse momentum spectra of dileptons is not
sensitive to the centrality either, as shown in Fig. \ref{spec1}. However,
with increasing transverse momentum the spectrum of low-mass
dielectrons decreases faster than that of intermediate-mass dileptons, as
expected.
Furthermore, in Fig. \ref{spec1} we compare the results from the
PHSD with the experimental data for 60-80 \% central collisions. It
is seen that the PHSD reproduces very well the experimental spectra
both for low-mass dileptons and for intermediate-mass dileptons down
to $p_T \approx$ 0.15 GeV/c. The experimental data show an anomalous
enhancement of dileptons below $p_T \approx$ 0.15 GeV/c for the very
peripheral collisions, which is not described by the PHSD at all.
\begin{figure} [h]
\centerline{
\includegraphics[width=8.6 cm]{lowM-peri.eps}}
\centerline{
\includegraphics[width=8.6 cm]{midM-peri.eps}}
\caption{Transverse momentum spectra of (a) low-mass and (b)
intermediate-mass dileptons with the individual contributions shown
additionally in 60-80 \% central Au+Au collisions at $\sqrt{s_{\rm
NN}}=$ 200 GeV. The experimental data are taken from
Ref.~\cite{yang2018}.} \label{peripheral}
\end{figure}
\begin{figure} [h!]
\centerline{
\includegraphics[width=8.2 cm]{low-pt14.eps}}
\centerline{
\includegraphics[width=8.2 cm]{low-pt46.eps}}
\centerline{
\includegraphics[width=8.2 cm]{low-pt68.eps}}
\caption{Invariant mass spectra of dileptons with small transverse
momentum ($p_T < 0.15$ GeV/c) in (a) 10-40, (b) 40-60, and (c) 60-80
\% central Au+Au collisions at $\sqrt{s_{\rm NN}}=$ 200 GeV. The
experimental data are taken from Ref.~\cite{yang2018}.}
\label{low-pt}
\end{figure}
In order to provide further information, we show in Fig.
\ref{peripheral} all contributions to the transverse momentum
spectra of low-mass and intermediate-mass dileptons in 60-80 \%
central collisions. As in Fig. \ref{rhic}, the low-mass dilepton
sector has contributions from various hadronic and partonic channels
and the most dominant contributions are from $D\bar{D}$ pairs and
$\rho$-meson decays. On the other hand, in the intermediate-mass
dilepton sector the contributions from $D\bar{D}$ pairs and partonic
interactions are dominant, with some background from $B\bar{B}$
pairs. As mentioned above, there are three kinds
of nuclear modifications on dileptons in heavy-ion collisions, but
none of them can explain the enhancement of dileptons at low
transverse momentum. We recall that the dileptons from heavy flavor
pairs are not visibly modified in very peripheral collisions due to
the low amount of rescattering as shown in Fig. \ref{ddbar}. Also
the contributions from $\rho$-meson decays or partonic
interactions are subdominant.
Furthermore, the dilepton Bremsstrahlung is peaked at low transverse momentum only for very small invariant masses, $M\rightarrow 0$, while in the two mass regions of interest the $p_T$ spectra are broad and show no indication
of a low $p_T$ peak.
Fig. \ref{low-pt}, furthermore, shows the invariant mass spectra of
dileptons with small transverse momentum ($p_T < 0.15$ GeV/c) in
10-40, 40-60, and 60-80 \% central Au+Au collisions at $\sqrt{s_{\rm
NN}}=$ 200 GeV in comparison to the experimental data from
Ref.~\cite{yang2018}. The PHSD can reproduce the experimental data
in 10-40 \% central collisions very well, but begins to deviate
slightly from the data in 40-60 \% central collisions; the deviation
becomes pronounced for 60-80 \% central collisions, which is consistent
with Fig. \ref{peripheral}, and implies that the anomalous
enhancement of dileptons at low transverse momentum is only small or moderate
in central collisions. If we assume that the
dilepton spectrum is the same at low transverse momentum (in Fig.
\ref{spec1}) regardless of centrality, then the anomalous source is
quite strong in 60-80 \% central collisions, less strong in 40-60 \%
central collisions, and hardly seen in 10-40 \% central collisions.
Furthermore, since the $p_T$ range is very small we conclude that
the transverse mass distribution from the anomalous source is almost
the same as from hadronic and partonic contributions in central
collisions.
The other point is that the differences between the experimental data
and the PHSD results in 40-60 and 60-80 \% central collisions
practically do not depend on the invariant mass of the dileptons but
are rather constant in magnitude. For example, the yield of low-mass
dielectrons ($0.4 < M < 0.79$ GeV) and that of intermediate-mass
dielectrons ($1.2 < M < 2.6$ GeV) from the experimental data in
60-80 \% central collisions are alike but about ten times larger
than those from the PHSD. These findings are hard to reconcile.
In summarizing we have addressed the low $p_T$ enhancement of
dileptons from peripheral heavy-ion collisions where the
experimental data show a large anomalous source regardless of the
dilepton invariant mass. We have employed the PHSD transport
approach to describe the transverse momentum spectra of dileptons in
relativistic heavy-ion collisions which incorporates three
in-medium effects in heavy-ion collisions: i) the spectral functions
of vector mesons broaden in nuclear matter, ii) the correlation of
heavy-flavor pairs is modified by partonic and hadronic
interactions, and iii) there are sizeable contributions from
partonic interactions which do not exist in hadronic cocktails.
Taking all matter effects into account, the PHSD reproduces the
experimental data for dileptons down to $p_T \approx$ 0.15 GeV/c at
all centralities, however, underestimates the data below $p_T
\approx$ 0.15 GeV/c in very peripheral collisions.
In extension of previous studies we have incorporated the production
of dilepton pairs by hadronic and partonic bremsstrahlung processes - as suggested early in Refs.
\cite{add1,add2,add3,add4,add5,add6} - employing the soft-photon approximation for an estimate.
We find that these radiative corrections are by far subleading and - in the invariant mass regions of interest -
do not peak at low $p_T$. Accordingly, the
large enhancement of dileptons at low transverse momentum in
peripheral heavy-ion collisions is still an open question and the
solution of the puzzle is beyond standard microscopic models that
have been shown to be compatible with dilepton data from heavy-ion
collisions in the range from SIS to LHC energies \cite{PHSDreview}.
\\
\\
The authors acknowledge inspiring discussions with
J. Butterworth, F. Geurts and C. Yang.
This work was supported by the LOEWE center "HIC for FAIR", the
HGS-HIRe for FAIR and the COST Action THOR, CA15213. Furthermore, PM
and EB acknowledge support by DFG through the grant CRC-TR 211
'Strong-interaction matter under extreme conditions'. The
computational resources have been provided by the LOEWE-CSC.
\section{Test cases}\label{sec:cases}
We present results for the optimization of three different DH systems in Denmark. These are located in the cities of Brønderslev, Hillerød and Middelfart.
\subsection{Static data}
The DH systems differ in terms of types and numbers of units as well as the layout of the demand sites. An overview of the basic network representation is given in Figure \ref{fig:dhs}.
\begin{figure}
\centering
\begin{subfigure}[t]{0.49\columnwidth}
\centering
\includegraphics[width=0.9\columnwidth]{middelfart.pdf}
\caption{Middelfart}
\label{fig:middelfart}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.49\columnwidth}
\centering
\includegraphics[width=0.9\columnwidth]{broendeslev.pdf}
\caption{Brønderslev}
\label{fig:bronderslev}
\end{subfigure}
\hfill
\begin{subfigure}[t]{\columnwidth}
\centering
\includegraphics[width=0.5\columnwidth]{hilleroed.pdf}
\caption{Hillerød}
\label{fig:hilleroed}
\end{subfigure}
\caption{Basic network representation of the three DH systems with energy sources ($e$), units ($u$), storage units ($s$), demand sites ($d$) and interconnections ($i$). For the Brønderslev system, the seven CHP units and two wood chip boiler-heat pump units are aggregated for illustrative purposes. WC=wood chips, WP=wood pellets, CHP=combined heat and power, GB=gas boiler, EL=electricity, H=heat, SOL=solar heat, ORC=organic Rankine cycle, EXC=heat exchanger, NG=natural gas.}
\label{fig:dhs}
\end{figure}
\begin{table}[t]
\centering
\footnotesize
\caption{Unit parameters ($^+$ = Cost and prices have been multiplied with a factor to anonymize data, $^*$ = First-stage decisions, i.e., $u \in \mathcal{U}^*$)}
\begin{adjustbox}{width=0.9\textwidth}
\begin{tabular}{p{0.01\columnwidth}p{0.06\columnwidth}p{0.2\columnwidth}p{0.23\columnwidth}p{0.1\columnwidth}p{0.2\columnwidth}}
\toprule
& Unit & Input restr. & Output restr. & \multicolumn{1}{l}{Cost } & Min. up/down \\
 & & [MW/period] & [MW/period] & [EUR/MWh$_h$] & [periods], \qquad Start up cost [EUR] \\\midrule
\multirow{10}[0]{*}{\begin{sideways}Brønderslev\end{sideways}} & \multicolumn{1}{l}{$u_{CHP1}^*$-} & {NG: [7.822,7.822]} & H: [3.848, 3.848], & {96.98} & \multicolumn{1}{l}{{}} \\
& \multicolumn{1}{l}{$u_{CHP7}^*$} & & EL: [3.277, 3.277] & & \\
& $u_{GB}$ & NG: 33.8 & H: 34.8 & 58.52 & \\
& $u_{SOL}$ & SOL: External & H: External & 0.00 & \\
& $u_{ORC}$ & PH: [10, 20] & H: [8,16], EL: [2,4] & -32.16 & \multicolumn{1}{l}{Up/Down: 3/3} \\
& $u_{EB}^*$ & EL: 20 & H: 19.8 & 56.88 & {} \\
& {$u_{WCHP1}^*$} & WC: [4.32, 10.8] & H: [1.2, 3], & {88.51} & {{}} \\
& & EL: [0.12, 0.3] & PH: [3.2, 8] & & \\
& $u_{WCHP2}^*$ & WC: 10.8 EL: 0.3 & H: 3 PH: 8 & 88.51 & {dep. on WCHP1} \\
& $u_{EXC}$ & PH: 20 & H: 20 & 0.00 & \\\midrule
\multirow{15}[0]{*}{\begin{sideways}Hillerød$^+$\end{sideways}} & $u_{WC}$ & WC: 7.9 & H: 8 & 33.74 & {Start up cost: 30} \\
& {$u_{CHP}^*$} & {NG: [58, 146]} & H: [26, 65], & {100} & {Start up cost: 500} \\
& & & EL: [13, 59.1] & & \\
& $u_{GB1}$ & NG: 28.4 & H: 27 & 58.19 & \\
& $u_{GB2}$ & NG: 4.6 & H: 4.5 & 58.22 & \\
& $u_{GB3}$ & NG: 2 & H: 1.9 & 58.22 & \\
& $u_{GB4}$ & NG: 16.842 & H: 16 & 58.22 & \\
& $u_{GB5}$ & NG: 23.368 & H: 22.2 & 58.22 & \\
& $u_{GB6}$ & NG: 35.79 & H: 34 & 56.69 & \\
& $u_{GB7}$ & NG: 20.4 & H: 20 & 56.58 & \\
& $u_{SOL}$ & SOL: External & H: External & 0.00 & \\
& $u_{ORC1}^*$ & WC: [6.86, 30.83] & H: [5, 25.6], EL: [0, 4] & 43.57 & {Up/Down: 24/24} \\
& {$u_{ORC2}^*$} & {WC: [6.86, 30.83]} & H: [4.675, 23.936], & {44.32} & {Up/Down: 24/24,} \\
& & & EL: [0, 3.740] & & {excludes ORC1} \\
& $u_{WH}$ & WH: External & H: External & 0.00 & \\\midrule \multirow{8}[0]{*}{\begin{sideways}Middelfart\end{sideways}} & $u_{WC}$ & WC: [0.775, 4.095] & {H: [0.814; 4.3]} & 24.19 & \multicolumn{1}{l}{Up/Down: 24/24} \\
& $u_{WP}$ & WP: [0.555, 2.760] & {H: [0.52, 2.5]} & 30.24 & \multicolumn{1}{l}{Up/Down: 12/12} \\
& {$u_{CHP1}^*$} & {NG: [7.565, 7.565]} & {H: [3.625, 3.625], } & {109.61} & \multicolumn{1}{l}{Start up cost: 72.67} \\
& & & {EL: [2.875, 2.875]} & & \multicolumn{1}{l}{} \\
& {$u_{CHP2}^*$} & {NG: [7.675, 7.675]} & {H: [4.22, 4.22], } & {64.13} & \multicolumn{1}{l}{{Start up cost: 73.72}} \\
& & & {EL: [3.3, 3.3]} & & \\
& $u_{GB1}$ & NG: 5.59 & {H: 5.815} & 63.08 & \\
& $u_{GB2}$ & NG: 6.33 & {H: 6.52} & 46.67 & \\\bottomrule
\end{tabular}%
\end{adjustbox}
\label{tab:unitparams}%
\end{table}%
\begin{table}[t]
\centering
\footnotesize
\caption{Lower and upper bounds as well as prices of energy sources ($e$) and demand sites ($d$). TS=time series, Scen.=Scenario set ($^*=$ the price is indirectly taken into account since it is included in the heat production cost of the unit)}
\begin{adjustbox}{width=0.8\textwidth}
\begin{tabular}{llll}\toprule
& \multicolumn{1}{l}{LB [MW/period]} & UB [MW/period] & Price [EUR/MWh] \\\midrule
$e_{WC}$ & 0 & Unlimited & 0$^*$ \\
$e_{WP}$ & 0 & Unlimited & 0$^*$ \\
$e_{NG}$ & 0 & Unlimited & 0$^*$ \\
$e_{EL}$ & 0 & Unlimited & Day-ahead price TS/Scen \\
$e_{SOL}$ & \multicolumn{1}{l}{Solar production TS/Scen} & Solar production TS/Scen & \multicolumn{1}{l}{0} \\
$e_{WH}$ & \multicolumn{1}{l}{Waste heat prod. TS/Scen} & Waste heat prod. TS/Scen & \multicolumn{1}{l}{0} \\\midrule
$d_{H\cdot}$ & \multicolumn{1}{l}{Heat demand TS/Scen} & Heat demand TS/Scen & \multicolumn{1}{l}{0} \\
$d_{EL}$ & 0 & Unlimited & Day-ahead price TS/Scen \\
$d_{Fix}$ & 0 & Unlimited & 0$^*$ \\\bottomrule
\end{tabular}%
\end{adjustbox}
\label{tab:tsdata}%
\end{table}%
\begin{table}[ht]
\footnotesize
\centering
\caption{Interconnection limits for each DH system}
\begin{adjustbox}{width=0.75\textwidth}
\begin{tabular}{lrrrrrrrrrrrr}\toprule
& \multicolumn{3}{c}{Brønderslev} & \multicolumn{8}{c}{Hillerød} & \multicolumn{1}{l}{Middelfart} \\\cmidrule(lr){2-4}\cmidrule(lr){5-12} \cmidrule(lr){13-13}
$i$ & 1 & 2 & 3 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 1 \\\midrule
LB [MW/period] & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
UB [MW/period] & 20 & 32 & 12 & 20 & 5 & 2 & 16.5 & 22.2 & 16 & 2 & 2 & 5\\\bottomrule
\end{tabular}%
\end{adjustbox}
\label{tab:idata}%
\end{table}%
The Middelfart DH system in Figure \ref{fig:middelfart} only contains a few units and consists of two sub-systems connected through an interconnection pipe ($i_1$). One sub-system contains a wood chip boiler ($u_{WC}$), a wood pellet boiler ($u_{WP}$), a CHP unit ($u_{CHP1}$) and a gas boiler ($u_{GB1}$), while the other sub-system contains only a gas boiler ($u_{GB2}$) and a CHP unit ($u_{CHP2}$). The entire system contains three storage units ($s_1-s_3$) and two demand sites ($d_{H1}, d_{H2}$). The CHP units sell electricity to the day-ahead market ($d_{el}$).
The Brønderslev DH system in Figure \ref{fig:bronderslev} has a larger number and variety of units. Apart from 7 small-scale CHP units ($u_{CHP1},\ldots,u_{CHP7}$), an electric boiler ($u_{EB}$) and a gas boiler ($u_{GB}$) producing heat used directly in the DH network, the system has two units combining wood chip boilers with heat pumps ($u_{WCHP1}, u_{WCHP2}$) and a solar thermal plant ($u_{SOL}$) to produce process heat (PH). The process heat is then utilized by an organic Rankine cycle (ORC) CHP unit ($u_{ORC}$) for combined heat and power production, or it can be converted to heat through a heat exchanger ($u_{EXC}$). The heat is delivered to a central storage ($s_1$) and from there, via interconnections $i_{1} - i_{3}$, to three demand sites ($d_{H1} - d_{H3}$), where $d_{H3}$ is reached via $d_{H2}$. The CHP units sell electricity to the day-ahead market ($d_{EL}$) and the ORC unit sells electricity at a fixed price, while the electric boiler and heat pumps consume electricity from the day-ahead market.
The Hillerød DH system in Figure \ref{fig:hilleroed} is characterized by eight separate demand sites ($d_{H1} - d_{H8}$) that are reached via interconnections ($i_1 - i_8$), nearly all of them connected to decentralized gas boilers ($u_{GB1} - u_{GB7}$) that can cover local heat production. Demand site $d_{H1}$ is also connected to a solar thermal plant for heat production ($u_{SOL}$). At the central production site, waste heat injection ($u_{WH}$), a wood chip boiler ($u_{WC}$), one CHP unit ($u_{CHP}$) and one wood chip-fired ORC unit are able to produce heat. The ORC unit can be operated in two different operational modes, represented by units $u_{ORC1}$ and $u_{ORC2}$, where production in one mode excludes production in the other. The electricity production from the CHP unit is sold on the day-ahead market ($d_{EL}$) while the electricity production from the ORC unit is sold at a fixed price ($d_{Fix}$).
For all systems, the parameters of the units are given in Table \ref{tab:unitparams}, the parameters for energy sources and demand sites are stated in Table \ref{tab:tsdata} and the data for interconnections and storage units are given in Tables \ref{tab:idata} and \ref{tab:sdata}, respectively. We use real-world data provided by the respective DH system operator, except for the Hillerød system, where all costs and prices were multiplied by a constant factor to anonymize the costs while keeping the relations between units and markets.
The day-ahead market $d_{El}$ in all three systems is represented by one vertex for the day-ahead market and two vertices for imbalances (see the description in Section \ref{sec:bidding_curves}). The penalty costs for imbalances are 600 EUR/MWh (higher than all electricity prices in the dataset). Additionally, each system contains an energy source for missing heat (with a penalty cost of 10 EUR/kWh) and a demand site for excess heat (with no penalty).
\begin{table}
\centering
\footnotesize
\caption{Storage unit parameters for each DH system}
\begin{adjustbox}{width=0.8\textwidth}
\begin{tabular}{llllll}\toprule
& {$s$} & Capacity [MWh] & Initial level [MWh] & Target level [MWh] & Loss \quad[\%/period] \\\midrule
Brønderslev & 1 & 361.54 & 0.1 & 0.1 & 0.01 \\\midrule
Hillerød & 1 & 556.22 & 0.1 & 0.1 & 0.01 \\\midrule
\multirow{3}[0]{*}{Middelfart} & 1 & 38.048 & 0.1 & 0.1 & 0.01 \\
& 2 & 47.56 & 0.1 & 0.1 & 0.01\\
& 3 & 41.136 & 0.1 & 0.1 & 0.01 \\\bottomrule
\end{tabular}%
\end{adjustbox}
\label{tab:sdata}%
\end{table}%
\subsection{Time series data and scenarios}
For each system, we analyze cases from different seasons and varying planning horizon lengths. Evaluating system behaviour during different periods of the year is important to capture seasonal variations. An overview of the test cases is provided in Table \ref{tab:cases}. We distinguish between stochastic optimization for operational planning and deterministic optimization for longer planning horizons. The purpose of the former is the operational optimization considering the interaction with the day-ahead electricity market, including the operational uncertainty. In the latter case, we use deterministic data input to evaluate the operational performance of the system in a particular setting. All datasets have an hourly resolution. Note that the historical data in the three systems is based on different years (2019-2021), i.e., results and costs are not directly comparable due to different weather conditions and market prices.
\begin{table}
\centering
\footnotesize
\caption{Test cases}
\begin{adjustbox}{width=0.8\textwidth}
\begin{tabular}{llllp{0.15\columnwidth}p{0.05\columnwidth}}\toprule
Name & \multicolumn{1}{l}{System} & \multicolumn{1}{l}{Start date} & Dataset length & Planning horizon in one model run & det./ sto. \\
\midrule
B-01-168 & \multirow{3}[2]{*}{Brønderslev} & 18-01-2021 & 2 weeks & 1 week & sto. \\
B-04-168 & & 26-04-2021 & 2 weeks & 1 week & sto. \\
B-07-168 & & 05-07-2021 & 2 weeks & 1 week & sto. \\
B-10-6936 & & 15-10-2020 & to 31-07-2021, 289 days & 289 days & det. \\
\midrule
H-04-168 & \multirow{3}[2]{*}{Hillerød} & 05-04-2021 & 2 weeks & 1 week & sto. \\
H-07-168 & & 05-07-2021 & 2 weeks & 1 week & sto. \\
H-10-168 & & 04-10-2021 &2 weeks & 1 week & sto. \\
H-02-5808 & & 15-02-2021 & to 14-10-2021, 242 days & 242 days & det. \\\midrule
M-05-168 & \multirow{3}[2]{*}{Middelfart} & 13-05-2019 & 2 weeks & 1 week & sto. \\
M-08-168 & & 05-08-2019 & 2 weeks & 1 week & sto. \\
M-12-168 & & 21-12-2019 & 2 weeks & 1 week & sto. \\
M-03-8784 & & 03-03-2019 & to 16-12-2019, 289 days & 289 days & det.\\
\bottomrule
\end{tabular}%
\end{adjustbox}
\label{tab:cases}%
\end{table}%
For each DH system, we used real historical data to evaluate the performance of the models. For the scenario generation, we follow a very basic procedure based on historical data. For each test case, we use the data from the three previous weeks (before the date stated in Table \ref{tab:cases}), weighted with 0.5, 0.33 and 0.17, giving more weight to the recent observations. We distinguish between uncertainty regarding heat flows (solar production, waste heat injection and heat demand) and electricity prices. We combine the price scenarios with the heat flow scenarios, resulting in a total of 9 scenarios. This distinction is necessary for the creation of the bidding curves (see \cite{pandvzic2013offering}), i.e., the model creates bidding curves with three steps. This very basic scenario generation can easily be replaced by state-of-the-art probabilistic forecasting methods by simply exchanging the input data. Additional data used for the evaluation is stated in the respective results section.
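For concreteness, the scenario construction just described can be sketched as follows (illustrative code with hypothetical names; all heat flows are bundled into a single series for brevity):
\begin{verbatim}
import itertools
import numpy as np

HOURS = 168                                        # one-week horizon
WEIGHTS = {"w-1": 0.50, "w-2": 0.33, "w-3": 0.17}  # most recent week first

def weekly_scenarios(series):
    """Slice the three weeks preceding the planning horizon out of an
    hourly time series (1-D array-like, newest value last)."""
    return {name: np.asarray(series[-(k + 1) * HOURS:][:HOURS])
            for k, name in enumerate(WEIGHTS)}

def combined_scenarios(price_series, heat_series):
    """Cartesian product of 3 price and 3 heat-flow scenarios gives
    9 scenarios whose product weights again sum to one."""
    prices = weekly_scenarios(price_series)
    heats = weekly_scenarios(heat_series)
    return {(p, h): {"price": prices[p], "heat": heats[h],
                     "prob": WEIGHTS[p] * WEIGHTS[h]}
            for p, h in itertools.product(WEIGHTS, WEIGHTS)}
\end{verbatim}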
\section{Conclusion}\label{sec:conclusion}
In this paper, we propose a novel model formulation for production optimization in DH systems. The optimization model uses a generic formulation that allows the application in a wide variety of DH systems and allows the integration of scenarios to account for uncertainty. The final model is a two-stage stochastic program based on a network flow model that is built using an easily adaptable generic network structure. The model can be used for different planning problems, such as operational planning under uncertainty, optimization of bids to the day-ahead electricity market and long-term evaluations of operations, by exchanging data and non-anticipativity constraints.
The general applicability and performance of the approach is evaluated based on real data from the three Danish DH systems of Brønderslev, Hillerød and Middelfart with different characteristics. The calculation of the VSS alongside out-of-sample evaluations shows the benefit of using stochastic programming in operational planning problems with uncertain input data, such as RES production, electricity prices and heat demand. In particular, the modelling of bidding to the day-ahead electricity market profits from the stochastic model. Furthermore, we present the results of applying the model in a rolling horizon setting to realizations of the uncertain data in the three DH systems. The solutions of the operational planning and an additional long-term evaluation on historic data in a deterministic setting allow an analysis of operational strategies, costs and energy mixes. In all cases, model runtimes are short enough for application in practice.
As a final remark, we would like to add that the diverse data basis for the analysis in this publication uses historical data from the years 2019 to 2021, which does not reflect the current (2022) energy price trends and volatility. The specific results in terms of costs and energy mixes in the DH systems would be affected; however, the modelling approach remains applicable without restriction. The input data can be replaced with more recent data, which was not available to us at the time of the experiments.
There are several lines for future research. First, the implementation of the bidding curves in this paper does not model minimum up and down times on market-dependent units. Therefore, an extension with proper block bidding techniques such as proposed in \cite{fleten2007stochastic} and applied in \cite{blockbids} is necessary. Second, the modelling of electricity balancing markets would be a valuable addition. The cases and evaluations in this work focus on the day-ahead market and use the balancing market only as an imbalance mechanism and not as an opportunity for trading. Furthermore, the model could be used as a basis for extensive experimental evaluations of different setups in DH systems.
\section{Introduction} \label{sec:introduction}
Under the European Green Deal, the European Commission aims at net carbon neutrality by 2050 \citep{europeancommission2019}, which requires the transformation of the European energy system through integration of a large amount of renewable energy \citep{Hainsch2022}. District heating (DH) can play a substantial role in supporting this transition. Not only is heating and cooling responsible for half of the EU's final energy consumption \citep{europeancommission2015}, the flexibility potential of DH systems can also ease the integration of renewable electricity sources substantially \citep{Thomassen2021}. The Danish government specifically aims at 70 \% emission reduction by 2030 (compared to 1990) alongside carbon neutrality by the middle of the century \citep{Folketinget2020}. This transition requires smart energy systems and modelling approaches that do not merely focus on a single sector, but take different energy carriers into account \citep{Lund2017a}. This modelling paradigm is particularly important for the practical applicability of DH optimization models, as systems often feature a variety of energy sources, such as biomass, solar and industrial waste heat \citep{stateofgreen2018}, and the optimal operation of some systems must even take behind-the-meter electricity dispatch into account \citep{hvidesande}.
Following this notion, this article proposes a novel generic optimisation model for DH systems. Our model features a network flow formulation based on stochastic programming that can take a wide variety of energy carriers, production units, markets and demand sinks into account. Furthermore, the model can be used in different application cases such as operational planning, bidding and long-term system analysis by merely changing input data and the non-anticipativity constraints of the stochastic model. The applicability of the model to all three cases is shown based on real data from three DH systems in Denmark.
The remainder of this paper is structured as follows. Related work and our contributions are presented in Section \ref{sec:literature}. Sections \ref{sec:network} and \ref{sec:model} present the network and mathematical formulation of our proposed optimization model, respectively. The DH systems we use as cases are introduced in Section \ref{sec:cases} and numerical results are presented in Section \ref{sec:results}. Finally, Section \ref{sec:conclusion} summarizes our work and gives an outlook on future research.
\section{Related work} \label{sec:literature}
We propose a network-flow based model for the optimal scheduling of different energy generation and conversion units in DH systems. Our formulation is based on stochastic programming and allows to model an arbitrary range of energy carriers. Hence, related work focuses on mathematical models that (1) can optimise operational scheduling in DH networks (2) under uncertainty and (3) can represent arbitrary energy carriers. To the best of our knowledge, existing research meeting all of these criteria is limited. In the following, we provide a brief review of other studies focusing on energy system models (Section \ref{subsec:lit-esoms}), DH models (Section \ref{subsec:lit-dhModels}) and energy hubs (Section \ref{subsec:lit-energyHubs}).
\subsection{Energy System Modelling Frameworks} \label{subsec:lit-esoms}
Energy system optimisation models (ESOMs) are most commonly formulated as large-scale linear programs with the aim of providing the optimal dispatch of one or more energy carriers \citep{Dominkovic2018f} and/or investments \citep{Dominkovic2020} in related technologies. The scope and scale of applications typically range from the district \citep{Weckesser2021} and urban \citep{Dominkovic2020} to the country \citep{Daly2014} or even continental level \citep{Pavicevic2020a}, spanning one or more years. An overview of different modelling frameworks can be found in the OpenMod Wiki \citep{Openmod}. A focus on local multi-energy systems is provided in \citep{cuisinier2021techno}.
The model proposed in this paper is a stochastic program, taking uncertainty modelled as scenarios into account, and is formulated in a generic way such that it is able to model arbitrary energy commodities and technologies. Notably, most energy system modelling frameworks are either deterministic models (e.g. Balmorel \citep{Wiese2018a}) or model certain energy technologies specifically \citep{Helisto2019}, thus not reaching the general applicability this work does. The Balmorel extension OptiFlow by \cite{Ravn2017} does, like the model proposed here, use a graph-based network flow formulation, but is, at the time of writing, focused on the waste sector. \cite{hilpert2018open} present the oemof modelling framework that uses a network flow formulation to model an energy system, similar to the setting used in this paper, but without consideration of uncertainty and market interaction. \cite{Quelhas2007} provide a general network flow formulation for the US power network incorporating also gas and coal inflow. The resulting model is a deterministic LP for long-term evaluation that abstracts from operational constraints such as on/off status, market interaction and renewable energy sources (RES). Other frameworks, such as Calliope by \cite{Pfenninger2018}, also meet comparable requirements in theory, but are mostly applied to long-term planning problems (e.g. \cite{Pickering2019}) rather than having an operational focus. That focus makes them less suitable for real-world operational problems, where features such as uncertainty modelling, rolling horizon optimisation and market interaction are important.
\subsection{District Heating Optimisation Models}\label{subsec:lit-dhModels}
District-heating-specific models are often able to capture detailed characteristics of the system and take uncertainty into account: In \cite{Zhou2020a}, a distributionally robust linear formulation of a co-generation dispatch problem for heat and power is proposed. The authors of \cite{Xue2021} solve a robust unit commitment problem for a co-generating heat and power system. In \cite{Hohmann2019}, operation strategies for a DH system are optimised in a stochastic program. However, such models, while offering rich detail with respect to district-heating-specific characteristics, typically model energy carriers and units (usually heat and possibly power) explicitly, thus lacking general applicability to systems with a wider range of energy carriers.
Co-generation of power and heat raises the question of an integrated optimisation of both dispatch and power market bidding, both on day-ahead \cite{hurb} and regulating power markets \citep{hvidesande}, taking different bidding products into account \citep{blockbids}. Optimisation under uncertainty becomes especially useful in such co-generation problems, where typical sources of uncertainty include heat load \citep{Dimoulkas2015}, prices \citep{blockbids} or RES generation \citep{hvidesande}. These authors model the studied DH systems for bidding but abstain from a generic model formulation. An example of a network flow formulation in DH is the work in \cite{Bordin2016}, where the planning of an Italian DH system is optimised; DH characteristics are modelled explicitly but no uncertainty is considered. The reader is referred to \cite{Sarbu2019} for a review of DH system optimisation models.
Operational optimisation in DH is often combined with design or capacity expansion models. Examples are \cite{Wirtz2020}, \cite{gabrielli2018optimal} and \cite{Weinand2019} relying on explicit modelling of the systems. Further publications including investment decisions in multi-energy systems are presented in \citep{cuisinier2021techno}.
\subsection{Energy hubs}\label{subsec:lit-energyHubs}
The energy hub concept was defined by \cite{Geidl2007} as follows: ``An energy hub is considered a unit where multiple energy carriers can be converted, conditioned, and stored'', i.e., an energy hub can contain several energy units, storage facilities and transformers. Several energy hubs are interconnected using transmission systems. For each energy hub, a so-called coupling matrix can be derived, which is used to determine the operational strategy \citep{Geidl2007,Beigvand2017}. This matrix contains the transformation factors from one energy carrier to another based on the setup of all units inside the energy hub. Based on the energy hub concept, several planning tasks can be executed, e.g., the design of an energy hub \cite{wang2018mixed}, systems impact analysis of energy hubs, operational planning \cite{Geidl2005, Najafi2016} and optimal power flow. Within the context of this publication, the production units of the DH operator can be considered as one energy hub. We are interested in a generic formulation for the stochastic operational scheduling inside the energy hub. We also use a transformation matrix similar to the coupling matrix, but on a unit level, to define the transformations of each unit. \cite{Beigvand2017} look at the economic dispatch of energy hubs and give many examples of possible energy hubs as well as a mathematical formulation using the coupling matrix. Their formulation abstracts from specific time periods, commitment decisions and uncertainties, since it is concerned with optimizing the energy input and dispatch factors to reach the required output. \cite{Geidl2005} propose a general formulation for determining the optimal dispatch of the units inside an energy hub for one hour using a non-linear formulation. \cite{Najafi2016} present a specific stochastic model for an energy hub containing CHP units, generators and wind turbines, considering electricity prices and wind power production as uncertain. \cite{wang2018mixed} determine the configuration of an energy hub using a model with a general notation based on input and output ports and branches between components. The decisions are the possible combinations as binary variables and the optimal operation strategy. An extended overview of models for energy hubs is given in \cite{Mohammadi2017}. As in the above-mentioned publications, most energy hub publications focus on planning and analysis and do not consider real-time operational planning \cite{Krause}.
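To make the coupling-matrix idea concrete, consider a hypothetical hub in which a single gas input $P_{\text{gas}}$ is split with dispatch factor $\nu \in [0,1]$ between a gas boiler of efficiency $\eta^{\text{GB}}$ and a CHP unit with heat and electric efficiencies $\eta^{\text{CHP}}_{h}$ and $\eta^{\text{CHP}}_{e}$; the heat and electricity outputs $L_h$ and $L_e$ then read
\begin{equation*}
\begin{pmatrix} L_h \\ L_e \end{pmatrix} =
\begin{pmatrix} \nu\,\eta^{\text{GB}} + (1-\nu)\,\eta^{\text{CHP}}_{h} \\ (1-\nu)\,\eta^{\text{CHP}}_{e} \end{pmatrix} P_{\text{gas}}.
\end{equation*}
Our formulation encodes the same conversion information on the unit level through the factors $\phi_{v,f,f'}$ instead of a single hub-level matrix.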
\subsection{Contribution}
Based on the presented literature above, we summarize our contributions as follows:
\begin{enumerate}
\item We propose a novel formulation for the optimisation of DH systems. The model relies on a network representation of the DH system, which makes the optimization model itself very generic, so that it can easily be applied to a wide range of DH systems with different units and requirements. The model uses stochastic programming to account for uncertainty in prices and heat flows.
\item The proposed model can be used in different application cases. Traditional operational optimization dispatching the heat production units can be achieved by using the model with non-anticipativity on the commitment status and production of some of the units. Determination of bids to day-ahead markets can be achieved by including non-anticipativity constraints that create curves based on the electricity price scenarios (based on the work in \cite{pandvzic2013offering, hvidesande}). Finally, a deterministic version of the model can be used for pure evaluation purposes on historic data.
\item The optimisation model also allows defining sliding time windows and using the model in a rolling or receding horizon approach.
\item The performance of the model in terms of costs, energy mixes and runtimes is extensively evaluated in several case studies using real data from three Danish DH networks including out-of-sample testing.
\end{enumerate}
\section*{Acknowledgments}
This work is funded by Innovation Fund Denmark through the HEAT 4.0 project (no. 8090-00046B). The authors thank Middelfart Fjernvarme a.m.b.a., Brønderslev Forsyning A/S and Hillerød Forsyning for providing their data.
\section*{Conflict of interest}
\noindent The authors declare that they have no conflict of interest.
\section{Network representation} \label{sec:network}
In this section, we present how a DH system can be represented as a network graph with arcs and vertices and introduce all components, parameters and sets. The general idea is to transform all components of a DH system such as production units, storage units, demand sites etc. to vertices in a network and use the arcs to model possible flows of energy between units. An overview of the nomenclature is given in Table \ref{tab:nomenclature_dh}. The model formulation in Section \ref{sec:model} uses this structure as basis for the mathematical formulation.
\begin{table}[ht]
\footnotesize
\caption{Nomenclature - DH network components}
\centering
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{p{0.15\columnwidth}p{0.85\columnwidth}}\toprule
Symbol & Description\\\midrule
$\mathcal{U}$& Production units\\
$\mathcal{E}$& Energy sources \\
$\mathcal{D}$ & Demand sites\\
$\mathcal{P}$ & Pipeline connection\\
$\mathcal{S}$ & Storage units\\
$\mathcal{F}$ & Energy type\\
$\mathcal{T}$ & Periods\\
${\Omega}$ & Scenarios\\
$\mathcal{V}$ & Vertices $\mathcal{V} = \mathcal{U} \cup \mathcal{E} \cup \mathcal{D} \cup \mathcal{P} \cup \mathcal{S}$\\
$\mathcal{U}^{\text{WC}}$ & Set of unit vertices needing commitment decisions, $\mathcal{U}^{\text{WC}}\subseteq{\mathcal{U}}$\\
$\mathcal{U}^{\text{*}}$ & Set of units with here-and-now decisions $\mathcal{U}^{\text{*}} \subseteq{\mathcal{U}}$\\
$\mathcal{U}^{EXC}_u$ & Set of unit vertices excluded from production if unit $u$ is producing, $\mathcal{U}^{EXC}_u\subseteq{\mathcal{U}}$\\
$\mathcal{U}^{DEP}_u$ & Set of unit vertices also producing if unit $u$ is producing, $\mathcal{U}^{DEP}_u\subseteq{\mathcal{U}}$\\
$\mathcal{A}$ & Set of arcs, $\mathcal{A} \subset (\mathcal{V} \times \mathcal{V} \times \mathcal{F} \times \mathcal{T} \times \mathcal{T} \times \Omega)$\\
$\mathcal{A}^{OUT}_{v,f,t,\omega}$ & Set of outgoing arcs with energy type $f \in \mathcal{F}$ from vertex $v \in \mathcal{V}$ in period $t \in \mathcal{T}$ and scenario $\omega \in \Omega$\\
$\mathcal{A}^{IN}_{v,f,t,\omega}$ & Set of incoming arcs with energy type $f \in \mathcal{F}$ to vertex $v \in \mathcal{V}$ in period $t \in \mathcal{T}$ and scenario $\omega \in \Omega$\\
$\mathcal{M} = \mathcal{M}^\text{B} \cup \mathcal{M}^\text{S}$ & Set of buying (B) and selling (S) markets\\\midrule
$\phi_{v,f,f'}$ & Conversion factor of vertex $v \in \mathcal{V}$ from energy type $f \in \mathcal{F}$ to energy type $f' \in \mathcal{F}$\\
$\underline{I}_{v,f,t,\omega}/\overline{I}_{v,f,t,\omega}$ & Lower/upper bound on input of energy type $f \in \mathcal{F}$ to vertex $v \in \mathcal{V}$ in period $t \in \mathcal{T}$ and scenario $\omega \in {\Omega}$\\
$L_{v,v',f}$ & Upper bound on flow of energy type $f \in \mathcal{F}$ from vertex $v \in \mathcal{V}$ to vertex $v' \in \mathcal{V}$\\
$\underline{O}_{v,f,t,\omega}/\overline{O}_{v,f,t,\omega}$ & Lower/upper bound on output of energy type $f \in \mathcal{F}$ from vertex $v \in \mathcal{V}$ in period $t \in \mathcal{T}$ and scenario $\omega \in {\Omega}$\\
$C^{I}_{v, f, t, \omega}/C^{O}_{v, f, t, \omega}$ & Cost for inflow/outflow of one unit of energy type $f \in \mathcal{F}$ at vertex $v \in \mathcal{V}$ in period $t \in \mathcal{T}$ and scenario $\omega \in \Omega$\\
$C^{S}_{u}$ & Start-up cost for unit $u \in \mathcal{U}^{WC}$ \\
$T^{UT}_u/ T^{DT}_u$ & Minimum up time/down time for unit $u \in \mathcal{U}^{WC}$\\
$B_{u}$ & Initial online status of unit $u \in \mathcal{U}^\text{WC}$\\
$T^\text{B}_{u}$ & Minimum remaining periods of initial online status of unit $u \in \mathcal{U}^\text{WC}$\\
$R^U_{u,f}$, $R^D_{u,f}$ & Ramping limits on energy type $f$ for unit vertices $u \in \mathcal{U}$\\
$c_{a}$ & Cost per unit on arc $a \in \mathcal{A}$, weighted with scenario probability\\
$l_{a}/u_{a}$ & Lower/upper bound on flow on arc $a \in \mathcal{A}$\\
$p_{m, t, \omega}$ & Price at market $m \in \mathcal{M}$ in period $t \in \mathcal{T}$ and scenario $\omega \in \Omega$\\\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:nomenclature_dh}
\end{table}
\subsection{General sets and parameters}
Energy types are defined by the set $\mathcal{F}$ and are any kind of input fuel or output product that is used or produced in the DH network. Typical examples are electricity and heat as output products as well as electricity, wood chips, natural gas, waste heat and solar heat as input energy types.
The planning horizon is denoted by the set of periods $\mathcal{T}$. The subset of periods $\mathcal{T}^* = \{ 1, ..., |\mathcal{T}^*|\} \subseteq \mathcal{T}$ are the periods for which non-anticipativity must hold for the later defined units. Uncertain input data is given by the set of scenarios $\Omega$. Each scenario $\omega$ has a probability $\pi_\omega$ with $\sum_{\omega \in \Omega} \pi_\omega = 1$.
\subsection{Network structure}
In our model formulation, the DH system is represented by the set of vertices $\mathcal{V}$ and the set of arcs $\mathcal{A}$ that connect the vertices. Thus, we can formulate the main part of the optimization model as a flow problem on this network structure. An arc $a$ is defined by the indices $a = (v,v',f,t,t',\omega)$ where $v$ is the start vertex, $v'$ is the end vertex, $f$ is the type of energy flowing on this arc, $t$ is the start time period, $t'$ is the end time period and $\omega$ the scenario.
To incorporate costs and restrictions on the flow in the network, vertices and arcs have several parameters. Each arc $a$ has a cost per unit of flow denoted by $c_a$, which is weighted with the probability $\pi_\omega$ of the scenario $\omega$ of this arc. The flow on each arc $a$ is limited by the lower and upper bounds $l_a$ and $u_a$, respectively.
Each vertex $v$ has lower and upper bounds on the total incoming flow, $[\underline{I}_{v,f,t,\omega},\overline{I}_{v,f,t,\omega}]$, as well as the total outgoing flow, $[\underline{O}_{v,f,t,\omega},\overline{O}_{v,f,t,\omega}]$, per energy type $f$, period $t$ and scenario $\omega$. Each vertex has the capability of transforming an energy type $f$ to another energy type $f'$ at a given conversion rate denoted by the transformation factor $\phi_{v,f,f'}$. The parameters are illustrated in Figure \ref{fig:vertex}.
The set of incoming and outgoing arcs for vertex $v \in \mathcal{V}$ for energy type $f \in \mathcal{F}$ in period $t\in\mathcal{T}$ and scenario $\omega \in \Omega$ are denoted by the arc subsets $\mathcal{A}^{IN}_{v,f,t,\omega}$ and $\mathcal{A}^{OUT}_{v,f,t,\omega}$, respectively.
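As an illustration only (a hypothetical encoding, not our actual implementation), arcs and the incidence sets $\mathcal{A}^{IN}_{v,f,t,\omega}$ and $\mathcal{A}^{OUT}_{v,f,t,\omega}$ could be represented as follows:
\begin{verbatim}
from dataclasses import dataclass

@dataclass(frozen=True)
class Arc:
    """One arc a = (v, v', f, t, t', omega) of the flow network."""
    v: str        # start vertex
    v2: str       # end vertex
    f: str        # energy type flowing on the arc
    t: int        # start period
    t2: int       # end period (t2 = t + 1 only for storage self-arcs)
    omega: int    # scenario index
    cost: float   # c_a, weighted with the scenario probability
    lb: float = 0.0            # l_a
    ub: float = float("inf")   # u_a

def outgoing(arcs, v, f, t, omega):
    """A^OUT_{v,f,t,omega}: arcs leaving vertex v."""
    return [a for a in arcs if (a.v, a.f, a.t, a.omega) == (v, f, t, omega)]

def incoming(arcs, v, f, t, omega):
    """A^IN_{v,f,t,omega}: arcs entering vertex v."""
    return [a for a in arcs if (a.v2, a.f, a.t2, a.omega) == (v, f, t, omega)]
\end{verbatim}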
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth]{vertex.pdf}
\caption{Example for arc and vertex parameters where vertex $v$ receives input of energy type $f$ from unit $u$ or energy source $e_2$ and transforms it to energy type $f'$ that is stored in storage unit $s$ or used in demand site $d_1$. Unit $u$ uses input from energy source $e_1$ and demand site $d_2$ gets input from storage $s$.}
\label{fig:vertex}
\end{figure}
\subsection{District heating network components} \label{sec:components}
In the following, we describe the components of the DH system and how they can be expressed using the above network structure.
\subsubsection{Energy sources $\mathcal{E}$}
Energy sources are given in the set $\mathcal{E} \subset \mathcal{V}$ and are the only way to insert energy of different types into the network without it being produced by units. Energy sources are used for input fuels such as natural gas, biomass or electricity. Additionally, they can represent heat that is not produced but injected through waste heat sites or solar thermal units. The output of the energy source vertex $v$ is limited by $[\underline{O}_{v,f,t,\omega},\overline{O}_{v,f,t,\omega}]$ for the energy type $f$ associated with this source and by $[0,0]$ for all other energy types. The limits can vary over time and per scenario to model time-varying and/or uncertain inflow from, e.g., waste heat or solar units. Furthermore, the energy can be priced with $C^O_{v,f,t,\omega}$, potentially depending on time and scenario. Thus, each arc $a$ leaving the source $v$ contains the cost $C^O_{v,f,t,\omega}$ in the parameter $c_a$. There is no inflow to energy sources, i.e., $[\underline{I}_{v,f,t,\omega},\overline{I}_{v,f,t,\omega}] = [0,0]$. An energy source can also be used to model penalty costs for missing heat by providing heat at a high cost.
\subsubsection{Demand sites $\mathcal{D}$}
Demand sites are defined by the set $\mathcal{D} \subset \mathcal{V}$. Typical demand sites are the heat load demands in the DH network but also the electricity markets. Demand sites $v \in \mathcal{D}$ have a limitation on inflow $[\underline{I}_{v,f,t,\omega},\overline{I}_{v,f,t,\omega}]$ for the energy type of the demand site. The inflow for all other energy types is $[0,0]$. Demand sites have no outflow, i.e., $[\underline{O}_{v,f,t,\omega},\overline{O}_{v,f,t,\omega}] = [0,0]$. There are two common cases: $\underline{I}_{v,f,t,\omega} = \overline{I}_{v,f,t,\omega}$, if the demand needs to be fulfilled exactly, and $\underline{I}_{v,f,t,\omega} < \overline{I}_{v,f,t,\omega} = \infty$, if the demand needs to be covered but may be exceeded by its supply. The lower limit can be set to 0 to model electricity markets. Each unit of inflow can be associated with cost if $C^{I}_{v, f, t, \omega} > 0$ or income if $C^{I}_{v, f, t, \omega} < 0$. The parameter $C^{I}_{v, f, t, \omega}$ is included in the cost parameter $c_a$ on all incoming arcs $a$ to demand site $v$. A demand site can also be used to model excess heat by providing an additional heat demand site without any demand (but maybe some penalty costs).
\subsubsection{Storage units $\mathcal{S}$}
Storage units are given in the set $\mathcal{S} \subset \mathcal{V}$ and can store energy from one period to the next. A storage $v \in \mathcal{S}$ is defined for a specific energy type. The conversion factor for this energy type is $\phi_{v,f,f} = (1 - \text{loss})$ to model losses between periods, i.e., the efficiency of the storage unit. For all other energy types, it is set to zero. The capacity of the storage unit limits the maximum flow on arcs connecting the storage with itself in the next period, i.e., $u_a = \text{capacity}(t,\omega)$ with $a = (v,v,f,t,t+1,\omega), \forall t \in \mathcal{T}, \omega \in \Omega$. The capacity can be time- and scenario-dependent. The maximum flow through the storage per period limits the total in- and outflow with lower limits of zero, i.e., $[\underline{I}_{v,f,t,\omega}=0,\overline{I}_{v,f,t,\omega}=\text{max-flow}]$ and $[\underline{O}_{v,f,t,\omega}=0,\overline{O}_{v,f,t,\omega}=\text{max-flow}]$. The limitations for all other energy types are zero.
\subsubsection{Interconnections $\mathcal{I}$}
Interconnections are defined by the set $\mathcal{I} \subset \mathcal{V}$ and can be used to model flow restrictions from parts of the network to other parts of the network, e.g., when
two districts are connected. An interconnection is given for a defined energy type and for this energy type there are limitations on the in- and outflow given by $[\underline{I}_{v,f,t,\omega},\overline{I}_{v,f,t,\omega}]$ and $[\underline{O}_{v,f,t,\omega},\overline{O}_{v,f,t,\omega}]$, respectively, which can be time- and scenario-dependent. The limitations for all other energy types are zero. Losses can be modelled similarly to storages with $\phi_{v,f,f} = (1 - \text{loss})$ for the given energy type.
\subsubsection{Production units $\mathcal{U}$}
Production units are defined by the set $\mathcal{U} \subset \mathcal{V}$. Each production unit $v \in \mathcal{U}$ can transform input energy types $f_1 \in \mathcal{F}$ to output energy types $f_2 \in \mathcal{F}$. The capacity of the production unit is defined by the limitations on the input $[\underline{I}_{v,f_1,t,\omega},\overline{I}_{v,f_1,t,\omega}]$ and output flows $[\underline{O}_{v,f_2,t,\omega},\overline{O}_{v,f_2,t,\omega}]$ of each energy type. Non-valid energy types are excluded by setting the input and/or output restrictions to zero, respectively. The conversion factor $\phi_{v,f_1,f_2}$ is defined as the relationship between input and output energy type. The increase and decrease in production from one period to the next are limited by the up and down ramping limits $R^U_{v,f}$ and $R^D_{v,f}$, respectively. Those can be defined per energy type $f$. A subset of the units $\mathcal{U}^{*} \subseteq \mathcal{U}$ might relate to here-and-now decisions, i.e., those units need to have the same production in all scenarios $\Omega$ for the given non-anticipativity period $\mathcal{T}^*$.
Some production units require the modelling of their online status (on/off) to impose further restrictions, e.g. in case there exists a minimum production amount or dependencies with other units. The decisions regarding the status of the unit are called commitment decisions and the set of units with commitment decisions is denoted by $\mathcal{U}^{WC}\subseteq \mathcal{U}$. The start-up costs of a unit with commitment are denoted by $C^S_v$. For those units, we can also define minimum up and down times $T^{UT}_v$ and $T^{DT}_v$, respectively. Furthermore, interdependence
between units can be modelled. The set $\mathcal{U}^{DEP}_v$ contains all units that need to be online when unit $v$ is online. In contrast, the set $\mathcal{U}^{EXC}_v$ contains all units that are excluded from production if unit $v$ is producing. This can be used to model one unit with two different operational modes as two units excluding each other.
\subsection{Network creation}
Based on the structures defined above, the network can be created by translating all components with their parameters to vertices and arcs. In addition to the attributes mentioned in the previous section, the network encodes the limits on particular connections between components based on energy type, time and scenario. Those are given by the bounds $[l_a, u_a]$ on the arcs. This means that if two vertices are not connected, the flow bounds are set to zero; the same holds if a particular energy type cannot flow on a connection.
\subsubsection{Vertices $\mathcal{V}$}
All physical assets in the network, as presented in Section \ref{sec:components}, form the set of vertices in the network, i.e., $\mathcal{V}$ is defined as
$$\mathcal{V} = \mathcal{U} \cup \mathcal{E} \cup \mathcal{D} \cup \mathcal{I} \cup \mathcal{S}.$$
Note that several artificial vertices can be used to account for special conditions. For example, there are artificial energy sources and demand sites for each storage to model initial levels at the beginning of the planning horizon and target storage levels at the end of the planning horizon.
\subsubsection{Arcs $\mathcal{A}$}
The following arcs are created within a period $t \in \mathcal{T}$, $t'=t$ and scenario $\omega \in \Omega$, i.e., flow in the same period and scenario:
\begin{itemize}
\setlength\itemsep{-0.3em}
\item From energy source $e \in \mathcal{E}$ to unit $u \in \mathcal{U}$, if those two are interconnected and unit $u$ has inflow of the energy type $f$ from source $e$. The upper flow limit is the maximum inflow of energy type $f$ to unit $u$ and the cost is the per-unit cost of the energy source.
\item From energy source $e \in \mathcal{E}$ to storage unit $s \in \mathcal{S}$, demand site $d \in \mathcal{D}$ or interconnection $i \in \mathcal{I}$, if those two are interconnected and the end component has inflow of the energy type $f$ of source $e$. The upper limit is the maximum outflow of energy source $e$ and the cost is the per-unit cost of the energy source.
One example is solar heat or waste heat that can be directly used for heating. Then the outflow is limited by the available heat. Another example is the modelling of soft constraints, e.g., imbalances on the electricity markets or missing heat production. In that case, the flow is unlimited and the costs represent penalty costs.
\item From unit $u \in \mathcal{U}$ to storage unit $s \in \mathcal{S}$, if unit $u$ produces the energy type $f$ stored in $s \in \mathcal{S}$ and the two are connected. The upper limit is the maximum outflow of energy type $f$ from unit $u$. The cost is the production cost per unit of outflow.
\item From unit $u \in \mathcal{U}$ to demand site $d \in \mathcal{D}$, if unit $u$ produces the energy type consumed at $d$ and the two are connected. The upper limit is the maximum outflow of energy type $f$ from unit $u$. The cost is the production cost per unit of outflow minus potential income (or cost) for selling the energy type (e.g., electricity on the day-ahead market).
\item From unit $u_1 \in \mathcal{U}$ to unit $u_2 \in \mathcal{U}$, if $u_1$ produces an energy type $f$ that is input to unit $u_2$ and the two are connected. The upper limit is the maximum outflow of energy type $f$ from unit $u_1$. The cost is the production cost per unit of outflow.
\item From unit $u \in \mathcal{U}$ to interconnection $i \in \mathcal{I}$, if unit $u$ produces the energy type of interconnection $i$ and the two are interconnected. The upper limit is the maximum outflow of energy type $f$ from unit $u$. The cost is the production cost per unit of outflow.
\item From storage unit $s \in \mathcal{S}$ to demand site $d \in \mathcal{D}$, if storage unit $s$ stores the energy type $f$ of demand site $d$ and the two are connected. The upper limit is the maximum flow per period of storage $s$. The cost is the cost (or income) for sending one unit to the demand site.
\item From storage unit $s \in \mathcal{S}$ to interconnection $i \in \mathcal{I}$ or vice versa, if storage unit $s$ stores the energy type of interconnection $i$ and the two are connected. The upper limit is the maximum flow per period of storage $s$.
\item From interconnection $i \in \mathcal{I}$ to demand site $d \in \mathcal{D}$, if the interconnection transports the energy type of the demand site and the two are connected. The upper limit is the maximum flow per period of $i$. The cost is the cost (or income) for sending one unit to the demand site.
\item From interconnection $i_1 \in \mathcal{I}$ to interconnection $i_2 \in \mathcal{I}$, if the interconnections transport the same energy type and the two are connected. The upper limit is the maximum flow.
\end{itemize}
Furthermore, the modelling of storage units requires the following arcs from period $t \in \mathcal{T}$ to period $t+1 \in \mathcal{T}$ in the same scenario $\omega \in \Omega$:
\begin{itemize}
\item From storage unit $s \in \mathcal{S}$ to the same storage unit $s$ in the next period. The upper flow limit is the storage capacity.
\end{itemize}
Special arcs are created for initial and end storage levels at the storage units $s \in \mathcal{S}$. Initially stored energy can enter the system via an artificial energy source $e^* \in \mathcal{E}$ and the target storage level can leave the system through an artificial demand site $d^* \in \mathcal{D}$. The following arcs are added (see the sketch after this list):
\begin{itemize}
\setlength\itemsep{-0.3em}
\item In period $t=1$ (first period) from energy source $e^*$ to storage unit $s$ with a lower and upper limit equal to the initial storage level.
\item In period $t=|\mathcal{T}|$ from storage unit $s$ to demand site $d^*$ with the target storage level as lower bound and storage capacity or target storage level as upper bound.
\end{itemize}
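The sketch below illustrates, under the same assumptions as before, how the storage-related arcs could be generated. It reuses the \texttt{Arc} class from the earlier sketch; the helpers \texttt{storage\_capacity}, \texttt{initial\_level} and \texttt{target\_level} are hypothetical.
\begin{verbatim}
# Hedged sketch of storage-arc generation; helper functions are hypothetical.
def storage_arcs(s, f, periods, scenarios):
    arcs = []
    for omega in scenarios:
        # carry-over arcs s -> s from period t to t+1, bounded by capacity
        for t in periods[:-1]:
            arcs.append(Arc(s, s, f, t, t + 1, omega,
                            ub=storage_capacity(s, t, omega)))
        t0, tT = periods[0], periods[-1]
        # initial level enters via artificial source e*, fixed by lb = ub
        arcs.append(Arc("e*", s, f, t0, t0, omega,
                        lb=initial_level(s), ub=initial_level(s)))
        # target level leaves via artificial demand site d*
        arcs.append(Arc(s, "d*", f, tT, tT, omega,
                        lb=target_level(s),
                        ub=storage_capacity(s, tT, omega)))
    return arcs
\end{verbatim}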
The example in Figure \ref{fig:vertex} could be a setup where vertex $v$ is an electric boiler that can draw electricity ($f$) from either the market ($e_2$) or a wind farm ($u$) that is dependent on wind ($e_1$). The electric boiler produces heat ($f'$) for demand sites $d_1$ and $d_2$ and the flow to demand site $d_2$ is going through a thermal storage ($s$). $\phi_{v,f,f'}$ is the transformation factor from electricity ($f$) to heat ($f'$) for the electric boiler ($v$).
\section{Network flow formulation}\label{sec:model}
\begin{table}
\footnotesize
\centering
\caption{Sets, parameters and variables}
\begin{tabular}{lp{0.8\columnwidth}}
\toprule
\multicolumn{2}{l}{Variables}\\\midrule
$x_{a} \in \mathbb{R}^+$ & Flow on arc $a \in \mathcal{A}$\\
$z_{u,t,\omega} \in \{0, 1\}$ & Binary variable, 1 if unit $u \in \mathcal{U}^\text{WC}$ is producing in period $t$ and scenario $\omega \in \Omega$, 0 otherwise\\
$z^{S}_{u,t,\omega} \in \{0, 1\}$ & Binary variable, 1 if unit $u \in \mathcal{U}^\text{WC}$ is started in period $t$ and scenario $\omega \in \Omega$, 0 otherwise\\
$z^{E}_{u,t,\omega} \in \{0, 1\}$ & Binary variable, 1 if unit $u \in \mathcal{U}^\text{WC}$ is shut down in period $t$ and scenario $\omega \in \Omega$, 0 otherwise\\\bottomrule
\end{tabular}
\label{tab:dv}
\end{table}
{\allowdisplaybreaks
Based on the above defined network, we can create the optimization model. The main decisions of the model are represented by the flow $x_a$ on the arcs $a \in \mathcal{A}$. Further decisions are related to the commitment status of units $u \in \mathcal{U}^{WC}$, where binary variables $z_{u,t,\omega}, z^S_{u,t,\omega}$ and $z^E_{u,t,\omega}$ model whether unit $u$ is online, started up or shut down in period $t$ and scenario $\omega$, respectively. See Table \ref{tab:dv} for an overview of all decision variables including the ranges.
The objective function \eqref{eq:obj} minimizes flow costs through the network plus unit start-up costs.
\begin{align}
&\min \qquad \sum_{a \in \mathcal{A}} c_{a} x_{a} + \sum_{u \in \mathcal{U}^{WC}}\sum_{t\in T} \sum_{\omega \in \Omega} \pi_\omega C^S_{u} z^{S}_{u,t,\omega}\label{eq:obj}
\end{align}
Constraints \eqref{eq:arc_bounds} limit the flow on each arc to the given bounds.
\begin{align}
& l_{a} \le x_{a} \le u_{a} && \forall a \in \mathcal{A}\label{eq:arc_bounds}\end{align}
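As a hedged illustration, the flow variables, the objective \eqref{eq:obj} and the bounds \eqref{eq:arc_bounds} could be set up in PuLP (the toolchain used in Section \ref{sec:results}) as follows; \texttt{arcs}, \texttt{units\_wc}, \texttt{periods}, \texttt{scenarios}, \texttt{pi} and \texttt{start\_cost} are assumed input structures.
\begin{verbatim}
import pulp

prob = pulp.LpProblem("DH_flow", pulp.LpMinimize)

# flow x_a >= 0; the arc bounds l_a <= x_a <= u_a are imposed directly
# as variable bounds
x = {a: pulp.LpVariable(f"x_{i}", lowBound=a.lb,
                        upBound=None if a.ub == float("inf") else a.ub)
     for i, a in enumerate(arcs)}

# commitment variables for units in U^WC (nested dicts z[u][t][w])
z  = pulp.LpVariable.dicts("z",  (units_wc, periods, scenarios), cat="Binary")
zS = pulp.LpVariable.dicts("zS", (units_wc, periods, scenarios), cat="Binary")
zE = pulp.LpVariable.dicts("zE", (units_wc, periods, scenarios), cat="Binary")

# objective: flow costs (already probability-weighted in a.cost) plus
# probability-weighted start-up costs
prob += (pulp.lpSum(a.cost * x[a] for a in arcs)
         + pulp.lpSum(pi[w] * start_cost[u] * zS[u][t][w]
                      for u in units_wc for t in periods for w in scenarios))
\end{verbatim}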
The transformation from one energy type $f$ to another energy type $f'$ at a vertex $v$ is handled in constraints \eqref{eq:transformation} using the transformation factor $\phi_{v,f,f'}$. These constraints do not hold for energy sources $\mathcal{E}$ and demand sites $\mathcal{D}$, since they do not have incoming or outgoing flow, respectively.
\begin{align}
&\sum_{a \in \mathcal{A}^{OUT}_{v,f',t,\omega}} \phi_{v,f,f'} x_{a} - \sum_{a \in \mathcal{A}^{IN}_{v,f,t,\omega}} x_{a} = 0 && \forall v \in \mathcal{V} \backslash (\mathcal{E} \cup \mathcal{D}), t \in \mathcal{T}, \omega \in \Omega, f \in \mathcal{F}, f' \in \mathcal{F} \label{eq:transformation}
\end{align}
The total outflow and inflow at each vertex are limited by lower and upper bounds in constraints \eqref{eq:outflow_V} to \eqref{eq:inflow_V}. An exception is made for units with commitment decisions $\mathcal{U}^{WC}$, since those are handled explicitly in constraints \eqref{eq:outflow_VC2} and \eqref{eq:inflow_VC2}.
\begin{align}
&\underline{O}_{v,f,t,\omega}\le \sum_{a \in \mathcal{A}^{OUT}_{v,f,t,\omega}} x_{a} \le \overline{O}_{v,f,t,\omega} && \forall v \in \mathcal{V}^{} \backslash \mathcal{U}^{WC}, t \in \mathcal{T}, \omega \in \Omega, f \in \mathcal{F}\label{eq:outflow_V}\\
&\underline{I}_{v,f,t,\omega} \le \sum_{a \in \mathcal{A}^{IN}_{v,f,t,\omega}} x_{a} \le \overline{I}_{v,f,t,\omega} && \forall v \in \mathcal{V}^{} \backslash \mathcal{U}^{WC}, t \in \mathcal{T}, \omega \in \Omega, f \in \mathcal{F}\label{eq:inflow_V}
\end{align}
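In the same style, constraints \eqref{eq:transformation} to \eqref{eq:inflow_V} could be sketched as follows; the index sets (\texttt{vertices}, \texttt{sources}, \texttt{demands}, \texttt{energy\_types}), the lookups \texttt{A\_in}/\texttt{A\_out} returning $\mathcal{A}^{IN}_{v,f,t,\omega}$/$\mathcal{A}^{OUT}_{v,f,t,\omega}$, and the parameter dictionaries \texttt{phi}, \texttt{O\_lb} and \texttt{O\_ub} are all assumptions.
\begin{verbatim}
# Transformation balance (mirrors eq. (transformation) as stated)
for v in set(vertices) - set(sources) - set(demands):
    for t in periods:
        for w in scenarios:
            for f in energy_types:
                for f2 in energy_types:
                    prob += (pulp.lpSum(phi[v][f][f2] * x[a]
                                        for a in A_out(v, f2, t, w))
                             == pulp.lpSum(x[a] for a in A_in(v, f, t, w)))

# Vertex outflow bounds for vertices without commitment decisions
# (inflow bounds are analogous with A_in and I_lb/I_ub)
for v in set(vertices) - set(units_wc):
    for t in periods:
        for w in scenarios:
            for f in energy_types:
                out_flow = pulp.lpSum(x[a] for a in A_out(v, f, t, w))
                prob += out_flow >= O_lb[v][f][t][w]
                prob += out_flow <= O_ub[v][f][t][w]
\end{verbatim}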
The ramping constraints, i.e., the allowed difference in production between consecutive periods, are given in constraints \eqref{eq:rampup_c0} and \eqref{eq:rampdown_c0} for all production units except those with commitment decisions $\mathcal{U}^{WC}$ (see constraints \eqref{eq:rampup_c2}-\eqref{eq:rampdown_c2}).
\begin{align}
&\sum_{a \in \mathcal{A}^{OUT}_{u,f,t,\omega}} x_{a} - \sum_{a \in \mathcal{A}^{OUT}_{u,f,t-1,\omega}} x_{a}
\le R^U_{u,f} && \forall u \in \mathcal{U} \backslash \mathcal{U}^{WC}, f \in \mathcal{F}, t \in \mathcal{T}, \omega \in \Omega\label{eq:rampup_c0}\\
&-\sum_{a \in \mathcal{A}^{OUT}_{u,f,t,\omega}} x_{a} + \sum_{a \in \mathcal{A}^{OUT}_{u,f,t-1,\omega}} x_{a}
\le R^D_{u,f} && \forall u \in \mathcal{U} \backslash \mathcal{U}^{WC}, f \in \mathcal{F}, t \in \mathcal{T}, \omega \in \Omega\label{eq:rampdown_c0}
\end{align}
Please note that $\sum_{a \in \mathcal{A}^{OUT}_{u,f,t-1,\omega}} x_{a}$ in period $t=1$ refers to the initial production level given as an input parameter for each unit $u \in \mathcal{U}$.
\subsection{Units with commitment decisions}
In case of units with commitment decisions, we need to impose additional constraints. The commitment status of the unit $z_{u,t,\omega} \in \{0,1\}$ (1=on, 0=off) impacts the production of the unit. The status variable $z_{u,t,\omega}$ is updated using binary variables $z^S_{u,t,\omega}\in \{0,1\}$ and $z^E_{u,t,\omega}\in \{0,1\}$ for starting and stopping the unit, respectively.
Constraints \eqref{eq:commitment1_c2} to \eqref{eq:depend_c2} model commitment-related restrictions for units in set $\mathcal{U}^{WC}$. Constraints \eqref{eq:commitment1_c2} and \eqref{eq:commitment2_c2} ensure that the status of the unit is set correctly based on starting and stopping the unit while excluding simultaneous starts and stops. $z_{u,t-1,\omega}$ in period $t=1$ refers to the initial status given as parameter $B_{u}$ for each unit $u \in \mathcal{U}^{WC}$. Minimum up- and down-times are modelled in constraints \eqref{eq:commitment1_init} to \eqref{eq:mindowntime_c2}. Constraints \eqref{eq:commitment1_init} set the status based on the initial status $B_{u}$ and the required remaining periods $T^B_u$ in this status due to minimum up- or down-time. Constraints \eqref{eq:minuptime_c2} and \eqref{eq:mindowntime_c2} ensure the minimum up- and down-times for the remaining periods, respectively. Constraints \eqref{eq:exclude_c2} and \eqref{eq:exclude2_c2} exclude the simultaneous production of two units that should not run at the same time. The opposite case, where simultaneous production is required, is modelled in constraints \eqref{eq:depend_c2}.
\begin{align}
&z^{S}_{u,t,\omega} - z^{E}_{u,t,\omega} = z_{u,t,\omega} - z_{u,t-1,\omega} && \forall u \in \mathcal{U}^{WC}, t \in \mathcal{T}, \omega \in \Omega\label{eq:commitment1_c2}\\
&z^{S}_{u,t,\omega}+ z^{E}_{u,t,\omega} \le 1 && \forall u \in \mathcal{U}^{WC}, t \in \mathcal{T}, \omega \in \Omega\label{eq:commitment2_c2}\\
&z_{u,t,\omega} = B_{u} &&\forall u \in \mathcal{U}^{WC}, t \in \lbrace 0,\ldots, T^{B}_u\rbrace, \omega \in \Omega\label{eq:commitment1_init}\\
&\sum_{t'=\max\lbrace 1, t-T^{UT}_{u}\rbrace}^{t} z^{S}_{u,t',\omega} \le z_{u,t,\omega} && \forall u \in \mathcal{U}^{WC}, t \in \lbrace {T}^{B}_u, \ldots, |\mathcal{T}| \rbrace, \omega \in \Omega \label{eq:minuptime_c2}\\
&\sum_{t'=\max\lbrace 1, t-T^{DT}_u\rbrace}^{t} z^{E}_{u,t',\omega} \le 1-z_{u,t,\omega} && \forall u \in \mathcal{U}^{WC}, t \in \lbrace {T}^{B}_u, \ldots, |\mathcal{T}| \rbrace, \omega \in \Omega \label{eq:mindowntime_c2}\\
& z_{u,t,\omega} + z_{u',t,\omega} \le 1 && \negthickspace\forall u \in \mathcal{U}^{WC}, u' \in \mathcal{U}^{EXC}_u, t \in \mathcal{T},\omega \in \Omega\label{eq:exclude_c2}\\
& z^{S}_{u,t,\omega} + z^{E}_{u',t,\omega} \le 1 && \negthickspace \forall u \in \mathcal{U}^{WC}, u' \in \mathcal{U}^{EXC}_u, t \in \mathcal{T},\omega \in \Omega\label{eq:exclude2_c2}\\
& z_{u,t,\omega} = z_{u',t,\omega} && \negthickspace\forall u \in \mathcal{U}^{WC}, u' \in \mathcal{U}^{DEP}_u, t \in \mathcal{T}, \omega \in \Omega\label{eq:depend_c2}
\end{align}
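A minimal sketch of the commitment logic \eqref{eq:commitment1_c2}, \eqref{eq:commitment2_c2} and \eqref{eq:minuptime_c2}, assuming \texttt{periods} runs from 1 to $|\mathcal{T}|$ and that \texttt{B}, \texttt{TB} and \texttt{TUT} are parameter dictionaries for $B_u$, $T^B_u$ and $T^{UT}_u$:
\begin{verbatim}
for u in units_wc:
    for w in scenarios:
        for t in periods:
            # status update and exclusion of simultaneous start/stop
            z_prev = B[u] if t == periods[0] else z[u][t - 1][w]
            prob += zS[u][t][w] - zE[u][t][w] == z[u][t][w] - z_prev
            prob += zS[u][t][w] + zE[u][t][w] <= 1
        # minimum up-time: a unit started in the last TUT[u] periods stays on
        for t in range(max(1, TB[u]), len(periods) + 1):
            prob += (pulp.lpSum(zS[u][tp][w]
                                for tp in range(max(1, t - TUT[u]), t + 1))
                     <= z[u][t][w])
\end{verbatim}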
The inflow and outflow restrictions \eqref{eq:outflow_VC2} and \eqref{eq:inflow_VC2} as well as the ramping of the production \eqref{eq:rampup_c2} and \eqref{eq:rampdown_c2} of the units with commitment decisions are modelled dependent on the status of the unit.
\begin{align}
&z_{u,t,\omega}\underline{O}_{u,f,t,\omega} \le \sum_{a \in \mathcal{A}^{OUT}_{u,f,t,\omega}}x_{a} \le z_{u,t,\omega}\overline{O}_{u,f,t,\omega} && \forall u \in \mathcal{U}^{WC}, t \in \mathcal{T}, \omega \in \Omega, f \in \mathcal{F}\label{eq:outflow_VC2}\\
&z_{u,t,\omega}\underline{I}_{u,f,t,\omega} \le \sum_{a \in \mathcal{A}^{IN}_{u,f,t,\omega}} x_{a} \le z_{u,t,\omega}\overline{I}_{u,f,t,\omega} && \forall u \in \mathcal{U}^{WC}, t \in \mathcal{T}, \omega \in \Omega, f \in \mathcal{F}\label{eq:inflow_VC2}
\end{align}
\begin{align}
& \sum_{a \in \mathcal{A}^{OUT}_{u,f,t,\omega}} x_{a} - \sum_{a \in \mathcal{A}^{OUT}_{u,f,t-1,\omega}} x_{a}
\le R^U_{u,f} z_{u,t-1,\omega} + \underline{O}_{u,f,t,\omega} z^{S}_{u,t,\omega} \nonumber \\&\hspace{0.5\columnwidth} \forall u \in \mathcal{U}^{WC}, f \in \mathcal{F}, t \in \mathcal{T}, \omega \in \Omega\label{eq:rampup_c2}\\
& -\sum_{a \in \mathcal{A}^{OUT}_{u,f,t,\omega}} x_{a} + \sum_{a \in \mathcal{A}^{OUT}_{u,f,t-1,\omega}} x_{a}
\le R^D_{u,f} z_{u,t,\omega} + \underline{O}_{u,f,t,\omega} z^{E}_{u,t,\omega} \nonumber \\&\hspace{0.5\columnwidth}\forall u \in \mathcal{U}^{WC}, f \in \mathcal{F}, t \in \mathcal{T}, \omega \in \Omega\label{eq:rampdown_c2}
\end{align}
\subsection{Non-anticipativity constraints}
Depending on the application case of the model, different non-anticipativity constraints need to be added. If a deterministic version of the model is used, those constraints can be omitted. Furthermore, we can distinguish between non-anticipativity on the commitment decisions of the units and non-anticipativity with respect to bidding curves. The former assumes that the units can operate with the day-ahead market prices without any bidding, while the latter creates bidding curves that can be submitted to the day-ahead electricity market.
\subsubsection{Operational planning without bidding}
For all units with here-and-now decisions $\mathcal{U}^{*}$ in the periods considered as first stage $\mathcal{T}^*$, we need to include non-anticipativity constraints to ensure the decision structure of a two-stage stochastic program. Non-anticipativity means that the production and status of those units need to be equal across scenarios. Such non-anticipativity might be necessary, e.g., due to electricity market participation, where the power production amounts have to be determined before the market is cleared. The constraints for commitment status and production are given in constraints \eqref{eq:nonanticommit} and \eqref{eq:nonantiflow}, respectively. Constraints \eqref{eq:nonanticommit} ensure that the commitment status is the same across all scenarios for all units in $\mathcal{U}^* \cap \mathcal{U}^{WC}$. The flow non-anticipativity is modelled such that all arcs need to have the expected flow \eqref{eq:nonantiflow}. The set $\mathcal{A}^{*}(a)$ contains all arcs $a'$ that need to have the same flow as arc $a$, i.e., the arcs with the same start and end vertex, period and energy type, which only differ in the scenario, including $a$ itself. The non-anticipativity on flow holds only for arcs with units $v(a)$ in $\mathcal{U}^*$ as start vertex and period $t(a) \in \mathcal{T}^*$ as start period. ${\omega(a')}$ denotes the scenario of arc $a'$.
\begin{align}
&z_{u,t,\omega} = z_{u,t,\omega'} && \forall u \in \mathcal{U}^* \cap \mathcal{U}^{WC}, t \in \mathcal{T}^*, \omega, \omega' \in \Omega\label{eq:nonanticommit}\\
&x_{a} = \sum_{a' \in \mathcal{A}^{*}(a)}\pi_{\omega(a')} x_{a'} && \forall \lbrace a \in \mathcal{A}\ |\ v(a) \in \mathcal{U}^* \land t(a) \in \mathcal{T}^* \rbrace\label{eq:nonantiflow}
\end{align}
}
In this setting (in contrast to the approach described in Section \ref{sec:bidding_curves}) the DH system is assumed to participate in electricity trading via price-independent bids. That means that electricity trades, if first-stage decisions, are realized no matter the market price.
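In code, the flow non-anticipativity \eqref{eq:nonantiflow} could be sketched as follows; \texttt{scenario\_bundle} is an assumed helper returning $\mathcal{A}^{*}(a)$, including $a$ itself.
\begin{verbatim}
for a in arcs:
    if a.v in units_star and a.t in first_stage_periods:
        bundle = scenario_bundle(a)  # arcs differing from a only in scenario
        prob += x[a] == pulp.lpSum(pi[b.omega] * x[b] for b in bundle)
\end{verbatim}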
\subsubsection{Operational planning including bidding curves}\label{sec:bidding_curves}
The extension to bidding curves is based on the work in \cite{hvidesande}, which uses the method of \cite{pandvzic2013offering} for creating bidding curves based on electricity price scenarios. The method creates monotonically increasing/decreasing bidding curves determining a bidding amount for each price scenario. We refer to \cite{hvidesande} and \cite{pandvzic2013offering} for further details.
In this extension, we have a set of markets for selling $m \in \mathcal{M}^S$ and buying $m \in \mathcal{M}^B$ energy. A market $m$ contains a set of three vertices $\mathcal{V}_m$ representing a spot (day-ahead) market as well as imbalances (upward and downward). These vertices are either energy sources, if they offer energy, or demand sites, if they receive energy. The price on the spot market $m \in \mathcal{M}^{S} \cup \mathcal{M}^{B}$ in period $t$ and scenario $\omega$ is denoted by $p_{m,t,\omega}$. The setup of vertices is visualized in Figure \ref{fig:markets}.
\begin{figure}
\centering
\includegraphics[width=0.6\columnwidth]{markets.pdf}
\caption{Set of vertices needed to represent markets with exemplary connections to electric boiler (EB) and CHP unit (CHP). Each market consists of three vertices: day-ahead market (DA), negative imbalance (buying needed) (BMB) and positive imbalance (selling needed) (BMS) (blue = energy sources, gray = demand sites, white = units).}
\label{fig:markets}
\end{figure}
Constraints \eqref{eq:bidcurvesell} and \eqref{eq:bidcurvesell2} determine the bidding amount to the selling market based on the scenarios and allow a monotonically increasing bidding curve based on the market scenario price $p_{m,t,\omega}$. For an equal price, an equal selling amount is guaranteed by \eqref{eq:bidcurvesell}, ensuring non-anticipativity. Constraints \eqref{eq:bidcurvebuy} and \eqref{eq:bidcurvebuy2} create monotonically decreasing bidding curves for the buying market.
{\allowdisplaybreaks
\begin{align}
\sum_{a \in \mathcal{A}^{IN}_{v,f,t,\omega}} x_a - \sum_{a \in \mathcal{A}^{OUT}_{v,f,t,\omega}} x_a &= \sum_{a \in \mathcal{A}^{IN}_{v,f,t,\omega'}} x_a - \sum_{a \in \mathcal{A}^{OUT}_{v,f,t,\omega'}} x_a \nonumber\\&\forall m \in \mathcal{M}^S, v \in \mathcal{V}_m, t \in \mathcal{T}, (\omega, \omega') \in \Omega \times \Omega, \text{if } p_{m,t,\omega} = p_{m,t,\omega'}\label{eq:bidcurvesell}\\
\sum_{a \in \mathcal{A}^{IN}_{v,f,t,\omega}} x_a - \sum_{a \in \mathcal{A}^{OUT}_{v,f,t,\omega}} x_a &\le \sum_{a \in \mathcal{A}^{IN}_{v,f,t,\omega'}} x_a - \sum_{a \in \mathcal{A}^{OUT}_{v,f,t,\omega'}} x_a \nonumber\\& \forall m \in \mathcal{M}^S, v \in \mathcal{V}_m, t \in \mathcal{T}, (\omega, \omega') \in \Omega \times \Omega, \text{if } p_{m,t,\omega} \le p_{m,t,\omega'}\label{eq:bidcurvesell2}\\
\sum_{a \in \mathcal{A}^{OUT}_{v,f,t,\omega}} x_a - \sum_{a \in \mathcal{A}^{IN}_{v,f,t,\omega}} x_a &= \sum_{a \in \mathcal{A}^{OUT}_{v,f,t,\omega'}} x_a - \sum_{a \in \mathcal{A}^{IN}_{v,f,t,\omega'}} x_a \nonumber\\&\forall m \in \mathcal{M}^B, v \in \mathcal{V}_m, t \in \mathcal{T}, (\omega, \omega') \in \Omega \times \Omega, \text{if } p_{m,t,\omega} = p_{m,t,\omega'}\label{eq:bidcurvebuy}\\
\sum_{a \in \mathcal{A}^{OUT}_{v,f,t,\omega}} x_a - \sum_{a \in \mathcal{A}^{IN}_{v,f,t,\omega}} x_a &\le \sum_{a \in \mathcal{A}^{OUT}_{v,f,t,\omega'}} x_a - \sum_{a \in \mathcal{A}^{IN}_{v,f,t,\omega'}} x_a \nonumber\\& \forall m \in \mathcal{M}^B, v \in \mathcal{V}_m, t \in \mathcal{T}, (\omega, \omega') \in \Omega \times \Omega, \text{if } p_{m,t,\omega} \ge p_{m,t,\omega'}\label{eq:bidcurvebuy2}
\end{align}}
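For the selling markets, constraints \eqref{eq:bidcurvesell} and \eqref{eq:bidcurvesell2} could be generated as sketched below (the buying-market constraints \eqref{eq:bidcurvebuy} and \eqref{eq:bidcurvebuy2} are symmetric); \texttt{net\_sold} is an assumed helper returning the inflow-minus-outflow expression of market vertex $v$ in period $t$ and scenario $w$.
\begin{verbatim}
for m in markets_sell:
    for v in market_vertices[m]:
        for t in periods:
            for w1 in scenarios:
                for w2 in scenarios:
                    if price[m][t][w1] == price[m][t][w2]:
                        prob += net_sold(v, t, w1) == net_sold(v, t, w2)
                    elif price[m][t][w1] <= price[m][t][w2]:
                        prob += net_sold(v, t, w1) <= net_sold(v, t, w2)
\end{verbatim}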
\subsection{Rolling horizon approach}
The model presented above can also be used in a rolling horizon setting where we shift the planning horizon by $|\mathcal{T}^*|$ in each iteration. To model the rolling horizon correctly, some input parameters need to be updated based on the realization of the uncertainty. The initial status of the units with commitment decisions $\mathcal{U}^{WC}$ and the initial production and storage levels need to be set according to the outcome in period $t=|\mathcal{T}^*|$ in the previous run.
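A minimal sketch of this rolling-horizon driver, with hypothetical helpers \texttt{solve\_model} and \texttt{roll\_forward}:
\begin{verbatim}
state = initial_state   # unit statuses, production and storage levels
for day in range(14):
    # 168h planning horizon, shifted by the 24h non-anticipativity period
    window = range(day * 24 + 1, day * 24 + 169)
    solution = solve_model(window, state, scenarios)
    # fix the first-stage decisions, observe the realized uncertainty and
    # update the initial conditions for the next iteration
    state = roll_forward(solution, observed_data, hours=24)
\end{verbatim}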
\section{Results}\label{sec:results}
All results are computed on hardware of the DTU Computing Center (DCC)\footnote{https://www.hpc.dtu.dk/} with Intel Xeon Processors 2650v4 2.20GHz using 8 cores and 16 GB RAM. The models are implemented using Python 3.7.10 and PuLP 2.5.1, and solved with Gurobi 9.5.0.
\subsection{Value of stochastic solution and out-of-sample performance}
In the first experiment, we evaluate the operational optimization with and without bidding curves comparing the stochastic programming approach with a simpler deterministic model using the expected value of the uncertain data.
Tables \ref{tab:vss-nobidding} and \ref{tab:vss-bidding} present the objective values and several solution metrics for each test case using each of the two approaches without and with bidding, respectively. Furthermore, we calculate the value of the stochastic solution (VSS), which is defined as the difference between the expectation of the expected value solution (Exp.) and the expected value of the stochastic program (Sto.). The VSS is a standard metric for the evaluation of stochastic programs \cite[p.~165]{birge2011introduction}. It evaluates the performance based on the scenarios considered in the model, i.e., 9 scenarios in our case. The planning horizon of the model is 168h (the first week of the respective dataset) and the first-stage decisions relate to either the commitment status of the units (No bidding) or the bidding curves (Bidding). When bidding is considered, the evaluation first determines whether the bid to the market would have been successful in a certain scenario by comparing the bidding price with the scenario price. Production is only allowed if the bid is successful.
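For example, for case B-01-168 in Table \ref{tab:vss-nobidding} the VSS is $118831.0 - 111703.8 = 7127.2$ EUR, corresponding to an improvement of $7127.2/118831.0 \approx 6.0\%$ when using the stochastic program.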
\begin{table}[t]
\centering
\footnotesize
\caption{Performance of expected value approach and stochastic program \textbf{without} consideration of bidding: Objective value (Obj.), net electricity sold on day-ahead market (sales-purchase) (El.sales), Income from electricity market (Income), Heat production (Heat) and cost per MWh. All values are expected values. The VSS is the difference between objective values and VSS[\%] gives the improvement when using the stochastic program. }
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{lrrrrr|rrrrr|rr}
\toprule
~ & \multicolumn{5}{c}{Expected value approach}& \multicolumn{5}{c}{Stochastic program} & ~ & ~ \\ \cmidrule(rl){2-6} \cmidrule(rl){7-11}
Case & Obj. & El.sales & Income & Heat & Cost/ & Obj. & El.sales & Income & Heat & Cost/ & VSS & VSS\\
& [EUR] & [MWh$_\text{e}$] & [EUR] & [MWh$_\text{h}$]& MWh$_\text{h}$ & [EUR] & [MWh$_\text{e}$] & [EUR] & [MWh$_\text{h}$]& MWh$_\text{h}$ & [EUR]& [\%] \\\midrule
B-01-168 & 118831.0 & 1347.6 & 81222.5 & 5118.3 & 23.2 & 111703.8 & 987.2 & 62646.6 & 4957.3 & 22.5 & 7127.2 & 6.0 \\
B-04-168 & 37993.1 & 6.7 & 5185.9 & 3663.1 & 10.4 & 36717.8 & 5.6 & 4900.1 & 3659.0 & 10.0 & 1275.3 & 3.4 \\
B-06-168 & 19861.1 & -67.4 & -3281.9 & 2268.6 & 8.8 & 15144.8 & -71.6 & -3539.3 & 2330.7 & 6.5 & 4716.3 & 23.7 \\
H-04-168 & 282758.6 & 3458.8 & 274766.1 & 8490.2 & 33.3 & 279712.8 & 3380.5 & 270196.6 & 8490.8 & 32.9 & 3045.8 & 1.1 \\
H-07-168 & 3158.1 & 2155.1 & 242157.0 & 2701.5 & 1.2 & 1989.4 & 2181.1 & 246348.6 & 2749.8 & 0.7 & 1168.7 & 37.0 \\
H-10-168 & -513749.4 & 9361.4 & 1544349.4 & 10518.3 & -48.8 & -513749.4 & 9361.4 & 1544349.4 & 10518.3 & -48.8 & 0.0 & 0.0 \\
M-05-168 & 16270.6 & 0.0 & 0.0 & 668.5 & 24.3 & 16270.6 & 0.0 & 0.0 & 668.5 & 24.3 & 0.0 & 0.0 \\
M-08-168 & 6971.4 & 36.3 & 2071.7 & 291.1 & 23.9 & 6971.3 & 36.3 & 2071.7 & 291.1 & 23.9 & 0.1 & 0.0 \\
M-12-168 & 33097.0 & 184.8 & 8981.9 & 1247.6 & 26.5 & 33055.5 & 214.5 & 10105.9 & 1247.6 & 26.5 & 41.5 & 0.1 \\ \bottomrule
\end{tabular}
\label{tab:vss-nobidding}
\end{adjustbox}
\end{table}
\begin{table}[t]
\centering
\footnotesize
\caption{Performance of expected value approach and stochastic program \textbf{including} bidding: Objective value (Obj.), net electricity sold on day-ahead market (sales-purchase) (El.sales), Income from electricity market (Income), Heat production (Heat) and cost per MWh. All values are expected values. The VSS is the difference between objective values and VSS[\%] gives the improvement when using the stochastic program. }
\begin{adjustbox}{width=\textwidth}
\begin{tabular}{lrrrrr|rrrrr|rr}
\toprule
~ & \multicolumn{5}{c}{Expected value approach}& \multicolumn{5}{c}{Stochastic program} & ~ & ~ \\ \cmidrule(rl){2-6} \cmidrule(rl){7-11}
Case & Obj. & El.sales & Income & Heat & Cost/ & Obj. & El.sales & Income & Heat & Cost/ & VSS & VSS\\
& [EUR] & [MWh$_\text{e}$] & [EUR] & [MWh$_\text{h}$]& MWh$_\text{h}$ & [EUR] & [MWh$_\text{e}$] & [EUR] & [MWh$_\text{h}$]& MWh$_\text{h}$ & [EUR]& [\%] \\\midrule
B-01-168 & 144497.6 & 824.7 & 25553.4 & 4956.9 & 29.2 & 106370.7 & 834.4 & 65405.3 & 4957.2 & 21.5 & 38126.9 & 26.4\% \\
B-04-168 & 66262.4 & -33.7 & -28814.4 & 3543.9 & 18.7 & 32010.1 & -3.8 & 11645.2 & 3544.0 & 9.0 & 34252.3 & 51.7\% \\
B-06-168 & 33424.1 & -67.9 & -22584.0 & 2197.0 & 15.2 & 14327.4 & -71.6 & -3494.7 & 2197.8 & 6.5 & 19096.7 & 57.1\% \\
H-04-168 & 288430.3 & 2133.6 & 182284.9 & 8490.5 & 34.0 & 267060.0 & 3768.1 & 307183.5 & 8490.9 & 31.5 & 21370.3 & 7.4\% \\
H-07-168 & 29094.9 & 1258.2 & 149114.0 & 2665.8 & 10.9 & -561.4 & 2347.5 & 267112.2 & 2910.3 & -0.2 & 29656.4 & 101.9\% \\
H-10-168 & -339867.2 & 5037.7 & 914679.2 & 6245.8 & -54.4 & -520133.3 & 9111.8 & 1523616.5 & 10243.8 & -50.8 & 180266.0 & 53.0\% \\
M-05-168 & 16270.5 & 0.0 & 0.0 & 668.5 & 24.3 & 16270.6 & 0.0 & 0.0 & 668.5 & 24.3 & -0.1 & 0.0\% \\
M-08-168 & 6899.4 & 18.2 & 1142.4 & 291.1 & 23.7 & 6862.0 & 28.0 & 1722.4 & 291.1 & 23.6 & 37.4 & 0.5\% \\
M-12-168 & 33603.0 & 85.1 & 4593.1 & 1247.7 & 26.9 & 32799.8 & 180.2 & 9008.5 & 1247.6 & 26.3 & 803.2 & 2.4\% \\ \bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:vss-bidding}
\end{table}
In the absence of power market bidding (Table \ref{tab:vss-nobidding}), using the stochastic program is particularly beneficial in the Brønderslev and Hillerød DH systems. In Middelfart, the performance of the stochastic and expected value models is similar. The stochastic program achieves lower total costs by utilizing cheaper units, as can be seen from the lower cost per MWh of produced heat. Although the expected value approach achieves a higher income from the electricity market in some cases, the cost per MWh of heat is always higher or equal. Including scenarios in the stochastic program enables the model to schedule the first-stage units such that total costs across all scenarios are decreased in comparison to the expected value program. The expected value approach optimizes only for the expected scenario and therefore, the commitment of units can be disadvantageous for other scenarios. In Middelfart, the interaction with the electricity market is low, since the cheapest units (wood chip boiler and wood pellet boiler) do not depend on the market. The uncertainty can be handled successfully using the flexibility of the system itself (heat storage and flexible, market-independent boilers), so the costs are the same for both approaches.
If we drop the assumption that the units can obtain the day-ahead market prices without bidding, the benefits of the stochastic program increase further. Table \ref{tab:vss-bidding} shows the results when the bidding behaviour is modelled using bidding curves. Then, even for Middelfart, some benefits can be achieved. The modelling of bidding curves relies on the scenarios for electricity prices, i.e., for each price scenario a single price and quantity are determined. In the expected value setting, only one scenario is present, hence only one bid is given. The distinction between several production levels enables us to achieve more successful bids and, consequently, higher profits from the electricity market, which reduces the overall costs. In case H-10-168, the electricity prices are so favorable that the stochastic program produces more heat than needed to create additional income from the market.
When comparing Table \ref{tab:vss-nobidding} and Table \ref{tab:vss-bidding}, we see that constructing bidding curves leads to the stochastic program achieving lower in-sample costs in all cases. The bidding curves allow us to distinguish different production levels, while the operational model without bidding curves has to determine one level of operation for all units with commitment decisions and periods.
In both tables, seasonal variations can be observed. The cost per MWh is lower in summer, when the heat demand is lower and cheaper units suffice to cover the demand. Depending on the cost structure in the respective DH system, this leads to a different model behaviour across seasons: In Middelfart and Br{\o}nderslev, market-independent units tend to yield the lowest cost and thus, trading volumes and the per-MWh cost of heat are higher during winter, when market-dependent units need to be active to cover the higher heat demand. In Hiller{\o}d, on the other hand, the market-dependent CHP unit tends to be a low-cost heat generator throughout the entire year. In October 2021, electricity prices rose and therefore the heat cost per MWh can be lower due to extended electricity sales at higher prices.
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\columnwidth}
\centering
\includegraphics[width=0.95\columnwidth]{oos1-no-bidding.pdf}
\caption{Without bidding}
\label{fig:oos-no-bid}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\columnwidth}
\centering
\includegraphics[width=0.95\columnwidth]{oos1-bidding.pdf}
\caption{With bidding}
\label{fig:oos-bid}
\end{subfigure}
\caption{Out-of-sample performance for set 1. Values are the difference between exp. value approach and stochastic programming in \%. Plots contain the values of the 30 sampled realizations of uncertainty.}
\label{fig:oos}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.49\columnwidth}
\centering
\includegraphics[width=0.95\columnwidth]{oos2-no-bidding.pdf}
\caption{Without bidding}
\label{fig:oos2-no-bid}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\columnwidth}
\centering
\includegraphics[width=0.95\columnwidth]{oos2-bidding.pdf}
\caption{With bidding}
\label{fig:oos2-bid}
\end{subfigure}
\caption{Out-of-sample performance for set 2. Values are the difference between exp. value approach and stochastic programming in \%. The plots contain the values of the 30 sampled realizations of uncertainty.}
\label{fig:oos2}
\end{figure}
Since the VSS only uses scenarios that were already considered in the model, we additionally perform an out-of-sample evaluation. Here, the first-stage decisions are evaluated using new samples that were not used in the initial optimization to account for robust performance in unseen cases. For that purpose, we generate two sample sets:
\begin{enumerate}
\item In the first set, we sample the uncertain parameters from a triangular distribution for each hour of the week. The parameters of the triangular distribution are specific for each hour to account for different patterns during the day and week. We estimate the parameters using the values from the same hour during the week in the three historic weeks used in the scenarios. We subtract/add an additional 5\% from the lowest/to the highest value to also allow slightly lower/higher values in the sampling.
The set contains 30 weekly samples per case.
\item In the second set, we use block bootstrapping with historic data from the two weeks prior and the two weeks after the three weeks used in the scenarios, i.e., four weeks in total. Afterwards, we split the data into blocks of four hours to capture some temporal dependency. To create new cases, we sample six blocks of four hours per day. Each block is chosen randomly from the available blocks, but the selection is limited to the same time of the day and distinguishes between weekday and weekend. The set contains 30 weekly samples per case, 15 cases sampled from the first two weeks and 15 cases from the latter two weeks. (A sketch of both sampling schemes follows this list.)
\end{enumerate}
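As referenced above, the following sketch outlines both sampling schemes under stated assumptions: \texttt{hist} holds the three historic weekly profiles with shape (3, 168), \texttt{hist4w} holds the four bootstrap weeks as daily profiles with shape (28, 24), and the weekday/weekend distinction is omitted for brevity.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def triangular_week(hist):
    # hourly triangular parameters from the three historic weeks,
    # widened by 5% at both ends
    lo, hi = hist.min(axis=0), hist.max(axis=0)
    lo, hi = lo - 0.05 * np.abs(lo), hi + 0.05 * np.abs(hi)
    mode = np.clip(hist.mean(axis=0), lo, hi)
    return rng.triangular(lo, mode, hi)

def block_bootstrap_week(hist4w):
    # six four-hour blocks per day, drawn from the same time of day
    week = np.empty(7 * 24)
    for d in range(7):
        for b in range(6):
            src_day = rng.integers(0, hist4w.shape[0])
            week[d*24 + b*4 : d*24 + (b+1)*4] = hist4w[src_day, b*4:(b+1)*4]
    return week
\end{verbatim}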
The comparison of the expected value model and stochastic model for sets 1 and 2 is presented in Figures \ref{fig:oos} and \ref{fig:oos2}, respectively. The shown data points are the savings of the stochastic program compared to the expected value model (in \%), i.e., positive values mean that the stochastic programming solution outperformed the expected value solution.
The results confirm the observations from the VSS results, i.e., in particular Brønderslev benefits from using a stochastic program and the benefits increase when considering bidding curves. When no bidding is considered, the expected value approach outperforms the stochastic program in some cases, but on average the stochastic program leads to better results. When bidding is considered, the stochastic program clearly outperforms the expected value approach. Based on the out-of-sample evaluation, we can conclude that the stochastic program outperforms an expected value approach. In DH systems with less interaction on the market and no uncertain production (such as Middelfart), a deterministic model and a simple point forecast might be sufficient, while in other cases the stochastic program should be utilized. Note that the scenario generation used in this paper is very simple. Using scenarios created from proper probabilistic forecasting techniques will increase the benefit of using the stochastic program in many cases.
\subsection{Evaluation on real data with rolling horizon}\label{sec:rc}
Next, we evaluate the operational planning in a rolling horizon setting for the same 9 scenarios as previously. Afterwards, the first-stage solutions are evaluated on the realization of the uncertain data. The rolling horizon approach applies a sliding window with a length of 168h and we move the window by 24h after each iteration, i.e., the non-anticipativity period for the first-stage decisions is 24h. We consider 168h in the model to account for storage behaviour (as shown in \cite{hurb}). In the evaluation, the storage levels and unit statuses at the end of each day are determined using the observed data and applied as input for the next optimization. Thus, we mimic daily planning in practice. In total, we optimize for 2 weeks, i.e., 14 iterations, and the planning horizon reduces from 168h in a receding manner when approaching the end of the two weeks.
Table \ref{tab:rh} shows the total cost when applying the optimization results to the real observations over the 14 days. If we do not consider bidding curves, the stochastic programming solution can reduce the cost slightly in most cases. Only in one case (B-04-168), the expected value approach performs slightly better than the stochastic program for this specific realization of uncertain parameters. This can be explained by the runtime limitations, as shown in the next section. When considering bidding, the stochastic program reduces cost in all cases and can save up to 42.1\% of the cost in 14 days. The stochastic model performs particularly well for Brønderslev and Hillerød as already concluded in the previous section.
\begin{table}[t]
\footnotesize
\caption{Rolling horizon performance on real data. Difference between exp. value approach and stochastic programming evaluated over a period of 14 days with a planning horizon of 168h and a rolling window of 24h. }
\centering
\begin{adjustbox}{width=0.9\columnwidth}
\begin{tabular}{lrrrrrrrrr}\toprule
Case & \multicolumn{4}{c}{No bidding} & \multicolumn{4}{c}{Bidding}\\\cmidrule(r){2-5}\cmidrule(r){6-9}
& Exp. & Sto. & $\Delta$ & [\%] & Exp. & Sto. & $\Delta$ & [\%] \\ \midrule
B-01-168 & 257749.88 & 256180.52 & 1569.36 & 0.6\% & 321250.27 & 288560.77 & 32689.50 & 10.2\% \\
B-04-168 & 41257.64 & 41446.22 & -188.58 & -0.5\% & 80486.26 & 55179.63 & 25306.63 & 31.4\% \\
B-06-168 & 21740.36 & 15866.63 & 5873.72 & 27.0\% & 35248.22 & 20422.54 & 14825.68 & 42.1\% \\
H-04-168 & 605527.16 & 603757.54 & 1769.63 & 0.3\% & 531522.78 & 512361.63 & 19161.15 & 3.6\% \\
H-07-168 & -103030.15 & -107056.46 & 4026.31 & 3.9\% & -99279.96 & -127250.59 & 27970.63 & 28.2\% \\
H-10-168 & -1187638.56 & -1187638.56 & 0.00 & 0.0\% & -1199759.10 & -1379174.08 & 179414.98 & 15.0\% \\
M-05-168 & 26082.61 & 26069.20 & 13.41 & 0.1\% & 26082.61 & 26069.20 & 13.41 & 0.1\% \\
M-08-168 & 15532.20 & 15512.30 & 19.90 & 0.1\% & 14687.92 & 14688.02 & -0.10 & 0.0\% \\
M-12-168 & 73741.12 & 72524.16 & 1216.96 & 1.7\% & 72837.97 & 72581.06 & 256.91 & 0.4\% \\ \bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:rh}
\end{table}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.49\columnwidth}
\centering
\includegraphics[width=1.1\columnwidth]{TotalHeatFlowB-Jan-168-24-bid-sto.pdf}
\caption{Heat production}
\label{fig:b-heat}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\columnwidth}
\centering
\includegraphics[width=1.1\columnwidth]{TotalMarketB-Jan-168-24-bid-sto.pdf}
\caption{Electricity trading}
\label{fig:b-el}
\end{subfigure}
\caption{Optimized production for case B-01-168 with daily rolling horizon and 168h planning horizon.}
\label{fig:b-01-168}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.49\columnwidth}
\centering
\includegraphics[width=1.1\columnwidth]{TotalHeatFlowH-Oct-168-24-bid-sto.pdf}
\caption{Heat production}
\label{fig:h-heat}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\columnwidth}
\centering
\includegraphics[width=1.1\columnwidth]{TotalMarketH-Oct-168-24-bid-sto.pdf}
\caption{Electricity trading}
\label{fig:h-el}
\end{subfigure}
\caption{Optimized production for case H-10-168 with daily rolling horizon and 168h planning horizon.}
\label{fig:h-04-168}
\end{figure}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.49\columnwidth}
\centering
\includegraphics[width=1.1\columnwidth]{TotalHeatFlowM-Dec-168-24-bid-sto.pdf}
\caption{Heat production}
\label{fig:m-heat}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\columnwidth}
\centering
\includegraphics[width=1.1\columnwidth]{TotalMarketM-Dec-168-24-bid-sto.pdf}
\caption{Electricity trading}
\label{fig:m-el}
\end{subfigure}
\caption{Optimized production for case M-12-168 with daily rolling horizon and 168h planning horizon.}
\label{fig:m-12-168}
\end{figure}
Figures \ref{fig:b-01-168}, \ref{fig:h-04-168} and \ref{fig:m-12-168} show the heat production and electricity trading for 14 days for one case in Brønderslev, Hillerød and Middelfart, respectively. The results are based on the model using bidding curves.
In Brønderslev (Fig. \ref{fig:b-01-168}), the heat pumps (WCHP) with the ORC unit provide the base load for heat production. The remaining heat demand is covered either by the gas boiler (GB) or the CHP (CHP1-7) units. The CHP units fill the storage unit in hours where electricity is sold and the storage outflow covers the heat demand in the succeeding hours. On the third day, the electric boiler (EB) wins a bid, since the electricity price is very low (see Fig. \ref{fig:b-el}). The CHP units win bids and produce in hours with high electricity prices. The electricity needed for the heat pumps is either won through bids on the day-ahead market (Cons. DA) or procured by creating imbalances (Cons. BM) (see Fig. \ref{fig:b-el}), since the combination of heat pumps and the ORC unit is very cost-efficient.
In Hillerød (Fig. \ref{fig:h-04-168}), the CHP unit also exploits high-price hours to produce heat and fill the storage. The remaining heat demand is either covered by the wood chip boiler (WCB), waste heat (eWH), solar heat (eSOL) or the ORC unit (ORC1,2). The gas boilers are used only in exceptional cases due to their high operational cost.
In Middelfart (Fig. \ref{fig:m-12-168}), the CHP1 and GB1 units are never used, as they are too expensive. Only CHP2 wins production bids in some hours with high prices. The main heat production in Middelfart is achieved using the wood chip boiler (WCB) and wood pellet boiler (WPB). When no CHP bids are successful, peak demand is covered by the gas boiler (GB).
\subsection{Runtimes}
\begin{figure}[ht!]
\centering
\begin{subfigure}[b]{0.49\columnwidth}
\centering
\includegraphics[width=0.95\columnwidth]{runtimes-no-bidding.pdf}
\caption{No bidding}
\label{fig:runtime-nobidding}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\columnwidth}
\centering
\includegraphics[width=0.95\columnwidth]{runtimes-bidding.pdf}
\caption{Bidding}
\label{fig:runtime-bidding}
\end{subfigure}
\caption{Computational runtime for each case and all 14 iterations of the rolling horizon. Timeout is 600 sec.}
\label{fig:runtime}
\end{figure}
The runtimes for solving the model in each iteration of the receding horizon algorithm presented in the previous Section \ref{sec:rc} are given in Figure \ref{fig:runtime}. This means each box plot is based on 14 values. The time-out per iteration is 600 seconds and is only hit once, in case B-04-168. Brønderslev is by far the most complex network of the three. For the cases in Hillerød and Middelfart, each model is solved in less than 135 seconds and on average in less than 35 seconds. For the iteration where the time-out of 600 seconds is reached, the remaining gap is 0.0003 (0.03\%), close to the default MIP-gap cutoff of Gurobi (0.0001). Note that B-04-168 is also the case in Table \ref{tab:rh} where the stochastic program did not outperform the expected value approach.
Based on these results, we deem the runtime as short enough to be used in practice, since the optimization is carried out only once a day (for day-ahead market optimization). Even if the generic formulation is used for intra-day optimization, runtimes less than 10 minutes are fast enough.
\subsection{Long-term analysis} \label{subsec:long-term-analysis}
In the last analysis, we apply the model formulation to evaluate the performance of the DH systems in the long term. For this, we use the cases M-03-6936, B-10-6936 and H-02-5808 that include historical data for more than seven months. We solve the entire planning horizon as a deterministic case with the assumption that the day-ahead market can be used without bidding.
The distribution of the heat production among the units (per month and in total) for each DH system is shown in Figure \ref{fig:longterm}. Objective values, runtimes and RES shares are presented in Table \ref{tab:longterm}. The results put the already observed operational patterns presented in Section \ref{sec:rc} in a yearly context and confirm the observations.
In Brønderslev (Fig. \ref{fig:b-total}), the major share of the heat demand is covered by the ORC unit and WCHPs (which we consider as RES). The CHP units and the electric boiler are mostly used during winter where the heat demand is higher. The share of RES in heat production is more than 78\% when considering the WCHPs as RES. In Middelfart (Fig. \ref{fig:m-total}), the heat demand during summer can be covered by the wood chip boiler (WCB). In winter, the wood pellet boiler (WPB) is used more extensively. CHP2 only operates occasionally in case of high electricity prices. Here, the share of RES in heat production (WCB+WPB) is even higher at 88\%.
In Hillerød (Fig. \ref{fig:h-total}), the operation strategy shifts between summer and winter. In the colder months, the ORC unit and the wood chip boiler are used for a large share of the heat production in addition to the efficient CHP unit. This coincides with the increasing electricity prices in the second half of 2021. During summer, the system mostly relies on waste heat and the CHP unit, which can operate at favorable power prices (see also the representative week in Section \ref{sec:rc}). Although the CHP unit supports the electricity grid in hours with high prices with efficient co-generation of heat and power, it relies on natural gas. Therefore, the share of RES (ORC1,2, eSol, WCB, eWH) in Hillerød is low at 34\%. In case market conditions change, the Hillerød system can easily adapt: With increasing gas prices and/or higher emission taxes, the result is expected to change in favor of the ORC unit and WCB. Therefore, we also tested the operation with an increase in natural gas prices of 0.04 EUR/kWh by replacing the cost factor at the energy source for gas. The result is shown in Figure \ref{fig:hg-total}. Now, the production of the ORC unit and wood chip boiler (WCB) is more than doubled. Furthermore, it is less profitable to operate the CHP unit, since the prices on the electricity market are not high enough except in September and October. Thus, high demands are also covered by gas boilers. The share of RES increases from 34\% to 73\%, but the overall costs also increase drastically due to higher fuel costs and lower power market revenues.
The runtimes for the long-term models range from 150 seconds to 3760 seconds (slightly more than one hour), which is acceptable for such long planning horizons, since these kinds of analyses are only performed occasionally.
\begin{figure}[t]
\centering
\begin{subfigure}[b]{0.49\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{b-total.pdf}
\caption{B-10-6936}
\label{fig:b-total}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.49\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{m-total.pdf}
\caption{M-03-6936}
\label{fig:m-total}
\end{subfigure}
\begin{subfigure}[b]{0.49\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{h-total.pdf}
\caption{H-02-5808}
\label{fig:h-total}
\end{subfigure}
\begin{subfigure}[b]{0.49\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{hg-total.pdf}
\caption{H-02-5808 + 0.04[EUR/kWh] for gas}
\label{fig:hg-total}
\end{subfigure}
\caption{Distribution of heat production over the months and in total.}
\label{fig:longterm}
\end{figure}
\begin{table}[t]
\centering
\footnotesize
\caption{Objective values and runtimes for the long-term analysis}
\begin{adjustbox}{width=0.8\textwidth}
\begin{tabular}{lrrr}
\toprule
Case & Objective [EUR] & Runtime [s] & Share RES [\%]\\ \midrule
B-10-6936 & 1795156.73 & 412.55 & 78.61\% \\
M-03-6936 & 710407.53 & 156.94 & 88.18\% \\
H-02-5808 & 522125.61 & 3761.08 & 34.03\% \\
H-02-5808 + 0.04[EUR/kWh] for gas & 6219317.78 & 293.53 & 73.24\% \\\bottomrule
\end{tabular}
\end{adjustbox}
\label{tab:longterm}
\end{table}
If the observed $\gamma$-ray bursts originate at cosmological distances (e.g., Paczy{\'n}ski 1991)
then the distance
scale of their distribution must correspond to a redshift of order unity
for the burst intensity distribution to be consistent
with observations (e.g., Paczy{\'n}ski 1991;
Piran 1992; Mao and Paczy{\'n}ski 1992; Fenimore {\it et al. } 1992;
Wickramasinghe {\it et al. } 1993).
Assuming that the universe is homogeneous and free of bulk flows on such
scales the angular distribution of bursts should appear to be perfectly
isotropic to
an observer at rest with respect to the cosmic microwave background (CMB)
frame, up to statistical fluctuations.
However, the solar system is moving relative
to the CMB frame at a speed of $370\!\pm\!10$ km s$^{-1}$ in
the direction $(l,b)\!=\!(264.7\deg,48.2\deg)$ (Peebles 1993).
Consequently, the bursts' distribution should exhibit
a dipole component in that direction due to several effects:
abberation, anisotropic Doppler shift of the event rate, and the
angular variation in the distance out to which bursts can be detected.
We derive the amplitude of the expected dipole for a Friedmann
cosmological model, examine its dependence on various evolution rates
of the burst
population, and discuss its dependence on the luminosity function.
For sets of parameters which are consistent with the observed $\left<V/V_{max}\right>\,\,$
parameter we obtain a dipole amplitude of $\sim 10^{-2}$, an order of
magnitude larger than that of the CMB temperature dipole.
In \S{2} we derive the three effects which contribute to the
anisotropy, and evaluate the amplitude
of the dipole in \S{3}. In \S{4} we summarize the results and discuss
their implications.
\section{THE ANISOTROPY EFFECTS}
Denoting the dimensionless
velocity of the solar system relative to the CMB frame by
$\beta\!\equiv\!{v/c}$ and using the Lorentz
transformation it can be shown that photon
directions are related by
\begin{equation} \cos\theta = {\cos\tilde{\theta} + \beta \over 1 + \beta\cos\tilde{\theta}
} \,\,\,\,\,\,\,\, , \end{equation}
where $\tilde{\theta}$ and $\theta$ are the angles between our direction of motion and
the direction to a source in the CMB rest frame and in our frame, respectively.
Therefore, assuming an isotropic distribution of bursting objects in the CMB
frame, the angular distribution of sources as observed by us
should be proportional to
\begin{equation} {dN_s\over d\Omega} \, \propto \, f^{2}(\theta) \:\: , \end{equation}
where
\begin{equation} f(\theta) \equiv { \sqrt{1-\beta^2} \over 1 - \beta\cos\theta }
\, \simeq 1 + \delta(\theta) \,\,\,\,\, , \end{equation}
$\delta(\theta)\equiv \beta\cos\theta$, and the rightmost approximation
is due to $\beta\!\ll\!1$.
This is the effect of aberration, which makes the sources ``bunch up''
in the forward direction.
In addition, the burst frequency is Doppler shifted
by the factor $f(\theta)$, implying the highest event rate
in the direction of motion.
These effects are independent of the cosmological model and of
possible evolution of the burst population.
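To fix the scale of these two effects, note that $v\!=\!370$ km s$^{-1}$ corresponds to $\beta \simeq 370/(3.0\times 10^{5}) \simeq 1.2\times 10^{-3}$; aberration and the Doppler-shifted event rate together scale the observed rate by $f^{3}(\theta) \simeq 1+3\delta(\theta)$ (cf. eq. [9] below), i.e., they alone produce a dipole of amplitude $3\beta \simeq 3.7\times 10^{-3}$. The third effect, derived next, is the angular variation of the detection distance.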
The part of a burst's intrinsic luminosity which is
shifted into the detector bandwidth depends on the effective Doppler
shift. Thus, the number of detectable events also varies with direction.
In order to calculate this effect let us assume the following: 1)
a Friedmann cosmological model with vanishing cosmological constant. 2)
all the bursts have identical
power-law spectra for the photon number in the comoving frame,
$n_{\gamma}(E)\!\propto\! E^{-S}$. 3) The burst detector is sensitive to
photons in a fixed energy bandpass, $E_1\!\le\! E\!\le\! E_2$,
and is triggered by a peak flux higher than a given detection threshold
$F_{min}$ (the peak flux is proportional to the peak photon count rate).
We shall also assume that the bursts are ``standard candles'', and discuss
broader luminosity functions in \S{3}.
Let us define the luminosity of a source by
\begin{equation} L \equiv \int_{E_1}^{E_2}{E\, n_{\gamma}(E)\, dE} \,\,\,\,\, , \end{equation}
where $n_\gamma(E)$ is normalized accordingly. For a detector at rest relative
to the CMB frame the burst's
luminosity which is shifted into the detector bandwidth is
$(1+z)^{2-S}L$, where $z$ is the cosmological
redshift of the source in the CMB frame.
Dividing by $(1+z)^2$ for
the time dilation in the reception of photons and for the loss of
energy per photon, the peak flux at the observer is
\begin{equation} F(z) = {L\over 4\pi} {(1+z)^{-S} \over r^{2}(z) }
\,\,\,\,\,\,\, , \end{equation}
where $r$ is the proper motion distance to the source and is given by
\begin{equation} r(z) = {2c\over H_0} \, {\Omega_{0}z +
(2-\Omega_0)(1-\sqrt{1+\Omega_{0}z}) \over \Omega_{0}^{2} (1+z)}
\:\:\:\:\:. \end{equation}
Thus, in the CMB rest frame bursts with luminosity $L$ can be detected
out to a redshift ${\tilde{z}}_{max}$
which is defined by $F({\tilde{z}}_{max})\!=\!F_{min}$, where $F_{min}$ is the detection
threshold.
In our moving frame, the effective
Doppler shift of photons from a source which is located at a cosmological
redshift $z$ is $(1+z)/f(\theta)$. Therefore, a detector in our frame
can detect bursts out to a cosmological redshift $z_{max}$ which
is defined by
\begin{equation} { (1+{\tilde{z}}_{max})^{-S} \over r^{2}({\tilde{z}}_{max}) } \,
= \, { (1+z_{max})^{-S}\,\left[f(\theta)\right]^{S}
\over r^{2}(z_{max}) } \,\,\,\,\, . \end{equation}
Obviously, $z_{max}\!=\!z_{max}({\tilde{z}}_{max},\theta,S)$. Substituting
$z_{max}\!\equiv\! {\tilde{z}}_{max} + \Delta z$, replacing
$f^{S}$ by $1\!+\!S\delta(\theta)$ (see Eq. [3]),
and keeping terms up to first order
($\delta\!\ll1$ and consequently $\Delta{z}\!\ll\!{\tilde{z}}_{max}$), we obtain
\begin{equation} \Delta z = \left( {2\over r({\tilde{z}}_{max})}\left.
{dr\over dz} \right|_{{\tilde{z}}_{max}} \!\!\!\! + \, {S\over 1+{\tilde{z}}_{max}}
\, \right) S\delta(\theta) \,\,\,\,\,\,\,\,\, , \end{equation}
where $r(z)$ is given by equation (6).
Thus, the number of events that we should observe
in a solid angle $d\Omega$ is
\begin{equation} {dN(\theta) \over d\Omega} \propto
\left[ 1+\delta(\theta)\right]^3 \!\!\!\!\!
\int\limits_{0}^{{\tilde{z}}_{max}+\Delta{z(\theta)}} {\!\!\!\!\! {n_s(z)
\over (1+z)} \,
r^{2}(z) \, {dr\over dz} \, dz } \,
\,\,\,\,\,\, , \end{equation}
where $n_s(z)$ is the number of sources per comoving volume at cosmological
redshift $z$, and the factor $[1\!+\!\delta]^3$ is due to the effects of
aberration and the modified event rate that we discussed earlier.
The burst population may evolve between $z\!\sim\!{1}$ and
the present epoch. Let us assume that the comoving
source number density is given by
$n_s\!\propto\!(1+z)^{-\alpha}$, so $\alpha\!=\!0$ corresponds to a
constant rate of bursts per unit
comoving volume per unit comoving time,
and positive values of $\alpha$ describe an increase in the source
population, or equivalently in the intrinsic event rate, with time.
Thus, the integral in equation (9) can be denoted by $T(z_{max},\Omega_{0},\alpha)$,
where
\begin{equation} T(z,\Omega_{0},\alpha) \equiv
\int\limits_{0}^{z} {\!
{r^2 \over (1+z)^{1+\alpha}}\,{dr\over dz}\,dz} \,\,\,\,\,\,\, . \end{equation}
This integral can be evaluated analytically for certain values of $\alpha$,
e.g., for $\alpha\!\!=\!\!0$ ($n_s\!=\!{\rm constant}$) we
obtain
\begin{eqnarray} T(z,\Omega_{0}\!<\!1,0) &=&
{(2-\Omega_{0})\left[x-8x^{3}(1-\Omega_{0})\right]
\sqrt{1+4x^{2}(1-\Omega_{0})}\over 64 (\Omega_{0}-1)^2}
- {x^4\over 2} \nonumber \\ & & \nonumber \\
& &
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,
+\, {\Omega_{0} x^3\over 6(\Omega_{0}-1)} + \,
{(\Omega_{0}-2){\rm arcsinh}(2x\sqrt{1-\Omega_{0}}) \over 128 (1-\Omega_{0})^{5/2}} \,\,\, ;
\nonumber \\ & & \nonumber \\
T(z,\Omega_{0}\!=\!1,0) &=& {x^3\over 3} - {x^4\over 2} + {x^5\over 5}
\,\,\,\,\, ,
\end{eqnarray}
where $x\!\equiv\!r(z)H_{0}/2c$, and we have ignored the coefficient
$(2c/H_0)^3$ since equation (9) is a proportionality relation.
We may replace
$T({\tilde{z}}_{max}\!+\!\Delta{z})$ by $T({\tilde{z}}_{max})+ (dT/dz)|_{{\tilde{z}}_{max}}\Delta{z}$ due to
$\Delta{z}\!\!\ll\!{\tilde{z}}_{max}$. Thus,
substituting equation (8) for $\Delta{z}$, and using the definition of $\delta
(\theta)$, we obtain
\begin{equation} {dN(\theta)\over d{\Omega}} \, \propto\,
1\, +\, \left(3 + K\right)\! \beta\cos\theta \, + \, O(\beta^2)
\,\,\,\,\,\,\, , \end{equation}
where
\begin{equation} K({\tilde{z}}_{max},\Omega_{0},\alpha,S) \equiv
\left( {2S\over r({\tilde{z}}_{max})}\left.
{dr\over dz} \right|_{{\tilde{z}}_{max}} \!\!\!\! + \, {S^{2}\over 1+{\tilde{z}}_{max}}
\, \right)
{1\over T({\tilde{z}}_{max})} \left.{dT\over dz}\right|_{{\tilde{z}}_{max}} \,\,\,\,\,\, , \end{equation}
and $T$ is defined by equation (10).
Notice that the above results are
independent of the value of the Hubble constant.
In order to gain some insight we calculated the function $K$
for the case of $\alpha\!=\!0$ and
found it to be well fitted (to within a few percent) by
\begin{equation} K \, \simeq \, 6.7\, \left({S\over 2}\right)^{\!1.4} \!
\Omega_{0}^{-1/3} \, {\tilde{z}}_{max}^{\,-2.7}
\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, (\alpha\!=\!0) \end{equation}
in the range of parameters $\,1.0\!\le S\le\!2.5\, ,\: 0.2\!\le\!\Omega_{0}\!\le1,
\, $ and $1\!\le{\tilde{z}}_{max}\!\le2$.
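As a cross-check of this fit (a numerical illustration added here, not part
of the original derivation), $K$ can be evaluated directly from equations
(6), (10), and (13). The following sketch assumes $\alpha\!=\!0$, works in
units of $2c/H_{0}$ (i.e., with $x$ in place of $r$), and uses simple
trapezoidal quadrature and finite differences:
\begin{verbatim}
import numpy as np

# x = r H_0 / 2c as a function of z (eq. [6])
def x_of_z(z, Omega0=1.0):
    return (Omega0*z + (2.0 - Omega0)*(1.0 - np.sqrt(1.0 + Omega0*z))) \
           / (Omega0**2 * (1.0 + z))

# K from eqs. (10) and (13), by trapezoidal quadrature
def K_numeric(zmax, Omega0=1.0, S=2.0, alpha=0.0, n=4000):
    z = np.linspace(0.0, zmax, n)
    x = x_of_z(z, Omega0)
    dxdz = np.gradient(x, z)
    T = np.trapz(x**2 * dxdz / (1.0 + z)**(1.0 + alpha), z)
    dTdz = x[-1]**2 * dxdz[-1] / (1.0 + zmax)**(1.0 + alpha)
    return (2.0*S*dxdz[-1]/x[-1] + S**2/(1.0 + zmax)) * dTdz / T

# the fit of eq. (14)
def K_fit(zmax, Omega0=1.0, S=2.0):
    return 6.7 * (S/2.0)**1.4 * Omega0**(-1.0/3.0) * zmax**(-2.7)

print(K_numeric(1.0), K_fit(1.0))   # ~6.5 vs. 6.7
\end{verbatim}
At ${\tilde{z}}_{max}=1$, $\Omega_{0}=1$, $S=2$, the direct evaluation gives
$K\simeq 6.5$, in agreement with the fit to within a few percent.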
\section{THE DIPOLE AMPLITUDE}
The redshift out to which bursts are detected, ${\tilde{z}}_{max}\,$, is not a free
parameter. It is determined by the requirement that the burst intensity
distribution coincides with the observed one, namely, that the $\left<V/V_{max}\right>\,\,$
parameter equals the measured value. Since
$\left<V/V_{max}\right>_{BATSE}=0.330\pm0.016$ (Meegan {\it et al. } 1993), the BATSE instrument
can detect bursts out to a redshift ${\tilde{z}}_{max}$ which is determined by
\begin{equation} \langle {V\over V_{max}}\rangle \equiv
\, {1\over T({\tilde{z}}_{max},\Omega_{0},\alpha)} \int\limits_{0}^{{\tilde{z}}_{max}}{
{F^{3/2}_{min}\over F^{3/2}}
\, {r^2 \over (1+z)^{1+\alpha}} \, {dr\over dz}\, dz} \, \,= \, 0.330
\,\,\,\,\, , \end{equation}
where $T$ is defined in equation (10),
$F$ is given by equation (5), and $F_{min}\!\equiv\!{F({\tilde{z}}_{max})}$.
Thus, ${\tilde{z}}_{max}$ depends on $\alpha$, $S$, and $\Omega_{0}$.
There is a considerable diversity in the observed
spectra of $\gamma$-ray bursts, but the average
spectral index of the photon spectrum, $-S$, is somewhere between $-1.5$
and $-2$ (Schaefer {\it et al. } 1992), i.e., $1.5\lesssim S\lesssim 2$. Regarding the $\alpha$ parameter,
it is clear that
the population of cosmological $\gamma$-ray bursts may evolve with epoch but we have no
observational constraint on that.
Therefore, we shall examine the
cases of moderate ($\alpha\!\!=\!\! 1/2$) and rapid ($\alpha\!\!=\!\!1$) rates
of evolution, as well as the case of
a constant comoving event rate ($\alpha\!=\!0$).
Substituting $\beta\!=\!1.233\!\times10^{-3}$ in equation (12),
the amplitude of the dipole is
\begin{equation} \left[ 3.7 + 1.23K({\tilde{z}}_{max},\Omega_{0},\alpha,S)\right]\times 10^{-3}
\,\,\,\,\, , \end{equation}
where, for a given $\Omega_{0}$, $\alpha$,
and $S$, the value of ${\tilde{z}}_{max}$ is determined from
equation (15), and $K$ is evaluated using
equation (13). The results for various
combinations of parameters are shown in Table 1.
For reasonable sets of parameters the dipole amplitude is of order $10^{-2}$,
an order of magnitude larger than $\beta$,
and it is almost independent of the value of $\Omega_{0}$.
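As a numerical illustration (added here; it reuses the functions
\texttt{x\_of\_z} and \texttt{K\_numeric} from the sketch of \S{2} and
assumes $\alpha\!=\!0$), the chain of equations (15), (13), and (16) can be
evaluated by solving equation (15) for ${\tilde{z}}_{max}$ by bisection;
the result should approximately reproduce the first entry of Table 1:
\begin{verbatim}
import numpy as np
# (x_of_z and K_numeric as defined in the sketch of Sec. 2)

# <V/V_max> of eq. (15), with F(z) from eq. (5) up to constants
def V_over_Vmax(zmax, Omega0=1.0, S=2.0, alpha=0.0, n=4000):
    z = np.linspace(1e-6, zmax, n)
    x = x_of_z(z, Omega0)
    dxdz = np.gradient(x, z)
    F = (1.0 + z)**(-S) / x**2
    w = x**2 * dxdz / (1.0 + z)**(1.0 + alpha)
    return np.trapz((F[-1]/F)**1.5 * w, z) / np.trapz(w, z)

lo, hi = 0.1, 3.0            # bisection: <V/V_max> decreases with zmax
for _ in range(60):
    mid = 0.5*(lo + hi)
    lo, hi = (mid, hi) if V_over_Vmax(mid) > 0.330 else (lo, mid)
zt = 0.5*(lo + hi)
amp = (3.0 + K_numeric(zt)) * 1.233e-3      # eq. (16)
print(zt, amp)   # should be close to (1.02 ; 11.7e-3), cf. Table 1
\end{verbatim}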
At first sight it seems surprising that the dipole amplitude increases when
evolution of the burst population becomes significant (Table 1).
After all, a higher
value of $\alpha$ implies a smaller number of bursts originating within a
given range of redshift, $[z,z\!+\!\!\Delta{z}]$. However, an increasing rate
of
evolution also compels a {\it lower\/} ${\tilde{z}}_{max}$ since evolution replaces
some of the ``redshift effect'' which is required for a consistency with
the observed
$\left<V/V_{max}\right>$. Therefore, the proper volume within $[{\tilde{z}}_{max},{\tilde{z}}_{max}\!+\!\Delta
z]$, relative to the volume within $[0,{\tilde{z}}_{max}]$ is of order $3\Delta r/r$,
where $r\!=\!r({\tilde{z}}_{max})$ and $\Delta r\!=\!(dr/dz)|_{{\tilde{z}}_{max}}\Delta z$.
Thus, since $\Delta{r}/r\sim O(\beta)$, and the effect of evolution is of order
$\alpha\beta$, the net effect of an
increasing $\alpha$ is an increase in the fraction of detectable bursts,
as long as $\alpha\!\lesssim3$.
We should keep in mind that the possibility of a negative value of
$\alpha$, namely, a decrease in the comoving event rate with time,
cannot be excluded. In such a case the dipole amplitude will be lower,
and the redshift out to which bursts are detected
will be higher. We argue that $\alpha$ is unlikely to be negative and
large for the following reasons: 1) bursts at too high a redshift would
pose a severe difficulty for most progenitor models, e.g., the merging
neutron star model, since galaxies may
not have formed yet. 2) it would imply a strong correlation between
the brightness and the duration of bursts, which is not observed.
The assumption that all the bursts are ``standard candles'' may be adequate,
but a broader luminosity function cannot be excluded. In such a case, ${\tilde{z}}_{max}$
would depend on $L$ through equation (5), and an integration over the range
of possible luminosities, $\int{\!dL \,\Phi(L)\,}$, should be applied to the r.h.s. of
equations (9), (10), and (15). We argue that if the luminosity function
is falling with increasing luminosity, e.g., a power law distribution
($\Phi(L)\!\propto\! L^{-\gamma}$), then
the dipole amplitude will {\it increase}, the more so for a larger value of
$\gamma$. The reason for that is the following:
assuming ``standard candles'' implies that most of the observed bursts
originate at distances close to the boundary of the sphere of detectable
bursts. By contrast, in the case of a steeply falling luminosity function
the average distance to a burst may be considerably smaller.
Therefore,
from an argument similar to the one presented in the previous paragraph,
as well as from the apparently strong (inverse)
dependence of the dipole amplitude on the redshift out to which bursts
are detected (e.g., equation [14]), we conclude that replacing the ``standard
candle'' assumption by a falling luminosity function will tend to increase the
dipole amplitude. A detailed calculation for specific
luminosity functions is beyond the scope of the present study.
\section{CONCLUSION}
{\it Assuming\/} that $\gamma$-ray bursts originate at cosmological distances, we have shown
that three effects combine together to produce a dipole
anisotropy in the bursts' angular distribution. The dipole should point
in the direction of the solar
motion relative to the cosmic microwave background rest frame.
The amplitude of the predicted
dipole depends weakly on $\Omega_{0}$, but it is sensitive to the
spectral index of the photon
spectra, and to the rate of evolution of the burst population. It is
independent of the value of the Hubble constant. The maximum
redshift at which bursts can be detected is not a free parameter but is
constrained by
the requirement that the $\left<V/V_{max}\right>\,\,$ parameter be consistent with observations.
The dipole amplitude turns out to be of order $10^{-2}$ for various
combinations of parameters. This is an order of
magnitude larger than what one would expect since the solar
velocity with respect to the CMB is $370\, {\rm km\,
s^{-1}}$ ($\beta\simeq10^{-3}$).
Obviously, the sun is in motion relative to the Galaxy too, so one would
expect a similar effect even if bursts originate within an extended Galactic
halo (e.g., Fishman {\it et al. } 1978; Atteia and Hurley 1986; Maoz 1993). However,
in this case
the amplitude of the predicted
anisotropy ($<\!1\%$) is negligible relative to the
uncertainties in our understanding of the exact shape of the halo.
It is only within the cosmological origin hypothesis that the
dipole due to the solar motion is of practical interest.
The predicted dipole cannot provide a strong test of the hypothesis of
a cosmological origin of $\gamma$-ray bursts until a sample of the order of $10^{4}$
bursts is established. The sky exposure map will also have to be complete to
a sufficient accuracy. In the near future, being aware of the expected dipole,
rather than testing the consistency of the
data with a perfectly isotropic distribution,
will enable statistical analyses to derive a more reliable
significance for their results.
I wish to thank Avi Loeb, Ramesh Narayan, and Tsvi Piran for comments.
This work was supported by the U.S. National Science Foundation, grant
PHY-91-06678.
\newpage
\doublespace
\vspace{0.9in}
{\bf TABLE 1.} The Dipole Amplitude
\vspace{0.2in}
\begin{tabular}{l|cc|cc|}
source &\multicolumn{2}{c|}{$\Omega_{0}=1$}
& \multicolumn{2}{c|}{$\Omega_{0}=0.3$} \\ \cline{2-3} \cline{4-5}
evolution& $S=2$ &\multicolumn{1}{c|}{$S=1.5$} &$S=2$ &$S=1.5$ \\ \hline
$n_s=$ constant
&$(1.02\, ;\, 11.7\!\times\! 10^{-3})$
&$(1.30\, ;\, 6.4\!\times\! 10^{-3})$
&$(1.22\, ;\, 11.2\!\times\! 10^{-3})$
&$(1.65\, ;\, 6.1\!\times\! 10^{-3})$
\\
$n_s\propto (1\!+\!z)^{-1/2}$
&$(0.86\, ;\, 14.3\!\times\! 10^{-3})$
&$(1.06\, ;\, 7.8\!\times\! 10^{-3})$
&$(0.99\, ;\, 13.9\!\times\! 10^{-3})$
&$(1.27\, ;\, 7.6\!\times\! 10^{-3})$
\\
$n_s\propto (1\!+\!z)^{-1}$
&$(0.75\, ;\, 17.4\!\times\! 10^{-3})$
&$(0.90\, ;\, 9.4\!\times\! 10^{-3})$
&$(0.83\, ;\, 18.0\!\times\! 10^{-3})$
&$(1.02\, ;\, 9.6\!\times\! 10^{-3})$
\\
\end{tabular}
\vspace{0.3in}
{\bf Table 1} - The redshift out to which bursts can be detected, ${\tilde{z}}_{max}$,
and the amplitude of the dipole component
(equation [16]), evaluated for several combinations of the photon
spectral index, $S$, the cosmological density parameter, $\Omega_{0}$, and the
rate of evolution of the burst population. In general, ${\tilde{z}}_{max}\!\simeq\!1$ and
the dipole amplitude is of order $10^{-2}$. The dependence on the various
parameters is discussed in \S{3}.
\newpage
\section{Introduction}
After the struggles to understand the still mysterious
M theory, Matrix theory \cite{r:BFSS} emerged as the most successful
candidate to describe the eleven dimensional theory.
Although it has already passed many nontrivial
tests, there remains nontrivial issues which
needs careful examinations. One of such issues is
the Lorentz invariance. Because of its very definition,
Matrix theory needs the extra information to understand
eleventh dimension (so called ``M''-direction).
Although there are some beautiful works \cite{r:PP} which
suggest the symmetry by using
2+1 dimensional instanton calculus,
it is still desirable to have a direct confirmation.
The situation is essentially different in its close cousin,
de Wit-Hoppe-Nicolai (dWHN) supermembrane\cite{r:dWHN}.\footnote{
For a detailed information on the supermembrane theory,
see \cite{r:TD} and references therein.}
Although the difference between the two theories lies simply in
their gauge groups (SU(N) vs the area preserving diffeomorphisms (APD)),
for the supermembrane we have an explicit definition of the Lorentz generators \cite{r:dWMN},
and the Lorentz algebra itself was already checked explicitly
\cite{r:Mel}\cite{r:EMM}.
In this letter, we examine the supermembrane in the
toroidally compactified spacetime.
In section two, we propose Lorentz invariant form of the
SUSY algebra with the central charges associated with
membranes.
In section three, we derive the APD constraints
associated with the harmonic vector fields
which play a central role in the analysis of the BPS conditions.
In sections four and five, we give equations that characterize
BPS states with $1/2$ and $1/4$ SUSY.
Examination of the latter gives a system of
the first order differential
equations which is analogous to the Bogomol'nyi bound
of the super Yang-Mills
theory. We show that a particular solution gives
the BPS states of the type IIA superstring after
the double dimensional reduction.
Finally in section six we discuss how our results may be
extended to the matrix formulation of M-theory.
\section{Eleven Dimensional SUSY algebra
of Su\-per\-membrane and BPS condition}
Let us first examine the SUSY algebra of dWHN model.
We use the same notations and definitions
as in \cite{r:dWMN} in the following computation.
In particular the expression of supercharges is given by:
\begin{eqnarray}
Q^{+}& = &
\frac{1}{\sqrt{P_{0}^{+}}}\int d^{2}\sigma
\left(P^{a}\gamma_{a}+\frac{\sqrt{w}}{2}\{X^{a},X^{b}\}
\gamma_{ab}\right)\theta,
\nonumber\\
Q^{-}& = &
\sqrt{P_{0}^{+}}\int d^{2}\sigma\sqrt{w}\theta,
\end{eqnarray}
where $\{A,B\}\equiv\frac{\epsilon^{rs}}{\sqrt{w}}
\partial_{r}A\partial_{s}B$ $(r,s=1,2)$. Using the Dirac brackets:
$$
\left(X^{a}(\sigma),P^{b}(\rho)\right)_{DB}
=\delta^{ab}\delta^{(2)}(\sigma,\rho), \qquad
\left(\theta_{\alpha}(\sigma),\theta_{\beta}(\rho)\right)_{DB}=
-\frac{i}{\sqrt{w(\sigma)}}\delta_{\alpha\beta}\delta^{(2)}
(\sigma,\rho),
$$
the SUSY algebra of dWHN model is computed as follows
\cite{r:dWHN}(see also \cite{r:BSS}),
\begin{eqnarray}
i\left(Q_\alpha^-, Q_\beta^-\right)_{DB}
& = & \delta_{\alpha\beta}P_0^+,\nonumber\\
i\left(Q_\alpha^-,Q_\beta^+\right)_{DB}
& = & P_0^a(\gamma_a)_{\alpha\beta}
+\frac{1}{2} z^{ab}(\gamma_{ab})_{\alpha\beta},\nonumber\\
i\left(Q_\alpha^+,Q_\beta^+\right)_{DB}
& = & 2\delta_{\alpha\beta} H + 2z^a(\gamma_a)_{\alpha\beta}
+\frac{2}{4!}z^{abcd}(\gamma_{abcd})_{\alpha\beta}.
\label{e:SUSY}
\end{eqnarray}
The brane charges which appear in the right hand side
of these equations are defined by
\begin{eqnarray}
z^{ab} & = & -\int d^2 \sigma \sqrt{w}\left\{
X^a, X^b\right\},\label{e:t2}\\
z^a & = & \frac{1}{P_0^+}\int d^2\sigma\left(
\left\{ X^a, X^b\right\}P_b - \frac{i}{2}\sqrt{w}
\left\{X^a,\theta^\alpha\right\}\theta^\alpha\right)
\nonumber\\
& & -\frac{3i}{16P^+_0}\int d^2\sigma\sqrt{w}\left\{
X^c,\theta\gamma^{ac}\theta\right\},\label{e:l2}\\
z^{abcd} &=& -\frac{12}{P_0^+}\int d^2\sigma\sqrt{w}
\left\{X^{\left[a\right.},X^b\right\}\left\{
X^c,X^{\left.d\right]}\right\}-\frac{i}{4P^+_0}
\int d^2 \sigma \sqrt{w}\left\{ X^{\left[ a\right.},
\theta\gamma^{\left.bcd\right]}\theta\right\}\label{e:l4}.
\end{eqnarray}
The second term in \eq{e:l2}
and the second term in \eq{e:l4} should
vanish as we already
discussed in our previous paper \cite{r:EMM} (appendix F)
to make the supercharge well-defined.
The first term in \eq{e:l4} vanishes for the membrane
configuration. \eq{e:l4} should be
regarded as the longitudinal
five-brane charge, but it is
absent for the supermembrane.
Finally, the first term in \eq{e:l2} can be rewritten as
\begin{equation}
\int d^2 \sigma \sqrt{w} \left\{ X^-,X^a\right\}.
\end{equation}
It makes the SUSY algebra \eq{e:SUSY}
Lorentz invariant\footnote{
We understand the SUSY algebra in this form
was also derived
by de Wit et al.\ \cite{r:dWPP}.
We thank B. de Wit for sending
us a preliminary version of their paper.
Some parts of this paper
overlap with theirs, although this work was carried out
independently.}
In \cite{r:BSS}, the BPS conditions of the SUSY algebra
were discussed in the Matrix theory.
It is our purpose here
to reexamine the analysis for
the manifestly Lorentz
invariant form \eq{e:SUSY}.
We write the SUSY algebra in the matrix form,
\begin{eqnarray}
&&\left(
\begin{array}{cc}
i\left(Q^-,Q^-\right)_{DB} &
i\left(Q^-,Q^+\right)_{DB} \\
i\left(Q^+,Q^-\right)_{DB}&
i\left(Q^+,Q^+\right)_{DB}
\end{array}
\right)
=
\left(
\begin{array}{cc}
P_0^+ \cdot {I_{16}} & {\mbox{\bf P}}+{\mbox{\bf z}_2}\\
{\mbox{\bf P}}-{\mbox{\bf z}_2} & 2 H \cdot {I_{16}}+2{\mbox{\bf z}_1}
\end{array}
\right)
\nonumber\\
&&\mbox{\hspace*{0.2in}}=
\left(
\begin{array}{cc}
P_0^+ \cdot {I_{16}} & 0 \\
{\mbox{\bf P}}-{\mbox{\bf z}_2} & {I_{16}}
\end{array}
\right)
\cdot
\left(
\begin{array}{cc}
\frac{1}{P_0^+}{I_{16}} & 0\\
0 & \frac{1}{P_0^+}{\mbox{\sl m}}
\end{array}
\right)
\cdot
\left(
\begin{array}{cc}
P_0^+ \cdot{I_{16}} & {\mbox{\bf P}}+{\mbox{\bf z}_2} \\
0 & {I_{16}}
\end{array}
\right).
\label{e:scmat}
\end{eqnarray}
Here our notation is ${\mbox{\bf P}}=P^a_0\gamma_a$,
${\mbox{\bf z}_1}=z^a\gamma_a$, ${\mbox{\bf z}_2} = \frac{1}{2} z^{ab}\gamma_{ab}$.
The real symmetric matrix ${\mbox{\sl m}}$ is defined as,
\begin{eqnarray}
{\mbox{\sl m}} &=& 2P^+_0(H\cdot{I_{16}}+{\mbox{\bf z}_1})
-({\mbox{\bf P}}-{\mbox{\bf z}_2})({\mbox{\bf P}}+{\mbox{\bf z}_2})\nonumber\\
& = & (2P^+_0\cdot H -P^a_0 P^a_0-\frac{1}{2}z^{ab}z^{ab})
{I_{16}} + 2(P_0^+z^a - P_0^c z^{ca})\gamma_a\nonumber\\
& & + \frac{1}{4}z^{ab}z^{cd}\gamma^{abcd}.
\label{e:cm}
\end{eqnarray}
{}From \eq{e:scmat} we find that ${\mbox{\sl m}}$ is
positive semi-definite when the theory is quantized.
At this point, it is easy to observe that
the BPS condition of $1/2$ SUSY is simply ${\mbox{\sl m}}=0$
and that of $1/4$ SUSY is that ${\mbox{\sl m}}$ has rank 8.
We will analyze these conditions in detail in
sections 4 and 5.
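As a consistency check (an illustration added here, not part of the original
analysis), the identity and $\gamma_{a}$ parts of \eq{e:cm} can be verified
numerically with an explicit representation of the SO(9) Clifford algebra.
The sketch below uses a Hermitian Jordan-Wigner representation (rather than
the real symmetric one used in the text; the identity being checked is
representation independent) and extracts the coefficients by traces, using
${\rm tr}(\gamma_{a}\gamma_{b})=16\delta_{ab}$:
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
kron = lambda *m: reduce(np.kron, m)

# SO(9) gamma matrices (16x16, Hermitian), {g_a, g_b} = 2 delta_ab
gam = [kron(X, I2, I2, I2), kron(Y, I2, I2, I2),
       kron(Z, X, I2, I2), kron(Z, Y, I2, I2),
       kron(Z, Z, X, I2), kron(Z, Z, Y, I2),
       kron(Z, Z, Z, X), kron(Z, Z, Z, Y), kron(Z, Z, Z, Z)]

rng = np.random.default_rng(0)
P0p, H = rng.normal(), rng.normal()          # P_0^+, H
P = rng.normal(size=9)                       # P_0^a
za = rng.normal(size=9)                      # z^a
zab = rng.normal(size=(9, 9))
zab = zab - zab.T                            # z^{ab} antisymmetric

Pg = sum(P[a]*gam[a] for a in range(9))
z1 = sum(za[a]*gam[a] for a in range(9))
z2 = 0.25*sum(zab[a, b]*(gam[a] @ gam[b] - gam[b] @ gam[a])
              for a in range(9) for b in range(9))

m = 2*P0p*(H*np.eye(16) + z1) - (Pg - z2) @ (Pg + z2)

scalar = np.trace(m).real/16                 # identity part
vector = np.array([np.trace(m @ g).real/16 for g in gam])  # gamma_a part
assert np.isclose(scalar, 2*P0p*H - P @ P - 0.5*np.sum(zab*zab))
assert np.allclose(vector, 2*(P0p*za - P @ zab))
\end{verbatim}
The quartic $\gamma_{abcd}$ term in \eq{e:cm} is traceless and
trace-orthogonal to the identity and to $\gamma_{a}$, so it does not
contaminate the extraction.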
\section{Constraint from APD}
Associated with the gauge symmetry in the 0+1 dimensional
Yang-Mills system,
the Gauss law constraint of the dWHN model is given by
\begin{eqnarray}
\varphi(\sigma) & = & -\left\{\frac{P^a}{\sqrt{w}}
, X^a\right\}-\frac{i}{2}\left\{ \theta,\theta\right\}
\approx 0\label{e:gauss1}\\
\varphi^{(\lambda)}
& = & \int d^2\sigma \epsilon^{rs}\phi^{(\lambda)}_r
\left( P^+_0\partial_s X^-+\frac{P^a}{\sqrt{w}}
\partial_s X^a +\frac{i}{2} \theta\partial_s\theta\right)
\approx 0.\label{e:gauss2}
\end{eqnarray}
The first constraint comes from the area preserving diffeomorphism
(APD) in the bulk. The second ones are associated with
the harmonic one-forms $\phi^{(\lambda)}_r$ where $\lambda=1,\cdots,
2g$ ($g$ is the genus of the surface). These two conditions
ensure the integrability of the definition of $X^-$,
\begin{equation}
\partial_r X^-(\sigma) = -\frac{1}{P_0^+}
\left( \frac{P^a}{\sqrt{w}}\partial_r X^a + \frac{i}{2}
\theta\partial_r\theta\right).
\end{equation}
When the target space has a toroidal topology,
\begin{equation}
X^a\sim X^a+2\pi R^a, \quad X^-\sim X^-+2\pi R,
\end{equation}
and the membrane
has certain winding numbers, the embedding coordinates and their
momenta can be expanded in terms of the eigenfunctions of the
Laplacian as follows,\footnote{
We normalize the harmonic one forms $\phi^{(\lambda)}_{r}$ as
$$
\oint_{C^{\lambda^{\prime}}}d\sigma^{r}\phi^{(\lambda)}_{r}
=\delta^{\lambda\lambda^{\prime}},
$$
where $C^{\lambda}$ ($\lambda=1,2,\ldots,2g$) comprise a basis
of the first homology class.}
\begin{eqnarray}
\partial_r X^a(\sigma) & = & 2\pi R^a\phi^{(\lambda)}_r n^{(\lambda)a} +
\sum_A X^a_A\partial _r Y^A(\sigma),\nonumber\\
\partial_r X^-(\sigma) & = & 2\pi R\phi^{(\lambda)}_r n^{(\lambda)} +
\sum_A X^-_A\partial _r Y^A(\sigma),\nonumber\\
P^a(\sigma) & = &P^{+}(\sigma)\frac{\partial}{\partial t}X^{a}(\sigma)
=\sqrt{w}\left(\frac{m^{a}}{R^a}+\sum_A P_A^a
Y^A(\sigma)\right),
\nonumber\\
P^+(\sigma) & = &\sqrt{w}P^+_0=\sqrt{w}\frac{m}{R},\nonumber\\
\theta^\alpha(\sigma) & = & \theta_0^{\alpha}+
\sum_A\theta^\alpha_A Y^A(\sigma).
\end{eqnarray}
Here $Y_A$ is the eigenfunction of the Laplacian with non-zero
eigenvalue, $\Delta Y_A = -\omega_A Y_A$, $\omega_A>0$.
$n^{(\lambda)a}$, $n^{(\lambda)}$, $m^{a}$ and $m$ are
integer-valued.
We plug the expansion into \eq{e:gauss2} to get
\begin{eqnarray}
\varphi^{(\lambda)}& = &2\pi f_{\lambda\lambda'0}
(m n^{(\lambda')}+m^a n^{(\lambda')a})+2\pi
\sum_{\lambda',B}{ f_{\lambda\lambda'}}^{B}R^a n^{(\lambda')a}
P^a_B\nonumber\\
&&+\sum_{AB}{f_{\lambda}}^{AB}(X^a_AP^a_B-\frac{i}{2}\theta_A\theta_B).
\end{eqnarray}
The structure constants are defined as
$$
f_{\lambda AB}=\int d^{2}\sigma\epsilon^{rs}\phi_{r}^{(\lambda)}
\partial_{s}Y_{A}Y_{B},\quad
f_{\lambda\lambda^{\prime}B}=\int d^{2}\sigma\epsilon^{rs}
\phi^{(\lambda)}_{r}\phi^{(\lambda^{\prime})}_{s}Y_{B}.
$$
In our analysis in the following sections,
we mainly take the topology of the membrane to be a two-torus.
If we pick the coordinate $\sigma^r$ to satisfy $\sigma^r\sim
\sigma^r+1$ ($r=1,2$),
the eigenfunction becomes $Y^A = e^{2\pi i (A_1\sigma^1
+ A_2\sigma^2)}$ with $A=(A_1, A_2)\neq (0,0)$, $A_i\in\mbox{{\bf Z}}$.
We write the expansion of the embedding function as,
\begin{eqnarray}
X^- & = & -\frac{R}{m} Ht + 2\pi Rn_{r}\sigma^{r}
+{\hat{X}}^-(\sigma)\nonumber\\
X^a & = & \frac{R}{m}\frac{m^a}{R^a} t
+2\pi R^a n^a_r\sigma^r+\hat{X^a}(\sigma),
\end{eqnarray}
where the periodic parts are given by
$
{\hat{X}}^\mu(\sigma) = \sum_A X^\mu_A Y^A,
$
and so on.
The central charges of the toroidal membrane are given as
\begin{eqnarray}
z^{ab} & = & -(2\pi)^2R^aR^b(n_1^an_2^b-n_2^an_1^b)\nonumber\\
z^a & = & (2\pi)^2R R^a(n_1 n_2^a-n_2 n_1^a).
\end{eqnarray}
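These expressions follow directly (a short check added here) from
\eq{e:t2} and $z^{a}=\int d^{2}\sigma\sqrt{w}\{X^{-},X^{a}\}$: the terms
involving the periodic parts are total derivatives and integrate to zero
over the torus, while the winding parts give, using
$\sqrt{w}\,\{\sigma^{1},\sigma^{2}\}=\epsilon^{rs}\partial_{r}\sigma^{1}
\partial_{s}\sigma^{2}=1$ and $\int d^{2}\sigma=1$,
\begin{equation}
z^{ab}=-(2\pi)^{2}R^{a}R^{b}\,\epsilon^{rs}n^{a}_{r}n^{b}_{s}
=-(2\pi)^{2}R^{a}R^{b}(n^{a}_{1}n^{b}_{2}-n^{a}_{2}n^{b}_{1}),
\end{equation}
and similarly for $z^{a}$.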
The constraint \eq{e:gauss2} is simplified as,
\begin{equation}
2\pi\varphi_r \equiv -\epsilon_{rs}\varphi^{(s)}
= 2\pi\left( m n_r + m^a n^a_r +i
\sum_{A\neq \vec 0} A_r (P^a_{-A}X^a_A+\frac{i}{2}\theta^\alpha_{-A}
\theta^\alpha_A)\right)\approx 0.
\label{eq:levelmatching}
\end{equation}
As we will see, this condition may be regarded as an analogue of
the level matching condition in string theory.
\section{BPS configuration with 1/2 SUSY}
In the following we will mainly consider
the case when the topology of the membrane
is a two-torus. In such a situation,
the last term in \eq{e:cm} vanishes (the winding two-form $z^{ab}$ is
then decomposable, so that $z^{[ab}z^{cd]}=0$), which facilitates
the analysis of the BPS condition.
By using the definition of
the invariant supermembrane mass ${\cal M}$:
\begin{equation}
{\cal M}^2=2P_0^+ \cdot H - P_0^a P_0^a,
\end{equation}
the BPS condition ${\mbox{\sl m}} = 0$ becomes\footnote{We point out that
${\cal M}^{2}$ is Lorentz invariant by virtue of $z^{a+}=0$.},
\begin{eqnarray}
{\cal M}^2 & = & \frac{1}{2}z^{ab}z^{ab}\nonumber\\
P_0^+ z^a - P^c_0z^{ca} & = & 0.
\label{e:BPS}
\end{eqnarray}
The second condition relates the winding in the
longitudinal direction to those
in the transverse dimensions.
Now we can discuss the relationship between (\ref{eq:levelmatching})
and the BPS condition \eq{e:BPS}. The first equation in \eq{e:BPS}
tells us that there is no nonzero mode contribution.
This implies
\begin{eqnarray}
X^a & = & \frac{R}{R^a}\frac{m^a}{m}t + 2\pi R^a
n^a_r\sigma^r ,\nonumber\\
X^- & = & -\frac{R}{m}Ht+2\pi Rn_r\sigma^r ,\nonumber\\
\theta^{\alpha} &=& \theta^{\alpha}_{0}.
\label{e:M-membrane}
\end{eqnarray}
The constraint \eq{eq:levelmatching} is reduced
to the following simple relation
\begin{equation}
\varphi_{r}^{(0)}\equiv mn_{r}+m^{a}n^{a}_{r}=0.
\end{equation}
Using this, the second equation of \eq{e:BPS} is rewritten as
\begin{equation}
\varphi_{1}^{(0)}n_{2}^{a}-\varphi_{2}^{(0)}n_{1}^{a}=0.
\end{equation}
We therefore conclude that, by virtue of the constraint
\eq{eq:levelmatching}, the 1/2 BPS condition \eq{e:BPS}
is automatically satisfied even for the membrane
wrapping in the M-direction (with no nonzero modes).
This suggests that \eq{eq:levelmatching}
plays an important role in showing the Lorentz
invariance of the supermembrane theory.
\section{BPS configurations with 1/4 SUSY}
Let us proceed to explore the equation of supermembrane
with one quarter SUSY.
As before we assume the toroidal topology of the supermembrane.
The BPS condition is that
the matrix ${\mbox{\sl m}}$ in \eq{e:cm} has rank 8.
Since $P_0^+ z^a - P^c_0z^{ca}$ is a constant vector,
we can always choose a nine-dimensional orthonormal basis
$(e^{(9)}_{a},e^{(i)}_{a})$ $(i=1,2,\ldots,8)$ with the property
\begin{equation}
P_0^+ z^a - P^c_0z^{ca}\propto e^{(9)a}.
\end{equation}
We will henceforth denote the components of a vector $V^{a}$ as
\begin{equation}
V^{9}=e^{(9)}_{a}V^{a},\quad
V^{i}=e^{(i)}_{a}V^{a}.
\end{equation}
In this frame, the BPS condition is equivalent to
\begin{equation}\label{e:bps4}
{\cal M}^2-\frac{1}{2} z^{ab}z^{ab} \mp 2
(P_0^+ z^9 - P^c_0z^{c9})=0.
\end{equation}
We introduce the notation,
$\nabla^a \equiv 2\pi R^a(n^a_1 \partial_2 - n^a_2\partial_1)$.
Various parts in the Hamiltonian are written
as,
\begin{eqnarray}
\left\{ X^a, X^b\right\} & = & -z^{ab}
+ \nabla^a \hat{X}^b-\nabla^b\hat{X}^a+
\left\{ \hat{X}^a,\hat{X}^b\right\},\nonumber\\
\left\{ X^a, P^b\right\} & = &
\nabla^a\hat{P}^b + \left\{
\hat{X}^a,\hat{P}^b\right\}.
\end{eqnarray}
In the following analysis, we assume the fermionic background
to vanish for simplicity.
The left hand side of \eq{e:bps4} becomes,
\begin{eqnarray}
&&\int d^2\sigma \left({\hat{P}}^a{\hat{P}}^a+\frac{1}{2}
(\nabla^a {\hat{X}}^b - \nabla^b {\hat{X}}^a+
\left\{{\hat{X}}^a,{\hat{X}}^b\right\})^2\right)
\mp 2\int d^2\sigma
{\hat{P}}^c\nabla^9 {\hat{X}}^c
\nonumber\\
&=&\int d^2\sigma \left[
\left({\hat{P}}^c\mp(\nabla^9{\hat{X}}^c-\nabla^c{\hat{X}}^9
+\left\{ {\hat{X}}^9,{\hat{X}}^c\right\})\right)^2\right.\nonumber\\
&&\qquad\quad\left.+\frac{1}{2}\left(\nabla^i{\hat{X}}^j-\nabla^j {\hat{X}}^i+
\left\{{\hat{X}}^i,{\hat{X}}^j\right\}\right)^2\right]\geq 0.
\end{eqnarray}
In deriving this equation,
we used the APD constraints \eq{e:gauss1} \eq{e:gauss2}.
The final expression becomes a sum of the
squares as expected.
The BPS condition for $1/4$ SUSY becomes,
\begin{eqnarray}
{\hat{P}}^9 & = & 0 ,\nonumber\\
{\hat{P}}^i& = & \pm\left(\left\{ X^9, X^i\right\} +z^{9i}\right) ,\nonumber\\
0& = & \left\{X^i,X^j\right\} +z^{ij}.
\end{eqnarray}
This equation\footnote{A similar problem was
approached by Becker, Becker and
Strominger \cite{r:BBS} in a slightly different context.}
is an analogue of the Bogomol'nyi bound
of the super Yang-Mills theory (see for example \cite{r:Hav}).
We note that, in the situation considered here, the following equation
holds
\begin{equation}
P^+_0 z^i-P^c_0 z^{ci} = 0.
\end{equation}
The SUSY generators which are not broken under such a configuration are,
\begin{eqnarray}
Q^{(\mp)} & \equiv & \Pi^{\mp}\left\{\sqrt{P^{+}_{0}}Q^{+}
-\frac{({\mbox{\bf P}}-{\mbox{\bf z}_2})}{\sqrt{P^{+}_{0}}}Q^{-}\right\}\nonumber\\
& = & \Pi^\mp \int d^2\sigma \left(
{\hat{P}}^a \gamma^a + \frac{1}{2} \left(
\left\{ X^a,X^b\right\} + z^{ab} \right)
\gamma_{ab}\right) {\hat{{\theta}}},
\end{eqnarray}
where $\Pi^\mp= (1\mp \gamma^9)/2$ are projection operators.
In fact these generators have vanishing Dirac brackets with canonical
variables, e.g.,
\begin{eqnarray}
\left(Q^{(\mp)}, \theta\right)_{DB} & = &
-i\Pi^\mp\left[ \mp {\hat{P}}^9+\left({\hat{P}}^i\mp \left(
\left\{X^9, X^i\right\}+z^{9i}\right)\right)\gamma_i
+\frac{1}{2}\left(\left\{X^i,X^j\right\}+z^{ij}\right)\gamma_{ij}\right]
\nonumber\\
&=&0.\label{e:Qtheta}
\end{eqnarray}
We remark that, in general, the right hand side of \eq{e:Qtheta} need
not be strictly zero.
It is sufficient to set it zero modulo APD gauge transformations.
This enables us to analyze the case of non-vanishing $\theta$.
As an illustration let us consider the following configuration:
\begin{eqnarray}
X^{9}&=&\frac{R}{R^{9}}\frac{m^{9}}{m}t+2\pi R^{9}n^{9}\sigma^{2},
\nonumber \\
X^{i}&=&\frac{R}{R^{i}}\frac{m^{i}}{m}t+2\pi R^{i}n^{i}\sigma^{1}
+\hat{X}^{i}(t,\sigma^{1}),
\nonumber \\
\theta^{\alpha}&=&\theta^{\alpha}_{0}+\hat{\theta}^{\alpha}(t,\sigma^{1}),
\nonumber \\
X^{-}&=&-\frac{R}{m}Ht+2\pi R(n\sigma^{1}+n^{\prime}\sigma^{2})
+\hat{X}^{-}(t,\sigma^{1}),
\nonumber \\
P^{+}&=&\frac{m}{R}.
\end{eqnarray}
This configuration has the central charges
\begin{equation}
z^{9}=4\pi^{2}RR^{9}nn^{9},\quad
z^{i}=-4\pi^{2}RR^{i}n^{\prime}n^{i}, \quad
z^{9i}=4\pi^{2}R^{9}R^{i}n^{9}n^{i},\quad
z^{ij}=0.
\end{equation}
The Gauss law constraint for this configuration
reduces to\footnote{The constraint in the bulk,
$\varphi(\sigma)\approx 0$, is automatically satisfied in this case.
}
\begin{eqnarray}
\varphi_{1}&=&mn+m^{i}n^{i}+\frac{1}{2\pi}\int d\sigma^{1}
(\hat{P}^{i}\partial_{1}\hat{X}^{i}+\frac{i}{2}\hat{\theta}
\partial_{1}\hat{\theta})\approx 0,
\nonumber \\
\varphi_{2}&=&mn^{\prime}+m^{9}n^{9}\approx 0.
\end{eqnarray}
The first equation is of the same form as the level-matching
condition of the closed superstring.
This is consistent with the fact that, after the double dimensional
reduction, 11D supermembrane reduces to 10D type IIA superstring
\cite{r:DHIS}.
The BPS condition is rewritten as
\begin{eqnarray}
\partial_{t}\hat{X}^{i}&=&\mp2\pi RR^{9}\frac{n^{9}}{m}
\partial_{1}\hat{X}^{i},\nonumber \\
\Pi^{\pm}\hat{\theta}&=&0.
\label{eq:DDR}
\end{eqnarray}
The second equation comes from the condition:
$(Q^{(\mp)},X^{a})_{DB}=0$ {\em mod APD}.
It shows that the fermion modes with
plus (minus) chirality are projected out.
Combined with equations of motion, the condition (\ref{eq:DDR})
picks up only the left(right)-handed modes
in the $\sigma^{1}$-direction.
These configurations are therefore understood as an
extension of the BPS configurations in the type IIA
superstring to 11D supermembrane.
\section{Discussion}
In this paper we investigated winding modes
of the supermembrane in the light cone gauge.
We have obtained the following results: (i) 1/2 SUSY is
achieved even if the membrane wraps around the longitudinal direction;
(ii) success in constructing such configurations is attributed to
the Gauss law constraint \eq{e:gauss2} associated with the harmonic
vector fields; (iii) we derived the first order differential equations
to characterize $1/4$ SUSY; (iv) we explicitly constructed string
BPS states from those of the membrane.
While the constraint \eq{e:gauss2} has been overlooked in the previous
analysis of M(atrix) theory, it may play an essential
role if our result is taken seriously.
Thus it may be useful to consider an extension of \eq{e:gauss2} to
M(atrix) theory. In the case of a toroidal supermembrane, we can
construct an obvious candidate:
\begin{eqnarray}
\varphi^{(1)}_{M}=2\pi mn^{2}+{\rm Tr}\left([q,X^{a}]P^{a}
-\frac{i}{2}[q,\theta^{\alpha}]\theta^{\alpha}\right),\nonumber\\
\varphi^{(2)}_{M}=-2\pi mn^{1}+{\rm Tr}\left([p,X^{a}]P^{a}
-\frac{i}{2}[p,\theta^{\alpha}]\theta^{\alpha}\right).
\label{eq:Mgauss2}
\end{eqnarray}
$X^{a}$, $P^{a}$ and $\theta^{\alpha}$ are now regarded as $m\times m$
matrix-valued and $(q,p)$ are the matrices
with the commutation relation $[q,p]=I$.
A candidate for the longitudinal membrane in M(atrix) theory is also
obtained if we replace $(\sigma^{1},\sigma^{2})$ in \eq{e:M-membrane}
by $(q,p)$.
One important point is that
the generalization of the $1/4$ condition to the M(atrix)
theory is straightforward. All we have to do is to replace
the APD bracket with the commutator.
We hope that our approach gives a new viewpoint on this
famous problem;
the relation with the BPS membrane states
in the supergravity theory
\cite{r:DS} will also be very interesting to explore.
\vskip 3mm
\noindent {\bf Acknowledgement: }
We would like to thank H. Nicolai and B. de Wit for communication.
We are also obliged to M. Ninomiya for encouragement.
\vskip 3mm
\section{Introduction}
\label{sec_introduction}
Self-gravitating systems such as globular clusters and galaxies can be
considered as a collection of $N$ stars in gravitational interaction
whose dynamics is described by the Hamilton equations of motion
\cite{bt}. In statistical mechanics, this situation is associated with
the microcanonical ensemble where the energy and the particle number
are fixed \cite{paddy,houches}. In a recent series of papers
\cite{prs,sc,lang,sic,chs,crrs,sich,sopik}, we have proposed to
consider a system of self-gravitating Brownian particles which are
subject, in addition to the gravitational force, to a friction and a
noise. Their dynamics is described by $N$-coupled stochastic Langevin
equations. In statistical mechanics, this situation is associated with
the canonical ensemble where the temperature and the particle number
are fixed \cite{chav}. In previous papers, we have considered a
mean-field approximation valid in a proper thermodynamic limit with
$N\rightarrow +\infty$ and, in order to simplify the problem, we have
studied a limit of strong friction $\xi\rightarrow +\infty$, or
equivalently a large time regime $t\gg
\xi^{-1}$. In these approximations, the problem is reduced to the
study of the Smoluchowski-Poisson (SP) system. We have also introduced
a generalized class of stochastic processes and kinetic equations in
which the diffusion coefficient (or the friction/mobility) is allowed
to depend on the concentration of particles
\cite{pre,physnext,pa,cban,lemou}. This can model microscopic
constraints (e.g. close packing) acting on the particles when their
local concentration becomes large. The evolution of the system is then
described by the generalized Smoluchowski-Poisson (GSP) system
involving a barotropic equation of state $p(\rho)$ specified by the
stochastic model. This mean-field nonlinear Fokker-Planck (MFNFP)
equation admits a Lyapunov functional, determined by the equation of
state, which can be interpreted as a generalized free energy
\cite{pre}. Thus, this model is associated with a notion of
(effective) ``generalized thermodynamics'' in $\mu$-space. In the
classical case where the diffusion coefficient is constant, we recover
an isothermal equation of state $p=\rho k_{B}T/m$ associated with the
Boltzmann free energy but more general equations of state can be
considered. Interestingly, the same type of drift-diffusion
equations are encountered in mathematical biology to describe the
chemotactic aggregation of bacterial populations, in relation with
the Keller-Segel model
\cite{murray,keller,jager}. The analogy between self-gravitating
Brownian particles and bacterial populations is developed in
\cite{crrs}.
Here, we propose to generalize these models so as to take into account
finite $N$ effects and inertial effects (finite friction $\xi$). We
thus propose a general kinetic and hydrodynamic description of
self-gravitating Brownian particles starting directly from a system of
$N$ coupled Langevin equations of motion with long-range attractive
interactions. We shall extend the technics developed in astrophysics
to our problem of self-gravitating Brownian particles. In particular,
we shall derive the appropriate expression of the Virial theorem for
these systems and study their linear dynamical stability by a method
similar to that developed by Eddington \cite{eddington} and Ledoux
\cite{ledoux} for barotropic stars described by the Euler
equations. We shall make the link between parabolic and hyperbolic
models by considering an intermediate model taking into account the
inertia of the particles as well as a friction force. The Euler
equations are recovered for $\xi=0$ and the Smoluchowski equation is
obtained in the limit $\xi\rightarrow +\infty$.
The paper is organized as follows. In Sec. \ref{sec_kin}, we
introduce general kinetic and hydrodynamic models of self-gravitating
Brownian particles starting from coupled Langevin equations. We derive
the $N$-body Fokker-Planck equation (subsection \ref{sec_nbody}), the
mean-field Kramers equation (subsection \ref{sec_kram}) and the
generalized mean-field Kramers equation (subsection \ref{sec_gkram}).
Then, we take the hydrodynamic moments of these equations and derive
the damped Jeans equations (subsection \ref{sec_jeans}) and the damped
barotropic Euler equations (subsection \ref{sec_eul}) by closing the
hierarchy of moments with a local thermodynamical equilibrium
(L.T.E.) hypothesis. We obtain the mean-field Smoluchowski equation in
a strong friction limit $\xi\rightarrow +\infty$ and, in subsection
\ref{sec_oak}, we derive the orbit-averaged Kramers equation in a weak
friction limit $\xi\rightarrow 0$. In Sec. \ref{sec_vith}, we
establish the general expression of the Virial theorem for
self-gravitating Brownian particles from the damped Jeans equations
(subsection \ref{sec_vj}) and from the damped Euler equations
(subsection \ref{sec_ve}). We also consider the effect of correlations
due to finite $N$ effects (subsection \ref{sec_corrv}). In
Sec. \ref{sec_dyn}, we study the linear dynamical stability of an
inhomogeneous stationary solution of the damped barotropic
Euler-Poisson system and investigate the effect of the friction
coefficient on the evolution of the perturbation. In Appendix
\ref{sec_pol} we give a short complement concerning the stability of
polytropic systems and in Appendix \ref{sec_exV} we derive the exact
expression of the Virial theorem starting directly from the stochastic
equations of motion. We show that the Virial theorem takes a very
simple form in dimensions $d=2$ and $d=4$ and analyze the consequences
of this simplification. In Appendix \ref{sec_hom}, we study the linear
dynamical stability of homogeneous stationary solutions of the damped
barotropic Euler equations (for an arbitrary potential of interaction)
and obtain a generalization of the Jeans instability criterion. In the
Conclusion, we discuss the different regimes in the evolution of
Hamiltonian and Brownian systems with long-range interactions \cite{chav}
distinguishing the phase of violent collisionless relaxation, the
collisional evolution due to finite $N$ effects and the
``collisional'' evolution due to {\it imposed} friction and stochastic
forces for Brownian systems.
\section{Kinetic and hydrodynamic models of
self-gravitating Brownian particles}
\label{sec_kin}
The Smoluchowski-Poisson system that has been extensively studied in
previous papers \cite{prs,sc,lang,sic,chs,crrs,sich,sopik} describes a
gas of self-gravitating Brownian particles in a mean-field
approximation (valid for $N\rightarrow +\infty$) and in a strong
friction limit $\xi\rightarrow +\infty$. In this section, we introduce
more general kinetic and hydrodynamic models of self-gravitating
Brownian particles that go beyond these approximations.
\subsection{The $N$-body Fokker-Planck equation}
\label{sec_nbody}
Basically, a system of self-gravitating Brownian particles is
described by the $N$ coupled stochastic equations
\begin{eqnarray}
\label{nb1}
{d{\bf r}_{i}\over dt}={\bf v}_{i},
\end{eqnarray}
\begin{eqnarray}
\label{nb2}
{d{\bf v}_{i}\over dt}=-\xi{\bf v}_{i}-m\nabla_{i}U({\bf
r}_{1},...,{\bf r}_{N})+\sqrt{2D}{\bf R}_{i}(t),
\end{eqnarray}
where $-\xi {\bf v}_{i}$ is a friction force and ${\bf R}_{i}(t)$
is a Gaussian white noise satisfying $\langle {\bf
R}_{i}(t)\rangle={\bf 0}$ and $\langle
{R}_{a,i}(t){R}_{b,j}(t')\rangle=\delta_{ij}\delta_{ab}\delta(t-t')$,
where $a,b=1,...,d$ refer to the coordinates of space and
$i,j=1,...,N$ to the particles. We define the inverse temperature
$\beta=1/k_B T$ through the Einstein relation $\xi=D\beta m$ (see
below). For $\xi=0$ and $D=0$, Eqs.~(\ref{nb1})-(\ref{nb2}) reduce
to the Newton-Hamilton equations of motion describing the ordinary
self-gravitating gas with a Hamiltonian
\begin{equation}
\label{nb3} H=\sum_{i=1}^{N}{1\over 2}m{v_{i}^{2}}+m^{2}U({\bf
r}_{1},...,{\bf r}_{N}),
\end{equation}
where $U({\bf r}_{1},...,{\bf r}_{N})=\sum_{i<j}u({\bf r}_{i}-{\bf
r}_{j})$ and $u({\bf r}_{i}-{\bf r}_{j})=-G/\lbrack (d-2)|{\bf
r}_{i}-{\bf r}_{j}|^{(d-2)}\rbrack$ denotes the gravitational
potential of interaction in $d$ dimensions ($u({\bf r}_{i}-{\bf
r}_{j})=G\ln |{\bf r}_{i}-{\bf r}_{j}|$ for $d=2$). In this paper, we
shall be particularly interested by the gravitational interaction, but
we stress that our formalism remains valid for a more general class of
binary potentials of interaction of the form $u(|{\bf r}_{i}-{\bf
r}_{j}|)$. The evolution of the $N$-body distribution function is
governed by the Fokker-Planck equation
\cite{chav}:
\begin{equation}
\label{nb4} {\partial P_{N}\over\partial t}+\sum_{i=1}^{N}\biggl
({\bf v}_{i}\cdot {\partial P_{N}\over\partial {\bf r}_{i}}+{\bf
F}_{i}\cdot {\partial P_{N}\over\partial {\bf v}_{i}}\biggr
)=\sum_{i=1}^{N} {\partial\over\partial {\bf v}_{i}}\cdot \biggl\lbrack
D{\partial P_{N}\over\partial {\bf v}_{i}}+\xi P_{N}{\bf
v}_{i}\biggr\rbrack.
\end{equation}
In the absence of friction and diffusion ($\xi=D=0$), it reduces
to the Liouville equation. The Liouville equation conserves the
energy $\langle E\rangle=\int P_{N}H
\prod_i d{\bf r}_{i}d{\bf v}_{i}$ and the Gibbs entropy
$S=-k_{B}\int P_{N}\ln P_{N}d{\bf r}_{1}d{\bf v}_{1}...d{\bf
r}_{N}d{\bf v}_{N}$ (more generally, any functional of $P_{N}$)
defined on $\Gamma$-space. This corresponds to a microcanonical
description. Alternatively, in the Brownian model, the temperature $T$
is fixed (instead of the energy) and the Fokker-Planck equation
(\ref{nb4}) decreases the Gibbs free energy $F=\langle
E\rangle-TS$ which can be written explicitly,
\begin{equation}
\label{freen} F[P_{N}]=\int P_{N}H
\prod_i d{\bf r}_{i}d{\bf v}_{i}+k_{B}T\int P_{N}\ln P_{N}\prod_i d{\bf r}_{i}d{\bf v}_{i}.
\end{equation}
This corresponds to a
canonical description. One has
\begin{equation}
\label{nb6}
\dot F=-\sum_{i=1}^{N}\int {1\over\xi P_{N}}\biggl (D{\partial
P_{N}\over\partial {\bf v}_{i}}+\xi P_{N}{\bf v}_{i}\biggr )^{2}
d{\bf r}_{1}d{\bf v}_{1}...d{\bf r}_{N}d{\bf v}_{N}\le 0.
\end{equation}
Therefore, the free energy plays the role of a Lyapunov functional for
the $N$-body Fokker-Planck equation. At equilibrium, $\dot F=0$
implying that the r.h.s. of Eq.~(\ref{nb4}) vanishes. The
l.h.s. (advective term) must also vanish independently. From these
two requirements we find that the
stationary solution of Eq.~(\ref{nb4}) is the canonical distribution
\footnote{In order to properly define a {\it strict} statistical
equilibrium state for self-gravitating systems, one has to introduce a
small-scale regularization otherwise $F[P_{N}]$ has no minimum and
Eq.~(\ref{nb7}) is not normalizable. Thus, in Eq.~(\ref{nb7}), it is
implicitly understood that $U$ is a regularized potential. Note that
{\it physical} statistical equilibrium states unaffected by the
small-scale regularization exist in the form of long-lived metastable
structures that are {\it local} minima of the Boltzmann mean-field
free energy $F_{B}[f]$ defined in Eq. (\ref{fexp}). We refer to
\cite{meta} for a physical discussion of these issues.}
\begin{equation}
\label{nb7} P_{N}({\bf r}_{1},{\bf v}_{1},...,{\bf r}_{N},{\bf
v}_{N})={1\over Z(\beta)}e^{-\beta m
(\sum_{i=1}^{N}{v_{i}^{2}\over 2}+m U({\bf r}_{1},...,{\bf
r}_{N}))},
\end{equation}
provided that the coefficients of diffusion and friction are connected
by the Einstein relation $\xi=D\beta m$. The partition function
$Z(\beta)$ is determined by the normalization condition $\int P_{N}
\prod_i d{\bf r}_{i}d{\bf v}_{i}=1$. The canonical distribution
(\ref{nb7}) minimizes the free energy $F$ at fixed particle
number. Introducing the reduced probability distributions
\begin{equation}
\label{nb8} P_{j}({\bf x}_{1},...,{\bf x}_{j})=\int P_{N}({\bf
x}_{1},...,{\bf x}_{N})\,d{\bf x}_{j+1}...d{\bf x}_{N},
\end{equation}
where ${\bf x}=({\bf r},{\bf v})$, we can readily write down a
hierarchy of equations for $P_{1}$, $P_{2}$ etc. The first
equation of the hierarchy is
\begin{eqnarray}
\label{nb9}\qquad {\partial P_{1}\over\partial t}+{\bf
v}_{1}\cdot {\partial P_{1}\over\partial {\bf r}_{1}}+(N-1)\int
{\bf F}(2\rightarrow 1)\cdot
{\partial P_{2}\over\partial {\bf v}_{1}}\,d{\bf r}_{2}d{\bf v}_{2}
={\partial\over\partial {\bf v}_{1}}\cdot \biggl\lbrack D{\partial
P_{1}\over\partial {\bf v}_{1}}+\xi P_{1}{\bf v}_{1}\biggr\rbrack,
\end{eqnarray}
where ${\bf F}(2\rightarrow 1)=-m\partial u_{12}/\partial {\bf r}_{1}=Gm ({\bf r}_{2}-{\bf r}_{1})/|{\bf r}_{2}-{\bf r}_{1}|^{d}$ is the force (by unit of mass) created
by particle $2$ on particle $1$. Note that this equation is exact
(i.e. valid for all $N$) and takes into account the correlations
between the particles encapsulated in the two-body distribution
function $P_2$. For $D=\xi=0$, we recover the first equation of the
BBGKY hierarchy. We shall give the form of the Virial theorem
associated to Eq.~(\ref{nb9}) in Sec. \ref{sec_corrv}. Before that, we
consider the mean-field limit of this equation valid for $N\rightarrow
+\infty$.
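We also note that the stochastic equations (\ref{nb1})-(\ref{nb2}) are
directly amenable to numerical integration. The following Euler-Maruyama
sketch (an illustration added here; it assumes $d=3$, units
$G=m=k_{B}=1$, and a softened potential $u=-G/(r^{2}+\epsilon^{2})^{1/2}$
playing the role of the small-scale regularization mentioned in the
footnote above) integrates the $N$ coupled Langevin equations:
\begin{verbatim}
import numpy as np

# Euler-Maruyama integration of the Langevin equations; d = 3 and
# G = m = k_B = 1, with a softened potential u = -G/(r^2 + eps^2)^{1/2}
N, d = 64, 3
xi, kT = 1.0, 0.1
D = xi*kT                       # Einstein relation xi = D beta m
eps, dt, nsteps = 0.05, 1e-3, 10000

rng = np.random.default_rng(1)
r = rng.normal(scale=0.5, size=(N, d))
v = np.zeros((N, d))

def force(r):                   # -m grad_i U (pairwise softened attraction)
    dr = r[:, None, :] - r[None, :, :]       # r_i - r_j
    s2 = (dr**2).sum(axis=-1) + eps**2
    np.fill_diagonal(s2, np.inf)
    return -(dr / s2[..., None]**1.5).sum(axis=1)

for _ in range(nsteps):
    v += (-xi*v + force(r))*dt + np.sqrt(2.0*D*dt)*rng.normal(size=(N, d))
    r += v*dt

print((v**2).mean())            # ~ kT per degree of freedom once relaxed
\end{verbatim}
Whether the system relaxes to a metastable equilibrium or undergoes
gravitational collapse depends on the temperature and on $N$ (see
\cite{meta}).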
\subsection{The mean-field Kramers equation}
\label{sec_kram}
In a properly defined thermodynamic limit \cite{chav}, we can show
that the cumulant of the two-body correlation function is of order
$1/N$. Therefore, for $N\rightarrow +\infty$, we can implement the
mean-field approximation
\begin{eqnarray}
\label{k1} P_{2}({\bf r}_{1},{\bf v}_{1},{\bf r}_{2},{\bf
v}_{2},t)=P_{1}({\bf r}_{1},{\bf v}_{1},t)P_{1}({\bf r}_{2},{\bf
v}_{2},t)+O(1/N).
\end{eqnarray}
Substituting this result in Eq.~(\ref{nb9}) and introducing the
distribution function $f=NmP_{1}$, we obtain the mean-field Kramers
equation
\begin{eqnarray}
\label{k2} {\partial f\over\partial t}+{\bf v}\cdot {\partial
f\over\partial {\bf r}}-\nabla\Phi \cdot {\partial f\over\partial
{\bf v}} ={\partial\over\partial {\bf v}}\cdot \biggl\lbrack
D{\partial f\over\partial {\bf v}}+\xi f{\bf v}\biggr\rbrack,
\end{eqnarray}
where $\Phi({\bf r},t)=\int u({\bf r}-{\bf r}') \rho({\bf r}',t)d{\bf
r}'$. This equation is non-local because the potential
$\Phi({\bf r},t)$ is induced by the density $\rho({\bf r},t)=\int f
d{\bf v}$ of the particles composing the whole system (it is not an
external potential). Thus, for self-gravitating systems,
Eq.~(\ref{k2}) has to be solved in conjunction with the Poisson
equation
\begin{eqnarray}
\Delta\Phi=S_{d}G\rho. \label{pois}
\end{eqnarray}
In the absence of friction and diffusion ($D=\xi=0$), the mean-field
Kramers equation reduces to the Vlasov equation \cite{bt}. The Vlasov
equation conserves the energy $E={1\over 2}\int f v^{2}d{\bf r}d{\bf
v}+{1\over 2}\int \rho\Phi d{\bf r}$ and the Boltzmann entropy
$S_{B}=-\int (f/m)\ln (f/m) d{\bf r}d{\bf v}$ (more generally all the
functionals of $f$ called the Casimirs) defined on
$\mu$-space. Alternatively, the Kramers-Poisson (KP) system
(\ref{k2})-(\ref{pois}) involves a fixed temperature and decreases the
Boltzmann free energy $F_{B}=E-TS_{B}$ which can be written explicitly
\begin{eqnarray}
F_{B}[f]={1\over 2}\int f v^{2}d{\bf r}d{\bf v}+{1\over 2}\int \rho\Phi d{\bf r}+k_{B}T \int {f\over m}\ln {f\over m} d{\bf r}d{\bf v}.\label{fexp}
\end{eqnarray}
Indeed, one has
\begin{eqnarray}
\dot F_{B}=-\int {1\over\xi f}\biggl (D{\partial
f\over\partial {\bf v}}+\xi f {\bf v}\biggr )^{2} d{\bf r}d{\bf
v}\le 0. \label{fdotk}
\end{eqnarray}
At equilibrium, $\dot F_{B}=0$ implying that the r.h.s. of Eq.~(\ref{k2})
vanishes. The l.h.s. (advective term) must also vanish
independently. From these two requirements, and using the Einstein
relation, we find that the stationary solutions of the Kramers-Poisson system (\ref{k2}) correspond to the mean-field Maxwell-Boltzmann distribution
\begin{eqnarray}
f=A e^{-\beta m ({v^{2}\over 2}+\Phi({\bf r}))} \label{mfmax}
\end{eqnarray}
which has to be solved in conjunction with the Poisson equation
(\ref{pois}). The stable mean-field Maxwell-Boltzmann distribution
minimizes the Boltzmann free energy $F_{B}[f]$ at fixed mass. It is
both thermodynamically stable (in the canonical ensemble) and linearly
dynamically stable with respect to the KP system \cite{pre}. We note
that the equilibrium one-body distribution function (\ref{mfmax}) can
be obtained from the $N$-body canonical distribution (\ref{nb7}) by
constructing an equilibrium BBGKY-like hierarchy and implementing a
mean-field approximation \cite{chav}. On the other hand, the Boltzmann
free energy (\ref{fexp}) can be deduced from the free energy of the
$N$-body system (\ref{freen}) by making the mean-field approximation
$P_{N}(1,...,N)=\prod_{i} P_{1}(i)$ \cite{chav}.
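For illustration (a numerical sketch added here, assuming $d=3$), the
density profile associated with Eqs.~(\ref{mfmax}) and (\ref{pois}) can be
obtained by recasting them in the classical Emden form: writing
$\rho=\rho_{0}e^{-\psi}$ with $\psi=\beta m(\Phi-\Phi_{0})$ and
$x=r/r_{0}$, $r_{0}^{2}=k_{B}T/(4\pi Gm\rho_{0})$, one gets
$\psi''+(2/x)\psi'=e^{-\psi}$ with $\psi(0)=\psi'(0)=0$:
\begin{verbatim}
import numpy as np

# Isothermal Emden equation: psi'' + (2/x) psi' = exp(-psi),
# psi(0) = psi'(0) = 0; rho(x)/rho_0 = exp(-psi)
h = 1e-3
x = np.arange(h, 50.0, h)
psi, dpsi = 0.0, 0.0
for xi_ in x:
    ddpsi = np.exp(-psi) - 2.0*dpsi/xi_
    dpsi += h*ddpsi
    psi += h*dpsi

print(np.exp(-psi)*x[-1]**2)   # -> ~2 : rho ~ 2 rho_0/x^2 at large x
\end{verbatim}
The density contrast approaches the singular isothermal profile
$\rho\simeq 2\rho_{0}/x^{2}$ at large $x$, with slowly damped oscillations.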
\subsection{The generalized non-local Kramers equation}
\label{sec_gkram}
For sake of generality, we shall consider the case where the diffusion
coefficient explicitly depends on the distribution function. Thus, in
Eq. (\ref{k2}), we set $D(f)=D f C''(f)$ where $C$ is a convex
function, i.e. $C''\ge 0$, and $D$ is a constant. In that case, we obtain
the generalized mean-field Kramers equation \cite{pre}:
\begin{eqnarray}
\label{k3} {\partial f\over\partial t}+{\bf v}\cdot {\partial
f\over\partial {\bf r}}-\nabla\Phi \cdot {\partial f\over\partial {\bf
v}} ={\partial\over\partial {\bf v}}\cdot \biggl\lbrack
DfC''(f){\partial f\over\partial {\bf v}}+\xi f{\bf
v}\biggr\rbrack.
\end{eqnarray}
This equation can be obtained in the mean-field limit of a generalized
$N$-body Fokker-Planck equation associated with a generalized class of
stochastic processes of the form (\ref{nb1})-(\ref{nb2}) where the
diffusion coefficient depends on $f({\bf r}_{i},{\bf v}_{i},t)$
\cite{chav}. For $C=f\ln f$, we recover the
classical Kramers equation (\ref{k2}). However, Eq.~(\ref{k3}) can
describe more general situations such as quantum particles with
exclusion or inclusion principles (fermions, bosons, quons), lattice
models, non-ideal effects etc... These generalized Fokker-Planck
equations are associated with an effective thermodynamical formalism
(ETF) in $\mu$-space \cite{pre}. In particular, the generalized
Kramers-Poisson (GKP) system decreases the free energy
\begin{eqnarray}
\label{k4} F[f]\equiv E-TS=\int f{v^{2}\over 2}\,d{\bf r}d{\bf v}+{1\over
2}\int \rho\Phi \,d{\bf r}+T\int C(f)\,d{\bf r}d{\bf v},
\end{eqnarray}
where the last term can be interpreted as a generalized entropy
$S=-\int C(f)d{\bf r}d{\bf v}$ and we have defined the effective
temperature $T=1/\beta$ through the relation $\xi=D/T$ (effective
Einstein relation). One has
\begin{eqnarray}
\label{k5} \dot F=-\int {1\over\xi f}\biggl (DfC''(f){\partial
f\over\partial {\bf v}}+\xi f {\bf v}\biggr )^{2} d{\bf r}d{\bf
v}\le 0.
\end{eqnarray}
At equilibrium, we find that the stationary solutions of the
generalized Kramers equation (\ref{k3}) are determined by the
integro-differential equation
\begin{eqnarray}
\label{k6} C'(f)=-\beta\biggl ({v^{2}\over 2}+\Phi\biggr )-\alpha,
\end{eqnarray}
where $\Phi({\bf r},t)=\int u({\bf r}-{\bf r}') f({\bf r}',{\bf
v}',t)\,d{\bf r}'d{\bf v}'$. Since $C$ is convex, this relation can
be inversed to give $f=F(\beta\epsilon+\alpha)$ where
$F(x)=(C')^{-1}(-x)$ and $\epsilon={v^{2}\over 2}+\Phi({\bf r})$. We
note that the equilibrium distribution determined by Eq. (\ref{k6}) is
a function $f=f(\epsilon)$ of the individual energy
$\epsilon$ alone which is monotonically decreasing
(for $\beta>0$). This equilibrium distribution function extremizes
the free energy (\ref{k4}) at fixed mass. Furthermore, only a (local)
minimum of free energy is linearly dynamically stable with respect to
the GKP system \cite{pre}.
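As a simple illustration (an example added here; this choice is standard
in this context, see e.g. \cite{pre}), consider
$C(f)=\frac{1}{q-1}(f^{q}-f)$ with $q>1$. Then
$C'(f)=(qf^{q-1}-1)/(q-1)$ and Eq.~(\ref{k6}) yields the Tsallis
distribution
\begin{eqnarray}
f=\biggl\lbrack \mu-\frac{(q-1)\beta}{q}\biggl ({v^{2}\over 2}+\Phi\biggr
)\biggr\rbrack_{+}^{1/(q-1)},
\end{eqnarray}
where $\mu$ absorbs the Lagrange multiplier $\alpha$ and
$\lbrack\cdot\rbrack_{+}$ denotes the positive part. With the local
thermodynamic equilibrium closure introduced in subsection \ref{sec_eul},
this choice leads to a polytropic equation of state $p=K\rho^{\gamma}$
with $\gamma=1+2(q-1)/\lbrack 2+d(q-1)\rbrack$.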
\subsection{The damped Jeans equations}
\label{sec_jeans}
We shall now determine the hierarchy of moment equations
associated with the generalized Kramers-Poisson system. Defining
the density and the local velocity by
\begin{eqnarray}
\label{j1} \rho=\int f\,d{\bf v}, \qquad \rho{\bf u}=\int f{\bf
v}\,d{\bf v},
\end{eqnarray}
and integrating Eq.~(\ref{k3}) on velocity, we get the continuity equation
\begin{eqnarray}
\label{j2} {\partial\rho\over\partial t}+\nabla\cdot (\rho{\bf u})=0.
\end{eqnarray}
Next, multiplying Eq.~(\ref{k3}) by ${\bf v}$ and integrating on velocity, we obtain
\begin{eqnarray}
\label{j3} {\partial\over\partial t}(\rho
u_{i})+{\partial\over\partial x_{j}}(\rho u_{i}u_{j})= -{\partial
P_{ij}\over\partial x_{j}}-\rho{\partial\Phi\over\partial
x_{i}}-\xi\rho u_{i},
\end{eqnarray}
where we have defined the pressure tensor
\begin{eqnarray}
\label{j4}P_{ij}=\int fw_{i}w_{j}\,d{\bf v},
\end{eqnarray}
where ${\bf w}={\bf v}-{\bf u}$ is the relative velocity. In the absence
of diffusion and friction ($D=\xi=0$), we recover the Jeans equations
of astrophysics that are derived from the Vlasov equation
\cite{bt}. For self-gravitating Brownian particles, the equivalent of
the Jeans equations (\ref{j2})-(\ref{j3}) include an additional
friction force $-\xi {\bf u}$. Using the continuity equation,
Eq.~(\ref{j3}) can be rewritten
\begin{eqnarray}
\label{j5} \rho\biggl ({\partial u_{i}\over\partial
t}+u_{j}{\partial u_{i}\over\partial x_{j}}\biggr )=-{\partial
P_{ij}\over\partial x_{j}}-\rho{\partial\Phi\over\partial
x_{i}}-\xi\rho u_{i}.
\end{eqnarray}
\subsection{The damped barotropic Euler equations}
\label{sec_eul}
By taking the successive moments of the velocity, we can obtain a
hierarchy of hydrodynamical equations. Each equation of the hierarchy
involves the moment of next order. The ordinary Jeans equations that
are based on the Vlasov equation are difficult to close because the
Vlasov equation admits an infinite number of stationary
solutions. Therefore, a notion of thermodynamical equilibrium is
difficult to justify from the usual point of view (see, however,
\cite{csr} in the context of the theory of violent relaxation). In the
present case, the situation is simpler because the Kramers equation
admits a Lyapunov functional (\ref{k4}) and a unique
stationary distribution defined by Eq.~(\ref{k6}). If we are sufficiently
close to equilibrium, it makes sense to close the hierarchy of
equations by using a condition of local thermodynamic equilibrium. We
shall thus determine the pressure tensor Eq.~(\ref{j4}) with the
distribution function $f_{L.T.E.}$ defined by the relation
\begin{eqnarray}
\label{eul1} C'(f_{L.T.E.})=-\beta\biggl \lbrack {w^{2}\over
2}+\lambda({\bf r},t)\biggr \rbrack.
\end{eqnarray}
This distribution function minimizes the generalized free energy
Eq.~(\ref{k4}) at fixed temperature $T$, local density $\rho({\bf r},t)$
and local velocity ${\bf u}({\bf r},t)$. The function
$\lambda({\bf r},t)$ is the Lagrange multiplier associated with
the density field. It is determined by the requirement
\begin{eqnarray}
\label{eul2}\rho({\bf r},t)=\int f_{L.T.E.} \,d{\bf v}=\rho\lbrack
\lambda({\bf r},t)\rbrack.
\end{eqnarray}
At equilibrium, we recover the distribution function
Eq.~(\ref{k6}) with ${\bf u}({\bf r})={\bf 0}$ and $\lambda({\bf
r})=\Phi({\bf r})+\alpha/\beta$. Using the condition
Eq.~(\ref{eul1}) of local thermodynamic equilibrium (LTE), the
pressure tensor Eq.~(\ref{j4}) can be written
$P_{ij}=p\delta_{ij}$ with
\begin{eqnarray}
\label{eul3} p({\bf r},t)={1\over d}\int f_{L.T.E.} w^{2}\,d{\bf
w}=p\lbrack \lambda({\bf r},t)\rbrack.
\end{eqnarray}
The pressure is a function $p=p(\rho)$ of the density which is
entirely specified by the function $C(f)$, by eliminating $\lambda$
from the relations Eq.~(\ref{eul2}) and Eq.~(\ref{eul3}). We note
furthermore that, using Eq. (\ref{eul1}) and integrations by parts,
the previous equations (\ref{eul2}) and (\ref{eul3}) easily
lead to $\nabla p={1\over d}\int {\partial f\over\partial {\bf
r}}w^{2}d{\bf w}={1\over d}\nabla\lambda
\int {\partial f\over\partial {\bf w}}{\bf w}d{\bf
w}=-\nabla\lambda\int f d{\bf w}= -\rho\nabla
\lambda$, hence $p'(\rho)=-\rho
\lambda'(\rho)$. In the case of Brownian particles described by the
ordinary Kramers equation (\ref{k2}) with $C(f)=f\ln f$, the equation
of state determined by Eqs. (\ref{eul2}) and (\ref{eul3}) is the
isothermal one $p={k_{B}T\over m}\rho$. More generally, we obtain the damped
Euler equations for a barotropic gas \cite{pre}:
\begin{eqnarray}
\label{eul4} {\partial\rho\over\partial t}+\nabla\cdot (\rho{\bf u})=0,
\end{eqnarray}
\begin{eqnarray}
\label{eul5} {\partial {\bf u}\over\partial t}+({\bf u}\cdot
\nabla){\bf u}= -{1\over\rho}\nabla p-\nabla\Phi-\xi {\bf u}.
\end{eqnarray}
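As an illustrative aside (ours) on the equation of state entering
Eqs. (\ref{eul4})-(\ref{eul5}), the elimination of $\lambda$ between
Eqs. (\ref{eul2}) and (\ref{eul3}) can be carried out numerically. The
following Python sketch (with $d=3$, units where $m=1$, $T=1/\beta$, and
arbitrary values of $\lambda$) recovers the isothermal law $p=\rho T$
for $C(f)=f\ln f$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

beta, d = 2.0, 3
T = 1.0 / beta                           # effective temperature

def f_lte(w, lam):                       # from C'(f) = -beta(w^2/2+lam)
    return np.exp(-1.0 - beta * (0.5 * w**2 + lam))

def rho(lam):                            # rho = int f d^3w
    return quad(lambda w: 4*np.pi * w**2 * f_lte(w, lam), 0, np.inf)[0]

def p(lam):                              # p = (1/d) int f w^2 d^3w
    return quad(lambda w: 4*np.pi * w**4 * f_lte(w, lam),
                0, np.inf)[0] / d

for lam in [0.0, 0.5, 1.0]:
    print(p(lam), rho(lam) * T)          # the two columns agree
\end{verbatim}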
Equations (\ref{eul4}) and (\ref{eul5}) decrease the free energy
\begin{eqnarray}
\label{eul6} F[\rho,{\bf u}]=\int \rho\int^{\rho}{p(\rho')\over
\rho'^{2}} \,d \rho'd{\bf r}+{1\over 2}\int\rho\Phi d{\bf r}+\int
\rho {{\bf u}^{2}\over 2} \,d{\bf r},
\end{eqnarray}
which can be deduced from the free energy (\ref{k4}) by using the
local thermodynamic equilibrium condition (\ref{eul1}) to express
$F[f]$ as a functional of $\rho$ and ${\bf u}$, using $F[\rho,{\bf
u}]=F[f_{L.T.E.}]$ (see \cite{lemou} for details). A direct
calculation yields
\begin{eqnarray}
\label{eul7} \dot F=-\xi\int \rho {\bf u}^{2}\,d{\bf r}\le 0.
\end{eqnarray}
At equilibrium, $\dot F=0$ implying ${\bf u}={\bf 0}$. Then,
Eq.~(\ref{eul5}) yields the condition of hydrostatic balance
\begin{eqnarray}
\label{eulhydro} \nabla p+\rho\nabla\Phi={\bf 0},
\end{eqnarray}
which also results from Eq. (\ref{k6}). Indeed, for $f=f(\epsilon)$ with $\epsilon=v^{2}/2+\Phi({\bf r})$, one has $\rho=\int f(\epsilon)d{\bf v}$, $p={1\over d}\int f(\epsilon)v^{2}d{\bf v}$ so that
\begin{eqnarray}
\label{hproof} \nabla p={1\over d}\int f'(\epsilon)\nabla\Phi v^{2}d{\bf v}={1\over d}\nabla\Phi \int {\partial f\over\partial {\bf v}}\cdot {\bf v}d{\bf v}=-\nabla\Phi\int f d{\bf v}=-\rho\nabla\Phi.
\end{eqnarray}
The damped barotropic Euler equations (\ref{eul4})-(\ref{eul5}) are
interesting as they connect hyperbolic models to parabolic
models. Indeed, for $\xi=0$ we recover the standard barotropic Euler
equations describing the dynamics of gaseous stars
\cite{bt,aa1,aa2,grand} or the formation of large-scale structures in
Cosmology \cite{peeble}. Alternatively, in the strong friction limit
$\xi\rightarrow +\infty$, we can neglect the inertial term in
Eq.~(\ref{eul5}) and we obtain
\begin{eqnarray}
\label{eul8} {\bf u}=-{1\over\xi\rho}(\nabla
p+\rho\nabla\Phi)+O(\xi^{-2}).
\end{eqnarray}
Substituting this drift term in the continuity equation (\ref{eul4}), we get
the generalized Smoluchowski equation \cite{pre}:
\begin{eqnarray}
{\partial\rho\over\partial t}=\nabla\cdot \biggl \lbrack
{1\over\xi}(\nabla p+\rho\nabla\Phi)\biggr\rbrack. \label{eul9}
\end{eqnarray}
This equation decreases the free energy
\begin{eqnarray}
\label{eul6b} F[\rho]=\int \rho\int^{\rho}{p(\rho')\over
\rho'^{2}} \,d \rho'd{\bf r}+{1\over 2}\int\rho\Phi d{\bf r},
\end{eqnarray}
which is obtained from Eq. (\ref{eul6}) by neglecting the last term of
order $O(\xi^{-2})$. A direct calculation yields
\begin{eqnarray}
\label{eul7b} \dot F=-\int {1\over\xi \rho}(\nabla p+ \rho \nabla\Phi)^{2}\,d{\bf r}\le 0.
\end{eqnarray}
It should be recalled that the damped Euler equations
(\ref{eul4})-(\ref{eul5}) remain heuristic because their derivation is
based on a Local Thermodynamic Equilibrium (L.T.E.) assumption
(\ref{eul1}) which is not rigorously justified. However, using a
Chapman-Enskog expansion, it is shown in \cite{lemou} that the
generalized Smoluchowski equation (\ref{eul9}) is {\it exact} in the
limit $\xi\rightarrow +\infty$ (or, equivalently, for times $t\gg
\xi^{-1}$). The generalized Smoluchowski equation can also be
obtained from the moments equations of the generalized Kramers
equation by closing the hierarchy, using $\xi\rightarrow +\infty$ (see
Sec. 9 of \cite{cban}).
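As a numerical aside (our illustration, not taken from the references
above), the decrease of the free energy (\ref{eul6b}) under
Eq.~(\ref{eul9}) is easy to observe in one dimension. The sketch below
assumes an isothermal gas $p=\rho T$ and, for brevity, replaces the
self-consistent potential by a {\it fixed} external potential
$\Phi=x^{2}/2$; the free energy of this externally confined problem is
$F=T\int\rho\ln\rho\,dx+\int\rho\Phi\,dx$ (without the factor $1/2$ of
the self-interacting case):
\begin{verbatim}
import numpy as np

T, xi = 0.5, 1.0
x = np.linspace(-5.0, 5.0, 201)
dx = x[1] - x[0]
Phi = 0.5 * x**2
xm = 0.5 * (x[1:] + x[:-1])          # cell-interface positions
rho = np.exp(-(x - 1.5)**2)          # off-centre initial condition
rho /= np.sum(rho) * dx
dt = 0.2 * dx**2 * xi / T            # explicit-scheme stability

def free_energy(r):
    r = np.maximum(r, 1e-300)
    return np.sum(T * r * np.log(r) + r * Phi) * dx

for step in range(4001):
    # conservative flux (T drho/dx + rho dPhi/dx)/xi, with dPhi/dx = x
    flux = (T * np.diff(rho) / dx
            + 0.5 * (rho[1:] + rho[:-1]) * xm) / xi
    rho[1:-1] += dt * np.diff(flux) / dx
    if step % 1000 == 0:
        print(step, free_energy(rho))  # monotonically decreasing
\end{verbatim}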
\subsection{The orbit-averaged-Kramers equation}
\label{sec_oak}
We shall consider here the opposite limit of low friction
$\xi\rightarrow 0$. In the case where the term in the r.h.s.\ of
Eq. (\ref{k2}) is small, we can obtain a simplified equation for the
evolution of the distribution function by averaging the kinetic
equation over the orbits of the particles. In the case of Hamiltonian
self-gravitating systems described by the Landau equation, this leads
to the orbit-averaged-Landau equation introduced by H\'enon
\cite{henon}. We shall here derive the orbit-averaged-Kramers equation
for self-gravitating Brownian systems.
Let us first rewrite the mean-field Kramers equation (\ref{k2}) in the
form
\begin{eqnarray}
\label{oak1} {\partial f\over\partial t}+{\bf v}\cdot {\partial
f\over\partial {\bf r}}-\nabla\Phi \cdot {\partial f\over\partial
{\bf v}} =Q(f).
\end{eqnarray}
We consider the case where $\xi\rightarrow 0$ for fixed $\beta$ so
that $\xi\simeq D\simeq 0$ and the operator $Q(f)$ can be considered
as small. If we take $Q(f)=0$, we obtain the Vlasov equation. We shall
assume that the system has reached a stable stationary distribution of
the Vlasov equation of the form $f=f(\epsilon)$ which depends only on
the energy $\epsilon={v^{2}\over 2}+\Phi({\bf r},t)$ of the
particles. This is a particular case of the Jeans theorem for
spherical systems. Such a steady solution can arise from a process of
violent collisionless relaxation
\cite{lb,chav}. Since $Q(f)\neq 0$, the distribution function
will change due to the terms of friction and diffusion that are
present in the stochastic equation (\ref{nb2}) \footnote{In the
case of Hamiltonian systems where $\xi=D=0$, the distribution function
changes due to finite $N$ effects representing close encounters
between stars. These encounters are usually modelled by the Landau
operator
\cite{chav} which is of order $1/N\ll 1$. On the other hand, for self-gravitating Brownian particles the evolution is modelled by
the Kramers operator (obtained for $N\rightarrow +\infty$) which is of
the order $\xi t_{D}$ (where $t_{D}$ is the dynamical time). The
comparison between the evolution of Hamiltonian and Brownian systems
with long-range interactions is further discussed in the Conclusion of
this paper.}. However, if $\xi\rightarrow 0$, this change will be slow
so that the latter forces cause only a small variation of the
energy. We shall therefore consider that $f({\bf r},{\bf v},t)\simeq
f(\epsilon,t)$ remains a function of the energy alone that slowly
evolves due to imposed stochastic forces. Noting that
\begin{eqnarray}
\label{oak2} {\partial \over\partial t}f(\epsilon({\bf r},{\bf v},t),t)= {\partial f\over\partial t}+{\partial \Phi\over\partial t}{\partial f\over\partial \epsilon},
\end{eqnarray}
we can rewrite the kinetic equation (\ref{oak1}) in the form
\begin{eqnarray}
\label{oak3} {\partial f\over\partial t}+{\partial \Phi\over\partial t}{\partial f\over\partial \epsilon}=Q(f).
\end{eqnarray}
Since $f$ depends only on the energy, the system is spherically symmetric. Then, the phase-space volume with energy smaller than $\epsilon$ is
\begin{eqnarray}
\label{oak4} q(\epsilon,t)\equiv 16\pi^{2}\int_{v^{2}/2+\Phi\le \epsilon}r^{2}dr v^{2}dv={16\pi^{2}\over 3}\int_{0}^{r_{max}(\epsilon,t)}\lbrack 2(\epsilon-\Phi)\rbrack^{3/2}r^{2}dr,
\end{eqnarray}
where $r_{max}(\epsilon,t)$ is the largest radial extent of an orbit with energy $\epsilon$ at time $t$. It is determined by the equation $\epsilon=\Phi(r_{max},t)$ corresponding to $v=0$. The previous relation can be written more compactly as
\begin{eqnarray}
\label{oak5} q(\epsilon,t)={16\pi^{2}\over 3}\int_{0}^{r_{max}} v^{3} r^{2}dr,
\end{eqnarray}
where $v=\sqrt{2(\epsilon-\Phi(r,t))}$. The phase-space volume $g(\epsilon,t)d\epsilon$ with energy between $\epsilon$ and $\epsilon+d\epsilon$ is given by
\begin{eqnarray}
\label{oak6} g(\epsilon,t)\equiv {\partial q\over\partial \epsilon}={16\pi^{2}}\int_{0}^{r_{max}(\epsilon,t)}\lbrack 2(\epsilon-\Phi)\rbrack^{1/2}r^{2}dr={16\pi^{2}}\int_{0}^{r_{max}} v r^{2}dr.
\end{eqnarray}
Now, the density of particles in the energy shell between $\epsilon$
and $\epsilon+d\epsilon$ is uniform since the distribution function
depends only on the energy. In fact, the system evolves on a short
timescale $\sim t_{D}$ by purely inertial effects (corresponding to
the advective terms in the l.h.s.\ of Eq. (\ref{oak1})) so as to
establish this quasi-stationary regime where $f\simeq
f(\epsilon,t)$. We shall therefore average the kinetic equation
(\ref{oak3}) on each hypersurface of iso-energy using
\begin{eqnarray}
\label{oak7} \langle X\rangle (\epsilon,t)={\int_{0}^{r_{max}} Xv r^{2}dr\over \int_{0}^{r_{max}} v r^{2}dr}
\end{eqnarray}
for any function $X(r,v,t)$. Thus, the orbit-averaged kinetic equation can be written
\begin{eqnarray}
\label{oak8}16\pi^{2}\int_{0}^{r_{max}}r^{2}dr \ v\left\lbrack {\partial f\over\partial t}+{\partial\Phi\over\partial t}{\partial f\over\partial\epsilon}-Q(f)\right\rbrack=0.
\end{eqnarray}
The first term in brackets is
\begin{eqnarray}
\label{oak9}16\pi^{2}\int_{0}^{r_{max}}r^{2}dr \ v {\partial f\over\partial t}=g(\epsilon,t){\partial f\over\partial t}={\partial q\over\partial \epsilon}{\partial f\over\partial t}.
\end{eqnarray}
Since
\begin{eqnarray}
\label{oak10}{\partial q\over\partial t}=-16\pi^{2}\int_{0}^{r_{max}}v{\partial\Phi\over\partial t}r^{2}dr,
\end{eqnarray}
the second term in brackets can be written
\begin{eqnarray}
\label{oak11}16\pi^{2}\int_{0}^{r_{max}}r^{2}dr \ v {\partial\Phi\over\partial t}{\partial f\over\partial \epsilon}=-{\partial q\over\partial t}{\partial f\over\partial \epsilon}.
\end{eqnarray}
Finally, since
\begin{eqnarray}
\label{oak12}
vQ(f)=D{\partial\over\partial\epsilon}\left\lbrack v^{3}\left ({\partial f\over\partial \epsilon}+\beta m f\right )\right\rbrack,
\end{eqnarray}
the last term in brackets is
\begin{eqnarray}
\label{oak13}16\pi^{2}\int_{0}^{r_{max}}r^{2}dr \ v Q(f)=3D{\partial\over\partial\epsilon}\left\lbrack q\left ({\partial f\over\partial \epsilon}+\beta m f\right )\right \rbrack.
\end{eqnarray}
Regrouping all these results, we obtain the orbit-averaged-Kramers equation
\begin{eqnarray}
\label{oak14}{\partial q\over\partial\epsilon}{\partial f\over\partial t}-{\partial q\over\partial t}{\partial f\over\partial \epsilon}=3D{\partial\over\partial\epsilon}\left\lbrack q\left ({\partial f\over\partial \epsilon}+\beta m f\right )\right \rbrack,
\end{eqnarray}
\begin{eqnarray}
\label{oak15} q(\epsilon,t)={16\pi^{2}\over 3}\int_{0}^{r_{max}(\epsilon,t)}\lbrack 2(\epsilon-\Phi(r,t))\rbrack^{3/2}r^{2}dr,
\end{eqnarray}
\begin{eqnarray}
\label{oak16} {1\over r^{2}}{\partial\over\partial r}\left (r^{2}{\partial\Phi\over\partial r}\right )=16\pi^{2}G\int_{\Phi(r,t)}^{+\infty}f(\epsilon,t)\lbrack 2(\epsilon-\Phi(r,t))\rbrack^{1/2}d\epsilon,
\end{eqnarray}
where the last equation is the Poisson equation. It is easy to verify
that the free energy is monotonically decreasing ($\dot F\le 0$) and
that the stationary solution of Eq. (\ref{oak14}) is the Boltzmann distribution
$f=Ae^{-\beta m\epsilon}$. These equations will be studied in a future communication. Note also that in $d=1$, $q=\oint v\, dx$ and $g=\oint dx/v$ (the integrals being taken over a complete orbit), while in $d=2$, $q=2\pi^{2}\int_{0}^{r_{m}}v^{2}rdr$ and $g=2\pi^{2}r_{m}^{2}$.
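As an illustrative check (ours) of the definitions
(\ref{oak4})-(\ref{oak6}): for the harmonic potential $\Phi(r)=r^{2}/2$
in $d=3$ one finds the closed forms $q(\epsilon)=(4\pi^{3}/3)\epsilon^{3}$
and $g(\epsilon)=4\pi^{3}\epsilon^{2}$, which the following Python sketch
verifies by quadrature:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def q(eps):                            # Eq. (oak15) with Phi(r) = r^2/2
    rmax = np.sqrt(2 * eps)            # eps = Phi(rmax)
    h = lambda r: (2 * (eps - 0.5 * r**2))**1.5 * r**2
    return (16 * np.pi**2 / 3) * quad(h, 0, rmax)[0]

def g(eps):                            # Eq. (oak6)
    rmax = np.sqrt(2 * eps)
    h = lambda r: (2 * (eps - 0.5 * r**2))**0.5 * r**2
    return 16 * np.pi**2 * quad(h, 0, rmax)[0]

for eps in [0.5, 1.0, 2.0]:
    print(q(eps), (4 * np.pi**3 / 3) * eps**3)   # agree
    print(g(eps), 4 * np.pi**3 * eps**2)         # agree
\end{verbatim}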
\section{The Virial theorem for Brownian particles}
\label{sec_vith}
\subsection{The Virial theorem from the damped Jeans equations}
\label{sec_vj}
We shall give here the form of the Virial theorem appropriate to
the damped Jeans equations (\ref{j2})-(\ref{j3}). The only difference with the
standard Jeans equations is the presence of a friction term. We
shall thus only give the final result and refer to \cite{bt} for
the details of the calculation. The damped Virial theorem can be
written
\begin{eqnarray}
\label{damv1} {1\over 2}{d^{2}I_{ij}\over dt^{2}}+{1\over 2} \xi
{dI_{ij}\over dt}=2K_{ij}+W_{ij}-{1\over 2}\oint (P_{ik}
x_{j}+P_{jk} x_{i})\,dS_{k}.
\end{eqnarray}
We have included boundary terms which must be taken into account if
the system is confined within a box. The tensor of inertia $I_{ij}$
and the potential energy tensor $W_{ij}$ are defined in Paper I.
The kinetic energy tensor is defined by
\begin{eqnarray}
\label{damv2} K_{ij}={1\over 2}\int f v_{i}v_{j}\,d{\bf v}.
\end{eqnarray}
It can be written as
\begin{eqnarray}
\label{damv3} K_{ij}=T_{ij}+{1\over 2}\Pi_{ij},
\end{eqnarray}
where
\begin{eqnarray}
\label{damv4}T_{ij}={1\over 2}\int \rho u_{i}u_{j}\,d{\bf r},
\qquad \Pi_{ij}=\int P_{ij}\,d{\bf r}.
\end{eqnarray}
Note that the tensors $K_{ij}$ and $P_{ij}$ depend on the distribution
function $f({\bf r},{\bf v},t)$, not only on hydrodynamic variables.
The scalar Virial theorem takes the form
\begin{eqnarray}
\label{damv6} {1\over 2}{d^{2}I\over dt^{2}}+ {1\over 2}\xi
{dI\over dt}=2K+W_{ii}-\oint P_{ik}x_{i}\,dS_{k},
\end{eqnarray}
where $I$ is the moment of inertia and
\begin{eqnarray}
\label{damv7} K={1\over 2}\int f v^{2}\,d{\bf r}d{\bf v},
\end{eqnarray}
is the kinetic energy. It can be written
\begin{eqnarray}
\label{damv8} K=T+{1\over 2}\Pi,
\end{eqnarray}
where
\begin{eqnarray}
\label{damv9} T={1\over 2}\int \rho {\bf u}^{2}\, d{\bf r}, \qquad
\Pi=d\int p\,d{\bf r}.
\end{eqnarray}
In the absence of diffusion and friction ($D=\xi=0$), we recover the
usual expression of the Virial theorem derived from the Jeans equations
\cite{bt}. For Brownian particles, the novelty is the presence of a
damping term $\xi\dot I$.
As pointed out in Paper I, the moment of inertia depends on the origin
$O$ of the system of coordinates. Let ${\bf R}(t)=(1/M)\int \rho
{\bf r} d{\bf r}$ denote the position of the center of mass with
respect to the origin $O$. Using the equation of continuity
(\ref{eul4}), we find that $Md{\bf R}/dt={\bf P}$ where ${\bf P}=\int
\rho {\bf u}d{\bf r}$ is the total momentum. Using the Jeans equation
(\ref{j3}), we find that $d{\bf P}/dt=-\xi {\bf P}$. In our case,
there exists an absolute reference frame $({\cal R})$. Indeed, in writing
Eq. (\ref{nb2}) we have implicitly assumed that our Brownian particles
evolve in a fluid that is motionless. Otherwise, the friction force in
Eq. (\ref{nb2}) has to be modified according to $-\xi ({\bf
v}_{i}-{\bf U})$ where ${\bf U}$ is the velocity of the fluid
\cite{lemou}. We must therefore work in this reference frame $({\cal
R})$. If we denote by ${\bf P}_{0}$ the initial value of the momentum,
we get ${\bf P}(t)={\bf P}_{0}e^{-\xi t}$. If now ${\bf R}_{0}$
denotes the initial position of the center of mass with respect to
$O$, we find that
\begin{eqnarray}
\label{rmov} {\bf R}(t)={\bf R}_{0}+{{\bf P}_{0}\over M\xi}(1-e^{-\xi t}).
\end{eqnarray}
Therefore, at $t\rightarrow +\infty$ the center of mass has been shifted by a
quantity ${\bf P}_{0}/M\xi$. In the strong friction limit $\xi\rightarrow +\infty$, we find that the center of mass is motionless (Paper I).
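The drift (\ref{rmov}) of the center of mass is readily confirmed by a
direct integration of $M\dot{\bf R}={\bf P}$, $\dot{\bf P}=-\xi{\bf P}$;
the following one-dimensional Python sketch (our illustration, with
arbitrary parameter values) does so:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

M, xi, R0, P0 = 2.0, 3.0, 1.0, 0.4
sol = solve_ivp(lambda t, y: [y[1] / M, -xi * y[1]],
                (0, 5), [R0, P0], rtol=1e-9, atol=1e-12,
                dense_output=True)
for t in [0.5, 2.0, 5.0]:
    exact = R0 + (P0 / (M * xi)) * (1 - np.exp(-xi * t))
    print(sol.sol(t)[0], exact)        # agree to solver tolerance
\end{verbatim}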
\subsection{The Virial theorem from the damped Euler equations}
\label{sec_ve}
The Virial
theorem associated to the damped barotropic Euler equations
(\ref{eul4})-(\ref{eul5}) can be deduced from Eq.~(\ref{damv1}) by
using the fact that $P_{ij}=p(\rho)\delta_{ij}$. This yields
\begin{eqnarray}
\label{damv11} {1\over 2}{d^{2}I_{ij}\over dt^{2}} +{1\over 2}\xi
{dI_{ij}\over dt}=2T_{ij}+{1\over d}\Pi \delta_{ij}+W_{ij}-{1\over
2}\oint p(\delta_{ik} x_{j}+\delta_{jk} x_{i})\,dS_{k},
\end{eqnarray}
\begin{eqnarray}
\label{damv12} {1\over 2}{d^{2}I\over dt^{2}}+{1\over 2} \xi {dI\over
dt}=2T+\Pi+W_{ii}-\oint p {\bf r}\cdot d{\bf S},
\end{eqnarray}
where each quantity is now expressed in terms of hydrodynamic
variables. At equilibrium, if no macroscopic motion is present
($T=0$) and if we can neglect the boundary term, we get
\begin{eqnarray}
\label{damv13} W_{ij}=-{1\over d}\Pi\, \delta_{ij}.
\end{eqnarray}
In the absence of diffusion and friction ($D=\xi=0$), we recover the
usual form of the Virial theorem derived from the barotropic Euler
equations \cite{bt}. Alternatively, in the strong friction limit
$\xi\rightarrow +\infty$, we can neglect the term $\ddot I$ in front
of $\xi\dot I$. Furthermore, since the velocity field scales as ${\bf
u}=O(\xi^{-1})$, the kinetic energy tensor $T_{ij}=O(\xi^{-2})$ can
also be neglected. Therefore, the Virial theorem associated with the generalized Smoluchowski-Poisson system
(\ref{eul9})-(\ref{pois}) can be written
\begin{eqnarray}
\label{damv11b} {1\over 2}\xi
{dI_{ij}\over dt}=2 K_{ij}+W_{ij}-{1\over
2}\oint p(\delta_{ik} x_{j}+\delta_{jk} x_{i})\,dS_{k},
\end{eqnarray}
\begin{eqnarray}
\label{damv12b} {1\over 2} \xi {dI\over
dt}=2 K+W_{ii}-\oint p {\bf r}\cdot d{\bf S},
\end{eqnarray}
where
\begin{eqnarray}
\label{kdef} K_{ij}={1\over d}K\delta_{ij}, \qquad K={d\over 2}\int p d{\bf r},
\end{eqnarray}
is the expression of the kinetic energy to leading order in the limit $\xi\rightarrow +\infty$ where ${\bf u}({\bf r},t)$ can be neglected. We thus recover the results of Paper I starting directly from the GSP system.
\subsection{The effect of correlations}
\label{sec_corrv}
If we account for the effect of correlations (due to finite values of
$N$) between the particles and use the exact kinetic equation (\ref{nb9}),
we obtain the exact damped Jeans equations
\begin{eqnarray}
\label{corrv1} {\partial\rho\over\partial t}+\nabla\cdot (\rho{\bf u})=0,
\end{eqnarray}
\begin{eqnarray}
\label{corrv2} {\partial\over\partial t}(\rho u_{i})+
{\partial\over\partial x_{j}}(\rho u_{i}u_{j})=-{\partial
P_{ij}\over\partial x_{j}}+Gm^{2}\int {x_{i}'-x_{i}\over |{\bf
r}'-{\bf r}|^{d}}\rho_{2}({\bf r},{\bf r}',t)\,d{\bf r}'-\xi\rho
u_{i},
\end{eqnarray}
where we have introduced the spatial correlation function
\begin{eqnarray}
\label{corrv3} \rho_{2}({\bf r}_{1},{\bf r}_{2},t)=N(N-1) \int
P_{2}({\bf r}_{1},{\bf v}_{1},{\bf r}_{2},{\bf v}_{2},t)\,d{\bf
v}_{1}d{\bf v}_{2}.
\end{eqnarray}
In the mean-field approximation $\rho_{2}(1,2)=\rho(1)\rho(2)$, we
recover the damped Jeans equations (\ref{j2})-(\ref{j3}). The Virial theorem is
now given by
\begin{eqnarray}
\label{corrv4} {1\over 2}{d^{2}I_{ij}\over dt^{2}} +{1\over 2}\xi
{dI_{ij}\over dt}=2K_{ij}+W^{corr}_{ij}-{1\over 2}\oint (P_{ik}
x_{j}+P_{jk} x_{i})\,dS_{k},
\end{eqnarray}
where
\begin{equation}
W_{ij}^{corr}=-{Gm^{2}\over 2}\int \rho_{2}({\bf r},{\bf r}')
{({x}_{i}-{x}'_{i})(x_{j}-x_{j}')\over |{\bf r}-{\bf
r}'|^{d}}\,d{\bf r}d{\bf r}', \label{corrv5}
\end{equation}
is a generalization of the potential energy tensor accounting
for correlations between particles. In the strong friction limit $\xi\rightarrow +\infty$, the Virial theorem reduces to
\begin{eqnarray}
\label{corrv4str} {1\over 2}\xi
{dI_{ij}\over dt}=\delta_{ij}\int p d{\bf r}+W^{corr}_{ij}-{1\over 2}\oint
p (\delta_{ik}
x_{j}+\delta_{jk} x_{i})\,dS_{k}.
\end{eqnarray}
If we consider the case of Brownian particles with an isothermal
equation of state $p=\rho k_{B}T/m$ and if we focus on a space
with $d=2$ dimensions where
\begin{equation}
W_{ii}^{corr}=-{Gm^{2}\over 2}\int \rho_{2}({\bf r},{\bf r}')
\,d{\bf r}d{\bf r}'=-N(N-1){Gm^{2}\over 2},
\label{corrv6}
\end{equation}
the scalar Virial theorem takes the form
\begin{eqnarray}
\label{d2a}
{1\over 2} \xi {dI\over dt}=2Nk_{B}(T-T_{c})-2PV,
\end{eqnarray}
with a critical temperature
\begin{eqnarray}
\label{yt2}
k_{B}T_{c}={Gm^{2}(N-1)\over 4}.
\end{eqnarray}
These results are valid for an arbitrary number of
particles. For $N\gg 1$, using $N-1\simeq N$, we recover the results
of Paper I.
\section{Dynamical stability of self-gravitating Brownian particles}
\label{sec_dyn}
We shall now investigate the linear dynamical stability of a
stationary solution of the damped barotropic Euler-Poisson system
(\ref{eul4})-(\ref{eul5}) satisfying the condition of hydrostatic
balance (\ref{eulhydro}). We shall determine in particular the
equation of pulsations satisfied by a small perturbation around this
equilibrium state. We shall investigate the influence
of the friction parameter $\xi$ on the pulsation period and make the
connection between the standard Euler-Poisson system (hyperbolic)
obtained for $\xi=0$ and the generalized Smoluchowski-Poisson system
(parabolic) obtained for $\xi\rightarrow +\infty$. The linearized
damped Euler-Poisson equations are
\begin{eqnarray}
\label{pusl1} {\partial\delta \rho\over\partial t} +\nabla\cdot
(\rho\delta {\bf u})=0,
\end{eqnarray}
\begin{eqnarray}
\label{pusl2} \rho {\partial \delta {\bf u}\over\partial t}=
-\nabla(
p'(\rho)\delta\rho)-\rho\nabla\delta\Phi-\delta\rho\nabla\Phi-\xi\rho
\delta{\bf u},
\end{eqnarray}
\begin{eqnarray}
\label{pusl3} \Delta\delta\Phi=S_{d}G\delta\rho.
\end{eqnarray}
Considering spherically symmetric systems and writing the evolution of
the perturbation as $\delta\rho\sim e^{\lambda t}$, we get
\begin{eqnarray}
\label{pusl4} \lambda\delta\rho+{1\over r^{d-1}}{d\over dr}
(r^{d-1}\rho\delta u)=0,
\end{eqnarray}
\begin{eqnarray}
\label{pusl5} \lambda\rho\delta u=-{d\over dr}(p'(\rho)\delta\rho)-
\rho {d\delta\Phi\over dr}-\delta\rho{d\Phi\over dr}-\xi\rho
\delta{u},
\end{eqnarray}
\begin{eqnarray}
\label{pusl6} {1\over r^{d-1}}{d\over dr}\biggl
(r^{d-1}{d\delta\Phi\over dr}\biggr )=S_{d}G\delta\rho.
\end{eqnarray}
As in Paper I, we introduce the function $q(r)$ defined by
\begin{eqnarray}
\label{nov1} \delta\rho={1\over S_{d}r^{d-1}}{dq\over dr}.
\end{eqnarray}
The continuity equation then yields
\begin{eqnarray}
\label{nov2}\delta u=-{\lambda q\over S_{d}\rho r^{d-1}}.
\end{eqnarray}
After some elementary transformations similar to those of Paper I,
Eq.~(\ref{pusl5}) can be put in the form
\begin{equation}
{d\over dr}\biggl ({p'(\rho)\over S_{d} \rho r^{d-1}}{dq\over dr}\biggr
)+{Gq\over r^{d-1}}={\lambda(\lambda+\xi)\over S_{d} \rho r^{d-1}} q.
\label{pusl7}
\end{equation}
The case of barotropic stars described by the Euler-Poisson system
corresponds to $\xi=0$ \cite{bt,aa1,aa2,grand}. The case of
self-gravitating Brownian particles described by the generalized
Smoluchowski-Poisson system is recovered for $\xi\gg \lambda$
(see \cite{pre} and Paper I). We can therefore use the results of Paper
I by making the substitution $\xi\lambda\rightarrow
\lambda(\lambda+\xi)$. Therefore, an approximate
analytical expression for the eigenvalue $\lambda$ is given by
\begin{equation}
\lambda(\lambda+\xi)=(d\overline{\gamma}+2-2d)(d-2){W\over I}, \qquad (d\neq 2)
\label{pusl8}
\end{equation}
\begin{equation}
\lambda(\lambda+\xi)=-(\overline{\gamma}-1){GM^{2}\over I}, \qquad (d=2).
\label{pusl9}
\end{equation}
The friction coefficient $\xi$ affects the
evolution of the instability but it does not change the
instability threshold (determined by the sign of the r.h.s. of Eqs.
(\ref{pusl8})-(\ref{pusl9})). The unstable case corresponds to
$\lambda(\lambda+\xi)=\sigma^{2}>0$. The two eigenvalues are
\begin{equation}
\lambda_{\pm}={-\xi\pm \sqrt{\xi^{2}+4\sigma^{2}}\over 2}.
\label{pusl10}
\end{equation}
Since $\lambda_{+}>0$, we see that the perturbation grows
exponentially rapidly as $e^{\lambda_{+}t}$. The stable case corresponds to
$\lambda(\lambda+\xi)=-\sigma^{2}<0$. The two eigenvalues are
\begin{equation}
\lambda_{\pm}={-\xi\pm \sqrt{\xi^{2}-4\sigma^{2}}\over 2}.
\label{pusl11}
\end{equation}
If $\xi^{2}-4\sigma^{2}\ge 0$, then $\lambda_{\pm}<0$ and the
perturbation decreases exponentially rapidly without oscillating. This
is the case in particular for Brownian particles described by the
Smoluchowski equation ($\xi\rightarrow +\infty$) for which
$\lambda=-\sigma^{2}/\xi$ (Paper I). Alternatively, if
$\xi^{2}-4\sigma^{2}\le 0$, then $\lambda_{\pm}=(-\xi\pm
i\sqrt{4\sigma^{2}-\xi^{2}})/2$ and we have slowly damped oscillations
with a pulsation $\omega={1\over 2}\sqrt{4\sigma^{2}-\xi^{2}}$ and a
damping rate $\xi/2$. This is the case in particular for barotropic
stars ($\xi=0$) which oscillate with pulsation $\omega=\sigma$ without
attenuation. The separation between these two regimes (pure
damping vs damped oscillations) is obtained for $\xi=2\sigma$ at which
$\omega=0$. This suggests introducing the dimensionless parameter
\begin{equation}
F\equiv {\xi^{2}\over \lambda(\lambda+\xi)},
\label{pusl12}
\end{equation}
measuring the efficiency of the friction force. The critical values
are $F=0$ and $F=-4$. If $F<-4$ the system is stable and a
perturbation is damped out exponentially rapidly without oscillating. If
$-4<F<0$ the system is stable and a perturbation exhibits damped
oscillations. The pulsation vanishes for $F=-4$ and the damping rate
vanishes for $F=0$. For $F>0$, the system is unstable. Using
Eqs.~(\ref{pusl8})-(\ref{pusl9}), the parameter defined in
Eq.~(\ref{pusl12}) is explicitly given by
\begin{equation}
F={1\over (3\overline{\gamma}-4)}{\xi^{2} I\over W},\qquad (d=3)
\label{pusl13}
\end{equation}
\begin{equation}
F={-1\over \overline{\gamma}-1}{\xi^{2} I\over GM^{2}},\qquad (d=2)
\label{pusl14}
\end{equation}
\begin{equation}
F=-{1\over \overline{\gamma}}{\xi^{2} I\over W},\qquad (d=1).
\label{pusl15}
\end{equation}
Dimensionally, this parameter scales as $|F|\sim \xi^{2}R^{d}/GM$. It
can also be written $|F|\sim (\xi t_{D})^{2}$ where $t_{D}\sim
1/\sqrt{\rho G}$ is the dynamical time \cite{bt}. The dynamical
stability of a homogeneous system (for a general form of potential of
interaction) is treated in Appendix \ref{sec_hom}.
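As an illustrative aside (ours), the transition between damped
oscillations and pure damping described above is easily traced
numerically. In the stable case $\lambda(\lambda+\xi)=-\sigma^{2}$ one
has $F=-\xi^{2}/\sigma^{2}$, and the following Python sketch (with
$\sigma=1$) exhibits the critical value $F=-4$ at $\xi=2\sigma$:
\begin{verbatim}
import numpy as np

sigma = 1.0
for xi in [0.0, 1.0, 2.0, 4.0]:
    disc = xi**2 - 4 * sigma**2
    lam = (-xi + np.sqrt(disc + 0j)) / 2   # lambda_+ of Eq. (pusl11)
    F = xi**2 / (lam * (lam + xi))         # equals -xi^2/sigma^2
    print(xi, lam, F.real)
# xi = 0: undamped oscillation (lam = i sigma, F = 0);
# xi = 2: critical damping     (omega = 0,     F = -4);
# xi = 4: overdamped, with lam -> -sigma^2/xi for xi >> sigma.
\end{verbatim}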
\section{Conclusion}
\label{sec_conclusion}
In this paper, we have introduced general models of self-gravitating
Brownian particles (stochastic $N$-body, kinetic, hydrodynamic,...)
that relax the simplifying assumptions that are usually considered:
mean-field approximation for $N\rightarrow +\infty$ (thermodynamic
limit) and overdamped approximation for $\xi\rightarrow +\infty$
(strong friction limit). These general models show the connection
between previously considered models and offer a unifying framework to
study these systems. We have focused here on the case of
self-gravitating systems but most of our results also apply to the
problem of chemotaxis in biology. This will be specifically considered
in another paper where we discuss inertial models of bacterial
populations.
It should be emphasized that the Brownian model
(\ref{nb1})-(\ref{nb2}) contains the standard Hamiltonian model of
stellar dynamics \cite{bt} as a special case since the Langevin
equations reduce to the Hamilton equations for $\xi=D=0$. We expect
therefore to have different regimes depending on the value of the
parameters. To characterize these regimes properly, it is useful to
introduce different timescales: (i) the {\it dynamical time}
$t_{D}\sim 1/\sqrt{\rho G}$ (Kepler time) is the typical period of an
orbit or a typical free-fall time \cite{bt}; (ii) the {\it collisional
relaxation time} $t_{R}\sim (N/\ln N) t_{D}$ (Chandrasekhar time) is
the typical time it takes a stellar system (Hamiltonian) to relax to
the Boltzmann distribution $e^{-\beta m\epsilon}$ due to close
encounters. This relaxation is due to finite $N$ effects
\cite{chandras}; (iii) the {\it friction time} $t_{B}\sim \xi^{-1}$ (Kramers time) is the typical time it takes a Brownian system to thermalize, i.e., to
have its velocity distribution close to the Maxwellian $e^{-\beta m
v^{2}/2}$ \cite{risken}. This thermalization is due to the combined
effect of imposed friction and diffusion in the Langevin model
(\ref{nb1})-(\ref{nb2}). It is due to a thermal bath (of
non-gravitational origin), {\it not} to collisions (finite $N$
effects). We can now distinguish different cases:
(1) The case $t_{D}\ll t_{R} \ll t_{B}$ ($\xi\rightarrow 0$)
corresponds to Hamiltonian systems. For $t\ll t_{R}$, the system is
described by the Vlasov-Poisson system. There is first a phase of {\it
violent collisionless relaxation} on a timescale $\sim t_{D}$ leading
to a quasi-stationary state (QSS) in mechanical equilibrium. This is a
stable stationary solution of the Vlasov equation (on the
coarse-grained scale) that is usually not described by the Boltzmann
distribution. On a longer timescale $t_{R}\sim (N/\ln N)t_{D}$ the
encounters between stars (due to finite $N$ effects) have the tendency
to drive the system towards a statistical equilibrium state described
by the Boltzmann distribution. In reality, this process is hampered by
the escape of stars and the gravothermal catastrophe. The collisional
evolution of the system is described by the Landau-Poisson system
which is the $1/N$ correction to the Vlasov limit (it singles out the
Boltzmann distribution among all stationary solutions of the Vlasov
equation) \cite{chav}. In fact, due to the time scale separation
between the phase of violent relaxation (inertial effects) and the
phase of collisional relaxation (finite $N$ effects), we can consider
for intermediate times that the distribution function is a
quasi-stationary solution of the Vlasov equation of the form
$f=f(\epsilon,t)$ (for spherical systems) that slowly evolves under
the action of close encounters according to the orbit-averaged-Landau
equation (traditionally called orbit-averaged-Fokker-Planck
equation). This implies that the lifetime of the QSS is {\it long} as
it increases as a power of $N$. It {\it slowly} evolves under the
effect of encounters which act as a perturbation of order $1/N$ with
$N\gg 1$. Therefore, the system first reaches a state of mechanical
equilibrium (through violent relaxation) then a state of thermal
equilibrium (through stellar encounters). These different phases of
the dynamical evolution of Hamiltonian stellar systems have been
studied by astrophysicists for a long time \cite{bt}.
(2) The case $t_{B}\ll t_{D}\ll t_{R}$ ($\xi\rightarrow +\infty$)
corresponds to the overdamped limit of the Brownian model. The
velocities first relax towards the Maxwellian distribution on a
timescale $t_{B}\sim \xi^{-1}$ (due to the thermal bath) and the
density relaxes towards a state of mechanical equilibrium on a longer
timescale (Smoluchowski diffusive time). Therefore, the system first
reaches a state of thermal equilibrium (because of the terms of
friction and noise in the Langevin equations) then a state of
mechanical equilibrium (through inertial effects). This overdamped
regime, described by the Smoluchowski-Poisson system, has been studied
in our series of papers
\cite{prs,sc,lang,sic,chs,crrs,sich,sopik}.
(3) Finally, there is an interesting case $t_{D}\ll t_{B}\ll t_{R}$
that has not yet been studied. In that case, there is first a phase of
violent relaxation on a timescale $\sim t_{D}$ leading to a
quasi-stationary state (QSS) in mechanical equilibrium like in case
(1). This phase is followed by a thermalization leading to the
Boltzmann distribution on a timescale $t_{B}\sim \xi^{-1}$ due to the
thermal bath, i.e. the combined effect of imposed friction and
diffusion in the Langevin model (\ref{nb1})-(\ref{nb2}), {\it not} to
``collisions'' (finite $N$ effects) as in case (1). The first phase is
described by the Vlasov-Poisson system and the second phase by the
Kramers-Poisson system. For $\xi\rightarrow 0$ (but $\xi\gg (\ln
N/N)t_{D}^{-1}$) there is a time scale separation between the phase of
violent relaxation and the phase of Brownian relaxation. Similarly to
case (1), we can consider for intermediate times that the distribution
function is a quasi-stationary solution of the Vlasov equation of the
form $f=f(\epsilon,t)$ (for spherical systems) that slowly evolves
under the action of imposed friction and diffusion (thermal bath, not
collisions) according to the orbit-averaged-Kramers equation derived
in Sec. \ref{sec_oak}. Since the Brownian timescale $t_{B}\sim
\xi^{-1}$ is independent of $N$, this implies that the lifetime of the
QSS in this regime is independent of $N$. Furthermore, it is {\it
shorter} than in case (1) if $\xi\gg (\ln N/N)t_{D}^{-1}$. Therefore,
the system first reaches a state of mechanical equilibrium (through
violent relaxation) then a state of thermal equilibrium (through the
effect of imposed fluctuation and dissipation, i.e. the thermal
bath). This is the opposite situation to case (2). The study of the
orbit-averaged-Kramers equation will be considered in a future
work. Note that if $t_{B}$ and $t_{R}$ are comparable, one must take
into account simultaneously the effect of the thermal bath (friction
and random force) and the effect of collisions (finite $N$
effects). This is another interesting case. Finally, we stress that
these different regimes should be observed for other potentials of
interaction $u(|{\bf r}-{\bf r}'|)$ than the gravitational one
(e.g. for the HMF and BMF models \cite{cvb}). Kinetic theories of
Hamiltonian and Brownian particles with long-range interactions are
discussed in \cite{chav} at a general level.
\vskip1cm
{\bf Acknowledgements} We are grateful to the referees for useful remarks that helped to improve the presentation of the paper.
\section{Introduction}
The problem of simulating the dynamics of quantum systems was the original motivation for quantum computers \cite{Fey82} and remains one of their major potential applications. Although classical algorithms for this problem are inefficient, a significant fraction of the world's computing power today is spent in solving instances of this problem that arise in, e.g., quantum chemistry and materials science~\cite{OLCF13,NERSC13}. Furthermore, efficient classical algorithms for this problem are unlikely to exist: since the simulation problem is \textsf{BQP}-complete~\cite{Fey85}, an efficient classical algorithm for quantum simulation would imply an efficient classical algorithm for any problem with an efficient quantum algorithm (e.g., integer factorization \cite{Sho97}).
The first explicit quantum simulation algorithm, due to Lloyd \cite{Llo96}, gave a method for simulating Hamiltonians that are sums of local interaction terms. Aharonov and Ta-Shma gave an efficient simulation algorithm for the more general class of sparse Hamiltonians \cite{AT03}, and much subsequent work has given improved simulations \cite{Chi04,BACS07,WBHS11,Chi10,PQSV11,BC12,CW12,BCCKS14,BCCKS15}. Sparse Hamiltonians include most physically realistic Hamiltonians as a special case (making these algorithms potentially useful for simulating real-world systems). In addition, sparse Hamiltonian simulation can be used to design other quantum algorithms \cite{HHL09,CCDFGS03,CCJY09}.
For example, it was used to convert the algorithm for evaluating a balanced binary NAND tree with $n$ leaves \cite{FGG08} to the discrete-query model \cite{CCJY09}.
In the Hamiltonian simulation problem, we are given an $n$-qubit Hamiltonian $H$ (a Hermitian matrix of size $2^n \times 2^n$), an evolution time $t$, and a precision $\epsilon>0$, and are asked to implement the unitary operation $e^{-iHt}$
up to error at most $\epsilon$ (as quantified by the diamond norm distance).
That is, the task is to implement a unitary operation, rather than simply to generate \cite{AMRR11} or convert \cite{LMRSS11} a quantum state.
We say that $H$ is $d$-sparse if it has at most $d$ nonzero entries in any row. In the sparse Hamiltonian simulation problem, $H$ is specified by a black box that takes input $(j,\ell) \in [2^n] \times [d]$ (where $[d] := \{1,\ldots,d\}$) and outputs the location and value of the $\ell$th nonzero entry in the $j$th row of $H$.
Specifically, as in \cite{BC12}, we assume access to an oracle $O_H$ acting as
\begin{equation}
O_H\ket{j,k,z} = \ket{j,k,z \oplus{H_{jk}}}
\label{eq:oracleh}
\end{equation}
for $j,k \in [2^n]$ and bit strings $z$ representing entries of $H$, and another oracle $O_F$ acting as
\begin{equation}
O_F\ket{j,\ell} = \ket{j,f(j,\ell)},
\label{eq:oraclef}
\end{equation}
where $f(j,\ell)\colon [2^n] \times [d] \to [2^n]$ is a function giving the column index of the $\ell$th nonzero element in row $j$. Note that the form of $O_F$ assumes that the locations of the nonzero entries of $H$ can be computed in place. This is possible if we can efficiently compute both $(j,\ell) \mapsto f(j,\ell)$ and the reverse map $(j,f(j,\ell)) \mapsto \ell$, which holds in typical applications of sparse Hamiltonian simulation. Alternatively, if $f$ provides the nonzero elements in order, we can compute the reverse map with only a $\log d$ overhead by binary search.
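As a classical illustration of this remark (ours; the sparsity pattern below is hypothetical), the map $f$ and its reverse map via binary search can be sketched as follows:
\begin{verbatim}
import bisect

rows = {0: [2, 5, 7], 1: [0, 3, 6]}   # hypothetical sparsity pattern

def f(j, l):               # (j, l) -> column of l-th nonzero entry
    return rows[j][l - 1]  # l in {1, ..., d}, columns in order

def f_inverse(j, col):     # (j, f(j, l)) -> l by binary search
    return bisect.bisect_left(rows[j], col) + 1

print(f(0, 2), f_inverse(0, 5))       # 5 2
\end{verbatim}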
At present, the best algorithms for sparse Hamiltonian simulation, in terms of query complexity (i.e., the number of queries made to the oracles) and number of 2-qubit gates used, are one based on a Szegedy quantum walk \cite{Chi10,BC12} and another based on simulating an unconventional model of query complexity called the fractional-query model
\cite{BCCKS14}.
An algorithm with similar complexity to \cite{BCCKS14} is based on implementing a Taylor series of the exponential \cite{BCCKS15}.
The quantum walk approach has query complexity $O(d\norm{H}_{\max}t/\sqrt\epsilon)$, which is linear in both the sparsity $d$ and the evolution time $t$. (Here $\norm{H}_{\max}$ denotes the largest entry of $H$ in absolute value.) However, this approach has poor dependence on the allowed error $\epsilon$. In contrast, the fractional-query approach has query complexity $O\big(d\tau \frac{\log(\tau/\epsilon)}{\log\log(\tau/\epsilon)}\big)$, where $\tau := d\norm{H}_{\max}t$. This approach gives exponentially better dependence on the error at the expense of quadratically worse dependence on the sparsity.
Considering the fundamental importance of quantum simulation, it is desirable to have a method that achieves the best features of both approaches.
In this work, we combine the two approaches, giving the following.
\begin{theorem}
\label{thm:upper}
A $d$-sparse Hamiltonian $H$ acting on $n$ qubits can be simulated for time $t$ within error $\epsilon$ with
\begin{equation}
\label{eq:upper}
O\left( \tau \frac{\log(\tau/\epsilon)}{\log\log(\tau/\epsilon)}\right)
\end{equation}
queries and
\begin{equation}
O\left( \tau [n + \log^{5/2}(\tau/\epsilon)] \frac{\log(\tau/\epsilon)}{\log\log(\tau/\epsilon)}\right) \end{equation}
additional 2-qubit gates, where $\tau := d \norm{H}_{\max} t$.
\end{theorem}
This result provides a strict improvement over the query complexity of \cite{BCCKS14,BCCKS15}, removing a factor of $d$ in $\tau$, and thus providing near-linear instead of superquadratic dependence on $d$.
We also prove a lower bound showing that any algorithm must use $\Omega(\tau)$ queries. While a lower bound of $\Omega(t)$ was known previously \cite{BACS07}, our new lower bound shows that the complexity must be at least linear in the product of the sparsity and the evolution time. Our proof is similar to a previous limitation on the ability of quantum computers to simulate non-sparse Hamiltonians \cite{CK10}: by replacing each edge in the graph of the Hamiltonian by a complete bipartite graph $K_{d,d}$, we effectively boost the strength of the Hamiltonian by a factor of $d$ at the cost of making the matrix less sparse by a factor of $d$. Combining this result with the error-dependent lower bound of \cite{BCCKS14}, we find a lower bound as follows.
\begin{theorem}
\label{thm:lower}
For any $\epsilon,t>0$, integer $d\ge 2$, and fixed value of $\norm{H}_{\max}$, there exists a $d$-sparse Hamiltonian $H$ such that simulating $H$ for time $t$ with precision $\epsilon$ has query complexity
\begin{equation}
\label{eq:lower}
\Omega \left( \tau + \frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\right).
\end{equation}
\end{theorem}
Thus our result is near-optimal for the scaling in either $\tau$ or $\epsilon$ on its own. However, our upper bound \eq{upper} has a product, whereas the lower bound \eq{lower} has a sum. It remains an open question how to close the gap between these bounds. Intriguingly, a slight modification of our technique gives another algorithm with the following complexity.
\begin{theorem}
\label{thm:tradeoff}
For any $\alpha\in(0,1]$, a $d$-sparse Hamiltonian $H$ acting on $n$ qubits can be simulated for time $t$ within error $\epsilon$ with
query complexity
\begin{equation}
\label{eq:finalresult2} O\bigl( \tau^{1+\alpha/2} + \tau^{1-\alpha/2} \log(1/\epsilon)\bigr).
\end{equation}
\end{theorem}
This result provides a nontrivial tradeoff between the parameters $t$, $d$, and $\epsilon$, and suggests that further improvements to such tradeoffs may be possible.
We now informally describe the key idea behind our algorithms.
For simplicity, suppose that the entries of the Hamiltonian are small, satisfying $\norm{H}_{\max} \leq 1/d$, and $t=1$.
Previous work on Hamiltonian simulation \cite{Chi10,BC12} has shown that using a constant number of queries, we can construct a unitary $U$ whose top-left block (in some basis) is exactly $e^{-i\arcsin(H)}$.
Technical difficulties aside, the essential problem is to implement the unitary $e^{-iH}$ given the ability to perform $e^{-i\arcsin(H)}$.
While it is not clear how to express $e^{-iH}$ as a product of easy-to-implement unitaries and $e^{-i\arcsin(H)}$, it can be approximated by a linear combination of powers of $e^{-i\arcsin(H)}$.
Although such a decomposition may not seem natural, we show that nevertheless it leads to an efficient implementation.
In the next section we present a more technical overview of this high-level idea. In \sec{analysis} we analyze and prove the correctness of our algorithms. \sec{lower} proves the lower bound presented in \thm{lower} and we conclude with some discussion in \sec{disc}.
\section{Overview of algorithms}
Our algorithm uses a Szegedy quantum walk as in \cite{Chi10,BC12}, but with a linear combination of different numbers of steps. Such an operation can be implemented using the techniques that were developed to simulate the fractional-query model \cite{BCCKS14}. This allows us to introduce a desired phase more accurately than with the phase estimation approach of \cite{Chi10,BC12}. As in \cite{BCCKS14}, we first implement the approximated evolution for some time interval with some amplitude and then use oblivious amplitude amplification to make the implementation deterministic, facilitating simulations for longer times.
In the rest of this section, we describe the approach in more detail.
References~\cite{Chi10,BC12} define a quantum walk step $U$ that depends on the Hamiltonian $H$ to be simulated.
In turn, this quantum walk step is based on a state preparation procedure that only requires one call to the sparse Hamiltonian oracle, avoiding the need to decompose $H$ into a sum of terms as in product-formula approaches.
Two copies of the Hilbert space acted on by $H$ are used.
First, the initial state is in one of these Hilbert spaces.
Then, the state preparation procedure is used to map the initial state onto the joint Hilbert space.
This state preparation acts on the second copy of the Hilbert space, controlled by the state in the first Hilbert space.
The quantum walk steps take place in this joint Hilbert space.
Finally, the controlled state preparation is inverted to map the final state back to the first Hilbert space.
In the controlled state preparation, each eigenstate of $H$ is mapped onto a superposition of two eigenstates $\ket{\mu_\pm}$ of the quantum walk step $U$.
The precise definition of $U$ is not needed here; for our application, it suffices to observe that the
eigenvalues $\mu_\pm$ of $U$ are related to the eigenvalues $\lambda$ of $H$ via
\begin{equation}
\label{eq:mulam}
\mu_\pm = \pm e^{\pm i\arcsin(\lambda/Xd)},
\end{equation}
where $X\ge \norm{H}_{\max}$ is a parameter that can be increased to make the steps of the quantum walk closer to the identity.
For small $\lambda/Xd$, the steps of the quantum walk yield a phase factor that is nearly proportional to that for the Hamiltonian evolution.
However, the phase deviates from the desired value since the function $\arcsin\nu$ is not precisely linear about $\nu=0$.
Also, there are two eigenvalues $\mu_\pm$, and in previous approaches it was necessary to distinguish between these to approximate Hamiltonian evolution~\cite{Chi10,BC12}.
In contrast, for the new technique we present here it is not necessary to distinguish the eigenspaces.
An obvious way to increase the accuracy is to increase $X$ above its minimum value of $\norm{H}_{\max}$. However, the number of steps of the quantum walk is $O(tXd)$, so increasing $X$ results in a less efficient simulation. Another approach is to use phase estimation to correct the phase factor \cite{Chi10,BC12}, but this approach still gives polynomial dependence on $1/\epsilon$.
Instead, we propose using a superposition of steps of the quantum walk to effectively linearize the $\arcsin$ function. Specifically, rather than applying $U$, we apply
\begin{equation} \label{eq:super}
V_k := \sum_{m=-k}^{k} a_m U^{m}
\end{equation}
for some coefficients $a_{-k},\ldots,a_k$.
We show that the coefficients can be chosen by considering the generating function for the Bessel function \cite[9.1.41]{AS64},
\begin{equation}
\label{eq:generat}
\sum_{m=-\infty}^{\infty} J_m(\seg) \mu_\pm^{m} = \exp\left[ \frac \seg 2 \left( \mu_\pm - \frac 1{\mu_\pm} \right) \right] = e^{i\lambda \seg/Xd},
\end{equation}
where the second equality follows from \eq{mulam}.
Because the right-hand side does not depend on whether the eigenvalue of $U$ is $\mu_+$ or $\mu_-$, there is no need to distinguish the eigenspaces.
Thus the ability to perform the operation
\begin{equation}
\label{eq:infinite}
\sum_{m=-\infty}^\infty J_m(\seg) U^{m}
\end{equation}
would allow us to exactly implement the evolution under $H$ for time $-\seg/Xd$.
Because of the minus sign, we will take $\seg$ to be negative to obtain positive time.
By truncating the sum in \eq{infinite} to some finite range $\{-k,\ldots,k\}$, we obtain an expression in which each term can be performed using at most $k$ queries. Because the Bessel function falls off exponentially for large $|m|$, we can obtain error at most $\epsilon$ with a cutoff $k$ that is only logarithmic in $1/\epsilon$.
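The following Python sketch (our numerical illustration, with
hypothetical values of $\seg$ and $\nu:=\lambda/Xd$) checks both the
identity \eq{generat} and the rapid convergence of the truncation of
\eq{infinite}, simultaneously for the two branches $\mu_\pm$:
\begin{verbatim}
import numpy as np
from scipy.special import jv

z, nu = -1.0, 0.3                         # z < 0 gives positive time
mu_plus = np.sqrt(1 - nu**2) + 1j * nu    # e^{+i arcsin(nu)}
mu_minus = -np.sqrt(1 - nu**2) + 1j * nu  # -e^{-i arcsin(nu)}
target = np.exp(1j * nu * z)

for k in [2, 4, 8, 16]:
    m = np.arange(-k, k + 1)
    for mu in (mu_plus, mu_minus):
        approx = np.sum(jv(m, z) * mu**m)
        print(k, abs(approx - target))    # error falls off rapidly in k
\end{verbatim}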
A linear combination of unitaries (LCU) such as \eq{super} can be implemented using the LCU Lemma (\lem{approxV}) described in the next section.
The high-level intuition for the procedure is as follows.
We prepare ancilla qubits in a superposition encoding the coefficients of the linear combination and then perform the unitary operations of the linear combination in superposition, controlled by the ancilla.
One could then obtain $V_k$ by postselecting on an appropriate ancilla state.
Instead, to obtain $V_k$ deterministically, we apply the oblivious amplitude amplification procedure introduced in \cite{BCCKS14}.
Rather than using $V_k$ to implement evolution over the entire time, we break the time up into shorter time steps we call ``segments'' (named by analogy to the segments used in \cite{BCCKS14}) and use $V_k$ to achieve the time evolution for each segment.
The complexity of our algorithm is the number of segments ($tXd/|\seg|$) times the complexity for each segment ($k$) times the number of steps needed for oblivious amplitude amplification ($a$).
We have some freedom in choosing $\seg$, which controls the amount of evolution time simulated by each segment.
To obtain near-linear dependence on the evolution time $t$, we choose $\seg=O(1)$.
Then amplitude amplification requires $O(1)$ steps, and the number of segments needed is $O(\tau)$, giving the linear factor in \eq{upper}.
The value of $k$ needed to achieve overall error at most $\epsilon$ is logarithmic in $\tau/\epsilon$, yielding the logarithmic factor in \eq{upper}.
An alternative approach is to use a larger segment that scales with $\tau$.
Choosing $\seg=-\tau^\alpha$ for $\alpha\in(0,1]$, we need $k = O(\tau^\alpha+\log(1/\epsilon))$.
Then we require $O(\tau^{1-\alpha})$ segments and $O(\tau^{\alpha/2})$ steps of amplitude amplification, giving the scaling presented in \thm{tradeoff}.
\section{Analysis of algorithms}
\label{sec:analysis}
\subsection{A quantum walk for any Hamiltonian}
We begin by reviewing the quantum walk defined in \cite{Chi10,BCCKS14}. Given a Hamiltonian $H$ acting on $\CC^N$ (where $N:=2^n$), the Hilbert space is expanded to $\CC^{2N}\otimes\CC^{2N}$.
First, an ancilla qubit in the state $\ket{0}$ is appended, which expands the space from $\CC^N$ to $\CC^{2N}$.
Then the entire Hilbert space is duplicated, giving $\CC^{2N}\otimes\CC^{2N}$.
This is achieved using the isometry
\begin{equation}
T:=\sum_{j=0}^{N-1} \sum_{b\in\{0,1\}} (\ket{j}\bra{j}\otimes \ket{b}\bra{b}) \otimes \ket{\varphi_{jb}}
\end{equation}
with $\ket{\varphi_{j1}}=\ket{0}\ket{1}$ and
\begin{equation}
\label{eq:state}
\ket{\varphi_{j0}} := \frac{1}{\sqrt d} \sum_{\ell\in F_j} \ket{\ell} \Biggl( \sqrt{\frac{H^*_{j\ell}}{X}}\ket{0}+\sqrt{1-\frac{|H^*_{j\ell}|}{X}}\ket{1}\Biggr),
\end{equation}
where $X\ge \norm{H}_{\max}$ and $F_j$ is the set of indices of nonzero elements in column $j$ of $H$.
Here we use the convention that the first subsystem is the original space, the next is the ancilla qubit, and the third and fourth subsystems are the duplicated space and duplicated ancilla qubit, respectively.
This operation can be viewed as a controlled state preparation, creating state $\ket{\varphi_{j0}}$ on input $\ket{j}\ket{0}$.
If the ancilla qubit is in the state $\ket{1}$, then $\ket{0}\ket{1}$ is prepared.
Starting with the initial space, the controlled state preparation is performed, and then steps of the quantum walk are applied using the unitary
\begin{equation}
U := iS(2TT^\dagger -\openone),
\end{equation}
where $S$ swaps the two registers (i.e., $S\ket{j_1}\ket{j_2}\ket{\ell_1}\ket{\ell_2}=\ket{\ell_1}\ket{\ell_2}\ket{j_1}\ket{j_2}$ for all $j_1,\ell_1 \in [N]$, $j_2,\ell_2\in\{0,1\}$).
Finally, the inverse state preparation $T^\dagger$ is performed.
For a successful simulation, the output should lie in the original space, and the ancilla should be returned to the state $\ket{0}$.
Let $\lambda$ be the eigenvalue of $H$ with eigenstate $\ket{\lambda}$, and let $\nu:=\lambda/Xd$ be the corresponding scaled eigenvalue for the quantum walk. The steps of the quantum walk $U$ satisfy $U\ket{\mu_\pm} = \mu_\pm\ket{\mu_\pm}$ \cite{Chi10} with
\begin{align}
\ket{\mu_\pm} &:= (T+ i\mu_\pm ST)\ket{\lambda},\\
\label{eq:munueq}
\mu_\pm &:= \pm\sqrt{1-\nu^2} + i\nu = \pm e^{\pm i\arcsin\nu}.
\end{align}
To apply the steps of the quantum walk to approximate Hamiltonian evolution, there are two challenges: we must handle both the $\ket{\mu_+}$ and $\ket{\mu_-}$ sectors, and correct the applied phase.
In this work we are able to solve both these challenges at once by using a superposition of steps of the quantum walk.
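The eigenvalue relation \eq{munueq} can also be verified numerically. The
following Python sketch (our illustration, not an efficient
implementation) builds $T$, $S$, and $U$ explicitly for a small dense
Hermitian $H$ (so that $d=N$ and $F_j$ contains all column indices) and
checks that $(T+i\mu_\pm ST)\ket{\lambda}$ are eigenvectors of $U$ with
eigenvalues $\mu_\pm$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
N = 3
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (A + A.conj().T) / 2
d, X = N, 2 * np.max(np.abs(H))

# Isometry T: |j,b> -> |j,b>|phi_{jb}>, of shape (2N)^2 x 2N;
# the basis index of |j>|b> is 2j + b.
T = np.zeros(((2 * N)**2, 2 * N), dtype=complex)
for j in range(N):
    phi = np.zeros(2 * N, dtype=complex)
    for l in range(N):
        phi[2 * l] = np.sqrt(H[j, l].conj() / X) / np.sqrt(d)
        phi[2 * l + 1] = np.sqrt(1 - abs(H[j, l]) / X) / np.sqrt(d)
    T[:, 2 * j] = np.kron(np.eye(2 * N)[2 * j], phi)
    T[:, 2 * j + 1] = np.kron(np.eye(2 * N)[2 * j + 1],
                              np.eye(2 * N)[1])     # |0>|1>

S = np.zeros(((2 * N)**2, (2 * N)**2))              # register swap
for a in range(2 * N):
    for b in range(2 * N):
        S[b * 2 * N + a, a * 2 * N + b] = 1

U = 1j * S @ (2 * T @ T.conj().T - np.eye((2 * N)**2))

evals, evecs = np.linalg.eigh(H)
for lam, vec in zip(evals, evecs.T):
    nu = lam / (X * d)
    v = np.kron(vec, np.eye(2)[0])                  # |lambda>|0>
    for sign in (+1, -1):
        mu = sign * np.exp(sign * 1j * np.arcsin(nu))
        psi = T @ v + 1j * mu * (S @ T @ v)
        print(np.linalg.norm(U @ psi - mu * psi))   # ~ 1e-15
\end{verbatim}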
\subsection{Linear combination of unitaries}
We now describe how to perform a linear combination of unitary operations.
Given an $M$-tuple of unitary operations $\vecU = (U_1,\ldots,U_M)$, we quantify the complexity of implementing a linear combination of the $U_m$s in terms of the number of invocations of
\begin{equation}
\label{eq:controlled}
\sel(\vecU) := \sum_{m=1}^M \ket{m}\bra{m} \otimes U_{m}.
\end{equation}
Such a result was previously given in \cite{BCCKS15,Kot14}.
Here we formalize that result and generalize to allow more steps of oblivious amplitude amplification.
The overall result is as given in the following lemma.
\begin{lemma}[LCU Lemma]\label{lem:approxV}
Let $\vecU = (U_1,\ldots,U_M)$ be unitary operations and let $\tilde V = \sum_{m=1}^M a_m U_m$ be $\delta$-close to a unitary.
We can approximate $\tilde V$ to within $O(\delta)$ using $O(a)$ $\sel(\vecU)$ and $\sel(\vecU^\dag)$ operations and $O(Ma)$ additional 2-qubit gates, where $a:=\sum_{m=1}^M |a_m|$.
\end{lemma}
To prove this result, we first consider an operation that would give $\tilde V$ with postselection, then apply oblivious amplitude amplification to achieve it deterministically.
The operation that provides $\tilde V$ with postselection is described in the following lemma.
\begin{lemma}\label{lem:sup}
Let $\vecU = (U_1,\ldots,U_M)$ be unitary operations acting on a Hilbert space $\mathcal{H}_2$, let $\tilde V = \sum_{m=1}^M a_m U_m$,
and let $s\ge\sum_{m=1}^M |a_m|$ be a real number.
Define a Hilbert space $\mathcal{H}_1$ to be a tensor product of a qubit and a subspace of dimension $M$, and let $\ket{\zer} := \ket{0}\ket{0} \in\mathcal{H}_1$.
Then there exists a unitary operation $W$ acting on $\mathcal{H}_1 \otimes \mathcal{H}_2$ such that
$Z = \frac 1s (\ket{\zer}\bra{\zer}\otimes \tilde{V})$, with $Z:=PWP$, $P := \ket{\zer}\bra{\zer} \otimes \id$.
The operation $W$ can be applied with $O(1)$ $\sel(\vecU)$ and $\sel(\vecU^\dag)$ operations and $O(M)$ additional 2-qubit gates.
\end{lemma}
\begin{proof}
To implement $W$, we first apply an operation that rotates the ancilla register from $\ket\zer$ to the state
\begin{equation}
\ket{\chi} = \left( \sqrt{\frac as}\ket 0+\sqrt{1-\frac as}\ket 1 \right) \otimes \frac{1}{\sqrt{a}} \sum_{m=1}^M \sqrt{a_m}\ket{m},
\end{equation}
where $a:=\sum_{m=1}^M |a_m|$.
This state is of dimension $2M$ and can be prepared from state $\ket{\zer}$ using $O(M)$ operations (which is trivial for $\ket{m}$ encoded in unary).
Next we perform the controlled operation $\sel(\vecU)$.
Finally, inverting the preparation of $\ket{\chi}$ and projecting onto $\ket{\zer}$ would effectively project the ancilla onto $\ket{\chi}$.
Then the unnormalized operation on $\mathcal{H}_2$ is $\tilde V/s$, corresponding to $Z$.
The action of applying the unitary operation to prepare $\ket{\chi}$, the controlled operation $\sel(\vecU)$, and the inverse preparation
gives the desired operation $W$.
\end{proof}
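The following Python sketch (our numerical illustration) instantiates \lem{sup} with random $2\times 2$ unitaries; for simplicity it takes $s=\sum_m a_m$ with $a_m>0$, so the extra qubit in $\mathcal{H}_1$ is not needed:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
M, dim = 3, 2
U = [np.linalg.qr(rng.normal(size=(dim, dim))
                  + 1j * rng.normal(size=(dim, dim)))[0]
     for _ in range(M)]
a = np.array([0.5, 0.3, 0.2])
s = a.sum()

chi = np.sqrt(a / s)                 # B|0> = sum_m sqrt(a_m/s)|m>
B = np.linalg.qr(np.column_stack(
        [chi, rng.normal(size=(M, M - 1))]))[0]
B = B * np.sign(B[0, 0] * chi[0])    # fix sign so B[:,0] = chi

sel = np.zeros((M * dim, M * dim), dtype=complex)
for m in range(M):                   # sel(U) = sum_m |m><m| x U_m
    sel[m*dim:(m+1)*dim, m*dim:(m+1)*dim] = U[m]

W = np.kron(B.conj().T, np.eye(dim)) @ sel @ np.kron(B, np.eye(dim))
V_tilde = sum(am * Um for am, Um in zip(a, U))
print(np.linalg.norm(W[:dim, :dim] - V_tilde / s))  # ~ 1e-16
\end{verbatim}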
Next we provide a multi-step version of robust amplitude amplification, generalizing the single-step version presented in \cite{BCCKS15}.
In this lemma, and throughout this paper, $\norm{\cdot}$ denotes the spectral norm.
\begin{lemma}[Robust oblivious amplitude amplification]\label{lem:roaa}
Let $W$ be a unitary matrix acting on $\mathcal{H}_1 \otimes \mathcal{H}_2$ and let $P$ be the projector onto the subspace whose first register is $\ket{\zer} := \ket{0}\ket{0} \in \mathcal{H}_1$, i.e., $P := \ket{\zer}\bra{\zer} \otimes \id$. Furthermore let $Z:=PWP$ satisfy $Z = \frac 1s (\ket{\zer}\bra{\zer}\otimes \tilde{V})$, where $\tilde{V}$ is $\delta$-close to a unitary matrix and $\sin\bigl(\frac{\pi}{2(2\iters+1)}\bigr)= \frac 1s$ for some $\iters \in \mathbb{N}$, and let $R:=-W(\id-2P)W^\dag(\id-2P)$. Then
\begin{equation}
\label{eq:roaa}
\norm{PR^\iters WP - (\ket{\zer}\bra{\zer} \otimes \tilde{V})} = O(\delta).
\end{equation}
\end{lemma}
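Before turning to the proof, here is a scalar sanity check (our
illustration): taking $\mathcal{H}_2$ one-dimensional with $\tilde V=1$,
$W$ is a $2\times 2$ rotation with $\bra{\zer}W\ket{\zer}=1/s$, and the
amplitude $\bra{\zer}R^{\iters}W\ket{\zer}$ equals
$\sin[(2\iters+1)\arcsin(1/s)]=1$ precisely when
$\sin\bigl(\frac{\pi}{2(2\iters+1)}\bigr)=1/s$:
\begin{verbatim}
import numpy as np

l = 2                                    # number of iterations
s = 1 / np.sin(np.pi / (2 * (2 * l + 1)))
c = np.sqrt(1 - 1 / s**2)
W = np.array([[1 / s, -c], [c, 1 / s]])  # a unitary completion of Z
P = np.diag([1.0, 0.0])
refl = np.eye(2) - 2 * P
R = -W @ refl @ W.T @ refl
print((np.linalg.matrix_power(R, l) @ W)[0, 0])  # 1.0 up to rounding
\end{verbatim}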
\begin{proof}
We start by considering a single iteration, as in \cite{BCCKS15}. Then we have
\begin{align}
RWP
&= -W(\openone-2P)W^\dagger(\openone-2P)WP \nn
&= -WP + 2WP + 2PWP -4WPW^\dagger PWP \nn
&= WP+2 Z -4W Z^\dagger Z.
\end{align}
Multiplying by $P$ on the left gives
\begin{align}
PRWP = -PW(\openone-2P)W^\dagger(\openone-2P)WP = 3 Z -4 Z Z^\dagger Z,
\end{align}
which matches the expression in \cite{BCCKS15}.
The general solution after $m$ iterations is
\begin{align}
\label{eq:sol}
R^m WP = (WP- Z)\frac{T_{2m+1}(\sqrt{1- Z^\dagger Z})}{\sqrt{1- Z^\dagger Z}} + Z\frac{(-1)^m T_{2m+1}(\sqrt{ Z^\dagger Z})}{\sqrt{ Z^\dagger Z}},
\end{align}
where $T_{2m+1}$ are Chebyshev polynomials of the first kind.
(Because Chebyshev polynomials for odd order only include odd powers, no square roots appear when \eq{sol} is expanded.)
We establish \eq{sol} by induction. First note that it holds for $m=0$, because $T_1(x)=x$, so the right-hand side evaluates to $WP$.
Next assume that it holds for a given $m$.
It is straightforward to show that
\begin{align}
R(WP-Z) &= (WP- Z)(1-2 Z^\dagger Z) +2 Z(1- Z^\dagger Z) \quad \text{and} \nn
R Z &= (WP- Z)(-2 Z^\dagger Z) + Z(1-2 Z^\dagger Z).
\end{align}
Hence, multiplying both sides of \eqref{eq:sol} by $R$, we get
\begin{align}
\label{eq:hard}
R^{m+1}WP &= R \left[(WP- Z)\frac{T_{2m+1}(\sqrt{1- Z^\dagger Z})}{\sqrt{1- Z^\dagger Z}} + Z\frac{(-1)^m T_{2m+1}(\sqrt{ Z^\dagger Z})}{\sqrt{ Z^\dagger Z}}\right] \nn
&= (WP- Z)\left[(1-2 Z^\dagger Z)\frac{T_{2m+1}(\sqrt{1- Z^\dagger Z})}{\sqrt{1- Z^\dagger Z}} -2 Z^\dagger Z\frac{(-1)^m T_{2m+1}(\sqrt{ Z^\dagger Z})}{\sqrt{ Z^\dagger Z}}\right] \nn
&\quad + Z\left[ 2(1- Z^\dagger Z)\frac{T_{2m+1}(\sqrt{1- Z^\dagger Z})}{\sqrt{1- Z^\dagger Z}} + (1-2 Z^\dagger Z)\frac{(-1)^m T_{2m+1}(\sqrt{ Z^\dagger Z})}{\sqrt{ Z^\dagger Z}} \right].
\end{align}
To progress further, we use the relation \cite[22.3.15]{AS64}
\begin{equation}
T_{2m+1}(x) = \cos[(2m+1) \arccos x] = (-1)^m\sin[(2m+1)\arcsin x].
\end{equation}
Using this we find that, with $x=\sin\theta$,
\begin{align}
&(1-2x^2)\frac{T_{2m+1}(\sqrt{1-x^2})}{\sqrt{1-x^2}}-2 x^2\frac{(-1)^m T_{2m+1}(x)}{x} \nn
&\quad=(\cos^2\theta-\sin^2\theta)\frac{\cos[(2m+1)\theta]}{\cos\theta}-2 \sin\theta \sin[(2m+1)\theta] \nn
&\quad=\frac{\cos (2\theta)\cos[(2m+1)\theta]-\sin(2\theta) \sin[(2m+1)\theta]}{\cos\theta} \nn
&\quad=\frac{\cos[(2m+3)\theta]}{\cos\theta} \nn
&\quad=\frac{T_{2m+3}(\sqrt{1-x^2})}{\sqrt{1-x^2}}.
\end{align}
Next put $x=\cos\phi$ to obtain
\begin{align}
&2(1-x^2)\frac{T_{2m+1}(\sqrt{1-x^2})}{\sqrt{1-x^2}}+(1-2 x^2)\frac{(-1)^m T_{2m+1}(x)}{x} \nn
&\quad=2(\sin^2\phi)\frac{(-1)^m\sin[(2m+1)\phi]}{\sin\phi} -(\cos^2\phi-\sin^2\phi)\frac{(-1)^m \cos[(2m+1)\phi]}{\cos\phi} \nn
&\quad=(-1)^m\frac{\sin(2\phi)\sin[(2m+1)\phi]-\cos(2\phi) \cos[(2m+1)\phi]}{\cos\phi} \nn
&\quad=(-1)^{m+1}\frac{\cos[(2m+3)\phi]}{\cos\phi} \nn
&\quad=(-1)^{m+1}\frac{T_{2m+3}(x)}{x}.
\end{align}
Using these relations, \eq{hard} simplifies to
\begin{align}
R^{m+1}WP =
(WP- Z)\frac{T_{2m+3}(\sqrt{1- Z^\dagger Z})}{\sqrt{1- Z^\dagger Z}}+ Z\frac{(-1)^{m+1} T_{2m+3}(\sqrt{ Z^\dagger Z})}{\sqrt{ Z^\dagger Z}}.
\end{align}
Hence, if \eq{sol} is correct for a non-negative integer $m$, it is also correct for $m+1$, and by induction it is correct for all non-negative integers $m$.
Multiplying \eq{sol} on the left by $P$, and noting that $P(WP-Z)=PWP-Z=0$, we obtain
\begin{align}
\label{eq:sol2}
PR^m WP = Z\frac{(-1)^m T_{2m+1}(\sqrt{ Z^\dagger Z})}{\sqrt{ Z^\dagger Z}}.
\end{align}
Then, in the case that $\delta=0$, i.e., if $\tilde{V}$ is equal to a unitary $V$, we would have
\begin{equation}
Z=\frac 1s (\ket{\zer}\bra{\zer}\otimes V).
\end{equation}
Then $Z^\dagger Z=(\ket{\zer}\bra{\zer}\otimes \openone)/s^2$, and we get
\begin{align}
Z\frac{(-1)^m T_{2m+1}(\sqrt{ Z^\dagger Z})}{\sqrt{ Z^\dagger Z}} &= \frac 1s (\ket{\zer}\bra{\zer}\otimes V) \frac{(-1)^m T_{2m+1}(1/s)}{1/s} \nn
&= (\ket{\zer}\bra{\zer}\otimes V)(-1)^m T_{2m+1}(1/s) \nn
&= (\ket{\zer}\bra{\zer}\otimes V)\sin[(2m+1)\arcsin(1/s)].
\end{align}
In the case $m=\iters$, we can use $\sin\bigl(\frac{\pi}{2(2\iters+1)}\bigr)= \frac 1s$ to obtain
$\sin[(2\iters+1)\arcsin(1/s)]=1$, which implies
\begin{equation}
PR^\iters WP = \ket{\zer}\bra{\zer}\otimes V.
\end{equation}
Next, consider the case where $\tilde{V}$ is only $\delta$-close to being unitary. Let us define
\begin{equation}
\Delta := \sqrt{\tilde V^\dagger \tilde V} - \openone.
\end{equation}
We immediately obtain $\norm{\Delta}=O(\delta)$, and
\begin{equation}
\frac 1s(\ket{\zer}\bra{\zer}\otimes\Delta) = \sqrt{ Z^\dagger Z} - \frac 1s( \ket{\zer}\bra{\zer}\otimes\openone).
\end{equation}
We then get
\begin{align}
Z\frac{(-1)^\iters T_{2\iters+1}(\sqrt{ Z^\dagger Z})}{\sqrt{ Z^\dagger Z}} &= \frac 1s (\ket{\zer}\bra{\zer}\otimes \tilde V) \frac{(-1)^\iters T_{2\iters+1}((\openone+\Delta)/s)}{(\openone+\Delta)/s} \nn
&=(\ket{\zer}\bra{\zer}\otimes \tilde V) \frac{(-1)^\iters T_{2\iters+1}((\openone+\Delta)/s)}{\openone+\Delta}.
\end{align}
Using $(-1)^\iters T_{2\iters+1}(x)=\sin[(2\iters+1)\arcsin x]$ and $(-1)^\iters T_{2\iters+1}(1/s)=1$, we obtain
\begin{equation}
\| (-1)^\iters T_{2\iters+1}((\openone+\Delta)/s) - \openone \| = O(\iters^2\delta^2/s^2).
\end{equation}
Since $\iters = \Theta(s)$, $\iters^2/s^2=O(1)$, which implies
\begin{equation}
\| (-1)^\iters T_{2\iters+1}((\openone+\Delta)/s) - \openone \| = O(\delta^2).
\end{equation}
The contribution to the error from $\openone+\Delta$ is $O(\delta)$, so we have
\begin{align}
\left\| Z\frac{(-1)^\iters T_{2\iters+1}(\sqrt{ Z^\dagger Z})}{\sqrt{ Z^\dagger Z}} - (\ket{\zer}\bra{\zer}\otimes \tilde V) \right\| = O(\delta).
\end{align}
Using \eq{sol2} we then get \eq{roaa} as required.
\end{proof}
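\lem{roaa} is also easy to check numerically in the exactly-unitary case $\delta=0$. The following sketch (ours) builds a generic one-qubit-ancilla dilation $W$ satisfying the hypothesis $PWP=\frac 1s\ket{0}\bra{0}\otimes V$ directly, as a stand-in for the LCU circuit, and verifies that $\iters$ iterations of $R$ recover $V$ to machine precision.
\begin{verbatim}
import numpy as np
from scipy.stats import unitary_group

ell = 2                                        # number of iterations
s = 1.0 / np.sin(np.pi / (2 * (2*ell + 1)))    # sin(pi/(2(2l+1))) = 1/s
dim = 4
V = unitary_group.rvs(dim, random_state=1)

c, r = 1.0/s, np.sqrt(1.0 - 1.0/s**2)
W = np.block([[c*V, -r*V],                     # a unitary dilation with
              [r*V,  c*V]])                    # <0|W|0> = V/s
P = np.kron(np.diag([1.0, 0.0]), np.eye(dim))  # projector onto ancilla |0>
I = np.eye(2*dim)
R = -W @ (I - 2*P) @ W.conj().T @ (I - 2*P)

amp = np.linalg.matrix_power(R, ell) @ W
print(np.linalg.norm(P @ amp @ P - np.kron(np.diag([1.0, 0.0]), V)))  # ~1e-15
\end{verbatim}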
\lem{roaa} is in terms of the spectral norm distance, but the diamond norm distance is at most a constant factor larger.
The specific result, proven in the Appendix, is as follows.
\begin{lemma}
\label{lem:diamond}
Let $U$, $V$ be operators satisfying $\|U\|\le 1$ and $\|V\|\le 1$, and let $\|\cdot\|_\diamond$ denote the diamond norm. Then $\norm{U-V}_\diamond \le 2 \norm{U-V}$. Furthermore, if $\mathcal V$ is a quantum channel with Kraus decomposition $\mathcal{V}(\rho) = V \rho V^\dag + \sum_j V_j \rho V_j^\dag$ and $\mathcal{U}(\rho) = U \rho U^\dag$, then $\norm{\mathcal{U}-\mathcal{V}}_\diamond \le 4 \norm{U-V}$.
\end{lemma}
The LCU Lemma follows by combining \lem{sup} and \lem{roaa}.
\begin{proof}[Proof of \lem{approxV}]
Using \lem{sup} we can implement the operation $W$ required for \lem{roaa} using $O(1)$ $\sel(\vecU)$ and $\sel(\vecU^\dag)$ operations and $O(M)$ additional 2-qubit gates.
We can choose $s\ge a$ such that $\sin\bigl(\frac{\pi}{2(2\iters+1)}\bigr)= \frac 1s$ for some $\iters \in \mathbb{N}$.
Then \lem{roaa} shows that $\tilde V$ can be approximated to within $O(\delta)$ using $O(\iters)$ applications of $W$ and the projection $P$.
Since $\iters=O(s)=O(a)$, the total number of $\sel(\vecU)$ and $\sel(\vecU^\dag)$ operations is $O(a)$ and the number of additional 2-qubit gates is $O(Ma)$.
\end{proof}
\subsection{Main algorithm}
The main problem with applying the quantum walk as presented in \cite{Chi10,BC12} is that $\arcsin \nu$ is a nonlinear function of $\nu$, so an imprecise phase is introduced.
To solve this, we use a superposition of different numbers of applications of $U$.
Define $V_k$ as in \eq{super}, where the choice of $\{a_m\}_{m=-k}^k$ is considered below.
The eigenvalues of $V_k$ corresponding to the eigenvalues $\mu_\pm$ of $U$ are
\begin{equation}
\mu_{\pm,k} := \sum_{m=-k}^k a_m \mu_\pm^{m}.
\end{equation}
In general $\mu_{\pm,k}$ can depend on $\pm$.
However, we will choose $a_m$ satisfying $a_{-m}=(-1)^m a_m$, which yields $\mu_{\pm,k}$ independent of $\pm$.
To see how to choose the coefficients $a_m$,
solve \eq{munueq} for $\nu$ to give
\begin{equation}
\nu = -\frac{i}{2} \left( \mu_\pm - \frac 1{\mu_\pm} \right).
\end{equation}
This implies that, for any $\seg$,
\begin{equation}
e^{i\nu \seg} = \exp\left[ \frac \seg 2 \left( \mu_\pm - \frac 1{\mu_\pm} \right) \right].
\end{equation}
This corresponds to the standard generating function for the Bessel function \cite[9.1.41]{AS64}, so
\begin{equation}
\label{eq:lamsum}
e^{i\nu \seg} = \exp\left[ \frac \seg 2 \left( \mu_\pm - \frac 1{\mu_\pm} \right) \right] = \sum_{m=-\infty}^{\infty} J_m(\seg) \mu_\pm^{m}.
\end{equation}
Thus we can take $a_m \approx J_m(\seg)$.
Because there are efficient classical algorithms to calculate Bessel functions, the circuit to prepare $\ket{\chi_k}$ can be designed efficiently.
Note that for large $m$, we have $|J_m(\seg)| \sim \frac{1}{m!}|\seg/2|^m$ \cite[9.3.1]{AS64}, so the values of $a_m$ are similar to the coefficients in the expansion of the exponential function. Thus the segments used here are analogous to the segments used in \cite{BCCKS15}.
To determine the complexity of this approach, we primarily need to bound the error in approximating $e^{i\nu \seg}$. To optimize the result, we use the coefficients
\begin{equation}
\label{eq:avals}
a_m := \frac{J_m(\seg)}{\sum_{j=-k}^k J_j(\seg)}.
\end{equation}
We make this choice because the most accurate results are obtained when the values of $a_m$ sum to $1$.
Note also that this yields the symmetry $a_{-m}=(-1)^m a_m$, because $J_{-m}(\seg) = (-1)^m J_m(\seg)$ \cite[9.1.5]{AS64}.
The sum of $J_m(\seg)$ over all integers $m$ is equal to $1$ (which can be shown by putting $t=1$ in \cite[9.1.41]{AS64}), but because $k$ is finite, we normalize the values as in \eq{avals}.
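As a concrete illustration, the following Python sketch (using library Bessel routines rather than the recurrence discussed below; the values of $\seg$ and $k$ are illustrative) computes the coefficients of \eq{avals} and checks the symmetry and the size of $\sum_m|a_m|$.
\begin{verbatim}
import numpy as np
from scipy.special import jv

seg, k = -0.5, 6                            # illustrative segment parameter
m = np.arange(-k, k + 1)
a = jv(m, seg) / jv(m, seg).sum()           # a_m = J_m(seg)/sum_j J_j(seg)
assert np.allclose(a[::-1], (-1.0)**m * a)  # a_{-m} = (-1)^m a_m
print(np.abs(a).sum())                      # ~1.49 < 2, so s = 2 works
\end{verbatim}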
With this choice, we have the following error bound, proved in the Appendix.
\begin{lemma}
\label{lem:errorbound}
With the values $a_m$ as in \eq{avals}, for $|\seg|\le k$ we have
\begin{equation}
\label{eq:error}
\| V_k - V_\infty \| =
O\left( \frac{\norm{H}}{Xd} \frac{(\seg/2)^{k+1}}{k!} \right).
\end{equation}
\end{lemma}
Note that $V_\infty$ is the exact unitary operation desired.
We now determine the query complexity of this approach. In fact, we prove a result that is slightly tighter than the query complexity stated in \thm{upper}.
\begin{lemma}
\label{lem:upperquery}
A $d$-sparse Hamiltonian $H$ acting on $n$ qubits can be simulated for time $t$ within error $\epsilon$ with complexity (quantified by the number of 2-qubit operations and controlled-$U$ and controlled-$U^\dagger$ operations)
\begin{equation}
\label{eq:finalresult}
O\left( \tau \frac{\log(\|H\|t/\epsilon)}{\log\log(\|H\|t/\epsilon)}\right).
\end{equation}
\end{lemma}
\begin{proof}
Our main goal is to determine the value of $k$ needed to bound the error by $\epsilon$.
This depends on the length of time for the segments, which we can adjust by choosing the value of $\seg$.
We wish to perform each step deterministically with one step of oblivious amplitude amplification, so we should have $s=2$ in \lem{roaa}.
Using the values of $a_m$ given in \eq{avals}, this means that we should take $\seg=O(1)$, and for concreteness $\seg=-1/2$ yields $a<2$, so we can take $s=2$.
Then, using \lem{approxV} with $U_m=U^m$ and $\tilde V=V_k$, we can approximate the operation $V_k$ to within $O(\delta)$.
Given an allowable error in a segment of $\delta>0$, let us take
\begin{equation}
k=O\left( \frac{\log(\norm{H}/Xd\delta)}{\log\log(\norm{H}/Xd\delta)} \right).
\end{equation}
Then, using \lem{errorbound} and the inequality $k!>(k/e)^k$, it is straightforward to show that the error in each segment is no more than $\delta$.
Using \lem{diamond}, this bound on the error in terms of the spectral norm distance implies a bound on the diamond norm distance that is at most a constant factor larger.
For the total error to be no more than $\epsilon$, the value of $\delta$ can be no more than $\epsilon$ divided by the number of segments.
The number of segments is $O(tXd)$, which gives
\begin{equation}
\label{eq:kval}
k=O\left( \frac{\log(\norm{H}t/\epsilon)}{\log\log(\norm{H}t/\epsilon)} \right).
\end{equation}
Using \lem{approxV}, the complexity of each segment is $O(k)$ since a $\sel(\vecU)$ operation can be implemented with complexity $O(M)$, and $M=2k+1$.
It is straightforward to apply $\sel(\vecU)$ using $O(k)$ controlled-$U$ and controlled-$U^\dagger$ operations.
If $\ket m$ is encoded in unary, then each controlled operation may be just controlled on one qubit of $\ket{m}$.
The number of segments required is $O(tXd)$.
It is most efficient to take the minimum value of $X$, which is $\norm{H}_{\max}$.
Because each segment uses $O(k)$ controlled-$U$ and controlled-$U^\dagger$ operations, as well as $O(M)=O(k)$ additional 2-qubit gates,
the complexity for the simulation over time $t$ is $O(\tau k)$.
Using the value of $k$ from \eq{kval} gives the overall complexity stated in \eq{finalresult}.
\end{proof}
Next we determine the gate complexity of this approach.
Again we give a slightly tighter result than presented in \thm{upper}.
\begin{lemma}
\label{lem:uppergate}
A $d$-sparse Hamiltonian $H$ acting on $n$ qubits can be simulated for time $t$ within error $\epsilon$ using
\begin{equation}
\label{eq:finalresultg}
O\left( \tau [n+F(\log(\|H\|t/\epsilon))] \frac{\log(\|H\|t/\epsilon)}{\log\log(\|H\|t/\epsilon)}\right)
\end{equation}
2-qubit gates, where $F(m)$ is the complexity of performing elementary functions with $m$ bits.
\end{lemma}
\begin{proof}
To obtain the gate complexity, we need to consider the procedure for performing the step $U$ in detail.
We can perform $T$ by first performing $\log d$ Hadamard gates to prepare the superposition state
\begin{equation}
\frac 1{\sqrt{d}}\sum_{\ell=0}^{d-1} \ket{\ell}\ket{0}.
\end{equation}
Here we take $d$ to be a power of $2$ without loss of generality.
(The value of $d$ can always be rounded up to the nearest power of two.)
Then the oracle $O_F$ (from \eq{oraclef}) can be used to produce the state
\begin{equation}
\frac 1{\sqrt{d}}\sum_{\ell\in F_j} \ket{\ell}\ket{0}.
\end{equation}
A call to the oracle $O_H$ for the value of an element of the Hamiltonian (from \eq{oracleh}) then gives the value of $H_{j\ell}$ in an ancilla space.
Another ancilla qubit is rotated from $\ket{0}$ to
\begin{equation}
\label{eq:rotstate}
\sqrt{\frac{H^*_{j\ell}}{X}}\ket{0}+\sqrt{1-\frac{|H^*_{j\ell}|}{X}}\ket{1}
\end{equation}
based on the value of $H_{j\ell}$.
Then inverting the oracle $O_H$ erases the value of $H_{j\ell}$ from the ancilla space.
Note that there is a sign ambiguity for the square root when $H_{j\ell}$ takes negative real values.
This is addressed in \cite{BC12} and does not affect the complexity.
To perform the step $U$, we also require the swap operation $S$, which has complexity $O(n)$ due to the number of qubits. The gate complexity is $O(n)$ from $S$, plus $O(\log d)=O(n)$ from the Hadamard gates, plus the complexity of performing the rotations to obtain the state \eq{rotstate}.
The complexity of the rotations depends on the number of bits of precision used for the entries of $H$. To obtain overall error $O(\epsilon)$, the number of bits must be $\log(\|H\|t/\epsilon)$.
To determine the rotations needed, we must also compute a square root and trigonometric functions on the output of the oracle.
If these functions can be computed with complexity $F(m)$ for $m$-bit inputs, the contribution to the overall complexity is $F(\log(\|H\|t/\epsilon))$.
In \lem{upperquery} the complexity is quantified in terms of the number of controlled-$U$ and controlled-$U^\dagger$ operations, so to obtain the overall gate complexity we just need to multiply that complexity by the cost of $U$.
There is also a cost in terms of additional 2-qubit gates in \lem{upperquery}, but that is smaller than the gate cost of performing $U$.
Therefore, the gate complexity is equal to the complexity from \lem{upperquery} times $O(n+F(\log(\|H\|t/\epsilon)))$, which gives a gate complexity as in \eq{finalresultg}.
\end{proof}
This result depends on the complexity of elementary functions, $F(m)$, needed to calculate the rotations.
Using advanced techniques, $F(m)$ may be made close to linear in $m$ \cite{RPB}, though such advanced techniques only give an improvement for extremely high precision.
Using simple techniques based on Taylor series and long multiplication, $F(m)=O(m^{5/2})$.
The classical complexity of determining the coefficients $\{a_m\}_{m=-k}^k$ is also potentially significant.
A set of values of the Bessel function can be efficiently computed using Miller's recurrence algorithm \cite{Mil52,Olv64}.
The complexity scales as $k$ (the number of entries) times $\log(\|H\|t/\epsilon)$ (the bits of precision needed for each $J_m(\seg)$).
This is no larger than the quantum gate complexity.
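For completeness, here is a sketch of Miller's backward recurrence (a standard textbook formulation, not necessarily the exact variant one would implement; the starting-order heuristic is ours).
\begin{verbatim}
import numpy as np

def bessel_j_miller(x, k, extra=20):
    # Miller's backward recurrence for J_0(x), ..., J_k(x), x != 0.
    # Recurse J_{m-1}(x) = (2m/x) J_m(x) - J_{m+1}(x) downward from a
    # high trial order, then rescale using J_0 + 2*sum_{even m>0} J_m = 1.
    # Negative orders then follow from J_{-m}(x) = (-1)^m J_m(x).
    start = k + extra                 # heuristic trial order
    j = np.zeros(start + 2)
    j[start] = 1e-30                  # arbitrary tiny seed; scale fixed below
    for m in range(start, 0, -1):
        j[m - 1] = (2.0 * m / x) * j[m] - j[m + 1]
    return j[:k + 1] / (j[0] + 2.0 * j[2::2].sum())
\end{verbatim}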
Note that the gate complexity in \lem{uppergate} depends linearly on $n$, whereas the query complexity in \thm{upper} does not.
This is because performing an operation on a target state with $n$ qubits must require at least $\Omega(n)$ gates.
In contrast, the number of queries need not scale with $n$, because the queries are used to determine which gates to perform.
There is an implicit complexity of $\Omega(n)$ for the queries, because the input to a query is of size at least $n$.
The proof of \thm{upper} then follows immediately.
\begin{proof}[Proof of \thm{upper}]
The implementation of $U$ uses $O(1)$ oracle calls, which means that the query complexity is the same as the number of controlled applications of $U$.
Noting that $\|H\|\le d\|H\|_{\max}$, \lem{upperquery} implies the query complexity in \thm{upper}, and \lem{uppergate} with $F(m)=O(m^{5/2})$ implies the gate complexity in \thm{upper}.
\end{proof}
\subsection{A tradeoff between $\tau$ and $\epsilon$}
The alternative algorithm characterized by \thm{tradeoff} uses larger segments with $\seg\propto -\tau^\alpha$ for $\alpha \in (0,1]$.
The choice $\alpha=0$ would correspond to the algorithm considered above, whereas $\alpha=1$ corresponds to a \emph{single} segment.
The analysis of this section assumes $\alpha>0$.
To analyze this algorithm, we first need to bound the absolute sum of Bessel functions.
\begin{lemma}\label{lem:sqrtsum}
The quantity
\begin{equation}\label{sdef}
{\cal S}(\seg) := \sum_{m=-\infty}^{\infty} |J_m(\seg)|
\end{equation}
is $O(\sqrt{|\seg|})$.
\end{lemma}
We prove this in the Appendix.
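A quick numerical illustration of this scaling (the truncation range is a heuristic of ours; the tail beyond $|m|\approx|\seg|$ is negligible) is:
\begin{verbatim}
import numpy as np
from scipy.special import jv

for seg in [10.0, 100.0, 1000.0]:
    m = np.arange(-2 * int(seg) - 50, 2 * int(seg) + 51)
    S = np.abs(jv(m, seg)).sum()
    print(seg, S / np.sqrt(seg))   # roughly constant, consistent with the lemma
\end{verbatim}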
Using the robust version of amplitude amplification given in \lem{roaa}, we obtain the bound in \thm{tradeoff}.
\begin{proof}[Proof of \thm{tradeoff}]
Using \lem{errorbound}, Stirling's formula, and the fact that $\norm{H} \le d\norm{H}_{\max} \le Xd$, we find that for $|\seg|\le k$ the error is bounded as
\begin{equation}
\|V_k - V_\infty\| = O\bigl( (e/k)^k (\seg/2)^{k+1} \bigr).
\end{equation}
By \lem{roaa}, the error in a segment after amplitude amplification is of the same order.
Therefore, to ensure that the error in a segment is at most $\delta$, it suffices to take
\begin{equation}
k = O(|\seg| + \log(1/\delta)) = O(\tau^\alpha+\log(1/\delta)).
\end{equation}
With this value of $k$, we have $\sum_{m=-k}^k J_m(\seg) = 1 + O(\delta)$, so \lem{sqrtsum} gives
\begin{equation}
\sum_{m=-k}^k |a_m| = O({\cal S}(\seg)) = O(\sqrt{|\seg|}).
\end{equation}
This corresponds to the number of steps of oblivious amplitude amplification.
The overall complexity is therefore $O(k\sqrt{|\seg|})=O(k\tau^{\alpha/2})$ for a single segment.
The number of segments is $\tau/|\seg| \propto \tau^{1-\alpha}$.
This means that the complexity is $O(k\tau^{1-\alpha/2})$.
The value of $k$ also depends on the number of segments.
We can take $\delta=\epsilon/\tau^{1-\alpha}$, which gives $k=O(\tau^\alpha+\log(1/\epsilon))$, implying the result in
\thm{tradeoff}.
\end{proof}
In this proof we have ignored $\log\tau$ in comparison to $\tau^\alpha$, which would not be valid for $\alpha=0$.
For the gate complexity, we again have a multiplying factor of $n+F(\log(\|H\|t/\epsilon))$, yielding a number of gates scaling as
\begin{equation}
O\bigl( [n+F(\log(\|H\|t/\epsilon))] \bigl[\tau^{1+\alpha/2} + \tau^{1-\alpha/2} \log(1/\epsilon)\bigr]\bigr).
\end{equation}
\section{Lower bound}
\label{sec:lower}
We now present a lower bound showing that the dependence of our algorithm on $\tau := d \norm{H}_{\max} t$ is nearly optimal (and that the dependence of \cite{BC12} on $\tau$ is optimal). The main idea of the proof is the same as in Theorem 3 of \cite{CK10}, but we slightly adapt that argument to let $t$ vary independently of $d$. Note that this is stronger than proving separate lower bounds of $\Omega(t)$ and $\Omega(d)$, since that would only show a lower bound of $\Omega(d+t)$, which is weaker than our $\Omega(td)$ lower bound.
\begin{lemma}
\label{lem:low}
For any positive integer $d$ and any $t>0$, there exists a $2d$-sparse Hamiltonian $H$ with $\norm{H}_{\max}=\Theta(1)$ such that simulating $H$ with constant precision for time $t$ requires $\Omega(td)$ queries.
\end{lemma}
\begin{proof}
Similarly to the $\Omega(t)$ lower bound from \cite{BACS07}, we construct a sparse Hamiltonian whose dynamics compute the parity of a bit string, and we use the fact that at least $N/2$ quantum queries are needed to compute the parity of $N$ bits~\cite{BBC+01,FGGS98}.
First consider a Hamiltonian $H_1$ whose graph is a path with $N+1$ vertices. (Here the \emph{graph of $H$} has a vertex for each basis state and an edge between two vertices if the corresponding entry of $H$ is nonzero.) The Hamiltonian acts on vectors $\ket{i}$ with $i\in \{0,\ldots,N\}$ and has nonzero matrix elements
\begin{align}
\bra{i-1}H_1\ket{i}=\bra{i}H_1\ket{i-1}=\sqrt{i(N-i+1)}
\end{align}
for $i \in [N]$. Simulating $H_1$ for time $\pi/2$ starting with the state $\ket{0}$ gives the state $\ket{N}$ (i.e., $e^{-iH_1\pi/2}\ket{0}=\ket{N}$).
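This perfect state transfer is easy to verify numerically; the following sketch (with an illustrative small $N$) does so.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N = 8
i = np.arange(1, N + 1)
w = np.sqrt(i * (N - i + 1))             # <i-1|H_1|i> = sqrt(i(N-i+1))
H1 = np.diag(w, 1) + np.diag(w, -1)
psi0 = np.zeros(N + 1); psi0[0] = 1.0
psiT = expm(-1j * H1 * np.pi / 2) @ psi0
print(abs(psiT[N]))                      # 1.0 up to a global phase
\end{verbatim}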
Next, consider a Hamiltonian $H_2$ generated from a string $x \in \{0,1\}^N$ as in \cite{BACS07}. $H_2$ acts on vertices $\ket{i,j}$ with $i\in \{0,\ldots,N\}$ and $j\in \{0,1\}$ and has nonzero matrix elements
\begin{align}
\bra{i-1,j}H_2\ket{i,j\oplus x_i}
=\bra{i,j\oplus x_i}H_2\ket{i-1,j}
=\sqrt{i(N-i+1)}
\end{align}
for all $i \in [N]$ and $j \in \{0,1\}$. By construction, $\ket{0,0}$ is connected to $\ket{i,j}$ if and only if $j=x_1 \oplus \cdots \oplus x_i$. In particular, $\ket{0,0}$ is connected to $\ket{N,x_1 \oplus \cdots \oplus x_N}$, and determining whether it is connected to $\ket{N,0}$ or $\ket{N,1}$ determines the parity of $x$. The graph of $H_2$ consists of two disjoint paths, one containing $\ket{0,0}$ and $\ket{N,x_1 \oplus \cdots \oplus x_N}$. Thus we have $e^{-iH_2\pi/2}\ket{0,0} = \ket{N,x_1 \oplus \cdots \oplus x_N}$, so evolution for time $\pi/2$ computes the parity of $x$.
Finally, we construct the Hamiltonian $H$ claimed in the lemma. As before, $H$ is generated from a string $x \in \{0,1\}^N$. $H$ acts on vertices $\ket{i,j,\ell}$ with $i \in \{0,\ldots,N\}$, $j \in \{0,1\}$, and $\ell \in [d]$.
The nonzero entries of $H$ are
\begin{align}
\bra{i-1,j,\ell}H\ket{i,j \oplus x_i,\ell'}
=\bra{i,j \oplus x_i,\ell'}H\ket{i-1,j,\ell}
=\sqrt{i(N-i+1)}/N
\end{align}
for all $i \in [N]$, $j \in \{0,1\}$, and $\ell,\ell' \in [d]$. The graph of $H$ is similar to that of $H_2$, except that for each vertex in $H_2$, there are now $d$ copies of it in $H$. Each vertex is connected to all $d$ copies of its neighboring vertices, so the graph has maximum degree $2d$. Observe that, having divided the matrix elements by $N$, we have $\|H\|_{\max}=\Theta(1)$.
Now we simulate the Hamiltonian starting from the state $\ket{0,0,*}$, where $\ket{i,j,*} := \frac{1}{\sqrt{d}} \sum_\ell \ket{i,j,\ell}$ denotes a uniform superposition over the third register. The subspace $\spn\{\ket{i,j,*}: i \in \{0,\ldots,N\},$ $j \in \{0,1\}\}$ is an invariant subspace of $H$. Since the initial state lies in this subspace, the evolution remains in this subspace.
The nonzero matrix elements of $H$ in this invariant subspace are
\begin{align}
\bra{i-1,j,*}H\ket{i,j \oplus x_i,*}
=\bra{i,j \oplus x_i,*}H\ket{i-1,j,*} &
=d\sqrt{i(N-i+1)}/N,
\end{align}
so we have $e^{-iHt}\ket{0,0,*} = \ket{N,x_1\oplus \cdots \oplus x_N,*}$ for $t=\pi N/(2d)$. Since this determines the parity of $x$, we find a lower bound of $\Omega(N) = \Omega(td)$ as claimed.
\end{proof}
It is now straightforward to use this result to prove \thm{lower}.
\begin{proof}[Proof of \thm{lower}]
We choose one of two Hamiltonians depending on whether the first or second term in \eq{lower} is larger.
If $\tau$ is larger, then we use \lem{low}.
The value of $d$ used in \lem{low} is denoted $d'$ here, to distinguish it from the $d$ given in \thm{lower}.
Taking $d'=\lfloor d/2 \rfloor$, we ensure that $d'$ is a positive integer, because $d\ge 2$.
Then \lem{low} shows that there is a $2d'$-sparse Hamiltonian; given this value of $d'$, this Hamiltonian is also $d$-sparse, as required for \thm{lower}.
For \thm{lower}, we are also given a required value for $\|H\|_{\max}$.
The Hamiltonian used in \lem{low} has $\|H\|_{\max}=\Theta(1)$.
By multiplying that Hamiltonian by a scaling factor, we obtain a Hamiltonian with the required value of $\|H\|_{\max}$.
Dividing the time used in \lem{low} by the same factor, the simulation requires time $\Omega(\tau)$ for constant precision.
In \thm{lower} we require precision $\epsilon$, which can only increase the complexity.
In the case where the second term is larger, we use Theorem 6.1 of \cite{BCCKS14}.
There it is shown that performing a simulation of a $2$-sparse Hamiltonian with precision $\epsilon$ and $\|H\|_{\max}t=O(1)$ requires
\begin{equation}
\label{eq:bccks}
\Omega \left( \frac{\log(1/\epsilon)}{\log\log(1/\epsilon)}\right)
\end{equation}
queries.
Because $d\ge2$, this Hamiltonian is also $d$-sparse.
As using larger values of $\|H\|_{\max}t$ can only increase the complexity, we also have this lower bound in the more general case.
Therefore, regardless of whether the first or second term in \eq{lower} is larger, this expression provides a lower bound on the complexity.
\end{proof}
It is also possible to combine our lower bound with the lower bound of \cite{BCCKS14} to obtain a combined lower bound in terms of $d$, $t$, and $\epsilon$, that is stronger than \thm{lower}. This yields a lower bound of $\Omega(N)$ for any $N$ that satisfies $\epsilon < \frac{1}{2} |{\sin(td/N)}|^N$. Note that when $\epsilon$ is a constant, we recover \lem{low} and when $td$ is constant, we recover \eq{bccks}. However, for intermediate values this lower bound can be strictly larger than the expression in \thm{lower}.
\section{Conclusion}
\label{sec:disc}
Our technique for Hamiltonian simulation combines ideas from quantum walks and
fractional-query simulation to provide improved performance over both previous
techniques. As a result, it provides near-optimal scaling with respect to all parameters of interest. In particular, the scaling is only slightly superlinear in $\tau = d\norm{H}_{\max}t$, whereas we have proven that linear scaling is optimal. Furthermore, the method has query complexity sublogarithmic in the allowed error, which was proven to be optimal in \cite{BCCKS14}.
Nevertheless, there is still a gap between the complexity of our algorithm and the lower bound in \eq{lower}, as they involve different tradeoffs between the parameters $\tau$ and $\epsilon$. It remains open whether the performance can be further improved, perhaps to give performance similar to \eq{lower}, although as observed at the end of \sec{lower}, we can rule out scaling strictly as in \eq{lower}.
Our technique can potentially be used for the more general task of operation conversion, in which we use one quantum operation to implement another. In our work, we convert a step of a quantum walk to Hamiltonian evolution, whereas in \cite{HHL09} the task is to convert Hamiltonian evolution to an inverse. One approach to operation conversion is to use phase estimation. Here we have shown that a superposition of operations can provide far better performance.
\section*{Acknowledgment}
D.W.B. is funded by an ARC Future Fellowship (FT100100761). This work was also supported in part by CIFAR, NSERC, the Ontario Ministry of Research and Innovation, and the US ARO under ARO grant Contract Numbers W911NF-12-1-0482 and W911NF-12-1-0486.
This preprint is MIT-CTP \#4631.
\bibliographystyle{myhamsplain}
\section{Introduction}
We discuss here a novel method (WECS) for unsupervised spatio-temporal change detection in multi-temporal SAR/POLSAR images. WECS is based upon correlation screening for energy apportionment on wavelet approximations. The spatial character of the change detection is attained at the pixel level. The method is fast, scalable, linearly updatable, and the resulting measures are sparse.
A review of change detection in multi-temporal remote sensing is given by \cite{ban2016change}.
Different proposals for this purpose may be found in the literature. They vary in their motivations as well as in their applicability. Change detection in multi-temporal hyperspectral images is discussed in \cite{bovolo2015time}, \cite{liu2019review}, and \cite{matsunaga2017current}. \cite{jia2018novel} pursue change detection techniques via non-local means and principal component analysis. Compressed projection and image fusion are employed by \cite{hou2014unsupervised}. Deep learning by slow feature analysis for change detection is the subject of \cite{du2019unsupervised}. \cite{chen2020change} proposes a change detection method driven by adaptive parameter estimation.
Besides different methodological paradigms, several areas of application receive special attention. For instance, urban change detection applications via polarimetric SAR images are discussed in \cite{ansari2020urban}. \cite{song2018multi} discusses land cover change detection in mountainous terrain via multi-temporal and multi-sensor remote sensing images. \cite{ru2021multi} studies multi-temporal scene classification and scene change detection. Deforestation change detection is discussed by \cite{barreto2016deforestation}.
Wavelet methods present many advantages for a plethora of applications \citep{vidakovic1999statistical}. Their computational efficiency and sparseness are especially relevant for large images and other high-dimensional data \citep{morettin2017wavelets}. \cite{atto2012multidate}, \cite{bouhlel2015multivariate}, \cite{celik2009multiscale}, and \cite{cui2012statistical} use different wavelet methods for change detection in satellite images.
The motivation for our proposed method is multi-fold. We aim at a fast and accurate method. We would also like this method to be easily updatable when a new observation is captured. Finally, scalability was a concern as well. We propose a wavelet-based procedure for change detection in multi-temporal remote sensing images (WECS). It is unsupervised and built on ultra-high dimensional correlation screening \citep{fan2020statistical} for the wavelet coefficients. We present two complementary wavelet measures in order to detect sudden and/or cumulative changes, as well as to handle both stationary and non-stationary multi-temporal images. The procedure presents some advantages. It is unsupervised, fast and updatable, thus allowing for real-time change detection. Moreover, it is sparse and scalable.
The rest of the text is organized as follows. Section \ref{section_method} introduces the problem and presents the proposed method. We show WECS performance on synthetic multi-temporal image data in Section \ref{section_validation}. In Section \ref{section_realdata} we apply the proposed method to a time series of 85 satellite images in the border region of Brazil and the French Guiana, for images captured from November 08, 2015 to December 09, 2017. Section \ref{section_discussion} concludes the paper with a discussion.
\section{Methodology}\label{section_method}
Let $\mathcal{I}(1),\ldots,\mathcal{I}(n)$ be a set of matrices representing the log-images of some region of interest. These images may be relative to one satellite channel or a combination of channels; this will be specified when appropriate. Our goal is twofold: to find possible points in time where some relevant change might have taken place in the region represented by $\mathcal{I}(m)$, $m=1,\ldots,n$, and to find which regions are closely associated with the observed changes along time. We shall address these tasks by analyzing the bidimensional discrete wavelet decomposition of $\mathcal{I}(m)$ at the $J$-th approximation level, i.e.,
such that
\begin{equation}
\mathcal{I}(m)=\mathcal{I}_J(m)+\pmb{\epsilon}_J(m),
\end{equation}
where $\mathcal{I}_J(m)$ is the wavelet approximation of log-image $\mathcal{I}(m)$ at level $J$. We denote $\mathcal{I}_J(m)$'s approximation coefficients matrix by ${\textbf X}(m)$ \citep{morettin2017wavelets,vidakovic1999statistical}. The advantages of doing so are that we linearize possible speckle (multiplicative) noise in the original images by taking logarithms to obtain $\mathcal{I}(m)$, and later perform a smoothing to obtain $\mathcal{I}_J(m)$, whose approximation coefficients matrix is ${\textbf X}(m)$.
We can then consider further apportioning the total $\mathbb{L}_2$ energy of $\left\{\mathcal I(m)\right\}$ as
\begin{equation}
\sum_{m=1}^n\|\mathcal I_J(m)\|^2_2=n\|\bar{\mathcal I}\|^2_2+\sum_{m=1}^n\|\mathcal I_J(m)-\bar{\mathcal I}\|^2_2,\label{eq_ANOVAlogimage}
\end{equation}
where $\bar{\mathcal{I}}=n^{-1}\sum_{m=1}^n\mathcal{I}_J(m)$. The same can be done in the wavelet domain with ${\textbf X}(m)$, which is the way we shall proceed in the following steps.
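For concreteness, the following Python sketch (the array \texttt{images} is a hypothetical stack of SAR intensities, and the basis db2 is illustrative; the real-data analysis of Section \ref{section_realdata} uses sym8) computes the approximation coefficients ${\textbf X}(m)$ with the \texttt{pywt} package and verifies the energy split above in the wavelet domain.
\begin{verbatim}
import numpy as np
import pywt

rng = np.random.default_rng(0)
images = rng.gamma(2.0, 1.0, size=(5, 128, 128))  # toy speckle-like stack
J = 2
X = np.stack([pywt.wavedec2(np.log(im), 'db2', level=J)[0]
              for im in images])                  # X[m]: level-J approximation

Xbar = X.mean(axis=0)
lhs = (X**2).sum()
rhs = len(X) * (Xbar**2).sum() + ((X - Xbar)**2).sum()
print(np.isclose(lhs, rhs))                       # the energy split holds exactly
\end{verbatim}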
We discuss here two procedures which are similar, but may yield different results depending on the nature of the relevant changes. The first procedure makes use of (\ref{eq_ANOVAlogimage}), so that we establish an {\it average wavelet-approximated image} and detect time points for which images differ from this {\it characteristic} image. Changes are detected by absolute correlations between individual wavelet approximation coefficient time series and the time series of overall wavelet energy. The second procedure also uses absolute correlations but, instead of an average wavelet image energy, we compute the overall energy differences for subsequent wavelet-smoothed images and their approximation coefficients. The former should help us detect image changes from an overall behavior over time, whilst the latter should also detect changes in non-stationary set-ups. In practice, both procedures shall be performed in the wavelet domain by using $\{{\textbf X}(m)\}$ instead of $\{\mathcal{I}_J(m)\}$.
The average image $\bar{\mathcal{I}}=n^{-1}\sum_{m=1}^n\mathcal{I}_J(m)$ has approximation coefficients matrix given by $\bar{{\textbf X}} = n^{-1}\sum_{m=1}^n {\textbf X}(m)$. Let $X_{k,l}(m)$ and $\bar{X}_{k,l}$ be the $(k,l)$ entries of the matrices ${\textbf X}(m)$ and $\bar{{\textbf X}}$, respectively. We take the matrix ${\textbf D}(m) = [D_{k,l}(m)]$, where $D_{k,l}(m)=(X_{k,l}(m)-\bar{X}_{k,l})^2$. We then
analyze the time series given by
\begin{equation}
d(m) = \sum_{k,l}D_{k,l}(m) = \sum_{k,l}(X_{k,l}(m) - \bar{X}_{k,l})^2, \quad m=1,\ldots,n,
\label{eq_defdm}
\end{equation}
which tracks, over time, the total $\mathbb{L}_2$ energy of the deviations of the approximation coefficients from $\bar{{\textbf X}}$. We write ${\textbf d}=(d(1),\ldots,d(n))^T$.
The time points with the highest values of $d(m)$ represent the images for which the most expressive changes take place, where changes here are measured through $\mathbb{L}_2$ energy. Define the $n\times p$ matrix
\begin{equation}
{\textbf D}=\left(
\begin{array}{c}
vec({\textbf D}(1))^T\\
\vdots\\
vec({\textbf D}(n))^T\\
\end{array}
\right),
\label{eq_defmatrixD}
\end{equation}
where $vec({\textbf D}(m))$ is the $p\times 1$ vectorization of ${\textbf D}(m)$ and $p=\#\{(k,l)\}$ is the total number of locations represented by ${\textbf X}(m)$, $m=1,\ldots,n$. Sparsity \citep{johnstone2009statistical} on the wavelet coefficients plays a special role here. We suppose a handful of coefficients drive the changes given by ${\textbf d}$, so that the effective dimension of ${\textbf D}$ (the number of locations where relevant changes occur), say $e_d$, is such that $e_d\ll p$. This can be represented as the following linear model
\begin{equation}
{\textbf d}={\textbf D}\pmb{\beta}^{(d)}+\pmb{\xi}^{(d)}
\label{sparsemodel_d}
\end{equation}
where $\pmb{\beta}^{(d)}$ is sparse, i.e., it has $p-e_d$ null elements, and $\pmb{\xi}^{(d)}$ is some $n\times 1$ random vector of errors.
In order to identify spatio-temporal changes we employ the idea of ultra-high dimensional correlation screening \citep{fan2020statistical} as follows. For each squared mean-corrected approximation coefficient time series ${\textbf D}_{k,l}$, consider its Pearson correlation with the total deviation energy series ${\textbf d}$:
\begin{equation}
R_{k,l}^{(d)}= \mbox{{\rm corr\,}}\left( {\textbf D}_{k,l}, {\textbf d}\right),
\label{def_Rkld}
\end{equation}
where ${\textbf D}_{k,l}=(D_{k,l}(1),\ldots,D_{k,l}(n))^T$ is the time series of squared mean deviations of wavelet coefficients for the two-dimensional index $\{k,l\}$.
We have a matrix ${\textbf R}^{(d)}=[R_{k,l}^{(d)}]$ of correlations of ultra-high dimension. Define the set of
{\it important} indices for changes in images with respect to $\bar{\mathcal I}$ as
$\mathcal{M}^{*d}=\{(k,l):\mbox{changes in }\mathcal{I}(m)\mbox{ with respect to }\bar{\mathcal{I}}\mbox{ are caused by changes in the approximation coefficients of index }(k,l)\}$. This set coincides with the non-zero vectorized one-dimensional indices for the sparse representation of $\pmb{\beta}^{(d)}$ in (\ref{sparsemodel_d}).
We build the empirical set of selected indices by
\begin{equation}
\mathcal{M}_{\tau}^{(d)}=\{(k,l):|R_{k,l}^{(d)}|>\tau_d\},
\label{def_Mtaud}
\end{equation}
where $\tau_d>0$ is a convenient threshold value, function of $n$ and $J$. Under some regularity conditions,
\[
P(\mathcal{M}_{\tau}^{(d)}\supset\mathcal{M}^{*d})\rightarrow 1,
\]
as $n\rightarrow\infty$ \citep{fan2020statistical}.
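A vectorized sketch of this screening step (the threshold value is illustrative, and \texttt{X} is the stack of approximation coefficients from the previous sketch) is as follows.
\begin{verbatim}
import numpy as np

def wecs_d_screening(X, tau_d=0.8):
    # X has shape (n, K, L); returns d(m), R^(d), and the selected set.
    Xbar = X.mean(axis=0)
    D = (X - Xbar)**2                    # D_{k,l}(m)
    d = D.sum(axis=(1, 2))               # total deviation energy per time
    Dc = D - D.mean(axis=0)              # center over time (Pearson)
    dc = d - d.mean()
    num = np.tensordot(dc, Dc, axes=(0, 0))
    R = num / (np.linalg.norm(Dc, axis=0) * np.linalg.norm(dc) + 1e-12)
    return d, R, np.abs(R) > tau_d       # boolean change map M_tau^(d)
\end{verbatim}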
Analogously we take ${\textbf T}(m) = [T_{k,l}(m)]$, where $T_{k,l}(m)=(X_{k,l}(m+1)-X_{k,l}(m))^2$, for $m=1,\ldots,n-1$. We then analyze the time series given by
\begin{equation}
t(m)= \sum_{k,l}T_{k,l}(m) = \sum_{k,l}(X_{k,l}(m+1) - X_{k,l}(m))^2, \quad m=1,\ldots,n-1.
\label{eq_deftm}
\end{equation}
Define the $(n-1)\times p$ matrix
\begin{equation}
{\textbf T}=\left(
\begin{array}{c}
vec({\textbf T}(1))^T\\
\vdots\\
vec({\textbf T}(n-1))^T\\
\end{array}
\right),
\label{eq_defmatrixT}
\end{equation}
where $vec({\textbf T}(m))$ is the $p\times 1$ vectorization of ${\textbf T}(m)$, for $m=1,\ldots,n-1$.
The highest values of $\{t(m)\}$ identify the images for which the most expressive changes take place between time points $m$ and $m+1$, where changes here are again measured through $\mathbb{L}_2$ energy. For each squared consecutive-difference time series ${\textbf T}_{k,l}=(T_{k,l}(1),\ldots,T_{k,l}(n-1))^T$, consider its Pearson correlation with the total consecutive-difference energy series ${\textbf t}=(t(1),\ldots,t(n-1))^T$:
\begin{equation}
R_{k,l}^{(t)}= \mbox{{\rm corr\,}}\left( {\textbf T}_{k,l}, {\textbf t}\right).
\label{def_Rklt}
\end{equation}
We again suppose a handful of coefficients drive the changes given by ${\textbf t}$, so that the effective dimension, say $e_t$, is such that $e_t\ll p$. This can be represented as the following linear model
\begin{equation}
{\textbf t}={\textbf T}\pmb{\beta}^{(t)}+\pmb{\xi}^{(t)}
\label{sparsemodel_t}
\end{equation}
where $\pmb{\beta}^{(t)}$ is sparse, i.e., it has $p-e_t$ null elements, and $\pmb{\xi}^{(t)}$ is some $(n-1)\times 1$ random vector of errors.
We have a matrix ${\textbf R}^{(t)}=[R_{k,l}^{(t)}]$ of correlations of ultra-high dimension. Define the set of
{\it important} indices for changes in subsequent images as $\mathcal{M}^{*t}=\{(k,l):\mbox{changes in }\mathcal{I}(m+1)\mbox{ with respect to }\mathcal{I}(m)\mbox{ for some }m=1,\ldots,n-1\mbox{ are caused by the approximation coefficients of index }(k,l)\}$. We build a set of selected indices by
\[
\mathcal{M}_{\tau}^{(t)}=\{(k,l):|R_{k,l}^{(t)}|>\tau_t\},
\]
where $\tau_t>0$ is a convenient threshold value, function of $n$ and $J$. Under some regularity conditions,
\[
P(\mathcal{M}_{\tau}^{(t)}\supset\mathcal{M}^{*t})\rightarrow 1,
\]
as $n\rightarrow\infty$ \citep{fan2020statistical}.
Therefore, if we define
\begin{eqnarray}
\mathcal{M}_{\tau}&=&\mathcal{M}_{\tau}^{(t)}\cup\mathcal{M}_{\tau}^{(d)},\label{def_Mpop}\\
\mathcal{M}^{*}&=& \mathcal{M}^{*d}\cup\mathcal{M}^{*t}\label{def_Memp},
\end{eqnarray}
we have the following consistency property:
\[
P(\mathcal{M}_{\tau}\supset\mathcal{M}^{*})\rightarrow 1.
\]
as $n\rightarrow\infty$.
Thence, if we compute $\{d(m)\}$ and $\{t(m)\}$, and build $\mathcal{M}_{\tau}=\mathcal{M}_{\tau}^{(t)}\cup\mathcal{M}_{\tau}^{(d)}$, the consistency of the screening methods above guarantees asymptotic coverage of all approximation coefficients strongly associated with changes with respect to the average image as well as immediate previous image with a high probability, as long as the required regularity conditions hold.
Further geometrical motivation for our proposal is given as follows. We argue the case of ${\textbf d}$, but the same may be written regarding ${\textbf t}$ as well. As defined by (\ref{eq_defdm}), we expect ${\textbf d}$ to be a vector with a few high values, say $s_d$ of them, and $n-s_d$ smaller values. This segregates the multi-temporal images, since the former time points identify the images in which significant changes occur, while the latter indices identify time points with no major changes. Consider $U>L>0$ such that the $s_d$ highest values of ${\textbf d}$ are larger than $U$, and the $n-s_d$ smallest values of ${\textbf d}$ are smaller than $L$. We also take $\delta=U-L$. The indices defined by (\ref{def_Mtaud}) are such that
\[
\frac{\langle{\textbf D}_{k,l},{\textbf d}\rangle}{\|{\textbf D}_{k,l}\|_2\|{\textbf d}\|_2}>\tau_d,
\]
i.e., such that $\sum_{m=1}^nD_{k,l}(m)d(m)>\tau_d\|{\textbf D}_{k,l}\|_2\|{\textbf d}\|_2$. This can be rewritten as
\[
\left|\sum_{m:d(m)>U} D_{k,l}(m)\right|-\left|\sum_{m:d(m)<L} D_{k,l}(m)\right|>\Delta,
\]
for some arbitrary $\Delta\gg 0$ (which can be a function of $n$ and $J$). Thence, when we employ correlation screening we select the two-dimensional wavelet indices which have the closest empirical directions to the vector of image temporal changes. Thus we are performing a truly spatio-temporal change detection in a single procedure.
\section{Validation on synthetic data}\label{section_validation}
In this section we apply the change detection methods above to synthetic multi-temporal image data.
The synthetic multi-temporal images ($n=4$) are shown in Figure \ref{F:EllipsoidChanges}. The first image, $I(1)$, presents three elongated ellipses. Changes consist of three different types of ellipses that are successively added to the original image $I(1)$. The second image, $I(2)$, has new large ellipses added. Smaller ellipses are then added to form $I(3)$ and small dots are added to form $I(4)$.
\begin{figure}[htb!]
\centering
\includegraphics[width=10pc]{ellipses_t1}
\includegraphics[width=10pc]{ellipses_t2}
\includegraphics[width=10pc]{ellipses_t3}
\includegraphics[width=10pc]{ellipses_t4}
\caption{Synthetic multi-temporal ($n=4$) images. Features and changes come as ellipses and dots.}
\label{F:EllipsoidChanges}
\end{figure}
\begin{figure}[htp!]
\noindent\includegraphics[width=12pc,height=10pc]{total_changes}(a)
\includegraphics[width=12pc,height=10pc]{corr_changes_dm}(b)
\includegraphics[width=12pc,height=10pc]{corr_changes_tm}(c)
\includegraphics[width=12pc,height=10pc]{corr_changes_logratios}(d)
\includegraphics[width=12pc,height=10pc]{corr_changes_dm_nowavelets}(e)
\includegraphics[width=12pc,height=10pc]{corr_changes_tm_nowavelets}(f)
\caption{Synthetic images with changing ellipses. (a) Image composed by the total changes over time.
(b) Proposed db2 wavelet ${\textbf d}(m)$ with $J=2$; (c) Proposed db2 wavelet ${\textbf t}(m)$ with $J=2$.
(d) Aggregation of log-ratios. (e) ${\textbf d}(m)$ without wavelets. (f) ${\textbf t}(m)$ without wavelets.
}
\label{F:Changes_methods_images}
\end{figure}
Figure \ref{F:Changes_methods_images} illustrates the simulated synthetic images, the proposed wavelet detection methods, and three classic detection methods as well. Panel (a) presents the total changes with respect to image $I(1)$. Panels (b) and (c) show the results of the proposed wavelet methods ${\textbf d}(m)$ and ${\textbf t}(m)$, respectively, using Daubechies db2 and $J=2$. Panel (d) presents the results of the aggregated log-ratios. Finally, in Panels (e) and (f) we can see the results when ${\textbf d}(m)$ and ${\textbf t}(m)$ are computed purely in the spatial domain, without wavelets. The spatio-temporal advantages of the proposed wavelet ${\textbf d}(m)$ and ${\textbf t}(m)$ are clear in Figure \ref{F:Changes_methods_images}. A slight advantage for the detection of dots is attained by ${\textbf d}(m)$ over ${\textbf t}(m)$.
We compute ROC curves to compare the detection performance of the different methods. For this, we simulate noisy versions of the synthetic images illustrated by Figure \ref{F:EllipsoidChanges}. The change detection methods are then employed, and each method generates a matrix of change measures that is compared with the real image of total changes.
Each ROC curve presents how closely the magnitude variation of the change measures follows the variation of the image of total changes, and is computed in the following way (a code sketch is given after the list):
\begin{enumerate}
\item Let $R$ be the matrix of change measures. Compute the range $[r_{\min},r_{\max}]$ of the values in $R$;
\item Let $(r_{(1)},\ldots,r_{(100)})$ be equally spaced values between $r_{\min}$ and $r_{\max}$;
\item For each $k=1,\ldots,100$, count the pixels $(i,j)$ with $R_{i,j}>r_{(k)}$ that coincide with pixels where a change really occurs on the image of total changes. Dividing this number by the total number of changed pixels gives the true positive rate.
\item For each $k=1,\ldots,100$, count the pixels $(i,j)$ with $R_{i,j}>r_{(k)}$ that do not coincide with pixels where a change really occurs. Dividing this number by the total number of pixels where changes do not occur gives the false positive rate.
\end{enumerate}
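A Python sketch of steps 1--4 (with hypothetical inputs \texttt{R}, the matrix of change measures, and \texttt{truth}, the boolean image of true changes) is:
\begin{verbatim}
import numpy as np

def roc_curve(R, truth, npts=100):
    thresholds = np.linspace(R.min(), R.max(), npts)
    tpr, fpr = [], []
    for r in thresholds:
        detected = R > r
        tpr.append((detected & truth).sum() / truth.sum())
        fpr.append((detected & ~truth).sum() / (~truth).sum())
    return np.array(fpr), np.array(tpr)
\end{verbatim}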
\begin{figure}[htp!]
\includegraphics[width=12pc,height=10pc]{methods_comparison}(a)
\includegraphics[width=12pc,height=10pc]{levels_comparison_WithReference}(b)
\noindent\includegraphics[width=12pc,height=10pc]{families_comparison_WithReference}(c)
\includegraphics[width=12pc,height=10pc]{dm_comparison_wavelet_deepL}(d)
\includegraphics[width=12pc,height=10pc]{levels_comparison_NoReference}(e)
\includegraphics[width=12pc,height=10pc]{families_comparison_NoReference}(f)
\caption{ROC curves for detection of changing ellipses in synthetic images and different methods. (a) The proposed methods in black (db2 wavelet ${\textbf d}(m)$) and green (db2 wavelet ${\textbf t}(m)$) vs three non-wavelet methods: aggregated log-ratios (red stars); $d(m)$ (blue); and ${\textbf t}(m)$ (red circles).
(b) db2 ${\textbf d}(m)$ with different levels.
(c) ${\textbf d}(m)$ with different wavelet bases and J=2;
(d) The proposed db2 wavelet ${\textbf d}(m)$ with (yellow) and without (red) deep-learning pre-treatment.
(e) db2 ${\textbf t}(m)$ with different levels. (f) ${\textbf t}(m)$ with different wavelet bases and J=2. }
\label{F:EllipsoidChanges_details}
\end{figure}
Figure \ref{F:EllipsoidChanges_details} presents the ROC curves for the change detection methods applied to the synthetic data, showing the effects of wavelet basis, level of decomposition, image pre-treatment, and the choice between ${\textbf d}(m)$ and ${\textbf t}(m)$. We employ the following wavelet bases: Haar; Daubechies db2; Daubechies db4; Coiflets coif4; Symlets sym2; and Symlets sym4. Panels (c) and (f) present the ROC curves for the proposed methods under the aforementioned bases for ${\textbf d}(m)$ and ${\textbf t}(m)$, respectively. On both instances $J=2$ is employed. The results for ${\textbf d}(m)$ are much more robust to basis variation than the ones for ${\textbf t}(m)$. Combining the ROC curve comparisons from both panels, Daubechies db2 is the best choice.

Panels (b) and (e) present the ROC curves for different levels of decomposition for ${\textbf d}(m)$ and ${\textbf t}(m)$, respectively. On both instances db2 is employed, and five levels are considered: $J=1,2,3,4,5$. Levels $J=2,3$ have a clearly better performance for ${\textbf d}(m)$ (with a slight advantage to $J=2$), whilst $J=1$ is competitive for ${\textbf t}(m)$. The overall performance of $J=2$ warrants its use for the rest of the comparisons.

Panel (d) shows how the proposed method performs with or without the images' deep learning pre-treatment. The change detection method is the proposed db2 ${\textbf d}(m)$ with $J=2$. We can see that the ROC curves for treated and untreated images are almost identical. The proposed untreated method runs in 3.35s, while the combined deep-learning/db2 ${\textbf d}(m)$ with $J=2$ runs in 559.12s on a notebook. The configuration of the notebook is: OS - Ubuntu 18.04.5 LTS; RAM 7.7 GB; Intel\textregistered Core\texttrademark $ $i7-7500U CPU @ 2.70GHz x 4; graphics - Intel\textregistered HD Graphics 620 (KBL GT2); GNOME - 3.28.2; OS type - 64-bit.

We finally have in Panel (a) the proposed db2 ${\textbf d}(m)$ with $J=2$ and db2 ${\textbf t}(m)$ with $J=2$ compared to three other non-wavelet methods. These are ${\textbf d}(m)$ and ${\textbf t}(m)$ where no wavelet decomposition is performed, i.e., the squared deviations are computed using $\{\mathcal{I}(m)\}$ instead of $\{{\textbf X}(m)\}$, and the classic method of analyzing aggregated log-ratios of $\{\mathcal{I}(m)\}$. The ROC curves in Panel (a) clearly show that the proposed db2 ${\textbf d}(m)$ with $J=2$ outperforms the rest.
We may summarize these results as follows: the proposed wavelet ${\textbf d}(m)$ method presents a superior performance. It is also equipped with the following nice properties: (i) it is scalable; (ii) it is sparse; (iii) it is parsimonious; (iv) it performs equally well with or without image denoising pre-treatment; (v) it can be easily adapted to be linearly updated when a new image is acquired; and (vi) it is fast. Thence, the proposed wavelet change-detection procedure can be used as a real-time change detection tool for long time series of large images.
\FloatBarrier
\section{Real Data Results}\label{section_realdata}
We employed the proposed change detection method on a series of 85 multi-date satellite images. The images were taken on a forest region at the border of Brazil and the French Guiana from November 08, 2015 to December 09, 2017. Each image has two channels and 1200 by 1000 pixels. We perform three change detection wavelet analyses: VV Polarization Channel; VH Polarization Channel; and the Combined Image by Euclidean norm.
A multi-resolution analysis (MRA) based on a Symlet basis with filter of length 16 (symlet 8) is built. The log-images are approximated at levels $J=1,2,3,4$. Table \ref{T:approxenergy} shows the 85 images' average energy for each approximation level. We notice that roughly 99\% of the energy is recovered with $J=1$, and more than 90\% with $J=2$. The VV channel shows better overall energy recovery than the VH channel. For $J=4$ and $J=3$, the Euclidean combination of the polarization channels increases the energy representation percentage. For $J=2$, the VV channel and the combined channels are equivalent. VH results in 4\% less energy than VV for $J=3$, and 10\% less for $J=4$.
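A sketch of this computation (with a hypothetical stack \texttt{logims} of log-images for one channel, or for the Euclidean combination) is:
\begin{verbatim}
import numpy as np
import pywt

def mean_energy_pct(logims, J, wavelet='sym8'):
    # Mean approximated-energy percentage at level J, as in the table.
    pct = []
    for im in logims:
        coeffs = pywt.wavedec2(im, wavelet, level=J)
        coeffs[1:] = [tuple(np.zeros_like(c) for c in d) for d in coeffs[1:]]
        approx = pywt.waverec2(coeffs, wavelet)[:im.shape[0], :im.shape[1]]
        pct.append((approx**2).sum() / (im**2).sum())
    return float(np.mean(pct))
\end{verbatim}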
\begin{table}[h!]
\caption{Wavelet Approximation Mean Energy Percentage for log-images. Forest region at the border of Brazil and the French Guiana from November 08, 2015 to December 09, 2017. $n=85$ multi-date satellite images. Each image has two channels and 1200 by 1000 pixels. VV Polarization Channel; VH Polarization Channel; and the Combined Image by Euclidean norm. Approximation $J=1,2,3,4$.}
\begin{tabular}{cccc|cccc|cccc}
\hline
\multicolumn{12}{c}{\sc Mean Approximated Energy Percentage}\\
\hline
\multicolumn{4}{c|}{VV Channel}&\multicolumn{4}{c|}{VH Channel}&\multicolumn{4}{c}{Combined Channels}\\
\hline
$J=4$&$J=3$&$J=2$&$J=1$&$J=4$&$J=3$&$J=2$&$J=1$&$J=4$&$J=3$&$J=2$&$J=1$\\
\hline
0.803&0.847&0.924&0.990&0.763&0.814&0.908&0.988&0.816&0.858&0.931&0.991\\
\hline
\end{tabular}\label{T:approxenergy}
\end{table}
Figures \ref{F:squared_J1-4_VV}-\ref{F:squared_J1-4_euclid} show the series of squared deviations ${\textbf d}(m)$ and ${\textbf t}(m)$, for the VV channel, VH channel, and Euclidean combination, respectively. An overall feature of this data is that the VV polarization presents much higher energy than the VH: the amount of energy related to changes is roughly ten times higher for the former than for the latter.
Regarding change time points, in each figure, we can notice a pattern of peaks which are common to all approximation levels. They are time points:
\begin{list}{}{}
\item (a) 14, 43, 54, and 58 by the coefficients' squared deviations on the average VV image;
\item (b) 14, 43, 54, and 58 by the coefficients' squared deviations on the consecutive VV images;
\item (c) 14, 38, 41, 43, 54, 56, and 58 by the coefficients' squared deviations on the average VH image;
\item (d) 14, 38, 41, 43, 54-58 by the coefficients' squared deviations on the consecutive VH images;
\item (e) 14, 43, 54, and 58 by the coefficients' squared deviations on the average combined channels image; and
\item (f) 14, 43, 54, and 58 by the coefficients' squared deviations on the consecutive combined channels images.
\end{list}
\begin{figure}[htp!]
\noindent\includegraphics[width=18pc,height=4pc]{J1_VV_squared_meandev}(a)
\includegraphics[width=18pc,height=4pc]{consecdif_J1_VV_squared_meandev}(b)
\includegraphics[width=18pc,height=4pc]{J2_VV_squared_meandev}(c)
\includegraphics[width=18pc,height=4pc]{consecdif_J2_VV_squared_meandev}(d)
\includegraphics[width=18pc,height=4pc]{J3_VV_squared_meandev}(e)
\includegraphics[width=18pc,height=4pc]{consecdif_J3_VV_squared_meandev}(f)
\includegraphics[width=18pc,height=4pc]{J4_VV_squared_meandev}(g)
\includegraphics[width=18pc,height=4pc]{consecdif_J4_VV_squared_meandev}(h)
\caption{{\sc VV Polarization Channel} Series of squared deviations $d(m)$ and $t(m)$. The red horizontal line represents the median value and the yellow horizontal line represents their median plus two times their absolute median deviation. $d(m)$ - Approximation Levels: (a) $J=1$; (c) $J=2$; (e) $J=3$; (g) $J=4$. $t(m)$ - Approximation Levels: (b) $J=1$; (d) $J=2$; (f) $J=3$; (h) $J=4$.
}
\label{F:squared_J1-4_VV}
\end{figure}
\begin{figure}[htp!]
\noindent\includegraphics[width=18pc,height=4pc]{J1_VH_squared_meandev}(a)
\includegraphics[width=18pc,height=4pc]{consecdif_J1_VH_squared_meandev}(b)
\includegraphics[width=18pc,height=4pc]{J2_VH_squared_meandev}(c)
\includegraphics[width=18pc,height=4pc]{consecdif_J2_VH_squared_meandev}(d)
\includegraphics[width=18pc,height=4pc]{J3_VH_squared_meandev}(e)
\includegraphics[width=18pc,height=4pc]{consecdif_J3_VH_squared_meandev}(f)
\includegraphics[width=18pc,height=4pc]{J4_VH_squared_meandev}(g)
\includegraphics[width=18pc,height=4pc]{consecdif_J4_VH_squared_meandev}(h)
\caption{{\sc VH Polarization Channel} Series of squared deviations $d(m)$ and $t(m)$. The red horizontal line represents the median value and the yellow horizontal line represents their median plus two times their absolute median deviation. $d(m)$ - Approximation Levels: (a) $J=1$; (c) $J=2$; (e) $J=3$; (g) $J=4$. $t(m)$ - Approximation Levels: (b) $J=1$; (d) $J=2$; (f) $J=3$; (h) $J=4$.
}
\label{F:squared_J1-4_VH}
\end{figure}
\begin{figure}[htp!]
\noindent\includegraphics[width=18pc,height=4pc]{J1_euclid_squared_meandev}(a)
\noindent\includegraphics[width=18pc,height=4pc]{consecdif_J1_euclid_squared_meandev}(b)
\includegraphics[width=18pc,height=4pc]{J2_euclid_squared_meandev}(c)
\includegraphics[width=18pc,height=4pc]{consecdif_J2_euclid_squared_meandev}(d)
\includegraphics[width=18pc,height=4pc]{J3_euclid_squared_meandev}(e)
\includegraphics[width=18pc,height=4pc]{consecdif_J3_euclid_squared_meandev}(f)
\includegraphics[width=18pc,height=4pc]{J4_euclid_squared_meandev}(g)
\includegraphics[width=18pc,height=4pc]{consecdif_J4_euclid_squared_meandev}(h)
\caption{{\sc Combined Channels} Series of squared deviations $d(m)$ and $t(m)$. The red horizontal line represents the median value and the yellow horizontal line represents their median plus two times their absolute median deviation. $d(m)$ - Approximation Levels: (a) $J=1$; (c) $J=2$; (e) $J=3$; (g) $J=4$. $t(m)$ - Approximation Levels: (b) $J=1$; (d) $J=2$; (f) $J=3$; (h) $J=4$.
}
\label{F:squared_J1-4_euclid}
\end{figure}
\begin{figure}[htp!]
\noindent\includegraphics[width=18pc,height=3pc]{J3_VV_squared_meandev}(a)
\includegraphics[width=18pc,height=3pc]{consecdif_J3_VV_squared_meandev}(b)
\includegraphics[width=18pc,height=3pc]{J3_VV_sqrdif_change_locations}(c)
\includegraphics[width=18pc,height=3pc]{consecdif_J3_VV_sqrdif_change_locations}(d)
\includegraphics[width=18pc,height=3pc]{J3_VV_sqrdif_lowest_cor_locations}(e)
\includegraphics[width=18pc,height=3pc]{consecdif_J3_VV_sqrdif_lowest_cor_locations}(f)
\includegraphics[width=18pc,height=3pc]{perc01_J3_VV_sqrdif_change_locations}(g)
\includegraphics[width=18pc,height=3pc]{perc01_consecdif_J3_VV_sqrdif_change_locations}(h)
\includegraphics[width=18pc,height=3pc]{perc01_J3_VV_sqrdif_lowest_cor_locations}(i)
\includegraphics[width=18pc,height=3pc]{perc01_consecdif_J3_VV_sqrdif_lowest_cor_locations}(j)
\caption{{\sc VV Polarization Channel} Level $J=3$ series of squared mean deviations: (a) ${\textbf d}(m)$; (b) ${\textbf t}(m)$. The red horizontal line represents the median value; the yellow one, two absolute median deviations beyond the median. Squared approximation coefficient deviations: 0.1\% highest absolute correlations - (c) ${\textbf d}(m)$; (d) ${\textbf t}(m)$; 0.1\% smallest absolute correlations - (e) ${\textbf d}(m)$; (f) ${\textbf t}(m)$; 0.01\% highest absolute correlations - (g) ${\textbf d}(m)$; (h) ${\textbf t}(m)$; 0.01\% smallest absolute correlations - (i) ${\textbf d}(m)$; (j) ${\textbf t}(m)$.
}
\label{F:squared_meandev_J3_VV}
\end{figure}
\begin{figure}[htp!]
\noindent\includegraphics[width=18pc,height=3pc]{J3_VH_squared_meandev}(a)
\includegraphics[width=18pc,height=3pc]{consecdif_J3_VH_squared_meandev}(b)
\includegraphics[width=18pc,height=3pc]{J3_VH_sqrdif_change_locations}(c)
\includegraphics[width=18pc,height=3pc]{consecdif_J3_VH_sqrdif_change_locations}(d)
\includegraphics[width=18pc,height=3pc]{J3_VH_sqrdif_lowest_cor_locations}(e)
\includegraphics[width=18pc,height=3pc]{consecdif_J3_VH_sqrdif_lowest_cor_locations}(f)
\includegraphics[width=18pc,height=3pc]{perc01_J3_VH_sqrdif_change_locations}(g)
\includegraphics[width=18pc,height=3pc]{perc01_consecdif_J3_VH_sqrdif_change_locations}(h)
\includegraphics[width=18pc,height=3pc]{perc01_J3_VH_sqrdif_lowest_cor_locations}(i)
\includegraphics[width=18pc,height=3pc]{perc01_consecdif_J3_VH_sqrdif_lowest_cor_locations}(j)
\caption{{\sc VH Polarization Channel} Level $J=3$ series of squared mean deviations: (a) ${\textbf d}(m)$; (b) ${\textbf t}(m)$. The red horizontal line represents the median value; the yellow one, two absolute median deviations beyond the median. Squared approximation coefficient deviations: 0.1\% highest absolute correlations - (c) ${\textbf d}(m)$; (d) ${\textbf t}(m)$; 0.1\% smallest absolute correlations - (e) ${\textbf d}(m)$; (f) ${\textbf t}(m)$; 0.01\% highest absolute correlations - (g) ${\textbf d}(m)$; (h) ${\textbf t}(m)$; 0.01\% smallest absolute correlations - (i) ${\textbf d}(m)$; (j) ${\textbf t}(m)$.
}
\label{F:squared_meandev_J3_VH}
\end{figure}
\begin{figure}[htp!]
\noindent\includegraphics[width=18pc,height=3pc]{J3_euclid_squared_meandev}(a)
\includegraphics[width=18pc,height=3pc]{consecdif_J3_euclid_squared_meandev}(b)
\includegraphics[width=18pc,height=3pc]{J3_euclid_sqrdif_change_locations}(c)
\includegraphics[width=18pc,height=3pc]{consecdif_J3_euclid_sqrdif_change_locations}(d)
\includegraphics[width=18pc,height=3pc]{J3_euclid_sqrdif_lowest_cor_locations}(e)
\includegraphics[width=18pc,height=3pc]{consecdif_J3_euclid_sqrdif_lowest_cor_locations}(f)
\includegraphics[width=18pc,height=3pc]{perc01_J3_euclid_sqrdif_change_locations}(g)
\includegraphics[width=18pc,height=3pc]{perc01_consecdif_J3_euclid_sqrdif_change_locations}(h)
\includegraphics[width=18pc,height=4pc]{perc01_J3_euclid_sqrdif_lowest_cor_locations}(i)
\includegraphics[width=18pc,height=4pc]{perc01_consecdif_J3_euclid_sqrdif_lowest_cor_locations}(j)
\caption{{\sc Combined Channels} Level $J=3$ series of squared mean deviations: (a) ${\textbf d}(m)$; (b) ${\textbf t}(m)$. The red horizontal line represents the median value; the yellow one, two absolute median deviations beyond the median. Squared approximation coefficient deviations: 0.1\% highest absolute correlations - (c) ${\textbf d}(m)$; (d) ${\textbf t}(m)$; 0.1\% smallest absolute correlations - (e) ${\textbf d}(m)$; (f) ${\textbf t}(m)$; 0.01\% highest absolute correlations - (g) ${\textbf d}(m)$; (h) ${\textbf t}(m)$; 0.01\% smallest absolute correlations - (i) ${\textbf d}(m)$; (j) ${\textbf t}(m)$.
}
\label{F:squared_meandev_J3_euclid}
\end{figure}
Figures \ref{F:squared_meandev_J3_VV}-\ref{F:squared_meandev_J3_euclid} present a timewise comparison between the overall energy variations, $d(m)$ and $t(m)$, and their respective individual coefficients. Each figure has ten panels. Panels on the left and right deal with the first and second change detection methods, respectively. A full description is given in each figure's caption. The overall conclusion from these figures is that, as expected, there are temporal variations between the methods regarding change detection (Panels (a)-(b)). The correlation screening detects the most relevant indices, as well as the least relevant ones (Panels (c)-(f)).
These conclusions hold for analyses based upon VV, VH or combined polarizations, but the VV polarization signal is much stronger than VH's. A slight advantage is perceived for the ${\textbf d}(m)$ method as opposed to ${\textbf t}(m)$'s. This makes sense, since we are dealing here with data from a forest region over a long time, and seasonal changes are perceived more easily by the first proposed method.
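The flagging rule behind these figures is compact enough to state in code. The sketch below is our illustration with toy data (names and numbers are not from the actual pipeline); only the median-plus-two-deviations rule is taken from the text:
\begin{verbatim}
import numpy as np

# Sketch of the thresholding rule used in the figures above: flag image m
# as a change point when the squared-deviation series exceeds the median
# plus two absolute median deviations. Toy data; only the rule is real.
def flag_changes(d):
    med = np.median(d)
    mad = np.median(np.abs(d - med))
    return np.flatnonzero(d > med + 2 * mad) + 1   # 1-based image indices

m = np.arange(85)
d = 0.10 + 0.01 * np.sin(m)        # smooth stand-in for a d(m) series
d[[13, 42, 53, 57]] += 2.0         # injected changes (0-based positions)
print(flag_changes(d))             # -> [14 43 54 58], cf. item (e) above
\end{verbatim}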
\FloatBarrier
\begin{table}[ht!]
\caption{Absolute correlation thresholds and number of selected coefficients for $n=85$ multi-temporal images of $1200\times 1000$ pixels. Correlation was computed between approximation coefficients and approximation total energy at level $J=2$ for each image; each quantile level retains a fraction $1-\text{level}$ of the $1.2\times 10^{6}$ coefficients.}\label{tabela_quantis}
\begin{tabular}{c|cccr||c|cccr}
\hline
Qtile &\multicolumn{3}{c}{\sc Correlation Thresh}& Selected &Qtile &\multicolumn{3}{c}{\sc Correlation Thresh}& Selected \\
Level &VV&VH&Comb& Coeffs &Level &VV&VH&Comb& Coeffs \\
\hline
0.50&0.281&0.199&0.292&600000&0.99&0.647&0.582&0.668&12000\\
0.55&0.300&0.218&0.311&540000&0.991&0.651&0.587&0.673&10800\\
0.60&0.319&0.239&0.331&480000& 0.992&0.656&0.594&0.678&9600\\
0.65&0.339&0.260&0.352&420000& 0.993&0.662&0.601&0.684&8400\\
0.70&0.361&0.282&0.375&360000& 0.994&0.669&0.608&0.690&7200\\
0.75&0.385&0.307&0.399&300000& 0.995&0.676&0.617&0.697&6000\\
0.80&0.412&0.335&0.428&240000& 0.996&0.685&0.626&0.705&4800\\
0.85&0.445&0.368&0.462&180000& 0.997&0.696&0.639&0.716&3600\\
0.90&0.487&0.410&0.506&120000& 0.998&0.710&0.654&0.729&2400\\
0.95&0.549&0.472&0.570&60000& 0.999&0.731&0.676&0.748&1200\\
\hline
\end{tabular}
\end{table}
\FloatBarrier
\begin{figure}[htp!]
\noindent\includegraphics[width=11pc]{J2_VV_image_cor}(a)
\noindent\includegraphics[width=11pc]{J2_VH_image_cor}(b)
\noindent\includegraphics[width=11pc]{J2_euclid_image_cor}(c)
\includegraphics[width=11pc]{J3_VV_image_cor}(d)
\includegraphics[width=11pc]{J3_VH_image_cor}(e)
\includegraphics[width=11pc]{J3_euclid_image_cor}(f)
\caption{ Absolute Correlation matrices $|{\textbf R}|$ between mean-corrected squared coefficients at approximation levels and overall mean-corrected total energy. $J=2$ (a)-(c); $J=3$ (d)-(f). $n=85$ multi-temporal images of $1200\times 1000$ ($J=10$). The color bar on the right gives the magnitude of correlation at all positions. (a) {\sc VV Polarization Channel} $J=2$. (b) {\sc VH Polarization Channel} $J=2$. (c) {\sc Combined Channels} $J=2$. (d) {\sc VV Polarization Channel} $J=3$. (e) {\sc VH Polarization Channel} $J=3$. (f) {\sc Combined Channels} $J=3$.}
\label{F:image_corr_J2J3}
\end{figure}
\begin{figure}[htp!]
\noindent\includegraphics[width=11pc]{consecdif_J2_VV_image_cor}(a)
\noindent\includegraphics[width=11pc]{consecdif_J2_VH_image_cor}(b)
\noindent\includegraphics[width=11pc]{consecdif_J2_euclid_image_cor}(c)
\includegraphics[width=11pc]{consecdif_J3_VV_image_cor}(d)
\includegraphics[width=11pc]{consecdif_J3_VH_image_cor}(e)
\includegraphics[width=11pc]{consecdif_J3_euclid_image_cor}(f)
\caption{ Absolute Correlation matrices $|{\textbf R}|$ between consecutive log-images' squared coefficients at approximation levels and overall mean-corrected total energy. Levels $J=2$ (a)-(c); $J=3$ (d)-(f). $n=85$ multi-temporal images of $1200\times 1000$. The color bar on the right gives the magnitude of correlation at all positions. (a) {\sc VV Polarization Channel} $J=2$. (b) {\sc VH Polarization Channel} $J=2$. (c) {\sc Combined Channels} $J=2$. (d) {\sc VV Polarization Channel} $J=3$. (e) {\sc VH Polarization Channel} $J=3$. (f) {\sc Combined Channels} $J=3$.}
\label{F:image_corr_J2J3_consec}
\end{figure}
Figures \ref{F:image_corr_J2J3}-\ref{F:image_corr_J2J3_consec} show the absolute correlation images for the VV, VH and combined channels for levels $J=2$ and $J=3$. We can notice that high correlation coefficients for $J=2$ are high correlation coefficients for $J=3$ as well. Moreover, there is a clear spatial connection between polarizations and between $d(m)$- and $t(m)$-based analyses. On the other hand, correlations are more efficiently segregated when we move: from $J=2$ to $J=3$; from VH to VV polarization; or from $t(m)$ to $d(m)$.
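For concreteness, the screening step behind these correlation images can be sketched as follows (our illustration; array shapes, toy data, and names are assumptions, not the actual implementation):
\begin{verbatim}
import numpy as np

# Correlation screening sketch: correlate, at every spatial position, the
# time series of mean-corrected squared coefficients with the overall
# total-energy series, then keep the top quantile of |R| (cf. the table).
def correlation_image(C, E):
    # C: (n, H, W) squared approximation coefficients for n images
    # E: (n,) total-energy series
    Cc = C - C.mean(axis=0)
    Ec = E - E.mean()
    num = np.tensordot(Ec, Cc, axes=(0, 0))
    den = np.sqrt((Cc ** 2).sum(axis=0) * (Ec ** 2).sum())
    return np.abs(num / den)                  # |R|, shape (H, W)

rng = np.random.default_rng(1)
t = rng.normal(size=85)                       # common temporal factor
C = rng.normal(size=(85, 120, 100))           # toy stand-in
C[:, :20, :20] += 2.0 * t[:, None, None]      # region co-varying in time
E = C.sum(axis=(1, 2))                        # total-energy series
R = correlation_image(C, E)
mask = R >= np.quantile(R, 0.999)             # 0.1% highest correlations
print(mask.sum(), mask[:20, :20].sum())       # selected pixels cluster there
\end{verbatim}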
\section{Discussion}\label{section_discussion}
We present a novel way of detecting changes in multi-temporal satellite images, WECS. The procedure is based on wavelet energies from both the estimated individual coefficients and the whole image approximation. It makes use of correlation screening for ultra-high dimensional data. The proposed method's performance is shown using both synthetic and real data, and it yields spatio-temporal change points. Its performance with or without the images' pre-treatment is statistically identical, but the computational cost of the proposed method is 180 times smaller than the pre-treatment's. The method can therefore be used on untreated images with equivalent performance at a fraction of the computational cost. Because of its reliance on wavelet representation and correlation screening, it is sparse, very fast and scalable. Finally, it is easily adapted to be updatable, so that real-time change detection is feasible even with a portable computer.
\section*{Acknowledgement}
RF acknowledges support by FAPESP grant 2016/24469-6. AP acknowledges support by FAPESP grant 2018/04654-9 and CNPq grants 309230/2017-9 and 310991/2020-0.
\bibliographystyle{agsm}
\section{Introduction}
The Sachdev-Ye-Kitaev (SYK) models \cite{SaYe93,Kit.KITP.2} of fermions with random interactions have been the focus of much recent attention in both the quantum gravity and the condensed matter literature. The majority of this work has focused on the model with Majorana fermions, which has no globally conserved charge, other than the Hamiltonian itself. In this paper, we direct our attention to the model with $N\gg 1$ complex fermions \cite{SS15}, a.k.a. the complex SYK model:
\begin{equation}
\hat{H} = \sum_{\substack{j_1<\ldots<j_{q/2},\\ k_1<\ldots<k_{q/2}}} J_{j_1\ldots j_{q/2}\,, k_1 \ldots k_{q/2}} \antisym{\hat{\psi}^\dagger_{j_1} \ldots \hat{\psi}^\dagger_{j_{q/2}} \hat{\psi}_{k_1} \ldots \hat{\psi}_{k_{q/2}}}
\label{eq: Hamiltonian}
\end{equation}
where $\antisym{\cdots}$ denotes the antisymmetrized product of operators.
The couplings $J_{j_1\ldots j_{q/2}\,, k_1 \ldots k_{q/2}}$ are independent random complex variables with zero mean and the following variance:
\begin{equation}
\bar{\left| J_{j_1\ldots j_{q/2}\,, k_1 \ldots k_{q/2}} \right|^2}
= J^2\, \frac{(q/2)!\,(q/2-1)!}{N^{q-1}} \,.
\end{equation}
One advantage of the antisymmetrized Hamiltonian is that it makes the particle-hole symmetry manifest.
For example at $q=4$, the Hamiltonian has the following form
\begin{equation}
\qquad \hat{H}= \sum_{j_1<j_{2}\,, k_1<k_{2}} J_{j_1j_{2}, k_1 k_{2} }\Big(\hat{\psi}^\dagger_{j_1} \hat{\psi}^\dagger_{j_{2}} \hat{\psi}_{k_1} \hat{\psi}_{k_{2}}+\hat{C}_{j_1j_2,k_1k_2}
\Big)\,,
\label{phsymham}
\end{equation}
where $\hat{C}_{j_1j_2,k_1k_2}$ collects various terms arising from anti-commuting fermion operators; more explicitly,
\begin{equation}
\hat{C}_{j_1j_2,k_1k_2} =
\frac{1}{2}\Bigl(\delta_{j_{1}k_{1}}\hat{\psi}^\dagger_{j_{2}} \hat{\psi}_{k_2}-\delta_{j_{1}k_{2}}\hat{\psi}^\dagger_{j_{2}} \hat{\psi}_{k_1}-\delta_{j_{2}k_{1}}\hat{\psi}^\dagger_{j_{1}} \hat{\psi}_{k_2}+
\delta_{j_{2}k_{2}}\hat{\psi}^\dagger_{j_{1}} \hat{\psi}_{k_1}
+ \frac{1}{2} \delta_{j_1k_2} \delta_{j_2k_1} -\frac{1}{2} \delta_{j_1k_1} \delta_{j_2k_2}\Bigr)\,.
\end{equation}
Without the $\hat{C}_{j_1j_2,k_1k_2}$ term, the Hamiltonian would not be invariant under the particle-hole symmetry $\hat{\psi}^\dagger_j \leftrightarrow \hat{\psi}_j$. Using the same notation, we define the globally conserved $\operatorname{U}(1)$ charge $\hat{Q}$ by
\begin{equation}
\hat{Q} = \sum_{j} \antisym{\hat{\psi}_j^\dagger \hat\psi_j}
=\sum_{j} \hat{\psi}_j^\dagger \hat\psi_j - \frac{N}{2} \,.
\label{chargeoper}
\end{equation}
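Both antisymmetrized formulas above are easy to verify by exact diagonalization at small $N$. The sketch below is our own check (the Jordan--Wigner encoding and all helper names are ours); it assumes, consistently with Eq.~\eqref{chargeoper}, that $\antisym{\cdots}$ is normalized by $1/n!$ with permutation signatures:
\begin{verbatim}
import numpy as np
from math import factorial
from itertools import combinations, permutations

# Check antisym{psid psid psi psi} = psid psid psi psi + C for small N,
# using a Jordan-Wigner representation of the fermions (our verification).
N = 4
I2, Z = np.eye(2), np.diag([1.0, -1.0])
sm = np.array([[0.0, 1.0], [0.0, 0.0]])            # annihilates |1>
def kron_chain(ops):
    out = np.eye(1)
    for o in ops:
        out = np.kron(out, o)
    return out
psi = [kron_chain([Z]*j + [sm] + [I2]*(N-1-j)) for j in range(N)]
psid = [p.T for p in psi]                          # real matrices here

def sgn(p):                                        # permutation signature
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def antisym(ops):
    acc = sum(sgn(p) * np.linalg.multi_dot([ops[i] for i in p])
              for p in permutations(range(len(ops))))
    return acc / factorial(len(ops))

d = lambda a, b: float(a == b)
for (j1, j2), (k1, k2) in [(a, b) for a in combinations(range(N), 2)
                                  for b in combinations(range(N), 2)]:
    C = 0.5 * (d(j1,k1)*psid[j2]@psi[k2] - d(j1,k2)*psid[j2]@psi[k1]
             - d(j2,k1)*psid[j1]@psi[k2] + d(j2,k2)*psid[j1]@psi[k1]
             + (0.5*d(j1,k2)*d(j2,k1) - 0.5*d(j1,k1)*d(j2,k2))*np.eye(2**N))
    lhs = antisym([psid[j1], psid[j2], psi[k1], psi[k2]])
    rhs = psid[j1] @ psid[j2] @ psi[k1] @ psi[k2] + C
    assert np.allclose(lhs, rhs)
\end{verbatim}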
The charge is related to the ultraviolet (UV) asymmetry of the Green function
\begin{equation}
G(\tau_1,\tau_2) = - \langle {\rm T} \hat{\psi}(\tau_1)\hat\psi^\dagger (\tau_2) \rangle\,, \quad G(0^+) = -\frac{1}{2} + \mathcal{Q} \,, \quad G(0^-) = \frac{1}{2} + \mathcal{Q}\,,\quad \mathcal{Q}=\frac{\langle \hat{Q}\rangle }{N} \,.
\label{charge Green function}
\end{equation}
In the infrared (IR), the spectral asymmetry is characterized by the long-time behavior of the Green function
\begin{equation}
G_{\beta=\infty}(\pm \tau)
\sim \mp e^{\pm \pi \calE} \tau^{-2\Delta} \qquad \text{for} \quad \tau \gg J^{-1}\,,
\label{GE}
\end{equation}
or equivalently the small frequency behavior
\begin{equation}
G_{\beta=\infty}(\pm i \omega) \sim \mp i e^{\mp i \theta} \omega^{2\Delta -1} \qquad \text{for} \quad 0< \omega \ll J\,,
\end{equation}
where $\beta= T^{-1}$ is the inverse temperature, $\Delta = 1/q$ is the scaling dimension of the fermion operator, $\calE \in (-\infty,+\infty)$ and $\theta\in (-\Delta \pi, \Delta \pi)$ are the spectral asymmetry parameters
related by the following formula
\begin{equation}
e^{2 \pi \calE} = \frac{\sin(\pi \Delta + \theta)}{\sin(\pi \Delta - \theta)}\,.
\label{calEtheta}
\end{equation}
Note that the value of $(\calE, \theta)$ cannot be fixed by the IR equations, so there is a one-parameter family of solutions in the scaling limit \cite{SaYe93}.
Ultimately, the actual value of $(\calE,\theta)$ is set by the value of specific charge $\mathcal{Q}$. Although charge is a UV property of the system, the relationship between $(\calE,\theta)$ and $\mathcal{Q}$ is universal and independent of UV details \cite{GPS01,SS15}:
\begin{equation}
\mathcal{Q} = - \frac{\theta}{\pi} - \left( \frac{1}{2} -\Delta \right) \frac{\sin (2 \theta)}{\sin (2\pi \Delta)}\,.
\label{charge theta intro}
\end{equation}
We will provide new derivations of Eq.~(\ref{charge theta intro}) here (see Eqs.~(\ref{charge theta}), (\ref{dQdtheta}) and Section~\ref{sec:bulk}). This universal relation is analogous to the Luttinger relation of Fermi liquid theory, which relates the size of the Fermi surface (an IR quantity) to the total charge (a UV quantity).
The form of Eq.~(\ref{GE}) also applies to fermionic fields with unit $\operatorname{U}(1)$ charge in asymptotically AdS$_2$ black holes, as was computed by Faulkner et al. \cite{Faulkner09}; the parameter $\calE$ is then a dimensionless measure of the electric field near the AdS$_2$ horizon \cite{SS15} (see Appendix~\ref{app:em} and Eq.~(\ref{defE})). For both SYK models and black holes, fields with $\operatorname{U}(1)$ charge $p$ have the asymmetry factor $e^{\pm \pi p \calE}$.
Another key feature of the SYK models is the presence of a non-zero entropy in the zero temperature limit \cite{GPS01}:
\begin{equation}
\lim_{\beta \rightarrow \infty}\lim_{N\to\infty} \frac{S(N, N\mathcal{Q}, \beta^{-1})}{N} = \calS(\mathcal{Q}) > 0\,.
\label{defS0}
\end{equation}
The function $\calS(\mathcal{Q})$ is {\it universal}, in that it is determined only by the structure of the low energy conformal theory, and is independent of the UV perturbations to the Hamiltonian which are irrelevant to the low energy.
Such a zero temperature entropy is {\it not\/} associated with an exponentially large ground state degeneracy. Instead, it signals an exponentially small many-body energy level spacing down to the ground state; see Section~\ref{sec:dos}. For each given $N$, the entropy does go to zero at exponentially low temperatures.
We will present a new derivation of $\calS(\mathcal{Q})$ in Section~\ref{sec:bulk} using a two dimensional bulk picture involving massive Dirac fermions on the hyperbolic plane.
At finite but sufficiently low temperatures, the dynamics of the Majorana SYK model is governed by a collective mode with the Schwarzian action \cite{Kit.KITP.2,MS16-remarks,KS17-soft}. An analogous effective theory of the complex SYK model also includes a $\operatorname{U}(1)$ mode \cite{Davison17}
\begin{equation}
I_{\text{eff}} [\varphi, \lambda]
= \frac{NK}{2} \int_0^{\beta} d \tau
\bigl(\lambda'(\tau) + i\calE\varphi'(\tau)\bigr)^2
- \frac{N\gamma}{4 \pi^2} \int_0^{\beta} d \tau \,
\operatorname{Sch}\left( \tan\frac{\varphi(\tau)}{2},\, \tau\right),
\label{Seff}
\end{equation}
where $\varphi(\tau)$ is a
monotonic time reparameterization obeying $\varphi(\tau + \beta) = \varphi(\tau) + 2\pi$, and $\lambda (\tau)$ is a phase field obeying $\lambda (\tau + \beta) = \lambda (\tau) + 2 \pi n$ with integer winding number $n$ conjugate to the total charge $Q$. The notation
$\operatorname{Sch} ( f(x), x )$ stands for the Schwarzian derivative
\begin{equation}
\operatorname{Sch} ( f(x), x ) := \frac{f'''}{f'} - \frac{3}{2} \left( \frac{f''}{f'} \right)^2 \,.
\end{equation}
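As a quick consistency check of these conventions (a standard computation, not specific to this model), one can evaluate the Schwarzian term of Eq.~\eqref{Seff} on the thermal saddle $\varphi(\tau) = 2\pi\tau/\beta$. Using $\operatorname{Sch}(\tan u, u) = 2$ and the composition rule $\operatorname{Sch}(f\circ g,\tau) = g'^{\,2}\operatorname{Sch}(f,g) + \operatorname{Sch}(g,\tau)$,
\begin{equation}
\operatorname{Sch}\left(\tan\frac{\pi\tau}{\beta},\,\tau\right) = \frac{2\pi^2}{\beta^2}\,,
\qquad
-\frac{N\gamma}{4\pi^2}\int_0^\beta \operatorname{Sch}\left(\tan\frac{\varphi(\tau)}{2},\,\tau\right) d\tau
= -\frac{N\gamma}{2\beta}\,.
\end{equation}
Interpreting the on-shell action as $\beta$ times the correction to the free energy gives $\delta F = -N\gamma T^2/2$, hence $\delta S = N\gamma T$, in agreement with Eq.~(\ref{defgamma}) below.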
In this effective theory, the $\operatorname{U}(1)$ and $SL(2,R)$ degrees of freedom are actually decoupled, which can be demonstrated by the variable change $\lambda(\tau) =\tilde{\lambda}(\tau) +i\calE\bigl(\frac{2\pi}{\beta}\tau-\varphi(\tau)\bigr)$.
The action~\eqref{Seff} is characterized by two parameters, $K$ and $\gamma$, and these can be specified by their connection to thermodynamics. They depend upon the specific charge $\mathcal{Q}$ (or the chemical potential $\mu$), but this dependence has been left implicit.
The leading low temperature correction to the entropy in Eq.~(\ref{defS0}) at fixed $N$ and $Q$ is
\begin{equation}
\frac{S}{N} = \calS + \gamma\, \beta^{-1} + O(\beta^{-2})\,, \label{defgamma}
\end{equation}
and so $\gamma$ is the $T$-linear coefficient of the specific heat at fixed charge, as in Fermi liquid theory. The parameter $K$
is the zero temperature compressibility
\begin{equation}
K = \frac{d\mathcal{Q}}{d\mu} \qquad \text{at} \quad \beta=\infty\,. \label{defK}
\end{equation}
Unlike $\calS$ and $\calE$, the parameters $K$ and $\gamma$ are not universal, and depend upon details of the microscopic Hamiltonian and not just the low energy conformal field theory.
The zero temperature entropy in Eq.~(\ref{defS0}) and the pair of soft modes as in Eq.~(\ref{Seff}) are also pertinent to higher dimensional charged black holes with AdS$_2$ horizons, and this is discussed elsewhere \cite{SS10,SS15,JMDS16b,KJ16,HV16,Davison17,Gaikwad:2018dfc,Nayak:2018qej,Moitra:2018jqs,Chaturvedi:2018uov,Sachdev19,Moitra:2019bub}. Key aspects of such black holes are summarized in Appendix~\ref{app:em}. We also note that supersymmetric higher dimensional black holes with AdS$_2$ horizons obtained from string theory have integer values for $e^{N\calS}$ \cite{Dabholkar:2004yr,Dabholkar:2014ema}, as does the SYK model with $\mathcal{N}=2$ supersymmetry \cite{Fu:2016vas} (which we do not consider here).
An important property of both complex SYK and charged black holes with AdS$_2$ horizons is the following relationship between the entropy $\calS(\mathcal{Q})$ in Eq.~(\ref{defS0}) and the parameter $\calE$:
\begin{equation}
\frac{d \calS(\mathcal{Q})}{d\mathcal{Q}} = 2 \pi \calE\,. \label{dSdQ}
\end{equation}
This relationship first appeared in the study of SYK-like models by Georges et al. \cite{GPS01}, building upon large $N$ studies of the multichannel Kondo problem \cite{PGKS97}.
Independently, this relationship appeared as a general property of black holes with AdS$_2$ horizons in the work of Sen \cite{Sen05,Sen08}, where $\calE$ is identified with the electric field on the horizon \cite{Faulkner09}, as noted above. It was only later that the identity of this relationship between the SYK and black hole models was recognized \cite{SS15}. We will obtain a deeper understanding of Eq.~(\ref{dSdQ}) in the present paper, based on the global $\operatorname{U}(1)$ symmetry associated with the conserved charge and the locality of effective action.
Let us summarize our notation for thermodynamic quantities. These quantities are of the order of $N$: the total charge $Q$ (which is integer for $N$ even, and half-integer for $N$ odd), action $I$, entropy $S$, and the associated free energy and grand potential. $N$-independent quantities include: the temperature $T=\beta^{-1}$, chemical potential $\mu$, spectral asymmetry parameter $\calE$, specific charge $\mathcal{Q}$, zero-temperature entropy $\calS$, charge compressibility $K$, and the $T$-linear coefficient in the specific heat $\gamma$. Except the first two, they are defined in the large $N$ limit.
\subsection{Outline of the paper}
We begin Section~\ref{sec:charge} by setting up the formalism for the complex SYK model as a path integral over the two-time Green function and self energy. We introduce a definition of the conserved charge $\mathcal{Q}$ suitable for this formalism and then derive the known universal relation between $\mathcal{Q}$ and $\calE$ (Eq.~(\ref{charge theta})).
In Section~\ref{sec:generalform}, we find a general form of a local effective action $I_{\text{eff}}$ and derive Eq.~(\ref{Seff}).
Section~\ref{sec:thermo} is concerned with thermodynamic quantities and a discussion of what parameters come from the UV. In Section~\ref{sec:dos}, we evaluate the path integral over $\lambda$ and $\varphi$ with action $I_{\text{eff}}$ exactly, which yields new results for the many-body density of states.
Section~\ref{secRG} sets up a renormalization theory of the complex SYK model. This will enable us to obtain another derivation of the relationship between the specific charge $\mathcal{Q}$ and the spectral asymmetry $\calE$.
In Section~\ref{sec:compress}, we turn to the calculation of the parameters of the effective action, in particular charge compressibility $K$. We present three numerical computations that yield values of $K$ in good agreement with each other. These computations and our analysis show that all energy scales contribute to the charge compressibility. A low energy analysis based on linear coupling, mentioned in Section~\ref{sec:thermo}, or conformal perturbation theory (see Appendix~\ref{app:GrishaK}) does not yield the correct value of $K$, even though such methods work \cite{MS16-remarks,KS17-soft} for the Schwarzian mode.
Section~\ref{sec:bulk} presents a two dimensional bulk derivation of the zero temperature entropy of the complex SYK model. We show that the $\calE$-dependent value of the zero temperature entropy per fermion $\calS$ can be obtained from a Euclidean path integral over massive Dirac fermions on hyperbolic plane ${\rm H^2}$. We show that the appropriate quantity is the ratio of fermionic determinants with different boundary conditions on the boundary of ${\rm H^2}$. Another bulk interpretation of the entropy appears in Appendix~\ref{app:em}, where we recall the connection to higher-dimensional black holes. In $d+2$ dimensions ($d \geq 2$), the AdS$_2$ arises as a factor in the near-horizon geometry of a near-extremal charged black hole. In this picture, $\calS$ is related to the horizon area in the $d$ extra dimensions, and, as we noted above, this $\calS$ also obeys the differential relation~(\ref{dSdQ}).
\section{Low temperature properties}
\label{sec:charge}
In this section, we analyze the complex SYK model based on the $(G,\Sigma)$ action. We provide a general definition of charge in this framework and prove its universal relation to the IR asymmetry of the Green function. Furthermore, we find the general form of effective action and evaluate the path integral over the low energy fluctuations, which yield new results for the many-body density of states.
\subsection{Preliminaries}
We start with a review of the basics.
For convenience, we measure time in units of $J^{-1}$, which is equivalent to setting $J$ to 1. For the Hamiltonian \eqref{eq: Hamiltonian}, we may consider either the partition function for a fixed charge or the grand partition function. The latter can be obtained from the $(G,\Sigma)$ action:
\begin{equation}
\begin{aligned}
\frac{I}{N} = - \ln \det \left( - \sigma - \Sigma \right) &- \int d\tau_1 d\tau_2 \left[
\Sigma(\tau_1,\tau_2) G(\tau_2,\tau_1) + \frac{1}{q} \left(- G(\tau_1,\tau_2) G(\tau_2,\tau_1) \right)^{\frac{q}{2}}
\right] \,, \\
&\text{where} \quad \sigma(\tau_1,\tau_2) = \delta' (\tau_1-\tau_2) - \mu \delta (\tau_1-\tau_2) \,.
\end{aligned}
\label{Gsigma action}
\end{equation}
The Schwinger-Dyson equations are as follows:
\begin{equation}
-\underbrace{(\Sigma+\sigma)}_{\tilde{\Sigma}} G = 1, \quad \Sigma(\tau_1,\tau_2) = G(\tau_1,\tau_2)^{\frac{q}{2}} (-G(\tau_2,\tau_1))^{\frac{q}{2}-1} \,. \label{SchwDyseq}
\end{equation}
The general idea of solving these equations in the IR limit is to ignore $\sigma$, which is localized at short times. However, care should be taken because the Fourier transform of $\sigma$ contains the non-negligible, $\omega$-independent term $-\mu$. Fortunately, this term is absent from $\tilde{\Sigma}$, so we will use $G$ and $\tilde{\Sigma}$ as independent variables. Thus, $\sigma$ moves to the second equation in~\eqref{SchwDyseq}, where it can be safely ignored as the equation is solved in the time representation.
Since the IR equations do not depend on $\mu$, we get a family of solutions parametrized by a formally independent variable $\calE$. At zero temperature,
\begin{equation}
\begin{gathered}
G_{\beta=\infty}(\pm \tau)
= \mp e^{\pm \pi \calE} b^{\Delta} \tau^{-2\Delta}\,, \quad
\tilde{\Sigma}_{\beta=\infty}(\pm \tau)
= \mp e^{\pm \pi \calE} b^{1-\Delta} \tau^{-2 (1-\Delta)}
\qquad \text{for} \quad \tau \gg 1 \\
\text{where} \qquad b = \frac{1-2\Delta}{2\pi} \cdot \frac{\sin (2\pi \Delta)}{2 \cos \pi (\Delta+i\calE ) \cos \pi (\Delta-i \calE) } \,.
\end{gathered}
\label{zeroT Green}
\end{equation}
We can also introduce a parameter $\theta$ to characterize the spectral asymmetry in the frequency domain:
\begin{equation}
\begin{aligned}
G_{\beta=\infty}(\pm i \omega) &= \mp i e^{\mp i \theta} \sqrt{
\frac{\Gamma(2-2\Delta)}{\Gamma(2\Delta)}} b^{\Delta - \frac{1}{2}} \omega^{2\Delta -1} \\
\tilde{\Sigma}_{\beta=\infty}(\pm i \omega) &= \mp i e^{\pm i \theta} \sqrt{
\frac{\Gamma(2\Delta)}{\Gamma(2-2\Delta)}} b^{ \frac{1}{2}- \Delta } \omega^{1-2\Delta }
\end{aligned} \qquad \text{for} \quad 0 < \omega \ll 1 \,.
\label{zeroT Green frequency}
\end{equation}
The spectral asymmetry parameters $\calE$ and $\theta$ are related by the equations
\begin{equation}
e^{-2i\theta} =
\frac{\cos(\pi(\Delta+i\calE))}{\cos(\pi(\Delta-i\calE))}\,,\qquad
e^{2 \pi \calE} = \frac{\sin(\pi \Delta + \theta)}{\sin(\pi \Delta - \theta)}\,.
\label{calEtheta2}
\end{equation}
Using these relations, we can also express the prefactor $b$ as a function of $\theta$:
\begin{equation}
b= \frac{1-2\Delta}{2\pi } \cdot \frac{2 \sin (\pi \Delta + \theta) \sin (\pi \Delta - \theta)}{\sin (2\pi \Delta)} \,.
\end{equation}
The zero-temperature solutions can be extended to finite temperature:
\begin{equation}
\begin{aligned}
G(\tau) &\approx G_c(\tau) = - b^{\Delta } \left( \frac{\beta}{\pi} \sin \frac{\pi \tau}{\beta} \right)^{-2\Delta} e^{2\pi \calE \left( \frac{1}{2}-\frac{\tau}{\beta} \right)} \\
\tilde{\Sigma}(\tau) &\approx \tilde{\Sigma}_c(\tau) = - b^{1-\Delta } \left( \frac{\beta}{\pi} \sin \frac{\pi \tau}{\beta} \right)^{-2(1-\Delta)} e^{2\pi \calE \left( \frac{1}{2}-\frac{\tau}{\beta} \right)}
\end{aligned}
\qquad \begin{array}{c}\text{for} \quad 0<\tau<\beta,\\[5pt]
\tau \gg 1\,\, \text{and} \,\, \beta -\tau \gg 1 \,.
\end{array}
\label{Gconf}
\end{equation}
The subscript $c$ here means ``conformal''.
In the frequency domain with Matsubara frequencies $ \omega_n= \frac{2\pi}{\beta} \left( n + \frac{1}{2} \right) \ll 1$, the Green function and self energy have the following form,
\begin{equation}
\begin{aligned}
G(\pm i \omega_n) &\approx G_c(\pm i \omega_n) = \mp i e^{\mp i \theta} \sqrt{
\frac{\Gamma(2-2\Delta)}{\Gamma(2\Delta)}} \left( \frac{2\pi}{\beta} \sqrt{b} \right)^{2\Delta - 1} \frac{\Gamma\left( n + \frac{1}{2} + \Delta \pm i \calE \right)}{\Gamma
\left(
n + \frac{3}{2} - \Delta \pm i \calE
\right)
}\,, \\
\tilde{\Sigma}(\pm i \omega_n) & \approx\tilde{\Sigma}_c(\pm i \omega_n) = \mp i e^{\pm i \theta} \sqrt{
\frac{\Gamma(2\Delta)}{\Gamma(2-2\Delta)}} \left( \frac{2\pi}{\beta} \sqrt{b}\right)^{1-2\Delta }
\frac{\Gamma\left( n + \frac{3}{2} - \Delta \pm i \calE \right)}{\Gamma
\left(
n + \frac{1}{2} + \Delta \pm i \calE
\right)
}\,.
\end{aligned}
\end{equation}
Given these exact solutions to the IR equations, it remains to be checked whether they extrapolate at higher energies to a solution to the full UV equations which depend upon $\mu$. This has been examined numerically \cite{SaYe93,GPS99,GPS01,Fu:2016yrv,Azeyanagi2018}, and a consistent extrapolation exists for $|\mu| \lesssim 0.24$ \cite{Azeyanagi2018,planckian19}. The IR parameter $\calE$ can be determined as a smooth, odd function of the UV parameter $\mu$ over this regime.
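For orientation, a minimal version of such a numerical solution fits in a few lines. The sketch below is ours ($\mu$, $\beta$, the grid size, and the damping weight are illustrative choices, not values from the references); it iterates the Schwinger-Dyson equations \eqref{SchwDyseq} for $q=4$ in units $J=1$ and reads off $\mathcal{Q}$ from Eq.~\eqref{charge Green function}:
\begin{verbatim}
import numpy as np

# Damped iteration of the Schwinger-Dyson equations for complex SYK, q = 4,
# in units J = 1. Illustrative sketch with assumed parameter values.
q, mu, beta, M = 4, 0.05, 50.0, 2048
n = np.arange(M) - M // 2
w = 2 * np.pi / beta * (n + 0.5)              # fermionic Matsubara freqs
tau = beta * (np.arange(M) + 0.5) / M         # midpoint grid on (0, beta)
phase = np.exp(1j * np.outer(tau, w))         # e^{i w tau}

G_w = 1.0 / (1j * w + mu)                     # free-fermion initial guess
for it in range(1000):
    # G(tau): subtract the 1/(i w) tail, whose transform is -1/2 on (0,beta)
    G_t = (-0.5 + phase.conj().dot(G_w - 1.0 / (1j * w)) / beta).real
    # Sigma(tau) = G(tau)^{q/2} (-G(-tau))^{q/2-1}; -G(-tau) = G(beta-tau)
    S_t = G_t ** (q // 2) * G_t[::-1] ** (q // 2 - 1)
    S_w = phase.T.dot(S_t) * (beta / M)       # Sigma(i w)
    G_new = 1.0 / (1j * w + mu - S_w)
    if np.max(np.abs(G_new - G_w)) < 1e-9:
        break
    G_w = 0.5 * (G_w + G_new)                 # damped update
Q = 0.5 * (G_t[0] - G_t[-1])                  # (G(0+) + G(0-))/2
print(it, round(Q, 4))
\end{verbatim}
Sweeping $\mu$ then traces out the curve $\mathcal{Q}(\mu)$; at low temperatures the resulting $G(\tau)$ can be matched against the conformal form \eqref{Gconf} to extract $\calE$.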
In addition to the emergent reparameterization symmetry that is present in the low energy limit of the Majorana SYK model, the complex SYK model has an extra emergent symmetry related to phase fluctuation:
\begin{equation}
\begin{aligned}
G(\tau_1,\tau_2) &\rightarrow \varphi' (\tau_1)^{\Delta} \varphi' (\tau_2)^{\Delta} e^{i (\lambda(\tau_1)-\lambda(\tau_2))} G (\varphi(\tau_1),\varphi(\tau_2)) \\
\tilde{\Sigma}(\tau_1,\tau_2) &\rightarrow \varphi' (\tau_1)^{1-\Delta} \varphi' (\tau_2)^{1-\Delta} e^{i (\lambda(\tau_1)-\lambda(\tau_2))} \tilde{\Sigma} (\varphi(\tau_1),\varphi(\tau_2)) \\
\end{aligned}\,,
\label{symmetry}
\end{equation}
where $\varphi(\tau)$ is a monotonic time reparameterization with winding number $1$ and $\lambda(\tau)$ is a phase fluctuation with an arbitrary integer winding number. The symmetries are not exact in the presence of the $\sigma$ term in the $(G,\Sigma)$ action \eqref{Gsigma action}. To make this point more transparent, it is useful to rewrite the action in terms of $(G,\tilde{\Sigma})$:
\begin{equation}
\begin{aligned}
\frac{I}{N} = -\ln \det \bigl( - \tilde{ \Sigma} \bigr) - &\int d\tau_1 d\tau_2 \left[ \tilde{\Sigma}(\tau_1,\tau_2) G(\tau_2,\tau_1) + \frac{1}{q} \left( - G(\tau_1,\tau_2) G(\tau_2,\tau_1) \right)^{\frac{q}{2}} \right] \\
+& \int d\tau_1 d\tau_2~ \sigma(\tau_1,\tau_2) G(\tau_2,\tau_1)
\,.
\end{aligned}
\label{GSigma tilde action}
\end{equation}
Now the first line of the r.h.s. of Eq.~\eqref{GSigma tilde action} is invariant under the symmetry transformation \eqref{symmetry}, while the second line changes. This point will be further discussed in Section~\ref{sec:generalform}.
\subsection{Charge}
\label{section: charge}
For an explicit UV source field $\sigma$ (cf.\ Eq.~\eqref{Gsigma action}) that arises from a microscopic Hamiltonian, the charge is conventionally defined by the UV asymmetry of the Green function as in Eq.~\eqref{charge Green function}. In this section we will derive a formula for the charge in the $(G,\tilde{\Sigma})$ action framework for a general source field $\sigma$ (without assuming time translation symmetry), using ideas similar to those in Appendix C of Ref.~\cite{kitaev2006anyons}.
\subsubsection{``Flow'' of Green function $G$}
Let us consider the action \eqref{GSigma tilde action}
with
$\sigma(\tau_1,\tau_2)$ depending on both times, not just on
$\tau=\tau_1-\tau_2$. If $(G,\tilde{\Sigma})$ is stationary (i.e. satisfies the Schwinger-Dyson equations) and $\beta=\infty$, then
\begin{equation}
\int_{-\infty}^{+\infty} \left( \sigma (\tau_1,\tau_0) G(\tau_0,\tau_1) - \sigma(\tau_0,\tau_1) G(\tau_1,\tau_0) \right) d\tau_1 =0 \,.
\label{conserved current}
\end{equation}
This can be established by considering an infinitesimal variation $\delta\lambda(\tau)$ and the corresponding variations
\begin{equation}
\begin{aligned}
\delta G(\tau_1,\tau_2) &= i \left(\delta\lambda (\tau_1)-\delta\lambda (\tau_2) \right) G(\tau_1,\tau_2) \,, \\
\delta \tilde\Sigma(\tau_1,\tau_2) &= i \left( \delta\lambda (\tau_1)-\delta\lambda (\tau_2) \right) \tilde\Sigma(\tau_1,\tau_2) \,.
\end{aligned}
\end{equation}
Only the $\sigma G$ term in \eqref{GSigma tilde action} has a non-trivial variation, which is proportional to the l.h.s.\ of \eqref{conserved current} if $\delta\lambda(\tau)\propto\delta(\tau-\tau_0)$. On the other hand, the variation of the action must be zero since $(G,\tilde{\Sigma})$ is stationary.
Following the ideas in Appendix C of Ref.~\cite{kitaev2006anyons}, we may call
\begin{equation}
j(\tau_1,\tau_2)=\sigma(\tau_1,\tau_2)G(\tau_2,\tau_1) - \sigma(\tau_2,\tau_1)G(\tau_1,\tau_2) : \quad
\begin{tikzpicture}[scale=1, baseline={([yshift=-4pt]current bounding box.center)}]
\draw [thick, ->,>=stealth] (30pt,0pt) -- (120pt,0pt) ;
\filldraw (50pt,0pt) circle (1pt) node[below] {$\tau_1$} ;
\filldraw (100pt,0pt) circle (1pt) node[below] {$\tau_2$} ;
\draw[mid arrow] (50pt,0pt) .. controls (60pt,20pt) and (90pt,20pt).. (100pt,0pt);
\end{tikzpicture}
\label{eq: current0}
\end{equation}
the ``current'' flowing from $\tau_1$ to $\tau_2$.
To make a closer analogy to the aforementioned reference, let us substitute the expression $\sigma(\tau_1,\tau_2) = \tilde{\Sigma}(\tau_1,\tau_2) - G(\tau_1,\tau_2)^{\frac{q}{2}}(- G(\tau_2,\tau_1))^{\frac{q}{2}-1}$ (obtained from the Schwinger-Dyson equations) into the current formula:
\begin{equation}
j(\tau_1,\tau_2)=\tilde{\Sigma}(\tau_1,\tau_2)G(\tau_2,\tau_1) - \tilde{\Sigma}(\tau_2,\tau_1)G(\tau_1,\tau_2)\,.
\label{eq: current of G}
\end{equation}
Treating $G$ and $\tilde{\Sigma}$ as matrices indexed by $(\tau_1,\tau_2)$, we have $\tilde{\Sigma} = -G^{-1}$. If $G$ were a unitary quasidiagonal matrix, the results in Appendix C of Ref.~\cite{kitaev2006anyons} would apply, and certain quantities would be quantized. However, here the Green function $G$ has non-trivial IR asymptotics, violating the conditions for being quasidiagonal. Nevertheless, we will use ideas and definitions similar to those in the aforementioned reference.
Note that Eq.~\eqref{conserved current} can be interpreted as the conservation of the current at each point $\tau_0$:
\begin{equation}
\int_{-\infty}^{+\infty} j(\tau_1,\tau_0)d\tau_1 = 0,
\end{equation}
as illustrated in Fig.~\ref{fig: definition of Q}\,(a). It follows that the total current through a cross section $\tau_0$,
\begin{equation}
\mathcal{Q} = \int_{-\infty}^{\tau_0} d\tau_1 \int_{\tau_0}^{+\infty} d\tau_2 ~j(\tau_1,\tau_2) \,,
\label{eq: charge0}
\end{equation}
(see Fig.~\ref{fig: definition of Q}\,(b)) is independent of $\tau_0$. As explained below, this quantity is a natural generalization of the specific charge $Q/N$ to general sources. We may call $\mathcal{Q}$ the ``flow'' of the matrix $G$ as it depends solely on $G$ through Eq.~\eqref{eq: current of G} with $\tilde{\Sigma}=-G^{-1}$. We also remark that the definition of the flow does not rely on the time translation symmetry. That is, the source $\sigma(\tau_1,\tau_2)$, and the Green function $G(\tau_1,\tau_2)$, may depend on both $\tau_1$ and $\tau_2$ rather than just $\tau=\tau_1-\tau_2$.
\begin{figure}[t]
\center
\subfloat[Conservation law]{
\begin{tikzpicture}[scale=1.2, baseline={(current bounding box.center)}]
\draw [->,>=stealth,thick] (-20pt,0pt) -- (120pt,0pt) ;
\draw [white, dashed] (80pt,-3pt) -- (80pt,30pt);
\filldraw (50pt,0pt) circle (1pt);
\draw[near arrow] (50pt,0pt) .. controls (60pt,20pt) and (90pt,20pt).. (100pt,0pt);
\draw[near arrow] (50pt,0pt) .. controls (60pt,10pt) and (70pt,10pt).. (80pt,0pt);
\draw[far arrow] (20pt,0pt) .. controls (30pt,10pt) and (40pt,10pt).. (50pt,0pt);
\draw[far arrow] (0pt,0pt) .. controls (10pt,20pt) and (40pt,20pt).. (50pt,0pt);
\draw [thick, blue, densely dashed] (50pt,0pt) circle (6pt);
\node[below] at (50pt,-5pt) {$\tau_0$};
\end{tikzpicture}}
\hspace{20pt}
\subfloat[Total current through a cross section]{
\begin{tikzpicture}[scale=1.2, baseline={(current bounding box.center)}]
\draw [->,>=stealth, thick] (-10pt,0pt) -- (170pt,0pt) ;
\draw[thick, mid arrow] (110pt,0pt) .. controls (120pt,10pt) and (130pt,10pt).. (140pt,0pt);
\draw[thick, mid arrow] (90pt,0pt) .. controls (100pt,10pt) and (110pt,10pt).. (120pt,0pt);
\draw[thick, mid arrow] (70pt,0pt) .. controls (80pt,10pt) and (90pt,10pt).. (100pt,0pt);
\draw[thick, mid arrow] (50pt,0pt) .. controls (60pt,10pt) and (70pt,10pt).. (80pt,0pt);
\draw[thick, mid arrow] (30pt,0pt) .. controls (40pt,10pt) and (50pt,10pt).. (60pt,0pt);
\draw [thick, blue, dashed] (80pt,-5pt) -- (80pt,30pt);
\draw[red, mid arrow] (95pt,0pt) .. controls (115pt,20pt) and (135pt,20pt).. (155pt,0pt);
\draw[red, mid arrow] (65pt,0pt) .. controls (85pt,20pt) and (105pt,20pt).. (125pt,0pt);
\draw[red, mid arrow] (35pt,0pt) .. controls (55pt,20pt) and (75pt,20pt).. (95pt,0pt);
\draw[red, mid arrow] (5pt,0pt) .. controls (25pt,20pt) and (45pt,20pt).. (65pt,0pt);
\node[below] at (80pt,-5pt) {$\tau_0$};
\end{tikzpicture}}
\caption{(a) Conservation law: the total current through a closed (dashed) circle is zero; (b) Flow $\mathcal{Q}$, as the total current through a cross section $\tau_0$ (blue dashed line), is independent of the position $\tau_0$. In general, there are contributions from all time scales; longer scale currents are shown in red.}
\label{fig: definition of Q}
\end{figure}
We now explain the interpretation of the flow $\mathcal{Q}$ as charge. Plugging the definition \eqref{eq: current0} of the current $j$ into Eq.~\eqref{eq: charge0}, we get
\begin{equation}
\mathcal{Q} = \int_{-\infty}^{\tau_0} d\tau_1 \int_{\tau_0}^{+\infty} d\tau_2 \bigl(
\sigma (\tau_1,\tau_2) G(\tau_2,\tau_1) - \sigma(\tau_2,\tau_1) G(\tau_1,\tau_2)
\bigr)\,.
\label{eq: defQ 2 var}
\end{equation}
This formula reduces to a simpler form when the source has the time translation symmetry, i.e.\ for $\sigma(\tau_1,\tau_2)=\sigma(\tau)$, where $\tau=\tau_1-\tau_2$:
\begin{equation}
\mathcal{Q} = \int_{0}^{+\infty} d\tau \int_{0}^{\tau} d\tau_2 \bigl(
\sigma (-\tau) G(\tau) - \sigma(\tau) G(-\tau)
\bigr)=-\int_{-\infty}^{+\infty} d\tau~ \tau \sigma(\tau) G(-\tau) \,.
\label{defQ}
\end{equation}
The last expression in turn reduces to the conventional definition of the charge when $\sigma(\tau) = \delta'(\tau) - \mu \delta (\tau)$. In this case, for the Green function $G(\tau)$ that is discontinuous at $\tau=0$, we use the average $\frac{1}{2}(G(0^+)+G(0^-))$ to define its value at $\tau=0$. Thus,
\begin{equation}
\mathcal{Q} = - \int_{-\infty}^{+\infty} d\tau~\tau \delta'(\tau) G(-\tau) = \int_{-\infty}^{+\infty} d\tau \delta(\tau) G(-\tau) = \frac{ G(0^+)+G(0^-)}{2} \,,
\end{equation}
in agreement with Eq.~\eqref{charge Green function}.
For extremely local UV sources such as $\delta'(\tau)$ and $\delta(\tau)$, the charge is a local quantity. However, if we consider a general source $\sigma(\tau)$, the r.h.s. of Eq.~\eqref{defQ} includes contributions from all scales; see Fig.~\ref{fig: definition of Q}\,(b) for a cartoon.
\subsubsection{Invariance of the charge}
We will show that the charge $\mathcal{Q}$ depends only on the UV and
IR asymptotics of $G(\tau_1,\tau_2)$ and $\tilde{\Sigma}(\tau_1,\tau_2)$ (where $\tilde{\Sigma} = - G^{-1}$) as well as some topological data. The UV asymptotics is determined by the $\delta'(\tau_1-\tau_2)$ term in $\tilde{\Sigma}$. To formulate the invariance, let $(G_1,\tilde{\Sigma}_1)$ and $(G_2, \tilde{\Sigma}_2)$ have the same asymptotics and in addition, let the following ``relative winding number'' be zero:
\begin{equation}
\nu (G_1,G_2) = \frac{1}{2\pi i} \int_0^{+\infty} \frac{d}{d\omega} \left(
\ln \frac{G_1(i\omega)}{G_2(i\omega)}
\right) d\omega \,.
\end{equation}
If $\nu(G_1,G_2)=0$, then $(G_1,\tilde{\Sigma}_1)$ can be continuously deformed into $(G_2,\tilde{\Sigma}_2)$. Here it is important to consider the winding number in the frequency domain rather than the time domain, because the Schwinger-Dyson equation $ \tilde{\Sigma}(\omega) = - G(\omega)^{-1}$ guarantees that a smooth path in $(G,\tilde{\Sigma})$ space excludes both singularities and zeros of $G(\omega)$. This will not work for $G(\tau)$, since the other equation ${\Sigma}(\tau)= G(\tau)^{\frac{q}{2}} (-G(-\tau))^{\frac{q}{2}-1}$ does not constrain zeros of $G(\tau)$.
To prove that the charge is invariant under such deformation, it is sufficient to consider infinitesimal, asymptotically trivial deformations. Let us use the formula
\begin{equation}
\mathcal{Q} = \int_{-\infty}^{+\infty} d\tau_1 \int_{-\infty}^{+\infty} d\tau_2\, (f(\tau_2)-f(\tau_1)) \sigma(\tau_1,\tau_2) G(\tau_2,\tau_1)\,,
\end{equation}
where $f$ is an arbitrary function such that
\begin{equation}
\lim_{\tau\rightarrow +\infty} f(\tau) =1 \,, \quad \text{and} \quad
\lim_{\tau \rightarrow -\infty} f(\tau)=0 \, :\qquad
\begin{tikzpicture}[scale=0.9,baseline={([yshift=-4pt]current bounding box.center)}]
\draw [->,>=stealth] (50pt,0pt) -- (50pt,30pt) node[right]{$f$};
\draw[->,>=stealth] (0pt,-1pt) -- (110pt,-1pt) node[right]{$\tau$};
\draw[thick, blue] (0pt,0pt)--(40pt,0pt) .. controls (50pt,0pt) and (50pt,10pt) .. (60pt,10pt) -- (90pt,10pt) ;
\draw [dashed] (30pt,10pt) -- (100pt,10pt) node[right]{$1$};
\end{tikzpicture}
\end{equation}
This formula coincides with Eq.~\eqref{eq: defQ 2 var} for the step function $f(\tau) = \theta(\tau-\tau_0)$.
The integral does not depend on the details of $f$ because of the conservation law, namely Eq.~\eqref{conserved current}.\footnote{More explicitly, if we vary $f$ without changing its asymptotics, the corresponding variation of the charge is proportional to the l.h.s. of the Eq.~\eqref{conserved current} and therefore vanishes.} More intuitively, $f$ may be interpreted as a linear combination of step functions $\theta(\tau-\tau_0)$ with some weights for each cross section position $\tau_0$. In other words, we can blur the cross section, and this will not affect the flow.
We proceed by anti-symmetrizing the integrand,
\begin{equation}
\mathcal{Q}= \frac{1}{2} \int d\tau_1 d\tau_2\, (f(\tau_2)-f(\tau_1)) \left(
\sigma(\tau_1,\tau_2) G(\tau_2,\tau_1) - \sigma (\tau_2,\tau_1) G(\tau_1,\tau_2)
\right)\,.
\end{equation}
Since
$\sigma(\tau_1,\tau_2)= \tilde\Sigma(\tau_1,\tau_2)- G(\tau_1,\tau_2)^{\frac{q}{2}} (- G(\tau_2,\tau_1))^{\frac{q}{2}-1}$, we also get this equation:
\begin{equation}
\mathcal{Q} = \frac{1}{2} \int d\tau_1 d\tau_2\, (f(\tau_2)-f(\tau_1)) \left(
\tilde{ \Sigma}(\tau_1,\tau_2) G(\tau_2,\tau_1) - \tilde{\Sigma}(\tau_2,\tau_1) G(\tau_1,\tau_2)
\right)\,.
\end{equation}
Note that the two terms cannot be integrated separately because the corresponding integrals are not absolutely convergent (since $G(\tau_1,\tau_2) \sim |\tau_1-\tau_2|^{-2\Delta}$ and $\Sigma\sim |\tau_1-\tau_2|^{-2+2\Delta}$, there is a logarithmic divergence in the IR).
Now, consider an infinitesimal variation $\delta G$ and
$\delta \tilde{\Sigma} = \tilde{\Sigma} \left( \delta G \right) \tilde{\Sigma}$ such that
$\delta G(\tau_1,\tau_2)$ and $\delta \tilde{\Sigma}(\tau_1,\tau_2)$ decay sufficiently fast as $\tau_1-\tau_2 \rightarrow \infty$:
\begin{equation}
\begin{aligned}
\delta\mathcal{Q} & = \int d\tau_1 d\tau_2 \left( f(\tau_2)-f(\tau_1)\right)
\delta \left(
\tilde{\Sigma}(\tau_1,\tau_2) G(\tau_2,\tau_1)
\right) \\
&= \int d\tau_1 d\tau_2 \left( f(\tau_2)-f(\tau_1)\right) G(\tau_2,\tau_1)
\delta
\tilde{\Sigma}(\tau_1,\tau_2) \\&+ \int d\tau_3 d\tau_4 \left( f(\tau_3)-f(\tau_4)\right)
\tilde{\Sigma}(\tau_4,\tau_3) \delta G(\tau_3,\tau_4) \,.
\end{aligned}
\end{equation}
Substituting $\delta \tilde{\Sigma} = \tilde{\Sigma} \left( \delta G \right) \tilde{\Sigma}$ into the first line of the last expression and using
$\tilde{\Sigma}(\tau_4,\tau_3)
= - \int d\tau_2 d\tau_1 \tilde{\Sigma}(\tau_4,\tau_2) G(\tau_2,\tau_1) \tilde{\Sigma}(\tau_1,\tau_3) $ in the second line, we get
\begin{equation}
\delta\mathcal{Q} = \int d^4\tau \left(
f(\tau_2) - f(\tau_3)+ f(\tau_4)-f(\tau_1)
\right)
\tilde{\Sigma} (\tau_4,\tau_2) G(\tau_2,\tau_1) \tilde{\Sigma} (\tau_1,\tau_3) \delta G(\tau_3,\tau_4)\,.
\end{equation}
Now, we can regroup and integrate the terms containing $f(\tau_2)-f(\tau_3)$ and $f(\tau_4)-f(\tau_1)$ separately:
\begin{equation}
\begin{aligned}
&\int d^3\tau \left(
f(\tau_2) - f(\tau_3)
\right) \int d\tau_1~
\tilde{\Sigma} (\tau_4,\tau_2) G(\tau_2,\tau_1) \tilde{\Sigma} (\tau_1,\tau_3) \delta G(\tau_3,\tau_4) = 0 \,, \\
& \int d^3\tau \left(
f(\tau_4) - f(\tau_1)
\right) \int d\tau_2~
\tilde{\Sigma} (\tau_4,\tau_2) G(\tau_2,\tau_1) \tilde{\Sigma} (\tau_1,\tau_3) \delta G(\tau_3,\tau_4) = 0 \,.
\end{aligned}
\end{equation}
Each integral contains a delta function that annihilates $f(\tau_2)-f(\tau_3)$ in the first case and $f(\tau_4)-f(\tau_1)$ in the second case. Therefore, we conclude that $\delta \mathcal{Q}=0$ for asymptotically and topologically trivial deformations.
\subsubsection{Calculation of the charge}
In fact, we can calculate the charge in the complex SYK model using the IR asymptotics. (The result for $q=4$ was originally derived in Ref.~\cite{GPS01} using a different method, see Appendix~\ref{app:GPS} for a detailed discussion.) We will use the antisymmetrized version of Eq.~\eqref{defQ} to express $\sigma$ in terms of $\tilde{\Sigma}$ as we have done before:
\begin{equation}
\begin{aligned}
\mathcal{Q}& = - \int_{-\infty}^{+\infty} \tau \sigma(\tau) G(-\tau) d\tau = - \frac{1}{2} \int_{-\infty}^{+\infty} \tau \bigl( \sigma(\tau) G(-\tau) - \sigma(-\tau) G(\tau) \bigr) d\tau \\
&= - \frac{1}{2} \int_{-\infty}^{+\infty} \tau \left( \tilde{\Sigma} (\tau) G(-\tau) - \tilde{\Sigma} (-\tau) G(\tau )\right) d\tau \,.
\label{charge formula}
\end{aligned}
\end{equation}
The two terms in the last expression almost cancel each other at $\tau \gg 1$, but individually, the corresponding integrals are logarithmically divergent. To proceed, let us replace $G(\tau)$ with
\begin{equation}
G_{\eta}(\tau) = \begin{cases}
G(\tau) & \text{for} \quad |\tau| \lesssim 1 \\
G(\tau) |\tau|^{-2\eta} & \text{for} \quad |\tau| \gg 1
\end{cases}\,,
\label{G_eta}
\end{equation}
where $\eta$ is a small positive number. This change has little effect on the integrand in \eqref{charge formula}, but the two terms can now be separated. The corresponding integrals are equal to each other due to the symmetry $\tau\to -\tau$. Thus,
\begin{equation}
\begin{aligned}
\mathcal{Q} = \lim_{\eta \rightarrow 0} \left[ - \int_{-\infty}^{+\infty} \tau \tilde{\Sigma}(\tau) G_{\eta}(-\tau) d\tau \right]= \lim_{\eta \rightarrow 0} \left[ \frac{1}{2\pi i} \int_{-\infty}^{+\infty} \left( \partial_\omega G(i\omega)^{-1}\right) G_{\eta}(i\omega) d\omega
\right] \,.
\end{aligned}
\label{charge integral}
\end{equation}
It is important that the symmetric-in-time regularization \eqref{G_eta} is not symmetric in frequency, which has nontrivial consequences. The Fourier transform of $G_{\eta}$ is
\begin{equation}
\begin{pmatrix}
G_{\eta}(i \omega) \\
G_{\eta} (-i\omega)
\end{pmatrix}
= \Gamma(1-2\Delta')
\begin{pmatrix}
i^{1-2\Delta'} & i^{-1+2\Delta'} \\
i^{-1+2\Delta'} & i^{1-2\Delta'}
\end{pmatrix}
\begin{pmatrix}
- b^\Delta e^{\pi \calE} \\
b^{\Delta} e^{-\pi \calE}
\end{pmatrix} \omega^{-1+2\Delta'}\,, \quad 0 < \omega \ll 1 \,,
\end{equation}
where $\Delta' = \Delta+\eta$.
Expanding the $\omega$-independent coefficients in this expression to the first order in $\eta$, we explicitly see the asymmetry:
\begin{equation}
G_{\eta} (\pm i\omega) \approx \omega^{2\eta} G(\pm i \omega) \left[
1+ \eta \bigl(
-2\psi(1-2\Delta) - \pi \tan \pi (\Delta \pm i\calE)
\bigr)
\right]\,,
\end{equation}
where $\psi(x)=\Gamma'(x)/\Gamma(x)$ is the digamma function.
Now, we are in a position to evaluate the integral over $d\omega$ in \eqref{charge integral}, which should be understood as a principal value integral. For frequencies above some threshold $\omega_0$ (such that $\omega_0 \ll 1$ but $\eta \ln \frac{1}{\omega_0} \ll 1$), the difference between $G$ and $G_{\eta}$ may be neglected. On the other hand, if $|\omega|<\omega_0$, then $G(i\omega)\sim |\omega|^{-1+2\Delta}$, and hence, $\partial_\omega G(i\omega)^{-1} \approx (1-2\Delta) \omega^{-1} G(i\omega)^{-1}$. Using these approximations, we get
\begin{equation}
\begin{aligned}
\int_{-\infty}^{+\infty} &\left( \partial_\omega G(i\omega)^{-1}\right) G_\eta(i\omega) d\omega \\
& \approx \int_{\omega_0}^{+\infty} \partial_\omega \ln \frac{G(-i\omega)}{G(i\omega)} ~d\omega + (1-2\Delta)\int_0^{\omega_0} \omega^{-1} \left(
\frac{G_{\eta}(i\omega)}{G(i\omega)} - \frac{G_{\eta}(-i\omega)}{G(-i\omega)}
\right) d\omega \\
& \approx -2i \theta + (1-2\Delta) \eta \bigl(
-\pi \tan \pi (\Delta+i\calE) + \pi \tan \pi (\Delta-i\calE)
\bigr) \int_0^{\omega_0} \omega^{-1+2\eta} d\omega \,.
\end{aligned}
\label{anomalous phase shift}
\end{equation}
The last integral is simply ${1}/({2\eta})$, so the result is independent of $\eta$. Including the factor ${1}/({2\pi i})$ from \eqref{charge integral} we obtain:
\begin{equation}
\begin{aligned}
\mathcal{Q} &= - \frac{\theta}{\pi} - \frac{1-2\Delta}{4i }\bigl(
\tan \pi (\Delta+i\calE) - \tan \pi (\Delta -i \calE)
\bigr) \\
& = - \frac{\theta}{\pi} - \left( \frac{1}{2} -\Delta \right) \frac{\sin (2 \theta)}{\sin (2\pi \Delta)}\,,
\end{aligned}
\label{charge theta}
\end{equation}
which reproduces the result in Ref.~\cite{Davison17}.
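Since all ingredients are explicit, Eq.~\eqref{charge theta} and its companions are easy to check numerically. The snippet below (ours) verifies, for $\Delta = 1/4$, that $\mathcal{Q}(\theta)$ is odd and runs from $+1/2$ to $-1/2$ over $\theta \in (-\pi\Delta, \pi\Delta)$, that the two parametrizations \eqref{calEtheta2} of the asymmetry are mutually consistent, and that the two expressions for the prefactor $b$ agree:
\begin{verbatim}
import numpy as np

# Numeric sanity check (ours) of the charge-asymmetry relations, Delta = 1/4.
D = 0.25
th = np.linspace(-0.999, 0.999, 9) * np.pi * D
Q = -th/np.pi - (0.5 - D) * np.sin(2*th) / np.sin(2*np.pi*D)
print(np.round(Q, 4))                     # odd, from ~+1/2 down to ~-1/2
E = np.log(np.sin(np.pi*D + th) / np.sin(np.pi*D - th)) / (2*np.pi)
assert np.allclose(np.exp(-2j*th),
    np.cos(np.pi*(D + 1j*E)) / np.cos(np.pi*(D - 1j*E)))
b1 = (1-2*D)/(2*np.pi) * np.sin(2*np.pi*D) \
     / (2*np.cos(np.pi*(D+1j*E)) * np.cos(np.pi*(D-1j*E)))
b2 = (1-2*D)/(2*np.pi) * 2*np.sin(np.pi*D+th)*np.sin(np.pi*D-th) \
     / np.sin(2*np.pi*D)
assert np.allclose(b1, b2)
\end{verbatim}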
\subsubsection{Analogy with field-theoretic anomalies}
In some sense, the calculation of the charge performed here (also see Appendix~\ref{app:GPS} for parallel discussions based on symmetric-in-frequency regularizations) has a flavor of perturbative anomalies in quantum field theory. Both describe a mismatch between the IR and UV. In the case of an anomaly, there is no consistent UV cutoff respecting the symmetry; in our case, the UV behavior is well-defined but quantifiably different from the IR. By analogy with the Fermi liquid theory, one might expect the charge to be given by the first term in Eq.~\eqref{charge theta}. However, that is not correct due to the non-trivial effect of regularization, which produces the second term,
\begin{equation}
-\left( \frac{1}{2}-\Delta \right)
\frac{\sin (2\theta)}{\sin (2\pi \Delta)} \,.
\end{equation}
In Appendix~\ref{app:GPS}, we will further comment on this term and relate it to the Luttinger-Ward term in the standard analysis~\cite{luttinger1960ground}.
\subsection{Covariant formalism and the effective action}
\label{sec:generalform}
At low temperatures, $\beta \gg 1$, the action~\eqref{GSigma tilde action} is almost invariant under the emergent symmetry \eqref{symmetry}. In other words, we can generate ``quasi-solutions'' of the Schwinger-Dyson equations by applying $(\varphi,\lambda)$ transformations to the actual solution $(G_*,\tilde{\Sigma}_*)$:
\begin{equation}
\begin{aligned}
G(\tau_1,\tau_2)& = \varphi' (\tau_1)^{\Delta} \varphi' (\tau_2)^{\Delta} e^{i (\lambda(\tau_1)-\lambda(\tau_2))} G_*(\varphi(\tau_1),\varphi(\tau_2))\,, \\
\tilde{\Sigma}(\tau_1,\tau_2) &= \varphi' (\tau_1)^{1-\Delta} \varphi' (\tau_2)^{1-\Delta} e^{i (\lambda(\tau_1)-\lambda(\tau_2))} \tilde{\Sigma}_* (\varphi(\tau_1),\varphi(\tau_2)) \,.
\end{aligned}
\label{approximate solutions}
\end{equation}
Such quasi-solutions cost a small increase in action,\footnote{Eq.~\eqref{approximate solutions} defines the IR part of a quasi-solution, while the UV part should be tuned to minimize the cost.} which we call the ``effective action''. More exactly, $I_{\text{eff}}[\varphi,\lambda]= I[G,\tilde{\Sigma}]-\operatorname{const}$, where the constant depends on convention: we may set it to $I[G_*,\tilde{\Sigma}_*]$ or use a different base value. The goal of this section is to determine the general form of $I_{\text{eff}}[\varphi,\lambda]$ to leading orders.
\subsubsection{Covariant formalism}
The form of the approximate solutions coincides with the transformation laws for functions changing from one frame to another. The latter is described by a diffeomorphism of the time circle together with a gauge transformation. For instance, a field $\psi(x)$ with scaling dimension $\Delta$ and charge $p$ defined in frame ``$x$'' can be transformed to frame $(y,\phi)$ by the following formula:
\begin{equation}
\psi_{y,\phi}(y)= \left(\frac{dy}{dx}\right)^{-\Delta} e^{-ip \phi(x)} \psi(x)\,. \label{covtrans}
\end{equation}
It is also straightforward to define the transformation laws for $G$ and $\tilde{\Sigma}$, i.e. functions of two variables.
Taking this viewpoint, the ``quasi-solution'' \eqref{approximate solutions} may be interpreted as follows. In order to generate a quasi-solution $(G,\tilde{\Sigma})$ in the physical frame $x=\tau$, we start with the frame $(y,\phi)=(\varphi,\lambda)$, where the Green function is given by $G_*$. Then we pull back to the physical frame using the inverse transformation of \eqref{covtrans}, namely
\begin{equation}
\psi(x) = \left(\frac{dy}{dx}\right)^{\Delta} e^{ip \phi(x)} \psi_{y,\phi}(y(x))\,.
\end{equation}
From this perspective, the effective action $I_{\text{eff}}[\varphi,\lambda]$ in fact measures the failure of $(\varphi,\lambda)$ to be the physical frame.
At first glance, introducing the notion of ``frame'' in this problem seems redundant because in the end we should write all results in the physical frame. However, the condition that the action is invariant under frame transformations (if we also transform $\sigma$) is helpful in determining the general form of the effective action $I_{\text{eff}}[\varphi,\lambda]$.
Now, let us consider expressing the $(G,\tilde{\Sigma})$ action in a general frame $(\varphi,\lambda)$. (In this setting, $G$ and $\tilde{\Sigma}$ are arbitrary and not related to $\varphi$ and $\lambda$ in any particular way.) In the new frame, the fields are defined as follows:
\begin{equation}
\begin{aligned}
G_{\varphi,\lambda}(\varphi_1,\varphi_2)& := \varphi' (\tau_1)^{-\Delta} \varphi' (\tau_2)^{-\Delta} e^{-i (\lambda(\tau_1)-\lambda(\tau_2))} G (\tau_1, \tau_2)\,, \\
\tilde{\Sigma}_{\varphi,\lambda}(\varphi_1,\varphi_2) &:= \varphi' (\tau_1)^{-1+\Delta} \varphi' (\tau_2)^{-1+\Delta} e^{-i (\lambda(\tau_1)-\lambda(\tau_2))} \tilde{\Sigma} (\tau_1,\tau_2) \,,\\
\sigma_{\varphi,\lambda}(\varphi_1,\varphi_2) &:= \varphi' (\tau_1)^{-1+\Delta} \varphi' (\tau_2)^{-1+\Delta} e^{-i (\lambda(\tau_1)-\lambda(\tau_2))} \sigma (\tau_1,\tau_2) \,.
\end{aligned}
\end{equation}
Representing $G$, $\tilde{\Sigma}$, $\sigma$ in terms of $G_{\varphi,\lambda}$, $\tilde{\Sigma}_{\varphi,\lambda}$, $\sigma_{\varphi,\lambda}$ and plugging into Eq.~\eqref{GSigma tilde action}, we get
\begin{equation}
\begin{aligned}
\frac{I}{N} = -\ln \det \bigl( - \tilde{ \Sigma}_{\varphi,\lambda} \bigr) - &\int d\varphi_1 d\varphi_2 \left[ \tilde{\Sigma}_{\varphi,\lambda}(\varphi_1,\varphi_2) G_{\varphi,\lambda}(\varphi_2,\varphi_1) + \frac{1}{q} \left( - G_{\varphi,\lambda}(\varphi_1,\varphi_2) G_{\varphi,\lambda}(\varphi_2,\varphi_1) \right)^{\frac{q}{2}} \right] \\
+& \int d\varphi_1 d\varphi_2~ \sigma_{\varphi,\lambda}(\varphi_1,\varphi_2) G_{\varphi,\lambda}(\varphi_2,\varphi_1)
\,.
\end{aligned}
\label{action new frame}
\end{equation}
Naively, the $\ln\det$ term transforms nontrivially under $\varphi$. However, the determinant needs some UV regularization anyway, and we will use a particular regularization to make it frame-independent:\footnote{The regularized determinant depends on UV details of $\tilde{\Sigma}$. This issue is not important for the present discussion, but it can be mitigated by the use of a different regularization \cite{KS17-soft}: $\det (-\tilde{\Sigma}) \rightarrow \det(-\tilde{\Sigma})/\det(-\tilde{\Sigma}_*)$.}
\begin{equation}
\det (-\tilde{\Sigma}) \rightarrow \frac{\det (-\tilde{\Sigma})}{\det (-\sigma)} \cdot 2 \cosh \frac{\beta \mu}{2} \,.
\label{eq: regdet}
\end{equation}
The second factor is the free fermion partition function (formally equal to $\det(-\sigma)$).
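The last identification is formal, but it can be checked as a regularized ratio (our check): with $\sigma(i\omega_n) = -(i\omega_n+\mu)$, pairing each Matsubara frequency with its negative gives $\det(-\sigma)/\det(-\sigma)\big|_{\mu=0} = \prod_{n\ge 0}\bigl(1+\mu^2/\omega_n^2\bigr) = \cosh(\beta\mu/2)$, while $\det(-\sigma)\big|_{\mu=0}$ is assigned the value $2$:
\begin{verbatim}
import numpy as np

# Regularized check (ours) that det(-sigma) = 2 cosh(beta*mu/2):
# the ratio to the mu = 0 case converges to cosh(beta*mu/2).
beta, mu, N = 1.7, 0.8, 200000
w = 2*np.pi/beta * (np.arange(N) + 0.5)    # positive Matsubara frequencies
print(np.prod(1 + mu**2 / w**2), np.cosh(beta*mu/2))   # nearly equal
\end{verbatim}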
The expression \eqref{action new frame} for the $(G,\tilde{\Sigma})$ action in frame $(\varphi,\lambda)$ has the same general form as in the physical frame, but the UV source is different:
\begin{equation}
\begin{aligned}
\sigma_{\varphi,\lambda}(\varphi_1,\varphi_2)&:= \varphi'(\tau_1)^{\Delta-1} \varphi'(\tau_2)^{\Delta-1} e^{-i (\lambda(\tau_1)-\lambda(\tau_2))}(\delta'(\tau_1-\tau_2)-\mu(\tau_1) \delta(\tau_1-\tau_2))
\\ &=\varphi'(\tau_1)^{\Delta}
\varphi'(\tau_2)^{\Delta} \left( \delta' (\varphi_1-\varphi_2) - \varphi'(\tau_1)^{-1} \mu_{\lambda} (\varphi_1) \delta (\varphi_1-\varphi_2)
\right)\,,
\end{aligned}
\label{source}
\end{equation}
where
\begin{equation}
\mu_{\lambda}(\varphi) = \mu(\tau) - i \frac{d\lambda(\tau)}{d\tau}\,.
\label{chemical potential}
\end{equation}
We will make a few comments on this transformation.
\begin{itemize}
\item First of all, the non-trivial change of the source lifts the degeneracy of quasi-solutions and induces the effective action $I_{\text{eff}}[\varphi,\lambda]$, which can be explained as follows. If $\sigma_{\varphi,\lambda}$ were given by the same expression as the standard source, $\sigma_{\text{std}}(\varphi_1,\varphi_2) =\delta'(\varphi_1-\varphi_2)-\mu \delta(\varphi_1-\varphi_2)$, then $(G_{\varphi,\lambda},\tilde{\Sigma}_{\varphi,\lambda}) =(G_*,\tilde{\Sigma}_*)$ would be a stationary point of the action \eqref{action new frame} in frame $(\varphi,\lambda)$, and therefore, its pullback $(G,\tilde{\Sigma})$ would satisfy the Schwinger-Dyson equations in the physical frame. Thus, the actual distinction between solutions and quasi-solutions can be attributed to the difference between $\sigma_{\varphi,\lambda}$ and $\sigma_{\text{std}}$. In the first approximation, the effective action is obtained by integrating $\sigma_{\varphi,\lambda}-\sigma_{\text{std}}$ against $G_*$.
\item Following Ref.~\cite{KS17-soft}, let us define a field
$\varepsilon_{\varphi}(\varphi) = \varphi'(\tau)$ which sets the length scale for the frame $\varphi$. Using this notation, we have
\begin{equation}
\sigma_{\varphi, \lambda} (\varphi_1,\varphi_2) = \varepsilon_\varphi(\varphi_1)^{\Delta} \varepsilon_\varphi(\varphi_2)^{\Delta} \left( \delta' (\varphi_1-\varphi_2) - \varepsilon_\varphi(\varphi_1)^{-1} \mu_{\lambda} (\varphi_1) \delta (\varphi_1-\varphi_2)
\right)\,.
\label{length scale}
\end{equation}
Let us try to understand the meaning of the powers of $\varepsilon$. We will use this terminology: a field that scales as $[\text{length}]^{-\alpha}$ and transforms in the corresponding way under diffeomorphisms is said to have dimension $\alpha$ and called an ``$\alpha$-form''. Thus, $\varepsilon$ has dimension $-1$ and $\mu$ has dimension $0$ (because its transformation law \eqref{chemical potential} does not contain $d\varphi/d\tau$). The function $\sigma(\varphi_1,\varphi_2)$ transforms as a $(1-\Delta,1-\Delta)$ form; as such, it is comparable with $\varepsilon_\varphi(\varphi_1)^{\Delta-1} \varepsilon_\varphi(\varphi_2)^{\Delta-1}\sim \varepsilon^{-2(1-\Delta)}$. The actual powers of $\varepsilon$ in Eq.~\eqref{length scale} may be written as $\varepsilon^{h-2(1-\Delta)}$, where $h$ is associated with the remaining factor, that is, $\delta'(\varphi_1-\varphi_2)$ or $\mu_{\lambda} (\varphi_1) \delta(\varphi_1-\varphi_2)$. In fact, $\delta'(\varphi_1-\varphi_2)$ is diffeomorphism-invariant if treated as a $(1,1)$ form, i.e.\ its total dimension is $h=2$, whereas $\delta(\varphi_1-\varphi_2)$ corresponds to $h=1$. So everything is consistent. In a conventional field theory, the source \eqref{length scale} would be represented by the term
\begin{equation}
\int(\varepsilon\Phi+\mu\Psi)\,d\varphi
\end{equation}
in the action, where the fields $\Phi=\Phi(\varphi)$ and $\Psi=\Psi(\varphi)$ have dimensions $2$ and $1$, respectively. (The exponent of $\varepsilon$ in \eqref{length scale} differs by $2\Delta-1$ because in the $(G,\tilde{\Sigma})$ action, $\sigma$ is multiplied by $G$ and integrated over two variables rather than one.)
\item The expression \eqref{chemical potential} for the chemical potential in the $(\varphi,\lambda)$ frame may be interpreted as {\bf gauge invariance}. It sets a non-trivial constraint on the effective action; namely, the dependence on the soft mode $\lambda$ is tied to its dependence on $\mu$:
\begin{equation}
i \frac{\delta I_{\text{eff}}[\varphi,\lambda]}{\delta \lambda'} = \frac{\delta I_{\text{eff}}[\varphi,\lambda]}{\delta \mu}\,.
\end{equation}
\end{itemize}
\subsubsection{Diffeomorphism-invariant effective action}
Now we are ready to determine the general form of the effective action.
Let us consider the following quasi-solution:
\begin{equation}
G(\tau_1,\tau_2) = \varphi' (\tau_1)^{\Delta} \varphi' (\tau_2)^{\Delta} e^{i (\lambda(\tau_1)-\lambda(\tau_2))} G_*(\varphi(\tau_1),\varphi(\tau_2))\,,
\label{fLambda quasisolution}
\end{equation}
where $\varphi$ maps the time circle to the standard circle of length $2\pi$ (i.e.\ $\varphi(\tau+\beta)=\varphi(\tau)+2\pi$) and $G_*$ is the IR solution of the Schwinger-Dyson equations with $\beta$ formally set to $2\pi$:
\begin{equation}
\qquad G_*(\varphi_1,\varphi_2)
= - b^{\Delta} \left(2 \sin \frac{\varphi_1-\varphi_2}{2} \right)^{-2\Delta} e^{\calE (\pi-\varphi_1+\varphi_2)},\qquad
0<\varphi_1-\varphi_2<2\pi.
\end{equation}
To separate the $\operatorname{U}(1)$ and $SL(2,R)$ degrees of freedom, let $\lambda(\tau) =\tilde{\lambda}(\tau) +i\calE\bigl(\frac{2\pi}{\beta}\tau-\varphi(\tau)\bigr)$ so that
\begin{equation}
G(\tau_1,\tau_2)=-b^{\Delta}
e^{i(\tilde{\lambda}(\tau_1)-\tilde{\lambda}(\tau_2))}
\Biggl(\frac{\sqrt{\varphi'(\tau_1)\varphi'(\tau_2)}}
{2\sin\frac{\varphi(\tau_1)-\varphi(\tau_2)}{2}}\Biggr)^{2\Delta}
e^{2\pi\calE\left(\frac{1}{2}-\frac{\tau_1-\tau_2}{\beta}\right)}\,.
\label{qsol1}
\end{equation}
In general, the effective action contains local and non-local terms.\footnote{For a non-local correction to the Schwarzian effective action in the Majorana SYK model, see Ref.~\cite{KS17-soft}. The non-local correction is subleading in the $\beta^{-1}$ expansion and will not be studied in this paper.} The local part describes interactions between the UV sources $\varepsilon$, $\mu$ and some IR data. We have discussed the origin of the fields $\varepsilon$, $\mu$ in the last section; now let us identify the IR fields from the intermediate asymptotic expression of $G$ for $1\ll |\tau_1-\tau_2|\ll \beta$:
\begin{equation}
G(\tau_1,\tau_2) \approx G_{\beta=\infty} (\tau_1,\tau_2) \bigl(1+ A(\tau_+)(\tau_1-\tau_2) + B(\tau_+) (\tau_1-\tau_2)^2+\ldots \bigr)\,,
\label{eq: IR expansion}
\end{equation}
where $\tau_+ = \frac{\tau_1+\tau_2}{2}$, and the coefficients $A$, $B$ are obtained by Taylor expanding the quasi-solution \eqref{qsol1} with respect to the small time separation $\tau_1-\tau_2$. These coefficients have the following explicit form:
\begin{equation}
\label{ABtau}
A(\tau) = i\tilde{\lambda}'(\tau)-\frac{2\pi}{\beta}\calE\,, \qquad
B(\tau) = \frac{\Delta}{6} \operatorname{Sch}\bigl(e^{i \varphi(\tau)},\tau \bigr) + \frac{1}{2} A(\tau)^2 \,.
\end{equation}
Thus, all relevant IR information is contained in the fields
\begin{equation}
A(\tau) = i\tilde{\lambda}'(\tau)-\frac{2\pi}{\beta}\calE \,,\qquad
O(\tau) = \operatorname{Sch} \bigl(e^{i \varphi(\tau)},\tau\bigr) \,.
\end{equation}
The local part of the action should have a covariant expression in an arbitrary frame $x$. We aim to find the effective action to $\beta^{-1}$ order. In other words, the action should be accurate enough to recover the free energy or grand potential to $T^2$ order and to capture, for example, the $T$-linear term in the specific heat. With the UV source $(\varepsilon,\mu)$ and IR data $(A,O)$, the most general diffeomorphism- and gauge-invariant action is
\begin{equation}
\frac{I_{\text{eff}}[\calE, \varphi,\tilde{\lambda}]}{N}
= \int \varepsilon_x^{-1} f(\mu-A)\, dx - \calG(\calE)
- \alpha_S \int \left( \varepsilon_x O_x -\rho_x \right) dx \,.
\label{diffeoinv action}
\end{equation}
Let us discuss some of its features as well as define the field $\rho$.
\begin{itemize}
\item In addition to the fluctuating fields $\varphi$ and $\lambda$, the action depends on the global parameter $\calE$. Its equilibrium value will be determined by finding an extremum (actually, a maximum) of $I_{\text{eff}}[\calE, \varphi,\tilde{\lambda}]$ in $\calE$ with fixed external parameters $\beta$, $\mu$.
\item The function $f$ is a general even function characterizing the charge response to the chemical potential. The gauge invariance \eqref{chemical potential} requires that any dependence on $\mu$ be through the combination $\mu-A$. The $\calG(\calE)$ term is of order $1$ and related to the zero temperature entropy.
\item We have expressed the action in a general frame $x$ to emphasize its covariant properties. In particular, we have used the notation $O_x$ and introduced a field $\rho_x$ (see \cite{KS17-soft}, Eq.~(167)):
\begin{equation}
O_x(x) = \operatorname{Sch}(e^{i\varphi(\tau(x))},x) \,, \quad
\rho_x = \frac{(\partial_x \varepsilon_x)^2}{2\varepsilon_x} - \partial_x^2 \varepsilon_x\,
\end{equation}
to form an invariant expression $\int \left( \varepsilon_x O_x -\rho_x \right) dx$. (Recall that $\varepsilon_x = x'(\tau)$ is the field setting the local length scale, where $\tau$ is the physical frame.)
To show the diffeomorphism invariance, it is essential to check the transformation laws of the local operators $O,\rho,\varepsilon$. The transformation law of $O$ is given by the chain rule of the Schwarzian derivative:
\begin{equation}
O_y(y) = \left(y'(x) \right)^{-2} \left( O_x(x)- \operatorname{Sch}(y,x) \right)\,.
\end{equation}
This can be further summarized in a matrix form. In fact, the pair $(1, O)$ forms a representation of $\operatorname{Diff}(S^1)$,
\begin{equation}
\begin{pmatrix}
1 \\
O_y
\end{pmatrix} = \begin{pmatrix}
1 & 0 \\
-y'(x)^{-2} \operatorname{Sch}(y,x) & y'(x)^{-2}
\end{pmatrix}
\begin{pmatrix}
1 \\
O_x
\end{pmatrix}\,.
\end{equation}
Similarly, the pair $(\varepsilon,\rho)$ also forms a representation,
\begin{equation}
\begin{pmatrix}
\varepsilon_y \\
\rho_y
\end{pmatrix} =
y'(x)
\begin{pmatrix}
1 & 0 \\
-y'(x)^{-2} \operatorname{Sch}(y,x) & y'(x)^{-2}
\end{pmatrix}
\begin{pmatrix}
\varepsilon_x \\
\rho_x
\end{pmatrix} \,.
\end{equation}
Thus, the following combination transforms as a $1$-form:
\begin{equation}
\varepsilon_y O_y - \rho_y = y'(x)^{-1} (\varepsilon_x O_x - \rho_x)\,,
\end{equation}
which further implies the diffeomorphism invariance of the action~\eqref{diffeoinv action}.
The form of the $\rho$ field may look obscure; however, it will naturally arise
when we try to transform the Schwarzian action from the physical frame $\tau$ to a general frame $x(\tau)$ using the chain rule and express everything in the new frame:
\begin{equation}
O_\tau(\tau) = \varepsilon_x^2 O_x (x) + \operatorname{Sch}(x,\tau) = \varepsilon_x^2 O_x(x) - \varepsilon_x \rho_x(x)\,.
\end{equation}
In other words, in the physical frame $\varepsilon=1$,\, $\rho=0$, and the last term in the effective action \eqref{diffeoinv action} is just the familiar Schwarzian action
$
-\alpha_S \int \operatorname{Sch}(x,\tau) d\tau
$.
\end{itemize}
Now, let us restrict to the physical frame $x=\tau$. Expanding $f(\mu-A)$ to the second order in $A=i\tilde{\lambda}'-\frac{2\pi}{\beta}\calE$, we get
\begin{equation}
\begin{aligned}
\frac{I_{\text{eff}} [\calE,\varphi,\tilde{\lambda}]}{N} &= \beta f(\mu) + 2\pi(\calE-in) f'(\mu) - \calG(\calE)\\
&+ \int \left[
-\frac{f''(\mu)}{2} \left( \tilde{\lambda}'(\tau) + i\frac{2\pi}{\beta}\calE\right)^2 - \alpha_S \operatorname{Sch}(e^{i\varphi(\tau)},\tau )
\right] d\tau \,,
\end{aligned}
\label{full effective action}
\end{equation}
where $n$ is the winding number of the function $\tilde{\lambda}$, defined by the generalized periodicity condition $\tilde{\lambda}(\tau+\beta) =\tilde{\lambda}(\tau)+2\pi n$. The second line in the above equation is equivalent to the action \eqref{Seff}, originally derived in Ref.~\cite{Davison17}, with
\begin{equation}
K = - f'' (\mu) \,, \qquad \gamma = 4 \pi^2 \alpha_S \,.
\end{equation}
To further simplify the effective action, let
\begin{equation}
\tilde{\lambda}(\tau)=\bar{\lambda}(\tau)+\frac{2\pi n}{\beta}\tau \,,
\end{equation}
where $\bar{\lambda}$ has zero winding number. Then
\begin{equation}
\frac{I_{\text{eff}} [\calE,n,\varphi,\bar{\lambda}]}{N}
= \beta f\Bigl(\mu+\frac{2\pi}{\beta}(\calE-in)\Bigr) - \calG(\calE)
+ \int \left[
-\frac{f''(\mu)}{2} \,\bar{\lambda\kern1pt}'^2
- \alpha_S \operatorname{Sch}(e^{i\varphi},\tau ) \right] d\tau \,.
\label{effact1}
\end{equation}
\subsection{Thermodynamics}
\label{sec:thermo}
We now use the effective action $I_{\text{eff}}[\calE,n,\varphi,\bar{\lambda}]$ to compute the low temperature expansion of the grand potential $\Omega(\beta^{-1},\mu)$. If $N$ is large, we may use the saddle point field configurations, $\varphi(\tau)=\frac{2\pi}{\beta}\tau$,\, $\bar{\lambda}(\tau)=0$,\, $n=0$, and find the extremum $I_*$ of $I_{\text{eff}}[\calE, \varphi,\lambda]$ with respect to $\calE$:
\begin{equation}
\frac{\Omega}{N} = \frac{I_*}{\beta N} = f(\mu_0) - \calG(\calE)\beta^{-1} - 2\pi^2 \alpha_S \beta^{-2}\,,
\label{grandpotential}
\end{equation}
where
\begin{equation}
\mu_0=\mu+ \frac{2\pi}{\beta} \calE \,.
\end{equation}
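For reference, the last term in \eqref{grandpotential} comes from the Schwarzian evaluated on the saddle configuration $\varphi(\tau)=\frac{2\pi}{\beta}\tau$:
\begin{equation}
\operatorname{Sch}\bigl(e^{2\pi i\tau/\beta},\tau\bigr)=\frac{2\pi^2}{\beta^2}\,,
\qquad
-\alpha_S\int_0^{\beta}\operatorname{Sch}\bigl(e^{2\pi i\tau/\beta},\tau\bigr)\,d\tau
=-\frac{2\pi^2\alpha_S}{\beta}\,,
\end{equation}
which, after dividing by $\beta$, gives the $-2\pi^2\alpha_S\beta^{-2}$ contribution to $\Omega/N$.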
The saddle point condition for $\calE$ requires that
\begin{equation}
\calG'(\calE) = 2\pi f'(\mu_0) = -2\pi\mathcal{Q}\,,
\label{g and Q}
\end{equation}
where $\mathcal{Q}= -N^{-1}{\partial \Omega(\beta^{-1},\mu)}/{\partial \mu} = - f'(\mu_0) $. Eq.~\eqref{g and Q} implies that $\mathcal{Q}$ is a function of $\calE$. This function has been calculated in Eq.~\eqref{charge theta}, so one can compute $\calG(\calE)$ as well.
\subsubsection{Free energy and entropy}
We can also find the free energy $F(\beta^{-1},Q)= \Omega + \mu Q$:
\begin{equation}
\frac{F}{N} = \mathcal{F}_0(\mathcal{Q}) - \calS(\mathcal{Q})\beta^{-1} - 2\pi^2 \alpha_S \beta^{-2} \,,
\end{equation}
where $\calS= \calS(\mathcal{Q})$ is the ``zero temperature entropy'' per site and
\begin{alignat}{3}
\mathcal{F}_0(\mathcal{Q}) &= f(\mu_0) + \mu_0\mathcal{Q} \qquad
&&\text{for} \quad
&f'(\mu_0)&= -\mathcal{Q} \,,
\displaybreak[0]\label{Legendre0}\\[3pt]
\calS(\mathcal{Q}) &= \calG(\calE)+ 2\pi \calE \mathcal{Q} \qquad
&&\text{for} \quad
&\calG'(\calE)&= - 2\pi\mathcal{Q} \,.
\label{Legendre}
\end{alignat}
The formula \eqref{Legendre} says that $\calS(\mathcal{Q})$ and $\calG(\calE)$ are related by the Legendre transformation, where $\mathcal{Q}$ and $2\pi\calE$ are the conjugate variables. It leads to the fundamental relation (\ref{dSdQ}) between the entropy and the particle-hole asymmetry. Equation \eqref{Legendre0} is the usual Legendre duality between the free energy and the grand potential at zero temperature; it implies that $\mu_0=d\mathcal{F}_0(\mathcal{Q})/d\mathcal{Q}$. At finite temperature, the chemical potential receives a definite correction:
\begin{equation}
\mu(\beta^{-1},\mathcal{Q}) = \mu_0(\mathcal{Q}) - \frac{2\pi}{\beta} \calE(\mathcal{Q}).
\label{eq: rule}
\end{equation}
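As an aside, the fundamental relation $d\calS/d\mathcal{Q}=2\pi\calE$ mentioned above follows directly by differentiating \eqref{Legendre}: using $\calG'(\calE)=-2\pi\mathcal{Q}$,
\begin{equation}
\frac{d\calS}{d\mathcal{Q}}
=\calG'(\calE)\,\frac{d\calE}{d\mathcal{Q}}
+2\pi\calE
+2\pi\mathcal{Q}\,\frac{d\calE}{d\mathcal{Q}}
=2\pi\calE\,.
\end{equation}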
Similar relations hold for the low energy limit of charged black holes \cite{Faulkner09,Sachdev19}, as reviewed in Appendix~\ref{app:em}. The low $T$ limit must be taken at fixed $Q$ with $\mu$ obeying (\ref{dmudt}), to obtain a near-horizon metric that is conformally equivalent to AdS$_2$.
\subsubsection{Charge compressibility}
\label{sec: RG charge compressibility}
As discussed in Refs.~\cite{MS16-remarks, KS17-soft}, the specific heat is determined by the prefactor $\alpha_S$ of the Schwarzian action, which is related to the magnitude $\alpha_G$ of the leading UV-sourced correction to the IR Green function. Specifically,
\begin{equation}
\alpha_S = \frac{-k_c'(2)(1-\Delta)b}{6}\alpha_G \,,
\end{equation}
where $k_c'(2)$, $\Delta$ and $b$ are all IR data that can be obtained in the conformal limit.
Now, for the complex SYK model we have one more thermodynamic coefficient to determine, namely the charge compressibility $K$. A natural question is whether the charge compressibility can be determined in a similar way by the same UV parameter $\alpha_G$. This possibility is based on the observation that the IR degrees of freedom $A(\tau)$, $B(\tau)$ in Eqs.~\eqref{eq: IR expansion}, \eqref{ABtau} satisfy the relation
\begin{equation}
B(\tau) = \frac{\Delta}{6} \operatorname{Sch}\bigl(e^{i \varphi(\tau)},\tau \bigr) + \frac{1}{2}A(\tau)^2,
\label{BA relation}
\end{equation}
which might constrain the form of the effective action. Or is the charge compressibility independent of $\alpha_S$, so that it requires a separate numerical study?
To answer this question, let us think about possible couplings between the IR degrees of freedom and some UV data. The idea of renormalization theory, as used in Ref.~\cite{KS17-soft}, is not to solve the actual problem in the UV (which is hard) but to replace it with a more tractable model with sufficiently many parameters that would reproduce the leading IR behavior and any possible corrections to it. The simplest term to include in the $(G,\tilde{\Sigma})$ action is the linear coupling $\int d\tau_1 d\tau_2\, \sigma(\tau_1,\tau_2) G(\tau_2,\tau_1)$ of the UV source to the Green function, where the latter is represented by the asymptotic expression \eqref{eq: IR expansion} at intermediate time intervals $\tau_1-\tau_2$ with coefficients $A\bigl(\frac{\tau_1+\tau_2}{2}\bigr)$ and $B\bigl(\frac{\tau_1+\tau_2}{2}\bigr)$. By smearing the actual, very singular source, nonlinear effects can be reduced. In this approximation, the effective action is a sum of terms proportional to $\int A(\tau)\, d\tau$ and $\int B(\tau)\, d\tau$. Any contribution of the form $\int A(\tau)^2\, d\tau$ is due to the second term via Eq.~\eqref{BA relation}. But on the other hand,
\begin{equation}
\frac{I_{\text{eff}}[\calE, \varphi,\lambda]}{N} = \int f(\mu-A)\, d\tau - \calG(\calE)- \alpha_S \int\operatorname{Sch}\bigl(e^{i \varphi(\tau)},\tau \bigr)\,d\tau \,,
\end{equation}
see Eq.~\eqref{diffeoinv action}. Therefore, the linear model predicts the following value of the charge compressibility $K=-f''(\mu)$:
\begin{equation}
K_{\rm linear}= \frac{6 \alpha_S}{\Delta} = \frac{3}{2\pi^2 \Delta} \gamma \,. \label{Kgamma relation}
\end{equation}
However, a nonlinear coupling of the form\footnote{In contrast, we won't worry about non-linear contributions to the specific heat because they are subleading in temperature.} $\int s(\tau_1,\tau_2,\tau_3,\tau_4) G(\tau_1,\tau_2) G(\tau_3,\tau_4)\, d^4\tau$ can also generate a term proportional to $\int A(\tau)^2\, d\tau$. Let us denote this additional contribution by $K_{\rm non-linear}$ so that
\begin{equation}
K = K_{\rm linear}+ K_{\rm non-linear}.
\end{equation}
Its actual value is not accessible without numerics. In Section~\ref{sec:compress} we will present numerical computations for the total $K$ at half filling, namely $\mu=0$ and $\mathcal{Q}=0$.
We would like to make a final remark on the ratio $K_{\rm linear}/\gamma$ in (\ref{Kgamma relation}). It agrees with Eq.~(\ref{Kovergamma}), obtained from a different analysis based on the perturbation theory of Ref.~\cite{MS16-remarks}. This analysis relies on the UV parameter $\alpha_G$; see Appendix~\ref{app:GrishaK} for details. Similarly to $K_{\rm linear}$, it does not include the additional non-universal UV contributions to the compressibility.
\subsection{Partition function at low temperatures and the density of states}
\label{sec:dos}
We first review some relevant results for the Majorana SYK model. An interesting time scale in the problem is given by the coefficient of the Schwarzian action, $\alpha_SN$. If the inverse temperature $\beta$ is of this order of magnitude or greater, quantum fluctuations are strong. This regime was originally studied in Ref.~\cite{BAK16} (see also \cite{Altland:2019czw}). The density of states (DOS) and the partition function for the pure Schwarzian model are as follows \cite{Cotler:2016fpe,Garcia-Garcia:2017pzl,BaAlKa17,Stanford:2017thb}, where the energy $E$ and the temperature $\beta^{-1}$ are measured in units of $(\alpha_SN)^{-1}$:
\begin{equation}
\rho_{\operatorname{Sch}}(E)=\sinh\biggl(2\pi\sqrt{2E}\biggr),\qquad
Z_{\operatorname{Sch}}(\beta^{-1})= \int e^{-\beta E}\rho_{\operatorname{Sch}}(E)\,dE
=\frac{1}{2}\biggl(\frac{2\pi}{\beta}\biggr)^{3/2} e^{2\pi^2/\beta}\,.
\end{equation}
These functions are defined up to an overall factor that depends on the normalization of the integration measure.
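As a quick sanity check (not part of the original derivation), the closed form of $Z_{\operatorname{Sch}}$ is easy to reproduce by direct quadrature; below is a minimal Python sketch with arbitrary temperature values. The integrand is written as a difference of exponentials to avoid overflow at large $E$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

# Z_Sch(beta^{-1}) = (1/2) (2*pi/beta)^{3/2} * exp(2*pi^2/beta), compared
# with direct quadrature of the Laplace transform of sinh(2*pi*sqrt(2*E)).
for beta in (2.0, 5.0, 10.0):
    f = lambda E: 0.5*(np.exp(-beta*E + 2*np.pi*np.sqrt(2*E))
                       - np.exp(-beta*E - 2*np.pi*np.sqrt(2*E)))
    numeric, _ = quad(f, 0.0, np.inf)
    exact = 0.5*(2*np.pi/beta)**1.5*np.exp(2*np.pi**2/beta)
    print(f"beta = {beta}: ratio = {numeric/exact:.8f}")   # -> 1.00000000
\end{verbatim}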
The DOS and the partition function for the Majorana SYK model contain some additional factors. Up to a common overall constant,
\begin{equation}
\rho(E)\sim \alpha_S N^{-1/2}e^{N\calS}
\rho_{\operatorname{Sch}}\bigl(\alpha_SN(E-E_0)\bigr)\,,\quad\:
Z(\beta^{-1})\sim
N^{-3/2}e^{-E_0\beta+N\calS}Z_{\operatorname{Sch}}\bigl(\alpha_SN\beta^{-1}\bigr) \,,
\label{rho_Maj}
\end{equation}
where $E_0$ is the ground state energy. The factor $N^{-3/2}$ in the partition function has been introduced to obtain the correct asymptotic behavior for $\beta\gg 1$ fixed and $N$ going to infinity:
\begin{equation}
\ln Z = N\bigl(-\mathcal{F}_0\beta+\calS+2\pi^2\alpha_S\beta^{-1}
+\cdots\bigr) +N^0\biggl(-\operatorname{const}\cdot\beta-\frac{3}{2}\ln\beta+\cdots\biggr)
+O(N^{-1}).
\end{equation}
Note the absence of a $\ln N$ term. Indeed, the logarithm of the partition function admits a $1/N$ expansion, where different terms correspond to different classes of Feynman diagrams. In particular, the $N^0$ term is given by the sum of ladders closed into a loop, yielding the expression $-\frac{1}{2}\operatorname{Tr}\ln(1-K_{G})$. Here $K_G$ is the exact ladder kernel; it has ${}\sim\beta$ eigenvalues that are not too small, whereas the $-\frac{3}{2}\ln\beta$ contribution is due to the eigenvalues close to $1$.
There is one more thing to take into account---variations between different samples:
\begin{equation}
\ln\overline{Z}-\overline{\ln Z}
\approx \frac{1}{2}\overline{(\delta\ln Z)^2} \,,\quad\,
\text{where }\, \delta\ln Z=\ln Z-\overline{\ln Z} \,.
\end{equation}
In Eq.~\eqref{rho_Maj}, $E_0$ should be understood as the ground state energy for a particular realization of disorder. This may or may not be important, depending on the parameter $q$. Indeed, the sample-to-sample fluctuations are dominated by a particular Feynman diagram that contributes to $\ln\overline{Z}$ but not to $\overline{\ln Z}$ \cite{KS17-soft}. Assuming that $N\gg\beta\gg1$, its value can be estimated as follows:
\begin{equation}
\ln\overline{Z}-\overline{\ln Z} \:\approx\:
\figbox{0.8}{rd_error} \:\sim\: N^{2-q}\beta^2 \,.
\end{equation}
Therefore, the fluctuations of the free energy are of the order of $\delta F\sim N^{1-q/2}$ with no singular temperature dependence. We conclude that for $\beta\sim N$, the sample-to-sample fluctuations are significant if $q=4$ but not at larger values of $q$.\medskip
For the complex SYK model, the density of states is a function of two conserved quantities, charge and energy:
\begin{equation}
\rho(E,Q)=\operatorname{Tr}\Bigl(\Pi_{Q}\,\delta\bigl(\hat{H}-E\kern1pt\hat{1}\bigr)\Bigr),
\end{equation}
where $\hat{H}$ and $\hat{Q}$ are defined in Eqs.~(\ref{eq: Hamiltonian}) and~(\ref{chargeoper}), respectively, and $\Pi_Q$ is the projector onto the subspace with a given value of $Q$. For simplicity, we assume that $N$ is even so that $Q$ takes on integer values. The partition function for a fixed $Q$ and the grand partition function are as follows:
\begin{equation}
Z_Q(\beta^{-1})=\int e^{-\beta E} \rho(E,Q)\, dE \,,\qquad
Z(\beta^{-1},\mu)=\sum_{Q=-\infty}^{\infty}e^{\beta\mu Q}Z_Q(\beta^{-1})\,.
\end{equation}
In analytical calculations, we will be interested in the case where $E$ is close to $E_0(Q)$, the lowest eigenvalue of $\hat{H}$ with charge $Q$. We will show that
\begin{equation}
\rho(E,Q)\sim \alpha_SN^{-1}e^{N\calS(Q/N)}
\rho_{\operatorname{Sch}}\bigl(\alpha_SN(E-E_0(Q))\bigr)
\label{DQE}
\end{equation}
up to a constant factor, or equivalently,
\begin{equation}
\ln Z_{Q}(\beta^{-1}) \approx -\beta E_0(Q) + N\calS(Q/N)
+\ln\bigl(N^{-2}Z_{\operatorname{Sch}}(\alpha_SN\beta^{-1})\bigr)
\label{lnZQ}
\end{equation}
up to a constant term. However, $E_0(Q)$ is difficult to compute with sufficient (say, $1/N$) precision; for $q=4$, it depends on the realization of disorder. A simple though not very accurate approximation is as follows:
\begin{equation}
E_0(Q)=N\mathcal{F}_0(Q/N)+\operatorname{const}+O(N^{-1}) \,.
\end{equation}
We now derive Eq.~\eqref{lnZQ} from the effective action. Note that the integration measure is defined up to some $N$-dependent factor.\footnote{The $(G,\tilde{\Sigma})$ action is free from such ambiguity. However, we have lost track of normalization when eliminating $\tilde{\Sigma}$ and ``hard'' degrees of freedom in $G$.} We will use the factor $N^{-3/2}$ that comes with $Z_{\operatorname{Sch}}$ as previously explained. The additional normalization factors in our calculation are reasonably well motivated but not trustworthy, so the overall power of $N$ in front of $Z_{\operatorname{Sch}}$ will be checked independently.
Using the effective action \eqref{effact1}, the grand partition function is expressed as follows:
\begin{equation}
\begin{aligned}
Z(\beta^{-1},\mu)&=\sum_{n=-\infty}^{\infty}
\int D\calE\,\frac{D\bar{\lambda}}{\operatorname{U}(1)}\,\frac{D\varphi}{\operatorname{PSL}(2,R)}\,
\exp\bigl(-I_\text{eff}[\calE,n,\varphi,\bar{\lambda}]\bigr)\\[2pt]
&=Z_{\operatorname{U}(1)}(\beta^{-1},\mu)\cdot N^{-3/2}Z_{\operatorname{Sch}}(\alpha_SN\beta^{-1}) \,.
\end{aligned}
\end{equation}
Let us focus on $Z_{\operatorname{U}(1)}$, which involves the variables $n$, $\calE$, and $\bar{\lambda}$. The notation ${D\bar{\lambda}}/{\operatorname{U}(1)}$ indicates that we consider $\bar{\lambda}(\tau)$ up to an additive constant. The corresponding integral,
\begin{equation}
\int\frac{D\bar{\lambda}}{\operatorname{U}(1)}\,
\exp\biggl(-\frac{NK}{2}\int\bar{\lambda}'^2\,d\tau\biggr)
=\sqrt{\frac{NK}{2\pi\beta}} \,,
\end{equation}
may be interpreted as the partition function (per unit length) of a free particle with mass $NK$. The integral over $\calE$ is evaluated using the method of steepest descent. Since
\begin{equation}
\frac{\partial^2 I_\text{eff}}{\partial\calE^2} \approx -\calG''(\calE)N
=2\pi\frac{\partial\mathcal{Q}}{\partial\calE}N <0,
\end{equation}
the integration path is parallel to the imaginary axis, and the symbol $D\calE$ is understood as ${d\calE}/{i}$ up to some real factor of the order of $1$. Thus,
\begin{equation}
Z_{\operatorname{U}(1)}(\beta^{-1},\mu) \sim\sqrt{\frac{NK}{2\pi\beta}}
\sum_{n=-\infty}^{\infty}\int_{-i\infty}^{i\infty} \frac{d\calE}{i}\,
e^{-\tilde{I}_\text{eff}} \,,\quad \text{where }\,
\frac{\tilde{I}_\text{eff}}{N} =\beta f\Bigl(\mu+\frac{2\pi}{\beta}(\calE-in)\Bigr)
- \calG(\calE)\,.
\end{equation}
For each value of the winding number $n$, the effective action attains its extremum at the value of $\calE$ determined by the equation $\calG'(\calE) =2\pi f'\bigl(\mu+\frac{2\pi}{\beta}(\calE-in)\bigr)$. Replacing the right-hand side with $2\pi f'(\mu)$ introduces an $O(\beta^{-1})$ error in $\calE$ and an $O(\beta^{-2})$ error in $\tilde{I}_\text{eff}$; the latter is within the precision of the effective action model.\footnote{Here we have assumed that $n\lesssim 1$, which is true if $\beta\lesssim N$. But even in the opposite limit, the error in $\tilde{I}_\text{eff}$ is relatively small.} The value of ${\partial^2 \tilde{I}_\text{eff}}/{\partial\calE^2}$ at the extremum is also almost independent of $n$. Applying the method of steepest descent and choosing the order $1$ factor for future convenience, we get
\begin{equation}
\begin{aligned}
Z_{\operatorname{U}(1)}(\beta^{-1},\mu)
&\sim \sqrt{\frac{2\pi K}{\beta}}
\sum_{n=-\infty}^{\infty} \exp\biggl(-\beta\tilde{\Omega}\Bigl(\beta^{-1},\,
\mu-i\frac{2\pi}{\beta}n\Bigr)\biggr)\\
&\approx\sqrt{\frac{2\pi K}{\beta}}\,e^{-\beta\tilde{\Omega}(\beta^{-1},\mu)}
\sum_{n=-\infty}^{\infty}
\exp\biggl(-2\pi i\mathcal{Q} Nn-\frac{2\pi^2KN}{\beta}n^2\biggr)\,,
\end{aligned}
\label{ZU1n}
\end{equation}
where, as in Section~\ref{sec:thermo},
\begin{equation}
\tilde{\Omega}(\beta^{-1},\mu)
=N\biggl(f\Bigl(\mu+\frac{2\pi}{\beta}\calE\Bigr)
-\calG(\calE)\beta^{-1}\biggr)\,,\qquad
\calG'(\calE)=2\pi f'\Bigl(\mu+\frac{2\pi}{\beta}\calE\Bigr)=-2\pi\mathcal{Q}\,.
\end{equation}
The sum over $n$ in \eqref{ZU1n} is evaluated using the Poisson summation formula:
\begin{equation}
\begin{aligned}
Z_{\operatorname{U}(1)}(\beta^{-1},\mu)
&\sim N^{-1/2}e^{-\beta\tilde{\Omega}(\beta^{-1},\mu)}
\sum_{Q=-\infty}^{\infty} \exp\biggl(-\frac{\beta\,(Q-\mathcal{Q} N)^2}{2KN}\biggr)\\
&\approx \sum_{Q=-\infty}^{\infty}
N^{-1/2} \exp\Bigl(-\beta\bigl(\tilde{F}(\beta^{-1},Q)-\mu Q\bigr)\Bigr)\,,
\end{aligned}
\label{ZU1Q}
\end{equation}
where
\begin{equation}
\tilde{F}(\beta^{-1},\mathcal{Q} N)
=\tilde{\Omega}(\beta^{-1},\mu)+\mu\mathcal{Q} N
=N\bigl(\mathcal{F}_{0}(\mathcal{Q})-\calS(\mathcal{Q})\beta^{-1}\bigr)\,.
\end{equation}
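The Poisson summation step leading to \eqref{ZU1Q} is the Gaussian identity (applied with $a=2\pi^2KN/\beta$ and $x=\mathcal{Q} N$)
\begin{equation}
\sum_{n=-\infty}^{\infty} e^{-a n^2-2\pi i x n}
=\sqrt{\frac{\pi}{a}}\,\sum_{Q=-\infty}^{\infty}
e^{-\frac{\pi^2}{a}(Q-x)^2}\,;
\end{equation}
its prefactor $\sqrt{\beta/(2\pi KN)}$ combines with $\sqrt{2\pi K/\beta}$ from \eqref{ZU1n} into the overall factor $N^{-1/2}$.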
An important feature of the second line in \eqref{ZU1Q} is that the argument of $\tilde{F}$ is the integer charge $Q$ being summed over, and not the mean charge $\mathcal{Q} N$; consequently the entropic prefactor of each term in the sum is $e^{N \mathcal{S} (Q/N)}$ and not $e^{N\mathcal{S} (\mathcal{Q})}$. (Since $d\calS/d\mathcal{Q}=2\pi\calE$, the ratio of such factors in the $Q+1$ and $Q$ sectors is $e^{2\pi\calE}$.)
So, when we multiply the second line in \eqref{ZU1Q} by $N^{-3/2}Z_{\operatorname{Sch}}(\alpha_SN\beta^{-1})$, we obtain an expression for $Z(\beta^{-1},\mu)$ that is equivalent to \eqref{lnZQ}. Finally, this yields (\ref{DQE}), where the density of states at charge $Q$ has a prefactor, $e^{N \mathcal{S}(Q/N)}$, with entropy evaluated at the same charge $Q$.
On the other hand, if $N$ is very large, the sum over $n$ in \eqref{ZU1n} is reduced to the $n=0$ term. Multiplying it by the same factor, we get
\begin{equation}
\ln Z(\beta^{-1},\mu)
\approx -\beta\tilde{\Omega}(\beta^{-1},\mu)
+\frac{2\pi^2\alpha_SN}{\beta}-2\ln\beta\qquad
\text{for } N\gg\beta\gg1\,.
\end{equation}
The absence of a $\ln N$ term is consistent with the $1/N$ expansion.
\section{Renormalization theory}
\label{secRG}
In this section, we describe the physics at intermediate time scales, $1\ll \tau \ll \beta$, generalizing the ideas of Ref.~\cite{KS17-soft}, Section 3. More precisely, we will study the renormalization of both symmetric and anti-symmetric perturbations of the Green function and the self-energy.
\subsection{General idea}
The $(G,\tilde{\Sigma})$ action \eqref{GSigma tilde action} is suited for a perturbative study near the conformal point $(G_c,\tilde{\Sigma}_c)$, which is an exact saddle point for $\sigma=0$. We will work at zero temperature, i.e.\ $G_c=G_{\beta=\infty}$,\, $\tilde{\Sigma}_c=\tilde{\Sigma}_{\beta=\infty}$, see \eqref{zeroT Green}. The actual UV source $\sigma$, consisting of a delta function and its derivative, is strong in the UV (i.e.\ for $\tau:=\tau_1-\tau_2\sim 1$), and is therefore hard to study without numerics. However, it is possible to introduce a weaker perturbation in a slightly extended UV region such that its effect at $\tau\gg 1$ reproduces the actual correction $(\delta G,\delta\tilde{\Sigma})$ to the conformal solution. This method has been applied to the Majorana SYK model in Ref.~\cite{KS17-soft}, Section 3, yielding a derivation of the Schwarzian action as well as the relation between its coefficient and the UV-sourced correction to the Green function.
One useful property of the Majorana SYK model is anti-symmetry under time reflection. Namely, the perturbation source $\delta'(\tau)$ is an anti-symmetric function of time, and the ladder kernel that propagates the perturbation preserves this symmetry. As a consequence, the responses $\delta G(\tau)$, $\delta\tilde{\Sigma}(\tau)$ are also anti-symmetric in time. However, that is not the case for the complex SYK model, see Fig.~\ref{fig: responses} for an illustration. The actual UV source $\sigma(\tau_1,\tau_2)=\delta'(\tau_{12})-\mu \delta(\tau_{12})$ has both anti-symmetric and symmetric parts. More importantly, the ladder kernel (which will be studied later) mixes anti-symmetric and symmetric functions. The mixing effect will be characterized by a $2\times 2$ matrix that generalizes the number $k_{c}(h)$ of the Majorana SYK model~\cite{MS16-remarks,KS17-soft}.
\begin{figure}[t]
\center
\includegraphics{RG_flow}
\caption{RG flow of a perturbation $\sigma$ (solid line), generating the response $\delta G$ (dashed line) at larger time scales.}
\label{fig: responses}
\end{figure}
In general, renormalization theory determines how UV sources manifest themselves at intermediate scales, and thus, affect the IR physics. For instance, the interaction between the UV source and the IR deformation of the conformal solution due to reparameterization of time $\varphi(\tau)$ contributes to the local part of the effective action for the $\varphi$ field: it generates the Schwarzian term, which further determines such properties as specific heat. For the complex SYK model, the new ingredient is the perturbation due to chemical potential (or charge $\mathcal{Q}$), sourcing the asymmetry of the Green function characterized by $\calE$ or $\theta$. The nontrivial relation \eqref{charge theta} between $\theta$ and $\mathcal{Q}$ can be reproduced using renormalization, which further supports the statement that the charge is determined by the intermediate asymptotics of $G$.
To apply the renormalization theory for $\beta=\infty$, we write the charge as
\begin{equation}
\mathcal{Q}=\int \sigma(\tau)\cdot (-\tau G(-\tau))\,d\tau
\label{defQ1}
\end{equation}
(cf.\ \eqref{defQ}), with $\sigma$ being small at each individual time scale but possibly spanning multiple scales. This way, $\sigma$ is regarded as a combination of infinitesimal perturbation sources. Focusing on a particular scale $\xi=\ln|\tau|$, we may characterize the cumulative effect of all sources at smaller scales by some value of $\calE$. The additional source at scale $\xi$ contributes both to $\mathcal{Q}$ (via integral \eqref{defQ1}) and to $\calE$ (via renormalization). Thus, one can calculate $d\calE/d\mathcal{Q}$, as elaborated in the following sections. The change in the asymmetry parameter $\calE$ is propagated by the RG flow and further augmented by any sources present at larger scales.
\subsection{Linear response to the perturbation $\sigma$}
\subsubsection{Quadratic expansion of the $(G,\tilde{\Sigma})$ action}
In this section, it will be convenient to treat bilocal functions $f(\tau_1,\tau_2)$ as operators (i.e.\ matrices indexed by $\tau_1$ and $\tau_2$) for which one can consider the product and the trace. A similar interpretation is also applicable to functions of four times. For example,
\begin{gather}
\operatorname{Tr} \bigl(f \cdot g\bigr)
=\int d\tau_1\,d\tau_2\, f(\tau_1,\tau_2)\, g(\tau_2,\tau_1) \,,\\[2pt]
f^{T}Ag = \int d\tau_1\,d\tau_2\,d\tau_3\,d\tau_4\,
f(\tau_2,\tau_1)\, A(\tau_1,\tau_2;\tau_3,\tau_4)\, g(\tau_3,\tau_4).
\end{gather}
With this in mind, we can express the $(G,\tilde{\Sigma})$ action \eqref{GSigma tilde action} as follows:
\begin{equation}
\frac{I[G,\tilde{\Sigma}]}{N}
= -\ln \det\bigl(-\tilde{\Sigma}\bigr)
- \frac{(-1)^{\frac{q}{2}}}{q}\,
\operatorname{Tr}\bigl(G^{\frac{q}{2}} \cdot G^{\frac{q}{2}}\bigr)
-\operatorname{Tr}\bigl(\tilde{\Sigma} \cdot G\bigr) +
\operatorname{Tr}\bigl(\sigma \cdot G\bigr)\,.
\end{equation}
Here the power $G^{q/2}$ is taken element-wise. Next, we expand the action $I=I[G,\tilde{\Sigma}]$ to the second order in the variations around the conformal point, $\delta G=G-G_c$,\, $\delta\tilde{\Sigma}=\tilde{\Sigma}-\tilde{\Sigma}_c$, ignoring the constant term $I[G_c,\tilde{\Sigma}_c]$:
\begin{equation}
\frac{I_2}{N} = \frac{1}{2} \begin{pmatrix}
\delta \tilde{\Sigma}^T & \delta G^T
\end{pmatrix}
\begin{pmatrix}
W_{\Sigma} & -\mathbf{1} \\
-\mathbf{1} & W_{G}
\end{pmatrix}
\begin{pmatrix}
\delta \tilde{\Sigma} \\
\delta G
\end{pmatrix} + \operatorname{Tr} \left( \sigma \cdot \delta G \right),\quad
W_{\Sigma} = \frac{\delta^2 I}{(\delta \tilde{\Sigma})^2} \,, \quad W_{G} = \frac{\delta^2 I}{(\delta G)^2} \,.
\label{eq: quadratic action}
\end{equation}
The matrices $W_{\Sigma}$ and $W_{G}$ can also be expressed as follows:
\begin{equation}
W_{\Sigma} = \frac{\delta G}{\delta \tilde{\Sigma}} \,, \qquad
W_{G} = \frac{\delta\Sigma}{\delta G} \,,
\end{equation}
where the functional dependences of $G$ on $\tilde{\Sigma}$ and of $\Sigma$ on $G$ are given by the Schwinger-Dyson equations:
\begin{equation}
G = -\tilde{\Sigma}^{-1}, \qquad
\Sigma(\tau_1,\tau_2) = G(\tau_1,\tau_2)^{\frac{q}{2}} \left( - G(\tau_2,\tau_1)\right)^{\frac{q}{2}-1}\,.
\end{equation}
(The equation $\tilde{\Sigma}=\Sigma+\sigma$ is not used.) Here are the explicit formulas and diagrammatic representations of these matrices:
\begin{align}
&W_{\Sigma}(\tau_1,\tau_2;\tau_3,\tau_4) = G_c(\tau_1,\tau_3) G_c(\tau_4,\tau_2) =
\begin{tikzpicture}[baseline={([yshift=-4pt]current bounding box.center)}]
\draw[thick, mid arrow] (40pt,12pt)--(0pt,12pt);
\draw[thick, mid arrow] (0pt,-12pt)--(40pt,-12pt);
\node at (-5pt,12pt) {\scriptsize $1$};
\node at (-5pt,-12pt) {\scriptsize $2$};
\node at (48pt,12pt) {\scriptsize $3$};
\node at (48pt,-12pt) {\scriptsize $4$};
\end{tikzpicture}\,,
\label{WSigma}\\[5pt]
&\begin{aligned}
W_{G}(\tau_1,\tau_2;\tau_3,\tau_4) &= \left( -1 \right)^{\frac{q}{2}-1}
\left[
\frac{q}{2} G_c(\tau_1,\tau_2)^{\frac{q}{2}-1} G_c(\tau_2,\tau_1)^{\frac{q}{2}-1} \delta (\tau_1,\tau_3) \delta (\tau_2,\tau_4) \right. \\
&\hspace{50pt} \left.+ \left( \frac{q}{2}-1 \right) G_c(\tau_1,\tau_2)^{\frac{q}{2}}
G_c(\tau_2,\tau_1)^{\frac{q}{2}-2} \delta (\tau_1,\tau_4) \delta (\tau_2,\tau_3)
\right]\\
&= (-1)^{\frac{q}{2}-1} \left(
\frac{q}{2}
\begin{tikzpicture}[baseline={([yshift=-4pt]current bounding box.center)}]
\draw[thick, densely dotted] (10pt,15pt)--(0pt,15pt);
\draw[thick, densely dotted] (0pt,-15pt)--(10pt,-15pt);
\draw[thick, mid arrow] (0pt,15pt)..controls (5pt,7pt) and (5pt,-7pt)..(0pt,-15pt);
\draw[thick, mid arrow] (0pt,-15pt)..controls (-5pt,-7pt) and (-5pt,7pt)..(0pt,15pt);
\node at (-5pt,15pt) {\scriptsize $1$};
\node at (-5pt,-15pt) {\scriptsize $2$};
\node at (18pt,15pt) {\scriptsize $3$};
\node at (18pt,-15pt) {\scriptsize $4$};
\end{tikzpicture}
+
\left(\frac{q}{2}-1 \right)
\begin{tikzpicture}[baseline={([yshift=-4pt]current bounding box.center)}]
\draw[thick, densely dotted] (0pt,15pt)--(20pt,-15pt);
\draw[thick, densely dotted] (20pt,15pt)--(0pt,-15pt);
\draw[thick, mid arrow] (0pt,-15pt)..controls (5pt,-7pt) and (5pt,7pt)..(0pt,15pt);
\draw[thick, mid arrow] (0pt,-15pt)..controls (-5pt,-7pt) and (-5pt,7pt)..(0pt,15pt);
\node at (-5pt,15pt) {\scriptsize $1$};
\node at (-5pt,-15pt) {\scriptsize $2$};
\node at (28pt,15pt) {\scriptsize $3$};
\node at (28pt,-15pt) {\scriptsize $4$};
\end{tikzpicture}
\right)\,.
\end{aligned}
\label{WG}
\end{align}
(An arrow pointing from $\tau'$ to $\tau$ denotes $G_c(\tau,\tau')$.)
\subsubsection{Ladder kernels}
To calculate the effects of the perturbation source $\sigma$, we may first eliminate $\delta\tilde{\Sigma}$ from the quadratic action by evaluating it at the saddle point, $\delta\tilde{\Sigma}=W_{\Sigma}^{-1} \delta G$:
\begin{equation}
\frac{I_2[\delta G]}{N} = \frac{1}{2} \delta G^T \left(W_{G} - W_{\Sigma}^{-1} \right) \delta G + \operatorname{Tr} \left( \sigma \cdot \delta G\right)\,.
\end{equation}
We further take the saddle point with respect to $\delta G$ and find its equilibrium value,
\begin{equation}
\delta G = - \left(W_{G} - W_{\Sigma}^{-1} \right)^{-1} \sigma \,,
\end{equation}
which may be interpreted as the sum of ladder diagrams. The corresponding $\delta\tilde{\Sigma}$ is expressed in a similar way:
\begin{equation}
\begin{aligned}
\delta G &= (1-\underbrace{W_{\Sigma} {W_{G}}}_{=:K_G})^{-1} W_{\Sigma}\sigma =\left(1+ K_G + K^2_G + \ldots \right) W_{\Sigma}\sigma\,, \\
\delta \tilde{\Sigma} &= W_{\Sigma}^{-1} \delta G = (1- \underbrace{W_{G} W_{\Sigma}}_{=:K_{\Sigma}})^{-1} \sigma = (1+K_{\Sigma}+K_{\Sigma}^2+\ldots) \sigma \,.
\end{aligned}
\label{response}
\end{equation}
The ladder kernels $K_G$, $K_{\Sigma}$ are products of $W_{\Sigma}$ and $W_G$ in different orders, and thus, have the same spectrum (excluding $0$). Let us give their diagrammatic representations:
\begin{equation}
\begin{aligned}
K_{G}(\tau_1,\tau_2;\tau_3,\tau_4) &= \left( -1 \right)^{\frac{q}{2}-1}
\left(
\frac{q}{2}
\begin{tikzpicture}[baseline={([yshift=-4pt]current bounding box.center)}]
\draw[thick, mid arrow] (40pt,15pt)--(0pt,15pt);
\draw[thick, mid arrow] (0pt,-15pt)--(40pt,-15pt);
\draw[thick, mid arrow] (40pt,15pt)..controls (45pt,7pt) and (45pt,-7pt)..(40pt,-15pt);
\draw[thick, mid arrow] (40pt,-15pt)..controls (35pt,-7pt) and (35pt,7pt)..(40pt,15pt);
\node at (-5pt,15pt) {\scriptsize $1$};
\node at (-5pt,-15pt) {\scriptsize $2$};
\node at (48pt,15pt) {\scriptsize $3$};
\node at (48pt,-15pt) {\scriptsize $4$};
\end{tikzpicture}
+\left(\frac{q}{2}-1 \right)
\begin{tikzpicture}[baseline={([yshift=-4pt]current bounding box.center)}]
\draw[thick, far arrow] (40pt,-15pt)--(0pt,15pt);
\draw[thick, near arrow] (0pt,-15pt)--(40pt,15pt);
\draw[thick, mid arrow] (40pt,15pt)..controls (45pt,7pt) and (45pt,-7pt)..(40pt,-15pt);
\draw[thick, mid arrow] (40pt,15pt)..controls (35pt,7pt) and (35pt,-7pt)..(40pt,-15pt);
\node at (-5pt,15pt) {\scriptsize $1$};
\node at (-5pt,-15pt) {\scriptsize $2$};
\node at (48pt,15pt) {\scriptsize $3$};
\node at (48pt,-15pt) {\scriptsize $4$};
\end{tikzpicture}
\right)
\,,\\[5pt]
K_{\Sigma}(\tau_1,\tau_2;\tau_3,\tau_4) &= \left( -1 \right)^{\frac{q}{2}-1}
\left(
\frac{q}{2}
\begin{tikzpicture}[baseline={([yshift=-4pt]current bounding box.center)}]
\draw[thick, mid arrow] (40pt,15pt)--(0pt,15pt);
\draw[thick, mid arrow] (0pt,-15pt)--(40pt,-15pt);
\draw[thick, mid arrow] (0pt,15pt)..controls (5pt,7pt) and (5pt,-7pt)..(0pt,-15pt);
\draw[thick, mid arrow] (0pt,-15pt)..controls (-5pt,-7pt) and (-5pt,7pt)..(0pt,15pt);
\node at (-5pt,15pt) {\scriptsize $1$};
\node at (-5pt,-15pt) {\scriptsize $2$};
\node at (48pt,15pt) {\scriptsize $3$};
\node at (48pt,-15pt) {\scriptsize $4$};
\end{tikzpicture}
+\left(\frac{q}{2}-1 \right)
\begin{tikzpicture}[baseline={([yshift=-4pt]current bounding box.center)}]
\draw[thick, far arrow] (0pt,15pt)--(40pt,-15pt);
\draw[thick, near arrow] (40pt,15pt)--(0pt,-15pt);
\draw[thick, mid arrow] (0pt,-15pt)..controls (5pt,-7pt) and (5pt,7pt)..(0pt,15pt);
\draw[thick, mid arrow] (0pt,-15pt)..controls (-5pt,-7pt) and (-5pt,7pt)..(0pt,15pt);
\node at (-5pt,15pt) {\scriptsize $1$};
\node at (-5pt,-15pt) {\scriptsize $2$};
\node at (48pt,15pt) {\scriptsize $3$};
\node at (48pt,-15pt) {\scriptsize $4$};
\end{tikzpicture}
\right) \,.
\end{aligned}
\label{KGSigma}
\end{equation}
\subsubsection{Calculation of $K_G(h)$ and $K_{\Sigma}(h)$}
Due to $SL(2,R)$ symmetry, $K_G$ and $K_\Sigma$ preserve power-law functions such as $\sigma(\tau) \sim \tau^{2\Delta-1-h}$. More precisely, we will consider perturbation sources of the form
\begin{equation}
\sigma(\tau) =
\begin{pmatrix}
c_+\\
c_-
\end{pmatrix} |\tau|^{1-h}
\tilde{\Sigma}_c(\tau)
:= \begin{cases}
c_+ |\tau|^{1-h}\tilde{\Sigma}_c(\tau), & \tau>0 \\
c_- |\tau|^{1-h}\tilde{\Sigma}_c(\tau), & \tau<0
\end{cases}\,,
\end{equation}
which generate the responses
\begin{equation}
\delta G(\tau) =
\begin{pmatrix}
\delta G_+\\
\delta G_-
\end{pmatrix} |\tau|^{1-h} G_c (\tau) \,, \qquad
\delta\tilde{\Sigma}(\tau) =
\begin{pmatrix}
\delta \tilde{\Sigma}_+\\
\delta \tilde{\Sigma}_-
\end{pmatrix} |\tau|^{1-h}\tilde{\Sigma}_c(\tau)\,.
\label{perturbations}
\end{equation}
The goal is to find the $2\times 2$ matrices $W_\Sigma(h)$, $W_G(h)$ relating such coefficient vectors, excluding the $\tau$-dependent factors. For example, since $W_\Sigma=\delta G/\delta\tilde{\Sigma}$, we have $\left(\begin{smallmatrix} \delta G_+\\ \delta G_- \end{smallmatrix}\right) = W_\Sigma(h) \left(\begin{smallmatrix} \delta\tilde{\Sigma}_+\\ \delta\tilde{\Sigma}_- \end{smallmatrix}\right)$. To calculate $W_{\Sigma}(h)$, we use the equation $\delta G(i\omega) = G(i\omega)^2 \delta \tilde{\Sigma}(i\omega)$, where $G(i\omega)$ is given by \eqref{zeroT Green frequency}. It should be combined with the Fourier transform
\begin{equation}
\bigintssss
\begin{pmatrix}
a_+ |\tau|^{-\alpha}, \quad \tau>0 \\
a_- |\tau|^{-\alpha} , \quad \tau<0
\end{pmatrix} e^{i\omega \tau} d\tau =
\begin{pmatrix}
a'_+ |\omega|^{-1+\alpha}, \quad \omega>0 \\
a'_- |\omega|^{-1+\alpha} , \quad \omega<0
\end{pmatrix}\,,
\quad \begin{pmatrix}
a_+'\\
a_-'
\end{pmatrix}
= M(\alpha)
\begin{pmatrix}
a_+\\
a_-
\end{pmatrix}\,,
\end{equation}
where
\begin{equation}
M(\alpha) = \Gamma(1-\alpha) \begin{pmatrix}
i^{1-\alpha} & i^{-1+\alpha} \\
i^{-1+\alpha} & i^{1-\alpha}
\end{pmatrix}\,, \quad
M(\alpha)^{-1} = \frac{\Gamma(\alpha)}{2\pi}
\begin{pmatrix}
i^{-\alpha} & i^\alpha \\
i^\alpha & i^{-\alpha}
\end{pmatrix}\,.
\end{equation}
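The formula for $M(\alpha)^{-1}$ relies on the reflection identity $\Gamma(\alpha)\Gamma(1-\alpha)=\pi/\sin(\pi\alpha)$. As an illustrative check (the value of $\alpha$ is arbitrary), one can verify $M(\alpha)M(\alpha)^{-1}=\mathbf{1}$ numerically:
\begin{verbatim}
import numpy as np
from scipy.special import gamma

ip = lambda x: np.exp(1j*np.pi*x/2)     # i**x on the principal branch

def M(alpha):
    return gamma(1 - alpha)*np.array([[ip(1 - alpha), ip(alpha - 1)],
                                      [ip(alpha - 1), ip(1 - alpha)]])

def Minv(alpha):
    return gamma(alpha)/(2*np.pi)*np.array([[ip(-alpha), ip(alpha)],
                                            [ip(alpha), ip(-alpha)]])

alpha = 0.37
print(np.round(M(alpha) @ Minv(alpha), 12))   # -> 2x2 identity
\end{verbatim}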
The relevant values of $\alpha$ are $2\Delta-1+h$ for $\delta G$ and $1-2\Delta+h$ for $\delta \tilde{\Sigma}$. Thus,
\begin{equation}
\begin{aligned}
W_{\Sigma}(h) ={}& - \frac{\Gamma(2-2\Delta)}{\Gamma(2\Delta)}
\begin{pmatrix}
-e^{-\pi \calE} & 0 \\
0 & e^{\pi \calE}
\end{pmatrix} M(2\Delta-1+h)^{-1}
\begin{pmatrix}
e^{-2i \theta} & 0 \\
0 & e^{2i \theta}
\end{pmatrix}
\\
&\cdot M(1-2\Delta+h)
\begin{pmatrix}
-e^{\pi \calE} & 0 \\[3pt]
0 & e^{-\pi \calE}
\end{pmatrix} \\[3pt]
{}={}& \frac{\Gamma(2\Delta-1+h) \Gamma(2\Delta-h) }{\Gamma(2\Delta) \Gamma(2\Delta-1) \sin (2\pi \Delta) }
\begin{pmatrix}
\sin (\pi h + 2\theta) & - \sin (2\pi \Delta) + \sin (2\theta) \\
-\sin (2\pi \Delta) - \sin (2\theta) & \sin (\pi h -2\theta)
\end{pmatrix}\,.
\end{aligned}
\end{equation}
The matrix $W_G(h)$ is obtained from \eqref{WG}; it is in fact independent of $h$:
\begin{equation}
W_G(h) = \begin{pmatrix}
\frac{q}{2} & \frac{q}{2}-1 \\
\frac{q}{2}-1 & \frac{q}{2}
\end{pmatrix}\,.
\end{equation}
Finally,
\begin{equation}
K_G(h)=W_{\Sigma}(h)W_G(h)\,, \quad K_{\Sigma}(h) = W_G(h) W_{\Sigma}(h)\,.
\end{equation}
Note that
\begin{equation}
K_G(1-h) =
\begin{pmatrix}
0 & 1\\
1 & 0
\end{pmatrix}
K_{\Sigma}(h)^T
\begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix} \,;
\label{eqn: kernel symmetry}
\end{equation}
this equation is related to the fact that $K_G(\tau_1,\tau_2;\tau_3,\tau_4)=K_{\Sigma}(\tau_4,\tau_3;\tau_2,\tau_1)$. Therefore, $K_G(h)$, $K_\Sigma(h)$, $K_G(1-h)$, $K_\Sigma(1-h)$ share the same eigenvalues.
\subsubsection{Resonant response}
Resonances occur at special values of $h$ such that $\det (1-K_\Sigma(h))=0$. In particular, $h=-1,0,1,2$ are solutions of this equation; see also Appendix~\ref{sec: operator spectrum} for a discussion of the other solutions. Among them, the $h=2$ and $h=1$ perturbation sources determine the coefficient $\alpha_S$ in the effective action and the parameters $\calE,\mathcal{Q}$, respectively. The dual values, $1-h=-1,0$, correspond to IR degrees of freedom, namely, the fluctuating fields $\varphi(\tau)$ and $\lambda(\tau)$.
Let $h=h_I$ be some resonance. The power-law source $\sigma(\tau)\sim \tau^{2\Delta-1-h_I}$ results in a divergent response, and therefore, has to be regulated. For this purpose, we multiply the source by a window function $u(\ln|\tau|)$:
\begin{equation}
\sigma_I(\tau) = \begin{pmatrix}
c^I_+ \\
c^I_-
\end{pmatrix} |\tau|^{1-h_I}\tilde{\Sigma}_c (\tau)\,
u\left(\ln |\tau|\right)\,, \qquad \int_{-\infty}^{+\infty} u(\xi) d\xi = 1\,.
\label{source2}
\end{equation}
Assuming that $u$ has finite support, $\sigma_I$ vanishes in the IR so that any response at sufficiently large scales is due to RG flow. On the other hand, the window should be sufficiently wide, with $u(\xi)$ varying slowly in $\xi$, so that $\sigma_I (\tau)$ can be decomposed into power-law sources with $h$ close to $h_I$.
Following the argument in Ref.~\cite{KS17-soft}, Section 3.1, we conclude that at sufficiently large $\tau$,
\begin{equation}
\frac{\delta\tilde{\Sigma}(\tau)}{\tilde{\Sigma}_c (\tau)}= \begin{pmatrix}
\delta \tilde{\Sigma}_+\\
\delta \tilde{\Sigma}_-
\end{pmatrix}|\tau|^{1-h_I}\,, \quad \text{where} \quad
\begin{pmatrix}
\delta \tilde{\Sigma}_+ \\
\delta \tilde{\Sigma}_-
\end{pmatrix}
=\operatorname{Res}_{h=h_I}\bigl(K_{\Sigma}(h)-1\bigr)^{-1}
\begin{pmatrix}
c^I_+ \\
c^I_-
\end{pmatrix}\,.
\label{compare 1}
\end{equation}
A similar formula can be obtained for $\delta G$. Note that this result is independent of the details of the window function $u$.
\subsubsection{The $h=1$ resonance}
As already mentioned, this resonance is related to the parameters $\calE$ and $\mathcal{Q}$. So, let us find the residue of $(K_{\Sigma}(h)-1)^{-1}$ at $h=1$.
First, we compute $W_{\Sigma}(1)$ and $W'_{\Sigma}(1)$:
\begin{equation}
\begin{aligned}
W_{\Sigma}(1) &= \begin{pmatrix}
0 & -1 \\
-1 & 0
\end{pmatrix}
+ \frac{\sin (2 \theta)}{\sin (2\pi \Delta)} \begin{pmatrix}
-1 & 1 \\
-1 & 1
\end{pmatrix}\,, \\[3pt]
W'_{\Sigma}(1) & = -\frac{1}{1-2\Delta} W_{\Sigma}(1) - \pi \frac{\cos (2\theta)}{\sin (2\pi \Delta)} \begin{pmatrix}
1 & 0 \\
0 & 1
\end{pmatrix}\,.
\end{aligned}
\end{equation}
Thus,
\begin{equation}
\begin{aligned}
K_{\Sigma}(1) &= W_G W_{\Sigma}(1)=\begin{pmatrix}
1- \frac{q}{2} & -\frac{q}{2} \\
- \frac{q}{2} & 1-\frac{q}{2}
\end{pmatrix}
+ \frac{\sin (2 \theta)}{\sin (2\pi \Delta)} (q-1) \begin{pmatrix}
-1 & 1 \\
-1 & 1
\end{pmatrix}\,, \\[3pt]
K_{\Sigma}'(1) & =W_G W'_{\Sigma}(1)= -\frac{1}{1-2\Delta} K_{\Sigma}(1) - \pi \frac{\cos (2\theta)}{\sin (2\pi \Delta)} \begin{pmatrix}
\frac{q}{2} & \frac{q}{2}-1 \\
\frac{q}{2}-1 & \frac{q}{2}
\end{pmatrix}\,.
\end{aligned}
\end{equation}
The matrix $K_{\Sigma}(1)$ has eigenvalues $-(q-1)$ and $1$ as expected. The left and right eigenvectors associated with the eigenvalue $1$ are:
\begin{equation}
v^L= \begin{pmatrix}
1 & -1
\end{pmatrix}\,, \quad
v^R = \frac{1}{2} \begin{pmatrix}
1 \\
-1
\end{pmatrix}
- (1-\Delta) \frac{\sin (2\theta)}{\sin (2\pi \Delta)}
\begin{pmatrix}
1 \\
1
\end{pmatrix} \,, \quad v^L v^R =1\,.
\end{equation}
By abuse of notation, we introduce the number
\begin{equation}
k'(1):= v^L K_{\Sigma}' (1) v^R = -\frac{1}{1-2\Delta} - \pi \frac{\cos (2\theta)}{\sin (2\pi \Delta)}\,
\label{eqn: kprime(1)}
\end{equation}
without actually defining a function $k(h)$. Thus,
\begin{equation}
\operatorname{Res}_{h=1} \bigl(K_{\Sigma}(h)-1\bigr)^{-1} = \frac{1}{k'(1)} v^R v^L \,.
\label{Res_h=1}
\end{equation}
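These algebraic statements are straightforward to verify numerically. The following sketch (with $\Delta=1/q$ and an arbitrary asymmetry angle $\theta$) builds $K_{\Sigma}(1)=W_GW_{\Sigma}(1)$ and checks its eigenvalues together with the eigenvectors $v^L$, $v^R$:
\begin{verbatim}
import numpy as np

q = 4
Delta = 1/q
theta = 0.3                      # arbitrary value, |theta| < pi*Delta
s = np.sin(2*theta)/np.sin(2*np.pi*Delta)

W_G = np.array([[q/2, q/2 - 1], [q/2 - 1, q/2]])
W_S1 = np.array([[0, -1], [-1, 0]]) + s*np.array([[-1, 1], [-1, 1]])
K_S1 = W_G @ W_S1

print(np.sort(np.linalg.eigvals(K_S1).real))      # -> [-(q-1), 1]

vL = np.array([1.0, -1.0])
vR = 0.5*np.array([1.0, -1.0]) - (1 - Delta)*s*np.array([1.0, 1.0])
print(np.allclose(vL @ K_S1, vL),                 # left eigenvector, eigenvalue 1
      np.allclose(K_S1 @ vR, vR),                 # right eigenvector, eigenvalue 1
      vL @ vR)                                    # normalization -> 1.0
\end{verbatim}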
\subsection{Calculation of $d\mathcal{Q}/d\calE$}
As part of the renormalization scheme for $\mathcal{Q}$ and $\calE$, we calculate the variations of these parameters due to a perturbation source at a particular scale. More specifically, we consider the source \eqref{source2} with $h_I=1$:
\begin{equation}
\delta\sigma (\tau) =
\begin{pmatrix}
c_+ \\
c_-
\end{pmatrix} \tilde{\Sigma}_c(\tau)\,
u\left(\ln |\tau|\right)\,, \qquad \int_{-\infty}^{+\infty} u(\xi) d\xi = 1\,.
\end{equation}
To find $\delta\mathcal{Q}$, we integrate $\delta\sigma(\tau)$ against $-\tau G_c(-\tau)$, see \eqref{defQ1}. The functions $G_c=G_{\beta=\infty}$ and $\tilde{\Sigma}_c=\tilde{\Sigma}_{\beta=\infty}$ are given by \eqref{zeroT Green}; notice that $\tilde{\Sigma}_c(\tau)\cdot(-\tau G_c(-\tau)) =b\tau^{-1}$. Hence,
\begin{equation}
\delta\mathcal{Q} = b \left( c_+ -c_-\right) = b v^L \begin{pmatrix}
c_+ \\
c_-
\end{pmatrix} \,.
\label{compare 2}
\end{equation}
The source also determines $\delta \tilde{\Sigma}$ through equations \eqref{compare 1} and \eqref{Res_h=1}:
\begin{equation}
\frac{\delta \tilde{ \Sigma} (\tau)}{\tilde{\Sigma}(\tau)} = \delta\mathcal{Q}\cdot \frac{1}{b k'(1)} v^R \,.
\label{dSigma1}
\end{equation}
This result may be interpreted as a change of the asymmetry parameter $\calE$. Indeed, it follows from Eq.~\eqref{zeroT Green} that
\begin{equation}
b^{-1} \frac{d b}{d\calE} = - 2\pi \frac{\sinh (2\pi \calE)}{\cos (2\pi \Delta)+ \cosh (2\pi \calE)} = - 2\pi \frac{\sin (2\theta)}{\sin (2\pi \Delta)} \,,
\end{equation}
and hence,
\begin{equation}
\frac{\delta \tilde{\Sigma}(\tau)}{\tilde{\Sigma}(\tau)} = \delta \calE
\begin{pmatrix}
\pi + (1-\Delta) b^{-1} \frac{d b}{d\calE} \\[2pt]
-\pi + (1-\Delta) b^{-1} \frac{d b}{d\calE}
\end{pmatrix} = \delta\calE\cdot 2\pi v^R \,.
\label{dSigma2}
\end{equation}
Comparing \eqref{dSigma1} with \eqref{dSigma2}, we get:
\begin{equation}
\frac{d\mathcal{Q}}{d\calE} = 2\pi b k'(1)= - \frac{\sin (2\pi \Delta)}{\cos (2\pi \Delta)+ \cosh (2\pi \calE)} - \pi (1-2\Delta)\frac{1+ \cos (2\pi \Delta) \cosh (2\pi \calE)}{( \cos (2\pi \Delta)+ \cosh (2\pi \calE) )^2}\,.
\label{dQdE}
\end{equation}
This formula can be written more compactly using the $\theta$ variable,
\begin{equation}
\frac{d\mathcal{Q}}{d\theta} = 2\pi b k'(1) \frac{d\calE}{d\theta}=- \frac{1}{\pi} - (1-2\Delta) \frac{\cos (2\theta)}{\sin (2\pi \Delta)}\,, \label{dQdtheta}
\end{equation}
which is
consistent with Eq.~(\ref{charge theta}) and the results in Refs.~\cite{GPS01,SS15,Davison17}.
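As an independent cross-check, \eqref{dQdE} and \eqref{dQdtheta} can be compared numerically, using the relation $\sin 2\theta=\sin(2\pi\Delta)\,\sinh(2\pi\calE)/\bigl(\cos(2\pi\Delta)+\cosh(2\pi\calE)\bigr)$ implied by the two expressions for $b^{-1}\,db/d\calE$ above; the values of $\Delta$ and $\calE$ in the sketch below are illustrative:
\begin{verbatim}
import numpy as np

Delta = 0.25                       # q = 4
s2pD, c2pD = np.sin(2*np.pi*Delta), np.cos(2*np.pi*Delta)

def theta(E):
    # sin(2 theta) = sin(2 pi Delta) sinh(2 pi E)/(cos(2 pi Delta) + cosh(2 pi E))
    return 0.5*np.arcsin(s2pD*np.sinh(2*np.pi*E)/(c2pD + np.cosh(2*np.pi*E)))

def dQ_dE(E):                      # right-hand side of Eq. (dQdE)
    den = c2pD + np.cosh(2*np.pi*E)
    return (-s2pD/den
            - np.pi*(1 - 2*Delta)*(1 + c2pD*np.cosh(2*np.pi*E))/den**2)

def dQ_dtheta(th):                 # right-hand side of Eq. (dQdtheta)
    return -1/np.pi - (1 - 2*Delta)*np.cos(2*th)/s2pD

E, eps = 0.1, 1e-6
chain = dQ_dtheta(theta(E))*(theta(E + eps) - theta(E - eps))/(2*eps)
print(dQ_dE(E), chain)             # the two values agree
\end{verbatim}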
\section{Computation of the compressibility}
\label{sec:compress}
This section will present three different numerical approaches to computing the charge compressibility $K$ of the complex SYK model. We will limit these computations to the particle-hole symmetric case, where $Q=0$ and $\mu=0$. These computations involve determining the response of the particle-hole symmetric solution to a small non-zero $Q$ or $\mu$:
\begin{enumerate}
\item In Section~\ref{sec:exact}, we will compute the compressibility by an exact diagonalization, evaluating the ground state energy $E_0$ as a function of small $Q$.
The value of $E_0$ is determined by the UV structure of the model, and we therefore expect $K$ to also be sensitive to the UV structure. This is as in Fermi liquid theory, where the compressibility involves a new Landau parameter, $F_0^s$, and is not determined by just the quasiparticle effective mass $m^\ast$. In contrast, in both Fermi liquid theory and the SYK models, the $T$-linear coefficient of the specific heat is determined by low energy physics: in Fermi liquid theory by $m^\ast$, and in the SYK model by the leading low energy deviation of the conformal solution \cite{Kit.KITP.2,MS16-remarks}.
\item In Section~\ref{numschwindys} we will numerically compute $K$ by an alternative method: full numerical solution of the Schwinger-Dyson equations of the SYK model.
\item Finally, the numerical approach in Section~\ref{kerneldiag} employs diagonalization of the two-particle kernel.
\end{enumerate}
The values of $K$ obtained in these subsections are in excellent agreement.
Throughout this section we restore explicit factors of the coupling $J$.
\subsection{Exact diagonalization}
\label{sec:exact}
In this subsection we perform exact diagonalization of the complex SYK Hamiltonian for $q=4$.
The Hamiltonian \eqref{phsymham}
commutes with the charge operator (\ref{chargeoper}): $[\hat{H},\hat{Q}]=0$. Therefore we can diagonalize $\hat{H}$ in each charge sector and find the ground state energy $E_{0}(Q)$.
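A schematic version of this computation fits in a few dozen lines. The Python sketch below builds a single disorder sample in a Jordan-Wigner representation; the variance normalization of the couplings is an assumption here, and the particle-hole-symmetric ordering of \eqref{phsymham} is omitted, so the sketch illustrates the procedure rather than reproducing the data below:
\begin{verbatim}
import numpy as np
from itertools import combinations

N, J = 8, 1.0                     # small N for illustration
rng = np.random.default_rng(1)

a = np.array([[0., 1.], [0., 0.]])          # annihilation; |1> = occupied
Z = np.array([[1., 0.], [0., -1.]])
I2 = np.eye(2)

def c(i):
    # Jordan-Wigner string: Z x ... x Z x a x 1 x ... x 1
    out = np.array([[1.0]])
    for o in [Z]*i + [a] + [I2]*(N - i - 1):
        out = np.kron(out, o)
    return out

cs = [c(i) for i in range(N)]
cds = [op.conj().T for op in cs]

# H = sum J_{ij;kl} c+_i c+_j c_k c_l + h.c., with assumed variance ~ J^2/N^3
H = np.zeros((2**N, 2**N), dtype=complex)
for (i, j) in combinations(range(N), 2):
    for (k, l) in combinations(range(N), 2):
        Jijkl = (rng.normal() + 1j*rng.normal())*J/np.sqrt(2*N**3)
        term = Jijkl*(cds[i] @ cds[j] @ cs[k] @ cs[l])
        H += term + term.conj().T

# [H, n] = 0: diagonalize charge sector by sector, with Q = n - N/2
n_occ = np.rint(np.diag(sum(cds[i] @ cs[i] for i in range(N))).real)
for Q in range(-2, 3):
    idx = np.where(n_occ == N//2 + Q)[0]
    E0 = np.linalg.eigvalsh(H[np.ix_(idx, idx)])[0]
    print(f"Q = {Q:+d}: E0 = {E0:.4f}")
\end{verbatim}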
\begin{figure}[t]
\center
\subfloat[$E_0(Q)$ vs. $Q$]
{\includegraphics[width=0.45\textwidth]{Parabolas.pdf}}
\qquad
\subfloat[ $E_0(0)$ vs. $N$]
{\includegraphics[width=0.42\textwidth]{PlotE0.pdf}}
\caption{(a) The ground state energy $E_{0}(Q)$ as a function of the charge $Q$ in units of $J$ ($q=4$). The numbers of samples are 250000 ($N=10$), 120000 ($N=11$), 50000 ($N=12$), 10000 ($N=13$), 2000 ($N=14$), 1000 ($N=15$), 500 ($N=16$), 200 ($N=17$). The dashed lines are fits of the form $E_{0}(Q)=E_{0}+a Q^{2}$.
(b) The ground state energy $E_{0}$ at zero charge as a function of $N$. The dashed line is the fit $E_{0}=-0.079 N-0.479+1.6/N$; the leading large-$N$ term is to be compared with $2\epsilon_{0}$, where $\epsilon_{0}\approx -0.0406$ \cite{Cotler:2016fpe}.
}
\label{fig: ED}
\end{figure}
Our numerical results for the ground state energy $E_{0} (Q)$ are summarized in Fig.~\ref{fig: ED}(a).
\begin{enumerate}
\item
Fitting these results to the form $E_0(Q) = E_0 + a Q^2$, we obtain the values of $E_0$ and $a$ shown in the following table:
\begin{center}
\begin{tabular}{@{} c|c|c|c|c|c|c|c|c @{}}
\toprule
$N$ & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 \\
\hline
$E_{0}$ & $-1.105$ & $-1.197$ & $-1.288$ & $-1.378$ & $-1.462$ & $-1.552$ & $-1.636$ & $-1.719$ \\
\hline
$a$ & 0.0489 & 0.0437 & 0.0400 & 0.0371 & 0.0338 & 0.0316 &0.0292 & 0.0276 \\
\end{tabular}
\end{center}
\item
Finally, using $a = 1/(2 N K)$, we obtain the values of $K$ in the following table
\begin{center}
\begin{tabular}{@{} c|c|c|c|c|c|c|c|c @{}}
\toprule
$N$ & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 \\
\hline
$K$ & $1.023$ & $1.041$ & $1.043$ & $1.037$ & $1.055$ & $1.056$ & $1.072$ & $1.067$ \\
\end{tabular}
\end{center}
\end{enumerate}
Note that there is little dependence of $K$ on $N$.
We also show the $N$ dependence of $E_0$ in Fig.~\ref{fig: ED}(b).
\subsection{Schwinger-Dyson equation}
\label{numschwindys}
Here we briefly review the numerical solution of the Schwinger-Dyson equations for the complex SYK model. This has already been discussed in many papers; see in particular
Refs.~\cite{MS16-remarks, Davison17}. Our main purpose is to show that this method gives a compressibility $K$ very close to the result obtained in Section~\ref{sec:exact} from exact diagonalization.
We solve the Schwinger-Dyson equations numerically using the well-known method of weighted iterations:
\begin{equation}
\Sigma_{j}(\tau)=J^{2}G_{j}^{q/2}(\tau)G_{j}^{q/2-1}(\beta-\tau)\,, \quad
G_{j+1}(i\omega_{n}) =(1-w) G_{j}(i\omega_{n})+\frac{w}{i \omega_{n}+\mu-\Sigma_{j}(i\omega_{n})} \,.
\end{equation}
For non-zero chemical potential it is convenient to start the iterations from the conformal answer, regulated at the boundaries $\tau=0^{+}$ and $\tau=\beta^{-}$, with the $\theta$ parameter corresponding to a specific charge $\mathcal{Q}$ close to the expected numerical value. This prevents the iterations from converging to an exponentially decaying solution.
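A minimal, unoptimized sketch of this iteration is given below (direct Matsubara sums on a small grid, with the $1/(i\omega_n)$ tail subtracted analytically; the production runs in this subsection use $10^8$ grid points and fast Fourier transforms). The grid sizes, mixing weight and free-propagator starting point are illustrative assumptions.
\begin{verbatim}
import numpy as np

def solve_sd(beta=50.0, J=1.0, mu=0.0, q=4, M=1024, w=0.3, tol=1e-10):
    """Weighted iteration for the complex SYK Schwinger-Dyson equations.
    Returns the tau grid, G(tau), the Matsubara grid and G(i w_n)."""
    n = np.arange(-M, M)
    wn = (2 * n + 1) * np.pi / beta                  # fermionic frequencies
    tau = (np.arange(2 * M) + 0.5) * beta / (2 * M)  # midpoints of (0, beta)
    dtau = beta / (2 * M)
    phase = np.exp(-1j * np.outer(tau, wn))          # e^{-i w_n tau}
    Gw = 1.0 / (1j * wn + mu)   # free start; for mu != 0 use the regulated
                                # conformal solution instead, as noted above
    for _ in range(10000):
        # frequency -> time; the 1/(i w_n) tail is resummed to the 1/2 term
        Gt = 0.5 + (phase @ (Gw - 1.0 / (1j * wn))).real / beta
        St = J ** 2 * Gt ** (q // 2) * Gt[::-1] ** (q // 2 - 1)
        Sw = (phase.conj().T @ St) * dtau            # time -> frequency
        Gw_new = (1 - w) * Gw + w / (1j * wn + mu - Sw)
        if np.max(np.abs(Gw_new - Gw)) < tol:
            break
        Gw = Gw_new
    return tau, Gt, wn, Gw_new
\end{verbatim}
Here \texttt{Gt[::-1]} implements $G(\beta-\tau)$ on the midpoint grid, and the specific charge of Eq.~\eqref{numQ} below is then approximated by \texttt{0.5 * (Gt[0] - Gt[-1])}.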
\begin{figure}[t]
\center
\includegraphics[width=0.55\textwidth]{plotGmu.pdf}
\caption{\label{plotGmu} Plot of the numerical solution for $G(\tau)$ at $q=4$, $\beta J =200$ and $\beta \mu=20$. The dashed line is the conformal solution (\ref{Gconf})
with $\theta$ determined from the numerical $\mathcal{Q}$ using the formula (\ref{charge theta}).}
\end{figure}
We find $\mathcal{Q}$ numerically using the formula
\begin{equation}
\mathcal{Q}= \frac{1}{2}(G(0^{+})-G(\beta^{-}))\,. \label{numQ}
\end{equation}
For large $\beta J$ we can use equation (\ref{charge theta}) to find the parameter $\theta$ in the conformal solution (\ref{Gconf}). We plot an example of the exact numerical $G(\tau)$ and its conformal fit $G_{\textrm{c}}(\tau)$ in Fig.~\ref{plotGmu}.
The grand potential can be computed from the expression \cite{Davison17}
\begin{equation}
-\beta \frac{\Omega}{N} = \log \left(2 \cosh \frac{\beta \mu}{2}\right) +2 \textrm{Re} \sum_{n=0}^{\infty}\log\left(1-\frac{\Sigma(i\omega_{n})}{i\omega_{n}+\mu}\right) +\frac{q-1}{q} \sum_{n=-\infty}^{+\infty}\Sigma(i\omega_{n})G(i\omega_{n})\,,
\end{equation}
from which one can obtain the entropy as
\begin{equation}
S = -\beta \frac{\Omega}{N} - \beta \mu \mathcal{Q} +\frac{2}{q}\sum_{n=-\infty}^{+\infty}\Sigma(i\omega_{n})G(i\omega_{n})\,,
\end{equation}
where $\mathcal{Q}$ is computed numerically using (\ref{numQ}).
Finally, the compressibility in units of $J$ can be obtained numerically by using the formula
\begin{equation}
K = \lim_{\mu\to 0} \frac{\mathcal{Q}}{\mu} = \lim_{\mu\to 0} \frac{1}{2\mu} (G(0^{+})-G(\beta^{-}))\,,
\end{equation}
where the second equality follows from Eq.~(\ref{numQ}).
Numerically, we fix $J=1$ and compute the ratio $\mathcal{Q}/\mu$ for small $T$ and small $\mu$. We first extrapolate the result to zero temperature to obtain $K$ as a function of small $\mu$, as shown in Fig.~\ref{Kcomp}(a). We then extrapolate this $K(\mu)$ to $\mu=0$ (Fig.~\ref{Kcomp}(b)).
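In code, this double extrapolation amounts to two linear fits (a sketch with hypothetical arrays \texttt{Ts} and \texttt{mus} of temperatures and chemical potentials, and measured ratios \texttt{Q\_over\_mu[mu]}):
\begin{verbatim}
K_of_mu = {mu: np.polyval(np.polyfit(Ts, Q_over_mu[mu], 1), 0.0)
           for mu in mus}                       # T -> 0 at fixed mu
K = np.polyval(np.polyfit(mus, [K_of_mu[mu] for mu in mus], 1), 0.0)
\end{verbatim}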
\begin{figure}[t]
\center
\subfloat[$\mathcal{Q}/\mu$ vs. $T$]{
\includegraphics[width=0.44\textwidth]{dQdmu.pdf}}\qquad
\subfloat[$K$ vs. $\mu$]{
\includegraphics[width=0.44\textwidth]{Kmu.pdf} }
\caption{\label{Kcomp} (a) Plot of ${\mathcal{Q}}/ \mu$ for different temperatures $T$ and chemical potentials $\mu$ for $q=4$. (b) Plot of $K$ for different $\mu$. }
\end{figure}
We performed the computations for $q=4$, using $10^8$ grid points for the two-point function. The value of $K$ we found is
\begin{equation}
K \approx 1.045/J\,. \label{KSDres}
\end{equation}
This result agrees quite well with the exact diagonalization result in the previous section and with the value of $K$ reported in \cite{Davison17}.
\subsection{Kernel diagonalization}
\label{kerneldiag}
This type of numerics was first carried out in Ref.~\cite{MS16-remarks} for the antisymmetric kernel\footnote{We thank D. Stanford for sharing his code with us.}. In Appendix~\ref{app:eff} we discuss an analytical approach to kernel diagonalization (see also Appendix F of Ref.~\cite{Davison17}).
The fluctuation analysis here is complementary to that in Section~\ref{secRG}, in the sense that here we expand the fluctuations around the exact saddle, while in Section~\ref{secRG} we expand around the conformal saddle.
We recall that we are working at the saddle with $\mathcal{Q}=0$, where the general expressions for the kernel \eqref{KGSigma} have an additional symmetry: they commute with the operator that exchanges the two times, and thus we may analyze the kernel separately on the subspaces of antisymmetric and symmetric functions. For this purpose,
let us consider the symmetrized antisymmetric and symmetric kernels\footnote{Compared to the general expression \eqref{KGSigma}, we ``average'' $K_G$ and $K_\Sigma$ in the sense that we split the $(q-2)$ rungs from one side between the two sides, so that the final expression is Hermitian. The superscript $\textrm{A}/\textrm{S}$ indicates the subspace of antisymmetric/symmetric functions of the two times on which the kernel acts. We also replace the conformal $G_c$ by the exact Green function, since in this section we expand around the exact saddle.}
\begin{equation}
K^{\textrm{A}/\textrm{S}}(\theta_{1},\theta_{2};\theta_{3},\theta_{4})=-\Big(\frac{q}{2}\pm(\frac{q}{2}-1)\Big)J^{2} |G(\theta_{12})|^{\frac{q-2}{2}}G(\theta_{13})G(\theta_{24})|G(\theta_{34})|^{\frac{q-2}{2}}\,,
\label{eqn: KAS}
\end{equation}
where we fix $\beta=2\pi$ so all angles take values in the interval $[0,2\pi]$.
Since these kernels are invariant under
the translation of all four times, i.e. they
commute with the operator $D = i(\partial_{\theta_{1}}+\partial_{\theta_{2}})$, one can look for the eigenfunctions
of the kernels $\Psi_{h,n}^{\textrm{A}/\textrm{S}}(\theta_{1},\theta_{2})$, which are simultaneously eigenfunctions of the operator $D$:
\begin{equation}
\Psi_{h,n}^{\textrm{A}/\textrm{S}}(\theta_{1},\theta_{2}) = e^{i n \frac{\theta_{1}+\theta_{2}}{2}}\phi^{\textrm{A}/\textrm{S}}_{h,n}(\theta_{12})\,.
\end{equation}
Let us also define variants of the kernels with the parameter $n$ accordingly,
\begin{equation}
K_n^{\textrm{A}/\textrm{S}} (\theta,\theta')= \int_{0}^{2\pi} K^{\textrm{A}/\textrm{S}} \left( s + \frac{\theta}{2}, s- \frac{\theta}{2} ; \frac{\theta'}{2} , - \frac{\theta'}{2} \right) e^{-ins} ds\,,
\end{equation}
such that $\phi^{\textrm{A}/\textrm{S}}_{h,n}(\theta)$ are the eigenfunctions with eigenvalue $k^{\textrm{A}/\textrm{S}}(h,n)$, more explicitly,
\begin{equation}
\int_{0}^{2\pi} K_n^{\textrm{A}/\textrm{S}} (\theta,\theta') \phi^{\textrm{A}/\textrm{S}}_{h,n}(\theta') \, d\theta' =k^{\textrm{A}/\textrm{S}}(h,n) \phi^{\textrm{A}/\textrm{S}}_{h,n}(\theta)\,.
\end{equation}
To numerically diagonalize the kernels $K_n^{\textrm{A}/\textrm{S}}(\theta,\theta')$ in the spaces of antisymmetric/symmetric functions (on the discretized coordinates $\theta$, $\theta'$), it is more convenient to impose the symmetry explicitly: namely, we use $(K^{\textrm{A}}_n(\theta,\theta')-K^{\textrm{A}}_n(\theta,-\theta'))/2$ and $(K^{\textrm{S}}_n(\theta,\theta')+K^{\textrm{S}}_n(\theta,-\theta'))/2$ in the actual calculation.
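The whole procedure fits in a short script. The sketch below discretizes Eq.~\eqref{eqn: KAS} on uniform midpoint grids, performs the $s$-integral defining $K_n^{\textrm{A}/\textrm{S}}(\theta,\theta')$, imposes the explicit (anti)symmetrization described above, and extracts the top eigenvalue. Here \texttt{Gfun} stands for the exact numerical Green function on $[0,2\pi)$ (e.g.\ from the Schwinger-Dyson solver), and the grid sizes are illustrative assumptions, far smaller than in the production runs.
\begin{verbatim}
import numpy as np

def top_kernel_eigenvalue(Gfun, q=4, J=1.0, n=1, sym='S', Ng=200, Ns=800):
    """Largest eigenvalue of the discretized, explicitly (anti)symmetrized
    kernel K_n^{A/S} at beta = 2*pi."""
    def G(x):              # antiperiodic extension: G(x + 2*pi) = -G(x)
        k = np.floor(x / (2 * np.pi)).astype(int)
        return np.where(k % 2 == 0, 1.0, -1.0) * Gfun(x - 2 * np.pi * k)
    th = (np.arange(Ng) + 0.5) * (2 * np.pi / Ng)   # theta midpoint grid
    dth = 2 * np.pi / Ng
    s = (np.arange(Ns) + 0.5) * (2 * np.pi / Ns)    # s-integration grid
    ds = 2 * np.pi / Ns
    ph = np.exp(-1j * n * s)
    sgn = 1.0 if sym == 'A' else -1.0
    pref = -(q / 2 + sgn * (q / 2 - 1)) * J ** 2    # -(q/2 +- (q/2-1)) J^2

    def kernel(tp):        # K_n(th_i, tp_j): s-integral over the two rungs
        K = np.empty((Ng, len(tp)), dtype=complex)
        for j, t2 in enumerate(tp):
            d = (th[:, None] - t2) / 2
            K[:, j] = (ph * G(s[None, :] + d) * G(s[None, :] - d)).sum(1) * ds
        return (pref * np.abs(G(th))[:, None] ** ((q - 2) / 2) * K
                     * np.abs(G(np.asarray(tp)))[None, :] ** ((q - 2) / 2))

    Ksym = 0.5 * (kernel(th) - sgn * kernel(-th))   # explicit (anti)symmetry
    evals = np.linalg.eigvals(Ksym * dth)           # dth: integration measure
    return evals[np.argmax(evals.real)].real
\end{verbatim}
Scanning the coupling at fixed $\beta=2\pi$ and fitting the result as in Eq.~\eqref{ksexpect} then yields $\alpha_K^{\textrm{A}/\textrm{S}}$.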
For large $\beta \mathcal{J}$, where $\mathcal{J}=\sqrt{q}\,J/2^{\frac{q-1}{2}}$, we expect the highest eigenvalues of the kernels $K^{\textrm{A}}_{n}(\theta,\theta')$ and $K^{\textrm{S}}_{n}(\theta,\theta')$ to take the form
\begin{equation}
k^{\textrm{A}}(2,n) = 1 -\frac{\alpha_{K}^{\textrm{A}}}{\beta \mathcal{J}}|n|+O\Bigl(\frac{1}{(\beta \mathcal{J})^{2}}\Bigr)\,, \quad k^{\textrm{S}}(1,n) = 1 -\frac{\alpha_{K}^{\textrm{S}}}{\beta \mathcal{J}}|n|+O\Bigl(\frac{1}{(\beta \mathcal{J})^{2}}\Bigr)\,. \label{ksexpect}
\end{equation}
These eigenvalues correspond to the $h=2$ and $h=1$ modes, respectively.
The Schwarzian coupling $\alpha_{\textrm{S}}$ and the compressibility $K$ are related to $\alpha_{K}^{\textrm{A}}$ and $\alpha_{K}^{\textrm{S}}$ through the formulas\footnote{Note that we have an additional factor of 2 for $\alpha_S$ in comparison with \cite{MS16-remarks}, because in our case $N$ is the number of complex fermions.}
\begin{equation}
\alpha_{{S}} = \frac{\alpha_{K}^{\textrm{A}}}{3\alpha_{0}q^{2}}\frac{1}{\mathcal{J}} , \quad K = \frac{\alpha_{K}^{\textrm{S}}}{\alpha_{0}(q-1)} \frac{1}{\mathcal{J}}\,, \label{KrelalS}
\end{equation}
where $\alpha_{0}=2\pi q \cot({\pi}/{q})/((q-1)(q-2))$. We compute $k^{\textrm{S}}$ numerically for $q=4$ and different values of $\beta \mathcal{J}$ and $n$. The resulting $k^{\textrm{S}}$ for $q=4$ and $n=1$ is shown in Fig.~\ref{kSn1}(a). By fitting the data points with a polynomial in $1/(\beta \mathcal{J})$
we obtain
\begin{figure}[t]
\center
\subfloat[$k^{\textrm{S}}(1,1)$ vs. $\beta\mathcal{J}$]{
\includegraphics[width=0.45\textwidth]{kSn1.pdf} }
\qquad
\subfloat[ $n \alpha_K^{\textrm{S}}(n)$ vs. $n$]{
\includegraphics[width=0.44\textwidth]{alkSn.pdf} }
\caption{\label{kSn1} (a) Plot of the numerical $k^{\textrm{S}}(1,1)$ for $q=4$ and $n=1$. The dashed line is the fit (\ref{ksfit}). (b) Plot of $n\alpha_{K}^{\textrm{S}}(n)$ for $q=4$. One can see that within computational accuracy $\alpha_{K}^{\textrm{S}}(n)$ is almost independent of $n$, confirming the expectation (\ref{ksexpect}). We use $10^{8}$ grid points for the numerical computation of $G(\theta)$
and $10^{5}$ grid points for the kernel discretization in $\theta$ and $\theta'$ directions, so the kernel becomes a $10^{5}\times 10^{5}$ matrix.
}
\end{figure}
\begin{equation}
k^{\textrm{S}}(1,1) = 1-\frac{9.2}{\beta \mathcal{J}}+\frac{130.5}{(\beta \mathcal{J})^{2}}-\frac{2377}{(\beta \mathcal{J})^{3}}\,. \label{ksfit}
\end{equation}
From this fit we find that $\alpha_{K}^{\textrm{S}}=9.2$ and from (\ref{KrelalS}) we obtain for $q=4$
\begin{equation}
K \approx 1.04/J\,. \label{Knumres}
\end{equation}
This agrees very well with the values of $K$ obtained from the Schwinger-Dyson equation (\ref{KSDres}) and from exact diagonalization. We also plot $n \alpha_{K}^{\textrm{S}}(n)$ in Fig.~\ref{kSn1}(b), where $\alpha_{K}^{\textrm{S}}(n)$ is obtained by fitting $k^{\textrm{S}}$ for different $n$, with $\alpha_{K}^{\textrm{S}}(1)=9.2$. One can see that within computational accuracy $\alpha_{K}^{\textrm{S}}$ does not depend on $n$, in agreement with the expectation (\ref{ksexpect}).
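The conversion from the fit to $K$ is compactly expressed in code (with hypothetical arrays \texttt{betaJcal} of $\beta\mathcal{J}$ values and \texttt{kS} of the measured $k^{\textrm{S}}(1,1)$):
\begin{verbatim}
import numpy as np

q, J = 4, 1.0
x = 1.0 / betaJcal                          # expansion variable 1/(beta Jcal)
alpha_KS = -np.polyfit(x, kS, 3)[-2]        # minus the linear coefficient
Jcal = np.sqrt(q) * J / 2 ** ((q - 1) / 2)
alpha0 = 2 * np.pi * q / np.tan(np.pi / q) / ((q - 1) * (q - 2))
K = alpha_KS / (alpha0 * (q - 1) * Jcal)    # Eq. (KrelalS): ~1.04/J here
\end{verbatim}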
\begin{figure}[t]
\center
\subfloat[Eigenfunctions $\phi_{2,n}^{\textrm{A}}$]{
\includegraphics[width=0.45\textwidth]{phi2n234.pdf}}
\qquad
\subfloat[Eigenfunctions $\phi_{1,n}^{\textrm{S}}$]{
\includegraphics[width=0.45\textwidth]{phi1n123.pdf}}
\caption{\label{phi21nAS} (a) Numerical eigenfunctions $\phi_{2,n}^{\textrm{A}}$ of the antisymmetric kernel; note the perfect agreement between the numerics and the analytic solution. (b) Numerical eigenfunctions $\phi_{1,n}^{\textrm{S}}$ of the symmetric kernel. In this case one can see a UV divergence near $\theta=0$, where the numerics disagree with conformal perturbation theory.
}
\end{figure}
Following the discussion in Ref.~\cite{MS16-remarks}, one might expect that the numerical result for $\alpha_K^{\textrm{S}}$ can be related to the deviation of the exact Green function from the conformal one, similarly to the case of $\alpha_K^{\textrm{A}}$ (cf. Eq.~(3.88) of Ref.~\cite{MS16-remarks}). We present a calculation following this procedure in Appendix \ref{app:GrishaK}. The result does not agree with the numerical value of $K$, but it does agree with $K_{\text{linear}}$ (see \eqref{Kgamma relation})
\begin{equation}
K_{\textrm{linear}} = -\left.\frac{2^{\frac{q+1}{2}}\alpha_{G}}{J\sqrt{q} \alpha_{0}}{k_{c}^{\textrm{A}}}'(2)\right|_{q=4} \approx 0.48/J\,.
\end{equation}
On the other hand, the numerical result for $\alpha_{S}$ from the anti-symmetric kernel agrees perfectly with the theoretical computation \cite{MS16-remarks}. The reason for the disagreement for $K$ is presumably that $K$ is a UV sensitive quantity, so the naive perturbation theory for the symmetric kernel as a series in $1/\beta \mathcal{J}$ does not work well: the integrals obtained from higher corrections to the Green function have uncompensated power-law divergences, which then contribute to the first-order $1/\beta \mathcal{J}$ term and change the final result.
One sign of such a breakdown of perturbation theory is visible in our numerical results for the eigenvectors of the symmetric kernel. They agree with the conformal kernel eigenfunctions everywhere except in the UV region, whereas for the antisymmetric eigenfunctions the agreement is perfect everywhere; see Fig.~\ref{phi21nAS}.
The conformal kernel eigenfunctions, which are simultaneously eigenfunctions of the Casimir with eigenvalues $h=2$ (anti-symmetric) and $h=1$ (symmetric), read
\begin{equation}
\begin{aligned}
&\phi_{2,n}^{\textrm{A}}(\theta) = \frac{\gamma_{n}}{2\sin\frac{\theta}{2}}\Big(\frac{\sin \frac{n \theta}{2}}{\tan \frac{\theta}{2}}- n \cos \frac{n \theta}{2}\Big)\,,\quad
\phi_{1,n}^{\textrm{S}}(\theta) = \frac{1}{2\pi|n|^{1/2}} \frac{\sin \frac{n \theta}{2}}{\sin \frac{\theta}{2}}\,, \label{confeigfun} \\
& \hspace{80pt} \text{where} \quad \gamma_{n}^{2} =\frac{3}{\pi^{2}|n|(n^2-1)} \,.
\end{aligned}
\end{equation}
The divergence of the eigenfunctions of the symmetric kernel in the UV region is captured in the large $q$ limit (see Appendix \ref{largeqsymkern}).
\section{Bulk picture and zero-temperature entropy}
\label{sec:bulk}
In this section, we find the zero-temperature entropy $\calS$ of the complex SYK model by considering a massive Dirac fermion in AdS$_2$. The actual calculation is done in the Euclidean case, that is, on the hyperbolic plane. The asymmetry of the Green function \eqref{Gconf} may be interpreted as a phase factor with an imaginary phase, $2\pi i\calE$, suggestive of an imaginary $\operatorname{U}(1)$ field acting on the Dirac fermion. (It corresponds to a real electric field in AdS$_2$.) The partition function in the presence of such a field yields the dependence of $\calS$ on $\calE$, and hence, on $\mathcal{Q}$ via Eq.~(\ref{dQdE}). We will find that the $\calS$ so obtained is exactly equal to that obtained from direct computations for the complex SYK model~\cite{GPS01,SS15,Davison17}.
Our computation of $\calS$ should be contrasted with that for higher-dimensional charged black holes
\cite{Myers99,Faulkner09,SS15,Gaikwad:2018dfc,Nayak:2018qej,Moitra:2018jqs,Chaturvedi:2018uov,Sachdev19,Moitra:2019bub,Anninos:2019oka}, summarized in Appendix~\ref{app:em}. In the latter case, the value of $\calS$ in Eq.~(\ref{S0bh}) is determined by the horizon area and has no direct connection to the parameters of the SYK model. The present section interprets $\calS$ as the contribution of fermionic fields; such matter fields~\cite{Faulkner09} only make a subdominant contribution to thermodynamics in the conventional higher dimensional AdS/CFT correspondence.
\subsection{General idea}
For illustrative purposes, we will use the Majorana SYK model,
\begin{equation}\label{H_SYK}
\hat{H}_{\text{Majorana}}
= \frac{i^{q/2}}{q!}\sum_{j_1,\dots,j_q}J_{j_1\cdots j_q}\,
\hat{\chi}_{j_1}\dots\hat{\chi}_{j_q}\,,
\qquad
\overline{J_{j_1\cdots j_q}^2}=\frac{(q-1)!J^2}{N^{q-1}}\,.
\end{equation}
Among many methods of calculating its zero-temperature entropy $\calS=\calS_{\text{Majorana}}$, the formula
\begin{equation}
\calS_{\text{Majorana}}
= \int_0^{\frac{1}{2}-\Delta} \frac{\pi x }{\tan (\pi x)} dx
\label{SMaj}
\end{equation}
can be derived by evaluating $\frac{1}{2}\ln\det(-\tilde{\Sigma})$ with proper regularization~\cite{Kit.KITP.1, Kit.KITP.2} (see also appendix~\ref{subsec: ln det}). Indeed, $\calS$ is defined as the zeroth-order term in the $1/\beta$ expansion $\frac{\ln Z}{N} = -\frac{E_0}{N}\beta +\calS +O(\beta^{-1})$, where $\ln Z$ may be approximated by minus the $(G,\tilde{\Sigma})$ action at the saddle point. As explained in appendix~\ref{subsec: ln det}, the double-integral part of the action has $\beta$ and $O(\beta^{-1})$ terms but no constant term.
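As a quick numerical check of Eq.~\eqref{SMaj} at $q=4$ (i.e.\ $\Delta=1/4$):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Delta = 0.25                                 # q = 4
S, _ = quad(lambda x: np.pi * x / np.tan(np.pi * x), 0.0, 0.5 - Delta)
print(S)  # ~0.2324, the familiar q = 4 Majorana zero-temperature entropy
\end{verbatim}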
For the complex SYK model, $Z$ should be understood as the grand partition function, and $\calS$ should be replaced by its Legendre transform, $\calG(\calE)= \calS(\mathcal{Q})-2\pi\calE\mathcal{Q}$. We will derive a formula similar to \eqref{SMaj} by considering $\ln Z$ in the $\beta\to\infty$ limit and extracting the constant term:\footnote{One may have noticed that the integrand in \eqref{gz_int} has a form similar to the Plancherel measure for the universal cover of $SL(2,R)$. This analogy will be elucidated in Section~\ref{sec:Plancherel}.}
\begin{equation}
\calG(\calE) = \int_{0}^{\frac{1}{2}-\Delta} \frac{2 \pi x \sin (2\pi x) }{\cosh (2\pi \calE) - \cos(2\pi x) } dx\,.
\label{gz_int}
\end{equation}
For $\tilde{\Sigma}$ asymmetric in time and frequency, the direct calculation of $\det(-\tilde{\Sigma})$ is fraught with regularization difficulties. This is where the bulk picture offers a crucial advantage, replacing the tricky UV regularization with a simple subtraction of a boundary contribution.
In an abstract sense, the bulk is an artificial system that mimics the most important properties of the real one. It may also be regarded as a heat bath for a small subset of sites~\cite{KS17-soft}. The following argument seems to apply to all large $N$ systems, but we will focus on the Majorana SYK model for simplicity. Consider adding an extra site to the $N$ existing ones and modifying the couplings $J_{j_1,\dots,j_q}$ accordingly, multiplying them by $\bigl(\frac{N}{N+1}\bigr)^{\frac{q-1}{2}} \approx 1-\frac{q-1}{2N}$. In the thermodynamic limit, the logarithm of the partition function is proportional to $N$, and its change under the stated procedure is just $\frac{\ln Z}{N}$. Calling the original $N$ sites a ``bath'', we get:
\begin{equation}
\frac{\ln Z}{N}=\ln Z_{\text{full}}-\ln Z_{\text{bath}}
-\frac{q-1}{2}\,\frac{\partial\ln Z}{N\partial\ln J}\,,
\end{equation}
where ``full'' refers to the bath and the extra site together, but with the couplings unchanged. In the $\beta\to\infty$ limit, $\frac{\partial\ln Z}{N\partial\ln J}=-\frac{E_0}{N}\beta+O(\beta^{-1})$; hence, the last term in the above equation may be neglected.
To calculate $\ln Z_{\text{full}}-\ln Z_{\text{bath}}$, we may write the Hamiltonian as $\hat{H}_{\text{full}} =\hat{H}_{\text{bath}} +i\hat{\chi}\hat{\xi}$, where $\hat{\chi}$ represents the extra site and $\hat{\xi}$ is a certain operator acting on the other sites. When $N$ is large, $\hat{\xi}$ is Gaussian, meaning that the bath is completely characterized by the correlation function $\langle{\rm T} \hat{\xi}(\tau_1)\hat{\xi}(\tau_2)\rangle=-\Sigma(\tau_1,\tau_2)$ while higher correlators are obtained by Wick's theorem. This suggests the replacement of the real system by a collection of Grassmann variables $\Psi_j$ with a quadratic action $I=-\frac{1}{2}\sum_{j,k}B_{jk}\Psi_j\Psi_k$, where the indices take values on the time circle (for the extra site) and some abstract locations (for the bath). The full matrix $B$ has this structure:
\begin{equation}
B_{\text{full}}
=\begin{pmatrix} -\sigma & Y\\ -Y^T & B_{\text{bath}} \end{pmatrix}\,,\qquad
\sigma=\partial_\tau\,,\quad YB_{\text{bath}}^{-1}Y^{T}=-\Sigma\,,
\end{equation}
with $\sigma$ and $B_{\text{bath}}$ being square and skew-symmetric, and $Y$ rectangular. Using this artificial model, we get
\begin{equation}
\ln Z_{\text{full}}-\ln Z_{\text{bath}}
=\frac{1}{2}\ln\frac{\det B_{\text{full}}}{\det B_{\text{bath}}}
=\frac{1}{2}\ln\det(-\sigma-\Sigma)\,,
\end{equation}
where we have used the identity $\det\left(\begin{smallmatrix}A & B\\C & D \end{smallmatrix}\right)= \det D\cdot \det(A-BD^{-1}C)$.
While the previous description leaves many possibilities for choosing $B_{\text{bath}}$, the nicest one is a Majorana fermion with mass $M=\frac{1}{2}-\Delta$ on the hyperbolic plane. All its properties follow from those of the Dirac fermion, studied in the next subsection and appendix~\ref{sec:app_Dirac}. In this preliminary discussion, we use the Poincare half-plane model with the metric $ds^2=({d\tau^2+dy^2})/{y^2}$\, ($y>0$). A Majorana spinor $\psi$ has two components, $\psi_{\downarrow}$ and $\psi_{\uparrow}$. Solutions of the equation of motion have this asymptotic form:
\begin{equation}
\psi(\tau,y)=
\psi_+(\tau)\, y^{\Delta_+}\!\begin{pmatrix}
1\\ 1
\end{pmatrix}
+\psi_-(\tau)\, y^{\Delta_-}\!\begin{pmatrix}
1\\ -1
\end{pmatrix}\,\text{ for }y\to 0\,,\quad\:
\Delta_+=1-\Delta\,,\quad \Delta_-=\Delta\,.
\label{near-b_Poincare}
\end{equation}
The boundary condition $\psi_-(\tau)=0$ is chosen, which prescribes a sufficiently fast decay near the boundary. We will refer to it as the ``Dirichlet b.c.'' and to the condition $\psi_+(\tau)=0$ as the ``Neumann b.c.''.
Assuming that only the first term in \eqref{near-b_Poincare} is present, we can promote the asymptotic coefficient $\psi_+(\tau)$ to a field and identify it with the field $\xi(\tau)$ characterizing the bath. This is reasonable because the correlator
\begin{equation}
\langle\psi_{\pm}(\tau_1)\psi_{\pm}(\tau_2)\rangle
\sim \operatorname{sgn}(\tau_1-\tau_2)\,|\tau_1-\tau_2|^{-2\Delta_{\pm}}\qquad
(\text{``$+$'' for Dirichlet, }\, \text{``$-$'' for Neumann})
\end{equation}
matches $\langle\xi(\tau_1)\xi(\tau_2)\rangle=-\Sigma(\tau_1,\tau_2)$ if the ``$+$'' sign is chosen. The part of the action involving the boundary fermion $\chi(\tau)$ is
\begin{equation}
I_{\text{boundary}}=\int \biggl(\frac{1}{2}\chi\partial_{\tau}\chi+i\psi_{+}\chi\biggr)\,
d\tau\,.
\end{equation}
Since we are interested in low temperature properties, or large time scales, the $\chi\partial_{\tau}\chi$ term may be neglected. Thus, $\chi$ becomes a Lagrange multiplier field, forcing $\psi_{+}$ to vanish. This indicates a change from the Dirichlet to Neumann boundary condition. The corresponding asymptotic coefficient $\psi_-(\tau)$ may be identified with $\chi(\tau)$, whose correlator is $-G(\tau_1,\tau_2)$.
To summarize, the zero-temperature entropy of the Majorana SYK model is
\begin{equation}
\calS=\bigl[\ln Z_{\text{full}}-\ln Z_{\text{bath}}\bigr]_{\text{reg}}\,,
\end{equation}
where $[\cdots]_{\text{reg}}$ denotes the constant term in the $1/\beta$ expansion. The partition functions $Z_{\text{bath}}$ and $Z_{\text{full}}$ correspond to a Majorana fermion on the hyperbolic plane with the Dirichlet and Neumann boundary conditions, respectively. For the complex SYK model, one should consider $\calG(\calE)$ instead of $\calS$ and use a Dirac fermion. The calculation will follow. We note that this procedure is similar to that used to compute the influence of double trace operators on the free energy in the AdS/CFT correspondence~\cite{gubser2003double,Gubser:2002vv,Diaz:2007an,Giombi:2013yva,Giombi:2014xxa,Giombi:2016pvg}.
\subsection{Dirac fermion on the hyperbolic plane}
Now we describe a realization of the auxiliary ``bath'' system for the complex SYK model. The abstract action $I_{\text{bath}}=-\Psi^{\dag}B_{\text{bath}}\Psi$ is chosen in the form
\begin{equation}
I_{\text{Dirac}}
=\int i \bar{\psi}\left(\gamma^c\nabla_c + M\right)\psi\,\sqrt{g}\,d^2x\,,\qquad
\psi = \begin{pmatrix}
\psi_{\downarrow} \\ \psi_{\uparrow}
\end{pmatrix}\,,\quad
\bar{\psi} = \begin{pmatrix}
-\psi_{\uparrow}^* & \psi_{\downarrow}^*
\end{pmatrix}\,,
\end{equation}
where
\begin{equation}
\nabla_{\alpha}\psi
=\left(\partial_{\alpha} + \frac{1}{2} \omega_{\alpha bc} \Sigma^{bc}
-iA_\alpha\right) \psi\,.
\end{equation}
Specific to two dimensions, the spin connection factors into a scalar and a constant matrix:
\begin{equation}
\begin{pmatrix}
\omega_{\alpha 11} & \omega_{\alpha 12} \\
\omega_{\alpha 21} & \omega_{\alpha 22}
\end{pmatrix} = \omega_{\alpha} \begin{pmatrix}
0 & -1 \\
1 & 0
\end{pmatrix} \,,\qquad
\partial_\alpha\omega_\beta-\partial_\beta\omega_\alpha
=-\frac{R}{2}\epsilon_{\alpha\beta}\,.
\end{equation}
(Further details, such as the expressions for the Dirac matrices $\gamma^1,\gamma^2$ and the spin matrices $\Sigma^{ab}= \frac{1}{4} \left[ \gamma^a, \gamma^b\right]$, can be found in appendix~\ref{sec:app_Dirac}.) The Majorana case differs in that $\psi_{\downarrow}$, $\psi_{\uparrow}$ are real, the $\operatorname{U}(1)$ gauge field $A$ is absent, and the action has an overall factor $\frac{1}{2}$.
We use the Poincare disk model of the hyperbolic plane ${\rm H^2}$:
\begin{equation}
ds^2=4 \frac{dr^2+r^2 d\varphi^2}{(1-r^2)^2}\,.
\end{equation}
The $\operatorname{U}(1)$ gauge field $A$ is imaginary (but becomes real upon the analytic continuation from the hyperbolic plane to the global anti-de Sitter space sharing a diameter of the Poincare disk). More specifically,
\begin{equation}
A_\alpha=-i\calE\omega_\alpha,\quad
\partial_{\alpha}A_{\beta}-\partial_{\beta}A_{\alpha}
=-i\calE\epsilon_{\alpha\beta}\,.
\end{equation}
Thus, the model is characterized by the Dirac mass $M$ and the field strength $\calE$. We also need to specify a boundary condition. To this end, we note that a general solution of the Dirac equation $(\gamma^c\nabla_c+M)\psi=0$ has this asymptotic form near the boundary:
\begin{equation}
\psi(r,\varphi) \approx\psi_{+}(\varphi)\,\eta_{+}(r,\varphi)
+\psi_{-}(\varphi)\,\eta_{-}(r,\varphi)\quad \text{for } r\to 1\,,
\end{equation}
where
\begin{equation}
\eta_{\pm} (r,\varphi)= \bigl(1-r^2\bigr)^{\Delta_{\pm}}
\begin{pmatrix}
e^{i \frac{(\varphi\pm\gamma)}{2}} \\
\pm e^{-i \frac{(\varphi\pm\gamma)}{2}}
\end{pmatrix} \,,\qquad
\Delta_{\pm}=\frac{1}{2}\pm\sqrt{M^2-\calE^2}\,,\quad
\gamma=\arcsin\frac{\calE}{M}\,.
\label{eqn: etaD}
\end{equation}
\begin{figure}[t]
\centerline{\begin{tikzpicture}[scale=0.8, baseline={([yshift=-4pt]current bounding box.center)}]
\draw[thick] (0pt,0pt) circle (40pt);
\draw[thick, ->,>=stealth] (0pt,0pt) -- (15pt,0pt);
\node[right] at (13pt,0pt) {$\Vec{e}_1$};
\draw[thick, ->,>=stealth] (0pt,0pt) -- (0pt,15pt);
\node[above] at (0pt,13pt) {$\Vec{e}_2$};
\draw[->,>=stealth] (15pt,15pt) -- ++(8pt,0pt);
\draw[->,>=stealth] (15pt,15pt) -- ++(0pt,8pt);
\draw[->,>=stealth] (-15pt,-15pt) -- ++(8pt,0pt);
\draw[->,>=stealth] (-15pt,-15pt) -- ++(0pt,8pt);
\draw[->,>=stealth] (15pt,-15pt) -- ++(8pt,0pt);
\draw[->,>=stealth] (15pt,-15pt) -- ++(0pt,8pt);
\draw[->,>=stealth] (-15pt,15pt) -- ++(8pt,0pt);
\draw[->,>=stealth] (-15pt,15pt) -- ++(0pt,8pt);
\draw[->,>=stealth, blue,thick] (28.28pt, 28.28pt) -- ++ (-20pt,20pt) node[right]{$\partial_\varphi$};
\end{tikzpicture}}
\caption{Local frame $(\Vec{e}_1,\Vec{e}_2)$ relative to which the Dirac spinor is defined.}\label{fig:frame}
\end{figure}
The dependence on the polar angle $\varphi$ in Eq.~\eqref{eqn: etaD} is a consequence of gauge choice: we use the local frame (vielbein) shown in Fig.~\ref{fig:frame}, whose orientation relative to the tangent vector $\partial_\varphi$ depends on $\varphi$. For the bath model, we postulate the Dirichlet boundary condition, $\psi_{-}(\varphi)=0$. But when the bulk fermion is coupled to a boundary fermion, the correct condition is Neumann, $\psi_{+}(\varphi)=0$.
The Euclidean propagator for each boundary condition,
\begin{equation}
C_{\pm}=-i(\gamma^c\nabla_c+M)^{-1}_{\pm},\qquad
C_{\pm}(x_1,x_0)=\bigl\langle\psi(x_1)\bar{\psi}(x_0)\bigr\rangle_{\pm}
\end{equation}
with the matrix structure
\begin{equation}
C_{\pm}=\begin{pmatrix}
-C^{\downarrow \uparrow}_{\pm} & C^{\downarrow \downarrow}_{\pm}
\vspace{2pt}\\
-C^{\uparrow\uparrow}_{\pm} & C^{\uparrow \downarrow}_{\pm}
\end{pmatrix}\,, \quad C^{jk}_{\pm} = \langle \psi_j \psi_k^* \rangle_{\pm}\,,
\end{equation}
is calculated in appendix~\ref{sec:app_Dirac}, see Eq.~\eqref{eqn: propagator Z1Z0}. In particular, when both $x_1=(r_1,\varphi_1)$ and $x_0=(r_0,\varphi_0)$ approach the boundary, the propagator becomes
\begin{equation}
C_{\pm}(r_1,\varphi_1;r_0,\varphi_0)
\approx\bigl\langle\psi_{\pm}(\varphi_1)
\bar{\psi}_{\pm}(\varphi_0)\bigr\rangle\,
\eta(r_1,\varphi_1)\bar{\eta}(r_0,\varphi_0)\quad \text{for } r_1,r_0\to 1\,,
\end{equation}
where for $0<\varphi_1-\varphi_0<2\pi$, we have
\begin{equation}
\bigl\langle\psi_{\pm}(\varphi_1)\psi_{\pm}(\varphi_0)\bigr\rangle
=\frac{\Gamma\bigl(\Delta_{\pm}+\frac{1}{2}+i\calE\bigr)\,
\Gamma\bigl(\Delta_{\pm}+\frac{1}{2}-i\calE\bigr)}{4\pi\,\Gamma(2\Delta_\pm)}\,
e^{\calE (\pi-\varphi_1+\varphi_0) }
\biggl( 2\sin \frac{\varphi_1-\varphi_0}{2} \biggr)^{-2\Delta_{\pm}}.
\end{equation}
Thus, $\langle\psi_{+}(\varphi_1)\psi_{+}(\varphi_0)\rangle \sim -\tilde{\Sigma}(\varphi_1,\varphi_0)$ and $\langle\psi_{-}(\varphi_1)\psi_{-}(\varphi_0)\rangle \sim -G(\varphi_1,\varphi_0)$ (up to some constant factors), where $\tilde{\Sigma}$ and $G$ are defined for the complex SYK model with
\begin{equation}
\Delta=\frac{1}{2}-\sqrt{M^2-\calE^2}\,.
\label{eqn:DM}
\end{equation}
\subsection{Subtraction of infinities and the ``spooky propagator''}
We are now in a position to evaluate the thermodynamic quantity
\begin{equation}
\calG(\Delta,\calE)=\bigl[\ln Z_{\text{full}}-\ln Z_{\text{bath}}\bigr]_{\text{reg}}
=\bigl[\ln\det(\gamma^c\nabla_c+M)_{-}
-\ln\det(\gamma^c\nabla_c+M)_{+}\bigr]_{\text{reg}}\,.
\end{equation}
Each of the two terms in the square brackets suffers from a UV divergence and from a divergence due to the infinite volume. The former is canceled by the subtraction of the two terms, and the latter by the regularization $[\cdots]_{\text{reg}}$, which amounts to the subtraction of a boundary contribution. The two terms exactly cancel each other if $M=|\calE|$. For $M>|\calE|$, it is convenient to take the derivative with respect to $M$, using the relation \eqref{eqn:DM} between $M$ and $\Delta$:
\begin{equation}
\frac{M}{\Delta-1/2}\,\frac{\partial\calG(\Delta,\calE)}{\partial\Delta}
=\bigl[\operatorname{Tr}(\gamma^c\nabla_c+M)_{-}^{-1}
-\operatorname{Tr}(\gamma^c\nabla_c+M)_{+}^{-1}\bigr]_{\text{reg}}
=i\bigl[\operatorname{Tr}(C_{-}-C_{+})\bigr]_{\text{reg}}\,.
\label{eqn:dertr1}
\end{equation}
In the last expression, $-C_{+}$ may be regarded as a propagator of a ghost particle. For this reason, we call the difference $C_{\text{sp}}=C_{-}-C_{+}$ the ``spooky propagator''. The function $C_{\text{sp}}(x_1,x_0)$ has no singularity at $x_1=x_0$ and may be interpreted as the bulk fermion propagating from point $x_0$ to the boundary, where it mixes with the boundary fermion, and then moving to point $x_1$.\,\footnote{More exactly, $C_{\text{sp}} \sim (\text{boundary to bulk})_+ \cdot G \cdot (\text{bulk to boundary})_+$. The boundary-to-bulk and bulk-to-boundary propagators are, actually, $\tilde{SL}(2,R)$ intertwiners.} This is an explicit formula:
\begin{equation}
C_{\text{sp}}(r,\varphi;0)=\frac{M\sin(2\pi\Delta)}
{4i\cos(\pi(\Delta-i\calE))\cos(\pi(\Delta+i\calE))}
\begin{pmatrix}
-A_{\Delta,\frac{1}{2}+i\calE,-\frac{1}{2}-i\calE}(r^2) &
e^{i\varphi}A_{\Delta,\frac{1}{2}+i\calE,\frac{1}{2}-i\calE}(r^2) \\
e^{-i\varphi}A_{\Delta,\frac{1}{2}+i\calE,\frac{1}{2}-i\calE}(r^2) &
-A_{\Delta,-\frac{1}{2}+i\calE,\frac{1}{2}-i\calE}(r^2)
\end{pmatrix}\,,
\end{equation}
where
\begin{equation}
A_{\lambda,l,r}(u)=u^{(l+r)/2}(1-u)^{\lambda}
\textbf{F}(\lambda+l,\lambda+r,1+l+r;u)\,,\quad
\textbf{F}(a,b,c;u)=\frac{{}_2{\rm F}_1(a,b,c;u)}{\Gamma(c)}\,.
\end{equation}
Let us complete the calculation of $\calG(\Delta,\calE)$ using Eq.~\eqref{eqn:dertr1}. We have
\begin{equation}
\operatorname{Tr} C_{\text{sp}}=\int_{\rm H^2} \operatorname{Tr} C_{\text{sp}}(x,x)\,\sqrt{g(x)}\,d^2x
= \text{Area}({\rm H^2})\cdot\operatorname{Tr} C_{\text{sp}}(0,0)\,.
\end{equation}
The area of the hyperbolic plane is obviously infinite, but it can be made finite by regularization. Indeed, consider the disk $D_r$ of radius $r$ centered at the origin. It has the following area and boundary length:
\begin{equation}
\text{Area}(D_r)=4\pi\int_{0}^{r^2}\frac{dx}{(1-x)^2}
=\frac{4\pi r^2}{1-r^2}\,,\qquad
\text{Length}(\partial D_r)=\frac{4\pi r}{1-r^2}\,,
\end{equation}
so that
\begin{equation}
\text{Area}(D_r)-\text{Length}(\partial D_r)=-\frac{4\pi r}{1+r}
\;\xrightarrow{\;r\to 1\;}\;-2\pi\,.
\end{equation}
Hence, $[\operatorname{Tr} C_{\text{sp}}]_{\text{reg}}=-2\pi \operatorname{Tr} C_{\text{sp}}(0,0)$. Plugging this into \eqref{eqn:dertr1}, we get:
\begin{equation}
\frac{\partial\calG(\Delta,\calE)}{\partial\Delta}
=\frac{i\pi(1-2\Delta)}{M}\operatorname{Tr} C_{\text{sp}}(0,0)
=-\frac{\pi(1-2\Delta)\sin(2\pi\Delta)}
{2\cos(\pi(\Delta+i\calE))\cos(\pi(\Delta-i\calE))}\,.
\label{eqn:dgz/dD}
\end{equation}
(This is also equal to $-2\pi^2b$, where $b$ is defined in \eqref{zeroT Green}.) Thus,
\begin{equation}
\calG(\Delta,\calE)=\int_{\Delta}^{1/2}
\frac{\pi(1-2x)\sin(2\pi x)}{\cosh(2\pi\calE)+\cos(2\pi x)}\,dx\,.
\end{equation}
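One can verify numerically that this agrees with Eq.~\eqref{gz_int}: the substitution $x\to\frac{1}{2}-x$ maps one integrand to the other. For instance, at arbitrary test values,
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

Delta, E = 0.2, 0.15                               # arbitrary test values
f1 = lambda x: (2 * np.pi * x * np.sin(2 * np.pi * x)
                / (np.cosh(2 * np.pi * E) - np.cos(2 * np.pi * x)))
f2 = lambda x: (np.pi * (1 - 2 * x) * np.sin(2 * np.pi * x)
                / (np.cosh(2 * np.pi * E) + np.cos(2 * np.pi * x)))
assert np.isclose(quad(f1, 0.0, 0.5 - Delta)[0], quad(f2, Delta, 0.5)[0])
\end{verbatim}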
In conclusion, we rewrite Eq.~\eqref{eqn:dgz/dD} as follows,
\begin{equation}
\frac{\partial\calG(\Delta,\calE)}{\partial\Delta}
=-\pi\biggl(\frac{1}{2}-\Delta\biggr)
\Bigl(\tan\pi(\Delta+i\calE)+\tan\pi(\Delta-i\calE)\Bigr)\,,
\end{equation}
and note that it is consistent with the combination of \eqref{g and Q} and \eqref{charge theta}:
\begin{equation}
\frac{\partial\calG(\Delta,\calE)}{\partial\calE}=-2\pi\mathcal{Q}
=2\theta - i\pi\biggl(\frac{1}{2}-\Delta\biggr)
\Bigl(\tan\pi(\Delta+i\calE)-\tan\pi(\Delta-i\calE)\Bigr)\,.
\end{equation}
Indeed, both equations give the same result for the mixed derivative if we use the fact that $\partial(2\theta)/\partial\Delta =-i\pi \bigl(\tan\pi(\Delta+i\calE)-\tan\pi(\Delta-i\calE)\bigr)$.
\subsection{Relation to the Plancherel factor}
\label{sec:Plancherel}
For readers who are familiar with the Plancherel measure for $\tilde{SL}(2,R)$~\cite{Pukanszky64,Kitaev:2017hnr}, it may be tempting to relate the key ingredient in the entropy formula,
\begin{equation}
\operatorname{Tr} C_{\text{sp}}(0,0) = \frac{i M \sin (2\pi \Delta)}{2 \cos (\pi (\Delta+i\calE)) \cos (\pi (\Delta-i\calE)) }\,,
\label{eqn: 5.4 goal}
\end{equation}
to the Plancherel factor. The latter also appears in the decomposition of the unit operator $\mathbf{1}^\nu$ acting on $\nu$-spinors (for an arbitrary real $\nu$) on the hyperbolic plane~\cite{Kitaev:2017hnr}:
\begin{equation}
\mathbf{1}^\nu = \frac{1}{2\pi} \Biggl(
\int_{0}^{+\infty} ds\, \frac{s \sinh (2\pi s)}{\cosh (2\pi s)+\cos (2\pi \nu)}\, \Pi^{\nu}_{1/2+is} + \sum_{\substack{\lambda = |\nu|-p > 1/2\\
p=0,1,2,\ldots}} \left( \lambda - \frac{1}{2} \right) \Pi_{\lambda}^{\nu}
\Biggr) \,,
\label{eqn: decomposition of 1}
\end{equation}
where $\Pi^{\nu}_{\lambda}$ is the projector onto the eigenspace of the $\tilde{SL}(2,R)$ Casimir operator with the eigenvalue $\lambda(1-\lambda)$. The operators $\Pi^{\nu}_{\lambda}$ are defined by integral kernels that depend on pairs of points $x_1,x_0 \in {\rm H^2}$; the normalization is such that $\Pi^{\nu}_{\lambda}(x,x)=1$.
We will make the connection to the Plancherel factor explicit by deriving \eqref{eqn: 5.4 goal} from \eqref{eqn: decomposition of 1}, bypassing the full calculation of the Dirac propagator. As explained in appendix~\ref{sec:app_Dirac}, the components of a Dirac spinor have different effective spins $\nu$, equal to ${\downarrow}=-\frac{1}{2}-i\calE$ and ${\uparrow}=\frac{1}{2}-i\calE$. The Dirac operator is represented by the matrix
\begin{equation}
\gamma^c\nabla_c+M =
\begin{pmatrix}
M & 2\nabla_- \\
2 \nabla_+ & M
\end{pmatrix}\,,
\label{eqn: block D}
\end{equation}
where $\nabla_+$ and $\nabla_-$ are certain differential operators changing the value of $\nu$ by $1$ and $-1$, respectively. (Here the subscripts ``$\pm$'' have nothing to do with boundary conditions.) The Casimir operator is expressed in terms of $\nabla_\pm$ by Eq.~\eqref{eqn: Casimir}, so both $4\nabla_-\nabla_+$ for $\nu={\downarrow}$ and $4\nabla_+\nabla_-$ for $\nu={\uparrow}$ are equal to $\frac{1}{4}+\calE^2-Q$. Using this and the formula $\Delta=\frac{1}{2}-\sqrt{M^2-\calE^2}$, we obtain the following expression for the propagator:
\begin{equation}
C =
\begin{pmatrix}
-C^{\downarrow\uparrow} & C^{\downarrow\downarrow}
\vspace{2pt}\\
-C^{\uparrow\uparrow} & C^{\uparrow\downarrow}
\end{pmatrix}
=-i(\gamma^c\nabla_c+M)^{-1}
= -i\bigl(Q-\Delta(1-\Delta)\bigr)^{-1}
\begin{pmatrix}
M & -2\nabla_- \\
-2\nabla_+ & M
\end{pmatrix}.
\label{eqn: Cgen}
\end{equation}
Let us first calculate the matrix element involving $\nu=\frac{1}{2}-i\calE$ spinors with Dirichlet boundary condition (indicated by the subscript ``$+$''),
\begin{equation}
C^{\uparrow\downarrow}_{+}
= -iM\bigl(Q-\Delta(1-\Delta)\bigr)^{-1}_{+}\,.
\end{equation}
The general idea is to use the Casimir eigendecomposition \eqref{eqn: decomposition of 1}; the role of the boundary condition will become clear later.
\begin{figure}[t]\centering
\subfloat[Real $\nu$]{
\begin{tikzpicture}[baseline={(current bounding box.center)}]
\draw [->,>=stealth] (-30pt,0pt) -- (100pt,0pt) node[right]{\scriptsize $\Re \lambda$};
\draw [->,>=stealth] (0pt,-60pt) -- (0pt,60pt) node[right]{\scriptsize $\Im \lambda$};
\draw[far arrow, dashed,thick] (40pt,-60pt)--(40pt,60pt) node[right]{\scriptsize $\frac{1}{2}+is$};
\filldraw (40pt,0pt) circle (1pt);
\filldraw (70pt,0pt) circle (1pt) ;
\draw[thick, dashed, near arrow] (70pt,0pt) circle (6pt);
\end{tikzpicture}}
\hspace{30pt}
\subfloat[$\nu=\frac{1}{2}-i\calE$]{
\begin{tikzpicture}[baseline={(current bounding box.center)}]
\draw [->,>=stealth] (-30pt,0pt) -- (100pt,0pt) node[right]{\scriptsize $\Re \lambda$};
\draw [->,>=stealth] (0pt,-60pt) -- (0pt,60pt)
node[right]{\scriptsize $\Im \lambda$};
\draw[far arrow, dashed, thick, dash phase=-1pt]
(40pt,-60pt) -- (40pt,-26pt)
arc (-90:90:6pt)
-- (40pt,60pt)
node[right]{\scriptsize $\frac{1}{2}+is$};
\filldraw (40pt,-20pt) circle (1pt) ;
\end{tikzpicture}}
\caption{Contour $\Gamma$ in Eq.~\eqref{eqn: new one} includes the vertical line $\Re\lambda=\frac{1}{2}$ and also encircles the points $\lambda=\nu+n$ (for integer $n$) between that line and the line $\Re(\lambda-\nu)=\frac{1}{2}$.}
\label{fig: new contour}
\end{figure}
For the task at hand, it is convenient to transform Eq.~\eqref{eqn: decomposition of 1} to a different form, which generalizes to complex values of $\nu$:
\begin{equation}
\mathbf{1}^\nu = \frac{i}{4\pi } \int_{\Gamma} d\lambda\,
\Bigl(\lambda-\frac{1}{2}\Bigr)
\tan\biggl(\pi\Bigl(\lambda-\frac{1}{2}-\nu\Bigr)\biggr)\,
\Pi^{\nu}_{\lambda}\,,
\label{eqn: new one}
\end{equation}
where the contour $\Gamma$ is illustrated in Fig.~\ref{fig: new contour}(a). It is obtained by a deformation of the vertical line $\Re(\lambda-\nu)=\frac{1}{2}$ and consists of the line from $\frac{1}{2}-i\infty$ to $\frac{1}{2}+i\infty$ and circles surrounding the poles of $\tan\bigl(\pi\bigl(\lambda-\frac{1}{2} -\nu\bigr)\bigr)$ in the strip ${\frac{1}{2}< \Re\lambda < \frac{1}{2}+\Re\nu}$ or ${\frac{1}{2}+\Re\nu < \Re\lambda < \frac{1}{2}}$ (depending on the sign of $\Re\nu$). The rewriting is based on this representation of the Plancherel factor,
\begin{equation}
\frac{s \sinh (2\pi s)}{\cosh (2\pi s)+\cos (2\pi \nu)}=-\frac{is}{2}\Bigl( \tan \bigl(\pi (is-\nu)\bigr) -\tan \bigl(\pi (-is-\nu)\bigr) \Bigr)\,,
\end{equation}
and the symmetry $\Pi^\nu_{\lambda}=\Pi^\nu_{1-\lambda}$, which allows one to extend the integral in \eqref{eqn: decomposition of 1} from a half-line to a full line. More explicitly,
\begin{equation}
\int_{0}^{+\infty} ds\, \frac{s \sinh (2\pi s)}{\cosh (2\pi s)+\cos (2\pi \nu)} \Pi^{\nu}_{1/2+is}\,
= \frac{i}{2} \int_{\frac{1}{2}-i\infty}^{\frac{1}{2}+i\infty}
d\lambda\,
\Bigl(\lambda-\frac{1}{2}\Bigr)
\tan\biggl(\pi\Bigl(\lambda-\frac{1}{2}-\nu\Bigr)\biggr)\,
\Pi^{\nu}_{\lambda}\,.
\end{equation}
The discrete series contribution (i.e.\ the second term in \eqref{eqn: decomposition of 1}) can be treated as residues of the same integrand, which leads to the expression \eqref{eqn: new one}. Note that when $\lambda$ and $\nu$ are arbitrary complex numbers, $\Pi^\nu_{\lambda}$ is no longer an orthogonal projector. Formally, it is just a function of $x_1,x_0 \in {\rm H^2}$, and $\mathbf{1}^\nu$ should likewise be interpreted as a (generalized) function, namely, $g(x_0)^{-1/2}\delta(x_1-x_0)$, where $g$ is the determinant of the metric tensor.
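As a sanity check, the representation of the Plancherel factor used above is easy to verify numerically (here for real test values; the continuation to complex $\nu$ is as described above):
\begin{verbatim}
import numpy as np

s, nu = 0.37, 0.21                                 # arbitrary test values
lhs = s * np.sinh(2 * np.pi * s) / (np.cosh(2 * np.pi * s)
                                    + np.cos(2 * np.pi * nu))
rhs = -0.5j * s * (np.tan(np.pi * (1j * s - nu))
                   - np.tan(np.pi * (-1j * s - nu)))
assert np.isclose(lhs, rhs)
\end{verbatim}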
Given the caveat about $\Pi^\nu_{\lambda}$, we will proceed with caution. It is true that
\begin{equation}
Q\Pi^\nu_{\lambda}=\Pi^\nu_{\lambda}Q=\lambda(1-\lambda)\Pi^\nu_{\lambda}\,.
\end{equation}
However, the following corollary holds only for the Dirichlet boundary condition and is qualified by a restriction on $\lambda$:
\begin{equation}
C^{\uparrow\downarrow}_{+}\Pi^{1/2-i\calE}_{\lambda}
= -iM\bigl(\lambda(1-\lambda)-\Delta(1-\Delta)\bigr)^{-1}
\Pi^{1/2-i\calE}_{\lambda}\quad\:
\text{for}\quad \Delta<\Re\lambda<1-\Delta\,.
\label{eqn: PiC}
\end{equation}
Indeed, we should require that the left-hand side of the above equation be well-defined, meaning the absolute convergence of the corresponding integral:
\begin{equation}
\bigl(C^{\uparrow\downarrow}_{+}\Pi^{1/2-i\calE}_{\lambda}\bigr)(x_1,x_0)
=\int C^{\uparrow\downarrow}_{+}(x_1,x)
\Pi^{1/2-i\calE}_{\lambda}(x,x_0)\,\sqrt{g(x)}\,d^2x\,.
\end{equation}
To check this condition, let us use polar coordinates, $x=(r,\varphi)$. As $r$ tends to $1$, the propagator $C^{\uparrow\downarrow}_{+}(x_1,x)$ scales as $(1-r)^{1-\Delta}$, whereas $\Pi^\nu_{\lambda}(x,x_0)$ has terms proportional to $(1-r)^{\lambda}$ and $(1-r)^{1-\lambda}$. Since $\sqrt{g(x)}\sim(1-r)^{-2}$, the convergence condition is exactly as indicated in Eq.~\eqref{eqn: PiC}.
\begin{figure}[t]
\center
\subfloat[Contour $\Gamma$ for $C_+$]{
\begin{tikzpicture}[baseline={(current bounding box.center)}]
\draw [->,>=stealth] (-10pt,0pt) -- (90pt,0pt) node[right]{\scriptsize $\Re \lambda$};
\draw [->,>=stealth] (0pt,-60pt) -- (00pt,60pt) node[right]{\scriptsize $\Im \lambda$};
\draw[far arrow, dashed, thick]
(40pt,-60pt) -- (40pt,-11pt) arc (-90:90:5pt) -- (40pt,60pt)
node[right]{\scriptsize $\frac{1}{2}+is$};
\filldraw (40pt,-6pt) circle (1pt) ;
\filldraw (60pt,0pt) circle (1pt);
\node[below] at (63pt,-3pt) {\scriptsize $\Delta_+$};
\filldraw (20pt,0pt) circle (1pt);
\node[below] at (23pt,-3pt) {\scriptsize $\Delta_-$};
\end{tikzpicture}}
\hspace{15pt}
\subfloat[Contour $\tilde{\Gamma}$ for $C_-$]{
\begin{tikzpicture}[baseline={(current bounding box.center)}]
\draw [->,>=stealth] (-10pt,0pt) -- (90pt,0pt) node[right]{\scriptsize $\Re \lambda$};
\draw [->,>=stealth] (0pt,-60pt) -- (00pt,60pt) node[right]{\scriptsize $\Im \lambda$};
\draw[dashed,thick,blue, mid arrow] (40pt,-60pt)--(40pt,-25pt);
\draw[dashed,thick,blue]
(40pt,-15pt) -- (40pt,-11pt) arc (-90:90:5pt) -- (40pt,15pt);
\filldraw (40pt,-6pt) circle (1pt);
\draw[dashed,thick,blue, mid arrow] (40pt,25pt)--(40pt,60pt);
\node[right] at (40pt,60pt) {\scriptsize $\frac{1}{2}+is$};
\filldraw (60pt,0pt) circle (1pt);
\node[below] at (75pt,-3pt) {\scriptsize $\Delta_+$};
\filldraw (20pt,0pt) circle (1pt);
\node[below] at (10pt,-3pt) {\scriptsize $\Delta_-$};
\draw[thick,densely dashed,blue] (15pt,0pt) arc (-180:0:5pt);
\draw[thick,densely dashed,blue] (25pt,0pt) arc (180:90:15pt);
\draw[thick,densely dashed,blue] (15pt,0pt) arc (180:90:25pt);
\draw[thick,densely dashed,blue] (65pt,0pt) arc (0:180:5pt);
\draw[thick,densely dashed,blue] (55pt,0pt) arc (0:-90:15pt);
\draw[thick,densely dashed,blue] (65pt,0pt) arc (0:-90:25pt);
\draw[red,thick, <-,>=stealth] (21.21pt,6.84pt) arc (160:35:20pt);
\draw[red,thick, <-,>=stealth] (58.79pt,-6.84pt) arc (-20:-145:20pt);
\end{tikzpicture}}
\hspace{15pt}
\subfloat[Contour $\Gamma_{\text{sp}}\sim\tilde{\Gamma}-\Gamma$ for $C_{\text{sp}}$]{
\begin{tikzpicture}[baseline={(current bounding box.center)}]
\draw [->,>=stealth] (-10pt,0pt) -- (90pt,0pt) node[right]{\scriptsize $\Re \lambda$};
\draw [->,>=stealth] (0pt,-60pt) -- (00pt,60pt) node[right]{\scriptsize $\Im \lambda$};
\filldraw (60pt,0pt) circle (1pt);
\node[below] at (63pt,-8pt) {\scriptsize $\Delta_+$};
\filldraw (20pt,0pt) circle (1pt);
\node[below] at (23pt,-8pt) {\scriptsize $\Delta_-$};
\draw[thick,densely dashed,blue, near arrow, xscale=-1] (-20pt,0pt) circle (8pt);
\draw[thick,densely dashed,blue, near arrow] (60pt,0pt) circle (8pt);
\filldraw (40pt,-6pt) circle (1pt);
\end{tikzpicture}\hspace{8pt}}
\caption{Switching the boundary condition from Dirichlet to Neumann amounts to exchanging the poles $\Delta_-$ and $\Delta_+$. The procedure should be accompanied by a deformation of the integration contour as shown in (b). The difference contour $\tilde{\Gamma}-\Gamma$ is homologous (in the complement of singularities) to the one shown in (c).}
\label{fig: spooky}
\end{figure}
We now apply the decomposition of identity \eqref{eqn: new one} together with Eq.~\eqref{eqn: PiC}:
\begin{equation}
C^{\uparrow\downarrow}_{+}
= C^{\uparrow\downarrow}_{+} \cdot \mathbf{1}^{1/2-i\calE}
=\frac{M}{4\pi } \int_{\Gamma} d\lambda\,
\frac{\bigl(\lambda-\frac{1}{2}\bigr)
\tan\bigl(\pi(\lambda+i\calE)\bigr)}
{(\lambda-\Delta)(1-\lambda-\Delta)}\,
\Pi^{1/2-i\calE}_{\lambda} \,.
\label{eqn: C_+ integral}
\end{equation}
Note that the contour $\Gamma$ passes between the poles of the integrand at $\Delta_-=\Delta$ and $\Delta_+=1-\Delta$. The propagator $C^{\uparrow\downarrow}_{-}$ with Neumann boundary condition cannot be obtained in the same way, but we can use analytic continuation in $M$. Suppose that $M>|\calE|$ initially. As $M$ changes to $-M$ avoiding the branch cut between $\calE$ and $-\calE$, the numbers $\Delta_+$ and $\Delta_-$ are swapped, and the propagator $C^{\uparrow\downarrow}_{+}$ turns into $-C^{\uparrow\downarrow}_{-}$ for the original value of $M$. On the right-hand side of \eqref{eqn: C_+ integral}, the analytic continuation should involve a deformation of the integration contour such that it avoids the moving poles, see Fig.~\ref{fig: spooky}. Thus,
\begin{equation}
C^{\uparrow\downarrow}_{-}
=\frac{M}{4\pi } \int_{\tilde{\Gamma}} d\lambda\,
\frac{\bigl(\lambda-\frac{1}{2}\bigr)
\tan\bigl(\pi(\lambda+i\calE)\bigr)}
{(\lambda-\Delta)(1-\lambda-\Delta)}\,
\Pi^{1/2-i\calE}_{\lambda} \,.
\label{eqn: C_- integral}
\end{equation}
The ``spooky propagator'' $C^{\uparrow\downarrow}_{\text{sp}} = C^{\uparrow\downarrow}_{-} - C^{\uparrow\downarrow}_{+}$ is given by the integral of the same function along the difference contour $\Gamma_{\text{sp}}\sim\tilde{\Gamma}-\Gamma$, which consists of circles wrapping the points $\lambda=\Delta_-$ (clockwise) and $\lambda=\Delta_+$ (counterclockwise) as shown in Fig.~\ref{fig: spooky}(c). Hence, the spooky propagator is determined by the residues of the integrand at $\Delta_-$ and $\Delta_+$:
\begin{equation}
C^{\uparrow\downarrow}_{\text{sp}}
= \frac{i M}{4} \Bigl(\tan \bigl( \pi (\Delta+i\calE) \bigr) + \tan \bigl( \pi (\Delta-i\calE)\bigr) \Bigr) \Pi^{1/2-i\calE}_{\Delta} \,.
\end{equation}
The calculation of the other diagonal element of the propagator matrix, $-C^{\downarrow\uparrow}$ (in all three variants) is completely analogous; we just need to use $\nu=-\frac{1}{2}-i\calE$. Restricting to coincident points and using the normalization condition $\Pi^{\nu}_{\lambda}(x,x)=1$, we obtain the final result:
\begin{equation}
\operatorname{Tr} C_{\text{sp}}(0,0) = \frac{i M}{2} \Bigl(\tan \bigl( \pi \bigl( \Delta+i\calE \bigr) \bigr) + \tan \bigl( \pi \bigl( \Delta-i\calE \bigr) \bigr) \Bigr) \,,
\end{equation}
which is equivalent to Eq.~\eqref{eqn: 5.4 goal}.
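The equivalence is the identity $\tan A+\tan B=\sin(A+B)/(\cos A\cos B)$ with $A=\pi(\Delta+i\calE)$ and $B=\pi(\Delta-i\calE)$; a minimal numerical check with test values satisfying $M>|\calE|$:
\begin{verbatim}
import numpy as np

M, E = 0.5, 0.3                              # test values with M > |E|
D = 0.5 - np.sqrt(M ** 2 - E ** 2)           # Delta from Eq. (eqn:DM)
cc = np.cos(np.pi * (D + 1j * E)) * np.cos(np.pi * (D - 1j * E))
lhs = 1j * M * np.sin(2 * np.pi * D) / (2 * cc)
rhs = 0.5j * M * (np.tan(np.pi * (D + 1j * E)) + np.tan(np.pi * (D - 1j * E)))
assert np.isclose(lhs, rhs)
\end{verbatim}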
\section{Discussion}
\label{sec:conc}
One of the main new physical consequences of our computations on the complex SYK model is the many-body density of states in Eq.~(\ref{DQE}). For each total charge $Q$, the energy dependence of the density of states is the same as in the Schwarzian theory, with a ground state energy $E_0 (Q)$, and a zero temperature entropy $\mathcal{S} (Q/N)$ determined by the value of $Q$. Although this result is natural from the physical point of view, we derived it from the effective action \eqref{Seff}, which describes an ensemble with fluctuating $Q$. The presence of the particle-hole asymmetry parameter $\calE$ in the action was essential for the consistency of that calculation.
The other parameters in the effective action in Eq.~(\ref{Seff}) are the charge compressibility $K$, and $\gamma$, the coefficient of the $T$-linear specific heat at fixed $Q$. While the value of $\gamma$ was determined by a low-energy analysis using conformal perturbation theory \cite{MS16-remarks,KS17-soft}, we have shown here that a similar procedure does not apply for $K$. This is highlighted by the UV divergence in the eigenmodes of the symmetric sector of the two-particle kernel shown in Fig.~\ref{phi21nAS}.
It is necessary to account for high energy contributions to obtain the correct value of $K$, and we presented three such computations in Sections~\ref{sec:exact}, \ref{numschwindys}, and \ref{kerneldiag}; the numerical values so obtained were consistent with each other. These distinct behaviors of $\gamma$ and $K$ are analogous to those in the Fermi liquid theory: the quasiparticle effective mass $m^\ast$ determines the specific heat, but an additional Landau parameter, $F_0^s$, is needed for the compressibility.
We presented a new computation of the zero temperature entropy $\calS$ of the complex SYK model in Section~\ref{sec:bulk}. The entropy was shown to be equal to the difference in the logarithm of the partition function of a massive Dirac fermion on ${\rm H^2}$ between Neumann and Dirichlet boundary conditions, in a manner similar to the influence of double-trace operators in the usual AdS/CFT correspondence \cite{gubser2003double,Gubser:2002vv,Diaz:2007an,Giombi:2013yva,Giombi:2014xxa,Giombi:2016pvg}. This bulk approach correctly reproduced the $\mathcal{Q}$ and $\calE$ dependence of $\calS$ in the SYK model.
The above computation of the entropy should be contrasted with that in higher dimensional black holes whose near-horizon geometry has an AdS$_2$ factor (reviewed in Appendix~\ref{app:em}), where the entropy is given by the horizon area in the higher-dimensional space, and arises from degrees of freedom unrelated to the fermions. While all entropies mentioned here obey Eq.~(\ref{dSdQ}), the functional form of $\calS (\mathcal{Q})$ is different for the higher-dimensional black holes \cite{SS15}. Probe Dirac fermions can be added to such higher-dimensional black holes \cite{Faulkner09,Iqbal:2011ae},
and their Green function agrees with those of the SYK model \cite{SS10,SS15}; however such fermions
only contribute $\mathcal{O}(1)$ entropy in the distinct large-$N$ limit of the usual AdS/CFT correspondence.
\section*{Acknowledgments}
We thank
Wenbo Fu, Antoine Georges,
Simone Giombi, Luca Iliesiu,
Igor Klebanov, Sung-Sik Lee,
Juan Maldacena,
Max Metlitski,
Olivier Parcollet,
Xiao-Liang Qi,
Shinsei Ryu,
Wei Song, Douglas Stanford and
Cenke Xu
for useful discussions.
Y.G.\ is supported by the Gordon and Betty Moore Foundation EPiQS Initiative through Grant (GBMF-4306) and DOE grant, DE-SC0019030. A.K.\ is supported by the Simons Foundation under grant~376205 and through the ``It from Qubit'' program, as well as by the Institute of Quantum Information and Matter, a NSF Frontier center funded in part by the Gordon and Betty Moore Foundation.
S.S.\ and G.T.\ are supported by DOE grant, DE-SC0019030.
G.T.\ acknowledges support from the MURI grant W911NF-14-1-0003 from ARO and by DOE grant DE-SC0007870.
This work was performed in part at KITP, University of California, Santa Barbara supported by the NSF under grant PHY-1748958.
\section{Introduction}
\label{sec:intro}
Discretely holomorphic observables are correlations functions in a two-dimensional
lattice model which satisfy a discrete version of the Cauchy--Riemann (CR) equations.
In the context of the Ising model, lattice fermions with this type of property
were first described in~\cite{DotsenkoP88}.
More recently, the discrete CR equations were used by Smirnov
as a basic tool to study rigorously the scaling properties of Ising
interfaces~\cite{Smir07}. They were then exploited in the Probability literature, to
obtain mathematical proofs of several Coulomb-gas results. For instance, still in the Ising model,
this approach yielded the scaling limit of domain walls and Fortuin-Kasteleyn cluster
boundaries~\cite{ChelkakS09,HonglerK11}, and the spin and energy correlation
functions~\cite{ChelkakHI12}.
For self-avoiding walks,
it provided a rigorous way to determine the bulk~\cite{DuminilS10} and
boundary~\cite{BeatondGG11} connectivity constants, and it has also proved very useful
for numerical purposes~\cite{BeatonGJ12}.
In~\cite{HonglerKZ12}, discretely holomorphic observables
for the Ising model have been related explicitly to the transfer-matrix formalism.
In the meantime, some discretely holomorphic observables have been identified
in other 2D lattice models, including the $\mathbb{Z}_N$ clock model~\cite{RajC07} and the
dense~\cite{RivaC06} and dilute~\cite{IkhlefC09} {Temperley--Lieb} (TL) loop models.
These observables are essentially non-local, either because they include
disorder operators (in spin models), or because they are defined in terms of an
open path attached to the boundary (in loop models).
In all the known examples, it was observed that the discrete holomorphicity
condition is satisfied precisely when the Boltzmann weights are such that the model
is integrable~\cite{RajC07,IkhlefC09}. Recently, this statement was
also extended to the boundary Boltzmann weights~\cite{Ikhlef12,deGierLR12}.
These observed relations between the notions of discrete holomorphicity and
integrability have been explored further recently~\cite{AlamBatchelor12}, but they still
call for a more systematic understanding: this is the object of the present work.
An obvious starting point for this
is to try and construct discrete holomorphic observables from the underlying
symmetries of the lattice model.
This idea is very reminiscent of the construction, proposed by Bernard and Felder~\cite{BernardFelder91},
of non-local conserved currents $\psi(z)$ in lattice models possessing
a quantum group symmetry. Indeed, in that context, $\psi(z)$ is non-local
because it includes a path connecting $z$ to a reference point, in a similar way
to disorder operators in spin systems. Moreover, the conservation equation
for the current is a linear
relation between the values of $\psi$ at the points adjacent to a given vertex,
like the discrete holomorphicity condition. These resemblances~\cite{FendleyTalk12}
between discrete holomorphic observables and conserved currents can actually be made
more precise.
The present paper is based on a simple observation: in the case of an
affine quantum group symmetry, the conservation equation of currents
can be written as a discrete holomorphicity condition on the rhombic
lattice with opening angle $\alpha$, provided an
appropriate relation between $\alpha$ and the spectral parameter is introduced.
We thus consider
two simple loop models where discretely holomorphic observables -- which we shall call
for short {\it loop observables} -- are known:
the dense and dilute {Temperley--Lieb} models on the square lattice.
Using the mapping of these loop models onto vertex models
possessing $\Uq{A_1^{(1)}}$ and $\Uq{A_2^{(2)}}$ symmetry respectively,
we construct the conserved currents, and
show that they map to the loop observables identified previously.
This analysis is extended to boundary observables in the case of general
diagonal integrable boundary conditions.
This point of view explains the somewhat mysterious observation
of~\cite{RajC07,IkhlefC09}
that discrete holomorphicity somehow ``linearizes'' the Yang--Baxter
equation by providing us with a linear equation for integrable
Boltzmann weights. Indeed, from our point of view, this linearization
procedure is nothing but Jimbo's interpretation~\cite{Jimbo86} of the $R$-matrix of an
integrable model as a representation of the universal $R$-matrix of a quantized affine algebra,
which by definition satisfies such a linear relation, as will be explained
below.
The plan of the paper is as follows. In \secref{sec:vertex}, we review the Bernard--Felder
construction of conserved currents introduced in~\cite{BernardFelder91} and expose a general identity
for the adjoint action in this context.
\secref{sec:vertex-loop} reviews the correspondence between the six- and nineteen-vertex
models and the dense and dilute Temperley--Lieb loop models respectively. In particular, we show that
the integrable weights can be obtained by solving the intertwining relations, which are linear equations
in the Boltzmann weights assigned to loop model tiles. In \secref{sec:DH-bulk}, we use the mapping between
vertex and loop models to express the currents as loop observables, satisfying the discrete Cauchy--Riemann
equations. \secref{sec:vertex-bound} extends the work of Bernard--Felder~\cite{BernardFelder91} to systems
with a boundary, and focuses on the interpretation of current conservation at the boundary.
\secref{sec:loop-bound} introduces boundary tiles into the dense and dilute Temperley--Lieb\ loop models.
Integrable weights for these tiles are obtained by solving linear equations which are the boundary
analogue of the intertwining relations. In \secref{sec:DH-bound}, we express our currents
(which satisfy current conservation at the boundary) as loop observables obeying boundary
discrete Cauchy--Riemann equations. In \secref{sec:continuum}, we use the Coulomb-gas approach
to present the continuum limit of the loop observables. We conclude in \secref{sec:conclusion}.
\section{Vertex models, currents and quantized affine algebras}
\label{sec:vertex}
\subsection{Vertex models}
\label{ssec:lattice}
In this section, we recall how vertex models in statistical mechanics can be defined in terms of Boltzmann weights which are given as intertwiners of representations of quantized affine algebras. More specifically, we will consider the case in which we have a quantized affine algebra $U$ with a spectral-parameter-dependent module $V_z$. We denote the associated representation as $(\pi_z, V_z)$ where $\pi_z:U\rightarrow \hbox{End}(V_z)$, and additionally use the notation $\pi_{z_1,\dots,z_M}= \pi_{z_1}\otimes \cdots
\otimes \pi_{z_M}$ for tensor products of such representations.
Assuming that $V_{z}\otimes V_{w}$ is generically irreducible,
the $R$-matrix $R(z/w): V_{z}\otimes V_{w}\rightarrow V_{z}\otimes V_{w}$ that defines the vertex model is
given as the solution (unique up to overall normalization) of the linear relation
\begin{equation}
R(z/w) \, \pi_{z,w}(\Delta(X))
=
\pi_{z,w} (\Delta'(X)) \, R(z/w)
\label{eq:Rcomm}
\end{equation} for all $X\in U$, where $\Delta : U \rightarrow U \otimes U$ is the coproduct, $\Delta(X) = \sum X_1 \otimes X_2$, and $\Delta'$ is the opposite coproduct, with the order of the tensor factors reversed: $\Delta'(X) = \sum X_2 \otimes X_1$.
We represent the $R$-matrix pictorially by
\begin{equation*}
R(z/w)=\begin{tikzpicture}[baseline=-3pt,scale=0.75]
\draw[arr=0.25] (0,-1) node[below] {$w$} -- (0,1);
\draw[arr=0.25] (-1,0) node[left] {$z$} -- (1,0);
\end{tikzpicture}
\end{equation*}
The arrows drawn on the lines are purely to indicate ``time flow'': reading an equation from right to left corresponds to reading along a line in the direction of the arrows.
Let us define the multiple coproduct $\Delta^{(L)}:U\rightarrow U^{\otimes L}$ by $\Delta^{(L+1)}=(\Delta\otimes 1)\, \Delta^{(L)}$ and
$\Delta^{(2)}=\Delta$. The monodromy matrix
$
T^{(L)}(z;w_1,\dots,w_L): V_{z}\otimes V_{w_1} \otimes \cdots \otimes V_{w_L}\rightarrow
V_{z} \otimes V_{w_1} \otimes \cdots \otimes V_{w_L}
$
is defined as
\[
T^{(L)}(z;w_1,\dots,w_L)= R_{0L}(z/w_L) \dots R_{01}(z/w_1)\,,
\]
where the subscripts on $R$-matrices indicate the evaluation modules in which they act, {\it i.e.}, $0 \leftrightarrow V_z$, $1,\dots,L \leftrightarrow V_{w_1},\dots,V_{w_L}$. Its graphical representation is
\begin{equation*}
T^{(L)}(z;w_1,\dots,w_L)=\begin{tikzpicture}[baseline=-3pt,scale=0.75]
\foreach\x in {1,2}
\draw[arr=0.25] (\x,-1) node[below] {$w_{\x}$} -- (\x,1);
\foreach\x in {3}
\draw[arr=0.25] (\x,-1) node[below] {$\cdots$} -- (\x,1);
\foreach\x in {4,...,8}
\draw[arr=0.25] (\x,-1) -- (\x,1);
\foreach\x in {9}
\draw[arr=0.25] (\x,-1) node[below] {$w_{L}$} -- (\x,1);
\draw[arr=0.05] (0,0) node[left] {$z$} -- (10,0);
\end{tikzpicture}
\end{equation*}
Until specified otherwise, all lines will be oriented up/right in what follows,
so that we shall omit arrows on pictures.
By vertically concatenating $M$ monodromy matrices $T^{(L)}(z_i;w_1,\dots,w_L)$, $1\leq i \leq M$, one obtains a rectangular lattice $\Omega$ of width $L$ and height $M$. Each horizontal (resp.\ vertical) line of this lattice is oriented from left to right (resp.\ bottom to top), and denotes the vector space $V_{z_i}$ (resp.\ $V_{w_j}$).
To simplify the discussion, from now on we will work on a horizontally and vertically homogeneous lattice, by assuming that all horizontal (resp.\ vertical) lines carry the same evaluation parameter $z_i = z$ (resp.\ $w_j = w$). We will be interested in operators that act on the vector spaces encoded by the lattice, which graphically corresponds to the insertion of a ``node'' at an edge. The edges of the lattice will be denoted by coordinate pairs $(x \pm \frac{1}{2},t)$ or $(x,t \pm \frac{1}{2})$, where $(x,t)$ is a vertex of $\Omega$, and $x$ (resp.\ $t$) is the horizontal (resp.\ vertical) coordinate.
Typically, one is interested in the case where the left/right boundaries of this lattice are fixed in some way. In such a case, each row of the lattice becomes an operator on $V_{w_1} \otimes \cdots \otimes V_{w_L} =: V_1 \otimes \cdots \otimes V_L$, which we will rather loosely call the ``one-row transfer matrix'' $\mathbf{T}$. We will use this construction below, despite the fact that for now we do not discuss the boundary conditions
explicitly.
\subsection{Hopf algebras and graphical relations}
\label{ssec:hopf}
Following Bernard and Felder~\cite{BernardFelder91}, we consider a set of elements $\{J_a, \thab{a}{b}, \thabhat{a}{b}\}$, $a,b=1,\ldots,n$, of a Hopf algebra $U$. The elements $\thab{a}{b}$ and $\thabhat{a}{b}$ are assumed to be inverses of each other:
\begin{equation}
\thab{a}{b} \thabhat{c}{b}= \delta_{a,c}
\quad
\hbox{and}
\quad
\thabhat{b}{a} \thab{b}{c}= \delta_{a,c}
\label{eq:inversion}
\end{equation}
(where here and subsequently repeated indices are summed over) while the coproduct $\Delta$ and antipode $S$ of all elements have the form:
\begin{subequations}
\begin{align}
\label{eq:coprJ}
\Delta(J_a) &= J_a \otimes 1 + \Theta_a{}^b\otimes J_b
&
S(J_a)&=-\widehat\Theta^b{}_a J_b
\\
\label{eq:coprth}
\Delta(\Theta_a{}^c)&=\Theta_a{}^b\otimes \Theta_b{}^c
&
S(\Theta_a{}^b)&=\widehat\Theta^b{}_a
\\
\label{eq:coprhth}
\Delta(\widehat\Theta^a{}_c)&=\widehat\Theta^a{}_b\otimes \widehat\Theta^b{}_c
&
S(\widehat\Theta^a{}_b)&=\Theta_b{}^a.
\end{align}
It is also useful to define $\widehat J_a:=-\thabhat{b}{a} J_b$, which has the coproduct and antipode
\begin{align}
\label{eq:coprhJ}
\Delta(\widehat J_a) &=\widehat J_b \otimes \thabhat{b}{a} + 1 \otimes \widehat J_a
&
S(\widehat J_a) &= \thabhat{c}{b} J_c \thab{a}{b}.
\end{align}
\end{subequations}
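As a consistency check, the coproduct in \eqref{eq:coprhJ} follows directly from the definition of $\widehat J_a$ together with \eqref{eq:inversion}, \eqref{eq:coprJ} and \eqref{eq:coprhth}:
\begin{equation*}
\Delta(\widehat J_a)
=-\big(\thabhat{b}{c}\otimes\thabhat{c}{a}\big)\big(J_b\otimes 1+\thab{b}{d}\otimes J_d\big)
=\widehat J_c\otimes\thabhat{c}{a}+1\otimes\widehat J_a\,,
\end{equation*}
where the second term uses $\thabhat{b}{c}\thab{b}{d}=\delta_{c,d}$.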
Given two elements in $\{J_a\}$,
say $J_1$ and $J_2$, $J_1+J_2$ can be trivially added to the set $\{J_a\}$ in such a way that (\ref{eq:coprJ}--\ref{eq:coprhJ}) still hold. It is a little
less obvious that the same is true of $J_1 J_2$, with appropriately defined sets
$\{\thab{a}{b}\}$ and $\{\thabhat{a}{b}\}$. Therefore we only need to specify a set $\{J_a\}$
that generates $U$ as a unital algebra.
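Indeed, computing directly from (\ref{eq:coprJ}--\ref{eq:coprth}),
\begin{equation*}
\Delta(J_1J_2)=\Delta(J_1)\,\Delta(J_2)
=J_1J_2\otimes 1
+\big(J_1\,\thab{2}{b}+\thab{1}{b}\,J_2\big)\otimes J_b
+\thab{1}{a}\,\thab{2}{b}\otimes J_aJ_b\,,
\end{equation*}
which is again of the triangular form \eqref{eq:coprJ}, once the set of currents is enlarged by the products $J_aJ_b$ and the tails are read off from the middle and last terms.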
The universal $R$-matrix $R \in U \otimes U$, acting on tensor products of representations, switches the order of the tensor factors in the coproduct: namely, $R\, \Delta(X) = \Delta'(X)\, R$ for all $X \in U$. Applying this to the coproducts in (\ref{eq:coprJ}--\ref{eq:coprhJ}) gives, respectively
\begin{subequations}
\begin{align}
\label{eq:RcoprJ}
R (J_a \otimes 1 + \Theta_a{}^b\otimes J_b)
&=
(1 \otimes J_a + J_b \otimes \Theta_a{}^b) R
\\
\label{eq:Rcoprth}
R (\Theta_a{}^b\otimes \Theta_b{}^c)
&=
(\Theta_b{}^c \otimes \Theta_a{}^b) R
\\
\label{eq:Rcoprhth}
R (\widehat\Theta^a{}_b\otimes \widehat\Theta^b{}_c)
&=
(\widehat\Theta^b{}_c \otimes \widehat\Theta^a{}_b) R
\\
\label{eq:RcoprhJ}
R (\widehat J_b \otimes \thabhat{b}{a} + 1 \otimes \widehat J_a)
&=
(\thabhat{b}{a} \otimes \widehat J_b + \widehat J_a \otimes 1) R\,.
\end{align}
\end{subequations}
Suppose now that we have a representation $(\pi,V)$ of the Hopf algebra $U$. In the spirit of~\cite{BernardFelder91}, we can represent $\pi(J_a)$, $\pi(\thab{a}{b})$ and $\pi(\thabhat{a}{b})$ by the following pictures (from now on we always discuss representations of $U$, and so suppress the appearance of the $\pi$):
\begin{equation*}
J_a=\begin{tikzpicture}[baseline=-3pt,scale=0.75]
\draw(1,1) -- (1,-1);
\draw[wavy]
(0,0) node[below] {$a$} -- (1,0) node[oper] {} ;
\end{tikzpicture}
\, ,
\quad\quad
\thab{a}{b}=\begin{tikzpicture}[baseline=-3pt,scale=0.75]
\draw (1,1) -- (1,-1);
\draw[wavy]
(0,0) node[below] {$a$}-- (2,0) node[below] {$b$} ;
\end{tikzpicture}\,,
\quad\quad
\thabhat{a}{b}=\begin{tikzpicture}[baseline=-3pt,scale=0.75]
\draw (1,1) -- (1,-1);
\draw[wavy]
(2,0) node[below] {$b$} --(0,0) node[below] {$a$} ;
\end{tikzpicture}
\end{equation*}
where the vertical line denotes the vector space $V$ with an upward arrow that we have suppressed, and subscripts (resp.\ superscripts) correspond to incoming
(resp.\ outgoing) arrows. The operator $\widehat J_a$ has the graphical
representation
\begin{equation}
\widehat J_a=-\ \begin{tikzpicture} [baseline=-3pt,scale=0.75]
\draw (0,-1.1) -- (0,0.5); \draw[wavy=0.4] (1,0) node[right] {$a$} -- (-1,0) -- (-1,-0.5) -- (0,-0.5) node[oper] {};
\end{tikzpicture}
\quad
\hbox{that we simplify to}
\quad
\widehat J_a=
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\draw (0,-0.7) -- (0,0.7); \draw[wavy=0.4] (1,0) node[right] {$a$} -- (0,0) node[oper] {};
\end{tikzpicture}\label{eq:jdual}
\end{equation}
so that a connection of a wavy line to a solid line
from the right (compared to the direction of time) corresponds to the
insertion of the operator $\widehat J_a$.
Using these notations, the equations listed above have natural graphical meanings. Relation~\eqref{eq:inversion} is expressed graphically by
\begin{equation}
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\draw (0,-0.75) -- (0,0.75);
\draw[wavy=0.35] (-1,0.3) -- (1,0.3) -- (1,0);
\draw[wavy=0.4] (1,0) -- (1,-0.3) -- (-1,-0.3);
\end{tikzpicture}
=
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\draw[wavy] (-1,0.3) -- (-0.5,0.3) to[out=0,in=0] (-0.5,-0.3) -- (-1,-0.3);
\draw (0,-0.75) -- (0,0.75);
\end{tikzpicture}
\quad
\hbox{and}
\quad
\begin{tikzpicture} [baseline=-3pt,scale=0.75,xscale=-1]
\draw (0,-0.75) -- (0,0.75);
\draw[wavy=0.35] (-1,0.3) -- (1,0.3) -- (1,0);
\draw[wavy=0.4] (1,0) -- (1,-0.3) -- (-1,-0.3);
\end{tikzpicture}
=
\begin{tikzpicture} [baseline=-3pt,scale=0.75,xscale=-1]
\draw[wavy] (-1,0.3) -- (-0.5,0.3) to[out=0,in=0] (-0.5,-0.3) -- (-1,-0.3);
\draw (0,-0.75) -- (0,0.75);
\end{tikzpicture}\label{eq:unitarity}
\end{equation}
and the first coproduct relation (\ref{eq:RcoprJ}) is equivalent to
\begin{equation}\label{pic:RcoprJ}
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\draw (-2,0) -- (2,0);
\draw (0,-2) node[below]{$R (J_a \otimes 1)$} -- (0,2);
\draw[wavy=0.3] (-2,1) node[below] {$a$} -- (-1,1) -- (-1,0) node[oper] {};
\end{tikzpicture}
+
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\draw (-2,0) -- (2,0);
\draw (0,-2) node[below]{$R (\thab{a}{b} \otimes J_b)$} -- (0,2);
\draw[wavy=0.65] (-2,1) node[below] {$a$} -- (-1,1) -- (-1,-1) -- (0,-1) node[oper] {};
\end{tikzpicture}
=
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\draw (-2,0) -- (2,0);
\draw (0,-2) node[below]{$(1 \otimes J_a) R$} -- (0,2);
\draw[wavy] (-2,1) node[below] {$a$} -- (0,1) node[oper] {};
\end{tikzpicture}
+
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\draw (-2,0) -- (2,0);
\draw (0,-2) node[below]{$(J_b \otimes \thab{a}{b}) R$} -- (0,2);
\draw[wavy=0.35] (-2,1) node[below] {$a$}
-- (1,1) -- (1,0) node[oper] {};
\end{tikzpicture} \quad
\end{equation}
where we recall that the ``time flow'' is from south-west to north-east.
Similarly, for ``tail operators'' one has
\begin{equation}\label{pic:Rcoprth}
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\draw (-2,0) -- (2,0);
\draw (0,-2) node[below]{$R (\thab{a}{c} \otimes \thab{c}{b})$} -- (0,2);
\draw[wavy=0.38] (-1,1) node[left] {$a$} -- (-1,-1) -- (1,-1) node[below] {$b$};
\end{tikzpicture}
=
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\draw (-2,0) -- (2,0);
\draw (0,-2) node[below]{$ (\thab{c}{b} \otimes \thab{a}{c}) R$} -- (0,2);
\draw[wavy=0.38] (-1,1) node[left] {$a$} -- (1,1) -- (1,-1) node[below] {$b$};
\end{tikzpicture} \quad
\end{equation}
which means one can move the tail freely across vertices.
The remaining equations (\ref{eq:Rcoprhth}--\ref{eq:RcoprhJ}) have analogous graphical equivalents, but with the tails now entering from the right:
\begin{equation}\label{pic:Rcoprhth}
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\draw (-2,0) -- (2,0);
\draw (0,-2) node[below]{$R (\thabhat{b}{c} \otimes \thabhat{c}{a})$} -- (0,2);
\draw[wavy=0.38] (1,-1) node[below] {$a$} -- (-1,-1) -- (-1,1) node[left] {$b$};
\end{tikzpicture}
=
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\draw (-2,0) -- (2,0);
\draw (0,-2) node[below]{$(\thabhat{c}{a} \otimes \thabhat{b}{c}) R$} -- (0,2);
\draw[wavy=0.38] (1,-1) node[below] {$a$} -- (1,1) -- (-1,1) node[left] {$b$};
\end{tikzpicture}
\end{equation}
\begin{equation}\label{pic:RcoprhJ}
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\draw (-2,0) -- (2,0);
\draw (0,-2) node[below]{$R (1 \otimes \widehat J_a)$} -- (0,2);
\draw[wavy] (2,-1) node[below] {$a$}
-- (0,-1) node[oper] {};
\end{tikzpicture}
+
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\draw (-2,0) -- (2,0);
\draw (0,-2) node[below]{$R (\widehat J_b \otimes \thabhat{b}{a})$} -- (0,2);
\draw[wavy=0.4] (2,-1) node[below] {$a$}
-- (-1,-1) -- (-1,0) node[oper] {};
\end{tikzpicture}
=
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\draw (-2,0) -- (2,0);
\draw (0,-2) node[below]{$(\widehat J_a \otimes 1) R$} -- (0,2);
\draw[wavy=0.4] (2,-1) node[below] {$a$}
-- (1,-1) -- (1,0) node[oper] {};
\end{tikzpicture}
+
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\draw (-2,0) -- (2,0);
\draw (0,-2) node[below]{$(\thabhat{b}{a} \otimes \widehat J_b) R$} -- (0,2);
\draw[wavy=0.6] (2,-1) node[below] {$a$}
-- (1,-1) -- (1,1) -- (0,1) node[oper] {};
\end{tikzpicture}
\end{equation}
\subsection{Non-local currents and conservation laws in the bulk}
\label{ssec:qg-bulk}
Continuing along the lines of~\cite{BernardFelder91}, we act with the repeated coproduct $\Delta^{(L)}$ on the elements of $U$ to define non-local currents. By iterating the coproduct in \eqref{eq:coprJ}, we obtain
\begin{equation}
\mathbf{J}_a :=
\Delta^{(L)}(J_a)=\sum_{x=1}^L j_a^{(\tsup)}(x),
\quad
j_a^{(\tsup)}(x) :=\delta_{a,a_1}
\thab{a_1}{a_2} \otimes\cdots\otimes \thab{a_{x-1}}{a_x}
\otimes J_{a_x} \otimes 1\otimes\cdots\otimes 1 \, .\label{eq:JLcoprod}
\end{equation}
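For instance, for $L=3$, iterating \eqref{eq:coprJ} gives
\begin{equation*}
\Delta^{(3)}(J_a)
=J_a\otimes 1\otimes 1
+\thab{a}{b}\otimes J_b\otimes 1
+\thab{a}{b}\otimes\thab{b}{c}\otimes J_c\,,
\end{equation*}
whose three terms are $j_a^{(\tsup)}(1)$, $j_a^{(\tsup)}(2)$ and $j_a^{(\tsup)}(3)$: the current at site $x$ carries a tail of $\Theta$'s over all sites to its left.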
The object thus constructed, $\mathbf{J}_a$, is the charge associated with the time component $j_a^{(\tsup)}(x)$ of a non-local current.
Acting on a
tensor product $V_1 \otimes \dots \otimes V_L$, we have the graphical representation
\begin{equation*}
j^{(\tsup)}_a(x) =
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\foreach\x in {1,...,9}
\draw (\x,1) -- (\x,-1);
\draw [wavy]
(0,0) node[below] {$a$} -- (5,0) node[oper] {} ;
\node at (5,-1.3) {$x$};
\end{tikzpicture}
\end{equation*}
where each solid line corresponds to a space $V_i$
(the tensor product is ordered from left to right, as in the corresponding
algebraic expression).
In this equation, and in what follows, we use superscripts $\tsup$ and $\xsup$ to indicate
vertical and horizontal directions respectively.
If $V_i\simeq \mathbb{C}^d$ as below, then each solid line will carry an index
in $\{1,\ldots,d\}$,
and each wavy line an index $a \in \{1,\ldots,n\}$.
The intersection of a wavy line and a solid line is a $\thab{a_i}{a_{i+1}}$
acting on the solid line.
It is also useful to introduce the following graphical notation for the sum occurring
in (\ref{eq:JLcoprod}):
\begin{equation*}
\mathbf{J}_a=
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\foreach\x in {1,...,9}
\draw (\x,1) -- (\x,-1);
\draw[contour=0.75] (0,-0.075) -- (10,-0.075);
\draw[wavy] (0,0.075) node[left] {$a$} -- (5,0.075) node[oper] {} ;
\end{tikzpicture}
\end{equation*}
where the dashed line represents summation over the position of the node ({\it i.e.},
discrete integration).
We want to reinterpret~\eqref{eq:RcoprJ} (or its graphical equivalent, \eqref{pic:RcoprJ})
as a discrete current conservation
for a vector field $(j_a^{(\xsup)},j_a^{(\tsup)})$.
This leads us
to define $j_a^{(\xsup)}(x)$, $x\in\mathbb{Z}+1/2$,
as the insertion of a dot on the horizontal
edge $[x-1/2,x+1/2]$ (with a tail attached, one half-step up and then to the left). If one wants to
define $j_a^{(\xsup)}$ as an operator on $V_1\otimes
\cdots\otimes V_L$, one needs to ``embed'' it inside a transfer matrix: {\it i.e.}, letting $\mathbf{T}$ be the one-row transfer matrix
(recall that
the boundary conditions on the left/right are left undetermined in this
section),
we define
\begin{equation*}
\mathbf{T}^{1/2}
j^{(\xsup)}_a(x)
\mathbf{T}^{1/2}
=
\begin{tikzpicture}[baseline=-3pt,scale=0.75]
\draw (0,0) -- (8,0);
\foreach\x in {1,...,7}
\draw (\x,-1) -- (\x,1);
\draw[wavy=0.55]
(0,0.5) node[below] {$a$} -- (4.5,0.5) -- (4.5,0) node[oper] {} node[below=1mm] {$x$};
\end{tikzpicture}
\end{equation*}
Adding to all terms in~\eqref{pic:RcoprJ} a tail which extends all the way to the left, we can straighten it using \eqref{pic:Rcoprth}. Assuming that the tail commutes with the left boundary, we find
\begin{equation} \label{eq:conserv-j}
j_a^{(\xsup)}(x-1/2,t) - j_a^{(\xsup)}(x+1/2,t)
+ j_a^{(\tsup)}(x,t-1/2) - j_a^{(\tsup)}(x,t+1/2)
= 0 \,
\end{equation}
where in the operator formalism, the time evolution of any operator $\mathcal{O}$ is given by $\mathcal{O}(t)=\mathbf{T}^t \mathcal{O} \mathbf{T}^{-t}$. Equation \eqref{eq:conserv-j} expresses the conservation of the current $(j_a^{(\xsup)},j_a^{(\tsup)})$ that we have defined.
Summing \eqref{eq:conserv-j} over $x$ results in the conservation law
for the associated global charge $\mathbf{J}_a$ \eqref{eq:JLcoprod}, up to boundary terms:
\begin{equation}\label{eq:conserv-Q}
\mathbf{J}_a(t+1/2) - \mathbf{J}_a(t-1/2)
= j_a^{(\xsup)}(1/2, t) - j_a^{(\xsup)}(L+1/2, t) \,.
\end{equation}
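Indeed, summing \eqref{eq:conserv-j} over $x=1,\dots,L$, the horizontal components telescope,
\begin{equation*}
\sum_{x=1}^{L}\left[j_a^{(\xsup)}(x-1/2,t)-j_a^{(\xsup)}(x+1/2,t)\right]
=j_a^{(\xsup)}(1/2,t)-j_a^{(\xsup)}(L+1/2,t)\,,
\end{equation*}
while by \eqref{eq:JLcoprod} the vertical components sum to $\mathbf{J}_a(t-1/2)-\mathbf{J}_a(t+1/2)$.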
If, as above, we depict the charge $\mathbf{J}_a$ by
using a dashed line to denote summation, then equation \eqref{eq:conserv-Q} has the graphical interpretation
\begin{equation*}
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\draw (0,0.25) -- (9,0.25);
\draw[contour=0.8] (0,0.6) -- (0.35,0.6) -- (0.35,-0.4) -- (8.5,-0.4);
\foreach\x in {1,...,8}
\draw (\x,1.25) -- (\x,-0.75);
\draw [wavy] (0,0.75) node[left] {$a$} -- (0.5,0.75) -- (0.5,-0.25) -- (4,-0.25) node[oper] {} ;
\end{tikzpicture}
\quad=\quad
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\draw (0,0.25) -- (9,0.25);
\draw[contour=0.75] (0,0.6) -- (8.5,0.6) -- (8.5,-0.25) -- (9,-0.25);
\foreach\x in {1,...,8}
\draw (\x,1.25) -- (\x,-0.75);
\draw [wavy] (0,0.75) node[left] {$a$} -- (5,0.75) node[oper] {} ;
\end{tikzpicture}
\end{equation*}
In this paper, we shall be mostly concerned with the ``local'' relation
\eqref{eq:conserv-j} and not so much with the global relation
\eqref{eq:conserv-Q}. Note however that even the former relation
is not strictly local because of the tails; we therefore shall have
to pay attention to the boundary conditions to the left when applying
it in what follows.
It is the purpose of this paper to relate
\eqref{eq:conserv-j}
to the so-called discrete holomorphicity condition.
\subsection{Adjoint action}
\label{ssec:qg-adjoint}
A detailed discussion of the action of $U$ on local fields
and of its adjoint action can
be found in \S 2.4 and 2.5 of~\cite{BernardFelder91}.
Here we summarize some relevant facts. Let us write the general coproduct as
$$
\Delta(X)=\sum X^{(1)} \otimes X^{(2)} \,,
$$
for any $X \in U$.
The adjoint action of a Hopf algebra is defined by
$$
\ad{X}{Y} := \sum X^{(1)} \ Y \ S \left(X^{(2)} \right) \,.
$$
In the case of elements of the form $J_a$, whose coproduct and antipode are given by \eqref{eq:coprJ},
this means that
\begin{equation}
\ad{J_a}{J_b} = J_a J_b - \thab{a}{c} J_b \thabhat{d}{c} J_d \,.\label{eq:adaction}
\end{equation}
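Spelling out the intermediate step: by \eqref{eq:coprJ}, the adjoint action of $J_a$ on an arbitrary $Y\in U$ is
\begin{equation*}
\ad{J_a}{Y}=J_a\,Y\,S(1)+\thab{a}{c}\,Y\,S(J_c)
=J_a\,Y-\thab{a}{c}\,Y\,\thabhat{d}{c}\,J_d\,,
\end{equation*}
which reduces to \eqref{eq:adaction} for $Y=J_b$.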
The natural action of
$J_a$ on $\mathbf{J}_b$, viewed as an operator on $V_1\otimes\cdots\otimes V_L$,
is obtained by applying $\Delta^{(L)}$:
\begin{equation*}
\Delta^{(L)} \left[ \ad{J_a}{J_b} \right] = \sum_{x=1}^{L}
A_a \left[ j_b^{(\tsup)}(x,t) \right] \,,
\end{equation*}
where $A_a$ on the r.h.s.\ gives the action of $J_a$ on local fields. This action follows from taking the
coproduct of (\ref{eq:adaction})
and is best described graphically:
\begin{eqnarray} \label{eq:adj-graph}
A_a \left[ j_b^{(\tsup)}(x,t) \right]
=
&&\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\foreach\x in {1,...,8}
\draw (\x,1) -- (\x,-1);
\draw[wavy] (0,0) node[left] {$b$} -- (5,0) node[oper] {};
\node at (5,-1.3) {$x$};
\draw[contour=0.8] (0,0.5) -- (9.5,0.5);
\draw[wavy] (0,0.65) node[left] {$a$} -- (6,0.65) node[oper] {};
\end{tikzpicture}\\
-&&\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\foreach\x in {1,...,8}
\draw (\x,1) -- (\x,-1.75);
\draw[wavy] (0,0) node[left] {$b$} -- (5,0) node[oper] {};
\node at (5,-1.95) {$x$};
\draw[contour=0.6] (0,-1.55) -- (9,-1.55);
\draw[wavy] (0,0.65) node[left] {$a$} -- (5.65,0.65) --(8.65,0.65) --
(8.65,-0.65) -- (0,-0.65) -- (0,-1.35) -- (3,-1.35) node[oper] {};
\end{tikzpicture}\nonumber\\
=
&&\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\foreach\x in {1,...,8}
\draw (\x,1) -- (\x,-1);
\draw[wavy] (0,0) node[left] {$b$} -- (5,0) node[oper] {};
\node at (5,-1.3) {$x$};
\draw[contour=0.8] (0,0.5) -- (5.5,0.5) -- (5.5,-0.5) -- (0,-0.5);
\draw[wavy] (0,0.65) node[left] {$a$} -- (5.65,0.65) --
(5.65,-0.65) -- (3,-0.65) node[oper] {};
\end{tikzpicture}\nonumber
\end{eqnarray}
Once again, dashed lines denote discrete contour integration and the final equality follows by application of
relations (\ref{eq:jdual}) and (\ref{eq:unitarity}).
A similar procedure can be carried out for the action on the other component
$j_b^{(\xsup)}$ of currents. In this way, one can organize various currents as multiplets
of $U$ (or of subalgebras of $U$).
In general, starting from some generators $J_a$,
the adjoint action of the whole of $U$
produces a large subspace inside $U$. Note however
that in the case of quantized affine algebras, to be discussed now,
the Serre relations imply that the module generated by the adjoint
action of a $U_q(A_1) := \Uq{sl_2}$ subalgebra on some other
Chevalley generator is finite-dimensional.
We now apply the formalism of previous sections to particular quantized affine algebras.
\subsection{Quantized affine algebras}
\label{ssec:qaa}
In this paper we always take $U$ to be a quantized affine algebra $\Uq{\mathfrak{g}}$ corresponding
to a rank 1 affine Lie algebra $\mathfrak{g}$. Let $\mathcal{A}_{ij}$ denote the entries of the generalized Cartan matrix for
$\Uq{\mathfrak{g}}$, and let $d_i$ be integers, with greatest common divisor 1, such that
$d_i \mathcal{A}_{ij} = d_j \mathcal{A}_{ji}$. Then the Chevalley presentation of
$\Uq{\mathfrak{g}}$ is given in terms of the generators $\{E_i,F_i,T_i\}$, $i \in \{0,1\}$,
satisfying the list of relations\footnote{For a review of quantized affine algebras see~\cite{chpr94}.}
\begin{align}
& T_i T_i^{-1} = T_i^{-1} T_i = 1, \quad [T_i,T_j] = 0 \,,
\label{1}
\\
&
T_i E_j T_i^{-1} = q^{d_i \mathcal{A}_{ij}} E_j,
\quad
T_i F_j T_i^{-1} = q^{-d_i \mathcal{A}_{ij}} F_j,
\quad
[E_i,F_j] = \delta_{ij} \frac{T_i-T_i^{-1}}{q^{d_i}-q^{-d_i}} \,,
\label{2}
\\
&
\sum_{k=0}^{1-\mathcal{A}_{ij}}
(-)^k \left[ \begin{array}{c} 1-\mathcal{A}_{ij} \\ k \end{array} \right]_{q^{d_i}}
(E_i)^{1-\mathcal{A}_{ij}-k} E_j (E_i)^k = 0 \,,
\label{3}
\\
&
\sum_{k=0}^{1-\mathcal{A}_{ij}}
(-)^k \left[ \begin{array}{c} 1-\mathcal{A}_{ij} \\ k \end{array} \right]_{q^{d_i}}
(F_i)^{1-\mathcal{A}_{ij}-k} F_j (F_i)^k = 0 \,,
\label{4}
\end{align}
where we use the notation
\begin{align*}
\left[\begin{array}{c} m \\ n \end{array}\right]_q
:=
\frac{(q^m-q^{-m})\dots (q^{m-n+1}-q^{-m+n-1})}{(q^n-q^{-n})\dots (q-q^{-1})} \,.
\end{align*}
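For instance, when $\mathcal{A}_{ij}=-2$ and $d_i=1$ (the case of $\Uq{A_1^{(1)}}$ below), relation \eqref{3} reads
\begin{equation*}
E_i^3E_j-[3]_q\,E_i^2E_jE_i+[3]_q\,E_iE_jE_i^2-E_jE_i^3=0\,,
\qquad
[3]_q:=q^2+1+q^{-2}\,,
\end{equation*}
since $\left[\begin{array}{c} 3 \\ 1 \end{array}\right]_q=\left[\begin{array}{c} 3 \\ 2 \end{array}\right]_q=[3]_q$.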
The coproduct of these generators is taken to be
\begin{align*}
& \Delta(E_i) = E_i \otimes 1 + T_i \otimes E_i \,,&
&\Delta(F_i) = F_i \otimes T_i^{-1} + 1 \otimes F_i \,,&
&\Delta(T_i) = T_i \otimes T_i \,.
\end{align*}
It is also useful for our purposes to introduce the modified generators $\bar E_i := q^{d_i} T_i F_i$, whose coproduct takes the more convenient form:
\begin{align*}
\Delta(\bar E_i) = \bar E_i \otimes 1 + T_i \otimes \bar E_i \,.
\end{align*}
The correspondence between these generators and the Hopf algebra elements $J_a,\thab{a}{b},\thabhat{a}{b}$ introduced in \secref{ssec:hopf} is immediate. For $a=0,1$ we set: $J_a = E_a$ or $\bar E_a$, their coproducts
being of the form \eqref{eq:coprJ}; $\thab{a}{b}= \delta_{a,b} T_a$ (resp.\ $\thabhat{a}{b} = \delta_{a,b} T_a^{-1}$), their coproducts being of the form \eqref{eq:coprth} (resp.\ \eqref{eq:coprhth}); and $\widehat J_a = F_a$, their coproducts being of the form \eqref{eq:coprhJ}.
In the following subsections we describe the two quantized affine algebras which will interest us in this paper, as well as the representations to be considered.
\subsubsection{The case $U = U_q(A_1^{(1)})$}
The first case of interest to us is $U_q(A_1^{(1)})$, for which the generalized Cartan matrix is given by
\[
\left(
\begin{array}{cc}
\mathcal{A}_{00} & \mathcal{A}_{01} \\
\mathcal{A}_{10} & \mathcal{A}_{11}
\end{array}
\right)
=
\left(
\begin{array}{cc}
2 & -2 \\
-2 & 2
\end{array}
\right)
\]
and $d_0=d_1=1$.
The representation $(\pi_z,V_z)$ is taken as the level-zero fundamental (principal) evaluation representation $V_z=\mathbb{C}^2[[z]]$:
\begin{align*}
\pi_z(E_0)&= \begin{pmatrix}0&0\\z&0\end{pmatrix}&
\pi_z(\bar E_0)&= \begin{pmatrix}0&z^{-1}\\0&0\end{pmatrix}& \pi_z(T_0)&=\begin{pmatrix}q^{-1}&0\\0&q\end{pmatrix}
\\
\pi_z(E_1)&=\begin{pmatrix}0&z\\0&0\end{pmatrix}&
\pi_z(\bar E_1)&=\begin{pmatrix}0&0\\z^{-1}&0\end{pmatrix}& \pi_z(T_1)&=\begin{pmatrix}q&0\\0&q^{-1}\end{pmatrix} \,.
\end{align*}
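As a quick consistency check of these formulas, note that $F_1 = q^{-1} T_1^{-1} \bar E_1$, so that
\begin{equation*}
\pi_z(F_1)=\begin{pmatrix}0&0\\z^{-1}&0\end{pmatrix},
\qquad
\pi_z([E_1,F_1])=\begin{pmatrix}1&0\\0&-1\end{pmatrix}
=\pi_z\!\left(\frac{T_1-T_1^{-1}}{q-q^{-1}}\right),
\end{equation*}
in agreement with \eqref{2}.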
The $R$-matrix $R(z)$ is given by
\begin{equation} \label{eq:R-6V}
R(z) = \left(\begin{array}{cc|cc}
qz-q^{-1}z^{-1} & 0 & 0 & 0 \\
0 & z-z^{-1} & q-q^{-1} & 0 \\
\hline
0 & q-q^{-1} & z-z^{-1} & 0 \\
0 & 0 & 0 & qz-q^{-1}z^{-1}
\end{array}\right) \,,
\end{equation}
which gives the weights of the 6-vertex model in the principal gradation.
\subsubsection{The case $U=U_q(A_2^{(2)})$}
The second case which we study is $U_q(A_2^{(2)})$, for which the generalized Cartan matrix is
\[
\left(
\begin{array}{cc}
\mathcal{A}_{00} & \mathcal{A}_{01} \\
\mathcal{A}_{10} & \mathcal{A}_{11}
\end{array}
\right)
=
\left(
\begin{array}{cc}
2 & -4 \\
-1 & 2
\end{array}
\right)
\]
and $d_0=1$, $d_1=4$.
The representation $(\pi_z,V_z)$ is now $V_z=\mathbb{C}^3[[z,z^\ell]]$
with
\begin{align*}
\pi_z(E_0) &=
z^{1-\ell} \varphi(q)
\left( \begin{array}{ccc} 0 & 0 & 0 \\ 1 & 0 & 0 \\ 0 & q & 0 \end{array} \right)
&
\pi_z(E_1) &=
z^{+2\ell}
\left( \begin{array}{ccc} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right)
\\
\pi_z(\bar E_0) &=
z^{\ell-1} \varphi(q)
\left( \begin{array}{ccc} 0 & q^{-1} & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array} \right)
&
\pi_z(\bar E_1) &=
z^{-2\ell}
\left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{array} \right)
\\
\pi_z(T_0) &=
\left( \begin{array}{ccc} q^{-2} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & q^{2} \end{array} \right)
&
\pi_z(T_1) &=
\left( \begin{array}{ccc} q^{4} & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & q^{-4} \end{array} \right) \,,
\end{align*}
where $\varphi(q) = (q+q^{-1})^{1/2}$, and $\ell$ is an arbitrary constant which we will fix later.
The $R$-matrix is
\begin{equation} \label{eq:R-19V}
R(z) = \left( \begin{array}{ccc|ccc|ccc}
\omega_{14} &&&&&&&& \\
& \omega_{10} && \omega_5 &&&&& \\
&& \omega_{16} && \omega_8 && \omega_{19} \\
\hline
& \omega_2 && \omega_{12} &&&&& \\
&& \omega_7 && \omega_1 && \omega_6 \\
&&&&& \omega_{13} && \omega_3 \\
\hline
&& \omega_{18} && \omega_9 && \omega_{17} \\
&&&&& \omega_4 && \omega_{11} \\
&&&&&&&& \omega_{15}
\end{array} \right) \,,
\end{equation}
where the entries are given by
\begin{align*}
\omega_1 &= (z-z^{-1})(q^3z+q^{-3}z^{-1}) + (q^2-q^{-2})(q^3+q^{-3}) \\
\omega_2 = \omega_4 &= z^{-\ell} (q^2-q^{-2})(q^3z+q^{-3}z^{-1}) \\
\omega_3 = \omega_5 &= z^{+\ell} (q^2-q^{-2})(q^3z+q^{-3}z^{-1}) \\
\omega_6 = \omega_8 &= -q^{+2} z^{+\ell} (q^2-q^{-2})(z-z^{-1}) \\
\omega_7 = \omega_9 &= +q^{-2} z^{-\ell} (q^2-q^{-2})(z-z^{-1}) \\
\omega_{10} = \omega_{11} = \omega_{12} = \omega_{13}
&= (z-z^{-1})(q^3 z + q^{-3} z^{-1}) \\
\omega_{14} = \omega_{15} &= (q^2z-q^{-2}z^{-1})(q^3z+q^{-3}z^{-1}) \\
\omega_{16} = \omega_{17} &= (z-z^{-1})(qz+q^{-1}z^{-1}) \\
\omega_{18} &= z^{-2\ell} (q^2-q^{-2}) [(q^2+q^{-2})qz^2 - (q-q^{-1}) q^{-2}] \\
\omega_{19} &= z^{+2\ell} (q^2-q^{-2}) [(q^2+q^{-2}) q^{-1} z^{-2} + (q-q^{-1})q^2] \,.
\end{align*}
These coincide with the Boltzmann weights of the Izergin--Korepin 19-vertex model, up to factors of $z^{\pm\ell}$ which come from the fact that we have not yet fixed the gradation.
\section{From vertex models to loop models}
\label{sec:vertex-loop}
In this section we review the loop/vertex model connection for $U_q(A_1^{(1)})$ and $U_q(A_2^{(2)})$ models in the bulk. Until this stage, our analysis has been on a lattice $\Omega$ with an arbitrary angle between horizontal and vertical lines. We now specify our domain further, by considering a rhombic lattice of definite angle $\alpha$, which we will ultimately relate to the ratio of the spectral parameters $z/w$. We also draw the dual lattice using dotted lines:
\begin{equation}\label{eq:plaq}
R=
\begin{tikzpicture}[baseline=-3pt,scale=0.75,distort]
\draw (1,0) -- (-1,0) node[left] {$z$};
\draw (0,1) -- (0,-1) node[below] {$w$};
\draw (-0.3,0) arc (180:90:0.3) node[left=1.5mm] {$\alpha$};
\end{tikzpicture}
=
\begin{tikzpicture}[baseline=-3pt,scale=1.25,distort]
\plaq (0,0)
\end{tikzpicture}
\end{equation}
where the edges of an elementary ``plaquette'' (elementary rhombus
of the dual lattice) are of unit length.
\subsection{The dense Temperley--Lieb{} model and the \texorpdfstring{$U_q(A_1^{(1)})$}{Uq(A1(1))} vertex model}
The dense {Temperley--Lieb} model is defined by assigning weights $a$ and $b$ to the following two local configurations of a rhombus with top-left lattice angle $\alpha$:
\begin{center}
\begin{tabular}{ccc}
\begin{tikzpicture} [baseline=-3pt,scale=1.25,distort]
\plaqa(0,0)
\end{tikzpicture}
& &
\begin{tikzpicture} [baseline=-3pt,scale=1.25,distort]
\plaqb(0,0)
\end{tikzpicture}
\\
$a$ & & $b$
\end{tabular}
\end{center}
and weight $\tau$ to closed loops in a given lattice configuration $C$.
Thus, $C$ is assigned the Boltzmann weight
\begin{equation} \label{eq:W-TL}
W(C) = a^{N_a(C)} \ b^{N_b(C)} \ \tau^{N_\ell(C)} \,,
\end{equation}
where $N_a(C)$ (resp.\ $N_b(C)$) is the number of plaquettes of weight $a$ (resp.\ $b$),
and $N_\ell(C)$ is the number of closed loops in $C$.
Let us now recall, following~\cite{Bax82}, how these weights are related to those of the six-vertex model.
Firstly, we identify
\begin{equation}
\tau=-(q+q^{-1}),
\quad\quad
q:= -e^{2\pi i\nu} .
\end{equation}
We then associate the above configurations with operators $A$, $B: V\otimes V \rightarrow V\otimes V$ (acting SW-NE),
with $V=\mathbb{C} \ket{\uparrow}\oplus \mathbb{C} \ket{\downarrow}$. This is done by dressing each configuration with arrows and reading off the
weight $e^{i\nu \delta}$ associated with the total turning angle $\delta$.\footnote{
We assume, as shown on the pictures,
that the loop lines
enter/leave {\em orthogonally}\/ to the sides of the rhombus.} In this way we obtain
\begin{equation}\label{eq:TL-gen}
A := \left( \begin{array}{cc|cc}
1 & 0 & 0 & 0 \\
0 & e^{2i\nu\alpha} & 0 & 0 \\
\hline
0 & 0 & e^{-2i\nu\alpha} & 0 \\
0 & 0 & 0 & 1
\end{array} \right) \,,
\qquad
B:= \left( \begin{array}{cc|cc}
0 & 0 & 0 & 0 \\
0 & e^{-2i\nu(\pi-\alpha)} & 1 & 0 \\
\hline
0 & 1 & e^{2i\nu(\pi-\alpha)} & 0 \\
0 & 0 & 0 & 0
\end{array} \right) \,,
\end{equation}
in the basis $\{ \ket{\uparrow\uparrow}, \ket{\uparrow\downarrow},
\ket{\downarrow\uparrow}, \ket{\downarrow\downarrow} \}$.
The $\check{R}$-matrix is then the linear combination of these operators, dressed by their respective weights:
\begin{equation} \label{eq:R-6V-loop}
\Rc := P R = a\ A + b\ B
= \left( \begin{array}{cc|cc}
a & 0 & 0 & 0 \\
0 & e^{2i\nu\alpha}a + e^{-2i\nu(\pi-\alpha)}b & b & 0 \\
\hline
0 & b & e^{-2i\nu\alpha}a + e^{2i\nu(\pi-\alpha)}b & 0 \\
0 & 0 & 0 & a
\end{array} \right) \,.
\end{equation}
\subsection{Six-vertex model weights from intertwining relations}
It is well known that the commutant of the quantized algebra
$\Uq{A_1}$ acting on a tensor product of two-dimensional representations
is the {Temperley--Lieb} algebra~\cite{Jim85}.
This suggests that the two terms $A$ and $B$
in~\eqref{eq:R-6V-loop} are intertwiners for the subalgebra
$\aver{E_1,\bar{E}_1,T_1} \cong \Uq{A_1}$. Indeed, one can check that
\begin{equation}\label{eq:tl-AB}
\begin{cases}
A\, \pi_{z,w} (\Delta(X)) = \pi_{w,z} (\Delta(X)) \, A \\
B\, \pi_{z,w}(\Delta(X)) = \pi_{w,z} (\Delta(X)) \, B \,,
\end{cases}
\qquad
X =E_1,\bar{E}_1,T_1 \,,
\qquad
\end{equation}
provided the following relation holds between angle and spectral parameters:
$z/w=e^{-2i\nu\alpha}$.
Notice that the equation for $X=T_1$ is automatically satisfied,
since both $A$ and $B$ preserve the total magnetization.
Moreover, imposing that $\check R$ commute in a similar way with the action of $E_0$, $\bar E_0$
fixes the ratio $a/b$, say
\begin{equation}
a=q\,x-q^{-1}x^{-1},\quad\quad b=x-x^{-1}, \quad\quad x:= z/w.
\end{equation}
Substituting these values for the weights into \eqref{eq:R-6V-loop}, we recover the 6-vertex $R$-matrix \eqref{eq:R-6V}.
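As an illustration, consider the $(\ket{\uparrow\downarrow},\ket{\uparrow\downarrow})$ entry of \eqref{eq:R-6V-loop}: using $e^{2i\nu\alpha}=x^{-1}$ and $e^{2i\nu\pi}=-q$, one finds
\begin{equation*}
e^{2i\nu\alpha}\,a+e^{-2i\nu(\pi-\alpha)}\,b
=x^{-1}(q\,x-q^{-1}x^{-1})-q^{-1}x^{-1}(x-x^{-1})
=q-q^{-1}\,,
\end{equation*}
which is indeed the corresponding entry of $\Rc=PR$, with $R$ given by \eqref{eq:R-6V} evaluated at $z\to x$.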
{\em Remark:} The explicit connection of the operators \eqref{eq:TL-gen} with the usual generators of the Temperley--Lieb algebra is as follows. On a strip of width $L$, the {Temperley--Lieb} algebra with generators $\{g_1, \dots, g_{L-1}\}$ satisfies the list of relations
\begin{equation}
\begin{array}{rcl}
g_j g_{j \pm 1} g_j &=& g_j \\
g_j^2 &=& \tau g_j \\
g_j g_k &=& g_k g_j, \quad \text{if $|j-k|>1$.}
\end{array}
\end{equation}
The generator $g_j$ acts non-trivially on the spaces at positions $j$ and $j+1$, and the weights for a plaquette at this position are encoded in the $\check{\mathcal R}$-matrix $\check{\mathcal R}_{j,j+1} = a\ {\mathbf 1} + b\ g_j$. Both the identity ${\mathbf 1}$ and the TL generator $g_j$ have well-known graphical interpretations, obtained by rotating the tiles above through 45 degrees and deforming them into squares.
Consequently, we expect that $\check{R}$ and $\check{\mathcal R}$ are related by a simple gauge transformation (corresponding to the passage from principal
gradation to homogeneous gradation), which is indeed the case; we find that
$$
\check{\mathcal R} = \mathcal{U}^{-1} \ \Rc \ \mathcal{U}' \,,
\qquad
\mathcal{U} := w^{\sigma^z/2} \otimes z^{\sigma^z/2} \,,
\qquad
\mathcal{U}' := z^{\sigma^z/2} \otimes w^{\sigma^z/2} \,,
$$
since under this transformation the two terms in $\check{R}$ become
$$
\mathcal{U}^{-1} \ A \ \mathcal{U}' = {\mathbf 1} \,,
\qquad
\mathcal{U}^{-1} \ B \ \mathcal{U}' = \left( \begin{array}{cccc}
0 & 0 & 0 & 0 \\
0 & -q^{-1} & 1 & 0 \\
0 & 1 & -q & 0 \\
0 & 0 & 0 & 0
\end{array} \right) \,,
$$
where the second term is the well-known spin-$\frac{1}{2}$ representation of $g_j$.
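As a quick check, the nonzero $2\times 2$ block of this matrix squares to $-(q+q^{-1})$ times itself,
$$
\left( \begin{array}{cc} -q^{-1} & 1 \\ 1 & -q \end{array} \right)^{\!2}
= \left( \begin{array}{cc} 1+q^{-2} & -q-q^{-1} \\ -q-q^{-1} & 1+q^{2} \end{array} \right)
= -(q+q^{-1}) \left( \begin{array}{cc} -q^{-1} & 1 \\ 1 & -q \end{array} \right) \,,
$$
so that the relation $g_j^2 = \tau g_j$ is indeed satisfied with $\tau=-(q+q^{-1})$.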
\subsection{The dilute Temperley--Lieb{} model and the
\texorpdfstring{$U_q(A_2^{(2)})$}{Uq(A2(2))}
vertex model}
The dilute {Temperley--Lieb} model (or ${\rm O}(n)$ model) is defined
by the plaquette configurations:
\begin{center}
\begin{tabular}{cccccccc}
\begin{tikzpicture} [baseline=-3pt,scale=1.25,distort]
\plaqg(0,0)
\end{tikzpicture}
&
\begin{tikzpicture} [baseline=-3pt,scale=1.25,distort]
\plaqf(0,0)
\end{tikzpicture}
&
\begin{tikzpicture} [baseline=-3pt,scale=1.25,distort]
\plaqd(0,0)
\end{tikzpicture}
&
\begin{tikzpicture} [baseline=-3pt,scale=1.25,distort]
\plaqc(0,0)
\end{tikzpicture}
&
\begin{tikzpicture} [baseline=-3pt,scale=1.25,distort]
\plaqa(0,0)
\end{tikzpicture}
&
\begin{tikzpicture} [baseline=-3pt,scale=1.25,distort]
\plaqb(0,0)
\end{tikzpicture}\\
$t$ & $u_1$ & $u_2$ & $v$ & $w_1$ & $w_2$ \\[2mm]
&
\begin{tikzpicture} [baseline=-3pt,scale=1.25,distort]
\plaqff(0,0)
\end{tikzpicture}
&
\begin{tikzpicture} [baseline=-3pt,scale=1.25,distort]
\plaqdd(0,0)
\end{tikzpicture}
&
\begin{tikzpicture} [baseline=-3pt,scale=1.25,distort]
\plaqcc(0,0)
\end{tikzpicture} \\
& $u_1$ & $u_2$ & $v$
\end{tabular}
\end{center}
with corresponding weights shown beneath the configurations.
The Boltzmann weight of a configuration $C$ is given by
$$
W(C) = t^{N_t(C)} \ u_1^{N_{u_1}(C)} u_2^{N_{u_2}(C)} v^{N_v(C)} w_1^{N_{w_1}(C)} w_2^{N_{w_2}(C)}
\tau^{N_\ell(C)} \,,
$$
$$
where $N_\beta(C)$ denotes the number of plaquettes carrying weight $\beta \in \{t,u_1,u_2,v,w_1,w_2\}$, and $N_\ell(C)$ is the number
of closed loops.
In a similar way to the dense case, we introduce the parameters
$$\tau = -(q^4+q^{-4}), \quad\quad
q := e^{i\pi(\frac{\nu}{2}-\frac{1}{4})} .
$$
Now we identify the configurations with operators $T,U'_1,U''_1,U'_2,U''_2,V',V'',W_1,W_2: V\otimes V \rightarrow V\otimes V$, where $V=\mathbb{C}^3=\mathbb{C}\ket{\uparrow}\oplus \mathbb{C} \ket{0} \oplus \mathbb{C}\ket{\downarrow}$. We dress the lines in each plaquette with arrows, associating them with $\ket{\uparrow}$ or $\ket{\downarrow}$, identify missing lines with $\ket{0}$, and collect an associated weight $e^{i\nu\delta}$ for the total turning angle $\delta$. To save space, we will not write down the explicit form of the resulting operators.
The $\check{R}$-matrix is, as before, the linear combination of all operators dressed by their Boltzmann weights. In the basis
$\{\ket{\uparrow\uparrow},\ket{\uparrow 0},\ket{\uparrow\downarrow},
\ket{0\uparrow},\ket{00},\ket{0\downarrow},
\ket{\downarrow\uparrow},\ket{\downarrow 0},\ket{\downarrow\downarrow}\}$, it is given by
\begin{multline}
\label{eq:R-19V-loop}
\check{R}
=
t\ T + u_1\ (U'_1 + U''_1) + u_2\ (U'_2 + U''_2) + v\ (V' + V'') + w_1\ W_1 + w_2\ W_2
=
\\
\left(
\begin{array}{ccc|ccc|ccc}
w_1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0
\\
0 & u_1 e^{i\nu\alpha} & 0 & v & 0 & 0 & 0 & 0 & 0
\\
0 & 0 & (w_1 + w_2 e^{-2i\nu\pi}) e^{2i\nu\alpha} & 0 & u_2 e^{-i\nu(\pi-\alpha)} & 0 & w_2 & 0 & 0
\\
\hline
0 & v & 0 & u_1 e^{-i\nu\alpha} & 0 & 0 & 0 & 0 & 0
\\
0 & 0 & u_2 e^{-i\nu (\pi-\alpha)} & 0 & t & 0 & u_2 e^{i\nu (\pi-\alpha)} & 0 & 0
\\
0 & 0 & 0 & 0 & 0 & u_1 e^{i\nu\alpha} & 0 & v & 0
\\
\hline
0 & 0 & w_2 & 0 & u_2 e^{i\nu (\pi-\alpha)} & 0 & (w_1 + w_2 e^{2i\nu\pi})e^{-2i\nu\alpha} & 0 & 0
\\
0 & 0 & 0 & 0 & 0 & v & 0 & u_1 e^{-i\nu\alpha} & 0
\\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & w_1
\end{array}
\right)
\end{multline}
\subsection{Nineteen-vertex model weights from intertwining relations}
In analogy with the dense TL case, we wish to identify the plaquette operators as intertwiners of the subalgebra $\langle E_1, \bar E_1, T_1 \rangle$. We find that
$$
Y \, \pi_{z,w}(\Delta(X)) = \pi_{w,z}( \Delta(X)) \, Y \,,
\qquad
X = E_1,\bar E_1, T_1 \,,
\quad Y= T,U'_1,U''_1,U'_2,U''_2,V',V'',W_1,W_2 \,,
$$
provided we set $(z/w)^{2\ell} = e^{-2i\nu\alpha}$. Furthermore, imposing that
$$
\check{R} \ \pi_{z,w} (\Delta(X)) = \pi_{w,z} (\Delta(X)) \ \check{R}
\qquad X= E_0,\bar E_0, T_0 \,,
$$
determines the Boltzmann weights up to normalization:
\begin{align*}
t &= (x - x^{-1})(q^3 x + q^{-3} x^{-1}) + (q^2-q^{-2})(q^3+q^{-3}) \\
u_1 &= (q^{2}-q^{-2}) (q^3 x + q^{-3} x^{-1}) \\
u_2 &= i (q^{2}-q^{-2}) (x - x^{-1}) \\
v &= (x - x^{-1})(q^3 x + q^{-3} x^{-1}) \\
w_1 &= (q^2 x - q^{-2} x^{-1})(q^3 x + q^{-3} x^{-1}) \\
w_2 &= (x - x^{-1})(q x + q^{-1} x^{-1}) \,,
\end{align*}
where we have again set $x:=z/w$. Inserting these values for the weights into~\eqref{eq:R-19V-loop},
we recover the 19-vertex $R$-matrix~\eqref{eq:R-19V}.
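As a spot check of this identification, the $\ket{00}\to\ket{00}$ element of \eqref{eq:R-19V-loop} is simply $t$ (this entry is unaffected by the permutation relating $\check{R}$ and $R$), and indeed
\begin{equation*}
t=(x-x^{-1})(q^3x+q^{-3}x^{-1})+(q^2-q^{-2})(q^3+q^{-3})=\omega_1\big|_{z\to x}\,.
\end{equation*}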
\section{Discrete holomorphicity in the bulk}
\label{sec:DH-bulk}
The goal of the present section is to connect
\eqref{eq:conserv-j}
to discrete holomorphicity. To do so, we must finally specify our boundary conditions. In all cases to be considered, we shall choose ``reflecting'' boundary conditions, in which adjacent edges on the boundary are paired by a common loop (or are empty, which is allowed in the dilute TL model). The only exception to this rule will be two (or one, in the dilute case) boundary edges from which an open, unpaired loop propagates.
The reason for making this choice is that these are simple boundary conditions for which the boundary trivially commutes with the tail operators $T_0,T_1$, allowing us to apply \eqref{eq:conserv-j}. Indeed, our results extend to any choice of boundaries for which the tail operators satisfy this property. In particular, we wish to emphasise that our results \emph{do not require integrable boundary conditions.}
The discussion of integrable boundaries, and the associated boundary discrete holomorphicity, is deferred to \S \ref{sec:vertex-bound}--\ref{sec:DH-bound}.
\subsection{Application to the dense loop model}\label{ssec:loop-dense}
\subsubsection{Loop observables associated to $E_0$ and $\bar E_0$}
Let us now consider the insertion on the lattice of the current $e_0$
associated to the operator $E_0$.
We want to translate this insertion into the language of loops.
We consider a model in which all loops are closed except one open path $\gamma$ that
connects two fixed boundary defects. In the vertex model,
this is simply done by setting free boundary conditions for the two boundary defects. An example of such a configuration is shown below.
\begin{center}
\begin{tikzpicture}[scale=0.75,distort]
\foreach\x in {0,...,8}
\foreach\y in {0,...,5}
{\pgfmathrandominteger{\rand}{0}{1}
\ifnum\rand=0\plaqa(\x,\y)\else\plaqb(\x,\y)\fi
}
\foreach\y in {0,2,4}
{
\draw[edge] (-0.5,\y) \start\aw(-0.5,0)\go\an(0,0.5)\go\ae(0.5,0);
\draw[edge] (8.5,\y) \start\ae(0.5,0)\go\an(0,0.5)\go\aw(-0.5,0);
}
\foreach\x in {0,2,5,7}
{
\draw[edge] (\x,5.5) \start\an(0,0.5)\go\ae(0.5,0)\go\as(0,-0.5);
\draw[edge] (\x,-0.5) \start\as(0,-0.5)\go\ae(0.5,0)\go\an(0,0.5);
}
\draw[edge] (4,-0.5) -- ++(\as:0.4);
\draw[edge] (4,5.5) -- ++(\an:0.4);
\end{tikzpicture}
\end{center}
The important observation is that if the marked edge on which $E_0$ sits
belongs to a closed loop, then the contribution is necessarily zero. This is because
$E_0$ changes the direction of an arrow on a dressed loop configuration, whereas
the direction of arrows is continuous around a closed loop.
Graphically,
\begin{equation*}
\begin{tikzpicture} [baseline=-3pt,scale=0.6]
\draw[edge] (0,0) arc (0:360:0.8cm);
\node at (0.6,0) {$E_0$};
\draw[wavy] (-2.4,0) -- (0,0) node[oper] {};
\end{tikzpicture}
\ =0,
\qquad \text{and} \qquad
\begin{tikzpicture} [baseline=-3pt,scale=0.6]
\draw[edge] (0,0) arc (0:360:0.8cm);
\node at (-1,0) {$E_0$};
\draw[wavy] (-4,0) -- (-1.6,0) node[oper] {};
\end{tikzpicture}
= 0 \,.
\end{equation*}
To be precise, we have included the tail in these equations, but in fact the tail has an irrelevant contribution, since it is diagonal in the evaluation representation we are using. In subsequent pictures, the node marking will always correspond to insertion of
$E_0$, and the wavy line to $T_0$.
We conclude that in order to have a non-zero contribution,
the marked edge must lie on the open path $\gamma$, and that the latter, whose
orientation we had left unspecified so far, must be incoming at both
boundaries:
\begin{center}
\slow{
\begin{tikzpicture} [scale=0.75,distort]
\pgfmathsetseed{21}
\foreach\x in {0,...,8}
\foreach\y in {0,...,5}
{\pgfmathrandominteger{\rand}{0}{1}
\ifnum\rand=0\plaqb(\x,\y)\else\plaqa(\x,\y)\fi
}
\foreach\y in {0,2,4}
{
\draw[edge] (-0.5,\y) \start\aw(-0.5,0)\go\an(0,0.5)\go\ae(0.5,0);
\draw[edge] (8.5,\y) \start\ae(0.5,0)\go\an(0,0.5)\go\aw(-0.5,0);
}
\foreach\x in {0,2,5,7}
{
\draw[edge] (\x,5.5) \start\an(0,0.5)\go\ae(0.5,0)\go\as(0,-0.5);
\draw[edge] (\x,-0.5) \start\as(0,-0.5)\go\ae(0.5,0)\go\an(0,0.5);
}
\draw[red,line width=1.3pt,
decoration={markings, mark = at position 0.035 with {\arrow{>}}},
postaction={decorate}]
($(4,-0.5)-(\an:0.4)$) -- (4,-0.5) \start\an(0,0.5) \go\ae(0.5,0)\go\an(0,0.5)\go\ae(0.5,0)\go\as(0,-0.5)\go\ae(0.5,0)\go\as(0,-0.5)\go\ae(0.5,0)\go\an(0,0.5) \go\ae(0.5,0)\go\an(0,0.5)\go\aw(-0.5,0) \go\an(0,0.5)\go\aw(-0.5,0)\go\as(0,-0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\as(0,-0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\as(0,-0.5)\go\ae(0.5,0)\go\as(0,-0.5)\go\ae(0.5,0) coordinate(temp);
\draw[red,line width=1.3pt,
decoration={markings, mark = at position 0.88 with {\arrow{>}}},
postaction={decorate}]
(temp) \start\ae(0.5,0)
\go\an(0,0.5)\go\ae(0.5,0)\go\as(0,-0.5)\go\ae(0.5,0)\go\as(0,-0.5)\go\aw(-0.5,0)\go\as(0,-0.5)\go\ae(0.5,0)\go\an(0,0.5);
\draw[green!75!black,line width=1.3pt,
decoration={markings, mark = at position 0.06 with {\arrow{>}}},
postaction={decorate}]
($(4,5.5)-(\as:0.4)$) -- (4,5.5) \start\as(0,-0.5) \go\aw(-0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\as(0,-0.5) \go\aw(-0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\as(0,-0.5) \go\aw(-0.5,0) \go\as(0,-0.5)\go\ae(0.5,0) \go\as(0,-0.5)\go\ae(0.5,0)\go\as(0,-0.5)\go\ae(0.5,0) coordinate(temp);
\draw[green!75!black,line width=1.3pt,
decoration={markings,
mark = at position 0.09 with {\arrow{>}},
mark = at position 0.98 with {\arrow{>}}},
postaction={decorate}] (temp) \start\ae(0.5,0)
\go\as(0,-0.5)\go\aw(-0.5,0)\go\as(0,-0.5)\go\aw(-0.5,0)\go\as(0,-0.5) \go\ae(0.5,0)\go\an(0,0.5) \go\ae(0.5,0)\go\an(0,0.5)\go\ae(0.5,0)\go\as(0,-0.5)\go\aw(-0.5,0)\go\as(0,-0.5) \go\ae(0.5,0)\go\an(0,0.5) \go\ae(0.5,0)\go\an(0,0.5)\go\ae(0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\as(0,-0.5);
\draw[wavy] (-1,1.5) -- (4,1.5) node[oper] {};
\end{tikzpicture}
}
\end{center}
We first discuss $e_0^{(\tsup)}$, {\it i.e.}, insertion on a horizontal
edge of a plaquette, as on the picture above. We have $\pi_w(E_0)=w\sigma^-$,
and the Boltzmann weight $W(C)$ is the same as in~\eqref{eq:W-TL}.
The open path $\gamma$ however has an additional factor
$e^{2i\nu\theta(C)}$ where $\theta(C)$ is the angle spanned by
the portion of $\gamma$ between a boundary entry point and the insertion of $E_0$
(the red or green directed line) -- the two angles are equal. In fact it is easy to see that
$\theta(C)=\pi k(C)$ where $k(C)$ is an integer whose parity is fixed by the
relative locations of marked edge and boundary entry points of the arc
(on the example $k(C)=2$).
Finally, there is a contribution from the tail of $q^{-\sigma^z}$ (wavy line).
It is easy to check that if $k$ is even, red and green lines each cross
(algebraically) the wavy line $k/2$ times,
whereas if $k$ is odd the red line crosses it $\lfloor k/2\rfloor$ times
and the green line $\lceil k/2 \rceil$ times.
In both cases we find that this produces a factor $q^{k(C)}$.
Using $q=e^{i\pi (2\nu-1)}$
and putting everything together we find
\begin{equation*}
\aver{e_0^{(\tsup)}(x, t)}
= \frac{w}{Z} \sum_{C | (x, t)\in \gamma} W(C) \ e^{i(4\nu-1)\theta(C)} \,,
\end{equation*}
where we use the notation $\aver{\mathcal{O}}$ for the expectation value of an operator $\mathcal{O}$, and $Z$ is the partition function, $Z=\sum_{C} W(C)$.
Now let us apply the same reasoning to $e_0^{(\xsup)}$, for which a typical configuration is shown below:
\begin{center}
\slow{
\begin{tikzpicture} [scale=0.75,distort]
\pgfmathsetseed{2}
\foreach\x in {0,...,8}
\foreach\y in {0,...,5}
{\pgfmathrandominteger{\rand}{0}{1}
\ifnum\rand=0\plaqb(\x,\y)\else\plaqa(\x,\y)\fi
}
\foreach\y in {0,2,4}
{
\draw[edge] (-0.5,\y) \start\aw(-0.5,0)\go\an(0,0.5)\go\ae(0.5,0);
\draw[edge] (8.5,\y) \start\ae(0.5,0)\go\an(0,0.5)\go\aw(-0.5,0);
}
\foreach\x in {0,2,5,7}
{
\draw[edge] (\x,5.5) \start\an(0,0.5)\go\ae(0.5,0)\go\as(0,-0.5);
\draw[edge] (\x,-0.5) \start\as(0,-0.5)\go\ae(0.5,0)\go\an(0,0.5);
}
\draw[red,line width=1.3pt,
decoration={markings, mark = at position 0.03 with {\arrow{>}},
mark = at position 0.5 with {\arrow{>}},
mark = at position 0.98 with {\arrow{>}}},
postaction={decorate}] ($(4,-0.5)-(\an:0.4)$) -- (4,-0.5) \start\an(0,0.5) \go\aw(-0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\as(0,-0.5)\go\ae(0.5,0)\go\as(0,-0.5)
\go\aw(-0.5,0)\go\an(0,0.5) \go\aw(-0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\ae(0.5,0)\go\an(0,0.5)\go\ae(0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\as(0,-0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\ae(0.5,0)\go\an(0,0.5)\go\ae(0.5,0)\go\an(0,0.5)
\go\ae(0.5,0)\go\as(0,-0.5) \go\aw(-0.5,0)\go\as(0,-0.5)\go\ae(0.5,0)\go\as(0,-0.5)\go\aw(-0.5,0);
\draw[green!75!black,line width=1.3pt,
decoration={markings, mark = at position 0.07 with {\arrow{>}},
mark = at position 0.96 with {\arrow{>}}},
postaction={decorate}] ($(4,5.5)-(\as:0.4)$) -- (4,5.5) \start\as(0,-0.5)\go\aw(-0.5,0)\go\as(0,-0.5)\go\ae(0.5,0)\go\as(0,-0.5)\go\aw(-0.5,0)\go\as(0,-0.5)\go\ae(0.5,0)\go\as(0,-0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\ae(0.5,0);
\draw[wavy] (-1,3.5) -- (2.5,3.5) -- (2.5,3) node[oper] {};
\end{tikzpicture}
}
\end{center}
First we have a factor $z$. Then we have a factor from
the angle, $e^{2i\nu\theta}$, where this time $\theta=-\alpha+k\pi$ with
$k$ of fixed parity (on the example, $k=-1$). Finally,
the tail crosses algebraically the red line
$\lfloor k/2\rfloor$ times, and the green line $\lceil k/2\rceil$ times.
In total we get $e^{2i\nu\theta}q^{(\theta+\alpha)/\pi}=e^{i(4\nu-1)\theta}e^{i(2\nu-1)\alpha}$,\footnote{If we instead defined $q = e^{i\pi (2\nu+1)}$ we would obtain the total factor $e^{i(4\nu-1)\theta}e^{i(2\nu+1)\alpha}$, leading to an observable which is discretely anti-holomorphic. There is no preferred definition for $q$, since exactly half of the observables are anti-holomorphic regardless of which choice we make.} or
\begin{equation*}
\aver{e_0^{(\xsup)}(x,t)} = \frac{z \ e^{i(2\nu-1)\alpha}}{Z}
\sum_{C| (x,t) \in \gamma} W(C) \ e^{i(4\nu-1)\theta(C)} \,.
\end{equation*}
Note that $z \ e^{2i\nu\alpha}=w$.
Therefore, if we define a function on the edges of the lattice
\begin{equation*}
\phi_0(x,t) := w^{-1}
\begin{cases}
e_0^{(\tsup)}(x,t), & (x,t)\in\mathbb{Z}\times(\mathbb{Z}+\frac{1}{2}) \\
e^{i\alpha} e_0^{(\xsup)}(x,t), & (x,t)\in(\mathbb{Z}+\frac{1}{2})\times\mathbb{Z} \,,
\end{cases}
\end{equation*}
then we have
\begin{equation*}
\aver{\phi_0(x,t)} = \frac{1}{Z} \sum_{C| (x,t) \in \gamma} W(C) \ e^{i(4\nu-1)\theta(C)} \,,
\end{equation*}
where $\theta(C)$ is the angle spanned by the open path $\gamma$ from one boundary
to $(x,t)$.
Furthermore, the current conservation equation \eqref{eq:conserv-j} for $j_a = e_0$ is
\begin{equation}
\label{eq:conserv-e}
e_0^{(\xsup)}(x-1/2,t)
- e_0^{(\xsup)}(x+1/2,t)
+ e_0^{(\tsup)}(x,t-1/2)
- e_0^{(\tsup)}(x,t+1/2) = 0 \,.
\end{equation}
Rewriting \eqref{eq:conserv-e} in terms of $\phi_0(x,t)$ we find
\begin{equation} \label{eq:phi0_dh}
\phi_0(x,t-1/2)
+e^{i(\pi-\alpha)}\phi_0(x+1/2,t)
-e^{i(\pi-\alpha)}\phi_0(x-1/2,t)
-\phi_0(x,t+1/2) = 0 \,,
\end{equation}
which is precisely the discrete holomorphicity condition
around a plaquette of the type:
\begin{equation*}
\begin{tikzpicture} [distort,baseline=1cm]
\draw[dotted] (0,0) rectangle (2,2);
\draw (0.3,0) arc (0:90:0.3) node[right=1.5mm] {$\pi-\alpha$};
\draw (0.3,2) arc (0:-90:0.3) node[right=1.5mm] {$\alpha$};
\node[circle,fill,inner sep=1.5pt,label={below:$\ss(x,t-1/2)$}] at (1,0) {};
\node[circle,fill,inner sep=1.5pt,label={right:$\ss(x+1/2,t)$}] at (2,1) {};
\node[circle,fill,inner sep=1.5pt,label={above:$\ss(x,t+1/2)$}] at (1,2) {};
\node[circle,fill,inner sep=1.5pt,label={left:$\ss(x-1/2,t)$}] at (0,1) {};
\end{tikzpicture}
\qquad\qquad (x,t)\in\mathbb{Z}^2 \,.
\end{equation*}
This equality is valid at the operator level, {\it i.e.}, when inserted in an arbitrary correlation function
$\aver{\cdots}$.
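Equivalently, embed the dual plaquette in the complex plane with unit side vectors $\delta_1=1$ for the bottom and top sides and $\delta_2=e^{i(\pi-\alpha)}$ for the right and left sides (a concrete choice of embedding, for which the interior angle at the bottom-left corner is $\pi-\alpha$, as drawn). Then \eqref{eq:phi0_dh} is the vanishing of the discrete contour integral of $\phi_0$ around the plaquette,
\begin{equation*}
\oint_{\diamond}\phi_0\,dz
:=\phi_0(x,t-\tfrac{1}{2})\,\delta_1
+\phi_0(x+\tfrac{1}{2},t)\,\delta_2
-\phi_0(x,t+\tfrac{1}{2})\,\delta_1
-\phi_0(x-\tfrac{1}{2},t)\,\delta_2
=0\,,
\end{equation*}
each value of $\phi_0$ being attached to the midpoint of the corresponding side.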
Note that $\phi_0$ is exactly the lattice holomorphic observable
identified in~\cite{RivaC06,IkhlefC09}: we have shown here that this
observable can be obtained by the construction of conserved currents
associated to the $\Uq{A_1^{(1)}}$ symmetry of~\cite{BernardFelder91}.
Everything can be repeated with $\bar E_0$ instead of $E_0$. This leads to
the conjugate observable
\begin{equation*}
\aver{\bar\phi_0(x,t)} = \frac{1}{Z} \sum_{C| (x,t) \in \gamma} W(C) \ e^{-i(4\nu-1)\theta(C)} \,,
\end{equation*}
which therefore satisfies an
antiholomorphicity condition.
\subsubsection{Loop observables associated to $E_1$ and $\bar E_1$}
There is a simpler observable, obtained by considering $E_1$.
We use the same type of boundary conditions as above; the whole discussion
goes through, except for the following modifications.
Compared to $E_0$, the arrows on the open path $\gamma$
are inverted, but the tail is inverted as well (it is made
of $q^{+\sigma^z}$). The result
is that the tail and the angle factor almost compensate; for $e_1^{(\tsup)}$
we find, using exactly the same reasoning, that
\begin{equation*}
\aver{e_1^{(\tsup)}(x,t)} = \frac{w}{Z} \sum_{C| (x,t) \in \gamma} W(C) \ e^{-i\theta(C)} \,.
\end{equation*}
Note that $e^{-i\theta}=(-1)^k$ is independent of the configuration
and simply measures the flux of $\gamma$ through the marked edge
({\it i.e.}, the only configurations that contribute to $\langle e_1^{(\tsup)} \rangle$
are those for which the flux is nonzero, that is,
where the open path goes through the marked edge).
Similarly, for $e_1^{(\xsup)}$ one finds $e^{-2i\nu\theta} q^{(\theta+\alpha)/\pi}=
e^{-i\theta}e^{i(2\nu-1)\alpha}
$, so
\begin{equation*}
\aver{e_1^{(\xsup)}(x,t)} = \frac{z \ e^{i(2\nu-1)\alpha}}{Z}
\sum_{C| (x,t) \in \gamma} W(C) \ e^{-i\theta(C)} \,.
\end{equation*}
Again the same miracle happens in that $z e^{2i\nu\alpha}=w$.
Note that the conservation law for $e_1$ is the obvious conservation of the flux
of $\gamma$ through a plaquette.
Finally, we define
\begin{equation*}
\phi_1(x,t) := w^{-1}
\begin{cases}
e_1^{(\tsup)}(x,t), & (x,t)\in\mathbb{Z}\times(\mathbb{Z}+\frac{1}{2}) \\
e^{i\alpha} e_1^{(\xsup)}(x,t), & (x,t)\in(\mathbb{Z}+\frac{1}{2})\times\mathbb{Z} \,,
\end{cases}
\end{equation*}
and we have
\begin{equation*}
\aver{\phi_1(x,t)} = \frac{1}{Z} \sum_{C| (x,t) \in \gamma} W(C) \ e^{-i\theta(C)} \,,
\end{equation*}
with the discrete holomorphicity equation
\begin{equation} \label{eq:phi1_dh}
\phi_1(x,t-1/2)
+e^{i(\pi-\alpha)}\phi_1(x+1/2,t)
-e^{i(\pi-\alpha)}\phi_1(x-1/2,t)
-\phi_1(x,t+1/2) = 0\,.
\end{equation}
Note that since $E_1$ commutes with each loop plaquette separately,
{\it cf.\ } \eqref{eq:tl-AB}, the equation above is actually valid for each individual
configuration of the loop model.
Similarly, insertion of $\bar E_1$ leads to the antiholomorphic observable
$$
\aver{\bar\phi_1(x,t)} = \frac{1}{Z} \sum_{C| (x,t) \in \gamma} W(C) \ e^{+i\theta(C)} \,.
$$
In all these cases, we observe
that the different Borel subalgebras ($E$ operators versus $\bar E$ operators)
correspond to different chiralities from the point of view of Conformal
Field Theory.
\subsubsection{A remark on local observables}\label{sec:localobs}
Note that the model also possesses a local observable,
although its meaning is not transparent in the loop language.
Write $T_i=q^{H_i}$, so that
in the representation we use, $\pi_z(H_1)=-\pi_z(H_0)=\sigma^z$.
The coproduct of $H_i$ is the usual Lie algebra coproduct
$\Delta H_i = H_i\otimes 1 + 1\otimes H_i$,
so that the corresponding current
$h^{(\xsup)}=h_1^{(\xsup)}=-h_0^{(\xsup)}$
and $h^{(\tsup)}=h_1^{(\tsup)}=-h_0^{(\tsup)}$ does not carry a ``tail'', {\it i.e.}, is local.
In the vertex language,
$\aver{h^{(\xsup)}(x,t)}$ simply gives the average orientation
of the arrow sitting on the edge $(x,t)$, and similarly for $h^{(\tsup)}$.
This observable should not be confused with the flux observable associated
to $E_1$ or $\bar E_1$.
\subsection{Application to the dilute loop model}
\subsubsection{Loop observables associated to $E_0$ and $\bar E_0$}
Consider the dilute {Temperley--Lieb} loop model on the lattice $\Omega$ described in \secref{ssec:lattice},
with empty boundary conditions on all sides, except for one boundary edge
on which a line enters the lattice. The reason for this choice is that
in the representation considered in~\secref{ssec:qaa}, the operator $E_0$ turns a spin $\u$ into $0$,
or a $0$ into $\d$. Therefore, in the loop model, inserting $E_0(x,t)$ forces
the path $\gamma$ to go from the boundary defect to $(x,t)$.
For definiteness, we place
the boundary defect at the bottom, say at $(x_d,1/2)$.
Consider firstly the case where the operator sits on a horizontal edge, $e_0^{(\tsup)}$:
\begin{center}
\slow{ \begin{tikzpicture} [scale=0.75,distort]
\fill[bgplaq,shift={(-0.5,-0.5)}] (0,0) rectangle (9,6);
\draw[dotted,shift={(-0.5,-0.5)}] (0,0) grid (9,6);
\draw[edge] (1,2.5) \start\as(0,-0.5)\go\ae(0.5,0)\go\ae(0.5,0)\go\as(0,-0.5)\go\aw(-0.5,0)\go\as(0,-0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\an(0,0.5)\go\ae(0.5,0)\go\as(0,-0.5);
\draw[edge] (2,3.5) \start\an(0,0.5)\go\ae(0.5,0)\go\as(0,-0.5)\go\ae(0.5,0)\go\ae(0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\aw(-0.5,0)\go\aw(-0.5,0)\go\as(0,-0.5)\go\as(0,-0.5)\go\ae(0.5,0)\go\an(0,0.5);
\draw[edge] (7,1.5) \start\as(0,-0.5)\go\ae(0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\as(0,-0.5);
\draw[edge] (6,2.5) \start\as(0,-0.5)\go\ae(0.5,0)\go\an(0,0.5)\go\an(0,0.5)\go\ae(0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\as(0,-0.5)\go\aw(-0.5,0)\go\as(0,-0.5)\go\as(0,-0.5);
\draw[red,line width=1.3pt,
decoration={markings, mark = at position 0.5 with {\arrow{>}}},
postaction={decorate}]
(4,-0.5) \start\an(0,0.5)\go\ae(0.5,0)\go\ae(0.5,0)\go\an(0,0.5)\go\an(0,0.5)\go\aw(-0.5,0)\go\an(0,0.5);
\draw[wavy] (-1,2.5) -- (5,2.5) node[oper] {};
\end{tikzpicture}}
\end{center}
For the open path $\gamma$, we get the following factors. Denote the total winding angle of $\gamma$
by $\theta$. It is an integer multiple of $\pi$: $\theta=k\pi$ (however we do not know anything
about the parity of $k$ this time because in the dilute model the $v$-tiles are parity-changing).
Then $e_0^{(\tsup)}$ gets a factor $e^{i\nu \theta}$ from the local turns.
Next, the path $\gamma$ crosses the tail (algebraically)
$\left\lfloor k/2 \right\rfloor$ times. This produces a factor of
$q^{2 \left\lfloor k/2 \right\rfloor}$ from the tail (since the tail is comprised of $T_0$ operators).
Finally, the terms with $k$ odd get an extra factor $q$ from the matrix element of $E_0$.
The final expression for the time-component of the current $e_0^{(\tsup)}$ is
\begin{equation*}
\aver{e_0^{(\tsup)}(x,t)} = \frac{\varphi(q) w^{1-\ell}}{Z} \left(
\sum_{\substack{C|\gamma:(x_d,1/2)\to(x,t) \\ \text{$k$ even}}} W(C)\ e^{i (\frac{\nu}{2} - \frac{1}{4}) k\pi}\ e^{i\nu k\pi}
+ \sum_{\substack{C|\gamma:(x_d,1/2)\to(x,t) \\ \text{$k$ odd}}} q\ W(C)\ e^{i (\frac{\nu}{2} - \frac{1}{4}) (k-1)\pi}\ e^{i\nu k\pi}
\right) \,,
\end{equation*}
which can be combined into a single summation
\begin{equation*}
\aver{e_0^{(\tsup)}(x,t)} = \frac{\varphi(q) w^{1-\ell}}{Z}
\sum_{C|\gamma:(x_d,1/2)\to(x,t)}
W(C) \ e^{i(\frac{3\nu}{2}-\frac{1}{4}) \theta} \,.
\end{equation*}
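The two parities combine because the explicit factor of $q$ multiplying the odd-$k$ terms is itself a pure phase: assuming the parametrization $q=e^{i\pi(2\nu-1)/4}$ (consistent with the relation $q^{4}=e^{i(2\nu-1)\pi}$ used in the next subsection), one has, for $k$ odd,
\begin{equation*}
q\ e^{i (\frac{\nu}{2} - \frac{1}{4}) (k-1)\pi}\ e^{i\nu k\pi}
= e^{i (\frac{\nu}{2} - \frac{1}{4}) k\pi}\ e^{i\nu k\pi}
= e^{i(\frac{3\nu}{2}-\frac{1}{4}) k\pi} \,.
\end{equation*}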
Let us move on to the case where the operator sits on a vertical edge, $e_0^{(\xsup)}$:
\begin{center}
\slow{
\begin{tikzpicture} [scale=0.75,distort]
\fill[bgplaq,shift={(-0.5,-0.5)}] (0,0) rectangle (9,6);
\draw[dotted,shift={(-0.5,-0.5)}] (0,0) grid (9,6);
\draw[edge] (1,2.5) \start\as(0,-0.5)\go\ae(0.5,0)\go\ae(0.5,0)\go\as(0,-0.5)\go\aw(-0.5,0)\go\as(0,-0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\an(0,0.5)\go\ae(0.5,0)\go\as(0,-0.5);
\draw[edge] (2,3.5) \start\an(0,0.5)\go\ae(0.5,0)\go\as(0,-0.5)\go\ae(0.5,0)\go\ae(0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\aw(-0.5,0)\go\aw(-0.5,0)\go\as(0,-0.5)\go\as(0,-0.5)\go\ae(0.5,0)\go\an(0,0.5);
\draw[edge] (7,1.5) \start\as(0,-0.5)\go\ae(0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\as(0,-0.5);
\draw[red,line width=1.3pt,
decoration={markings,
mark = at position 0.2 with {\arrow{>}},
mark = at position 0.5 with {\arrow{>}},
mark = at position 0.95 with {\arrow{>}}},
postaction={decorate}] (4,-0.5) \start\an(0,0.5)\go\ae(0.5,0)\go\ae(0.5,0)\go\an(0,0.5)\go\an(0,0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\ae(0.5,0)\go\ae(0.5,0)\go\an(0,0.5)\go\ae(0.5,0)\go\as(0,-0.5)\go\aw(-0.5,0)\go\as(0,-0.5)\go\aw(-0.5,0);
\draw[wavy] (-1,2.5) -- (6.5,2.5) -- (6.5,2) node[oper] {};
\end{tikzpicture}
}
\end{center}
We now have $\theta = k\pi-\alpha$ with $k \in \mathbb{Z}$, and as in the previous case,
the factor from the tail is $q^{2\left\lfloor k/2 \right\rfloor}$.
Putting everything together,
we can again combine the two parities of $k$ into a single summation
\begin{equation*}
\aver{e_0^{(\xsup)}(x,t)} = \frac{\varphi(q) z^{1-\ell} e^{i(\frac{\nu}{2} - \frac{1}{4})\alpha}}{Z}
\sum_{C|\gamma:(x_d,1/2)\to(x,t)}
W(C) \ e^{i(\frac{3\nu}{2}-\frac{1}{4}) \theta} \,.
\end{equation*}
So far the value of $\ell$ (the constant introduced in the evaluation representation $\pi_z$) has been left arbitrary. In order to recover a discrete holomorphicity condition,
we need the above formal expressions for $e_0^{(\xsup)}$ and $e_0^{(\tsup)}$ to differ
by a factor of $e^{i\alpha}$, and hence we require that
\begin{equation}
\label{eq:fixingl}
w^{1-\ell} = \left[z^{1-\ell} e^{i(\nu/2-1/4)\alpha} \right] \times e^{i\alpha} \,.
\end{equation}
Recalling that $(z/w)^{2\ell} = e^{-2i\nu\alpha}$,
one can check \eqref{eq:fixingl} is satisfied by choosing
$\ell = \frac{2\nu}{3(2\nu+1)}$.
We now define
\begin{align*}
\phi_0(x,t) &:= \varphi(q)^{-1} w^{\ell-1}
\begin{cases}
e_0^{(\tsup)}(x,t), & (x,t) \in \mathbb{Z}\times(\mathbb{Z} +\frac{1}{2}) \\
e^{i\alpha} e_0^{(\xsup)}(x,t), & (x,t) \in (\mathbb{Z} +\frac{1}{2})\times\mathbb{Z},
\end{cases} \\
\aver{\phi_0(x,t)} &= \frac{1}{Z} \sum_{C|\gamma:(x_d,1/2)\to(x,t)}
W(C) \ e^{i(\frac{3\nu}{2}-\frac{1}{4}) \theta} \,.
\end{align*}
This observable satisfies the discrete holomorphicity condition:
\begin{equation}
\phi_0(x,t-1/2)
+e^{i(\pi-\alpha)}\phi_0(x+1/2,t)
-e^{i(\pi-\alpha)}\phi_0(x-1/2,t)
-\phi_0(x,t+1/2) = 0 \,.
\end{equation}
As in the dense case, choosing $\bar E_0$ instead of $E_0$ would lead to
the complex conjugate $\bar\phi_0$.
{\em Remark:} If one defines $\nu=\frac{1}{2}-\nu'$, then we get
$$
\aver{\phi_0(x,t)} = \frac{1}{Z} \sum_{C|\gamma:(x_d,1/2)\to(x,t)}
W(C) \ e^{-i(\frac{3\nu'}{2}-\frac{1}{2}) \theta} \,,
$$
and $x=z/w=e^{3i (\nu'-1)\alpha}$, which gives the same discretely holomorphic observable
as in~\cite{IkhlefC09}, but a different relationship between the ratio of spectral parameters
$x$ and the angle $\alpha$.
\subsubsection{Loop observables associated to $E_1$ and $\bar E_1$}
The operator $E_1$ takes a spin $\d$ to $\u$. Hence, as in the dense model,
we should consider two boundary defects, keeping the rest of the boundary
empty for example.
The insertion of operator $E_1$ at a given point then
selects loop configurations such that
the open path $\gamma$ that starts/ends at the boundary defects
passes through this point.
Let us now compute explicitly the resulting factors.
Firstly there is a factor of $w^{2\ell}$. For the winding of $\gamma$ from the
boundary to the point in the bulk (and from the point in the bulk to the boundary),
we obtain a total factor of $e^{-2i\nu\theta}$, where $\theta = k\pi$. Finally,
for the crossings of the tail (which is now composed of $T_1$ matrices),
we obtain a factor of $q^{4k} = e^{i(2\nu-1)\theta}$. Putting everything together, we have
$$
\aver{e^{(\tsup)}_1(x,t)} = \frac{w^{2\ell}}{Z} \sum_{C | (x,t) \in \gamma} W(C) \ e^{-i\theta} \,.
$$
Notice that, in contrast to the dense case, here $e^{-i\theta}$ is not independent
of the configuration since the parity of $k$ is not fixed.
The calculation is completely analogous for $e_1^{(\xsup)}$. Firstly there is a factor of $z^{2\ell}$.
For the winding of $\gamma$ we obtain a total factor of $e^{-2i\nu\theta}$,
where $\theta = k\pi-\alpha$. Finally, for the crossings of the tail we obtain a
factor of $q^{4k} = e^{i(2\nu-1)(\theta+\alpha)}$. Putting everything together gives
$$
\aver{e^{(\xsup)}_1(x,t)} = \frac{z^{2\ell} e^{i(2\nu-1)\alpha}}{Z}
\sum_{C | (x,t) \in \gamma} W(C) \ e^{-i\theta} \,.
$$
Using $w^{2\ell} = z^{2\ell} e^{2i\nu\alpha}$, we define a function on the edges of the lattice
\begin{align*}
\phi_1(x,t) &:= w^{-2\ell}
\begin{cases}
e_1^{(\tsup)}(x,t), & (x,t)\in\mathbb{Z}\times(\mathbb{Z}+\frac{1}{2}) \\
e^{i\alpha} e_1^{(\xsup)}(x,t), & (x,t)\in(\mathbb{Z}+\frac{1}{2})\times\mathbb{Z},
\end{cases} \\
\aver{\phi_1(x,t)} &= \frac{1}{Z} \sum_{C | (x,t) \in \gamma} W(C) \ e^{-i\theta} \,,
\end{align*}
which satisfies the discrete holomorphicity condition
\begin{equation*}
\phi_1(x,t-1/2)
+e^{i(\pi-\alpha)}\phi_1(x+1/2,t)
-e^{i(\pi-\alpha)}\phi_1(x-1/2,t)
-\phi_1(x,t+1/2) = 0 \,.
\end{equation*}
Again, using $\bar E_1$ we obtain the antiholomorphic counterpart $\bar\phi_1$.
\section{Vertex models with integrable boundaries}
\label{sec:vertex-bound}
\subsection{Coideal subalgebras and \texorpdfstring{$K$}{K}-matrices}
Following the approach of~\cite{Skl88}, a left (resp.\ right) boundary quantized affine algebra $B$ can be defined as a subalgebra of $U$ with the left (resp.\ right) coideal property $\Delta: B\rightarrow B\otimes U$ (resp.\ $\Delta: B \rightarrow U \otimes B$).
If $V_z$ is irreducible as a $B$-module, then the left (resp.\ right) boundary reflection matrix
$K_\ell(z): V_{z^{-1}}\rightarrow V_{z}$ (resp.\ $K_r(z): V_{z} \rightarrow V_{z^{-1}}$) is the solution, unique up to an overall normalization, of the linear relation
\begin{equation}
K_\ell(z) \pi_{z^{-1}}(Y)
=
\pi_{z}(Y) K_\ell(z)
\quad\quad
\Big(
\text{resp.}\ \
K_r(z) \pi_{z}(Y)
=
\pi_{z^{-1}}(Y) K_r(z)
\Big)
\label{eq:Kcomm}
\end{equation}
for all $Y \in B$.
We take the following as the graphical representation of $K_\ell(z)$ and $K_r(z)$:
\begin{equation*}
K_\ell(z)
=
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\path[left color=white,right color=lightgray] (0,-1) rectangle (-0.5,1);
\draw (0,-1) -- (0,1);
\draw[arr=0.5] (1,-1) node[right] {$z^{-1}$} -- (0,0);
\draw[arr=0.5] (0,0) --
(1,1) node[right] {$z$} ;
\end{tikzpicture}
\qquad
\text{and}
\qquad
K_r(z)
=
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\path[right color=white,left color=lightgray] (0,-1) rectangle (0.5,1);
\draw (0,-1) -- (0,1);
\draw[arr=0.5] (-1,1) node[left] {$z$}-- (0,0);
\draw[arr=0.5] (0,0) -- (-1,-1) node[left] {$z^{-1}$};
\end{tikzpicture}
\end{equation*}
where we recall that the arrows represent the flow of time as one reads
algebraic expressions from right to left.
From this, Sklyanin~\cite{Skl88} defined a ``double-row transfer matrix'' $\mathbf{T}_2$:
\[
\mathbf{T}_2(z;w_1,\dots,w_L)
=
{\rm Tr}_0
\left(
K_{{\rm r},0}(z)
R_{0L}(z/w_L) \dots R_{01}(z/w_1)
K_{{\rm l},0}(z)
R_{10}(zw_1) \dots R_{L0}(zw_L)
\right)
\]
which has the graphical representation
\begin{equation*}
\mathbf{T}_2(z;w_1,\dots,w_L)
=
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\path[left color=white,right color=lightgray] (0,-1) rectangle (-0.5,1);
\draw (0,-1) -- ++(0,2);
\path[right color=white,left color=lightgray] (10,-1) rectangle (10.5,1);
\draw (10,-1) -- ++(0,2);
\draw[arr=0.05,rounded corners] (10,0) -- (9.5,-0.5) -- (0.5,-0.5) node[below] {$z^{-1}$} -- (0,0);
\draw[arr=0.95,rounded corners] (0,0) -- (0.5,0.5) node[above] {$z$} --(9.5,0.5) -- (10,0);
\foreach\x in {1,2}
\draw[arr=0.2] (\x,-1.25) node[below] {$w_{\x}$}-- (\x,1.25);
\foreach\x in {3}
\draw[arr=0.2] (\x,-1.25) node[below] {$\cdots$}-- (\x,1.25);
\foreach\x in {4,...,8}
\draw[arr=0.2] (\x,-1.25) node[below] {$$}-- (\x,1.25);
\foreach\x in {9}
\draw[arr=0.2] (\x,-1.25) node[below] {$w_{L}$}-- (\x,1.25);
\end{tikzpicture}
\end{equation*}
By vertically stacking $M$ double-row transfer matrices $\mathbf{T}_2(z_i;w_1,\dots,w_L)$, $1\leq i \leq M$, one builds a rectangular lattice of width $L$ and height $2M$, whose left/right boundary conditions are fixed {\it a priori.} Each $\mathbf{T}_2$ is an operator acting in $V_{w_1} \otimes \cdots \otimes V_{w_L}$.
\subsection{Current conservation at the boundary}
\label{ssec:qg-bound}
We now extend the formalism of~\cite{BernardFelder91} to the case of an integrable boundary.
For simplicity, we treat only the case of the left boundary
in full detail, but analogous relations can also be written for right boundaries.
Suppose we are given a left coideal subalgebra $B_\ell \subset U$, {\it i.e.},
a subalgebra with the additional property $\Delta(B_\ell)\subset B_\ell\otimes U.$\footnote{In view of the simple coproduct relations (\ref{eq:coprJ}--\ref{eq:coprth}), one way to produce such a $B_\ell$ is
to have it generated by appropriate combinations of the elements $J_a$ and $\thab{a}{b}$. We will discuss examples of such left coideal subalgebras in \secref{ssec:coideal}.} Assume also that we have a left boundary reflection matrix $K_\ell(z)$ which satisfies \eqref{eq:Kcomm}, for all $Y \in B_\ell$. Provided that $J_a \in B_\ell$, we have $K_\ell(z) \pi_{z^{-1}}(J_a)=\pi_{z}(J_a) K_\ell(z)$, which is represented graphically by
\begin{equation}\label{eq:leftK-J}
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\path[left color=white,right color=lightgray] (0,-1) rectangle (-0.5,1);
\draw (0,-1) -- (0,1);
\draw[arr=0.25] (1,-1) node[right] {$z^{-1}$} -- (0,0);
\draw[arr=0.5] (0,0) -- (1,1) node[right] {$z$} ;
\draw[wavy] (-0.5,-0.5) node[left] {$a$} -- (0.5,-0.5) node[oper] {};
\end{tikzpicture}
\quad = \quad
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\path[left color=white,right color=lightgray] (0,-1) rectangle (-0.5,1);
\draw (0,-1) -- (0,1);
\draw[arr=0.5] (1,-1) node[right] {$z^{-1}$} -- (0,0);
\draw[arr=0.75] (0,0) -- (1,1) node[right] {$z$};
\draw[wavy] (-0.5,0.4) node[left] {$a$} -- (0.5,0.4) node[oper] {};
\end{tikzpicture}
\end{equation}
We have included tails of operators, but since with our conventions they extend to the left of the operator insertion, they never cross any lines and therefore play no role in equation \eqref{eq:leftK-J}.
This equation expresses the local conservation of the current at the left
boundary.
Similarly if $\thab{a}{b} \in B_\ell$, we have $K_\ell(z)\pi_{z^{-1}} (\thab{a}{b}) = \pi_{z}(\thab{a}{b}) K_\ell(z)$, which has the graphical representation
\begin{equation}\label{eq:leftK-th}
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\path[left color=white,right color=lightgray] (0,-1) rectangle (-0.5,1); \draw (0,-1) -- (0,1);
\draw[arr=0.25] (1,-1) node[right] {$z^{-1}$} -- (0,0);
\draw[arr=0.5] (0,0) -- (1,1) node[right] {$z$} ;
\draw[wavy=0.4] (0,-1) node[left] {$a$} -- (1,0) node[right] {$b$};
\end{tikzpicture}
\quad =\quad
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\path[left color=white,right color=lightgray] (0,-1) rectangle (-0.5,1); \draw (0,-1) -- (0,1);
\draw[arr=0.5] (1,-1) node[right] {$z^{-1}$} -- (0,0);
\draw[arr=0.75] (0,0) -- (1,1) node[right] {$z$} ;
\draw[wavy=0.4] (0,0.9) node[left] {$a$} -- (1,-0.1) node[right] {$b$};
\end{tikzpicture}
\end{equation}
Analogous relations apply if $J_a$ and $\thab{a}{b}$ are elements of a right coideal subalgebra $B_r$ with right boundary reflection matrix $K_r(z)$:
\begin{equation}\label{eq:rightK}
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\path[right color=white,left color=lightgray] (0,-1) rectangle (0.5,1);
\draw (0,-1) -- (0,1);
\draw[arr=0.25] (-1,1) node[left] {$z$} -- (0,0);
\draw[arr=0.5] (0,0) -- (-1,-1) node[left] {$z^{-1}$};
\draw[wavy] (-1.5,0.5) node[left] {$a$} -- (-0.5,0.5) node[oper] {};
\end{tikzpicture}
\quad = \quad
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\path[right color=white,left color=lightgray] (0,-1) rectangle (0.5,1);
\draw (0,-1) -- (0,1);
\draw[arr=0.5] (-1,1) node[left] {$z$} -- (0,0);
\draw[arr=0.75] (0,0) -- (-1,-1) node[left] {$z^{-1}$};
\draw[wavy] (-1.5,-0.5) node[left] {$a$} -- (-0.5,-0.5) node[oper] {};
\end{tikzpicture}
\quad\quad\hbox{and}\quad\quad
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\path[right color=white,left color=lightgray] (0,-1) rectangle (0.5,1);
\draw (0,-1) -- (0,1);
\draw[arr=0.25] (-1,1) node[left] {$z$} -- (0,0);
\draw[arr=0.5] (0,0) -- (-1,-1) node[left] {$z^{-1}$};
\draw[wavy=0.4] (-1,0) node[left] {$a$} -- (0,1) node[right] {$b$};
\end{tikzpicture}
\quad = \quad
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\path[right color=white,left color=lightgray] (0,-1) rectangle (0.5,1);
\draw (0,-1) -- (0,1);
\draw[arr=0.5] (-1,1) node[left] {$z$} -- (0,0);
\draw[arr=0.8] (0,0) -- (-1,-1) node[left] {$z^{-1}$};
\draw[wavy=0.4] (-1,-0.1) node[left] {$a$} -- (0,-1.1) node[right] {$b$};
\end{tikzpicture}
\end{equation}
The result of the preceding equations is that we now obtain current and charge conservation laws which are \emph{exact,} rather than correct up to boundary terms, as they were in \secref{ssec:qg-bulk}.
For example, current conservation \eqref{eq:conserv-j} is now automatic, since the assumption that the tail commutes with the left boundary is implicit in the condition $\thab{a}{b} \in B_\ell$.
Similarly, by using equations \eqref{pic:RcoprJ} and (\ref{eq:leftK-J})--(\ref{eq:rightK}), we find that
\begin{equation*}
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\path[left color=white,right color=lightgray] (0,-0.75) rectangle (-0.5,1.75);
\draw (0,-0.75) -- ++(0,2.5);
\path[right color=white,left color=lightgray] (9,-0.75) rectangle (9.5,1.75);
\draw (9,-0.75) -- ++(0,2.5);
\draw[arr=0.05,rounded corners] (0,0.75) -- (0.5,1.25) -- (8.5,1.25) -- (9,0.75);
\draw[arr=0.95,rounded corners] (9,0.75) -- (8.5,0.25) -- (0.5,0.25) -- (0,0.75);
\draw[contour=0.75]
(0,-0.25) -- (9,-0.25);
\foreach\x in {1,...,8}
\draw (\x,1.75) -- (\x,-0.75);
\draw [wavy] (0,-0.4) node[left] {$a$} -- (5,-0.4) node[oper] {} ;
\end{tikzpicture}
\quad=\quad
\begin{tikzpicture} [baseline=-3pt,scale=0.75]
\path[left color=white,right color=lightgray] (0,-0.75) rectangle (-0.5,1.75);
\draw (0,-0.75) -- ++(0,2.5);
\path[right color=white,left color=lightgray] (9,-0.75) rectangle (9.5,1.75);
\draw (9,-0.75) -- ++(0,2.5);
\draw[arr=0.05,rounded corners] (0,0.25) -- (0.5,0.75) -- (8.5,0.75) -- (9,0.25);
\draw[arr=0.95,rounded corners] (9,0.25) -- (8.5,-0.25) -- (0.5,-0.25) -- (0,0.25);
\draw[contour=0.75]
(0,1.25) -- (9,1.25);
\foreach\x in {1,...,8}
\draw (\x,-0.75) -- (\x,1.75);
\draw [wavy] (0,1.1) node[left] {$a$} -- (5,1.1) node[oper] {} ;
\end{tikzpicture}
\end{equation*}
Graphically, this says that any charge built from elements $J_a,\thab{a}{b}$ belonging to both $B_\ell$ and $B_r$ commutes with the double-row transfer matrix; equivalently, such a charge is conserved:
\begin{equation}
\label{eq:conserv-Q-bd}
\mathbf{J}_a(t-1/2) = \mathbf{J}_a(t+3/2) \,.
\end{equation}
\subsection{Light-cone lattice}\label{sec:lc}
Let us now consider a specialization of the parameters $w_i$ on the vertical lines in our lattice.
We assume $L$ to be even
and choose
\begin{equation*}
w_{j\ {\rm odd}} = z, \quad w_{j\ {\rm even}} = z^{-1}, \quad 1 \leq j \leq L.
\end{equation*}
Assuming that $R(1) \propto P$, which is the case for both $U=U_q(A^{(1)}_1)$ and $U=U_q(A^{(2)}_2)$, this causes every second $R$-matrix present in the lattice to degenerate into a $P$-matrix. The resulting ``light-cone'' lattice has half the number of vertices, with the remaining ones being rotated by 45 degrees. The double-row transfer matrix becomes
\begin{equation*}
\mathbf{T}_2=
\begin{tikzpicture}[baseline=-3pt,scale=-0.9]
\draw[arr=0.05] (2,0.5) -- ++(-0.5,-0.5) -- ++(0.5,-0.5);
\foreach\x in {3,5,...,9}
{
\draw[arr=0.05] (\x+1,0.5) -- ++ (-1,-1);
\draw[arr=0.05] (\x,0.5) -- ++ (1,-1);
}
\foreach\x in {2,4,...,10}
{
\draw (\x+1,1.5) -- ++ (-1,-1);
\draw (\x,1.5) -- ++ (1,-1);
}
\draw[arr=0.05] (11,0.5) -- ++ (0.5,-0.5) -- ++(-0.5,-0.5);
\path[left color=white,right color=lightgray] (11.5,-0.5) rectangle ++ (0.5,2); \draw (11.5,-0.5) -- (11.5,1.5);
\path[right color=white,left color=lightgray] (1,-0.5) rectangle ++ (0.5,2); \draw (1.5,-0.5) -- (1.5,1.5);
\end{tikzpicture}
\end{equation*}
where we have absorbed an $R$-matrix into the right boundary;
we denote the resulting boundary operator by $\tilde K_r(z)$ (we skip
the details since we shall focus on the left boundary in what follows).
All lines are now oriented upwards, so that we omit orientation arrows henceforth. Equivalently, $\mathbf{T}_2 =\mathbf{T}_e \mathbf{T}_o$, where
\begin{align*}
\mathbf{T}_{\rm e} := K_\ell(z)
\prod_{i} \check R_{2i}(z^2) \tilde K_r(z) &=
\begin{tikzpicture}[baseline=-3pt,scale=0.9]
\useasboundingbox (0,-0.5) -- (12,0.5);
\foreach\x in {2,4,...,8}
{
\draw (\x,-0.5) -- ++ (1,1);
\draw (\x,0.5) -- ++ (1,-1);
}
\draw (1,0.5) -- ++(-0.5,-0.5) -- ++(0.5,-0.5);
\draw (10,-0.5) -- ++(0.5,0.5) -- ++(-0.5,0.5);
\path[right color=white,left color=lightgray] (10.5,-0.5) rectangle ++ (0.5,1); \draw (10.5,-0.5) -- ++(0,1);
\path[left color=white,right color=lightgray] (0,-0.5) rectangle ++ (0.5,1); \draw (0.5,-0.5) -- ++(0,1);
\end{tikzpicture}
\\
\mathbf{T}_{\rm o} := \prod_{i} \check R_{2i+1}(z^2)&=
\begin{tikzpicture}[baseline=-3pt,scale=0.9]
\useasboundingbox (0,-0.5) -- (12,0.5);
\foreach\x in {1,3,...,9}
{
\draw (\x,-0.5) -- ++ (1,1);
\draw (\x,0.5) -- ++ (1,-1);
}
\end{tikzpicture}
\end{align*}
Here $\check R_i=P_{i,i+1}R_{i,i+1}$ is the $R$-matrix acting
on sites $i,i+1$ with an additional permutation $P$ of factors of the tensor
product, and $K_\ell(z)$ (resp.\ $\tilde K_r(z)$) acts on the first (resp.\ last) factor of the tensor product.
Since the light-cone lattice is obtained as a special case of the double-row, two-boundary lattice, all previous results continue to apply; they only need to be transcribed into the new orientation. The analogue of relation \eqref{pic:RcoprJ} is the commutation of $\check R$ with the action of $J_a$, that is:
\begin{equation}
\label{eq:local_conserv_lc}
\begin{tikzpicture}[baseline=-3pt,scale=0.9]
\draw (-1,-1) -- ++ (2,2);
\draw (1,-1) -- ++ (-2,2);
\draw[wavy=0.45] (-2,-0.5) node[below] {$a$} -- (-0.5,-0.5) node[oper] {};
\end{tikzpicture}
+
\begin{tikzpicture}[baseline=-3pt,scale=0.9]
\draw (-1,-1) -- ++ (2,2);
\draw (1,-1) -- ++ (-2,2);
\draw[wavy] (-2,-0.5) node[below] {$a$} -- (0.5,-0.5) node[oper] {};
\end{tikzpicture}
=
\begin{tikzpicture}[baseline=-3pt,scale=0.9]
\draw (-1,-1) -- ++ (2,2);
\draw (1,-1) -- ++ (-2,2);
\draw[wavy=0.45] (-2,0.5) node[below] {$a$} -- (-0.5,0.5) node[oper] {};
\end{tikzpicture}
+
\begin{tikzpicture}[baseline=-3pt,scale=0.9]
\draw (-1,-1) -- ++ (2,2);
\draw (1,-1) -- ++ (-2,2);
\draw[wavy] (-2,0.5) node[below] {$a$} -- (0.5,0.5) node[oper] {};
\end{tikzpicture}
\end{equation}
The 45 degree rotation makes ``space'' (resp.\ ``time'')
lines go south-west to north-east (resp.\ south-east to north-west).
The corresponding currents are simply
\begin{eqnarray*}
j_a(x)&=&
\begin{tikzpicture}[baseline=-3pt,scale=0.90]
\foreach\x in {1,3,...,9}
\draw (\x,-0.25) -- ++ (0.5,0.5) ++ (0.5,0) -- ++ (0.5,-0.5);
\draw (7,-0.25) -- ++ (0.5,0.5);
\draw[wavy=0.4] (0.5,0) node[below] {$a$} -- (4.25,0) node[oper] {} node[below=1mm] {$\ss x$};
\end{tikzpicture}\\\widehat{\jmath}_a(x)&=&
\hspace*{7mm}\begin{tikzpicture}[baseline=-3pt,scale=0.90]
\foreach\x in {1,3,...,9}
\draw (\x,-0.25) -- ++ (0.5,0.5) ++ (0.5,0) -- ++ (0.5,-0.5);
\draw (7,-0.25) -- ++ (0.5,0.5);
\draw[wavy=0.4] (11,0) node[below] {$a$} --(4.25,0) node[oper] {} node[below=1mm] {$\ss x$} ;
\end{tikzpicture}\\
\end{eqnarray*}
(or with the opposite tilt of the lines, depending on the parity of $t$),
where $x\in\mathbb{Z}$ and
the former distinction between ``time'' and ``space'' components
is unnecessary.
The same conservation equation \eqref{eq:conserv-j}
holds as a consequence of
\eqref{eq:local_conserv_lc}, and the fact that $\thab{a}{b} \in B_\ell$.
Rewritten in the light-cone approach, it becomes:
\begin{equation}\label{eq:conserv-jj}
j_a(x,t) + j_a(x+1,t)
= j_a(x,t+1) + j_a(x+1,t+1)
\,,\qquad (x,t)\in\mathbb{Z}^2,\ x+t=0\pmod{2}\,.
\end{equation}
Similarly, \eqref{eq:leftK-J} can be rewritten algebraically as
\begin{equation}\label{eq:leftconserv}
j_a(1,t)=j_a(1,t+1)\,,\qquad t=0\pmod{2}\,.
\end{equation}
There is a right boundary analogue:
\begin{equation}\label{eq:rightconserv}
j_a(L,t)=j_a(L,t+1)\,,\qquad t=0\pmod{2}\,.
\end{equation}
The charge $\mathbf{J}_a$ is now the obvious sum:
$\mathbf{J}_a=\sum_{x=1}^L j_a(x)$,
or graphically,
\begin{equation}
\label{eq:chargelc}
\mathbf{J}_a
=
\begin{tikzpicture}[baseline=-3pt,scale=0.90]
\foreach\x in {1,3,...,9}
\draw (\x,-0.25) -- ++ (0.5,0.5) ++ (0.5,0) -- ++ (0.5,-0.5);
\draw (7,-0.25) -- ++ (0.5,0.5);
\draw[wavy] (0.5,0.075) node[below] {$a$} -- (5.25,0.075) node[oper] {};
\draw[contour] (0.5,-0.075) -- (11,-0.075);
\end{tikzpicture}
\end{equation}
and as a direct consequence of
(\ref{eq:conserv-jj}--\ref{eq:rightconserv}),
commutes with both $\mathbf{T}_{\rm e}$ and $\mathbf{T}_{\rm o}$, {\it i.e.},
\begin{equation}\label{eq:globalconserv}
\mathbf{J}_a(t)=\mathbf{J}_a(t+1)\,.
\end{equation}
As already pointed out, we are mainly concerned with local current conservation
rather than conservation of the charge;
in particular the local conservation at the left boundary \eqref{eq:leftconserv} only requires integrability at the left boundary.
\subsection{Adjoint action on the light-cone lattice}\label{sec:adj-lc}
The adjoint action of an element
$J_a$ of $U$ on another element $J_b$ is defined as in~\eqref{eq:adj-graph}.
On the light-cone lattice, and in the
case of an operator $J_a$ commuting with $K$-matrices
on all boundaries, the expression of
$A_a[j_b^{(\tsup)}]$ can be greatly simplified. We illustrate this by a series of pictures in which,
in preparation for the switch to loop models, we use the dual lattice representation,
{\it cf.\ } \eqref{eq:plaq} in the bulk, and at the boundary,
$
K_\ell =
\begin{tikzpicture}[baseline=-3pt,scale=0.75,rotate=\lcrot,distort]
\bplaqw(0,0)
\end{tikzpicture}
$
and similarly for the other boundaries.
Using the conservation equation \eqref{eq:globalconserv},
the contour defining $A_a[j_b^{(\tsup)}]$ can be deformed to follow the top
and bottom boundaries, {\it i.e.},
\begin{align*}
A_a[j_b(x,t)]&=
\begin{tikzpicture}[baseline=0,rotate=\lcrot,distort
\newcount\u\newcount\v
\foreach\x in {-5,...,5}
\foreach\y in {-5,...,5}
{
\pgfmathsetcount{\u}{\x+\y}
\pgfmathsetcount{\v}{\x-\y}
\pgfmathrandominteger{\rand}{0}{1}
\ifnum\u<4
\ifnum\u>-4
\ifnum\v<7
\ifnum\v>-7
\plaq(\x,\y)
\else
\fi
\else
\fi
\else
\fi
\else
\fi
\ifnum\u=4
\bplaqn(\x,\y)
\else
\ifnum\u=-4
\ifnum\y=-2
\else
\bplaqs(\x,\y)
\fi
\else
\fi
\fi
\ifnum\v=7
\bplaqe(\x,\y)
\else
\ifnum\v=-7
\bplaqw(\x,\y)
\else
\fi
\fi
}
\draw[wavy] (-3.5,4) node[left] {$b$} -- (0,0.5) node[oper] {};
\draw[contour] (-3.4,4.1) -- (4.1,-3.4) -- (3.85,-3.65)-- (-3.65,3.85) ;
\draw[wavy] (-3.35,4.2) node[left] {$a$} -- (1.35,-0.5) node[oper] {};
\end{tikzpicture}
\\
&=
\begin{tikzpicture}[baseline=0,rotate=\lcrot,distort]
\newcount\u\newcount\v
\foreach\x in {-5,...,5}
\foreach\y in {-5,...,5}
{
\pgfmathsetcount{\u}{\x+\y}
\pgfmathsetcount{\v}{\x-\y}
\pgfmathrandominteger{\rand}{0}{1}
\ifnum\u<4
\ifnum\u>-4
\ifnum\v<7
\ifnum\v>-7
\plaq(\x,\y)
\else
\fi
\else
\fi
\else
\fi
\else
\fi
\ifnum\u=4
\bplaqn(\x,\y)
\else
\ifnum\u=-4
\ifnum\y=-2
\else
\bplaqs(\x,\y)
\fi
\else
\fi
\fi
\ifnum\v=7
\bplaqe(\x,\y)
\else
\ifnum\v=-7
\bplaqw(\x,\y)
\else
\fi
\fi
}
\draw[wavy] (-3.5,4) node[left] {$b$} -- (0,0.5) node[oper] {};
\draw[contour] (-2,5.5) -- (5.5,-2) -- (2,-5.5) -- (-5.5,2) ;
\draw[wavy] (-1.95,5.6) node[above] {$a$} -- (2.1,1.55) node[oper] {};
\end{tikzpicture}
\end{align*}
We now suppose that top and bottom boundary conditions are also integrable, such that $J_a$
commutes with the corresponding $K$-matrices, except
at some boundary defects which sit say on the bottom
boundary. Then the top part of the contour can be moved through the
top row of $K$-matrices, using again the boundary conservation relations. Indeed, the sum over the two points on each triangle along the top boundary gives a zero contribution.\footnote{To see this, we must pay attention to a crucial sign issue -- our currents are associated to edges
which are oriented upwards, so current conservation at the top/bottom
boundaries must be accompanied by a sign in one of the terms. It is precisely this sign which causes the pairwise cancellation for each triangle.} On the bottom part, the contour is reduced
to an arch enclosing the defects:
\begin{align*}
A_a[j_b(x,t)]&=
\begin{tikzpicture}[baseline=0,rotate=\lcrot,distort]
\newcount\u\newcount\v
\foreach\x in {-5,...,5}
\foreach\y in {-5,...,5}
{
\pgfmathsetcount{\u}{\x+\y}
\pgfmathsetcount{\v}{\x-\y}
\pgfmathrandominteger{\rand}{0}{1}
\ifnum\u<4
\ifnum\u>-4
\ifnum\v<7
\ifnum\v>-7
\plaq(\x,\y)
\else
\fi
\else
\fi
\else
\fi
\else
\fi
\ifnum\u=4
\bplaqn(\x,\y)
\else
\ifnum\u=-4
\ifnum\y=-2
\else
\bplaqs(\x,\y)
\fi
\else
\fi
\fi
\ifnum\v=7
\bplaqe(\x,\y)
\else
\ifnum\v=-7
\bplaqw(\x,\y)
\else
\fi
\fi
}
\draw[wavy] (-3.5,4) node[left] {$b$} -- (0,0.5) node[oper] {};
\draw[contour] (-2,-2.5) -- (-1.5,-2) -- (-2,-1.5)-- (-2.5,-2);
\draw[wavy] (2,-5.5) node[right] {$a$} -- (-1.5,-2) node[oper] {};
\end{tikzpicture}
\end{align*}
Translating this final picture into algebraic form, we obtain
\begin{equation} \label{eq:adjoint-defect}
\aver{ A_a[j_b(x,t)] }=
\aver{\left[\widehat{\jmath}_a(x_d,0) + \widehat{\jmath}_a(x_d+1,0) \right] j_b(x,t)} \,,
\end{equation}
where we assume that there are two defects at adjacent locations $x_d$ and $x_d+1$, and we recall that $\widehat{\jmath}_a$ denotes a non-local current with a ``right'' tail.
\subsection{Coideal subalgebras for quantized affine algebras}
\label{ssec:coideal}
Here we give examples of (left) coideal subalgebras and boundary reflection matrices for the quantized affine algebras that interest us. Since we do not discuss analogous right boundary results, we omit the subscript ``$\ell$'' from all subsequent equations.
\subsubsection{The $ U_q(A_1^{(1)})$ boundary algebra}
\label{sssec:bound-alg-1}
The choice of coideal subalgebra $B$
we shall consider in this paper is generated by
\begin{equation*}
\{
T_0 \,, T_1 \,,
Q:= E_1 + r \bar E_0 \,,
\bar Q:=\bar E_1 + r E_0
\} \,,
\end{equation*}
where $r$ is a real parameter. The left coideal property is satisfied because
\begin{equation*}
\Delta(Q) =
Q \otimes 1 + T_1 \otimes E_1 + T_0 \otimes r \bar E_0
\qquad \text{and} \qquad
\Delta(\bar Q)=
\bar Q \otimes 1 + T_1 \otimes \bar E_1 + T_0 \otimes r E_0
\end{equation*}
are both elements of $B \otimes U$.
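Indeed, these expressions follow by linearity from the coproducts of the individual generators, which in the conventions used here read
\begin{equation*}
\Delta(E_1) = E_1 \otimes 1 + T_1 \otimes E_1 \,,
\qquad
\Delta(\bar E_0) = \bar E_0 \otimes 1 + T_0 \otimes \bar E_0
\end{equation*}
(and similarly for $\bar E_1$ and $E_0$).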
After a choice of normalization, the solution of \eqref{eq:Kcomm} is~\cite{NepMez98}
\begin{equation} \label{eq:K-6V}
K(z) =\left( \begin{array}{cc}
z+rz^{-1} & 0 \\
0 & z^{-1} + rz
\end{array} \right) \,.
\end{equation}
Note that $K(z)$ is diagonal as a consequence of the fact that $T_0$ and $T_1$ are elements of $B$.
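Indeed, taking $Y=T_1$ in \eqref{eq:Kcomm} gives $K(z)\,\pi_{z^{-1}}(T_1)=\pi_{z}(T_1)\,K(z)$; in the two-dimensional representation one has $\pi_{z^{\pm 1}}(T_1)=q^{\sigma^z}={\rm diag}(q,q^{-1})$, independently of the spectral parameter, so that for generic $q$ any matrix commuting with it must be diagonal.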
\subsubsection{The $ U_q(A_2^{(2)})$ boundary algebra}
\label{sssec:bound-alg-2}
The boundary algebra $B$ in this case is taken to be that generated by
$\{T_0,T_1,E_1,\bar E_1,Q, \bar Q\}$, where
\begin{align*}
Q := [E_1,E_0]_{q^{-4}} + r \bar E_0 \,,
\qquad
\bar Q := [\bar E_1,\bar E_0]_{q^4} - r q^2 E_0 \,,
\end{align*}
where we use the notation $[a,b]_x:=ab-xba$.
The left coideal property $\Delta(B)\subset B\otimes U$ is easy
to check. With a choice of normalization, the solution of~\eqref{eq:Kcomm} in the case when $r$ is fixed to be
$r=\pm i q^{-1}$ reads~\cite{BFKZ96,Nep02}
\begin{equation} \label{eq:K-19V}
K(z) = \left( \begin{array}{ccc}
z^{2\ell}(z^{-1}+ r z) & 0 & 0 \\
0 & z+ r z^{-1} & 0 \\
0 & 0 & z^{-2\ell}(z^{-1}+r z)
\end{array} \right) \,.
\end{equation}
For definiteness, we shall henceforth take the root $r=iq^{-1}$. As in the $U_q(A_1^{(1)})$ case, $K(z)$ is diagonal because $T_0$ and $T_1$ are elements of $B$. In contrast to the $U_q(A_1^{(1)})$ case, there is no solution of \eqref{eq:Kcomm} for general values of the parameter $r$.
\section{Integrable boundaries for loop models}
\label{sec:loop-bound}
In this section we repeat the ideas of \secref{sec:vertex-loop} to introduce boundary tiles into the dense and dilute Temperley--Lieb\ models. In complete analogy with \secref{sec:vertex-loop}, the corresponding $K$-matrices \eqref{eq:K-6V} and \eqref{eq:K-19V} are recovered as linear combinations of the boundary tiles.
We use the light-cone approach of \secref{sec:lc}, with an angle of $\alpha$ on the lattice, {\it i.e.},
\begin{align*}
\check{R} &=
\begin{tikzpicture}[baseline=-3pt,scale=0.75,rotate=\lcrot,distort]
\draw (1,0) -- (-1,0) node[below] {$z$};
\draw (0,1) -- (0,-1) node[below] {$z^{-1}$};
\draw (-0.2,0) arc (180:90:0.2cm) node[left] {$\alpha$};
\end{tikzpicture}
=\
\begin{tikzpicture}[baseline=-3pt,scale=1.25,rotate=\lcrot,distort]
\plaq(0,0)
\end{tikzpicture}
\\
K &=
\begin{tikzpicture}[baseline=-3pt,scale=0.75
\path[left color=white,right color=lightgray] (0,-0.75) rectangle (-0.5,0.75);
\draw (0,-0.75) -- (0,0.75);
\begin{scope}[rotate=\lcrot,distort]
\draw (0,-1) node[right] {$z^{-1}$} -- (0,0);
\draw (0,0) --
(1,0) node[right] {$z$} ;
\draw (0.2,0) arc (0:-90:0.2cm) node[right] {$\alpha$};
\end{scope}
\end{tikzpicture}
=\
\begin{tikzpicture}[baseline=-3pt,scale=1.25,rotate=\lcrot,distort]
\bplaqw(0,0)
\end{tikzpicture}
\end{align*}
and similarly for other boundaries.
\subsection{The boundary dense Temperley--Lieb{} model and the
\texorpdfstring{$U_q(A_1^{(1)})$}{Uq(A1(1))}
vertex model}
Let us define a boundary loop model by introducing an additional boundary plaquette of weight $1$, as follows:
$$
\begin{tikzpicture} [baseline=-3pt,scale=1.25,rotate=\lcrot,distort]
\bplaqwb(0,0)
\end{tikzpicture}
\,,
$$
and by assigning weight $\tau^{(n)}=-(e^{i\nu(2\pi - n\xi)}+ e^{- i\nu (2\pi - n\xi)})$ to any loop that passes
$n$ times through the boundary. Thus we can view the boundary as introducing a deficit angle of $\xi$.
Again, we can turn this boundary weight into that of a vertex model by viewing it as an operator $V\to V$ in
the S--N direction, resulting in the boundary weight
\begin{equation}
\label{eq:newK-6V}
K= \begin{pmatrix} e^{- i\nu(\alpha-\xi)} & 0\\ 0& e^{i\nu(\alpha-\xi)}
\end{pmatrix} \,.
\end{equation}
Since the spectral parameter $z$ is related to the angle $\alpha$ by $z=e^{-i\nu\alpha}$, this boundary matrix coincides with $K(z)$ of~\eqref{eq:K-6V}\footnote{In fact the two $K$-matrices coincide after renormalizing \eqref{eq:newK-6V} by
$((1+rz^{-2})(1+rz^2))^{1/2}$. Since this only produces a global factor in the observables we will consider, we omit this normalization for simplicity.} if we take
\[ e^{2i\nu\xi}=(1+rz^{-2})/(1+rz^2).\]
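Note that, since $z=e^{-i\nu\alpha}$ is a pure phase, the right-hand side is a ratio of two complex conjugate numbers whenever $r$ is real, hence itself a pure phase, so that the deficit angle $\xi$ is indeed real:
\begin{equation*}
\frac{1+rz^{-2}}{1+rz^{2}}
= \frac{1+r e^{2i\nu\alpha}}{1+r e^{-2i\nu\alpha}}
= e^{2i\arg(1+r e^{2i\nu\alpha})} \,,
\qquad
\nu\xi = \arg\left(1+r e^{2i\nu\alpha}\right) \bmod \pi \,.
\end{equation*}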
Clearly, the case $r=0$ corresponds to a vanishing deficit angle $\xi=0$, in which case the boundary TL plaquette reduces to the
single plaquette with weight one, which we call ``free boundary conditions'' and denote by
\[
\begin{tikzpicture} [baseline=-3pt,scale=1.25,rotate=\lcrot,distort]
\bplaqwa(0,0)
\end{tikzpicture}
\,.
\]
\subsection{The boundary dilute Temperley--Lieb{} model and the
\texorpdfstring{$U_q(A_2^{(2)})$}{Uq(A2(2))}
vertex model}
We introduce two additional boundary plaquette configurations with associated weights $\rho$ and $\kappa$, as follows:
\[
\begin{tikzpicture} [baseline=-3pt,scale=1.25,rotate=\lcrot,distort]
\bplaqwz(0,0)
\node at (-0.75,-1) {$\rho$};
\end{tikzpicture}\qquad
\begin{tikzpicture} [baseline=-3pt,scale=1.25,rotate=\lcrot,distort]
\bplaqwa(0,0)
\node at (-0.75,-1) {$\kappa$};
\end{tikzpicture}
\]
Interpreting the plaquettes as operators as above yields a boundary reflection matrix
\begin{equation}
K=\begin{pmatrix} \kappa e^{-i\nu \alpha} & 0 &0\\ 0 &\rho & 0\\ 0& 0& \kappa e^{i\nu \alpha} \end{pmatrix},
\end{equation}
which is equal to the matrix $K(z)$ of equation~\eqref{eq:K-19V} if $\rho=z+iq^{-1}z^{-1}$ and $\kappa=z^{-1}+iq^{-1}z$, using also the fact that $z^{2\ell} = e^{-i\nu\alpha}$.
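Explicitly, with $r=iq^{-1}$ the middle entries agree directly, $\rho=z+rz^{-1}$, while for the corner entries the relation $z^{2\ell}=e^{-i\nu\alpha}$ gives
\begin{equation*}
\kappa\, e^{\mp i\nu\alpha} = z^{\pm 2\ell}\left(z^{-1}+rz\right) \,,
\end{equation*}
which reproduces the first and last diagonal entries of \eqref{eq:K-19V}.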
\section{Non-local currents and boundary discrete holomorphicity}
\label{sec:DH-bound}
\subsection{Application to the dense loop model}
\label{ssec:bound-dense}
For convenience, in this section we shall use exclusively the light-cone orientation of the lattice. We consider loop configurations on the lattice which contain a single open path $\gamma$. For simplicity, we assume that the ends of $\gamma$ are situated next to each other, as follows:
\begin{center}
\slow{
\begin{tikzpicture} [scale=0.75,rotate=\lcrot,distort]
\newcount\u\newcount\v
\pgfmathsetseed{45}
\foreach\x in {-7,...,7}
\foreach\y in {-7,...,7}
{\pgfmathrandominteger{\rand}{0}{1}
\pgfmathsetcount{\u}{\x+\y}
\pgfmathsetcount{\v}{\x-\y}
\ifnum\u<6
\ifnum\u>-6
\ifnum\v<9
\ifnum\v>-9
\ifnum\rand=0\plaqb(\x,\y)\else\plaqa(\x,\y)\fi
\else
\fi
\else
\fi
\else
\fi
\else
\fi
\ifnum\u=6
\bplaqnb(\x,\y)
\else
\ifnum\u=-6
\ifnum\y=-3
\else
\bplaqsb(\x,\y)
\fi
\else
\fi
\fi
\ifnum\v=9
\bplaqeb(\x,\y)
\else
\ifnum\v=-9
\bplaqwb(\x,\y)
\else
\fi
\fi
}
\begin{scope}[shift={(-3,-3)}]
\clip (-0.5,0.5) -- (0.5,-0.5) -- (1,0) -- (0,1) -- cycle;
\draw[edge] (0.5,0) \start\aw(-0.5,0)\go\as(0,-0.5);
\draw[edge] (0,0.5) \start\as(0,-0.5)\go\aw(-0.5,0);
\end{scope}
\end{tikzpicture}
}
\end{center}
As we did in the case of trivial boundary conditions, we consider observables which are constructed by requiring that the open loop goes through a certain point $(x,t)$, either in the bulk or on the boundary of the lattice. Because the $K$-matrices are diagonal, the correct way to obtain such observables is, as before, to insert a local operator (say $E_0$, complete with its tail) at $(x,t)$, since all lattice configurations in which $E_0$ sits on a closed loop vanish:
\begin{equation*}
\begin{tikzpicture}[baseline=-3pt,scale=0.75]
\draw[edge] (0,0) arc (0:360:0.8cm) node[oper,label={left:$E_0$}] {}
node[blob,pos=0.38] {}
node[blob,pos=0.62] {}
node[pos=0.473,right] {$\vdots$};
\end{tikzpicture}
\ =0 \,.
\end{equation*}
\subsubsection{Loop observables in the bulk}
Let us repeat the analysis of the observables in the bulk,
but now in the presence of non-trivial boundaries and on the light-cone lattice.
Insert $e_0$ at the point $(x+1,t) \in \mathbb{Z}^2$, $x+t=0\ \text{(mod 2)}$, in the
bulk of the lattice. As mentioned above, the only configurations which survive are those for
which the open loop goes through $(x+1,t)$. The contribution of the open path to the
weight is found, in a similar way to before, to be $e^{i\nu(2\theta-\pi+n\xi)}q^k$,
where $\theta=k\pi$ is the angle formed by the left-incoming portion of the open loop
(if we treat the boundary tiles on equal footing with bulk tiles),
and $n$ the number of contacts of the left portion of this loop with the boundary minus
that of the right portion. So we find
\begin{equation*}
\aver{e_0(x+1,t)} = \frac{z^{-1}e^{-i\nu\pi}}{Z}
\sum_{C|(x+1,t)\in \gamma} W(C) \ e^{i(4\nu-1)\theta} \ e^{ni\nu\xi} \,.
\end{equation*}
Similarly, repeating for $e_0(x,t)$ with $x+t=0\ \text{(mod 2)}$, we obtain
\begin{equation*}
\aver{e_0(x,t)} = \frac{z\,e^{-i\nu\pi}e^{i(2\nu-1)\alpha}}{Z}
\sum_{C|(x,t)\in \gamma} W(C) \ e^{i(4\nu-1)\theta} e^{ni\nu\xi} \,.
\end{equation*}
The local conservation law \eqref{eq:conserv-jj} with $j_a = e_0$ is
\begin{equation}\label{eq:conserv-ee}
e_0(x,t) + e_0(x+1,t)
=
e_0(x,t+1) + e_0(x+1,t+1)
\,,\qquad (x,t)\in\mathbb{Z}^2,\ x+t=0\pmod{2} \,.
\end{equation}
This holds in the bulk because the tail, which is composed of $T_0$ operators, commutes with the left $K$-matrix (since $T_0 \in B$, as explained in \secref{sssec:bound-alg-1}). Therefore, using analogous arguments to those of \secref{ssec:loop-dense}, we define the function
\begin{equation*}
\phi_0(x,t) := z e^{i\nu\pi}
\begin{cases}
e_0(x,t), & x+t=1\pmod 2 \\
e^{i\alpha} e_0(x,t), & x+t=0\pmod 2\,,
\end{cases}
\end{equation*}
and in view of the fact that $z=e^{-i\nu\alpha}$, we have
\begin{equation} \label{eq:phi0_bdry}
\aver{\phi_0(x,t)} = \frac{1}{Z}
\sum_{C|(x,t)\in \gamma} W(C) \ e^{i(4\nu-1)\theta} e^{ni\nu\xi}\,.
\end{equation}
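Indeed, the prefactors cancel on both sublattices: for $x+t=1\pmod 2$ one has $z e^{i\nu\pi} \times z^{-1} e^{-i\nu\pi} = 1$, while for $x+t=0\pmod 2$
\begin{equation*}
z e^{i\nu\pi} \times e^{i\alpha} \times z\, e^{-i\nu\pi}\, e^{i(2\nu-1)\alpha} = z^{2}\, e^{2i\nu\alpha} = 1 \,.
\end{equation*}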
Applying \eqref{eq:conserv-ee} to this observable, we find that it satisfies
\begin{equation*}
e^{i(\pi-\alpha)} \phi_0(x,t)
+
\phi_0(x+1,t)
-
e^{i(\pi-\alpha)} \phi_0(x+1,t+1)
-
\phi_0(x,t+1)
=
0\,,
\end{equation*}
which is discrete holomorphicity on the light-cone lattice.
The observable corresponding to insertion of $E_1$ gets modified in an analogous way; namely
\begin{equation*}
\phi_1(x,t) := z e^{-i\nu\pi}
\begin{cases}
e_1(x,t), & x+t=1\pmod 2 \\
e^{i\alpha} e_1(x,t), & x+t=0\pmod 2\,
\end{cases}
\end{equation*}
gives rise to the observable
\begin{equation} \label{eq:phi1_bdry}
\aver{\phi_1(x,t)} = \frac{1}{Z}
\sum_{C|(x,t)\in \gamma} W(C) \ e^{-i\theta} \ e^{-ni\nu\xi}\,.
\end{equation}
This is the flux observable in the presence of a non-trivial boundary and it satisfies the discrete holomorphicity equation
\begin{equation*}
e^{i(\pi-\alpha)} \phi_1(x,t)
+
\phi_1(x+1,t)
-
e^{i(\pi-\alpha)} \phi_1(x+1,t+1)
-
\phi_1(x,t+1)
=
0\,.
\end{equation*}
\subsubsection{Loop observables at the boundary}
In what follows, by \emph{boundary} we mean the {\em left}\/ boundary.
Since $r\neq 0$,
the operators $E_0,E_1,\bar E_0,\bar E_1$ are not elements of the
coideal subalgebra $B$;
hence they are not separately conserved at the boundary (with trivial
boundary conditions, $E_1$ and $\bar E_1$ were conserved),
and in particular
their associated charges are not conserved.
We now consider
the combinations $Q=E_1+r\bar E_0$ and $\bar Q=\bar E_1+r E_0$, which are
in $B$. Using \eqref{eq:leftconserv} in the case $j_a=e_1+r\bar e_0$ we find that
\begin{equation*}
e_1(1,t) + r \bar e_0(1,t)
=
e_1(1,t+1) + r \bar e_0(1,t+1)\,,
\qquad t=0\pmod{2} \,,
\end{equation*}
which can be translated into the following equation for the observables
$\phi_1$ and $\bar\phi_0$:
\begin{equation*}
z^{-1} \phi_1(1,t) + rz \bar\phi_0(1,t)
=
e^{-i\alpha} z^{-1} \phi_1(1,t+1) + e^{i\alpha} rz \bar\phi_0(1,t+1) \,.
\end{equation*}
This is neither a holomorphicity nor an antiholomorphicity condition
because we are mixing operators from the two chiralities. However, by taking the real part (or equivalently, summing this identity with the one
obtained from the conjugate operator $\bar Q$),
one finds that $\psi:=z^{-1}(\phi_1+r\phi_0)$ satisfies
\begin{equation*}
\mathrm{Re} \left[\psi(1,t)+e^{i(\pi-\alpha)}\psi(1,t+1) \right]=0 \,,
\end{equation*}
which is a boundary discrete holomorphicity condition around the plaquette
\begin{equation*}
\label{eq:bound-plaq}
\begin{tikzpicture} [rotate=\lcrot,distort,baseline=1cm]
\draw[dotted] (0,0) -- (2,0) -- (2,2) -- cycle;
\draw (2,0.3) arc (90:180:0.3) node[right=1.5mm] {$\alpha$};
\node[circle,fill,inner sep=1.5pt,label={right:$\ss(1,t)$}] at (1,0) {};
\node[circle,fill,inner sep=1.5pt,label={right:$\ss(1,t+1)$}] at (2,1) {};
\end{tikzpicture}
\qquad t=0\pmod 2 \,.
\end{equation*}
{\em Remark:} At the left boundary the tails (which are on the left) disappear
and therefore the two observables $\phi_1$ and $\bar\phi_0$ are
the same up to a constant. This can be seen more explicitly from the fact that
there cannot be any winding at the boundary, so the angle $\theta$
in (\ref{eq:phi0_bdry},\ref{eq:phi1_bdry}) is fixed and independent of
the configuration.
\subsection{Application to the dilute loop model}
\label{sec:bound-dil}
We consider the dilute loop model on the light-cone lattice, with, as in
the dense case, possible defects located next to each other on the bottom row.
All the observables that we consider force a line to go from the insertion
point to the boundary defect, so that a typical
configuration is as depicted below.
\begin{center}
\slow{
\begin{tikzpicture} [scale=0.75,rotate=\lcrot,distort]
\newcount\u\newcount\v
\pgfmathsetseed{45}
\foreach\x in {-7,...,7}
\foreach\y in {-7,...,7}
{\pgfmathrandominteger{\rand}{0}{1}
\pgfmathsetcount{\u}{\x+\y}
\pgfmathsetcount{\v}{\x-\y}
\ifnum\u<6
\ifnum\u>-6
\ifnum\v<9
\ifnum\v>-9
\plaqz(\x,\y)
\else
\fi
\else
\fi
\else
\fi
\else
\fi
\ifnum\u=6
\bplaqnz(\x,\y)
\else
\ifnum\u=-6
\ifnum\y=-3
\else
\bplaqsz(\x,\y)
\fi
\else
\fi
\fi
\ifnum\v=9
\bplaqez(\x,\y)
\else
\ifnum\v=-9
\bplaqwz(\x,\y)
\else
\fi
\fi
}
\begin{scope}[shift={(-3,-3)}]
\clip (-4.5,4.5) -- (4.5,-4.5) -- (4.5,-0.5) -- (-0.5,4.5) -- cycle;
\draw[edge] (-0.5,0) \start\ae(0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\ae(0.5,0)\go\an(0,0.5)\go\an(0,0.5)\go\aw(-0.5,0) node[oper] {};
\end{scope}
\draw[edge] (2,2.5) \start\an(0,0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\as(0,-0.5)\go\ae(0.5,0)\go\as(0,-0.5)\go\ae(0.5,0)\go\an(0,0.5);
\draw[edge] (5,-2.5) \start\an(0,0.5)\go\aw(-0.5,0)\go\aw(-0.5,0)\go\as(0,-0.5)\go\as(0,-0.5)\go\as(0,-0.5)\go\ae(0.5,0)\go\an(0,0.5)\go\an(0,0.5)\go\ae(0.5,0)\go\an(0,0.5);
\draw[edge] (0,0.5) \start\as(0,-0.5)\go\ae(0.5,0)\go\as(0,-0.5)\go\as(0,-0.5)\go\aw(-0.5,0)\go\as(0,-0.5)\go\as(0,-0.5)\go\as(0,-0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\an(0,0.5)\go\ae(0.5,0)\go\an(0,0.5)\go\an(0,0.5)\go\aw(-0.5,0)\go\an(0,0.5)\go\ae(0.5,0)\go\ae(0.5,0)\go\as(0,-0.5);
\draw[edge] (-5,2.5) \start\as(0,-0.5)\go\ae(0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\as(0,-0.5);
\draw[edge] (3,0.5) \start\as(0,-0.5)\go\ae(0.5,0)\go\ae(0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\aw(-0.5,0)\go\as(0,-0.5);
\draw[edge] (1,-6.5) \start\as(0,-0.5)\go\ae(0.5,0)\go\an(0,0.5)\go\aw(-0.5,0)\go\as(0,-0.5);
\end{tikzpicture}
}
\end{center}
\subsubsection{Loop observables in the bulk}
In contrast with the dense case, the $K$-matrix~\eqref{eq:K-19V} does not
introduce any orientation-dependent phase factor, but simply a relative
weight for contacts with the boundary. Therefore, the analysis of
observables $E_0$ and $E_1$
in the bulk is unchanged, and we do not repeat it here.
A natural question is whether one can use the adjoint action to build
new currents.
We consider the element $P \in \Uq{A^{(2)}_2}$ given by
$P = \ad{E_1}{E_0}$.
Explicitly,
\begin{align*}
P
= E_1 E_0 - T_1 E_0 T_1^{-1} E_1
= E_1 E_0 -q^{-4} E_0 E_1 \,,
\end{align*}
and hence we can use the treatment of \secref{sec:adj-lc}
to express the corresponding lattice observable $p$.
From~\eqref{eq:adjoint-defect}, we have
\[
\aver{p(x,t)} =
\aver{A_{E_1}\left[e_0(x,t)\right]} =
\aver{\left[ \widehat{e}_1(x_d,0) + \widehat{e}_1(x_d+1,0)\right] e_0(x,t)} \,.
\]
Since $E_1$ can only flip a state $\downarrow$ to a state $\uparrow$, the non-zero terms
correspond to a pair of defects $(\downarrow,0)$ or $(0,\downarrow)$.
Summing over these two possibilities, we get:
$\aver{p(x,t)} = -q^{-4} (z^{2\ell}+z^{-2\ell})
\aver{ e_0(x,t)}$.
Hence, the operator $P$ does not lead to a new holomorphic observable; it simply
produces $\phi_0$, up to a multiplicative constant.
Equivalently, let us define
\begin{equation*}
\xi(x,t) := z^{\ell-1}
\begin{cases}
p(x,t), & x+t=1\pmod 2 \\
e^{i\alpha} p(x,t), & x+t=0\pmod 2\,,
\end{cases}
\qquad
\phi_0(x,t)
:=
z^{\ell-1}
\begin{cases}
e_0(x,t), & x+t=1\pmod 2 \\
e^{i\alpha} e_0(x,t), & x+t=0\pmod 2\,.
\end{cases}
\end{equation*}
Then
\begin{equation}\label{eq:pvse0}
\aver{\xi(x,t)}=c\,\aver{\phi_0(x,t)},
\qquad
\text{where}\ \ c=-q^{-4} (z^{2\ell}+z^{-2\ell}).
\end{equation}
Since it is only possible to relate $\xi$ and $\phi_0$ as expectation values $\aver{\cdots}$, the constant $c$ depends on the choice of boundary conditions; it would differ had we chosen boundary conditions other than those shown in the figure above.
\subsubsection{Loop observables at the boundary}
We are now in a position to construct an observable which satisfies discrete
holomorphicity at the left boundary. We consider the operator
$Q = P + iq^{-1} \bar E_0$ (with $P$ as in the previous section),
which commutes
with the $K$-matrix in the sense of \eqref{eq:Kcomm}, {\it cf.\ } \S \ref{sssec:bound-alg-2}.
From \eqref{eq:leftconserv} with $j_a=p+iq^{-1}\bar e_0$ we obtain
\begin{equation*}
p(1,t) + iq^{-1} \bar e_0(1,t)
=
p(1,t+1) + iq^{-1} \bar e_0(1,t+1)\,,
\end{equation*}
which can be translated into an equation for the observables $\xi$ and $\bar\phi_0$:
\begin{equation*}
z^{1-\ell} \xi(1,t)
+
iq^{-1}z^{\ell-1} \bar\phi_0(1,t)
=
e^{-i\alpha} z^{1-\ell} \xi(1,t+1)
+
e^{i\alpha} iq^{-1} z^{\ell-1} \bar\phi_0(1,t+1)
\,.
\end{equation*}
Taking the real part of this equation, we obtain
\begin{equation*}
\mathrm{Re} \left[\psi(1,t) + e^{i(\pi-\alpha)} \psi(1,t+1) \right] = 0 \,,
\end{equation*}
where $\psi=z^{1-\ell}(\xi-iq\phi_0)$.
Now taking into account \eqref{eq:pvse0},
we find that in the equation above one can replace
$\psi$ with $e^{i\lambda}\phi_0$ ($\lambda$ being some phase, dependent on our choice of boundary conditions, which we do not write explicitly).
\section{The continuum limit}
\label{sec:continuum}
\subsection{Dense loops}
In this section, we identify the operators corresponding to the lattice
holomorphic observables, as holomorphic currents in the
CFT describing the continuum limit.
\subsubsection{Scaling theory}
Let us first briefly review the Coulomb gas construction~\cite{Nienhuis84,DFSZ87}.
We define a height function $\Phi(x+1/2,t+1/2)$ on the dual lattice, such that
the oriented loops are the contour lines of $\Phi$, and with
the convention that the values of $\Phi$ across a
contour line differ by $\pi$. In the continuum limit, the
coarse-grained height function
is then subject to a Gaussian distribution:
\begin{equation} \label{eq:boson}
S[\Phi] = \frac{g}{4\pi} \int (\nabla\Phi)^2 dx dt \,,
\qquad \text{where} \qquad
g = 1-2\nu \,.
\end{equation}
Note that the coupling constant spans the interval $0<g<1$.
In what follows, the model is defined on a cylinder of circumference $L$,
and the axis of the cylinder is in the time direction.
Because of the definition of $\Phi$ by local increments, $\Phi$ can be
discontinuous along the circumference:
\[
\Phi(L+x+1/2,t+1/2) - \Phi(x+1/2,t+1/2) = \pi m \,,
\qquad m \in \mathbb{Z} \,.
\]
Thus, the height function should be considered as living on a circle:
\[
\Phi \equiv \Phi +\pi \,.
\]
Note that, for this dense loop model, when $L$ is even (resp. odd),
only even integer (resp.\ odd integer) values of $m$ are reached.
Also, the local Boltzmann weights associated to loop turns ensure that
the closed loops get the correct weight $\tau=2\cos(2\pi\nu)$, except for the
non-contractible loops: these loops have a vanishing total winding, and thus get a weight
$\widetilde{\tau}=2$. To restore the correct weight, one introduces a seam in the time direction,
such that every right (resp.\ left) arrow crossing the seam gets a weight $e^{i\pi\alpha}$ (resp.\ $e^{-i\pi\alpha}$).
The weight of non-contractible loops becomes $\widetilde{\tau}=2\cos\pi\alpha$, and one sets $\alpha:=2\nu$
to get $\widetilde{\tau}=\tau$.
\subsubsection{Operator content}
To recover the full-plane geometry, we use the complex coordinates
\[
z = e^{2\pi(t+ix)/L} \,, \qquad \bar{z} = e^{2\pi(t-ix)/L} \,.
\]
In this setting, the seam described above goes from the origin to infinity, and
amounts to introducing a pair of vertex operators
$e^{i\alpha \Phi(\infty)} e^{-i\alpha \Phi(0)}$.
More generally, if we decompose the height field as $\Phi(z,\bar{z})= \varphi(z)+\bar{\varphi}(\bar{z})$,
the primary operators are of the form $\mathcal{O}_{\mu,\bar{\mu}} = e^{i(\mu\varphi+\bar{\mu}\bar{\varphi})}$, which we write as
\[
\mathcal{O}_{\mu,\bar{\mu}} = e^{i(\mu+\bar{\mu})(\varphi+\bar{\varphi})/2}
\times e^{i(\mu-\bar{\mu})(\varphi-\bar{\varphi})/2} \,.
\]
The first factor is only well-defined if $n:=(\mu+\bar{\mu})/4$ is an integer,
which is called the electric charge $n \in \mathbb{Z}$.
The average value of the
second factor is $e^{i \Phi_{\rm cl}}$, where
$\Phi_{\rm cl} = i(\mu-\bar{\mu})(\log z - \log \bar{z})/(4g)$.
The discontinuity of $\Phi_{\rm cl}$ around the origin is $\delta\Phi_{\rm cl}=\pi\times (\mu-\bar{\mu})/g$, and
hence the number $m:=(\mu-\bar{\mu})/g$ must be an integer, and is called the magnetic charge
$m \in \mathbb{Z}$.
Since the conformal weight of the chiral vertex operator $e^{i\mu\varphi}$ is $h_\mu = \mu (\mu+2\alpha)/(4g)$,
we get for $\mathcal{O}_{\mu,\bar{\mu}}$:
\[
h = \frac{(2n+\alpha + mg/2)^2 - \alpha^2}{4g} \,,
\qquad
\bar{h} = \frac{(2n+\alpha - mg/2)^2 - \alpha^2}{4g} \,.
\]
In this context, $\alpha$ appears as a background electric charge.
Part of the above spectrum fits in the Kac table for minimal models:
\[
h_{r,s} = \frac{(r-gs)^2 - (1-g)^2}{4g} \,,
\]
with integer $r,s$.
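For instance,
\begin{equation*}
h_{1,3} = \frac{(1-3g)^2-(1-g)^2}{4g} = 2g-1 \,,
\qquad
h_{1,2} = \frac{(1-2g)^2-(1-g)^2}{4g} = \frac{3g-2}{4} \,,
\end{equation*}
both of which appear below.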
\subsubsection{Scaling limit of the lattice observables}\label{sec:lattobs}
We will now show that the discrete (anti-)holomorphic observables discussed in this paper scale
to operators of the form $\mathcal{O}_{\mu,\bar{\mu}}$.
Let us first consider the two-point function associated to $E_0$:
\[
\aver{\phi_0(z,\bar{z}) \phi_0^*(w,\bar{w})} :=
\frac{1}{Z} \sum_{C|\gamma_1,\gamma_2:w\to z} e^{i(4\nu-1)\theta} \ W(C) \,,
\]
where the sum is over loop configurations with two open oriented paths $\gamma_1,\gamma_2$ going from $w$ to $z$, and $\theta$ is the winding
angle of each path. In terms of the height model, this correlation function includes magnetic defects of charges $-2$ and $+2$
at $z$ and $w$, respectively. Moreover, the winding angle is given by $\theta=\Phi(z,\bar{z})-\Phi(w,\bar{w})$, and
hence the factor $e^{i(4\nu-1)\theta}$ corresponds to an electric operator of charge $2n+\alpha=4\nu-1$ at $z$, and opposite charge at $w$. These charges yield the values $\mu = -2g$ and $\bar{\mu}=0$, and thus we identify:
\[
\phi_0 = \phi_0(z) = e^{-2ig \varphi(z)} \,,
\]
with conformal dimensions $h=2g-1=h_{13}$ and $\bar{h}=0$. By reversing the arrows, we obtain
\[
\bar\phi_0(\bar{z}) = e^{-2ig \bar{\varphi}(\bar{z})} \,.
\]
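As a consistency check, inserting the charges $m=-2$ and $2n+\alpha=4\nu-1=1-2g$ (recall that $\alpha=2\nu=1-g$) into the general formulas above gives
\begin{equation*}
h = \frac{(1-3g)^2-(1-g)^2}{4g} = 2g-1 \,,
\qquad
\bar h = \frac{(1-g)^2-(1-g)^2}{4g} = 0 \,.
\end{equation*}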
Similarly, the two-point function associated to $E_1$ is:
\[
\aver{\phi_1(z,\bar{z}) \phi_1^*(w,\bar{w})} :=
\frac{1}{Z} \sum_{C|\gamma_1,\gamma_2:w\to z} e^{i\theta} \ W(C) \,,
\]
which has charges $m=2$ and $2n+\alpha = 1$, and hence we get
\[
\phi_1(z) = e^{2ig \varphi(z)} \,,
\qquad
\bar\phi_1(\bar z) = e^{2ig \bar{\varphi}(\bar{z})} \,.
\]
The holomorphic current $\phi_1$ has conformal dimensions $h=1$ and $\bar{h}=0$: this is the
``screening operator''.
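Indeed, its charges $m=2$ and $2n+\alpha=1$ give $h=\frac{(1+g)^2-(1-g)^2}{4g}=1$ and $\bar h=\frac{(1-g)^2-(1-g)^2}{4g}=0$.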
We now turn to the lattice operator associated to the diagonal generators $H_i$.
Since $H_i \propto \sigma^z$, it simply measures the local orientation of loops.
The increment of $\Phi$ across an up (resp.\ down) arrow
is $\pi$ (resp.\ $-\pi$), and thus one has $a \partial_x \Phi = \pi h^{(\tsup)}$, and likewise $a \partial_t \Phi = -\pi h^{(\xsup)}$,
where $a$ is the lattice mesh size. Using the complex coordinates $w=t+ix$ and $\bar w=t-ix$,
we identify the chiral currents:
\[
h^{(\xsup)}+ih^{(\tsup)} \propto \partial_w \varphi \,,
\qquad
h^{(\xsup)}-ih^{(\tsup)} \propto \partial_{\bar w} \bar\varphi \,.
\]
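Indeed, in the coordinates $w=t+ix$, $\bar w=t-ix$, one has $\partial_w=\frac{1}{2}(\partial_t-i\partial_x)$, so that the lattice relations above give
\begin{equation*}
\partial_w \varphi = \partial_w \Phi = -\frac{\pi}{2a} \left( h^{(\xsup)} + i\, h^{(\tsup)} \right) \,,
\end{equation*}
and similarly $\partial_{\bar w}\bar\varphi = \partial_{\bar w}\Phi = -\frac{\pi}{2a}\,( h^{(\xsup)} - i\, h^{(\tsup)})$.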
The conservation law found in \secref{sec:localobs} corresponds in the continuum
limit to the
conservation of the non-chiral current $d^\ast \Phi=d^\ast(\varphi+\bar\varphi)$,
or in components, $\epsilon^{\mu\nu}\partial_\nu \Phi$, which ensures local
well-definedness of $\Phi$.
The above results can be summarised in the following diagram:
\begin{center}
\begin{tikzpicture}
\draw[dashed] (0,-3.2) -- (0,3.2) node[above] {$d$};
\draw[dashed] (-5.3,0) -- (5.3,0) node[right] {$\sigma^z$};
\draw[thick] (-2,-1) -- (2,1);
\draw[thick] (-2,1) -- (2,-1);
\node[circle,draw,fill=white,inner sep=2pt,label={below:$H_1,H_0$},label={above:$\ss d^\ast(\varphi+\bar\varphi)$}]
at (0,0) {};
\node[circle,fill,inner sep=2pt,label={below right:$E_1$},label={above:$\ss
\phi_1=e^{2ig\varphi}=\mathcal{O}_{\rm screening}$}]
at (2,1) {};
\node[circle,fill,inner sep=2pt,label={above left:$\bar E_1$},label={below:$\ss
\bar\phi_1=e^{2ig\bar\varphi}=\bar{\mathcal{O}}_{\rm screening}$}]
at (-2,-1) {};
\node[circle,fill,inner sep=2pt,label={below left:$E_0$},label={above:$\ss
\phi_0=e^{-2ig\varphi}=\phi_{1,3}$}]
at (-2,1) {};
\node[circle,fill,inner sep=2pt,label={above right:$\bar E_0$},label={below:$\ss
\bar\phi_0=e^{-2ig\bar\varphi}=\bar\phi_{1,3}$}]
at (2,-1) {};
\end{tikzpicture}
\end{center}
In this figure, the horizontal axis is the $U(1)$ charge $\sigma^z$, and the vertical
axis is the gradation $d$ in the evaluation representation of $A_1^{(1)}$.
We note the similarity with the discussion
of nonlocal charges in the (ultra-violet limit of the)
sine--Gordon model in~\cite{BernardLeclair91}.
A notable difference is the choice of gradation, which has different origins
in the two situations.
\subsection{Dilute loops}
The mapping to a compactified free boson CFT~\eqref{eq:boson} also holds in the dilute case, up
to minor adaptations. We keep the convention that the height function~$\Phi$ has jumps of $\pm\pi$
across an oriented loop,
and, in the scaling theory, one should set $\Phi \equiv \Phi+\pi$. So we can keep the same notations as in the
previous section, except
that $\nu$ is now chosen in the interval $[-1/2,0]$, and we have $1<g<2$.
The two-point function for $\phi_0$ is:
\[
\aver{\phi_0(z,\bar{z}) \phi_0^*(w,\bar{w})} =
\frac{1}{Z} \sum_{C|\gamma:w\to z} e^{i(3\nu/2-1/4)\theta} \ W(C) \,,
\]
where the sum is over loop configurations with one open oriented path $\gamma$ going from $w$ to
$z$, and $\theta$ is the winding angle of $\gamma$. Since this is a one-leg defect and
$\theta = 2[\Phi(z,\bar z)-\Phi(w,\bar w)]$, the corresponding charges are $m=-1$ and $2n+\alpha=
3\nu-1/2$. The ``flux observables'' $\phi_1$ and $\bar \phi_1$ are the same as in
the dense model. Thus we obtain
\begin{align*}
\phi_0(z) = e^{-ig \varphi(z)} \,,
&\qquad
\bar\phi_0(\bar z) = e^{-ig \bar\varphi(\bar z)} \,, \\
\phi_1(z) = e^{2ig \varphi(z)} \,,
&\qquad
\bar\phi_1(\bar z) = e^{2ig \bar\varphi(\bar z)} \,,
\end{align*}
and the conformal dimension for $\phi_0$ is $h=(3g-2)/4=h_{12}$, whereas
for $\phi_1$ it is $h=1$.
Finally, the diagonal operators
$h^{(\xsup)}$ and $h^{(\tsup)}$ are related to $\partial_w \varphi$ and $\partial_{\bar w} \bar\varphi$, as in the dense case.
\section{Conclusions}\label{sec:conclusion}
In this paper, we have described a general procedure to obtain discretely
holomorphic observables out of nonlocal currents in quantum integrable
lattice models. We have shown in several examples how these
observables are naturally expressed in terms of loop models. We have
identified them in the continuum limit, connecting to Conformal Field Theory.
It should be noted that in CFT the conserved currents
always come in pairs: a current $j^\mu$ and its dual current $\tilde\jmath^\mu=\epsilon^{\mu}{}_{\nu} j^\nu$. Only the two conservation laws combined
imply separation of chiralities, and therefore
existence of holomorphic observables. Here we only base our analysis on a single
conservation law for each observable, hence a ``weak'' discrete holomorphicity
condition -- the dual equation is missing. This absence can be traced
to the step in which we identify the two components (say, time and space) of
our current as a single function corresponding to the observable. This step
would require additional justification in order to proceed with a rigorous
proof of the conformal limit.\footnote{In fact, such an identification is not
possible for the nonchiral observable associated to $H_1$, {\it cf.\ } \secref{sec:lattobs}.} The fact that in all cases,
our would-be chiral observables have, in the loop language, a unifying
definition on both vertical and horizontal edges is certainly a strong
indication that such an identification is correct.
This work opens the way to further study and interpretation of discrete
holomorphic observables, in particular in the case of more general
boundary conditions (as recently studied in~\cite{deGierLR12}).
Also, the application of this approach to off-critical models (see
the treatment of the Ising model in~\cite{RivaC06}) needs to be
developed.
\bibliographystyle{amsplainhyper}
\section*{Calculation of lesser and greater Green's functions}
\label{app:lessgreat}
\subsection{Lesser Green's function: $G^< (x,t,y,t')$}
\label{sec:grLes}
We provide here the details of the calculation for the lesser Green's function $G^<(x,t,y,t')$ for an $N$-particle TG gas.
The lesser Green's function is defined as
\begin{equation}
\begin{split}\label{eq:grLesI}
\imath G^< (x,t,y,t')_{\boldsymbol \eta}&= \expval{\hat \psi^\dagger (y,t') \hat \psi(x,t)}_{\boldsymbol \eta} \\
&= \expval{ e^{i H t'} \hat \psi^\dagger (y) e^{-i H t'} e^{i H t} \hat \psi(x) e^{-i H t}}_{\boldsymbol \eta}
\end{split}
\end{equation}
where $\langle...\rangle_{\boldsymbol \eta}$ indicates the expectation value over the many-body state $|{\boldsymbol \eta}\rangle$, $H$ is the many-body Hamiltonian and $\hat \psi(x)$, $\hat \psi^\dagger(x) $ are bosonic field operators, satisfying the commutation relations $[\hat \psi(x),\hat \psi^\dagger(y)]=\delta(x-y)$.
In order to perform the exact calculation for a TG gas, based on the Girardeau mapping on noninteracting fermions, it is useful to rewrite the Green's function in the first-quantization formalism. We introduce the completeness relation in the $(N-1)$-particle Hilbert space $\sum_n \dyad{n} = \mathbb{I}_{N-1}$, with $\ket{n}$ being an eigenstate of the TG Hamiltonian and the sum being restricted to inequivalent states, and the completeness relation in the $(N-1)$-particle Hilbert space in the position representation $\frac{1}{(N-1)!} \int \dd X \dyad{X}= \mathbb{I}_{N-1}$, with $X=x_2 \dots x_N$.
\begin{equation}
\begin{split}
\imath G^< (x,t,y,t')_{\boldsymbol \eta} & = \frac{1}{[(N-1)!]^2}\expval{e^{i H t'} \hat \psi^\dagger (y) \int \dd Y \dyad{Y} e^{-i H t'} (\sum_n \dyad{n}) e^{i H t} \int \dd X \dyad{X} \hat \psi(x) e^{-i H t}}_{\boldsymbol \eta} \\
&=\frac{1}{[(N-1)!]^2} \sum_n \int dY \int dX
{}_{t\!'}\!\!\braket{\boldsymbol \eta}{y,Y} \braket{Y}{n}_{t\!'} {}_{t}\!\braket{n}{X} \braket{x,X}{\boldsymbol \eta}_{t}\\
&= \frac{1}{[(N-1)!]^2} \sum_n \int dY \Psi_{\boldsymbol \eta}^*(y,Y;t')\Psi_n(Y;t') \int dX \Psi_n^*(X;t) \Psi_{\boldsymbol \eta}(x,X;t),
\end{split}
\end{equation}
where we have used the definition of the many-body wavefunction $\braket{x,X}{\boldsymbol \eta}=\Psi_{\boldsymbol \eta}(x,X)$ and similarly $\braket{X}{n}=\Psi_n(X)$.
We now apply the Bose-Fermi mapping and write the bosonic wavefunction $\Psi_{\boldsymbol \eta}(x_1,...x_N)=\prod_{1\le j<\ell\le N} {\rm sign}(x_j-x_\ell) \Psi_{\boldsymbol \eta}^F(x_1,..,x_N)$, where $\Psi_{\boldsymbol \eta}^F(x_1,..,x_N)=(1/\sqrt{N!})\det[\phi_{{\boldsymbol \eta}_j}(x_\ell)]$ with $j,\ell=1,\dots,N$, $\phi_j(x)$ the single-particle orbitals for the given external potential with energy $e_j$, and we have introduced the notation ${\boldsymbol \eta}=\{\eta_1,...\eta_N \}$. We thus obtain the expression for the lesser Green's function of a TG gas:
\begin{equation}\label{eq:lesser3}
\imath G^< (x,t,y,t')_{\boldsymbol \eta} = \frac{1}{[(N-1)!]^2} \sum_n \int dX \prod_{k=2}^N {\rm sign}(x-x_k) \Psi_{\boldsymbol \eta}^F(x,X;t)\Psi_n^{*F}(X;t)
\int dY \prod_{k=2}^N {\rm sign}(y-y_k) \Psi_{\boldsymbol \eta}^{*F}(y,Y;t')\Psi_n^{F}(Y;t')
\end{equation}
Each of the two multidimensional integrals can be evaluated separately; we will start by writing the first one as a function of single particle states, by using the properties of Slater determinants.
We identify the generic $(N-1)$-particle eigenstate of the free-fermion Hamiltonian, labeled by $n$, as the one with single-particle orbitals $\boldsymbol \alpha = \{\alpha_2, ... ,\alpha_N \}$.
Expanding the determinant in $\Psi_{\boldsymbol \eta}^F(x,X;t)$ by the first column, we have
\begin{equation}
\label{eq:split}
\begin{split}
&\int dX \prod_{k=2}^N {\rm sign}(x-x_k) \Psi_{\boldsymbol \eta}^F(x,X;t)\Psi_n^{*F}(X;t)= \\
&= \sum_{i=1}^N (-1)^{i+1}\phi_{\eta_i}(x,t) \int dX \prod_{k=2}^N {\rm sign}(x-x_k)
\smdet{\phi_{\eta_1}(x_2,t) & \dots & \phi_{\eta_1}(x_N,t)\\
\vdots & \dots & \vdots\\
\phi_{\eta_{i-1}}(x_2,t) & \dots & \phi_{\eta_{i-1}}(x_N,t)\\
\phi_{\eta_{i+1}}(x_2,t) & \dots & \phi_{\eta_{i+1}}(x_N,t)\\
\vdots & \vdots & \vdots\\
\phi_{\eta_N}(x_2,t) & \dots & \phi_{\eta_N}(x_N,t)\\}
\smdet{\phi^*_{\alpha_2}(x_2,t) & \dots & \phi^*_{\alpha_2}(x_N,t)\\
\vdots & \ddots & \vdots\\
\phi^*_{\alpha_N}(x_2,t) & \dots & \phi^*_{\alpha_N}(x_N,t)\\}
\end{split}
\end{equation}
We can combine the two determinants using Andréief's integration formula \cite{Forrester2002}
\begin{equation}\label{eq:Andreief}
\int \dd x_1 \dots \int \dd x_M \det[f_j(x_k)]_{j,k=1,M} \det[g_j(x_k)]_{j,k=1,M} = M! \det[\int \dd x f_j(x) g_k(x) ]_{j,k=1,M}.
\end{equation}
Then, noticing the fact that $\int_{-\infty}^{\infty} {\rm sign}(x-\bar x ) f(\bar x) \dd \bar x = \int_{-\infty}^{\infty}f(\bar x ) \dd \bar x - 2 \int_{x}^{\infty} f (\bar x) \dd \bar x $, we obtain
\begin{equation}
\int \dd X \prod_{k=2}^N {\rm sign}(x-x_k) \Psi_{\boldsymbol \eta}^F(x,X;t)\Psi_n^{*F}(X;t)= (N-1)!\sum_{i=1}^N (-1)^{i+1} \phi_{\eta_i}(x,t)
\det[\textbf P (x,t)]_{{\boldsymbol \eta} \smallsetminus \{\eta_i\},{\boldsymbol \alpha}}.
\end{equation}
The determinant $\det [ \textbf P]_{{\boldsymbol \eta} \smallsetminus \{\eta_i\},{\boldsymbol \alpha}}$ is the $(N-1)$-th order minor of the matrix $ \textbf P$ having selected the rows ${\boldsymbol \eta}\smallsetminus \{\eta_i\}$ and the columns ${\boldsymbol \alpha}$, and
\begin{multline}\label{eq:TGP}
P_{l,m}(x,t)= \int_{-\infty}^{\infty} \phi_l(\bar x,t)\phi^*_m(\bar x,t) \dd \bar x - 2 \int_{x}^{\infty} \phi_l(\bar x,t)\phi^*_m(\bar x,t) \dd \bar x=\delta_{l,m} - 2 \ e^{-\imath t (e_l - e_m)} \int_{x}^{\infty} \phi_l(\bar x)\phi^*_m(\bar x) \dd \bar x
\end{multline}
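To make the construction concrete, the matrix $\textbf P(x,t)$ of Eq.~(\ref{eq:TGP}) is straightforward to evaluate numerically. The following Python sketch (ours, purely for illustration and not part of the derivation) assumes a 1D harmonic trap with $\hbar=m=\omega=1$, so that the orbitals are Hermite functions with $e_n=n+1/2$; the basis truncation \texttt{M} and the quadrature cutoff are assumptions of the sketch.
\begin{verbatim}
import numpy as np
from math import factorial, pi
from scipy.special import eval_hermite

M = 8                          # single-particle basis truncation (assumption)
energies = np.arange(M) + 0.5  # e_n = n + 1/2 for the harmonic trap

def orbital(n, x):
    # harmonic-oscillator orbital phi_n(x), real in this gauge
    norm = 1.0 / np.sqrt(2.0**n * factorial(n) * np.sqrt(pi))
    return norm * eval_hermite(n, x) * np.exp(-x**2 / 2.0)

def P_matrix(x, t, x_max=12.0, n_grid=4001):
    # P_{l,m}(x,t) = delta_{l,m} - 2 exp(-i t (e_l - e_m)) int_x^inf phi_l phi_m
    grid = np.linspace(x, x_max, n_grid)   # tail integral truncated at x_max
    phis = np.array([orbital(n, grid) for n in range(M)])
    tail = np.trapz(phis[:, None, :] * phis[None, :, :], grid, axis=-1)
    phase = np.exp(-1j * t * (energies[:, None] - energies[None, :]))
    return np.eye(M) - 2.0 * phase * tail
\end{verbatim}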
In the same way we can write the second integral in the expression for $G^<$, obtaining:
\begin{equation}
\imath G^< (x,t,y,t')_{\boldsymbol \eta}=\widetilde \sum_{\boldsymbol \alpha} \sum_{i,j=1}^N (-1)^{i+j} \phi_{\eta_i} (x,t) \phi^*_{\eta_j} (y,t')
\det[\textbf P(x,t)]_{{\boldsymbol \eta}\smallsetminus \{\eta_i\},{\boldsymbol \alpha}} \det[\textbf P(y,t')]_{{\boldsymbol \alpha},{\boldsymbol \eta}\smallsetminus \{\eta_j\}}
\end{equation}
The sum over $n$ of Eq.~(\ref{eq:lesser3}) corresponds to the sum over $\boldsymbol \alpha$ in the equation above, which has to be restricted to collections of indices that are not related by permutations, and which will be indicated from now on by $ \widetilde \sum$. This sum can be simplified by using the generalized Cauchy-Binet formula for the product of minors
\begin{equation}
\widetilde \sum_{\boldsymbol \alpha}\ \det[\textbf A]_{\vec I,{\boldsymbol \alpha}} \det[\textbf B]_{{\boldsymbol \alpha},\vec J} = \det[\textbf {A B}]_{\vec I,\vec J} \end{equation}
obtaining:
\begin{multline}
(-1)^{i+j} \widetilde\sum_{\boldsymbol \alpha} \det[\textbf P(x,t)]_{{\boldsymbol \eta}\smallsetminus \{\eta_i\},{\boldsymbol \alpha}} \det[\textbf P(y,t')]_{{\boldsymbol \alpha},{\boldsymbol \eta}\smallsetminus \{\eta_j\}}= \\
(-1)^{i+j} \det[\textbf P(x,t) \textbf P(y,t')]_{{{\boldsymbol \eta}\smallsetminus \{\eta_i\}},{{\boldsymbol \eta}\smallsetminus \{\eta_j\}}}
=\left({\{[{\textbf P} (x,t) {\textbf P} (y,t') ]_{{\boldsymbol \eta},{\boldsymbol \eta}}\}^{-1}}^T\right)_{i,j} \det[{\textbf P} (x,t) {\textbf P} (y,t')]_{{\boldsymbol \eta},{\boldsymbol \eta}}
\end{multline}
where, in the last step, we have used the definition of the inverse of a matrix via minors. It is important to note that the product between matrices in the last equation is not constrained to the ${\boldsymbol \eta}$ elements of the single particle Hilbert space; rather, it spans the whole single particle Hilbert space. In the numerical calculation a suitably chosen truncation has been employed.
We can finally write:
\begin{equation}
\imath G^< (x,t,y,t')_{\boldsymbol \eta}= \sum_{i,j=1}^N \phi_{\eta_i} (x) e^{- \imath e_{\eta_i} t} \phi^*_{\eta_j}
(y) e^{ \imath e_{\eta_j} t'} A_{\eta_i,\eta_j} (x,t,y,t')
\end{equation}
with
\begin{equation}
{\textbf A}_{{\boldsymbol \eta},{\boldsymbol \eta}} (x,t,y,t')={\{[{\textbf P} (x,t) {\textbf P} (y,t') ]_{{\boldsymbol \eta},{\boldsymbol \eta}}\}^{-1}}^T \det[{\textbf P} (x,t) {\textbf P} (y,t')]_{{\boldsymbol \eta},{\boldsymbol \eta}}.
\end{equation}
This result generalizes the calculation of the one-body density matrix in Ref.~\cite{Pezer2007}, to which it reduces when $\ket {\boldsymbol \eta}$ corresponds to the ground state and when we take equal times $t=t'$.
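For reference, a minimal numerical sketch of this result, reusing \texttt{P\_matrix}, \texttt{orbital} and \texttt{energies} from the sketch above (and hence inheriting its harmonic-trap assumptions), could read:
\begin{verbatim}
def G_lesser(x, t, y, tp, eta):
    # i G^<: take the product P(x,t) P(y,t') over the full truncated space,
    # restrict to the occupied orbitals eta only afterwards
    eta = np.asarray(eta)
    Q = P_matrix(x, t) @ P_matrix(y, tp)
    Q_ee = Q[np.ix_(eta, eta)]
    A = np.linalg.inv(Q_ee).T * np.linalg.det(Q_ee)  # A_{eta_i, eta_j}
    phi_x = np.array([orbital(n, x) * np.exp(-1j * energies[n] * t)
                      for n in eta])
    phi_y = np.array([orbital(n, y) * np.exp(-1j * energies[n] * tp)
                      for n in eta])
    return -1j * np.einsum('i,ij,j->', phi_x, A, phi_y.conj())
\end{verbatim}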
\subsection{Greater Green's Function: $G^>(x,t,y,t')$}
\label{sec:grGre}
In an analogous fashion, we can evaluate the greater Green's function $G^>(x,t,y,t')$ for a TG gas, which is defined as
\begin{equation}
\label{eq:grGreI}
\imath G^> (x,t,y,t')_{\boldsymbol \eta}= \expval{\hat \psi(x,t) \hat \psi^\dagger (y,t')}_{\boldsymbol \eta}
= \expval{ e^{i H t} \hat \psi(x) e^{-i H t}e^{i H t'} \hat \psi^\dagger (y) e^{-i H t'}}_{\boldsymbol \eta}
\end{equation}
In order to write it in the first-quantization formalism and to apply the time evolution operator, we introduce this time the completeness relation in the $(N+1)$-particle Hilbert space $\sum_n \dyad{n} = \mathbb{I}_{N+1}$, with $\ket{n}$ being an eigenstate of the TG Hamiltonian with $N+1$ particles.
The expression for the greater Green's function for a TG gas then reads
\bigbreak
\begin{equation}
\begin{split}\label{eq:grGreII}
\hspace*{-20pt} \imath G^> (x,t,y,t')_{\boldsymbol \eta}
&= \frac{1}{(N!)^2}\expval{ e^{i H t} \int dX \dyad{X} \hat \psi(x) e^{-i H t} (\sum_n \dyad{n}) e^{i H t'} \hat \psi^\dagger (y) \int dY \dyad{Y} e^{-i H t'} }_{\boldsymbol \eta}\\
&= \frac{1}{(N!)^2}\sum_n \int dX \int dY
{}_{t}\!\!\braket{{\boldsymbol \eta}}{X} \braket{x,X}{n}_{\!t} {}_{t\!'}\!\!\braket{n}{y,Y} \braket{Y}{\boldsymbol \eta}_{\!t\!'}=\\
&= \frac{1}{(N!)^2}\sum_n \int dX \Psi_{\boldsymbol \eta}^*(X;t)\Psi_n(x,X;t) \int dY \Psi_n^*(y,Y;t') \Psi_{\boldsymbol \eta}(Y;t')
\end{split}
\end{equation}
The use of the Bose-Fermi mapping then leads to
\begin{equation}
\begin{split}
\imath G^> (x,t,y,t')_{\boldsymbol \eta} =\frac{1}{(N!)^2} \sum_n \int \dd X \prod_{k=1}^N {\rm sign}(x-x_k) \Psi^{F*}_{\boldsymbol \eta}(X;t)\Psi^{F}_n(x,X;t) \\
\times \int \dd Y \prod_{k=1}^N {\rm sign}(y-y_k) \Psi^{F}_{\boldsymbol \eta}(Y;t')\Psi^{F*}_n(y,Y;t').
\end{split}
\end{equation}
As in Eq.~(\ref{eq:split}), the calculation of the first integral yields
\begin{equation}
\begin{split}
&\int \dd X \prod_{k=1}^N {\rm sign}(x-x_k) \Psi_{\boldsymbol \eta}^{F*}(X;t)\Psi^F_n(x,X;t)\\
=& \sum_{i=1}^{N+1} {(-1)^{i+1}}\phi_{\alpha_i}(x,t) \int \dd X \prod_{k=1}^N {\rm sign}(x-x_k) \\
&\hspace*{100pt} \times \smdet{\phi^*_{\eta_1}(x_1,t) & \dots & \phi^*_{\eta_1}(x_N,t)\\
\vdots & \ddots & \vdots\\
\phi^*_{\eta_N}(x_1,t) & \dots & \phi^*_{\eta_N}(x_N,t)\\}
\smdet{\phi_{\alpha_1}(x_1,t) & \dots & \phi_{\alpha_1}(x_{N+1},t)\\
\vdots & \dots & \vdots\\
\phi_{\alpha_{i-1}}(x_1,t) & \dots & \phi_{\alpha_{i-1}}(x_{N+1},t)\\
\phi_{\alpha_{i+1}}(x_1,t) & \dots & \phi_{\alpha_{i+1}}(x_{N+1},t)\\
\vdots & \vdots & \vdots\\
\phi_{\alpha_{N+1}}(x_1,t) & \dots & \phi_{\alpha_{N+1}}(x_{N+1},t)\\}.
\end{split}
\end{equation}
As for the lesser Green's function, we can combine the two determinants using Andréief's integration formula, Eq.~(\ref{eq:Andreief}). Then, by noticing that $\int_{-\infty}^{\infty} {\rm sign}(x-\bar x ) f(\bar x) \dd \bar x = \int_{-\infty}^{\infty}f(\bar x ) \dd \bar x - 2 \int_{x}^{\infty} f (\bar x) \dd \bar x $, we obtain
\begin{equation}
\int \dd X \prod_{k=1}^N {\rm sign}(x-x_k) \Psi^{F*}_{\boldsymbol \eta}(X;t)\Psi^F_n(x,X;t)= N! \sum_{i=1}^{N+1} (-1)^{i+1} \phi_{\alpha_i}(x,t)
\det[\textbf P (x,t)]_{{\boldsymbol \alpha \smallsetminus \{\alpha_i\}},{\boldsymbol \eta} }.
\end{equation}
The determinant $\det[\textbf P (x,t)]_{{\boldsymbol \alpha \smallsetminus \{\alpha_i\}},{{\boldsymbol \eta}} }$ is the $N$-th order minor of the matrix $ \textbf P$, once we select the rows ${{ \boldsymbol \alpha \smallsetminus \{\alpha_i\}}}$ and the columns ${\boldsymbol \eta}$, with $P_{l,m}(x,t)$ defined as in Eq.~(\ref{eq:TGP}).
The main difference from the calculation of the lesser Green's function is that the Cauchy-Binet theorem cannot be applied to the above expression. We then insert all the $ \phi_{\alpha_i}(x,t)$ elements into an extended ``P'' matrix, by adding a ``0'' column, as follows,
\begin{equation}
\int dX \prod_{k=1}^N {\rm sign}(x-x_k) \Psi^{F*}_{\boldsymbol \eta}(X;t)\Psi^{F}_n(x,X;t)=N! \det[\vec \phi(x,t),\textbf P (x,t)]_{{\boldsymbol \alpha},{\{0\} \cup {\boldsymbol \eta}} },
\end{equation}
in which we have defined a column vector $\vec \phi(x,t)= [\phi_1(x,t),\dots,\phi_M(x,t)]^T$ on the whole Hilbert space.
Following the same lines for the second integral, we obtain:
\begin{equation}
\imath G^> (x,t,y,t')_{\boldsymbol \eta}=\widetilde\sum_{\boldsymbol \alpha}
\det\mqty[\vec \phi(y,t')^\dagger \\
\textbf P (y,t')]_{{\{0\} \cup {\boldsymbol \eta}},{\boldsymbol \alpha}}
\det\mqty[\vec \phi(x,t) && \textbf P (x,t)]_{{\boldsymbol \alpha},{\{0\} \cup {\boldsymbol \eta}} }
\end{equation}
The sum $\widetilde\sum_{\boldsymbol \alpha}$ has to be restricted to collections of indices that are not related by permutations.
Now we can apply the generalized Cauchy-Binet formula for products of determinants, obtaining:
\begin{equation}
\begin{split}
\imath G^> (x,t,y,t')_{\boldsymbol \eta}&=
\det \mqty[\vec \phi(y,t')^\dagger \vec \phi(x,t) && \vec \phi(y,t')^\dagger \textbf P (x,t)\\
\textbf P (y,t') \vec\phi(x,t) && \textbf P (y,t')\textbf P (x,t)]_{{\{0\} \cup {\boldsymbol \eta}},{\{0\} \cup {\boldsymbol \eta}}}\\
&= \det[\textbf P (y,t')\textbf P (x,t)]_{{\boldsymbol \eta},{\boldsymbol \eta}} \\
&\times \left( \vec \phi(y,t')^\dagger \vec\phi(x,t) -[\vec \phi(y,t')^\dagger \textbf P (x,t)]_{1,{\boldsymbol \eta}}\ \{[\textbf P (y,t')\textbf P (x,t)]_{{\boldsymbol \eta},{\boldsymbol \eta}}\}^{-1} \ [\textbf P (y,t') \vec\phi(x,t)]_{{\boldsymbol \eta},1} \right)
\end{split}
\end{equation}
in which all the products, where not explicitly indicated, are to be understood as taken over the whole Hilbert space.
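A corresponding sketch for the greater function (same assumptions and helpers as the sketches above) makes the Schur-complement structure of the last expression explicit:
\begin{verbatim}
def G_greater(x, t, y, tp, eta):
    # i G^>: det over eta times a Schur complement, with the vector phi
    # living on the whole truncated single-particle space
    eta = np.asarray(eta)
    Px, Py = P_matrix(x, t), P_matrix(y, tp)
    phi_x = np.array([orbital(n, x) * np.exp(-1j * energies[n] * t)
                      for n in range(M)])
    phi_y = np.array([orbital(n, y) * np.exp(-1j * energies[n] * tp)
                      for n in range(M)])
    Q_ee = (Py @ Px)[np.ix_(eta, eta)]
    u = (phi_y.conj() @ Px)[eta]          # row border, restricted to eta
    v = (Py @ phi_x)[eta]                 # column border, restricted to eta
    schur = phi_y.conj() @ phi_x - u @ np.linalg.inv(Q_ee) @ v
    return -1j * np.linalg.det(Q_ee) * schur
\end{verbatim}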
\subsection{Final expressions for the lesser and greater Green's functions of a TG gas}
\label{app:finallg}
The lesser and greater Green's functions for an eigenstate ${\boldsymbol \eta}$ of the TG Hamiltonian can be finally recast as:
\begin{subequations}
\begin{equation}\label{eq:lesserTGSupp}
\imath G^< (x,t,y,t')_{\boldsymbol \eta}= \det[\textbf P (x,t)\textbf P (y,t')]_{{\boldsymbol \eta},{\boldsymbol \eta}}\, a^<(x,t,y,t')
\end{equation}
\begin{equation}\label{eq:greaterTGSupp}
\imath G^> (x,t,y,t')_{\boldsymbol \eta}= \det[\textbf P (y,t')\textbf P (x,t)]_{{\boldsymbol \eta},{\boldsymbol \eta}}\, a^>(x,t,y,t')
\end{equation}
\end{subequations}
with
\begin{subequations}
\begin{align}
a^<(x,t,y,t') &= {\vec \phi(x,t)_{{\boldsymbol \eta}}^T} \ {\{[{\textbf P}(x,t) {\textbf P}(y,t')]_{{\boldsymbol \eta},{\boldsymbol \eta}}\}^{-1\,T}} \ {\vec \phi^*(y,t')}_{{\boldsymbol \eta}}\\
\begin{split}
a^>(x,t,y,t') &= \vec \phi(y,t')^\dagger \vec \phi(x,t) -[\vec \phi(y,t')^\dagger \textbf P (x,t)]_{\boldsymbol \eta}\ \\& \{[\textbf P (y,t')\textbf P (x,t)]_{{\boldsymbol \eta},{\boldsymbol \eta}}\}^{-1} \ [\textbf P (y,t') \vec\phi(x,t)]_{{\boldsymbol \eta}} .
\end{split}
\end{align}
\end{subequations}
From the above expressions we readily recover the limit of non-interacting fermions by replacing ${\rm sign}(x-y)$ with $1$, obtaining $P_{l,m}(x,t)=\delta_{l,m}$, hence
$G^<_{F} (x,t,y,t')_{\boldsymbol \eta} = -\imath \sum_{i\in{\boldsymbol \eta}} e^{ \imath e_i t'}\phi^*_i (y) \phi_i (x) e^{- \imath e_i t} $ and
$G^>_{F} (x,t,y,t')_{\boldsymbol \eta} = -\imath \sum_{i\in\bar{\boldsymbol \eta}} e^{ \imath e_i t'}\phi^*_i (y) \phi_i (x) e^{- \imath e_i t}$, corresponding to
the Green's functions for a non-interacting Fermi gas in the state ${\boldsymbol \eta}$ \cite{Stefanucci2010}.
\section*{Power-law exponents of the spectral function of a homogeneous Bose gas from non-linear Luttinger liquid theory}
\label{app:pwrlaw}
In this section, we provide for reference the values of the power-law exponents of the spectral function for a homogeneous TG gas as obtained using the mobile impurity or depleton model applied to the Lieb-Liniger Hamiltonian in the limit of infinite interactions, as deduced from Refs. \cite{Imambekov2009,Imambekov2012,Imambekov2008,Imambekov2009a,Campbell2017}.
The exponents are obtained in terms of phase shifts, which can be written as~\cite{Kamenev2009,Imambekov2009a}
\begin{equation}
\frac{\delta_\pm}{2 \pi} = \frac{1}{2}\left[\frac{1}{v(k)\mp v_s} \left( \frac{\sqrt{K}}{\pi} \frac{\partial \epsilon (k)}{\partial n} \pm \frac{1}{\sqrt{K}} \frac{k}{m} \right) \mp \frac{1}{\sqrt{K}}\right],
\end{equation}
where $K$ is the Luttinger parameter, $m$ is the mass of the particles, $\epsilon(k)$ is the dispersion of the excitation branch and $v(k)$ is the corresponding group velocity.
In the TG limit, obtained as the infinite interaction limit of the Lieb-Liniger model, the value of the Luttinger parameter is $K=1$. Correspondingly, $v_s=k_F/m$, $v(k)=k/m$ and $n=k_F/\pi$.
In order to calculate the exponents $\mu_A$ and $\mu_D$, respectively $\overline {\mu_+}$ and $\underline {\mu_-}$ of Ref.~\cite{Imambekov2009a}, we have to choose $\epsilon(k)=\epsilon_1(k)$, the Lieb-I curve of the main text. This results in $\frac{\delta_\pm}{2 \pi} =\frac{1}{2}$. Using Eqs.~(9) and (10) of Ref.~\cite{Campbell2017} we then obtain
\begin{equation}
\mu_A = 1 - \frac{1}{2} \left(\frac{\delta_+-\delta_-}{2 \pi} \right)^2 - \frac{1}{2} \left(\frac{\delta_+ +\delta_-}{2 \pi} \right)^2 = \frac{1}{2}
\end{equation}
\begin{equation}
\mu_D = 1 - \frac{1}{2} \left(2+\frac{\delta_+-\delta_-}{2 \pi} \right)^2 - \frac{1}{2} \left(\frac{\delta_+ +\delta_-}{2 \pi} \right)^2 = -\frac{3}{2}.
\end{equation}
In order to calculate $\mu_B$ and $\mu_C$, respectively $\underline {\mu_+}$ and $\overline {\mu_-}$ of Ref.~\cite{Imambekov2009a}, we choose $\epsilon(k)=-\epsilon_2(k)$, the Lieb-II curve of the main text, resulting in $\frac{\delta_\pm}{2 \pi} =-\frac{1}{2}$. Again using Eqs.~(9) and (10) of Ref.~\cite{Campbell2017} we obtain
\begin{equation}
\mu_C = 1 - \frac{1}{2} \left(\frac{\delta_+-\delta_-}{2 \pi} \right)^2 - \frac{1}{2} \left(\frac{\delta_+ +\delta_-}{2 \pi} \right)^2 = \frac{1}{2}
\end{equation}
\begin{equation}
\mu_B = 1 - \frac{1}{2} \left(2+\frac{\delta_+-\delta_-}{2 \pi} \right)^2 - \frac{1}{2} \left(\frac{\delta_+ +\delta_-}{2 \pi} \right)^2 = -\frac{3}{2}.
\end{equation}
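The arithmetic above is easily verified; the short Python sketch below (ours; the labels \texttt{mu\_par} and \texttt{mu\_hole} are our own shorthand) reproduces both pairs of exponents from the phase shifts:
\begin{verbatim}
def exponents(dp, dm):
    # dp, dm are delta_pm / (2 pi)
    diff, summ = dp - dm, dp + dm
    mu_par = 1 - 0.5 * diff**2 - 0.5 * summ**2         # mu_A resp. mu_C
    mu_hole = 1 - 0.5 * (2 + diff)**2 - 0.5 * summ**2  # mu_D resp. mu_B
    return mu_par, mu_hole

print(exponents(0.5, 0.5))     # Lieb-I:  (0.5, -1.5)
print(exponents(-0.5, -0.5))   # Lieb-II: (0.5, -1.5)
\end{verbatim}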
We finally remark that the presence of the lattice is expected to renormalize such exponents, except in the limit of very low filling.
\end{document}
\section{Introduction}
Humans possess a natural yet remarkable ability of seamlessly transferring and sharing knowledge
across multiple related domains while doing inference for a given task. Effective mechanisms
for sharing \emph{relevant} information across multiple prediction tasks (referred to as \emph{multi-task learning})
are also arguably crucial for making significant advances towards machine intelligence.
In this paper, we
propose a novel approach for multi-task learning in the context of deep neural networks for computer vision tasks.
We particularly aim for two desirable characteristics in the proposed approach:
(i) \emph{automatic learning of multi-task architectures} based on branching,
(ii) \emph{selective sharing} among tasks with automated learning of whom to share with.
In addition, we want our multi-task models to have low memory footprint and low latency during prediction (forward pass through the network).
A natural approach for enabling sharing across multiple tasks is to share model parameters (partially or fully) across
the corresponding layers of the task-specific deep neural networks. At an extreme, we can imagine
a fully shared multi-task network architecture where all layers are shared except the last layer
which predicts the labels for individual tasks. However, this unrestricted
sharing may suffer from the problem of \emph{negative transfer} where inadequate sharing across
two unrelated tasks can worsen the performance on both. To avoid this, most of the multi-task
deep architectures share the bottom layers till some layer $l$ after which the sharing is blocked,
resulting in task-specific sub-networks or branches beyond it \cite{HyperFace16,Brendan16,huang2015cross}.
This is motivated by the observation made by several earlier works that bottom layers capture
low level detailed features, which can be shared across multiple tasks, whereas top
layers capture features at a higher level of abstraction that are more task specific.
It can be further extended to a more general tree-like architecture, e.g., a smaller group of tasks can share
parameters even after the first break-point at layer $l$ and breakup at a later layer.
However, the space of such possible branching architectures is combinatorially large and current approaches
largely make a decision based on limited manual exploration of this space, often biased by designer's perception of the
relationship among different tasks \cite{Misra16}.
Our goal in this work is to develop a principled approach for designing multi-task deep learning
architectures obviating the need for tedious manual explorations. The proposed approach operates in a greedy
top-down manner, making branching and task-grouping decisions at each layer of the network using
a novel criterion that promotes the creation of separate branches for unrelated tasks (or groups of tasks) while
penalizing for the model complexity. Since we also desire a multi-task model with low memory
footprint, the proposed approach starts with a \emph{thin} network and dynamically grows it
during the training phase by creating new branches based on the aforementioned criterion.
We also propose a method based on simultaneous orthogonal matching pursuit (SOMP) \cite{somp}
for initializing a thin network from a pretrained wider network
(e.g., VGG-16) as a side contribution in this work.
We evaluate the proposed approach on person attribute classification, where each attribute is considered a task (with non-mutually exclusive labels),
achieving state-of-the-art results with highly compact multi-task models.
On the CelebA dataset \cite{liu2015deep}, we match the current top results on facial attribute classification (90\% accuracy) with a model 90x more compact and 3x faster than the original VGG-16 model. We draw similar conclusions for clothing category recognition on the DeepFashion dataset \cite{liu2016deepfashion}, demonstrating that we can perform simultaneous facial and clothing attribute prediction using a single compact multi-task model, while preserving accuracy.
\noindent In summary, our main contributions are listed below:
\begin{compactitem}[$\circ$]
\item We propose to automate learning of multi-task deep network architectures through a novel dynamic branching procedure,
which makes task grouping decisions at each layer of the network (deciding with whom each task should share features) by
taking into account both task relatedness and complexity of the model.
\item A novel method based on Simultaneous Orthogonal Matching Pursuit is proposed for initializing a thin network from a wider pre-trained network model,
leading to faster convergence and higher accuracy.
\item We perform {\em joint prediction} of facial and clothing attributes, achieving state-of-the-art results on standard datasets with a significantly
more compact and efficient multi-task model. We also conduct relevant ablation studies providing insights into the proposed approach.
\end{compactitem}
\section{Related Work}
{\bf Multi-Task Learning.} There is a long history of research in multi-task learning \cite{Caruana97,Thrun98,jacob2009clustered,Abhishek12,Misra16}. Most proposed techniques assume that all tasks are related and appropriate for joint training. A few methods have addressed the problem of ``with whom'' each task should share features \cite{xue2007multi,jacob2009clustered,zhou2011clustered,Kristen11,Abhishek12,passos2012flexible}. These methods are generally designed for shallow classification models, while our work investigates feature sharing among tasks in hierarchical models such as deep neural networks.
Recently, several methods have been proposed for multi-task learning using deep neural networks. HyperFace \cite{HyperFace16} simultaneously learns to perform face detection, landmarks localization, pose estimation and gender recognition. UberNet \cite{UberNet16} jointly learns low-, mid-, and high-level computer vision tasks using a compact network model. MultiNet \cite{MultiNet16} exploits recurrent networks for transferring information across tasks. Cross-ResNet \cite{Brendan16} connects tasks through residual learning for knowledge transfer. However, all these methods rely on {\em hand-designed} network architectures composed of base layers that are shared across tasks and specialized branches that learn task-specific features.
As network architectures become deeper, defining the right level of feature sharing across tasks through handcrafted network branches is impractical. Cross-stitching networks \cite{Misra16} have been recently proposed to learn an optimal combination of shared and task-specific representations. Although cross-stitching units connecting task-specific sub-networks are designed to \emph{learn} the feature sharing among tasks,
the size of the network grows linearly with the number of tasks, causing scalability issues. We instead propose a novel algorithm that makes decisions about branching based on task relatedness, while optimizing for the efficiency of the model. We note that other techniques such as HD-CNN \cite{HDCNN15} and Network of Experts \cite{ahmed2016network} also group related classes to perform hierarchical classification, but these methods are not applicable for the multi-label setting (where labels are not mutually exclusive).
{\bf Model Compression and Acceleration.} Existing deep convolutional neural network models are computationally and memory intensive, hindering their deployment in devices with low memory resources or in applications with strict latency requirements. Methods for compressing and accelerating convolutional networks include knowledge distillation \cite{Hinton15,romero2014fitnets}, low-rank-factorization \cite{ioannou2015training,tai2015convolutional,sainath2013low}, pruning and quantization \cite{han2015deep,polyak2015channel}, structured matrices \cite{Circulant15,sindhwani2015structured,gong2016tamp}, and dynamic capacity networks \cite{almahairi2015dynamic}. These methods are task-agnostic and therefore most of them are complementary to our approach, which seeks to obtain a compact multi-task model by widening a low-capacity network based on task relatedness.
Moreover, many of these state-of-the-art compression techniques can be used to further reduce the size of our learned multi-task architectures.
{\bf Person Attribute Classification.} Methods for recognizing attributes of people, such as facial and clothing attributes, have received increased attention in the past few years. In the visual surveillance domain, person attributes serve as features for improving person re-identification \cite{su2016deep} and enable search of suspects based on their description \cite{vaquero2009attribute,feris2014attribute}. In e-commerce applications, these attributes have proven effective in improving clothing retrieval \cite{huang2015cross}, and fashion recommendation \cite{liu2014wow}. It has also been shown that facial attribute prediction is helpful as an auxiliary task for improving face detection \cite{yang2015facial} and face alignment \cite{zhang2016learning}.
State-of-the-art methods for person attribute prediction are based on deep convolutional neural networks \cite{wang2016walk,liu2015deep,chen2015deep,zhang2014panda}. Most methods either train separate classifiers per attribute \cite{zhang2014panda} or perform joint learning with a fully shared network \cite{rudd2016moon}. Multi-task networks have been used with base layers that are shared across all attributes, and branches to encode task-specific features for each attribute category \cite{huang2015cross,sudowe2015person}. However, in contrast to our work, the network branches are hand-designed and do not exploit the fact that some attributes are more related than others in order to determine the level of sharing among tasks in the network. Moreover, we show that our approach produces a single compact network that can predict both facial and clothing attributes simultaneously.
\section{Methodology}
Let the linear operation in a layer $l$ of the network be parametrized by $W^l$.
Let $x^l \in \mathbb{R}^{c_l}$ be the input vector of layer $l$, and $y^l \in \mathbb{R}^{c_{l+1}}$ be the output vector. In feedforward networks that are of interest to this work, it is always the case that $x^l = y^{l-1}$. In other words, the output of a layer is the input to the layer above. In vision applications, the feature maps are often considered as three-way tensors and one should think of $x^l$ and $y^l$ as appropriately vectorized versions of the input and output feature tensors. The functional form of the network is a series of within-layer computations chained in a sequence linking the lowest to the highest (output) layer. The within-layer computation (for both convolutional and fully-connected layers) can be concisely represented by a simple linear operation parametrized by $W^l$, followed by a non-linearity $\sigma_l(\cdot)$ as
\begin{equation}
\label{eqn:within_layer}
y^l = \sigma_l(\mathcal{P}(W^l) x^l),
\end{equation}
\noindent where $\mathcal{P}$ is an operator that maps the parameters $W^l$ to the appropriate matrix $\mathcal{P}(W^l)\in\mathbb{R}^{c_{l+1}\times c_l}$.
For a fully connected layer $\mathcal{P}$ reduces to the identity operator, whereas for a convolutional layer with $f_l$ filters, $W^l\in\mathbb{R}^{f_l\times d_l}$ contains the vectorized filter coefficients in each row and the operator $\mathcal{P}$ maps it to an appropriate matrix that represents convolution as matrix multiplication.
With this unified representation, we define the width of the network at layer $l$ as $c_l$ for the fully connected layers, and as $f_l$ for the convolutional layers.
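As a concrete (and deliberately simplified) illustration of the operator $\mathcal{P}$, the following Python sketch, which is ours and not part of the original formulation, builds $\mathcal{P}(W)$ for a 1D convolution with ``valid'' padding and checks it against a direct convolution:
\begin{verbatim}
import numpy as np

def P_conv1d(W, in_len):
    # W: (f, d) vectorized filters; returns the (f*(in_len-d+1), in_len)
    # matrix that implements the convolution as a matrix-vector product
    f, d = W.shape
    out_len = in_len - d + 1
    M = np.zeros((f * out_len, in_len))
    for i in range(f):
        for j in range(out_len):
            M[i * out_len + j, j:j + d] = W[i]
    return M

W = np.random.randn(3, 5)          # f = 3 filters of size d = 5
x = np.random.randn(20)
y_mat = P_conv1d(W, len(x)) @ x
y_ref = np.concatenate([np.correlate(x, w, mode='valid') for w in W])
print(np.allclose(y_mat, y_ref))   # True
\end{verbatim}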
The widths at different layers are critical hyper-parameters for a network design. In general, a wider network is more expensive to train and deploy, but it has the capacity to model a richer set of visual patterns. The relative width across layers is a particularly relevant consideration in the design of a multi-task network. It is widely observed that higher layers represent levels of abstraction that are more task dependent. This is confirmed by previous works on visualization of filters at different layers \cite{zeiler2014visualizing}.
Traditional approaches tackle the width design problem largely through hand-crafted layer design and manual model selection. Notably, popular deep convolutional network architectures, such as AlexNet \cite{krizhevsky2012imagenet}, VGG \cite{simonyan2014very}, Inception \cite{szegedy2015going} and ResNet \cite{he2015deep}
all use wider layers at the top of the network in what can be called an ``inverse pyramid'' pattern. These architectures serve as excellent reference designs in a myriad of domains, but researchers have noted that the width schedule (especially at the top layers) needs to be tuned for the underlying set of tasks the network has to perform in order to achieve the best accuracy \cite{Misra16}.
Here we propose an algorithm that dynamically finds the appropriate width of the multi-task network along with the task groupings through a multi-round training procedure. It has three main phases:
{\bf Thin Model Initialization.}
We start with a thin neural network model, initializing it from a pre-trained wider VGG-16 model by selecting a subset of filters using simultaneous orthogonal matching pursuit (ref. Section \ref{sec:somp}).
{\bf Adaptive Model Widening.}
The thin initialized model goes through a multi-round widening and training procedure. The widening is done in a greedy top-down layer-wise manner starting from the top layer. For the current layer to be widened, our algorithm makes a decision on the number of branches to be created at this layer along with task assignments for each branch. The network architecture is frozen when the algorithm decides to create no further branches (ref. Section \ref{sec:widen}).
{\bf Training with the Final Model.} In this last phase, the fixed final network is trained
until convergence.
More technical details are discussed in the next few sections. Algorithm \ref{alg:outline} provides a summary of the procedure.
\begin{algorithm}[!t]\label{alg:outline} \small
\SetInd{1ex}{1ex}
\KwData{Input data $D=(x_i,y_i)_{i=1}^N$. The labels $y$ are for a set of $T$ tasks.}
\KwIn{Branch factor $\alpha$, and thinness factor $\omega$. Optionally, a pre-trained network $M_p$ with parameters $\Theta_p$.}
\KwResult{A trained network $M_f$ with parameters $\Theta_f$.}
{\bf Initialization}: $M_0$ is a thin-$\omega$ model with $L$ layers. \\
\eIf {exist $M_p,\Theta_p$} {
$\Theta_0 \leftarrow \text{\em SompInit}(M_0, M_p, \Theta_p)$. $t \leftarrow 1$, $d \leftarrow T$. (Sec. \ref{sec:somp}) \\
}{
$\Theta_0 \leftarrow$ Random initialization
}
\While{($t \leq L$) and ($d > 1$)}{
$\Theta_t, A_t\leftarrow \text{\em TrainAndGetAffinity}(D, M_t, \Theta_t)$ (Sec. \ref{sec:aff}) \\
$d \leftarrow \text{\em FindNumberBranches}(M_t, A_t, \alpha)$ (Sec. \ref{sec:width_sel}) \\
$M_{t+1}, \Theta_{t+1} \leftarrow \text{\em WidenModel}(M_t, \Theta_t, A_t, d)$ (Sec. \ref{sec:widen}) \\
$t \leftarrow t+1$
}
Train model $M_t$ with sufficient iterations, update $\Theta_t$.
$M_f \leftarrow M_t$, $\Theta_f \leftarrow \Theta_t$.
\caption{Training with Adaptive Widening}
\end{algorithm}
\begin{figure}[t]
\begin{center}
\includegraphics[width=3.2in, height=1.2in]{figs/cmp_vgg16.pdf}
\end{center}
\caption{Comparing the thin model with VGG-16. The light-colored blobs show the layers of the VGG-16 architecture, which has an inverse pyramid structure with a width plan of 64-128-256-512-512-4096-4096. The dark-colored blobs show a thin network with $\omega=32$: the convolutional layers all have widths of $32$, and the fully connected layers have widths of $64$.}
\label{fig:cmp_vgg16}
\vspace{-2mm}
\end{figure}
\subsection{Thin Networks and Filter Selection using Simultaneous Orthogonal Matching Pursuit}
\label{sec:somp}
The initial model we use is a thin version of the VGG-16 network. It has the same structure as VGG-16 except for the widths at each layer. We experiment with a range of thin models that are denoted as thin-$\omega$ models. The width of a convolutional layer of the thin-$\omega$ model is the minimum between $\omega$ and the width of the corresponding layer of the VGG-16 network. The width of the fully connected layers are set to $2\omega$. We shall call $\omega$ the ``thinness factor''. Figure \ref{fig:cmp_vgg16} illustrates a thin model side by side with VGG-16.
Using weights from pre-trained models is known to speed up training and improve model generalization. However, the standard direct copy method is only suitable when the source and the target networks have the same architecture (at least for most of the layers). Our adoption of a thin initial model forbids the use of direct copy, as there is a mismatch in the dimensions of the weight matrix (for both the input and output dimensions, see Equation \ref{eqn:within_layer} and discussions). In the literature, a set of general methods for training arbitrarily small networks using an existing larger network and the training data are known as ``knowledge distillation'' \cite{Hinton15,romero2014fitnets}. However, for the limited use case of this work we propose a faster, data-free, and simple yet reasonably effective method.
Let $W^{p,l}$ be the parameters of the pre-trained model at layer $l$ with $d$ rows. For convolutional layers, each row of $W^{p,l}$ represents a vectorized filter kernel. The initialization procedure aims to identify a subset of $d' (<d)$ rows of $W^{p,l}$ to form $W^{0,l}$ (the superscript $0$ denotes initialized parameters for the thin model).
We would like to select the rows that minimize the following objective:
\begin{equation}
\label{eqn:somp}
A^\star, \omega^{\star}(l) = \underset{A \in \mathbb{R}^{d \times d'}, |\omega|=d'}{\arg\min} ||W^{p,l} - AW^{p,l}_{\omega:}||_F,
\vspace{-1mm}
\end{equation}
\noindent where $W^{p,l}_{\omega:}$ is a truncated weight matrix that only keeps the rows indexed by the set $\omega$. This problem is NP-hard, however, there exist approaches based on convex relaxation \cite{tropp2006algorithms} and greedy simultaneous orthogonal matching pursuit (SOMP) \cite{somp} which can produce approximate solutions.
We use the greedy SOMP to find the approximate solution $\omega^{\star}(l)$
which is then used to initialize the parameter matrix of the thin model as $W^{0,l}\leftarrow W^{p,l}_{\omega^{\star}(l):}$.
We run this procedure layer by layer, starting from the input layer. At layer $l$, after initializing $W^{0,l}$, we replace $W^{p,l+1}$ with a column-truncated version that only keeps the columns indexed by $\omega^{\star}(l)$ to keep the input dimensions consistent. This initialization procedure is applicable to both convolutional and fully connected layers. See Algorithm \ref{alg:init}.
\begin{algorithm}[!t]\label{alg:init} \small
\KwIn{The architecture of the thin network $M_0$ with $L$ layers. The pretrained network and its parameters $M_p$, $\Theta_p$. Denote the weight matrix at layer $l$ as $W^{p,l} \in \Theta_p$.}
\KwResult{The initial parameters of the thin network $\Theta_0$.}
\ForEach{$l \in 1, 2, \cdots, L$}{
Find $\omega^{\star}(l)$ in Equation \ref{eqn:somp} by SOMP, using $W^{p,l}$ as weight matrix. \\
$W^{0,l} \leftarrow {W^{p,l}_{\omega^{\star}(l):}}$ \\
$W^{p,l+1} \leftarrow \left((W^{p,l+1})^T_{\omega^{\star}(l):}\right)^T$
}
Aggregate $W^{0,l}$ for $l \in \{1, 2, \cdots, L\}$ to form $\Theta_0$.
\caption{SompInit($M_0$, $M_p$, $\Theta_p$)}
\end{algorithm}
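A compact Python sketch of the greedy SOMP selection in Algorithm~\ref{alg:init} is given below. This is an illustration consistent with Equation~\ref{eqn:somp} rather than a reference implementation; in particular, aggregating the residual correlations with an $\ell_1$ norm is one common SOMP variant and an assumption on our part.
\begin{verbatim}
import numpy as np

def somp_rows(W, d_prime):
    # greedily select d' rows of W whose span best reconstructs all rows
    omega, R = [], W.copy()
    for _ in range(d_prime):
        scores = np.abs(R @ W.T).sum(axis=0)   # correlation with residuals
        scores[omega] = -np.inf                # exclude already-chosen rows
        omega.append(int(np.argmax(scores)))
        B = W[omega]                           # currently selected atoms
        A, *_ = np.linalg.lstsq(B.T, W.T, rcond=None)
        R = W - A.T @ B                        # update residuals
    return sorted(omega)

W = np.random.randn(64, 9)        # e.g. 64 vectorized 3x3 filters
W0 = W[somp_rows(W, d_prime=32)]  # initialization of the thin layer W^{0,l}
\end{verbatim}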
\subsection{Top-Down Layer-wise Model Widening}
\label{sec:widen}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=6.5in, height=1.3in]{figs/Widen2.png}
\end{center}
\caption{Illustration of the widening procedure. {\em Left:} the active layer is at layer $L$, there is one junction with 7 branches at the top. {\em Middle:} The seven branches are clustered into three groups. Three branches are created at layer $L$, resulting in a junction at layer $L-1$. Layer $L-1$ is now the active layer. {\em Right:} Two branches are created at layer $L-1$, making layer $L-2$ now the active layer. At each branch creation, the filters at the newly created junction are initialized by direct copy from the old filter. }
\label{fig:widen}
\end{figure*}
At the core of our training algorithm is a procedure that incrementally widens the current design in a layer-wise fashion.
Let us
introduce the concept of a ``junction''. A junction is a point at which the network splits into two or more independent sub-networks. We shall call such a sub-network a ``branch''. Each branch leads to a subset of prediction tasks performed by the full network. In the context of person attribute classification, each prediction is a {\em sigmoid} unit that produces a normalized confidence score on the existence of an attribute.
We propose to widen the network only at these junctions. More formally, consider a junction at layer $l$ with input $x^l$ and $d$ outputs $\{y^l_i\}_{i=1}^d$. Note that each output is the input to one of the $d$ top sub-networks. Similar to Equation \ref{eqn:within_layer} the within-layer computation is given as
\begin{equation}
\vspace{-1mm}
\label{eqn:widen_high}
y^l_i = \sigma_l(\mathcal{P}(W^l_i) x^l) \quad\quad \mbox{for} \quad i \in [d],
\end{equation}
\noindent where $W^l_i$ parameterizes the connection from input $x^l$ to the $i$'th output $y^l_i$ at layer $l$.
The set $[d]$ is the indexing set $\{1, 2, \cdots, d\}$. A junction is widened by creating new outputs at the layer below. To widen layer $l$ by a factor of $c$, we make layer $l-1$ a junction with $2 \leq c \leq d$ outputs. We use $y^{l-1}_j$ to denote an output in layer $l-1$ (each is an input for layer $l$) and $W^{l-1}_j$ to denote its parameter matrix. All of the newly-created parameter matrices have the same shape as $W^{l-1}$ (the parameter matrix before widening). The single output $y^{l-1}=x^l$ is replaced by a set of outputs $\{y^{l-1}_j\}_{j=1}^{c}$ where
\begin{equation}
\label{eqn:widen_low}
y_j^{l-1} = \sigma_{l-1} (\mathcal{P}(W^{l-1}_j) x^{l-1}) \quad\quad \mbox{for} \quad j \in [c].
\end{equation}
Let $g^l : [d] \to [c]$ be a given grouping function at layer $l$. After widening, the within-layer computation at layer $l$ is given as (cf. Equation \ref{eqn:widen_high})
\begin{equation}
\label{eqn:widened}
y_i^l = \sigma_l(\mathcal{P}(W^l_i) x^l_{g^l(i)}) = \sigma_{l}\left(\mathcal{P}(W^l_i) \sigma_{l-1}(\mathcal{P}(W^{l-1}_{g^l(i)}) x^{l-1})\right)
\end{equation}
\noindent where the latter equality is a consequence of Equation \ref{eqn:widen_high}. The widening operation sets the initial weight for $W^{l-1}_j$ to be equal to the original weight of $W^{l-1}$. It allows the widened network to preserve the functional form of the smaller network, enabling faster training.
To put the widening of one junction into the context of the multi-round progressive model widening procedure, consider a situation where there are $T$ tasks. Before any widening, the output layer of the initial thin multi-task network has a junction with $T$ outputs, each of which is the output of a sub-network (branch). It is also the only junction at initialization. The widening operation naturally starts from the output layer (denoted as layer $l$). It will cluster the $T$ branches into $t$ groups where $t \leq T$. In this manner the widening operation creates $t$ branches at layer $l-1$. The operation is performed recursively in a top-down manner towards the lower layers. Note that each branch will be associated with a subset of tasks. There is a 1-1 correspondence between tasks and branches at the output layer, but the granularity becomes coarser at lower layers. An illustration of this procedure can be found in Figure \ref{fig:widen}.
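In code, one widening step amounts to little more than copying the junction's weight matrix, as in the following schematic (ours):
\begin{verbatim}
import numpy as np

def widen_junction(W_lm1, grouping):
    # grouping: g^l, mapping each of the d branches at layer l to one of
    # c new branches at layer l-1; every copy starts from W^{l-1}, so the
    # widened network initially computes exactly the same function
    c = max(grouping) + 1
    return [W_lm1.copy() for _ in range(c)]

g_l = [0, 0, 1, 2, 2, 2, 1]   # 7 branches clustered into 3 groups
branches = widen_junction(np.random.randn(32, 288), g_l)
\end{verbatim}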
\subsection{Task Grouping based on the Probability of Concurrently Simple or Difficult Examples}
\label{sec:aff}
Ideally, dissimilar tasks are separated starting from a low layer, resulting in less sharing of features. For similar tasks the situation is the opposite. We observe that if an easy example for one task is typically a difficult example for another, intuitively a distinctive set of filters are required for each task to accurately model both in a single network. Thus we define the affinity between a pair of tasks as the probability of observing concurrently simple or difficult examples for the underlying pair of tasks from a random sample of the training data.
To make it mathematically concrete, we need to properly define the notion of a ``difficult'' and a ``simple'' example. Consider an arbitrary attribute classification task $i$. Denote the prediction of the task for example $n$ as $s_i^n$, and the error margin as $m_i^n = |t_i^n - s_i^n|$, where $t_i^n$ is the binary label for task $i$ at sample $n$. Following the previous discussion, it seems natural to set a fixed threshold on $m_i^n$ to decide whether example $n$ is simple or difficult. However, we observe that this is problematic since as the training progresses most of the examples will become simple as the error rate decreases, rendering this measure of affinity useless. An adaptive but universal (across all tasks) threshold is also problematic as it creates a bias that makes intrinsically easier tasks less related to all the other tasks.
These observations lead us to the following approach. Instead of setting a fixed threshold, we estimate the average margin for each task,
$\mathbb{E}\{m_i\}$. We define the indicator variable for a difficult example for task $i$ as $e_i^n = \mathbf{1}_{m_i^n \geq \mathbb{E}\{m_i\}}$.
For a pair of tasks $i$, $j$, we define their affinity as
\begin{eqnarray}
\vspace{-1mm}
\label{eqn:aff}
A(i,j) & = & \mathbb{P}(e_i^n=1, e_j^n=1) + \mathbb{P}(e_i^n=0, e_j^n=0) \nonumber \\
& = & \mathbb{E}\{e_i^n e_j^n + (1-e_i^n)(1-e_j^n)\}.
\vspace{-1mm}
\end{eqnarray}
Both $\mathbb{E}\{m_i\}$ and the expectation in Equation \ref{eqn:aff} can be estimated by their sample averages. Since these expectations are functions of the current neural network model, a naive implementation would require a large number of time-consuming forward passes after every training iteration. As a much more efficient implementation, we instead collect the sample averages from each training mini-batch. The expectations are estimated by computing a weighted average of the within-batch sample averages. To make the estimation closer to the true expectations from the current model, an exponentially decaying weight is used.
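A sketch of this running estimate (our reading of Equation~\ref{eqn:aff}; the decay rate is an assumed value) is given below.
\begin{verbatim}
import numpy as np

class AffinityEstimator:
    def __init__(self, n_tasks, decay=0.99):   # decay rate: an assumption
        self.decay = decay
        self.mean_margin = np.zeros(n_tasks)
        self.affinity = np.zeros((n_tasks, n_tasks))

    def update(self, margins):
        # margins: (batch, n_tasks) array of |t_i^n - s_i^n| for one batch
        self.mean_margin = (self.decay * self.mean_margin
                            + (1 - self.decay) * margins.mean(axis=0))
        e = (margins >= self.mean_margin).astype(float)  # difficult examples
        batch_aff = (e.T @ e + (1 - e).T @ (1 - e)) / len(margins)
        self.affinity = (self.decay * self.affinity
                         + (1 - self.decay) * batch_aff)
\end{verbatim}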
The estimated task affinity is used directly for the clustering at the output layer. This is natural as branches at the output layer have a 1-1 map to the tasks. But at lower layers the mapping is one-to-many, as a branch can be associated with more than one task. In this case, affinity is computed to reflect groups of tasks. In particular, let $k$, $l$ denote two branches at the current layer, where $i_{k}$ and $j_{l}$ denote the $i$-th and $j$-th task associated with each branch respectively. The affinity of the two branches is defined by
\begin{eqnarray}
\vspace{-1mm}
\label{eqn:between_sep}
\tilde{A_b}(k, l) &=& \underset{i_k}{\mbox{mean}}\left(\underset{j_l}{\min}\quad {A(i_{k}, j_{l})}\right) \\
\tilde{A_b}(l, k) &=& \underset{j_l}{\mbox{mean}}\left(\underset{i_k}{\min}\quad {A(i_{k}, j_{l})}\right)
\vspace{-1mm}
\end{eqnarray}
The final affinity score is computed as $A_b(k, l) = (\tilde{A_b}(k, l) + \tilde{A_b}(l, k))/2$.
Note that if branches and tasks form a 1-1 map (the situation at the output layer),
this
reduces to the definition in Equation \ref{eqn:aff}. For branches with coarser task granularity,
$A_b(k, l)$
measures the affinity between two branches by looking at the largest distance (smallest affinity) between their associated tasks.
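The branch-level score is then a direct transcription of Equations~\ref{eqn:between_sep}, e.g.:
\begin{verbatim}
import numpy as np

def branch_affinity(A, tasks_k, tasks_l):
    # mean over one branch's tasks of the minimum affinity to the other
    # branch's tasks, then symmetrized over the two orderings
    sub = A[np.ix_(tasks_k, tasks_l)]
    return 0.5 * (sub.min(axis=1).mean() + sub.min(axis=0).mean())
\end{verbatim}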
\subsection{Complexity-aware Width Selection}
\label{sec:width_sel}
The number of branches to be created determines how much wider the network becomes after a widening operation. This number is determined by a loss function that balances complexity and the separation of dissimilar tasks to different branches. For each number of clusters $1\leq d \leq c$, we perform spectral clustering to get a grouping function $g_d: [d] \to [c]$ that associates the newly created branches with the $c$ old branches at one layer above. At layer $l$ the loss function is given by
\begin{equation}
\label{eqn:cost_widen}
L^l(g_d) = (d-1) L_0 2^{p_l} + \alpha L_s(g_d)
\end{equation}
\noindent where $(d-1) L_0 2^{p_l}$ is a penalty term for creating branches at layer $l$, $L_s(g_d)$ is a penalty for separation. $p_l$ is defined as the number of pooling layers above the layer $l$ and $L_0$ is the unit cost for branch creation. The first term grows linearly with the number of branches, with a scalar that defines how expensive it is to create a branch at the current layer (which is heuristically set to double after every pooling layers). Note that in this formulation a larger $\alpha$ encourages the creation of more branches. We call $\alpha$ the branching factor. The network is widened by creating the number of branches that minimizes the loss function, or ${g_d^{l}}^{\star} = \underset{g_d}{\arg\min}\quad {L^l(g_d)}$.
The separation term is a function of the branch affinity matrix $A_b$. For each $i \in [d]$, we have
\begin{equation}
\vspace{-2mm}
\label{eqn:within_sep}
L_s^{i}(g_d) = 1 - \underset{k \in g^{-1}(i)}{\mbox{mean}}\left(\underset{l \in g^{-1}(i) }{\min}{A_b(k,l)}\right),
\end{equation}
\noindent and the separation cost is the average across each newly created branches
\begin{equation}
\vspace{-2mm}
L_s(g_d) = \frac{1}{d} \underset{i \in [d]}{\sum} L_s^i(g_d).
\end{equation}
Note Equation \ref{eqn:within_sep} measures the maximum distances (minimum affinity) between the tasks within the same group. It penalizes cases where very dissimilar tasks are included in the same branch.
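Putting the pieces together, the width selection can be sketched as follows; the use of scikit-learn's spectral clustering on the precomputed affinities is our choice of backend (the text only prescribes spectral clustering), and the default $L_0$ and $\alpha$ below are placeholders.
\begin{verbatim}
import numpy as np
from sklearn.cluster import SpectralClustering

def separation_cost(A_b, labels, d):
    costs = []
    for i in range(d):
        members = np.where(labels == i)[0]
        sub = A_b[np.ix_(members, members)]
        costs.append(1.0 - sub.min(axis=1).mean())  # within-group penalty
    return float(np.mean(costs))

def select_width(A_b, p_l, L0=1.0, alpha=2.0):
    c = A_b.shape[0]
    best_loss, best_labels = np.inf, np.zeros(c, dtype=int)
    for d in range(1, c + 1):
        if d == 1:
            labels = np.zeros(c, dtype=int)
        elif d == c:
            labels = np.arange(c)
        else:
            labels = SpectralClustering(
                n_clusters=d, affinity='precomputed').fit_predict(A_b)
        loss = (d - 1) * L0 * 2.0**p_l + alpha * separation_cost(
            A_b, labels, d)
        if loss < best_loss:
            best_loss, best_labels = loss, labels
    return best_labels   # the minimizing grouping g_d at this layer
\end{verbatim}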
\section{Experiments}
\begin{table*}
\begin{center} \small
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\bf Method & \bf Accuracy (\%) & \bf Top-10 Recall (\%) & \bf Test Speed (ms) & \bf Parameters (millions) & \bf Jointly? \\ \hline
LNet+ANet & 87 & N/A & $+$ & $+$ & No \\
Walk and Learn & 88 & N/A & $+$ & $+$ & No \\
MOON & 90.94 & N/A & $\approx 33^*$ & 119.73 & No \\ \hline
Our VGG-16 Baseline & 91.44 & 73.55 & 33.2 & 134.41 & No \\
Our Low-rank Baseline & 90.88 & 69.82 & 16.0 & 4.52 & No \\
Our Baseline-thin-32 & 89.96 & 65.95 & 5.1 & 0.22 & No \\ \hline
Our Branch-32-1.0 & 90.74 & 69.95 & 9.6 & 1.49 & No \\
Our Branch-32-2.0 & 90.90 & 71.08 & 15.7 & 2.09 & No \\
Our Branch-64-1.0 & 91.26 & 72.03 & 15.2 & 4.99 & No \\ \hline
Our Joint Branch-32-2.0 & 90.4 & 68.72 & 10.01 & 3.25 & Yes \\
Our Joint Branch-64-2.0 & 91.02 & 71.38 & 16.28 & 10.53 & Yes \\
\hline
\end{tabular}
\end{center}
\caption{Comparison of accuracy, speed and compactness on CelebA test set. LNet+ANet and Walk and Learn results are cited from \cite{wang2016walk}. MOON results are cited from \cite{rudd2016moon}. $+$: There is no reported number to cite. $*$: MOON uses the VGG16 architecture, thus its test time should be similar to our VGG-16 baseline.}
\label{tab:celeba_complexity}
\end{table*}
\begin{table*}
\begin{center} \small
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\bf Method & \bf Top-3 Accuracy (\%) & \bf Top-5 Accuracy (\%) & \bf Test Speed (ms) & \bf Parameters (millions) & \bf Jointly? \\ \hline
WTBI & 43.73 & 66.26 & $+$ & $+$ & No \\
DARN & 59.48 & 79.58 & $+$ & $+$ & No \\
FashionNet & ${82.58}^{\#}$ & ${90.17}^{\#}$ & $\approx 34^*$ & $\approx 134^*$ & No \\ \hline
Our VGG-16 Baseline & 86.72 & 92.51 & 34.0 & 134.45 & No \\
Our Low-rank Baseline & 84.14 & 90.96 & 16.34 & 4.52 & No \\ \hline
Our Joint Branch-32-2.0 & 79.91 & 88.09 & 10.01 & 3.25 & Yes \\
Our Joint Branch-64-2.0 & 83.24 & 90.39 & 16.28 & 10.53 & Yes \\
\hline
\end{tabular}
\end{center}
\caption{Comparison of accuracy, speed and compactness on Deepfashion test set. WTBI and DARN results are cited from \cite{liu2016deepfashion}. The experiments are reportedly performed in the same condition on the FashionNet method and tested on the DeepFashion test set. $+$: There is no reported number to cite. $*$: There is no reported number, but based on the adoption of VGG-16 network as base architecture they should be similar to those of our VGG-16 baseline. $\#$: The results are from a network jointly trained for clothing landmark, clothing attribute and clothing categories predictions. We cite the reported results for clothing category \cite{liu2016deepfashion}.}
\label{tab:deepfashion_complexity}
\vspace{-3mm}
\end{table*}
We perform an extensive evaluation of our approach on person attribute classification tasks. We use CelebA \cite{liu2015deep} dataset for facial attribute classification tasks and Deepfashion \cite{liu2016deepfashion} for clothing category classification tasks.
CelebA consists of images of celebrities labeled with 40 attribute classes. Most images also include the torso region in addition to the face. Our models are evaluated using the standard classification accuracy (average of classification accuracy rate over all attribute classes) and the top-10 recall rate (proportion of correctly retrieved attributes from the top-10 prediction scores for each image). Top-10 is used as there are on average about 9 positive facial attributes per image on this dataset.
DeepFashion is richly labeled with 50 categories of clothes, such as ``shorts'', ``jeans'', ``coats'', etc. (the labels are mutually exclusive). Faces are often visible on these images. We evaluate top-3 and top-5 classification accuracy to directly compare with benchmark results in \cite{liu2016deepfashion}.
\subsection{Comparison with the State of the art}
We establish three baselines. The first baseline is a VGG-16 model initialized from a model trained on imdb-wiki gender classification \cite{imdb}. The second baseline is a low-rank model with low-rank factorization at all layers. This model is also initialized from the imdb-wiki gender pretrained model, but the initialization is through truncated Singular Value Decomposition (SVD) \cite{denton2014exploiting}. The number of basis filters is 8-16-32-64-64 for the convolutional layers, 64-64 for the two fully-connected layers and 16 for the output layer. The third is a thin model initialized using the SOMP initialization method introduced in Section \ref{sec:somp}, using the same pre-trained model. Our VGG-16 baselines are stronger than all previously reported methods, while the low-rank baseline closely matches the state-of-the-art while being faster and more compact. The thin baseline is up to 6 times faster and 500 times more compact than the VGG-16 baseline, but still reasonably accurate.
We find several contributing factors to the strength of our baselines. Firstly, the choice of pre-trained model is critical. Most recent works use the VGG face descriptor, whereas in our work we use the pre-trained model from imdb-wiki \cite{imdb-wiki}. For the thin baseline, it is also important to use Batch Normalization (BN) \cite{ioffe2015batch}. Without the adoption of BN layers the training error ceases to decrease after a small number of training iterations. We observe this phenomenon in both random initialization and SOMP initialization.
A comparison of the models generated by our adaptive widening algorithm with baseline results are shown in Table \ref{tab:celeba_complexity} and \ref{tab:deepfashion_complexity}. Our ``branching'' models achieves similar or better accuracy compared to these state-of-the-art methods, while being much more compact and faster.
\subsection{Cross-domain Training of Joint Person Attribute Network}
\vspace{-1mm}
To examine the ability of our approach in handling cross-domain tasks, we train a network that jointly predicts facial and clothing attributes. The model is trained on the union of the two training sets. Note that the CelebA dataset is not annotated with clothing labels, and the Deepfashion dataset is not annotated with facial attribute labels. To augment the annotations for both datasets, we use the predictions provided by the baseline VGG-16 models as soft training targets. We demonstrate that the joint model is comparable to the state-of-the-art on both facial and clothing tasks, while being a single combined model that is much more efficient than two separate models. The comparison between the joint models and the baselines is shown in Table \ref{tab:celeba_complexity} and \ref{tab:deepfashion_complexity}.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=6.8in]{figs/visual_grouping.pdf}
\end{center}
\caption{The actual task grouping in the Branch-32-2.0 model on CelebA. Upper: fc7 layer. Lower: fc6 layer. Other layers are omitted.}
\label{fig:visual_grouping}
\end{figure*}
\vspace{-1mm}
\subsection{Visual Validation of Task Grouping}
We visually inspect the task groupings in the generated model. Figure \ref{fig:visual_grouping} displays the actual task grouping in the Branch-32-2.0 model trained on CelebA. The groupings are often highly intuitive. For instance, ``5-o-clock Shadow'', ``Bushy Eyebrows'' and ``No Beard'', which all describe some form of facial hair, are grouped. The cluster with ``Heavy Makeup'', ``Pale Skin'' and ``Wearing Lipstick'' is clearly related. Groupings at lower layers are also sensible. As an example, the group ``Bags Under Eyes'', ``Big Nose'' and ``Young'' are joined by ``Attractive'' and ``Receding Hairline'' at fc6, probably because they all describe age cues. This is particularly interesting as no human intervention is involved in model generation.
\begin{figure}[t]
\begin{center}
\includegraphics[width=3.6in]{figs/grouping.eps}
\end{center}
\caption{The reduction in accuracy when changing the task grouping to favor grouping of dissimilar tasks. A positive number suggests a reduction in accuracy when changing from original to the new grouping. This figure shows our automatic grouping strategy improves accuracy for most tasks. }
\label{fig:task_group}
\end{figure}
\vspace{-1mm}
\subsection{Ablation Studies}
\vspace{-1mm}
{\bf What are the advantages of grouping similar tasks?} We shuffle the correspondence between training targets and the outputs of the network for the ``Branch-32-2.0'' model from CelebA and report the reduction in accuracy for each task. Both random and manual shuffling are tested, but we only report the results from manual shuffling as they are similar. In particular, for manual shuffling we choose a new grouping of tasks so that the network separates many tasks that are originally in the same branch. Figure \ref{fig:task_group} summarizes our findings. Clearly, grouping tasks according to similarity improves accuracy for most tasks.
Closer examination yields other interesting observations. The three tasks that actually benefit significantly from the shuffling (unlike most of the tasks), namely ``wavy hair'', ``wearing necklace'' and ``pointy nose'', all come from the branch with the largest number of tasks. This is sensible, as after the shuffling they are no longer forced to share filters with many other tasks. However, other tasks from the same branch, namely ``black hair'' and ``wearing earrings'', benefit significantly from the original grouping. One possible explanation is that while grouping similar tasks allows them to benefit from multi-task learning, some tasks are intrinsically more difficult and require a wider branch. Our current design lacks the ability to change the width of a branch, which is an interesting direction for future work.
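For clarity, the shuffling operation itself is just a column permutation of the label matrix before retraining the output heads; a minimal sketch with a random permutation is shown below (our reported numbers use a manually chosen permutation instead).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
perm = rng.permutation(40)   # new task-to-output assignment (40 CelebA tasks)

def shuffle_targets(Y):      # Y: (batch, 40) attribute label matrix
    return Y[:, perm]        # remap label columns, then retrain
\end{verbatim}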
\begin{table}
\begin{center} \small
\begin{tabular}{|l|c|c|}
\hline
\bf Method & \bf Accuracy (\%) & \bf Top-10 Recall (\%) \\ \hline
w/ pre-trained & -0.54 & -2.47 \\
w/o pre-trained & -0.65 & -3.77 \\
\hline
\end{tabular}
\end{center}
\caption{Accuracy gap with and without initialization from a pre-trained model, defined as the accuracy of Branch-32-2.0 minus that of the VGG-16 baseline.}
\label{tab:capacity}
\vspace{-4mm}
\end{table}
{\bf Sub-optimal use of the pretrained network, or smaller capacity?} The gap in accuracy between Branch-32-2.0 and the VGG-16 baseline can be caused by sub-optimal use of the pretrained model or by the intrinsically smaller capacity of the former. To determine whether both factors contribute to the gap, we compare training the Branch-32-2.0 model and VGG-16 from scratch on CelebA. As neither model then benefits from the information in a pre-trained network, we would expect a much smaller gap in accuracy if sub-optimal use of the pretrained model were the main cause. Our results, summarized in Table \ref{tab:capacity}, suggest that the smaller capacity of the Branch-32-2.0 model is likely the main reason for the accuracy gap.
\begin{figure}[t]
\begin{center}
\includegraphics[width=3.0in]{figs/somp.eps}
\end{center}
\caption{Comparison of training progress with and without SOMP initialization. The model using SOMP initialization clearly converges faster and to a better optimum. }
\label{fig:somp}
\vspace{-4mm}
\end{figure}
{\bf How does SOMP help the training?} We compare training with and without this initialization using the Baseline-thin-32 model on CelebA, under identical training conditions. The evolution of the training and validation accuracies is shown in Figure \ref{fig:somp}. The network initialized with SOMP clearly converges faster and to a better optimum than the one without it.
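For reference, we include below a hedged sketch of a generic greedy SOMP-style filter selection, which picks a subset of pretrained filters whose span best reconstructs the full filter bank; the details of our implementation may differ.
\begin{verbatim}
import numpy as np

def somp_select(W, k, eps=1e-12):
    # W: (d, N) pretrained weight matrix with filters as columns;
    # greedily select k columns, re-projecting the residual each round
    Wn = W / (np.linalg.norm(W, axis=0, keepdims=True) + eps)
    support, R = [], W.copy()
    for _ in range(k):
        scores = np.linalg.norm(Wn.T @ R, axis=1)  # correlation with residual
        scores[support] = -np.inf                  # never reselect a filter
        support.append(int(np.argmax(scores)))
        A = W[:, support]
        coef, *_ = np.linalg.lstsq(A, W, rcond=None)
        R = W - A @ coef                           # residual after projection
    return support

# e.g. initialise a thin layer with 32 of the pretrained filters:
# W_thin = W_pretrained[:, somp_select(W_pretrained, 32)]
\end{verbatim}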
\vspace{-1mm}
\section{Conclusion}
\vspace{-1mm}
We have proposed a novel method for learning the structure of compact multi-task deep neural networks. Our method starts with a thin network and expands it during training by means of a novel multi-round branching mechanism, which determines, for each layer of the network, which tasks share features, while penalizing the complexity of the model. We demonstrated compelling results of the proposed approach on the problem of person attribute classification. As future work, we plan to adapt our approach to other related problems, such as incremental learning and domain adaptation.
{\small
\bibliographystyle{ieee}
\section{Introduction}
The success of 2D spectroscopy derives from its ability to correlate electronic transitions at ultrafast time delays, by achieving high resolutions both in frequency and in time.\cite{10.1088/978-0-750-31062-8,doi:10.1146/annurev.physchem.54.011002.103907,doi:10.1002/andp.201300153,GELZINIS2019271,mukamel1995principles} By portraying the data in a 2D map, coupling between states may be revealed and dark states can be exposed via their coupling to bright states. Moreover, the two dimensions separate homogeneous broadening and inhomogeneous broadening into perpendicular directions, thereby untangling fast fluctuations from the static or slowly fluctuating energy levels.\cite{doi:10.1063/1.3613679} By investigating the time evolution of the spectra, information about the dynamics of the system, including vibrations, relaxation and dephasing, can be obtained.
In multi-pulse spectroscopy, the possibilities explode as the complexity of the experiment increases. However, some designs are more broadly applicable and meaningful and thus establish themselves as routine measurements in labs around the world. As Pump-Probe became the go-to experiment in two-pulse spectroscopy, 2D spectroscopy is becoming the standard measurement with three pulses.
Pump-Probe, which is the simplest spectroscopic technique used to study ultrafast electronic dynamics, measures the frequency spectrum for a range of pump-to-probe times, and thus provides information on the transient absorption. The extra pulse in 2D spectroscopy enables two frequency spectra to be resolved, one for excitation and one for detection, which are plotted against each other for a range of pulse delays in order to time-resolve the correlation between excitation and detection frequencies.
2D spectroscopy aims to measure the third-order response of a quantum system to an electric field. By clever design of pulses, this goal can be achieved with a high fidelity. First and foremost, the width of the pulse should be short to enhance the time resolution. Equally important is the phase of the pulse which is precisely controlled in order to separate the third-order signal from other orders.
Since its inception, 2D spectroscopy has employed heterodyne-detection (HD) to record the amplitude and the phase of the signal.\cite{10.1088/978-0-750-31062-8,doi:10.1146/annurev.physchem.54.011002.103907,doi:10.1002/andp.201300153,GELZINIS2019271,hamm_zanni_2011} In HD, the signal is measured by interference with a much stronger electric field, a local oscillator. As HD 2D spectroscopy (HD2D) has matured, so has the understanding of the relationship between the underlying dynamics of the sample and their spectral features. The technique is most widely known for shedding new light on quantum coherent transport in photosynthesis, particularly the hotly debated oscillations in 2D spectra and their origin.\cite{Engel2007,doi:10.1063/1.4846275,Duan_2015,Duan8493} However, HD2D has been shown to be a broadly applicable method.
In recent years, an alternative approach to measure the third-order signal has gained a lot of interest, namely fluorescence-detected 2D spectroscopy (FD2D). Until recently, FD2D was primarily a proof-of-concept method, focusing on experimental developments,\cite{doi:10.1063/1.2800560,doi:10.1063/1.4874697,Draeger:17,Goetz:18} but it is now establishing itself as an incisive tool with real applications.\cite{Tiwari:18,Tiwari2018,doi:10.1063/1.5046645,C9SC01888C} In the following we focus on comparing HD2D and FD2D; for a broader perspective on multidimensional ultrafast spectroscopy, references \citenum{hamm_zanni_2011}, \citenum{doi:10.1098/rsos.171425} and \citenum{SONG2018184} are excellent resources.
In this work, we compare the two detection schemes to better understand what they have in common and where they differ. In order not to limit the study to idealised experiments, we chose to go beyond the double-sided Feynman diagrams\cite{mukamel1995principles,doi:10.1063/1.4973975} and simulate the spectra non-perturbatively.\cite{BRUGGEMANN2007192,Br_ggemann_2011} This approach enables more aspects of the experiments to be covered,\cite{Richtereaar7697,Smallwood:17,Perlik:17,doi:10.1063/1.4985888} such as the pulse duration and the pulse amplitude, which are another key focus of our study. However, other sources of divergence between the detection methods could also be interesting to investigate, e.g.\ dephasing and relaxation mechanisms, transition strengths and quantum yields, the effect of vibrations, and the energy-level structure.
\section{Background}
A great advantage of HD2D is that the third-order signal is spatially separated into three spots depending on the type of interaction that occurred between the sample and the electric pulse, with similar phase evolutions interfering constructively in these directions. For historical reasons, these contributions are named the rephasing (R), the nonrephasing (NR) and the double-quantum coherence (DQC) signals.
In FD2D, a fourth pulse, similar to the first three, replaces the local oscillator of the conventional experiment, with the effect that the desired third-order signal is encoded into the excited-state populations, from which fluorescence is collected during an acquisition time. The modulation of the integrated fluorescence, as a function of the pulse intervals, is then Fourier transformed to give the 2D spectra which, as for HD2D, can be separated into the R, NR and DQC contributions.
Other actions, such as photoelectron or -ion emission\cite{PhysRevA.92.053412,C5CP03868E} or photocurrent,\cite{Karki2014} could also be recorded to produce 2D spectra with complementary information, but we devote our attention to FD2D as it is currently more popular and also because it affords a more direct comparison to HD2D.
Detecting the fluorescence instead of the polarisation has a number of potential benefits. The most known advantage is that signals from small volumes, in principle single molecules,\cite{Liebel2018,Tiwari:18} can be used to generate 2D spectra, whereas the conventional technique requires sample sizes larger than the wavelength of the pulse. This opens up the possibility to pierce through the ensemble and study isolated systems or variations across a sample.\cite{Tiwari2018}
In addition, inherent differences between the two detection schemes affect the selection of interaction pathways, which in turn give rise to discrepancies in the spectra. By contrasting HD and FD 2D spectra, it is possible to infer new information which would otherwise be inconclusive with either detection method. Karki et al. compared the HD and FD DQC spectra of LH2 and were thus able to deduce that the initial excitation is shared between the two bacteriochlorophyll rings, contrary to the generally accepted picture up until that point.\cite{C9SC01888C}
Recently, Maly and Mancal suggested that the acquisition time, which has no analogue in HD2D, could be varied in order to isolate specific contributions to the total signal, e.g. the exciton-exciton annihilation.\cite{Maly2018} This opens up a new window into the underlying dynamics, but long measurement times increase the incoherent mixing of linear signals arising from nonlinear population dynamics.\cite{doi:10.1063/1.4994987} However, careful analysis can differentiate between true nonlinear signals and incoherently mixed linear signals.\cite{doi:10.1021/acs.jpca.9b01129}
Experimentally, FD2D is currently more demanding than its counterpart, partly because two pulse delays need to be scanned instead of one. Moreover, the labs which have implemented the fluorescence-detection setup are relatively few, and do not yet enjoy the same level of experience and literature to draw upon. However, the increasing rate of published articles suggests that FD2D is leaving its infancy and is becoming a powerful addition to the toolbox.
Only a handful of papers have tackled the theory side of FD2D.\cite{Perdomo-Ortiz2012,doi:10.1021/acs.jpca.9b01129,PhysRevA.96.053830,Maly2018,khn2019interpreting} Using the double-sided Feynman diagrams derived from perturbation theory, it is readily shown that each diagram from the traditional method has an equivalent diagram in fluorescence detection, which in addition has an extra set of excited state absorption (ESA) diagrams.\cite{Perdomo-Ortiz2012} Although this knowledge forms an important basis for the interpretation of FD2D spectra, the differences do not end there.
Knowing how challenging it can be to analyse traditional 2D spectra, where the various contributions to the signal are well understood, it is of great interest to explore all possible ways that FD2D spectra can deviate from its counterpart, in order to fully unlock the potential of the method.
In order to enhance the comparison between HD and FD, and to clarify the interpretation of the 2D spectra, we make some simplifications. Whereas the FD2D experiment requires two pulse intervals to be scanned, and the HD2D version only one, we chose to scan both in our simulations. Also, the coherent detection in the HD experiment is performed by taking the instantaneous expectation value of the transition dipole moment; we do not model the local oscillator with a nonequilibrium Green's function QED approach.\cite{PhysRevA.77.022110} Both of the above-mentioned choices are in line with how perturbative simulations using double-sided Feynman diagrams are typically performed.
Moreover, as FD2D only requires a single absorber to generate spectra, we do not include a distribution of energy levels as a source of inhomogeneous broadening. For the same reason, we disregard the vector nature of the transition dipole moments and treat them as scalars. To promote conditions as identical as possible, we do the same for the HD2D simulations, which \emph{do} require an ensemble of absorbers; in our case, however, the absorbers differ only in their positions.
In our simulations, we use the rotating frame instead of the laboratory frame or the quasi-rotating frame.\cite{Kramer:16} We also sample uniformly, opting for simplicity rather than trying to reduce the computational cost with non-uniform sampling.\cite{doi:10.1063/1.4976309,wang2019compressed} We do not investigate the polarisation dependency, which is sometimes exploited to select specific pathways\cite{Zhang14227,Stone1169,Mueller2018,Thyrhaug2018,Kramer2020}, but it is known that the early-time dynamics suffer from incorrect pulse-ordering artifacts.\cite{doi:10.1063/1.5079817}
We assume non-interacting chromophores in our calculations, although the delocalisation of excitation energy in many cases is unexpectedly long-range\cite{C8CP05851B} and its role in energy transfer is a hot topic in the field.\cite{Strumpfer2012,doi:10.1063/1.5046645,C5CP06491K}
\section{Theory}
First we introduce the theoretical and computational methods used throughout the article, starting with the evolution of a quantum system interacting semiclassically with an electric field. We then describe the equations used to detect the third-order signal and construct the 2D spectra for both heterodyne and fluorescence detection. Lastly, we explain the model system which we use in our simulations.
\subsection{The quantum dynamics of a system interacting with an electric field}
The vast majority of 2D spectroscopy models employ the semiclassical approximation, where the state of the system is described quantum mechanically, but the field, $E(\mathbf{r},t)$, is described classically:
\begin{equation}
E(\mathbf{r},t) = \sum_n A_n(t-t_n) \mathrm{exp}(-i\omega (t-t_n) + i\mathbf{k}_n\cdot \mathbf{r} - i\phi_n)
\end{equation}
\noindent Here, $A_n(t-t_n)$ describes the envelope of the $n$th pulse centred at $t_n$; $\omega$ is the frequency of the field; $\mathbf{k}_n$ is the wave vector of the $n$th pulse; and $\phi_n$ is the phase angle of the $n$th pulse.
The coupling between the quantum state and the classical field is given by the interaction Hamiltonian
\begin{equation}
H_{\mathrm{int}}(t) = -\mu E(t)
\end{equation}
\noindent where $\mu$ is the transition dipole moment operator, which is assumed to be a scalar for simplicity. For an otherwise isolated quantum system, the dynamics obeys the time-dependent Schr\"odinger equation
\begin{equation} i\hbar \frac{\partial}{\partial t} | \Psi (\mathbf{r}, t) \rangle = [ H_0 + H_{int}(t)] | \Psi (\mathbf{r}, t) \rangle \equiv H(t) | \Psi (\mathbf{r}, t) \rangle
\end{equation}
\noindent where $H_0$ is the system Hamiltonian in the absence of the field. In the condensed phase, however, the fluctuations in the quantum system's environment perturb the system sufficiently that an isolated-quantum-system theory is inadequate to describe the dynamics, as it cannot include relaxation and dephasing processes. To accommodate these effects, an open-quantum-system approach is needed. For the sake of simplicity, we use the Lindblad equation to propagate the state, which is now represented by the reduced density operator, $\rho(t)$.\cite{Lindblad1976}
\begin{align}\label{eq:generalLindblad}
\frac{d}{dt}\rho(t) = &\frac{i}{\hbar}[\rho(t),H(t)] + \sum_{j=1}^{N}\Gamma_j \big \{L_j\rho(t)L_j^\dagger - \frac{1}{2}[\rho(t)L_j^\dagger L_j + L_j^\dagger L_j \rho(t)]\big \} \\ \equiv & \mathcal{L}[\rho (t)] \nonumber
\end{align}
\noindent where an off-diagonal $L_j = a_m^\dagger a_n$ operator represents spontaneous relaxation or excitation, a diagonal $L_j$ operator represents a dephasing process, and $N$ denotes the number of different decoherence channels. For a time-independent Liouvillian, the formal solution of equation (\ref{eq:generalLindblad}) is $\rho(t) = e^{\mathcal{L}t}\rho(0)$.
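For concreteness, the following minimal Python sketch (with $\hbar = 1$) builds the Liouvillian of equation (\ref{eq:generalLindblad}) as a matrix acting on the column-stacked density matrix. It is illustrative rather than the code used for our results; during the pulses, where $H(t)$ is time dependent, the same construction is applied with the Hamiltonian frozen over short time steps.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def liouvillian(H, jump_ops, rates):
    # Lindblad generator as a superoperator on the column-stacked rho,
    # using vec(A X B) = (B^T kron A) vec(X)
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for Lj, g in zip(jump_ops, rates):
        LdL = Lj.conj().T @ Lj
        L += g * (np.kron(Lj.conj(), Lj)
                  - 0.5 * (np.kron(I, LdL) + np.kron(LdL.T, I)))
    return L

def propagate(rho0, Lsup, t):
    vec = rho0.reshape(-1, order='F')            # column stacking
    return (expm(Lsup * t) @ vec).reshape(rho0.shape, order='F')
\end{verbatim}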
Having established the dynamical equations for an open quantum system interacting with a classical field, we proceed to discuss the different detection schemes for the third-order signal. Traditionally, heterodyne-detection has been used, where the emitted light is interfered with a local oscillator and the electric field is measured. More recently, fluorescence-detection has become an increasingly viable and attractive method, as technological advances push towards single-molecule 2DES experiments, exploiting the high sensitivity of fluorescence detectors.
\subsection{Heterodyne-Detected 2DES}
Heterodyne-Detected 2DES employs phase matching to isolate the third-order response signal. In short, three pulses impart their unique wave vectors to the system, thus causing the radiated fields from individual emitters to constructively interfere in the direction of the vector sum $\mathbf{k}_{signal} = \mp \mathbf{k}_{1} \pm \mathbf{k}_{2} + \mathbf{k}_{3}$, where the upper combination corresponds to the rephasing signal and the lower combination to the nonrephasing signal. A third (possible) phase matching direction, $\mathbf{k}_{\mathrm{DQC}} = \mathbf{k}_{1} + \mathbf{k}_{2} - \mathbf{k}_{3}$, produces the so-called double-quantum coherence signal, but this will not be considered in our study.
Importantly, a detectable signal in the phase-matched directions relies on an ensemble of oscillating dipoles to coherently add up to a macroscopic polarisation in the sample. While this enables high signal-to-noise ratios, it also puts a lower bound on the spatial resolution: $\gtrsim \lambda ^3$, where $\lambda$ is the radiation wavelength.
Each optically active molecular system, $j$, picks up a unique phase factor by virtue of its position, $\mathbf{r}_j$, since the electric field is given by
\begin{equation} E_j(t) = E_0\sum_{n=1}^3 \mathrm{exp}\bigg [-\frac{(t-t_n)^2}{2\sigma^2}\bigg ]\mathrm{exp}(i\omega t - i \mathbf{k}_n\cdot \mathbf{r}_j)
\end{equation}
\noindent where a Gaussian envelope function of width $\sigma$ is used and the $\phi_n$ phase has been dropped as it is not necessary for phase matching. The electric field couples to the transition dipole moment of the system, which makes the $\mathrm{exp}(- i \mathbf{k}_n\cdot \mathbf{r}_j)$ phase factor act as a book-keeper of the transitions. By using linearly independent $\mathbf{k}$-vectors for each of the three pulses, it is possible to differentiate which transitions were caused by which pulses.
However, averaging over the polarisation of the individual molecules will cause the third-order signal to vanish due to the random phases, unless the polarisation is measured in one of the phase-matching directions. That is, to read out the desired signal, the wave vector of the local oscillator, $\mathbf{k}_{signal}$, must be chosen such that it simultaneously cancels the phase factors absorbed by all molecular systems for the relevant pathways. For 2DES, we are interested in the part of the response which stems from a single interaction with each of the three pulses. This reduces the number of phase-matching directions to three: $- \mathbf{k}_{1} + \mathbf{k}_{2} + \mathbf{k}_{3}$, $\mathbf{k}_{1} - \mathbf{k}_{2} + \mathbf{k}_{3}$ and $\mathbf{k}_{1} + \mathbf{k}_{2} - \mathbf{k}_{3}$. A schematic of the phase matching method is shown in figure \ref{fig:HDFDsetup}(a).
\begin{figure}
\includegraphics[width=0.5\textwidth]{HDreiterated.pdf}
\put(-230,155){(a)}
\includegraphics[width=0.5\textwidth]{FDreiterated.pdf}
\put(-210,155){(b)}
\caption{\label{fig:HDFDsetup}(a) Heterodyne-detection of two-dimensional electronic spectra employs a local oscillator (LO) to read out the macroscopic polarisation generated by three interactions with the electric field. In the non-collinear geometry, linearly independent wave vectors imprint unique phases with each photo-excitation and de-excitation. The selection of specific third-order processes, such as rephasing and nonrephasing, is achieved through phase-matching, in which the wave vector of the LO is a combination of the previous wave vectors, but with the necessary signs flipped to reflect the order of (de-)excitation for the respective processes. (b) Fluorescence-detection of two-dimensional electronic spectra is performed in a collinear setup. Instead of measuring the polarisation, fluorescence is detected up to a cut-off acquisition-time. The integrated fluorescence as a function of the pulse delays constitutes the signal. However, the selection of third-order processes requires each pulse train to be executed 27 times with phases cycled. Subsequent Fourier transforms with respect to the distinct phase evolutions of rephasing and nonrephasing processes, produce the 2D spectra which are related to, but not equivalent to, the heterodyne-detected 2D spectra.}
\end{figure}
To achieve the phase-matching in a non-perturbative calculation, the dynamics of each individual molecule $j$ must be computed separately using equation~\ref{eq:generalLindblad}. Here, we assume that the individual molecules evolve independently and that each molecule is initially in its ground state. For simplicity, we employ identical Hamiltonian and Lindblad operators on each individual quantum system. A generalisation to a distribution of Hamiltonians and Lindbladians is straightforward and with no added computational cost, however, this would be a source of both inhomogeneous and homogeneous broadening which could potentially make it more difficult to interpret (the differences between) the HD and FD 2D spectra.
The polarisation in the chosen phase-matching direction is given by:
\begin{equation} \label{eq:detectPolarisation}
P(\mathbf{k}_{signal},\tau, T, t) = 2\mathrm{Re}\sum_j \mathrm{exp}(i \mathbf{k}_{signal} \cdot \mathbf{r}_j)\sum_{\alpha < \beta} \mu_{\alpha \beta}^{(j)} \rho_{\beta \alpha}^{(j)} (E_j;t)
\end{equation}
\noindent The last step is to perform a two-dimensional Fourier transform from the time domain to the frequency domain:
\begin{equation} S_{\mathrm{HD2D}}(\mathbf{k}_{signal},\omega_\tau, T, \omega_t) = -i \int_{0}^{\infty} \mathrm{d}\tau \mathrm{exp}(\pm i \omega_\tau \tau ) \int_{0}^{\infty} \mathrm{d}t \mathrm{exp}(i \omega_t t) P(\mathbf{k}_{signal},\tau, T, t)
\end{equation}
\noindent where ``+'' is used for the nonrephasing and double-quantum coherence signals and ``-'' is used for the rephasing signal. In the following, only the real parts of the signals will be investigated, as this information is far more often used in studies; the imaginary part is mostly neglected.
\subsection{Fluorescence-Detected 2DES}
Instead of detecting the third-order polarisation with a local oscillator, the integrated fluorescence intensity can be used to report on the nonlinear response of the sample.
Fluorescence-detection is extremely sensitive, which allows for studies of single molecules. As fluorescence is an incoherent process, no phase-matching condition can exist, and there is therefore no need to perform the experiment in the noncollinear setup that is standard for heterodyne-detected 2DES. The advantages of the collinear geometry are that it facilitates rapid data acquisition and that it is inherently phase stable. Two major drawbacks are the need to scan two coherence times and the need to scan a number of different pulse phases for each unique combination of pulse delays, in order to extract the desired rephasing and nonrephasing signals. The latter requirement is met by two similar but distinct methods known as phase modulation,\cite{doi:10.1063/1.2800560} which will not be discussed here, and phase cycling.\cite{Tian1553}
\subsubsection{Phase Cycling}
In the collinear setup, the signal is no longer spatially separated according to the order of the light-matter interaction; it now contains all orders. To isolate the rephasing and nonrephasing (and DQC) signals, one must exploit the unique phase evolutions of the third-order signals. This can be done by Fourier transforming the signal with respect to the phase for each pulse interval, so that it matches the characteristic phases of the R, NR and DQC signals. Practical limitations prevent a continuous Fourier transform, but it can be shown that 27 different phase combinations are enough to extract the third-order signals.\cite{doi:10.1063/1.2978381} Figure~\ref{fig:HDFDsetup}(b) is a schematic of the FD2D experiment emphasising the collinear geometry and the phase-cycling approach, where the pulses are tagged with specific phases which are then rotated or cycled with respect to each other.
\subsubsection{Example with excited state population}
From a purely theoretical point of view, the excited state populations have the same dependence on the interactions with the four pulses as the integrated fluorescence, meaning that the populations can be used as proxies for the measured signal.\cite{doi:10.1021/acs.jpca.9b01129} In principle, one can distinguish between different excited state populations, but since it is difficult to detect fluorescence from distinct excited states, we neglect this possibility.\cite{PhysRevA.96.053830} It is, however, relevant for other action spectroscopies.\cite{Karki2014,PhysRevA.92.053412,C5CP03868E}
The excited state population after interactions with four pulses with phases $\phi_{1-4}$ and delay times $\tau$, $T$ and $t$ can be found by summing all the contributing coherence transfer pathways:
\begin{equation} \label{eq:pop1} p(\tau,T,t,\phi_1,\phi_2,\phi_3,\phi_4) = \sum_{\alpha,\beta,\gamma,\delta}\tilde{p}(\tau,T,t,\alpha,\beta,\gamma,\delta)\mathrm{exp}[i(\alpha \phi_1 + \beta \phi_2 + \gamma \phi_3 + \delta \phi_4)]
\end{equation}
Here, $\alpha,\beta,\gamma,\delta$ count the number of positive phase factors minus the number of negative phase factors imparted on the system by the respective pulses, which in the language of double-sided Feynman diagrams amounts to the number of arrows pointing to the left minus the number of arrows pointing to the right. $\tilde{p}$ denotes that the contributions are specific to the particular set of $\alpha,\beta,\gamma,\delta$ and that the absorbed phase factors have been taken out.
Because it is assumed that the initial state is diagonal, $\rho_0 = |g\rangle \langle g |$, and the final state is diagonal, the following condition must be fulfilled: $\alpha + \beta + \gamma + \delta = 0$. Consequently, $\alpha$, $\beta$, $\gamma$, $\delta$ are dependent, which allows us to define $\phi_{21} \equiv \phi_2 - \phi_1$, $\phi_{31} \equiv \phi_3 - \phi_1$, $\phi_{41} \equiv \phi_4 - \phi_1$. Equation \eqref{eq:pop1} then becomes:
\begin{equation}
p(\tau,T,t,\phi_{21},\phi_{31},\phi_{41}) = \sum_{\beta,\gamma,\delta}\tilde{p}(\tau,T,t,\beta,\gamma,\delta)\mathrm{exp}[i(\beta \phi_{21} + \gamma \phi_{31} + \delta \phi_{41})] \label{eq:pop2}
\end{equation}
\noindent However, for the 2DES signals, we are not interested in expressing the total population as a sum of its contributions with unique phase factors. Instead, we wish to isolate the individual contributions, in particular the rephasing, nonrephasing and double-quantum coherence signals, which are given by $\tilde{p}(\beta=1, \gamma=1,\delta=-1)$, $\tilde{p}(\beta=-1, \gamma=1,\delta=-1)$ and $\tilde{p}(\beta=1, \gamma=-1,\delta=-1)$, respectively. This is achieved by Fourier transforming the total population with respect to the characteristic phase evolutions of the third-order signals.
\begin{equation}\tilde{p}(\tau,T,t,\beta, \gamma,\delta) = \frac{1}{(2\pi)^3} \int_0^{2\pi} \int_0^{2\pi} \int_0^{2\pi} d\phi_{41} d\phi_{31} d\phi_{21} p(\tau,T,t,\phi_{21},\phi_{31},\phi_{41})e^{-i\beta \phi_{21}}e^{-i\gamma \phi_{31}}e^{-i\delta\phi_{41}}
\end{equation}
\noindent As noted above, a continuous Fourier transform is impractical, so a discrete Fourier transform is used.
\begin{equation}\label{eq:discreteFT}
\tilde{p}(\tau, T, t,\beta, \gamma,\delta) = \frac{1}{LMN}\sum_{n=0}^{N-1}\sum_{m=0}^{M-1}\sum_{l=0}^{L-1}p(\tau,T,t,l\Delta \phi_{21},m\Delta \phi_{31},n\Delta \phi_{41})e^{-il\beta \Delta \phi_{21}}e^{-im\gamma \Delta \phi_{31}}e^{-in\delta \Delta \phi_{41}}
\end{equation}
As shown by Howe-Siang Tan,\cite{doi:10.1063/1.2978381} a phase-cycling scheme of $L\times M\times N = 3\times3\times3$ is sufficient when only contributions from fourth order and below are considered, i.e. $|\alpha| + |\beta| + |\gamma| + |\delta| \leqslant 4$. This gives $\Delta \phi_{21} = \Delta \phi_{31} = \Delta \phi_{41} = \frac{2\pi}{3}$.
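A minimal sketch of this extraction step is given below; it assumes the 27 detected signals for one set of pulse delays are stored in a $3\times3\times3$ array indexed by $(l,m,n)$, and uses the fact that the sign and step conventions of a standard FFT match $\Delta\phi = 2\pi/3$.
\begin{verbatim}
import numpy as np

def extract(p_lmn, beta, gamma, delta):
    # discrete FT over the cycled phases; p_lmn has shape (3, 3, 3)
    coeffs = np.fft.fftn(p_lmn) / 27.0   # fftn kernel exp(-2 pi i k l / 3)
    return coeffs[beta % 3, gamma % 3, delta % 3]

# rephasing, nonrephasing and DQC contributions:
# R   = extract(p_lmn, +1, +1, -1)
# NR  = extract(p_lmn, -1, +1, -1)
# DQC = extract(p_lmn, +1, -1, -1)
\end{verbatim}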
It follows that the appropriate electric field needed to perform the phase-cycling to extract the R, NR, and DQC signals is
\begin{equation} E^{lmn}(t) = E_1(t) + E_2^l(t) + E_3^m(t) + E_4^n(t)
\end{equation}
\begin{equation}
\begin{split}E_1(t) & = E_0 \,\mathrm{exp}\big [-\tfrac{(t-t_1)^2}{2\sigma^2}\big ]\mathrm{exp}(i\omega t)\\
E_2^l(t) & = E_0 \,\mathrm{exp}\big [-\tfrac{(t-t_2)^2}{2\sigma^2}\big ]\mathrm{exp}(i\omega t + il\tfrac{2\pi}{3}) \\
E_3^m(t) & = E_0 \,\mathrm{exp}\big [-\tfrac{(t-t_3)^2}{2\sigma^2}\big ]\mathrm{exp}(i\omega t + im\tfrac{2\pi}{3}) \\
E_4^n(t) & = E_0 \,\mathrm{exp}\big [-\tfrac{(t-t_4)^2}{2\sigma^2}\big ]\mathrm{exp}(i\omega t + in\tfrac{2\pi}{3})
\end{split}
\end{equation}
\noindent where $l,m,n$ are cycled between 0, 1 and 2 and the resulting populations from the 27 combinations are recorded for each set of \{$t_1,t_2,t_3,t_4$\}. Note that the $\mathbf{k}_n$ vector is dropped as the $\mathbf{k}_n \cdot \mathbf{r}$ phase factors cancel out in the collinear geometry. The R, NR, and DQC spectra are then found using equation \ref{eq:discreteFT} and performing the usual two-dimensional Fourier transform
\small
\begin{equation} S_{\mathrm{FD2D}}(\beta, \gamma,\delta,\omega_\tau, T, \omega_t) = -i \int_{0}^{\infty} \mathrm{d}\tau \mathrm{exp}(\pm i \omega_\tau \tau ) \int_{0}^{\infty} \mathrm{d}t \mathrm{exp}(i \omega_t t) \tilde{p}(\beta, \gamma,\delta,\tau, T, t)
\end{equation}
\normalsize
\noindent and inserting the appropriate $\beta, \gamma, \delta$ for the R, NR, and DQC signals. The sign of the Fourier transforms are the same as for the HD2D case.
It should be noted, however, that higher-order processes can have identical phase evolutions as the third-order processes; for example, three interactions with the first pulse can leave the system in the same state as only one interaction with the pulse will. The phase-cycling operation will not be able to filter out these higher-order contributions and the spectra will become distorted as a consequence. Care should therefore be taken to ensure that the third-order contributions are predominant, both in experiments and in simulations.
\subsubsection{Using the fluorescence signal}
Instead of using the final populations as proxies\cite{doi:10.1021/acs.jpca.9b01129} to report on the molecular response, the integrated fluorescence can be recorded. This is also more in line with the actual experiment. Theoretically, the integrated fluorescence can be found as the integral of the spontaneous relaxation which is given by
\begin{equation} \label{eq:detectRelax}
\mathrm{Rel}_{10} = \int_0^{t_{acq}}dt \Gamma_{10} \mathrm{Tr} [L_{10}\rho_S(t)L_{10}^\dagger]
\end{equation}
\begin{equation}
L_{10} = a_0^\dagger a_1
\end{equation}
\noindent and similarly for relaxations between other states. $t_{acq}$ denotes the acquisition time during which fluorescence is collected. If it is possible to distinguish particular transitions, which is the case for some action spectroscopies, they can give rise to multiple 2D spectra and provide even more incisive information on the studied system. Otherwise the total integrated fluorescence is just the sum over the individual transitions. Note that the acquisition time can be varied, which opens up another window into the dynamics of the sample.
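Numerically, equation (\ref{eq:detectRelax}) amounts to accumulating the instantaneous relaxation flux along the propagated trajectory; a minimal sketch, assuming the state has already been stored on a time grid, is:
\begin{verbatim}
import numpy as np

def integrated_fluorescence(ts, rhos, L10, gamma10):
    # rhos[i] is the propagated density matrix at time ts[i];
    # trapezoid rule over the acquisition window [ts[0], ts[-1]]
    flux = np.array([gamma10 * np.trace(L10 @ r @ L10.conj().T).real
                     for r in rhos])
    return float(((flux[1:] + flux[:-1]) * np.diff(ts)).sum() / 2)
\end{verbatim}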
Once the spontaneous relaxation has been recorded, the data undergoes the same phase cycling process as for the population data to construct the rephasing and nonrephasing spectra.
\subsection{The Diagrammatic Approach to 2DES}
The theoretical framework for both the HD and FD spectra can be simplified substantially by invoking a few approximations, resulting in a set of equations each representing unique spectroscopic pathways. These are commonly depicted as double-sided Feynman diagrams (DSFD) which provide a visual connection to the physical processes and are constructed by following a list of rules dictated by the topology of the model system. A neat property of the DSFD equations is that the Fourier transforms can be calculated analytically in Liouville space,\cite{C7CP06583C}
\begin{align}\label{FTliouville}
\mathcal{G}^{\pm}(\omega, \tau_f ) \equiv \int_0^{\tau_f} e^{\pm i\omega t}\mathcal{G}(t)dt = [\pm i\omega \mathds{1} + \mathcal{L}]^{-1}\big(e^{(\pm i\omega \mathds{1}+\mathcal{L})\tau_f} - \mathds{1}\big)
\end{align}
\noindent as each propagator $\mathcal{G}(t) \equiv e^{\mathcal{L}t} $ is independent of other time variables. This obviates the cumbersome process of numerically integrating the state of the system for all realisations of the pulse intervals.
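As a sanity check, equation (\ref{FTliouville}) is easily verified numerically; the following sketch compares the closed form with brute-force quadrature for a random matrix standing in for the Liouvillian.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
L = 0.3 * (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
w, tf = 1.3, 2.0
I = np.eye(4)

# closed form: (i w I + L)^{-1} (exp((i w I + L) tf) - I)
closed = np.linalg.solve(1j * w * I + L, expm((1j * w * I + L) * tf) - I)

# brute-force trapezoid quadrature of exp(+i w t) G(t)
ts = np.linspace(0.0, tf, 8001)
vals = np.array([np.exp(1j * w * t) * expm(L * t) for t in ts])
quad = ((vals[1:] + vals[:-1]) * np.diff(ts)[:, None, None]).sum(axis=0) / 2

assert np.allclose(closed, quad, atol=1e-4)
\end{verbatim}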
More detailed accounts on the subject can be found elsewhere, but we include three diagrams which highlight the difference between HD and FD. Figure~\ref{fig:esa} shows excited state absorption diagrams which evolve identically until the detection event as is also evident from their corresponding equations:
\small
\begin{align}
& \mathrm{ESA1_{HD}}(\omega_1,t_2, \omega_3) = \label{HDdiagram} \mathrm{Tr} \big [ \overrightarrow{\mathcal{V}_{03}}\mathcal{G}^{+}( \omega_3) \overrightarrow{\mathcal{V}_{32}}\mathcal{G}( t_2) \overrightarrow{\mathcal{V}_{20}}\mathcal{G}^{-}( \omega_1) \overleftarrow{\mathcal{V}_{01}}|0\rangle \langle 0 | \big ]
\\ \nonumber
\\
& \mathrm{ESA1_{FD}}(\omega_1,t_2, \omega_3;t_{acq}) = \label{FDdiagram1} \int_0^{t_{acq}}dt\gamma_1 \mathrm{Tr}\big [|1\rangle \langle 1 | \mathcal{G}( t)\overrightarrow{\mathcal{V}_{13}}\mathcal{G}^{+}( \omega_3) \overrightarrow{\mathcal{V}_{32}}\mathcal{G}( t_2) \overrightarrow{\mathcal{V}_{20}}\mathcal{G}^{-}( \omega_1) \overleftarrow{\mathcal{V}_{01}}|0\rangle \langle 0 | \big ]
\\ \nonumber
\\
& \mathrm{ESA2_{FD}}(\omega_1,t_2, \omega_3;t_{acq}) = \label{FDdiagram3} \int_0^{t_{acq}}dt\gamma_3 \mathrm{Tr}\big [|3\rangle \langle 3 | \mathcal{G}( t)\overleftarrow{\mathcal{V}_{13}}\mathcal{G}^{+}( \omega_3) \overrightarrow{\mathcal{V}_{32}}\mathcal{G}( t_2) \overrightarrow{\mathcal{V}_{20}}\mathcal{G}^{-}( \omega_1) \overleftarrow{\mathcal{V}_{01}}|0\rangle \langle 0 |\big ]
\end{align}
\normalsize
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{FDHD_ESA_reiterated.pdf}
\caption{\label{fig:esa} Double-sided Feynman diagrams of an identical excited state absorption process, but with different read-outs due to the choice of detection method. Traditional HD (left) interferes a local oscillator with the final coherence to produce a signal. FD, on the other hand, interacts with a fourth pulse which can result in a population of a lower excited state (middle) or a higher excited state (right), from which emission of fluorescence is continually detected. The middle diagram shares the same phase evolution as the HD diagram to the left (both are ESA1) as indeed every HD diagram has an equivalent FD diagram. The rightmost diagram, however, is unique to FD, and is denoted ESA2. Owing to the opposite sign acquired in the last pulse interaction, the middle and the right diagrams largely cancel, which becomes the main source of discrepancy between HD and FD spectra.}
\end{figure}
\noindent Here, $\overrightarrow{\mathcal{V}_{ij}}$ denotes the transition dipole moment operator acting on the right, causing a transition from $j$ to $i$: $\overrightarrow{\mathcal{V}_{ij}} |j\rangle \langle \bullet | = |i\rangle \langle \bullet |$. Conversely, $\overleftarrow{\mathcal{V}_{ij}}$ acts on the left, causing a transition from $i$ to $j$: $\overleftarrow{\mathcal{V}_{ij}} |\bullet \rangle \langle i | = |\bullet \rangle \langle j |$. The fluorescence yields are given by $\gamma_i$, and $t_{acq}$ is the acquisition time. The initial state, on the right of the equations, is assumed to be the ground state, hence $|0\rangle \langle 0 |$, but can be completely general if desired.
In theory, dropping the integral and taking only the projected populations would give perfectly valid third-order signals; however, this would not be in line with the experiment. Additionally, the acquisition-time parameter can be exploited to reveal more information about the system.
Using FD, the last pulse interaction can bring the state to two different excited state populations: The lower excited state, equation~\eqref{FDdiagram1}, which is equivalent to the HD diagram, equation~\eqref{HDdiagram}, and the higher excited state, equation~\eqref{FDdiagram3}, which has no HD counterpart. Moreover, the two FD diagrams have opposite signs resulting in cancellations of these contributions. Depending on the model system, this can lead to peaks appearing with one detection method, but not the other.
For future reference, we note that rephasing (R) designates contributions/diagrams with opposite phase evolutions after the first and third pulse, whereas nonrephasing (NR) contributions/diagrams oscillate with the same sign in these periods. Double-quantum coherence (DQC) contributions/diagrams are in this sense a special case of NR, with the distinction that they are not static or slowly evolving after the second pulse, but oscillate with double frequency.
\subsection{Model}
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{ModelSystemFigure.pdf}
\caption{\label{fig:model} The model system has four energy levels where all transitions are optically allowed and equally strong, apart from $|0\rangle \rightarrow |3\rangle$, which is forbidden. The relaxation rates $\tau_{ij}$ indicate an ultrafast relaxation pathway from the highest-excited state, and a slightly slower relaxation from the second excited state. The decay from the lowest-excited state to the ground state is orders of magnitude slower. This model system is identical to the one in ref. \citenum{PhysRevA.96.053830}.}
\end{figure}
In figure \ref{fig:model} we depict our model system, which we adopted from reference \citenum{PhysRevA.96.053830}. This four-level system captures a lot of the physics of 2D spectroscopy without complicating the interpretation unnecessarily. It also encompasses the case of an excitonic dimer, which is of fundamental interest and a natural starting point for a comparison between heterodyne-detection and fluorescence-detection. Moreover, the fact that the authors of reference \citenum{PhysRevA.96.053830} simulated the FD2D spectrum using the phase-modulation technique, and not phase-cycling, allows us to validate our calculations and compare the two
methods of extracting the desired signal.
We use the same values for the energy levels as given in reference \citenum{PhysRevA.96.053830}:
\begin{align}\label{levels}
&E_0 = 0 \\
&E_1 = 1.46 \: \mathrm{eV}\nonumber \\
&E_2 = 1.55 \: \mathrm{eV} \nonumber \\
&E_3 = E_1 + E_2 \nonumber
\end{align}
\noindent where $E_1$ and $E_2$ correspond to the B850 and B800 absorption bands of the inner and outer rings of bacteriochlorophylls in the light-harvesting complex of purple bacteria.\cite{McDermott1995,Anda2016,Anda2017,DeVicoE9051,Anda2019} The optical transitions between the energy levels are all allowed and are equally strong: $eE_0\mu_{ij} = 8 \: \mathrm{meV} $, with the exception of the transition from ground to the highest excited state, which is forbidden. Electronic relaxation is modelled by the jump operators, the off-diagonal Lindblad operators: $L_{10} = |0\rangle \langle 1 |$, $L_{21} = |1\rangle \langle 2 |$ and $L_{32} = |2\rangle \langle 3 |$, which are scaled by $\Gamma_{10} = 4.13 \: \mu \mathrm{eV}$, $\Gamma_{21} = 4.13 \: \mathrm{m eV}$, $\Gamma_{32} = 13.78 \: \mathrm{m eV}$ respectively. This yields relaxation times ($=\hbar / \Gamma_j$) of 160 ps, 160 fs and 48 fs, which is representative of a dimer system. Dephasing is included similarly with diagonal Lindblad operators: $L_{00} = |0\rangle \langle 0 |$, $L_{11} = |1\rangle \langle 1 |$, $L_{22} = |2\rangle \langle 2 |$ and $L_{33} = |3\rangle \langle 3 |$, which are all scaled by the same dephasing strength $\Gamma_{Dephasing} = 41.3 \: \mathrm{meV}$.
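In matrix form, the model is fully specified by the following minimal sketch (energies and rates in eV, $\hbar = 1$ so that time is measured in units of $\hbar/\mathrm{eV} \approx 0.658$ fs). The dipole pattern is dimensionless here; the 8 meV peak interaction energy is carried by the field amplitude. These operators can be fed directly to a Lindblad propagator such as the one sketched in the previous section.
\begin{verbatim}
import numpy as np

E = np.array([0.0, 1.46, 1.55, 1.46 + 1.55])        # eV
H0 = np.diag(E)

mu = np.ones((4, 4)) - np.eye(4)                    # all transitions allowed...
mu[0, 3] = mu[3, 0] = 0.0                           # ...except |0> <-> |3>

def ket_bra(i, j, d=4):
    op = np.zeros((d, d)); op[i, j] = 1.0; return op

jump_ops = [ket_bra(0, 1), ket_bra(1, 2), ket_bra(2, 3)]  # L_10, L_21, L_32
rates = [4.13e-6, 4.13e-3, 13.78e-3]                # eV -> 160 ps, 160 fs, 48 fs
jump_ops += [ket_bra(i, i) for i in range(4)]       # pure dephasing
rates += [41.3e-3] * 4                              # eV
\end{verbatim}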
The model system is initialised with the ground state fully populated and then propagated according to equation \ref{eq:generalLindblad} with the appropriate (light-matter) interaction Hamiltonian until it is detected as described by equation \ref{eq:detectPolarisation} or \ref{eq:detectRelax}. As FD only requires an individual photoactive system, in contrast to HD which requires an ensemble, we chose to neglect the distributions of energy levels and transition dipole moments as they potentially could obscure the effects we want to study. However, the computational cost of incorporating these details into our model would be negligible.
In order to ensure a random sampling of the position-dependent phase, it is necessary to sample a volume of space with dimensions greater than the wavelength of the incident field. This is achieved by randomly generating the molecular positions and multiplying with a factor which make intermolecular distances large compared to the wavelength of the pulses.
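A minimal sketch of this phasing step is given below; the beam geometry (three nearly co-propagating wave vectors) is purely illustrative. The resulting weights multiply each emitter's $\mu\rho$ term in equation (\ref{eq:detectPolarisation}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
lam = 800e-9                                   # ~1.55 eV light, in metres
k0 = 2 * np.pi / lam
k1 = k0 * np.array([0.01, 0.00, 1.0])          # linearly independent
k2 = k0 * np.array([0.00, 0.01, 1.0])          # wave vectors (illustrative)
k3 = k0 * np.array([-0.01, 0.00, 1.0])
k_sig = -k1 + k2 + k3                          # rephasing direction

n_mol = 2000
r = rng.uniform(0, 20 * lam, size=(n_mol, 3))  # box >> wavelength
phase_weights = np.exp(1j * r @ k_sig)         # per-emitter phasing factors
\end{verbatim}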
\section{Results}
By applying the theory of HD and FD 2DES on the model system, we are able to non-perturbatively simulate 2D spectra under a range of different conditions and to study effects that will be applicable to a broad range of samples and experiments. Our goal is to investigate under which limits the extraction of the third-order signal breaks down due to pulse intensity or pulse duration, and hence to understand the optimal regime for 2D spectra.
For comparison, and as a starting point for the discussion of the
effect of a non-idealised pulse, we compute the corresponding spectra using the double-sided Feynman diagrams laid out in ref. \citenum{PhysRevA.96.053830}, see figure~\ref{fig:fdhdDiagrams}.
It is evident that the cross peaks in the HD spectrum are weaker than in the FD spectrum, as would be expected from the phase evolutions of the diagrams, which reveal cancellations for the HD cross-peak diagrams. In fact, the cross peaks in the HD spectrum turn out to be artifacts of the finite Fourier transforms over the first ($t_1=50$ fs) and last ($t_3=50$ fs) pulse delays, and disappear as these are increased. The reason for using finite Fourier transforms is simply to match the broadening arising from finite Fourier transforms in the non-perturbative simulations below.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{FDHDdiagrams.pdf}
\put(-315,195){(a)}
\put(-155,195){(b)}
\caption{\label{fig:fdhdDiagrams} The real parts of the zero-waiting-time total 2D spectra, calculated with the traditional double-sided Feynman diagrams derived from perturbation theory. (a) The spectrum simulated with the pathways selected by the fluorescence-detection setup; (b) the heterodyne-detected counterpart. Note that the Fourier transforms over the coherence time $t_1$ and the signal time $t_3$ are taken over a 50 fs interval in order to achieve a broadening, due to the finite Fourier transform, similar to that of the non-perturbative simulations.}
\end{figure}
Before we discuss the response to changes in the pulse amplitude, we note two general differences between FD and HD that we observe in figure~\ref{fig:fdhdDiagrams}: 1) The diagonal peaks are equally intense in the HD spectra, whereas the FD counterparts are biased to the lower diagonal peak. The explanation is that our model, as in reality, collects fluorescence emanating from the lowest excited state; this favours the lower diagonal peak over the upper diagonal which relies on the appropriate pathways to relax to the lowest excited state prior to fluorescing. In other words, the upper diagonal grows in as the fluorescence integration time is increased. For the HD version, both the highest and lowest excited states are detected, and the symmetry is only broken by the relaxation and dephasing processes. 2) The cross peak amplitudes are stronger for FD than for HD. This discrepancy is a direct consequence of the additional pathways selected by the FD scheme, specifically by excited state absorption (ESA) processes where the fourth pulse brings the state to a higher-lying population. The remaining pathways where the frequency evolutions correspond to a cross peak are equivalent for FD and HD and largely cancel out due to pairs of pathways with opposite phase progressions, but the cancellation is incomplete in non-idealised realisations, i.e. real-life experiments and simulations with finite pulses.
To calibrate our non-perturbative simulations we begin by finding appropriate pulse amplitudes for each detection scheme. For FD, we start by replicating the calculation performed by Damtie et al.,\cite{PhysRevA.96.053830} but with phase cycling replacing phase modulation. As shown in the appendix, we were able to produce the same spectra with our phase-cycling method using the same parameters, specifically a peak interaction energy of 8 meV, thus validating our approach and finding a suitable starting point for the pulse amplitude.
Intuitively, one might expect the HD and FD versions to operate within the same parameter space and to be similarly affected by changes to the experiment and system variables/parameters. However, when we repeated the simulation with the HD model, keeping all common parameters identical, we found that the two methods do not share the same parameter regime for the extraction of third-order signals. Specifically, the amplitude of the pulses required a sevenfold increase for easy acquisition of the HD signal, i.e.\ without an exceedingly high number of randomly positioned absorbers. Note that the HD third-order signal is a function of the number of absorbers (and their positions and orientations) so a large number of absorbers can to some degree compensate for low pulse amplitudes.
\subsection{Pulse Amplitude}
\begin{figure}
\includegraphics[width=\linewidth]{HDpowerSweep.pdf}
\caption{\label{fig:hdAmplitude} The real parts of the zero-waiting time total 2D spectra using heterodyne-detection. The insets indicate the peak interaction energies, which were chosen to illustrate failure at low pulse amplitude (linear artifacts), at high pulse amplitude (higher-order effects) and the intermediate regime. The dotted lines guide the eye to the model system energies, the diagonal and antidiagonal. }
\end{figure}
One of the challenges that arises with explicit simulation of 2D spectra is that of the pulse/field power, which affects how much the state of the sample changes upon each interaction with a pulse. Unlike the pulse durations, which can be taken to be similar to the experimental standard or state-of-the-art, the pulse amplitudes must often be tweaked until a good signal is achieved. If the power is too low or too high, it may be difficult to filter the desired third-order signal from the background of linear or higher orders. The window that is favourable for third-order detection may depend on how the signal is constructed. Given the different nature of the HD and FD schemes, and in particular how lower and higher orders are suppressed, this gives rise to a different range for the pulse amplitude. To illustrate this, we fix the pulse width by setting $\sigma=10$ fs and vary the pulse amplitude. The resulting total-correlation 2D spectra at zero waiting time for HD and FD are shown in figures \ref{fig:hdAmplitude} and \ref{fig:fdAmplitude}, respectively.
It should be noted that the first and third pulse intervals are scanned in steps of 10 fs up to 300 fs, which is much longer than the 50 fs interval used for Fourier transforming the corresponding intervals in the double-sided Feynman diagram calculations. This discrepancy stems from the fact that the undersampled signal, once folded within the Nyquist frequency, appears to be sampled on a faster time scale. In our case the folding factor is 3, which means that undersampling at 10 fs corresponds to normal sampling at $10-3\times \frac{2\pi \hbar}{\omega_{pulse}}= 1.73 $ fs; 30 steps of 1.73 fs add up to about 50 fs. It follows that the decay from $|2\rangle$ to $|1\rangle$ will be greater with the longer scanning time. The reduced sensitivity of sampling beyond 1.3 times the decay time\cite{Maciejewski2012} is counterbalanced by the increased frequency resolution, and 300 fs is a good compromise.
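For reference, the folding arithmetic, assuming a carrier of about 1.5 eV (which reproduces the quoted 1.73 fs), is:
\begin{verbatim}
import math

hbar = 0.6582                                 # eV fs
T_carrier = 2 * math.pi * hbar / 1.5          # optical period: ~2.757 fs
dt_eff = 10.0 - 3 * T_carrier                 # folded effective step: ~1.729 fs
print(round(T_carrier, 3), round(dt_eff, 3))  # 2.757 1.729
\end{verbatim}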
\subsubsection{The effect on HD spectra}
By sweeping the pulse amplitude across the third-order regime, see figure~\ref{fig:hdAmplitude}, we gain insight into the behaviour in the limits of linear and higher orders, as well as the intermediate regime. At the lowest pulse amplitude, linear artifacts appear in HD spectra as vertical streaks which translates as noise in the excitation frequency for a specific detection frequency. For an experienced experimenter or theoretician observing the spectra, such streaks would be met with suspicion and faced with scrutiny before they would be interpreted as a result of the underlying physics of the sample. In other words, these linear artifacts are not likely to be misinterpreted.
On the opposite end of the third-order regime, see the rightmost spectrum of figure~\ref{fig:hdAmplitude}, we see a perfectly plausible HD2D spectrum. It is only by comparison to the intermediate regime, the middle spectra of the same figure, that it is clear that the high pulse amplitude is introducing higher-order contributions into the spectrum. This demonstrates the importance of a well-calibrated pulse amplitude.
In the intermediate regime, see the middle spectra in figure \ref{fig:hdAmplitude}, we observe slight changes as the amplitude is increased. It can therefore be hard to justify quantitative conclusions based on a single 2D spectrum. Particularly the cross peaks can be sensitive to the pulse amplitude. These arise from pathways with incorrect time-ordering, which are more likely at short waiting times. The zero waiting time therefore presents an additional challenge for the explicit simulation, as well as experiments carried out in the lab.
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{HorizontalEvolution2.pdf}
\caption{\label{fig:fig4} The intensity along the horizontal line $\omega_3=E_1$ of a HD2D spectrum is plotted as a function of the waiting time in steps of 5 fs, with 0 fs waiting time at the top and 50 fs at the bottom. The full line is calculated using an intermediate pulse amplitude, corresponding to a peak interaction energy of 27 meV, and the dotted line is calculated using a low pulse amplitude with a peak interaction energy of 9 meV. The linear artifacts are strongest for the shortest waiting times such as 0 and 5 fs, with particularly the diagonal peak recovering as the waiting time is increased, but the effect is much less pronounced than for the FD case. Each pair of curves are normalised to the same maximum and are offset for visual clarity. }
\end{figure}
We also investigate the time evolution of the spectra as the waiting time is scanned in steps of 5 fs. Because the linear artifacts are clearly more pronounced in the horizontal axis, we pick the evolution of the $\omega_3 = E_1$ horizontal line to investigate whether the erratic spectrum at zero waiting time becomes smooth for longer waiting times. Figure \ref{fig:fig4} shows that there is a slight suppression of the linear artifacts as the waiting time is increased for the lower amplitude case, especially for the diagonal peak, but some noise still remains when compared to the intermediate amplitude regime.
\begin{figure}
\includegraphics[width=\linewidth]{FDpowerSweep.pdf}
\caption{\label{fig:fdAmplitude} The real parts of the zero-waiting time total 2D spectra using fluorescence-detection. The insets indicate the peak interaction energies, which were chosen to illustrate failure at low pulse amplitude (linear artifacts), at high pulse amplitude (higher-order effects) and the intermediate regime. The dotted lines guide the eye to the model system energies, the diagonal and antidiagonal. }
\end{figure}
\subsubsection{The effect on FD spectra}
The effect of changes in the pulse amplitude on FD spectra is, similarly to the HD case, slight within the third-order regime, and strong at the limits of the regime. Interestingly, the way the spectra break down in these limits is in stark contrast to the HD case. At too low pulse amplitude, see the leftmost spectrum of figure~\ref{fig:fdAmplitude}, we do not observe linear artifacts as vertical streaks but instead the spectral peaks have a higher degree of randomness to them. At high pulse amplitudes, see the rightmost spectrum of figure~\ref{fig:fdAmplitude}, we see the lower diagonal peak gaining intensity and the upper diagonal peak losing intensity. The result is a spectrum which looks reasonable, but can lead to completely wrong analysis.
We investigate the time evolution of the spectra by scanning the waiting time in steps of 5 fs. Figure \ref{fig:fig3} shows how the diagonal evolves for the low pulse amplitude case (corresponding to the leftmost spectrum in figure \ref{fig:fdAmplitude}) and for an intermediate pulse amplitude (corresponding to the second from the left in figure \ref{fig:fdAmplitude}). It is evident that, when the waiting time is increased, the low-amplitude case clears up and gradually resembles the intermediate amplitude case as the pulse overlap between the second and third pulse becomes negligible.
Comparing the two detection methods, we note that the HD third-order signal is more robust against higher-order contributions as the power is increased, as evidenced by the spectra acquired with pulse intensities corresponding to peak interaction energies of 27 and 56 meV, see the middle spectra in figure~\ref{fig:hdAmplitude}. The manner in which the third-order signal breaks apart at the limits of the third-order regime is very different, and stems from the disparate ways in which the third-order signal is constructed.
Note that the amplitude affects the number of chromophores needed to phase the signal in HD2D. The comparison to FD2D is therefore not straightforward, as only one chromophore is needed to compute the FD signal. HD tends to prefer a peak interaction energy upwards of 9 meV and can go to quite strong laser powers without compromising the third-order signal, whereas FD is more prone to higher-order effects, but still produces a clear signal at lower laser powers than HD.
\begin{figure}
\centering
\includegraphics[width=0.45\linewidth]{FD_Evolution_Diagonal2.pdf}
\caption{\label{fig:fig3} The intensity on the diagonal line $\omega_1=\omega_3$ of a FD2D spectrum is plotted as a function of the waiting time in steps of 5 fs, with 0 fs waiting time at the top and 50 fs at the bottom. The full line is calculated using an intermediate pulse amplitude with a peak interaction energy of 3 meV, and the dotted line is calculated using a low pulse amplitude corresponding to a peak interaction energy of 1 meV. When the waiting time becomes large compared to the pulse width, the noise vanishes as a result of a diminishing contribution from incorrect time-ordering pathways. Each pair of curves are normalised to the same maximum and are offset for visual clarity.}
\end{figure}
\subsection{Pulse Durations}
Seeing how influential the pulse overlap can be when the waiting time is varied, we proceed to discuss the effect of the pulse duration which will increase or decrease not only the overlaps between pulses 2 and 3 but potentially between all pulses.
For a fair comparison of the effect of the pulse duration, the pulse amplitude must be adjusted accordingly to ensure that the integrated pulse power (the pulse energy) remains the same, thus keeping the population and coherence transfer rates on the same scale.
When the pulse duration is increased, the uncertainty in the excitation and detection frequencies also increases. To limit this effect, one can scan the respective pulse delays for longer. However, it is not possible to correct for the effects caused by pulse overlap. The more the pulses overlap, the stronger the signal from incorrect pulse ordering becomes, particularly in polarisation-controlled variants of 2DES as shown by Pale\v{c}ek et al.\cite{doi:10.1063/1.5079817} Care should therefore be taken when analysing experiments with considerable pulse overlap, especially of the early-time dynamics where pulse 2 and 3 are close together.
Figures~\ref{fig:hdDuration} and \ref{fig:fdDuration} show the total correlation 2D spectra at zero waiting time using HD and FD, respectively. Interestingly, HD is more robust against changes in the pulse duration. FD is more easily smeared and also struggles as the duration becomes very short, although this may be of little experimental interest as such short pulse durations are not currently attainable. From a theoretical standpoint, it is interesting to determine why the two detection methods differ. In HD, the phase picked up by each chromophore is solely by virtue of its position, whereas the phases picked up in FD are contained in the waveform and may be distorted as the pulse duration becomes comparable to the period of the pulse frequency. In any case, the linear artifacts seen for FD at the lowest pulse duration disappear when the waiting time is increased, just as we observed for HD at low pulse amplitude.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{HDdurationSweep2.pdf}
\caption{\label{fig:hdDuration} The real parts of the zero-waiting-time total 2D spectra using heterodyne detection. The insets indicate the standard deviation of the Gaussian in fs, and hence the duration of the pulses. The $\sigma$ values range from 2.5 fs to 20 fs, spanning from what is currently not attainable to what is easily achieved in most 2DES setups. The dotted lines guide the eye to the model system energies, the diagonal and the antidiagonal. }
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{FDdurationSweep2.pdf}
\caption{\label{fig:fdDuration} The real parts of the zero-waiting-time total 2D spectra using fluorescence detection. The insets indicate the standard deviation of the Gaussian in fs, and hence the duration of the pulses. The $\sigma$ values range from 2.5 fs to 20 fs, spanning from what is currently not attainable to what is easily achieved in most 2DES setups. The dotted lines guide the eye to the model system energies, the diagonal and the antidiagonal. }
\end{figure}
\section{Conclusions}
In our modelling of FD and HD 2DES we used pulses with variable amplitudes and durations. Within the semi-classical approximation, this approach can be considered a complete model of the spectroscopy. We observe effects related to the finite pulses which cannot be accounted for with the commonly employed Feynman diagram method. Of particular interest is the fact that the optimal window for the pulse power is different for the two detection schemes. The underlying reason for this discrepancy is the disparate ways in which the third-order signal is constructed from the raw signal data. It is also interesting from a theoretical point of view to observe the dissimilar behaviour of the two methods as the limits of pulse amplitude and pulse duration are tested.
Computationally, the FD simulation is quite cheap because only one model system is required to create the 2D spectra, although 27 runs are required for phase-cycling. For HD, the number of simulation runs can be much higher, starting from hundreds but potentially requiring 10,000--100,000, depending on the ``phasing'' conditions (dephasing, timestep, scanning length, etc.).
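For concreteness, a 27-run scheme can be realized by cycling three of the pulse phases over $\{0, 2\pi/3, 4\pi/3\}$ and weighting the runs so as to isolate a chosen phase signature. The sketch below shows only this bookkeeping; the target signature $(-1,1,1)$ and the \texttt{simulate} stand-in are illustrative assumptions rather than our actual simulation code:
\begin{verbatim}
import itertools
import numpy as np

# 27-step phase cycling: 3 phases each for 3 pulses (sketch).
phases = [0.0, 2*np.pi/3, 4*np.pi/3]
target = (-1, 1, 1)               # assumed phase signature to extract

def simulate(phi1, phi2, phi3):
    # stand-in for one non-perturbative run returning a complex observable
    return np.exp(1j*(-phi1 + phi2 + phi3)) + 0.5*np.exp(1j*(phi1 + phi2 + phi3))

signal = 0.0
for phi in itertools.product(phases, repeat=3):
    weight = np.exp(-1j*np.dot(target, phi))
    signal += weight * simulate(*phi)
signal /= 27
print(signal)   # isolates the (-1,1,1) component; prints ~(1+0j)
\end{verbatim}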
The investigation of the effect of the pulse amplitude and the pulse duration shows substantial and non-trivial changes within the third-order regime, which calls for great care whenever quantitative analyses are attempted. Conclusions from quantitative analysis should ideally be backed up by non-perturbative simulations that take pulse effects into account. This is even more important when probing short-time dynamics, where increased pulse overlap is detrimental to the selection of spectroscopic pathways.
On top of the complementary information found in FD2D spectra, much of the promise of FD2D is the fact that it can be contrasted against HD2D spectra to provide information that is not contained in either FD2D or HD2D alone. Therefore, it is imperative that all aspects of the experiment are well understood.
\section{Acknowledgements}
The authors acknowledge support of the Australian Research Council through grant CE170100026. This research was undertaken with the assistance of resources from the National Computational Infrastructure, which is supported by the Australian Government.
The Calogero-Sutherland model (CSM)
\cite{Calogero-1969,Sutherland-1971} describes particles moving on a
circle and interacting through an inverse $\sin$-square potential.
The Hamiltonian of the model reads \begin{equation}
\label{CSM}
{\cal H}_{CSM} = \frac{1}{2}\sum_{j=1}^{N}p_j^{2}
+\frac{1}{2}\left(\frac{\pi}{L}\right)^{2}\sum_{j,k=1; j\neq k}^{N}
\frac{g^{2}}{\sin^{2}\frac{\pi}{L}(x_{j}-x_{k})},
\end{equation} where $x_{j}$ are coordinates of $N$ particles, $p_{j}$ are
their momenta, and $g$ is the coupling constant. We took the mass of
the particles to be unity. The momenta $p_{j}$ and coordinates
$x_{j}$ are canonically conjugate variables.
The model (classical and quantum) occupies an exceptional place in
physics and mathematics and has been studied extensively. It is
completely integrable.
Its solutions can be written down explicitly as finite-dimensional
determinants (for a review see
\cite{1981-OlshanetskyPerelomov-classical}).
In the limit of a large period $L\to\infty$ the CSM degenerates to
its rational version -- Calogero (aka Calogero-Moser) model (CM)
where the pair-particle interaction is $1/{x^{2}}$.
\footnote{In the rational case one usually adds a harmonic
potential, $\frac{1}{2}\omega^2 \sum_i x_i^2$, to the Hamiltonian to
prevent particles from escaping. This addition does not destroy the
integrability of the system \cite{Sutherland-1971}.} The CSM itself
is a degeneration of the elliptic Calogero model, where the
pair-particle interaction is given by the Weierstrass $\wp$-function of
the distance. In this paper we discuss the classical trigonometric
model (\ref{CSM}) commenting on the rational limit when
appropriate.
We are interested in describing a Calogero-Sutherland {\it liquid},
i.e., the system (\ref{CSM}) in thermodynamic limit when $N\to
\infty$ and $L\to \infty$ while the average density $N/L$ is kept
constant. We assume that the limit exists and that in this limit the
microscopic density and current fields
\begin{eqnarray}
\rho(x,t) &=& \sum_{j=1}^{N}\delta(x-x_{j}(t)),
\label{3} \\
j(x,t) &=& \sum_{j=1}^{N} p_{j}(t) \delta(x-x_{j}(t))
\end{eqnarray}
are smooth single-valued real periodic functions with a period $L$
equal to the period of the potential \footnote{It is likely that
there are classes of solutions of the CSM, whose thermodynamic limit
consists of a number of interacting liquids. In this case the
microscopic density gives rise to a number of functions in the
continuum - the densities of the distinct interacting liquids. In
this paper we consider a class of solutions which leads to a single
liquid.}. In this case the system will be described by hydrodynamic
equations written on the density field $\rho(x,t)$ and the velocity
field $v(x,t)$. The velocity is defined as $j=\rho v$.
The hydrodynamic approach is a powerful tool to study the
evolution of smooth features with typical size much larger than the
inter-particle distance. Apart from application to the CSM, the
hydrodynamic equations obtained in this paper are interesting
integrable equations. We show that they are new real reductions of
the modified Kadomtsev-Petviashvili equation (MKP1).
In this paper we consider a classical system, however the approach
developed below can be extended to the quantum case
$\{p_{j},\,x_{k}\}=\delta_{jk}\rightarrow [p_{j},\,x_{k}]=i\hbar
\delta_{jk}$ almost without changes. For a brief description of the
hydrodynamics of the quantum system see
Ref.~\cite{2005-AbanovWiegmann}. The hydrodynamics of the quantum
Calogero model has been studied previously
\cite{AJL-1983,AndricBardek-1988} in the framework of the {\it
collective field theory} and some of the results below can be obtained in a classical
limit (see \cite{1995-Polychronakos}) of the quantum counterparts of Refs.
\cite{AJL-1983,AndricBardek-1988}.
The outline of this paper is the following. In
Sec.~\ref{particlespoles} we parameterize the particles of CSM
as poles of auxiliary complex fields so that the motion of particles
is encoded by evolution equations for fields. In
Sec.~\ref{hydrodynamiclimit} we derive a hydrodynamic limit of these
equations - continuity and Euler equations with a particular form of
specific enthalpy. We will refer to these equations as the
bidirectional Benjamin-Ono equation or 2BO. We present the
Hamiltonian form of 2BO in Sec.~\ref{sec:HformdBO}. In
Sec.~\ref{sec:BilformdBO} we discuss the bilinear form of 2BO and
its relation to MKP1 in Sec.~\ref{sec:BilformdBO}. In Sec.~\ref{sec:chiral} we obtain the Non-Linear
Chiral equation (NLC) - the chiral reduction of 2BO - and discuss some of
its properties. In Sec.~\ref{sec:Multi-phase} we construct
multi-phase and multi-soliton solutions of 2BO and NLC as a real reduction
of MKP1. These solutions correspond to collective excitations of the
original many-body system. Some technical points are relegated to
the appendices.
\section{Particles as poles of meromorphic functions}
\label{particlespoles}
The equations of motion of the CSM are readily obtained from the
Hamiltonian (\ref{CSM})
\begin{eqnarray}
\dot{x}_{j} &=& p_{j},
\label{csmeq1} \\
\dot{p}_{j} &=& -g^{2} \frac{\partial}{\partial x_{j}}
\sum_{k=1\, (k\neq j)}^{N}\left(\frac{\pi}{L}\cot\frac{\pi}{L}(x_{j}-x_{k})\right)^{2}.
\label{csmeq2}
\end{eqnarray}
We rewrite this system in an equivalent way as
\begin{eqnarray}
i\frac{\dot{w}_{j}}{w_{j}} &=& \frac{g}{2}\left(\frac{2\pi}{L}\right)^{2}
\left(\sum_{k=1}^{N}\frac{w_{j}+u_{k}}{w_{j}-u_{k}}
- \sum_{k=1\,(k\neq j)}^{N} \frac{w_{j}+w_{k}}{w_{j}-w_{k}}\right),\quad j=1,\dots, N
\label{pmotx}
\\
-i\frac{\dot{u}_{j}}{u_{j}} &=& \frac{g}{2}\left(\frac{2\pi}{L}\right)^{2}
\left(\sum_{k=1}^{N} \frac{u_{j}+w_{k}}{u_{j}-w_{k}}
- \sum_{k=1\,(k\neq j)}^{N}\frac{u_{j}+u_{k}}{u_{j}-u_{k}}\right),\quad j=1,\dots, N,
\label{pmoty}
\end{eqnarray}
where $w_{j}(t) = e^{i\frac{2\pi}{L}x_{j}(t)}$ are complex
coordinates lying on a unit circle, while $u_{j}(t) =
e^{i\frac{2\pi}{L}y_{j}(t)}$ are auxiliary coordinates. Indeed,
differentiating (\ref{pmotx}) with respect to time and using
(\ref{pmotx},\ref{pmoty}) to remove first derivatives in time one
obtains equations equivalent to (\ref{csmeq1},\ref{csmeq2}).
We note that while the coordinates $x_j$ are real, i.e., $|w_j|=1$,
the auxiliary coordinates, $y_{j}(t)$, are necessarily complex.
Given initial data as real positions and velocities $x_{j}(0)$ and
$\dot{x}_{j}(0)$ one can find complex $y_{j}$ from (\ref{pmotx}) and
then initial complex velocities $\dot{y}_{j}(0)$ from (\ref{pmoty}).
Once $x_{j}$ and $\dot{x}_{j}$ are chosen to be real they will
stay real at later times, even though the coordinates $y_j$ move in the complex plane.
The coordinates $w_{j}(t)$ and $u_{j}(t)$ determine an evolution of
two functions
\begin{eqnarray}
u_1(w) &=& g\frac{\pi}{L}\sum_{j=1}^{N}\frac{w+w_{j}}{w-w_{j}}
=-i g \sum_{j=1}^{N} \frac{\pi}{L} \cot\frac{\pi}{L}(x-x_{j}),\quad w= e^{i\frac{2\pi}{L}x},
\label{u-pa} \\
u_0(w) &=& -g\frac{\pi}{L}\sum_{j=1}^{N}\frac{w+u_{j}}{w-u_{j}}
=i g \sum_{j=1}^{N} \frac{\pi}{L} \cot\frac{\pi}{L}(x-y_{j}), \quad w= e^{i\frac{2\pi}{L}x}.
\label{u+pa}
\end{eqnarray}
The latter functions play a major role in our approach. These are rational
functions of $w$ regular at infinity and having particle coordinates
as simple poles with equal residues $2\pi g/L$.
The condition that the coordinates of particles $x_{j}$ are real
yields Schwarz reflection condition for the function $u_1$ with
respect to the unit circle \begin{eqnarray}
\overline{u_1(w)} = -u_1(1/\bar{w})
\qquad \mbox{or}
\qquad
\overline{u_1(x)} = - u_1(\bar{x}),
\label{Schwarz}
\end{eqnarray} where bar denotes complex conjugation. The values of $u_1(w)$
in the interior and exterior of a unit circle are related by Schwarz
reflection.
Comparing (\ref{pmotx}), (\ref{csmeq1}) and (\ref{u+pa}) we notice
that while the function $u_{1}(w)$ encodes the positions of particles
$w_{j}$, the function $u_0(w)$ encodes the momenta of particles as
its values at particle positions $w_{j}$ \begin{equation}
p_{j} = u_0(w_{j}) + g\frac{\pi}{L} \sum_{k=1\,(k\neq j)}^{N} \frac{w_{j}+w_{k}}{w_{j}-w_{k}}.
\label{csmmomenta}
\end{equation} We notice here that the positions of the particles fully
determine the imaginary part of the field $u_0$ on a unit circle.
Indeed, we have from (\ref{csmmomenta}) \begin{equation}
\I u_0(x_{j}) = g {\sum_{k\neq j}}\frac{\pi}{L}\cot \frac{\pi}{L}(x_{j}-x_{k}).
\label{u+real1}
\end{equation}
We now introduce complex functions \begin{equation}
u=u_0+u_1,\quad \tilde u=u_0-u_1.
\label{decomp}
\end{equation}
One can show that they obey the equation
\begin{equation}
\label{2BO}
u_{t}+\partial_{x}\left[\frac{1}{2}u^{2}+i\frac{g}{2} \partial_{x}\tilde u \right] =0.
\end{equation} Indeed, substituting the \textit{pole ansatz}
(\ref{u-pa},\ref{u+pa}) into (\ref{2BO}) and comparing the residues
at poles $w_{j}$ and $u_{j}$ one arrives at
(\ref{pmotx},\ref{pmoty}).
The equation (\ref{2BO}) connects two complex functions $u_0$ and
$u_1$. The equation is equivalent to the {\it modified
Kadomtsev-Petviashvili} equation (or simply MKP1). We will discuss
its relation to MKP1 in Sec.~\ref{sec:BilformdBO}.
However, being complemented by the Schwarz reflection condition
(\ref{Schwarz}), analyticity requirements, and an additional reality requirement
it becomes an equation uniquely determining $u_0$ and $u_1$ through their initial data.
The analyticity requirements read: $u_0(w)$ is analytic in a
neighborhood of a unit circle $|w|=1$, while $u_1$ is analytic
inside $|w|<1$ and outside $|w|>1$ of the unit circle, approaching
a constant at $w\to\infty$. An additional reality requirement is the
relation between the imaginary part of $u_{0}$ on a unit circle and
$u_{1}$ stemming from the condition (\ref{u+real1}). We formulate
and discuss these conditions in Sec.~\ref{subsec:2BO} and
Sec.~\ref{sec:BilformdBO}.
We will refer to the equation (\ref{2BO}) as the bidirectional
Benjamin-Ono equation (2BO). It is a bidirectional (having both
right and left moving waves) generalization of the conventional {\it
Benjamin-Ono} equation (BO) arising in the hydrodynamics of
stratified fluids \cite{AblowitzClarkson-book}. We discuss its
hydrodynamic form in the next section.
The solution of (\ref{2BO}) given by (\ref{u-pa},\ref{u+pa}) is the CSM many body system with a
finite number of particles (\ref{CSM}). Other solutions describe CSM
fluids. They are the central issue of this paper.
To conclude this section we make the following comment. The function
$u_1$ can be expressed solely in terms of the microscopic density of
particles (\ref{3}) as \begin{equation}
u_1(w) = -\pi g \oint \frac{d\zeta}{2\pi i \zeta}\, \frac{\zeta+w}{\zeta-w}\,\rho(\zeta).
\label{u-def}
\end{equation} The integral in this formula goes over the unit circle
$\zeta=\exp\left(i\frac{2\pi}{L}x\right)$. In the following we will
denote for brevity $\rho(\zeta)$ as $\rho(x)$, when $\zeta$ lies on
a unit circle $\zeta = e^{i\frac{2\pi}{L}x}$. The density itself
can be obtained as a difference of limiting values of the field
$u_1$ at the real $x$ (on the unit circle). The discontinuity of
$u_{1}$ on the unit circle gives a microscopic density (\ref{3}) of particles
\begin{eqnarray}
u_1(x+ i0)-u_1(x- i0) &=& -2\pi g\rho(x), \quad \I x=0,\quad 0<\R x<L.
\label{discont}
\end{eqnarray}
\section{Hydrodynamics of Calogero-Sutherland liquid}
\label{hydrodynamiclimit}
\subsection{Density and velocity}
We assume that in the thermodynamic limit $N, L\to\infty$, $N/L=const$ the poles
of the function $u_1$ are distributed along the real axis with a
smooth density $\rho(x)$ and consider a complex field $u_1(w)$
given by formula (\ref{u-def}). Notice that $u_1(w)$ defined by
(\ref{u-def}) is analytic everywhere outside of the real axis of $x$
(everywhere off the unit circle in $z$-plane) approaching a constant
as $z\to\infty$. It also satisfies the reality condition
(\ref{Schwarz}) (the density $\rho(x)$ is real). In the thermodynamic limit the function $u_1$
is not a rational function anymore. It is discontinuous across
the real axis with the discontinuity related to the density of
particles by (\ref{discont}). The value of the field $u_1(x)$ on a
real axis (on the unit circle in the $w$-plane) depends on whether one
approaches the real axis from above or below (the unit circle from the
interior $w\to e^{i\frac{2\pi}{L}(x+i0)}$ or from the exterior $w\to
e^{i\frac{2\pi}{L}(x-i0)}$). More explicitly, we have from
(\ref{u-def}) \begin{equation}
u_1(x\pm i0)
= \pi g (\mp\rho + i\rho^{H}).
\label{u-hydr}
\end{equation}
The superscript $H$ in the second term of (\ref{u-hydr}) denotes the Hilbert transform, defined as (see \ref{app-hilbert} for definitions and some properties of the Hilbert transform)
\begin{equation}
f^{H}(x) =\Xint-_{0}^{L}\frac{dy}{L}\, f(y)\cot\frac{\pi}{L}(y-x).
\label{HtransC}
\end{equation}
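In the Fourier basis this transform acts multiplicatively: with the convention (\ref{HtransC}) one finds $\left(e^{i\frac{2\pi n}{L}x}\right)^{H}=i\,\mbox{sgn}(n)\,e^{i\frac{2\pi n}{L}x}$, with the zero mode annihilated. Numerically it is therefore a one-line FFT multiplier; a minimal sketch (in Python; the grid size and test function are illustrative assumptions):
\begin{verbatim}
import numpy as np

# Periodic Hilbert transform via FFT: mode n -> i*sgn(n)*mode (zero mode -> 0).
L, N = 2*np.pi, 256
x = np.arange(N) * L / N
n = np.fft.fftfreq(N, d=1.0/N)     # integer mode numbers

def hilbert_per(f):
    return np.real(np.fft.ifft(1j*np.sign(n)*np.fft.fft(f)))

f = np.cos(3*x)
err = np.max(np.abs(hilbert_per(f) - (-np.sin(3*x))))
print(err)                          # ~1e-15: (cos)^H = -sin in this convention
\end{verbatim}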
We also assume that in the $N\to\infty$ limit the complex field $u_0(w)$
remains analytic in the vicinity of the real axis in $x$-plane
(i.e., in the vicinity of a unit circle in $z$-plane).
The 2BO (\ref{2BO}) does not explicitly depend on the number of
particles $N$. It holds also in thermodynamic limit $N, L\to\infty$,
$N/L=const$, however solutions describing a liquid are not rational
functions any longer.
We can use 2BO to define velocity through the \textit{continuity
equation} \begin{eqnarray}
\rho_{t} &+&\partial_{x}(\rho v) = 0.
\label{continuity}
\end{eqnarray}
The discontinuity of the complex field $u(x)$ (\ref{decomp}) across
the real axis, as well as a discontinuity of the field $u_1$ (see
(\ref{discont})) is the density \begin{equation}
u(x+i0)-u(x-i0) = u_1(x+i0)-u_1(x-i0) = -2\pi g \rho(x).
\label{udiscont}
\end{equation}
Differentiating (\ref{udiscont}) with respect to time and using
2BO (\ref{2BO}) we obtain the continuity equation and identify the
\textit{velocity field} $v(x)$ as \begin{eqnarray}
v(x) &=& u_0(x) +\frac{1}{2}\left(u_1(x+i0)+u_1(x-i0)\right)
-ig\partial_{x}\log\sqrt\rho(x)
\nonumber \\
&=& u_0(x) +ig\left(\pi \rho^{H}(x) -\partial_{x}\log\sqrt\rho(x)\right)
\end{eqnarray}
or
\begin{equation}
u_0(x) = v -ig\left(\pi \rho^{H} - \partial_{x}\log\sqrt\rho\right).
\label{u+hydr}
\end{equation}
Since $v(x)$ is a real field (\ref{u+hydr}) provides a reality
condition analogous to (\ref{u+real1}). Indeed, one can see from
(\ref{u+hydr}) that \begin{equation}
\I u_0(x) = -g\left(\pi \rho^{H}-\partial_{x}\log\sqrt\rho\right),
\label{u+real2}
\end{equation} i.e., the imaginary part of $u_0(x)$ is completely determined by
the density of particles or equivalently by the field $u_1$. It is
also convenient to have an expression for $u(x)$ on a real axis \begin{eqnarray}
u(x\pm i0) = v + g\left(\mp \pi \rho +i\partial_{x}\log\sqrt\rho\right).
\label{ux}
\end{eqnarray}
It has the same discontinuity across the real axis as $u_1(x)$.
\subsection{Hydrodynamic form of 2BO.}
Now we are ready to cast the equation (\ref{2BO}) into hydrodynamic form.
Taking the real part of 2BO (\ref{2BO}) on the real axis and using
identifications (\ref{u-hydr},\ref{u+hydr}) and the continuity
equation, (\ref{continuity}), after some algebra we arrive at the
\textit{Euler equation} \begin{eqnarray}
v_{t} &+& \partial_{x}\left(\frac{v^{2}}{2} +w(\rho)\right) = 0,
\label{Euler}
\end{eqnarray}
with specific (per particle) enthalpy or chemical potential\footnote{The specific enthalpy and chemical potential are identical at zero temperature.} given by
\begin{equation}
w(\rho) =\frac{1}{2}(\pi g \rho)^{2}
-\frac{g^2}{2}\frac{1}{\sqrt{\rho}}\,\partial_x^2\sqrt{\rho} +\pi g^{2}\rho_{x}^{H}.
\label{enthalpy}
\end{equation}
Equations (\ref{continuity},\ref{Euler}) are the continuity and
Euler\footnote{The eq.~(\ref{Euler}) has a form of an Euler equation for an isentropic flow. Because of the long range character of interactions the enthalpy cannot be replaced by the conventional pressure term $\partial_{x}w(\rho) \to \rho^{-1}\partial_{x}(p(\rho))$ - the standard form of the Euler equation.} equations of classical Calogero-Sutherland model. They are
the classical analogues of quantum hydrodynamic equations that have
been obtained for the quantum CSM in Refs.
\cite{AJL-1983,AndricBardek-1988,Awata} first using collective field
theory approach \cite{JevickiSakita,Sakita-book,Jevicki-1992} and
later by the {\it pole ansatz} similar to the one used above
\cite{2005-AbanovWiegmann}.
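As an illustration, the right-hand sides of (\ref{continuity},\ref{Euler},\ref{enthalpy}) are straightforward to evaluate pseudo-spectrally; the following minimal sketch (in Python) does so on a periodic grid. The grid, coupling, and initial data are illustrative assumptions, and a production solver would add a suitable time integrator and dealiasing:
\begin{verbatim}
import numpy as np

# Pseudo-spectral right-hand side of the 2BO hydrodynamic equations (sketch).
L, N, g = 2*np.pi, 256, 1.0
x = np.arange(N)*L/N
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)               # wavenumbers

d_x  = lambda f: np.real(np.fft.ifft(1j*k*np.fft.fft(f)))
hilb = lambda f: np.real(np.fft.ifft(1j*np.sign(k)*np.fft.fft(f)))

def rhs(rho, v):
    # enthalpy w(rho) = (pi g rho)^2/2 - (g^2/2) (d_x^2 sqrt(rho))/sqrt(rho)
    #                   + pi g^2 (rho_x)^H
    sq = np.sqrt(rho)
    w = 0.5*(np.pi*g*rho)**2 - 0.5*g**2*d_x(d_x(sq))/sq + np.pi*g**2*hilb(d_x(rho))
    return -d_x(rho*v), -d_x(0.5*v**2 + w)         # (rho_t, v_t)

rho, v = 1.0 + 0.1*np.cos(x), np.zeros(N)          # illustrative initial data
drho, dv = rhs(rho, v)
\end{verbatim}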
It was noticed in \cite{Jevicki-1992} and then in
\cite{1995-Polychronakos} that the system
(\ref{continuity},\ref{Euler},\ref{enthalpy}) has a lot of
similarities with the classical Benjamin-Ono equation
\cite{Benjamin-Ono}. The similarities and differences with
Benjamin-Ono equation are discussed below. We will refer to
(\ref{continuity},\ref{Euler},\ref{enthalpy}) as a hydrodynamic
form of the \textit{bidirectional Benjamin-Ono equation} (2BO).
\subsection
{Bidirectional Benjamin-Ono equation (2BO).}
\label{subsec:2BO}
Let us now summarize the 2BO equation:
\begin{eqnarray}
\label{2BO1}
&& u_{t}+\partial_{x}\left[\frac{1}{2}u^{2}+i\frac{g}{2} \partial_{x}\tilde u \right] =0,\\
&& u=u_0+u_1,\quad \tilde u=u_0-u_1.
\end{eqnarray}
The functions $u_0$ and $u_1$ are subject to analyticity conditions
\begin{eqnarray}
\label{A1}
&& u_1(x) \qquad \mbox{- analytic for}\;\; \I(x) \neq 0,
\\
\label{A2}
&& u_0(x) \qquad \mbox{- analytic for}\;\; |\I(x)|
<\epsilon \;\; \mbox{for some}\;\; \epsilon>0,
\end{eqnarray}
and to reality conditions
\begin{equation}
\label{A3}
\overline{u_1(x)}=-u_1(\bar{x}).
\end{equation}
In addition, the fact that the equation (\ref{2BO1}) holds in
the upper half plane and in the lower half plane (inside and outside
of the unit circle) yields the condition
\begin{equation} \I[u(x\pm i0)] = \frac{g}{2} \partial_{x} \log \R [u_1(x\pm i0)].
\label{u+Im}
\end{equation} It also follows from (\ref{u-hydr},\ref{u+real2},\ref{ux}). The
condition (\ref{u+Im}) looks more ``natural'' in the bilinear
formulation (see eq. (\ref{blreal}) below).
These reality and analyticity conditions reduce two complex fields
$u_0$ and $u_1$ to two real fields - density $\rho(x)$ and velocity
$v(x)$ as (\ref{u-hydr},\ref{u+hydr}). Then, a complex equation
(\ref{2BO}) defined in both half planes immediately yields the
hydrodynamic equations
(\ref{continuity},\ref{Euler},\ref{enthalpy}). Inversely, knowing
real periodic fields $\rho(x)$ and $v(x)$ one can find fields $u_0,
u_1$ everywhere in a complex $x$-plane.
\subsubsection*{Mode expansion}
The analyticity and reality conditions can be recast in the language
of mode expansions. It follows from (\ref{u-def}) that \begin{equation}\label{22}
u_1(w) = \left\{
\begin{array}{lr}
- \pi g\left(\rho_{0}+2\sum_{n=1}^{\infty}\rho_{ n}w^n\right),\quad |w|<1
\\
\pi g\left(\rho_{0}+2\sum_{n=1}^{\infty}\rho_{n}^\dag w^{- n}\right),\quad |w|>1
\end{array}
\right.\end{equation}
where $\rho_{n} =\rho^\dag_{-n}=\int_{0}^{L}\frac{dx}{L}\, \rho(x) e^{-i\frac{2\pi n}{L}x}$ are Fourier components of the density.
The values of the field $u_{1}(w)$ in the upper and lower half-planes are then automatically related by Schwarz reflection (\ref{Schwarz}).
Conversely, the field $u_0(x)$ being analytic in a strip around the unit circle is represented by Laurent series
\begin{equation}
\label{38}
u_0(w) =V_0+ \sum_{n=1}^{\infty}\left(a_{n} w^{n}+b_nw^{-n}\right), \quad |\I\log w|<2\pi \epsilon/L.
\end{equation}
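The equivalence of the series (\ref{22}) with the integral representation (\ref{u-def}) is easy to check numerically. A minimal sketch (in Python; the sample density is an illustrative assumption):
\begin{verbatim}
import numpy as np

# Check eq. (22) against (u-def) at a point inside the unit disk (sketch).
L, N, g = 2*np.pi, 512, 1.0
x = np.arange(N)*L/N
zeta = np.exp(1j*2*np.pi*x/L)
rho = 1.0 + 0.3*np.cos(3*2*np.pi*x/L)       # sample density
rho_n = np.fft.fft(rho)/N                   # rho_n = (1/L) int rho e^{-i2pi n x/L} dx

w = 0.4 + 0.2j                              # a point with |w| < 1
u1_series   = -np.pi*g*(rho_n[0] + 2*sum(rho_n[m]*w**m for m in range(1, 64)))
u1_integral = -np.pi*g*np.mean((zeta + w)/(zeta - w)*rho)
print(abs(u1_series - u1_integral))         # ~1e-15
\end{verbatim}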
The 2BO equation remains intact in the case of rational
degeneration. Rational degenerations of the formulas of
Sec.~\ref{particlespoles} are obtained by a direct expansion in
$1/L$. In this limit the fields are defined microscopically as $u_1(x)=
-ig\sum_j\frac{1}{x-x_j}$ and $u_0(x)= ig\sum_j\frac{1}{x-y_j}$.
\section{Hamiltonian form of 2BO}
\label{sec:HformdBO}
The 2BO is a Hamiltonian equation. Let us start with its Hamiltonian
formulation in the hydrodynamic form
$\rho_{t}=\left\{H,\rho\right\},\;v_{t}=\left\{H,v\right\}$ with the
canonical Poisson bracket of density and velocity fields \begin{equation}
\left\{\rho(x),v(y)\right\} = \delta'(x-y).
\label{rhovcom}
\end{equation}
Equations (\ref{continuity},\ref{Euler},\ref{enthalpy}) follow from
\begin{eqnarray}
H&=& \int dx\, \left(\frac{\rho v^{2}}{2}
+\rho\epsilon(\rho)\right),
\label{csmrv0} \\
\epsilon(\rho) &=&\frac{g^2}{2}(\pi \rho^{H} -\partial_{x}\log\sqrt\rho)^{2}.
\label{epsrho0}
\end{eqnarray}
Here the ``internal energy'' (\ref{epsrho0}) and the enthalpy
(\ref{enthalpy}) are related by a general formula
$w(\rho)=\frac{\delta }{\delta \rho(x)}\int dx\, \rho
\epsilon(\rho)$.
For references we will give alternative expressions for the
Hamiltonian. Let $\Psi=\sqrt\rho e^{i\vartheta}$ where
$v=g\partial_x\vartheta$ then \begin{equation}
H= \frac{g^2}{2}\int \left|\partial_{x}\Psi-\pi\rho^{H}\Psi\right|^2 dx,
\label{hamPsi}
\end{equation}
where $\rho = |\Psi|^{2}$. The Poisson brackets for $\Psi(x)$ are canonical:
$\left\{\Psi(x),\Psi(y)\right\}=0$, and
$\left\{\Psi(x),\Psi^{\star}(y)\right\}=\frac{i}{g}\delta(x-y)$. The
equations of motion for $\Psi$ and $\Psi^{\star}$ are
\begin{equation}
\frac{i}{g}\partial_{t}\Psi
= \left[-\frac{1}{2}\partial_{x}^{2}+\frac{\pi^{2}}{2}|\Psi|^{4}+\pi \left(|\Psi|^{2}\right)^{H}_{x}\right] \Psi
\end{equation}
and its complex conjugate. A simple change of a dependent variable $\Phi=\Psi e^{i\pi\int^{x}dx'\,|\Psi(x')|^{2}}$ leads to
\begin{equation}
\frac{i}{g}\partial_{t}\Phi = \left[-\frac{1}{2}\partial_{x}^{2}+i 2\pi \left(|\Phi|^{2}\right)^{+}_{x}\right] \Phi,
\label{INLS}
\end{equation}
where $f^{+}$ denotes the function analytic in the upper
half-plane of $x$ defined as $f^{+}=\frac{f-if^{H}}{2}$. One can
recognize in (\ref{INLS}) the intermediate nonlinear Schr\"odinger
equation (INLS) which appeared in Ref.\cite{1995-Pelinovsky} as an
evolution of the modulated internal wave in a deep stratified fluid.
Therefore, one can alternatively think of 2BO as the hydrodynamic
form of (\ref{INLS}) identifying hydrodynamic fields $\rho$ and $v$
to be
\begin{equation}
\Phi = \sqrt{\rho} \exp\left\{\frac{i}{g}\int^{x}dx'\, (v+\pi g\rho)\right\}
\label{back1}
\end{equation}
or with the field $u(x)$ from (\ref{ux}) as
\begin{equation}
ig\partial_{x}\log\Phi^{\star} = u(x-i0).
\label{back2}
\end{equation}
The Hamiltonian (\ref{csmrv0}) or (\ref{hamPsi}) can be rewritten in terms of $\Phi$ as
\begin{equation}
H= \frac{g^2}{2}\int \left|\partial_{x}\Phi-i 2\pi\rho^{+}\Phi\right|^2 dx,
\label{hamPhi}
\end{equation}
where $\rho=|\Phi|^{2}$.
However, the Poisson brackets for $\Phi$ are no longer
canonical.\footnote{A simple calculation using (\ref{rhovcom}) gives
$\left\{\Phi(x),\Phi(y)\right\}=\frac{\pi}{g}\Phi(x)\Phi(y)\,\mbox{sgn}\,(x-y)$,
$\left\{\Phi(x),\Phi^{\star}(y)\right\}=\frac{i}{g}\delta(x-y)-\frac{\pi}{g}\Phi(x)\Phi^{\star}(y)\,\mbox{sgn}\,(x-y)$
and similar expressions for complex conjugated fields. One should
think of $\Psi(x)$ as a canonical bosonic field while of $\Phi(x)$ as a classical analogue of a field with fractional statistics.}
2BO is an integrable system. It has infinitely many integrals of motion. The first three of them follow from global symmetries. They are the conventional ones: the number of
particles $N= \int dx\, \rho$, the total momentum $P= \int dx\,
\rho v$, and the total energy $H=\int dx\, \left(\frac{\rho v^{2}}{2}+\rho\epsilon(\rho)\right)$.
They are conveniently written in terms of the fields $u$ and $\tilde u$ as
\begin{eqnarray}
I_{1} &=& N = \frac{1}{2\pi g}\oint_{C}dx\, u,
\label{numdbo} \\
I_{2} &=& P = \frac{1}{2\pi g}\oint_{C}dx\, \frac{1}{2}u^{2},
\label{momdbo} \\
I_{3} &=& 2H = \frac{1}{2\pi g}\oint_{C}dx\,
\left[\frac{1}{3}u^{3} +i\frac{g}{2} u \partial_{x}\tilde u\right],
\label{hamdbo}
\end{eqnarray}
where the integral is taken over both sides of the unit circle
(the ``double'' contour $C$ shown in Fig.~\ref{fig:leftright}).
For more details on conserved integrals see \ref{app:int}.
\begin{figure}
\bigskip
\begin{center}
\includegraphics[width=6cm]{leftright1.pdf}
\end{center}
\caption{Contour $C$ surrounding the unit circle is shown together with our conventions in defining right and left fields.}
\label{fig:leftright}
\end{figure}
The Poisson bracket for the fields $u_0(w)$ and $u_1(w)$ can be
easily obtained from (\ref{u-hydr},\ref{u+hydr}) and
(\ref{rhovcom}) by analytic continuation. We find that
$\left\{u_0(w),u_0(w')\right\} =\left\{u_1(w),u_1(w')\right\}=0$ and
\begin{eqnarray}
\left\{u_0(w),u_1(w')\right\} &=&
ig\left(\frac{2\pi}{L}\right)^{2} \frac{ww'}{(w-w')^{2}}
= i g \partial_{x} \frac{\pi}{L}\cot \frac{\pi}{L}(x-y).
\label{PB01}
\end{eqnarray}
\section{Bilinearization and relation to MKP1 equation}
\label{sec:BilformdBO}
The equations described in the previous section, their integrable
structures and their connection to integrable hierarchies are the
most transparent in bilinear form.
Let us introduce tau-functions $\tau_{0}$ and $\tau_{1}$ as
\begin{eqnarray}
u_0 &=& ig \partial_{x}\log\tau_{0},
\label{tau01} \\
u_{1} &=& -ig \partial_{x}\log\tau_{1}.
\nonumber
\end{eqnarray}
It can be easily checked that the 2BO (\ref{2BO}) can be rewritten as an elegant bilinear Hirota equation on $\tau$-functions:
\begin{equation}
\left(iD_{t}+\frac{g}{2}\, D_{x}^{2}\right) \tau_1 \cdot \tau_0=0.
\label{2BO-hirota}
\end{equation}
Here we used the Hirota derivative symbols defined as
\begin{eqnarray}
D_{x}^{n}f(x)\cdot g(x) \equiv \lim_{y\to x}(\partial_{x}-\partial_{y})^{n}f(x)g(y).
\end{eqnarray}
For example,
\begin{eqnarray}
D_{t}f \cdot g &=& (\partial_{t}f) g-f(\partial_{t}g),
\nonumber \\
D_{x}^{2} f \cdot g &=& (\partial_{x}^{2}f) g-2(\partial_{x}f)(\partial_{x}g)
+f(\partial_{x}^{2}g),
\end{eqnarray}
etc.
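The definition is mechanical enough to be verified symbolically; a minimal sketch (in Python, using \texttt{sympy}):
\begin{verbatim}
import sympy as sp

x, y = sp.symbols('x y')
f = sp.Function('f')(x)
g = sp.Function('g')(y)

def hirota_Dx(n):
    # D_x^n f.g = lim_{y->x} (d/dx - d/dy)^n f(x) g(y)
    expr = f*g
    for _ in range(n):
        expr = sp.diff(expr, x) - sp.diff(expr, y)
    return expr.subs(y, x)

print(sp.expand(hirota_Dx(2)))  # f''(x)g(x) - 2 f'(x)g'(x) + f(x)g''(x)
\end{verbatim}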
We emphasize that the bilinear equation holds on both sides of the unit circle. Introducing notations
\begin{equation}
\tau_{\pm 1}=\tau_1(x\pm i0)
\end{equation}
we can rewrite the equation as
\begin{eqnarray}
&&\left(iD_{t}+\frac{g}{2}\, D_{x}^{2}\right) \tau_{+1} \cdot \tau_0=0, \\
&&\left(iD_{t}+\frac{g}{2}\, D_{x}^{2}\right) \tau_{-1} \cdot \tau_{0}=\left(-iD_{t}+\frac{g}{2}\, D_{x}^{2}\right) \tau_{0} \cdot \tau_{-1}=0.
\label{2BO-hirota1}
\end{eqnarray}
Equation (\ref{2BO-hirota}) is the modified Kadomtsev-Petviashvili
equation (MKP1). MKP1 contains two independent functions $\tau_1$
and $\tau_0$ and is formally not closed. The analyticity and
reality conditions (\ref{A1}-\ref{u+Im}) stemming from the fact
that all solutions are determined by two real functions $\rho(x,t)$
and $v(x,t)$, close the equation. Under these conditions the
equations can be seen as a real reduction of MKP1. Let us formulate
these conditions in terms of tau-functions.
The first requirement is that $\tau_{+1}$ ($\tau_{-1}$) is analytic and does
not have zeros for $\I x>0$ ($\I x<0$) after analytic continuation. Also
$\tau_0$ should be analytic and should not have zeros in the
vicinity of the real axis, i.e., for $|\I x|<\epsilon$ for some
$\epsilon>0$.
The second requirement is that $\tau_{\pm 1}$ should be related by
Schwarz reflection (\ref{Schwarz}). In terms of tau-functions it
becomes on the unit circle (for real $x$) \begin{equation}
\tau_{-1} =\overline{ \tau_{+1}} e^{i\Theta(t)},
\label{SchwarzTau}
\end{equation}
where a phase $\Theta(t)$ can be any time-dependent function.
The third requirement is related to the fact that $\I u_0$ is a
function of density only and, therefore, can be expressed in terms
of $u_1$ as can be easily seen from (\ref{u-hydr},\ref{u+hydr}).
This condition (\ref{u+Im}) can be written in a bilinear form as
follows \begin{eqnarray}
i D_{x}\tau_{+1}\cdot {\tau_{- 1} }= 2\pi\, \tau_0 \overline{\tau_0}.
\label{blreal} \end{eqnarray} The multiplicative constant in the r.h.s of
(\ref{blreal}) fixes the relative normalization of $\tau_{0}$ and
$\tau_{1}$ and is arbitrary. We have chosen it to be $2\pi$.
Finally, we note that the pole ansatz solution (\ref{u-pa},\ref{u+pa})
corresponds to the polynomial form of tau-functions with zeros at $w_j$
and $u_j$
\begin{eqnarray}
\tau_1(w,t) &=& w^{-N/2} \prod_{j=1}^{N}(w-w_{j}(t)),
\\
\tau_0(w,t) &=& w^{-N/2} \prod_{j=1}^{N}(w-u_{j}(t)).
\end{eqnarray}
\section{Chiral Fields and Chiral Reduction}
\label{sec:chiral}
\subsection{Chiral fields and currents}
The 2BO equation can be conveniently expressed through yet another
right and left handed chiral fields
\begin{equation}
\label{RL}
J_{R,L} = v \pm g\left[\pi \rho+\partial_{x}(\log\sqrt\rho)^{H}\right].
\end{equation}
These fields are real. \footnote{$J_{R,L}$ can be expressed solely in terms of $u_{0}$ field. It is easy to check that (\ref{RL}) is equivalent to $J_{R,L}= \R\left(u_{0}\mp i u_{0}^{H}\right)$.} In terms of them, the 2BO equation (\ref{2BO}) reads
\begin{eqnarray}
\partial_{t}J_{R,L} &+&
\partial_{x}\left(\frac{J_{R,L}^{2}}{2}
\pm \frac{g}{2}\partial_x J_{R,L}^{H}\right)
\nonumber \\
&\mp&
g\partial_{x}\left[J_{R,L}\partial_{x}(\log\sqrt{\rho})^{H}
-(J_{R,L}\partial_{x}\log\sqrt{\rho})^{H}\right] =0.
\label{xieq}
\end{eqnarray}
Here $\rho$ is a function of $J_{R}$ and $J_{L}$ implicitly
given by (\ref{RL}). The Hamiltonian acquires a Sugawara-like form
\begin{equation}
\label{49}
H=\frac{1}{8}\int dx\, \rho \Big[(J_R+J_L)^2+(J_R^H-J_L^H)^2\Big]
\end{equation}
with Poisson brackets
\begin{eqnarray}
\{J_{R,L}(x),J_{R,L}(y)\} &=& \pm 2\pi g\partial_x\delta(x-y)
\label{P} \\
& & \pm \frac{g}{2L}\partial_x\partial_y
\left[\left(\frac{1}{\rho(x)}+\frac{1}{\rho(y)}\right)\cot\frac{\pi}{L}(x-y)\right],
\nonumber \\
\{J_R(x),J_L(y)\} &=& -\frac{g}{2L}\partial_x\partial_y
\left[\left(\frac{1}{\rho(x)}-\frac{1}{\rho(y)}\right)\cot\frac{\pi}{L}(x-y)\right].
\end{eqnarray} We note that Poisson brackets become canonical and left and
right fields decouple in the limit of a constant density.
\subsection{Chiral Reduction}
\label{sec:chiraldBO}
We first note that the right and left currents $J_{R,L}$ are not
separated in eq.~(\ref{xieq}). The equations for $J_{R}$ and $J_{L}$
are coupled through the density $\rho$ which should be found in
terms of $J_{R,L}$ from (\ref{RL}). However, it is possible to find
the \textit{chiral reductions} of 2BO assuming that one of the
currents is constant. We explain this reduction in some detail
in this section.
The 2BO (\ref{2BO}) or (\ref{xieq}) admits an additional reduction to a chiral
sector \cite{2006-BAW-PRL-shocks} where one of the chiral currents
(\ref{RL}), say left current, is a constant $J_L(x,t)=v_0-\pi
g\rho_0$. We can always choose a coordinate system moving with
velocity $v_0$. This is equivalent to setting the zero mode of
velocity to zero $v_0=0$. The condition $J_{L}=-\pi g \rho_{0}$ becomes
\begin{equation}
\label{chiralconstr1}
v=g\left[\pi (\rho-\rho_0)+\partial_{x}(\log\sqrt\rho)^{H}\right].
\end{equation}
Then the currents can be expressed in terms of the density field only
\begin{eqnarray}
J_{L}(x) &=& J_{0},
\nonumber \\
J_{R}(x) &=& J_{0} +J(x),
\label{48} \\
J_{0} &=& \pi g\rho_{0},
\nonumber \\
J(x) &=& 2g\left[\pi \left( \rho-\rho_0\right)+\partial_{x}(\log\sqrt\rho)^{H}\right].
\end{eqnarray} It follows from Eq.~(\ref{xieq}) that once the current $J_{L}$
is chosen to be constant $J_{L}(x)=J_{0}$ at $t=0$ it remains
constant at any later time. The condition (\ref{chiralconstr1}),
therefore, is compatible with 2BO. Then the density $\rho(x,t)$
evolves according to the continuity equation (\ref{continuity})
with velocity determined by the density according to
(\ref{chiralconstr1}). We obtain an important equation (written in
the coordinate system moving with velocity $v_0$) \begin{equation}
\rho_{t}+g\left[\rho\left(\pi\left (\rho-\rho_0\right)
+ \partial_{x}(\log\sqrt\rho)^{H}\right)\right]_{x} =0.
\label{cheq10}
\end{equation} We refer to this equation as the Non-Linear Chiral Equation
(NLC). A substitution of the chiral constraint (\ref{48}) into
(\ref{49}) gives the Hamiltonian for NLC
\begin{eqnarray}
H &=& \frac{1}{8}\int dx\, \rho \left[J^2+(J^H)^2\right]
\label{JJ}
\end{eqnarray}
with Poisson brackets for $J(x)$ following from (\ref{P}). This equation constitutes one of the major results of this paper.
NLC can be written in several useful forms. One of them is: \begin{equation}
\label{chi}
\varphi_{t}+g\left[\pi \rho_0\left(2e^{\varphi}-\varphi\right)
+\frac{1}{2}\varphi_{x}^{H}\right]_{x} +\frac{g}{2}\varphi_x \varphi^{H}_{x}=0,
\end{equation}
where $\rho(x)=\rho_0\, e^{\varphi(x)}$.
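Numerically, the right-hand side of (\ref{cheq10}) is as simple to evaluate as that of the full 2BO; a minimal pseudo-spectral sketch (in Python; grid and parameters are illustrative assumptions):
\begin{verbatim}
import numpy as np

# rho_t = -g * d_x[ rho*(pi*(rho-rho0) + (d_x log sqrt(rho))^H) ]   (sketch)
L, N, g, rho0 = 2*np.pi, 256, 1.0, 1.0
x = np.arange(N)*L/N
k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
d_x  = lambda f: np.real(np.fft.ifft(1j*k*np.fft.fft(f)))
hilb = lambda f: np.real(np.fft.ifft(1j*np.sign(k)*np.fft.fft(f)))

def rhs_nlc(rho):
    return -g*d_x(rho*(np.pi*(rho - rho0) + hilb(d_x(np.log(np.sqrt(rho))))))

rho = rho0 + 0.1*np.cos(x)                 # illustrative initial density
print(np.max(np.abs(rhs_nlc(rho))))        # size of the initial time derivative
\end{verbatim}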
\subsection{Holomorphic Chiral field}
Under the chiral condition (\ref{chiralconstr1}) the field $u_0$
becomes analytic inside the disk. Indeed, combining
(\ref{chiralconstr1}) and (\ref{u+hydr}) we obtain \begin{equation}
u_0(w)=\frac{1}{2}\oint\frac{d\zeta}{2\pi i\zeta}\, \frac{\zeta+w}{\zeta-w}J(\zeta),\quad |w|<1.
\end{equation} In the chiral case it has only non-negative powers of $w$ in
the expansion (\ref{38}). Negative modes vanish $b_n=0$.
Conversely, the condition of $u_0$ to be analytic inside the unit
disk is equivalent to $J_{L}=const$.
The current itself (\ref{48}) is the boundary value of the field
$\R u_0$ harmonic inside the disk \begin{equation}
J(x)=2J_{0}+2\R u_0 =2J_0+\sum_{n=1}^{\infty}\left(a_{n}w^n+\bar a_{n}w^{-n}\right).
\end{equation}
The fields $u$ and $\tilde u$ are in turn also analytic inside the
disk. Let $\varphi$ be the harmonic function inside the disk with
boundary value $\log(\rho/\rho_{0})$. Then
$\varphi=\phi(w)+\overline{\phi(w)}$, where $\phi(w)=
(\log(\rho/\rho_{0}))^{+}$. Here $
f^{+}(w)=\frac{1}{2}\int\frac{\zeta+w}{\zeta-w}f(\zeta)\,\frac{d\zeta}{2\pi
i \zeta}$ is a function analytic in the interior of a unit circle
whose value on the boundary of the disk is $(f(x)-if^{H}(x))/2$. It
follows from (\ref{ux},\ref{chiralconstr1}) that \begin{eqnarray}
u=-J_{0}+ig \partial \phi,\quad |w|<1,
\\
\tilde u=u+4\pi g\rho_0 \left(e^{\varphi}\right)^{+},\quad |w|<1,
\end{eqnarray}
Then 2BO (\ref{2BO1}) becomes an equation on an analytic function in the interior of a unit circle
\begin{equation}
\dot\phi+i\frac{g}{2} \left[(\partial\phi)^2+\partial^2\phi\right]
+\pi g\rho_0\partial\left(2e^{\varphi}-\varphi\right)^{+}
=0.
\end{equation} This is the ``positive part'' of (\ref{chi}) which is a direct
consequence of (\ref{cheq10}).
We remark here that the chiral equation (\ref{cheq10}) has a
geometric interpretation as an evolution equation describing the
dynamics of a contour on a plane. Within this interpretation the
term $\partial_{x}(\log\sqrt{\rho})^{H}$ of (\ref{cheq10}) is the
curvature of the contour (see \ref{sec:geom}).
\subsection{Benjamin-Ono Equation}
Another form of the Chiral Equation (\ref{cheq10}) arises when one
considers the fields $u$ and $\tilde u$ outside the disk. There
neither $u$ nor $\tilde u$ are analytic, but their boundary values
are connected by the Hilbert transform \begin{eqnarray}
\label{12}
u(x-i0) &=& -J_{0}+2g \left[\pi \rho +i\partial_{x}(\log\sqrt{\rho})^{+}\right],
\\
\tilde u(x-i0) &=&-J_{0} -i u^H(x-i0).
\end{eqnarray}
The bidirectional equation (\ref{2BO1}), complemented by
this condition, becomes unidirectional (chiral)
\begin{eqnarray}
\label{BO2}
&&u_{t}+\partial_{x}\left[\frac{1}{2}u^{2}+\frac{g}{2} \partial_{x} u^H \right] =0.
\end{eqnarray}
This is just another form of the chiral equation (\ref{cheq10}).
The chiral equation (\ref{BO2}) has the form of the
Benjamin-Ono equation \cite{Benjamin-Ono}. There are noticeable
differences, however. Contrary to the Benjamin-Ono equation,
Eq.~(\ref{BO2}) is written on a complex function, whose real and
imaginary values at real $x$ are related by conditions (\ref{12})
implementing the reality of the density: \begin{equation}
\label{C}
\R u = -J_{0}+ 2g\pi\rho+g\left(\partial_x\log\sqrt\rho\right)^H, \quad \I u=g\partial_x\log\sqrt\rho.
\end{equation} One understands this relation as a condition on the initial
data. Once it is imposed by choosing the initial data for the
density $\rho$, the condition remains intact during the evolution.
However, in the case when the deviation of a density is small with
respect to the average density $|\rho-\rho_0|\ll \rho_0$, the
imaginary part of $u$ vanishes in the leading order of $1/\rho_0$
expansion
$$
u\approx J_{0}+ 2\pi g\rho_0\varphi\approx J_{0}+ 2\pi g (\rho-\rho_0)
$$
and the condition (\ref{C}) becomes non-restrictive. In this limit
Eq.~(\ref{BO2}) becomes an equation on a single real function. It
is the conventional Benjamin-Ono equation.
One can think of NLC (\ref{cheq10}) as a finite-amplitude extension of BO.
Similarly, 2BO is an integrable bidirectional finite amplitude extension of BO. It is interesting that there exists another bidirectional finite amplitude extension of BO -- the Choi-Camassa equation \cite{1996-ChoiCamassa}. However, it seems that the latter is not integrable.
\section{Multi-phase solution}
\label{sec:Multi-phase}
In this section we describe the most general finite dimensional
solutions of 2BO. These are multi-phase solutions and their
degenerations -- multi-soliton solutions. In the former case the
$\tau$-functions are polynomials in $e^{ik_ix}$, where the $k_i$ form a
finite set of parameters; in the latter case they are just polynomials in $x$.
These solutions are given by determinants of finite dimensional
matrices. They appeared in the arXiv version of
Ref.~\cite{2006-BAW-shocks}. One can construct those solutions
using the transformation (\ref{back1}) of 2BO to INLS (\ref{INLS}).
For the latter, multi-phase solutions were written down in \cite{1995-Pelinovsky} (see also \cite{2004-Matsuno,2004-Matsuno-INLS}). We take a different route in this section, deriving multi-phase and multi-soliton solutions as a real reduction of the corresponding solutions of MKP1.
\subsection{Multi-phase and multi-soliton solutions of MKP1}
We start from a general multi-phase solution of MKP1 equation
and then restrict it to 2BO equation.
A general multi-phase solution of MKP1 equation
\begin{eqnarray}
\left(iD_{t}+\frac{g}{2}\, D_{x}^{2}\right) \tau_{1}\cdot \tau_{0} =0
\label{MKP1}
\end{eqnarray} is given by the following determinant formulae
\cite{1979-SatsumaIshimori,Matsuno-book}
\begin{eqnarray}
\tau_a &=& e^{i\theta_a} \det\left[\delta_{jk}
+c_{a,j} \frac{e^{i\theta_{j}}}{p_{j}-q_{k}}\right],\quad a=0,1
\label{86}\\
\frac{c_{1,j}}{c_{0,j}}&=& \frac{q_{j}-K-v_0}{p_{j}-K-v_0},
\label{relcc}
\end{eqnarray}
where the phases are
\begin{eqnarray}
g\,\theta_{j}(x,t) &=&
(q_{j}-p_j)(x-x_{0j})-\frac{q_{j}^{2}-p_j^2}{2}t,
\label{thetaj} \\
g\, \theta_0(x,t) &=& Kx-\frac{K^{2}}{2}t
-\left((v_{0}+K)x-\frac{(v_{0}+K)^{2}}{2}t\right),
\label{thetap} \\
g\, \theta_1(x,t) &=& Kx-\frac{K^{2}}{2}t.
\label{thetam}
\end{eqnarray} This solution is characterized by an integer number $N$ (number
of ``phases''), and by $4 N-1$ parameters $p_{j}$, $q_{j}$,
$c_{0,j}$, $x_{0j}$ and moduli $K$ and $v_0$. The solutions become
single-valued on a unit circle if $p_{j}$ and $q_{j}$ are integers
in units of $g\frac{2\pi}{L}$.
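A direct numerical evaluation of (\ref{86}) is straightforward. The sketch below (in Python) computes the determinant factor of $\tau_a$ at $x_{0j}=0$; the sample $p_j$, $q_j$, $c_j$ are illustrative assumptions and are not yet adjusted to the reality conditions derived in the next subsection:
\begin{verbatim}
import numpy as np

# Determinant factor of the N-phase tau-function, eq. (86), with x_{0j}=0.
def tau_det(xv, t, p, q, c, g=1.0):
    theta = ((q - p)*xv - 0.5*(q**2 - p**2)*t)/g
    M = np.eye(len(p), dtype=complex) \
        + (c*np.exp(1j*theta))[:, None]/(p[:, None] - q[None, :])
    return np.linalg.det(M)

p = np.array([1.0, 3.0])
q = np.array([0.5, 2.5])
c = np.array([1.0+0j, 1.0+0j])             # illustrative coefficients
print(tau_det(0.3, 0.0, p, q, c))
\end{verbatim}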
\subsection{Multi-phase solution of 2BO}
\label{sec:mps2BO}
Without further restrictions the parameters entering
(\ref{86}-\ref{thetam}) are general complex numbers. The reality
of the 2BO equation restricts them to be real.
The real moduli $K$ and $v_0$ are obviously zero modes of the fields
$u_1$ and $u_0$ respectively, and therefore, they are zero modes of
the density $\rho_{0}=\frac{1}{L}\int \rho \,dx= -K/(\pi g)$ and
velocity $\frac{1}{L}\int v \,dx= v_{0}$.
\subsubsection{Schwarz reflection condition}
We have to restrict the coefficients $c_{a,j}$, so that there exists another
solution $\tau_{-1},\tau_0$ of Eq.~(\ref{MKP1}) sharing the same
$\tau_0$ with the solution (\ref{86},\ref{relcc}) and
obeying the Schwarz reflection property (\ref{SchwarzTau}).
The Galilean symmetry of the equation (\ref{MKP1}) is here to help.
If $\tau_a(x,t),\,a=0,1$ give a solution of (\ref{MKP1}) then the
pair $e^{iP_ax-iE_at}\tau_a(x-gP_at,t)$ is also a solution provided
that $E_{a+1}-E_a=\frac{1}{2}(P_{a+1}-P_a)^2$.
Being applied to the solution (\ref{86}-\ref{thetam}) the Galilean invariance can be utilized as follows. We notice from (\ref{thetaj}) that
\begin{equation}
\theta_{j}(x-Pt, t; \left\{p_{j},q_{j}\right\}) = \theta_{j}(x, t; \left\{p_{j}+P,q_{j}+P\right\}).
\label{shiftprop}
\end{equation}
Performing the Galilean boost to (\ref{86}), multiplying both tau-functions by $e^{-i Pv_{0}t}$ and shifting $p_{j}\to p_{j}-P$, $q_{j}\to q_{j}-P$ we obtain that
\begin{eqnarray}
\tau_{-1} &=& e^{i\theta_1-i P (K+v_{0})t+\frac{i}{g}(Px-\frac{P^{2}}{2}t)} \det\left[\delta_{jk}
+b_{j} \frac{e^{i\theta_{j}}}{p_{j}-q_{k}}\right],
\label{MKP1sol-t} \\
\frac{b_{j}}{c_{0,j}} &=& \frac{q_{j}-K-v_{0}-P}{p_{j}-K-v_{0}-P}
\label{relcct}
\end{eqnarray}
form a solution of (\ref{MKP1}) with the same $\tau_0$ (\ref{86}).
Now we are going to show that for a particular choice of
coefficients $c_{1,j}$ the Galilean boosted solution
(\ref{MKP1sol-t},\ref{relcct}) is a complex conjugate of $\tau_{+1}$
from (\ref{86}). To show this we will employ the determinant
identity (\ref{detid}).\footnote{A similar trick was used by Matsuno
\cite{2004-Matsuno} to prove the reality of a multi-phase solution
for conventional Benjamin-Ono equation.}
We apply the determinant identity (\ref{detid}) to (\ref{MKP1sol-t}) and obtain
\begin{eqnarray}
{\tau}_{-1} &=&
e^{i\theta_1-i P (K+v_{0})t+\frac{i}{g}(Px-\frac{P^{2}}{2}t)}
\left(\prod_{j}
e^{i\theta_{j}}\sqrt{\frac{b_{j}}{\tilde{b}_{j}}}\right) \det\left[\delta_{jk}
+\tilde{b}_{j}
\frac{e^{-i\theta_{j}}}{p_{j}-q_{k}}\right],
\label{MKP1sol-trans}
\end{eqnarray}
where
\begin{equation}
\tilde{b}_{j} b_{j}= (p_{j}-q_{j})^{2}
{\prod_{k \neq j}}\frac{(p_{j}-q_{k})(q_{j}-p_{k})}{(p_{j}-p_{k})(q_{j}-q_{k})}.
\label{btjbj}
\end{equation}
The Schwarz reflection condition (\ref{SchwarzTau}) ${\tau}_{-1}=\overline{\tau_{+1}}e^{i\Theta(t)}$ requires
\begin{equation}
\tilde{b}_{j} = c_{1,j},
\label{btj}
\end{equation}
determines $ \Theta(t)$, and gives a relation
\begin{equation}
P=\sum_j(p_j-q_j)-2K.
\label{PpqK}
\end{equation}
Finally, combining all relations (\ref{relcc},\ref{relcct},\ref{btjbj},\ref{btj}) together we obtain the condition on coefficients $c_{a,j}$
\begin{eqnarray}
\label{caj2BO}
\left(\frac{c_{a,j}}{p_{j}-q_{j}}\right)^{2}
&&{\prod_{k \neq j}}\frac{(p_{j}-p_{k})(q_{j}-q_{k})}{(p_{j}-q_{k})(q_{j}-p_{k})}
\\
&=&
\left(\frac{p_{j}-K-v_{0}}{q_{j}-K-v_{0}}\right)^{1-2a}
\frac{p_{j}-K-v_{0}-P}{q_{j}-K-v_{0}-P}.
\nonumber
\end{eqnarray}
Condition (\ref{caj2BO}) is necessary to turn a general solution of MKP1 into a solution of 2BO. We also have to find a condition that guarantees that $\tau_{1}$ has no zeros inside the unit disk. Before turning to this analysis, we first discuss degeneration of formulas (\ref{86}) into multi-soliton solution.
\subsubsection{Multi-soliton solution of 2BO}
The multi-soliton solution of 2BO follows from the multi-phase solution in the limit $p_{j}\to q_{j}$. We introduce
\begin{eqnarray}
k_{j} &=& p_{j}-q_{j},
\\
v_{j} &=& \frac{1}{2}(p_{j}+q_{j})
\end{eqnarray}
and consider the limit $k_{j}\to 0$ keeping $v_{j}$ fixed. After some straightforward calculations we obtain
\begin{eqnarray}
\tau_a &=& e^{i\theta_a} \det\left[\delta_{jk} \left( x-x_{0j}-v_{j}t+iA_{a,j}\right)
+i g \frac{1-\delta_{jk}}{v_{j}-v_{k}}\right],
\label{taupmmsol}\\
A_{a,j} &=& \frac{g}{2} \left(\frac{1}{v_{j}-v_{0}+K}\pm \frac{1}{v_{j}-v_{0}-K}\right),\quad a=0,1.
\label{xij}
\end{eqnarray}
Here the upper sign in (\ref{xij}) corresponds to $a=0$ and the lower one to $a=1$. One notices that in the limit $t\to +\infty$ the solution
(\ref{taupmmsol}) asymptotically goes to the factorized form
\begin{eqnarray}
\tau_a \to e^{i\theta_a}\prod_{j}(x-x_{0j}-v_{j}t+iA_{a,j}),
\label{taufact}
\end{eqnarray} describing separated single solitons.
Eq.~(\ref{taufact}) gives a large time value of zeros of $\tau_1$. Their imaginary part is
\begin{equation}
-\R A_{1,j} = g \frac{K}{(v_{j}-v_{0})^{2}-K^{2}}.
\label{Ajmlim}
\end{equation}
It must be negative in order for $\tau_1$ to have no zeros inside the unit disk. Since $K<0$, we must require
\begin{equation}
\label{103}
(v_j-v_0)^2>K^2.
\end{equation}
In the next paragraph we argue that under this condition and additional restrictions on parameters $p_{j}$, $q_{j}$ (see eq.~(\ref{ordering1}) below) the moving
zeros never cross the real axis, and therefore zeros stay outside of
the unit disk at all times.
To conclude this section we note a unique property of the 2BO
equation (shared with the BO equation). Namely, there is a
``quantization'' of the mass of solitons: each soliton of 2BO
carries a unit of mass regardless of its velocity. For the $N$-soliton solution we have
\begin{equation}
\int dx\, (\rho-\rho_{0})=N.
\label{mscharge}
\end{equation}
Here $K=-\pi g\rho_{0}$.
The total momentum and the total energy of a multi-soliton solution are given by
\begin{eqnarray}
&& \int dx\, (\rho v-\rho_{0}v_{0}) = \sum_{j}v_{j},
\label{msmom}\\
&& \int dx\,\left(\frac{\rho v^{2}}{2}+\rho\epsilon(\rho)
-\frac{\rho_{0} v_{0}^{2}}{2}-\rho_{0}\epsilon(\rho_{0}) \right)
= \sum_{j}\frac{v_{j}^{2}}{2},
\label{msenergy}
\end{eqnarray}
where $\epsilon(\rho)$ is defined in (\ref{epsrho0}).
The one-soliton solution has the form
\begin{eqnarray}
\rho &=& \rho_{0} +\frac{1}{\pi} \frac{A_{1}}{(x-x_{01}-v_{1}t)^{2}+A_{1}^{2}},
\\
v &=& v_{0} +g \frac{A_{0}}{(x-x_{01}-v_{1}t)^{2}+A_{0}^{2}},
\end{eqnarray}
where
\begin{eqnarray}
A_{1} &=& \R A_{1,1} = \frac{\pi g^{2}\rho_{0}}{(v_{1}-v_{0})^{2}-(\pi g\rho_{0})^{2}},
\\
A_{0} &=& \R A_{0,1}= \frac{g(v_{1}-v_{0})}{(v_{1}-v_{0})^{2}-(\pi g\rho_{0})^{2}}.
\end{eqnarray}
This one-soliton solution was first found in Ref.~\cite{1995-Polychronakos} (see also \cite{AndricBardekJonke-1995}).
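The unit-mass property (\ref{mscharge}) is immediate for this Lorentzian profile; a quick numerical check (in Python; the parameters are illustrative and satisfy $(v_1-v_0)^2>(\pi g\rho_0)^2$):
\begin{verbatim}
import numpy as np

# Unit-mass check for the one-soliton profile (sketch).
g, rho0, v0, v1 = 1.0, 1.0, 0.0, 4.0   # (v1-v0)^2 = 16 > (pi*g*rho0)^2 ~ 9.87
A1 = np.pi*g**2*rho0/((v1 - v0)**2 - (np.pi*g*rho0)**2)
x = np.linspace(-500.0, 500.0, 2000001)
rho = rho0 + (1.0/np.pi)*A1/(x**2 + A1**2)   # t = 0, x01 = 0
print((rho - rho0).sum()*(x[1] - x[0]))      # ~ 1, independently of v1
\end{verbatim}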
\subsubsection{Analyticity condition}
Now we can turn to the multi-phase solution and derive conditions
sufficient for $u_{1}$ to be analytic in the upper
half-plane of the complex $x$-variable (inside the unit disk).
Analyticity in the lower half-plane follows from the Schwarz
reflection condition (\ref{SchwarzTau}). We will follow the
approach of Dobrokhotov and Krichever
\cite{1991-DobrokhotovKrichever} developed for Benjamin-Ono
equation.
Analyticity of $u_1$ means that $\tau_1$ given by (\ref{86}) has no zeros in the upper half plane, or that the matrix
\begin{eqnarray}
M_{jk} =\delta_{jk}
+\frac{c_{1,j}
e^{i\theta_j} }{p_{j}-q_{k}}
\label{M-matrix}
\end{eqnarray}
is non-degenerate.
Following the approach of Ref. \cite{1991-DobrokhotovKrichever} we
derived in \ref{sec:ndc} a sufficient condition of non-degeneracy of
the matrix $M$ from (\ref{M-matrix}). Let us now write that
condition (\ref{mndcond},\ref{fj}) with $c_{j}$ defined by
(\ref{cj1},\ref{caj2BO}). We obtain (calculating $f_{j}$)
\begin{equation}
\frac{P}{\tilde{p}_{j}(\tilde{q}_{j}-P)}
\prod_{k(k\neq j)}\frac{\tilde{p}_{j}-\tilde{q}_{k}}{\tilde{p}_{j}-\tilde{p}_{k}}
\;\; \mbox{same sign for all }j,
\label{f1cond}
\end{equation}
where we used shifted numbers $\tilde{p}_{j}=p_{j}-K-v_{0}$ and similar for $\tilde{q}_{j}$.
Here $P=-2K +\sum_j(\tilde{p}_j-\tilde{q}_j)$.
The set of conditions
\begin{eqnarray}
\tilde{q}_{1} <\tilde{p}_{1}< \ldots <\tilde{q}_{m} <\tilde{p}_{m}
<0 <P
< \tilde{q}_{m+1} <\tilde{p}_{m+1}< \ldots <\tilde{q}_{N} <\tilde{p}_{N}
\label{ordering1}
\end{eqnarray}
satisfies (\ref{f1cond}). Moreover, (\ref{ordering1}) implies
(\ref{103}), which in turn means that at least at some values
of parameters (large time and soliton limit) no zeros of $\tau_1$
are inside the unit disk. Since they also cannot be on the circle,
they do not cross it while moving in time and in the space of
parameters.
Condition (\ref{ordering1}) suggests that a general solution is
characterized by an integer $N-2m$. This is the chirality -- the
difference between the number $N-m$ of right-moving and the number $m$ of left-moving
modes
\begin{eqnarray}
\frac{1}{2\pi g}\int (J_R-v_{0}-\pi g\rho_{0})\,dx
&=& N-m,
\\
\frac{1}{2\pi g}\int (J_L-v_{0}+\pi g\rho_{0})\,dx
&=& m.
\end{eqnarray}
Eqs.~(\ref{86}-\ref{thetam},\ref{caj2BO},\ref{ordering1})
summarize a general finite dimensional quasi-periodic solution. We
emphasize here that this solution is not chiral and contains both right and left-moving modes.
\subsection{Multi-phase solution of the Chiral Non-linear Equation}
The (right) chiral case appears when $\tau_0$ has no zeros outside the unit disk. It naturally happens when the number of, say, left-moving modes $m$ in (\ref{ordering1}) vanishes, $m=0$. In this case all $v_{j}-v_{0}<0$. In turn, the imaginary parts of the zeros of $\tau_0$ in the multi-soliton limit (as in (\ref{Ajmlim}))
\begin{equation}
-A_{0j} =- g \frac{v_{j}-v_{0}}{(v_{j}-v_{0})^{2}-K^{2}}>0
\label{AJplim}
\end{equation} are positive. One can check that in this case (\ref{M-matrix}) with $c_{1,j}\to c_{0,j}$ is non-degenerate for arbitrary values of parameters satisfying (\ref{ordering1}) with $m=0$ (and similarly for $m=N$).
Therefore, $\tau_{0}(x)$ is non-zero in one
of the half-planes.
This is a chiral multi-phase solution of 2BO.
\subsection{Multi-phase solution of the Benjamin-Ono equation}
The known solutions of the Benjamin-Ono equation \cite{CLP-1979,1979-SatsumaIshimori} are obtained from the solutions of the Chiral Non-linear equation by taking the limit $\rho_{0} \to \infty$. In this case $K\to-\infty$ and conditions (\ref{ordering1}) allow for a good limit only if $m=N$ (left sector) or $m=0$ (right sector). Let us concentrate on the right sector. We redefine $p_{j}\to p_{j}-K$, $q_{j}\to q_{j}-K$, $v_{0}\to v_{0}-2 K$, go to the frame moving with velocity $-K$ ($x\to x+Kt$), and obtain from (\ref{ordering1},\ref{caj2BO}) in the limit $K\to -\infty$
\begin{eqnarray}
q_{1}<p_{1}<q_{2}<\ldots<q_{N}<p_{N},
\label{ordering2}
\\
\left(\frac{c_{a,j}}{p_{j}-q_{j}}\right)^{2}
{\prod_{k \neq j}}\frac{(p_{j}-p_{k})(q_{j}-q_{k})}{(p_{j}-q_{k})(q_{j}-p_{k})}
=
\left(\frac{p_{j}-v_{0}}{q_{j}-v_{0}}\right)^{1-2a}
\label{cajBO}
\end{eqnarray}
with the solution given by (\ref{86},\ref{thetaj}) and by (\ref{thetap},\ref{thetam}) (one should put $K=0$ in the latter two). This is nothing but the multi-phase solution of the conventional Benjamin-Ono equation \cite{1979-SatsumaIshimori,2004-Matsuno}.
\subsection{Moving Poles}
The 2BO equation (\ref{2BO}) looks very similar to the classical BO
equation. One of the important tools in studying the classical BO
equation is the so-called pole ansatz - solutions in the form of
poles moving in a complex plane \cite{CLP-1979}. We have already seen that the pole
ansatz (\ref{u-pa},\ref{u+pa}) describes the dynamics of the
original Calogero-Sutherland model with finite number of particles
$N$.
In this section we consider collective excitations of
Calogero-Sutherland model in the limit of infinitely many particles.
These excitations are given by ``complex'' pole solutions of the
2BO.
In the Pole Ansatz (\ref{u-pa},\ref{u+pa}), the reality conditions
were satisfied by requiring $x_{j}$ to be real (or $w_{j}(t)$ moving
on a unit circle). One could generalize the Pole Ansatz
(\ref{u-pa},\ref{u+pa}) to the case where $w_{j}(t)$ are away from the
unit circle and moving in a complex plane. The equations
(\ref{pmotx},\ref{pmoty}) describing the motion of poles preserve
their form. However, $u_{-1}(w)$ outside of the unit circle is not
related to the $u_{1}(w)$ inside of the circle by analytic
continuation but only by Schwarz reflection (\ref{Schwarz}). The
field $u_{1}(w)$ is analytic inside the unit circle and has moving
poles outside of the unit circle (and vice versa for $u_{-1}(w)$).
Of course, having obtained the solution of 2BO inside the unit
circle does not mean automatically that the Schwarz reflected
function (\ref{Schwarz}) will solve 2BO in the exterior of the
circle \textit{with the same} $u_0$. The property (\ref{Schwarz})
requires that (\ref{pmotx},\ref{pmoty}) are satisfied not only by
$u_{j}$ and $w_{j}$ but also by $u_{j}$ and $1/\bar{w}_{j}$. This
requirement will significantly constrain the positions of poles
$w_{j}$ and $u_{j}$ in a complex plane. It turns out
that this constraint allows for non-trivial solutions.
We emphasize once again that while the real-axis poles $x_{j}$ of
$u_{1}$ in the pole ansatz represent the original CS particles, the
complex poles $x_{j}$ represent collective excitations of the CS
liquid moving in the background of a macroscopic number of particles.
Instead of looking for moving-pole solutions directly, in this section we have
taken a different route: we first construct a much more general
solution of 2BO (\ref{2BO}) with proper reality conditions and then
obtain a moving-pole (i.e., multi-soliton) solution as a limit of
the multi-phase solution. One can see from (\ref{taupmmsol}) that for soliton solutions
the zeros of the tau-functions move in the complex plane. This is especially clear at large times,
when the solitons are well separated (\ref{taufact}).
\section{Conclusion and discussion}
In this paper we have shown that the dynamics of the classical
Calogero-Sutherland model in the limit of an infinite number of
particles is equivalent to the bidirectional Benjamin-Ono equation
(\ref{2BO}). The bidirectional Benjamin-Ono equation (2BO) is an
integrable classical integro-differential equation. Its
integrability can be deduced from the fact that it is a Hamiltonian
reduction of MKP1, as shown in this paper. As an alternative,
one can use the equivalence of 2BO to INLS (\ref{INLS}). The
integrability of INLS was proven and the spectral transform was
constructed for INLS in Ref. \cite{1995-PelinovskyGrimshaw} (see
also \cite{2004-Matsuno-INLS}). Therefore, one can use all techniques
developed in the field of classical integrable equations for 2BO. It
has multi-phase solutions (explicitly constructed in this paper),
bi-Hamiltonian structure, an associated hierarchy of higher order
equations, etc. 2BO is intrinsically simpler than many other
classical integrable models. Its solitons have ``quantized'' area,
independent of the soliton's velocity. The collision of two solitons
proceeds without any time delay, etc. This is a reflection of the fact
that the underlying Calogero-Sutherland model is essentially a model of
non-interacting particles in disguise. In particular, 2BO supports a
phenomenon of dispersive shock waves. Some applications of this
phenomenon to quasi-classical description of quantum systems were
considered in \cite{2006-BAW-PRL-shocks}.
Most of the results of this paper can be generalized along two
avenues: generalization to an elliptic case and generalization to a
quantum model.
The Calogero-Sutherland model (trigonometric case) can be
generalized to the elliptic case, the elliptic Calogero model, where
the interaction between particles is the Weierstrass
$\wp(x|\omega_1,\omega_2)$-function with purely real and purely
imaginary periods $\omega_1,i\omega_2$, and to its hyperbolic
degeneration (hyperbolic case) with the inter-particle interaction given
by $\sinh^{-2}(x/\omega_2)$ (see \cite{1981-OlshanetskyPerelomov-classical} for a review).
In both cases most of the formulas remain unchanged if one replaces
the Hilbert transform $f^H$ by a transform with respect to the strip
$0<\I x<\omega$, where $\omega$ is the imaginary period
\begin{equation}
\label{Helliptic}
f^H=\int \zeta(x-x')f(x')dx'\;\;\;\; \mbox{or} \;\;\;
\int \frac{1}{\omega_2} \coth\left(\frac{x-x'}{\omega_2}\right)f(x')dx'.
\end{equation} In the first case the integration goes over a real period of the
Weierstrass $\zeta$-function.
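As a simple consistency check (an expansion added here for orientation, not part of the original construction), the hyperbolic kernel in (\ref{Helliptic}) reduces to the Cauchy kernel of the ordinary Hilbert transform when the imaginary period becomes large:
\begin{equation}
\frac{1}{\omega_2} \coth\frac{x-x'}{\omega_2}
= \frac{1}{x-x'} + \frac{x-x'}{3\,\omega_2^{2}} + O(\omega_2^{-4})
\;\stackrel{\omega_2\to\infty}{\longrightarrow}\; \frac{1}{x-x'},
\end{equation}
so that in this limit $f^H$ goes over into the principal value Hilbert transform on the line.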
The elliptic Calogero model allows one to study the crossover
between liquids with long-range inter-particle interactions and
liquids with short-range interactions. In the limit of a large
imaginary period $\omega_{2}\to\infty$ the $\wp$-function
degenerates to $1/\sin^{2}(x/\omega_1)$, the case of long-range
inter-particle interaction. The opposite limit $\omega_2\to 0$
gives rise to a short-range interaction: $\omega_2\wp(x)\to
\delta(x)$.
In the latter case the Hilbert transform (\ref{Helliptic}) becomes a
derivative, $f^H\to \omega\partial_x f$, and the equations discussed in
this paper become local. In particular, the Benjamin-Ono equation
flows to the KdV equation, while the bidirectional BO equation flows
to NLS, the nonlinear Schr\"odinger equation.
In the limit of small amplitudes and in the chiral sector, 2BO becomes
the conventional Benjamin-Ono equation. In the elliptic
case (and in the hyperbolic one) the limit of small amplitudes in
the chiral sector leads to a generalization of the Benjamin-Ono
equation known as the ILW (Intermediate Long Wave) equation
\cite{AblowitzClarkson-book}. In contrast to the Benjamin-Ono equation
and to 2BO, the ILW equation and its bidirectional generalization
2ILW have elliptic solutions.
We intend to address the elliptic case in a separate publication.
Probably even more interesting is the generalization of the results of this paper
to the quantum case. It is well known that the classical CSM
(\ref{CSM}) can be lifted to a quantum integrable
Calogero-Sutherland model \cite{Calogero-1969,Sutherland-1971,1999-Polychronakos}. The
latter model is defined by (\ref{CSM}) with $g^2 \to
\hbar^2\lambda(\lambda-1)$ and $p_i=-i\hbar\partial_{x_i}$. The 2BO
equation in the form (\ref{2BO}) remains unchanged, except for the
change of the coefficient $g\to\lambda-1$ and for the change of
Poisson brackets (\ref{PB01}) by a commutator: $\{\,,\,\}\to
\frac{i}{\hbar}[\,,\,]$. The change $g\to \lambda-1$, valid for Eq.
(\ref{2BO}), is not correct for all formulas. For example,
the bilinear form of the classical 2BO (\ref{2BO-hirota}) is identical
to its quantum version with just a change of notation, $g\to
\lambda$. For details see \cite{2005-AbanovWiegmann}.
The multi-soliton solution of 2BO presented here corresponds to exact
quasiparticle excitations of the quantum Calogero-Sutherland model \cite{Sutherland-book,1995-Polychronakos}. A
more detailed study of the relations between the integrable structures
of the classical 2BO and its quantum analogue is still needed.
\section{Acknowledgments}
AGA is grateful to A. Polychronakos for the discussion of the chiral case. PW thanks J. Shiraishi for discussions.
The work of AGA was supported by the NSF under the grant DMR-0348358. EB was supported by ISF grant number 206/07. PW was supported by NSF under the grant NSF DMR-0540811/FAS 5-27837 and MRSEC DMR-0213745. We also thank the Galileo Galilei Institute for Theoretical Physics for the hospitality and the INFN for partial support during the completion of this work.
\section{Introduction}
Over the last decade, robotic systems have witnessed a great increase in popularity and are used in increasingly complex applications like search and rescue \cite{erdelj2017help}, exploration and mapping \cite{losch2018design}, etc. However, accurate and robust state estimation in these scenarios is not trivial due to poor visibility (e.g. fog, dust) and self-similar structures (e.g. tunnels, long corridors), which lead to a lack of reliable visual and geometric features. These conditions pose unique challenges to methods based on single-sensor observations, e.g. incorrect feature matching or degeneration problems \cite{mur2017orb} \cite{shan2018lego}. Thus multi-modal sensor fusion has been widely deployed in such tasks \cite{zhao2021super}. Based on their design scheme, these approaches can be grouped into two main categories: tightly-coupled methods and loosely-coupled methods.
The former considers the errors of different observations simultaneously, such as the reprojection error of visual features and the point-to-plane error of Light Detection and Ranging (LiDAR) points \cite{shan2021lvi} \cite{zuo2019lic} \cite{lin2021r}. They show marked improvements in accuracy and robustness in most scenarios. But in the extreme conditions mentioned above, tightly-coupled methods may be susceptible to sensor failures, since most of them only use a single estimation engine \cite{zhao2021super}. Moreover, although these methods analyze the quality of the different measurements (e.g. the number of feature points), in some conditions, like dark but geometrically feature-rich places, their localization accuracy may be lower than that of methods using single high-fidelity measurements, because performing multi-sensor fusion all the time injects more noise into the estimator.
In comparison, loosely-coupled methods are more robust in such perceptually-challenging conditions. Usually, they have a primary estimation engine (typically a LiDAR-based method) and use the estimation results of the other odometries as initial values for scan-to-scan or scan-to-map registration \cite{palieri2020locus} \cite{khattak2020complementary} when they detect degeneration of the primary engine. These approaches provide a decent trade-off between accuracy and robustness and have witnessed great success in the recent DARPA Robotics Challenge \cite{rouvcek2019darpa}. However, the other odometries in these methods have no influence on the Hessian matrix of the system, so they do not fully utilize the odometry information, and the performance may be severely compromised in the case of a poor prior.
\begin{figure}
\setlength{\belowcaptionskip}{-0.5cm}
\centering \includegraphics[width=0.45\textwidth]{fig/bigshow.pdf}
\captionsetup{font={footnotesize}}
\caption{{(a) is the mapping result of DAMS-LIO on the CERBERUS DARPA Subterranean Challenge dataset, which contains many challenging environments, as shown in (b)-(c), including darkness, long tunnels and textureless areas. We compare our mapping results with the state-of-the-art LiDAR-inertial odometry, LIO-SAM, in the upper right corner of (a).}}\label{begin}
\end{figure}
Motivated by the discussion above, we propose a degeneration-aware and modular sensor-fusion pipeline in the iterated extended Kalman filter (iEKF) framework \cite{xu2022fast}. Following the loosely-coupled model, it works as a LiDAR-inertial odometry in well-conditioned environments and performs sensor fusion with the other odometry only when degeneration is detected. The distinctive insight of our approach is that we take both LiDAR feature points and the relative pose provided by the other odometry as measurements in the subsequent iEKF update, so that the information of the other odometry can be fully utilized while relying less on it.
Theoretical analysis based on the Cramér-Rao Lower Bound (CRLB) theorem \cite{gorman1990lower} demonstrates that integrating relative poses as measurements into the update process achieves higher accuracy than using them as initial values in registration.
In summary, the contributions of this paper are listed as follows:
\begin{itemize}
\item A lightweight degeneration-aware and modular sensor-fusion LiDAR-inertial odometry system (DAMS-LIO) is proposed, which performs robust and accurate state estimation in extreme environments and offers a marked advantage in complex exploration tasks for robots with limited computing resources.
\item A novel sensor-fusion method that fully exploits the information of the LiDAR and the other odometry is proposed, which takes both LiDAR points and the relative pose from the other odometry as measurements in the update process only when degeneration is detected.
\item Theoretical analysis based on CRLB theorem is performed to quantify the performance and demonstrate the high accuracy of the proposed sensor-fusion method.
\item Extensive experiments on simulation and real-world datasets validate the robustness and accuracy of our method.
\end{itemize}
\section{Related Works}
Since autonomous robots in extreme and unknown environments (e.g. underground or planetary exploration) are subject to many challenges and limitations, individual sensor modalities might fail (e.g., due to camera blackouts or degenerate geometries for LiDAR). Hence, in recent years several efforts have been made towards multi-sensor fusion methods, which can be classified as either loosely coupled or tightly coupled.
\subsection{Tightly Coupled Method}
The tightly-coupled method typically incorporates the measurements of different sensors into the state optimization process. In the work of \cite{shan2021lvi}, LiDAR-inertial and visual-inertial systems are fused in a tightly-coupled smoothing and mapping framework, which can work independently when a failure is detected in one of them, or jointly in well-conditioned cases. \cite{zuo2019lic} proposes an MSCKF-based LIC-fusion framework, which performs state estimation using IMU measurements, sparse visual and LiDAR features, and simultaneous online spatial and temporal calibration. Although these methods show marked improvements in robustness and accuracy, they are susceptible to failure when sensors are damaged, and they are difficult to extend to other sensors.
\subsection{Loosely Coupled Method}
Zhang and Singh \cite{zhang2018laser} propose V-LOAM, which utilizes the result of a loosely-coupled visual-inertial odometry (VIO) as the prior for the initialization of the LiDAR mapping system. \cite{palieri2020locus} comes up with a multi-sensor LiDAR-centric solution, LOCUS, which adds a health monitoring module to select a near-optimal prior for the LiDAR scan-matching optimization. These loosely-coupled approaches show more robust performance and higher resilience than tightly-coupled ones. However, their final performance still relies on laser scan alignment, and thus these methods remain sensitive to the quality of the LiDAR data. Hence, to fully utilize the information of the prior pose while retaining the advantages of the loosely-coupled scheme, we come up with a degeneration-aware and LiDAR-centric sensor-fusion pipeline. It receives pose measurements only when the LiDAR-inertial odometry degenerates, striking a trade-off between robustness and accuracy.
\begin{figure}
\setlength{\belowcaptionskip}{-0.6cm}
\setlength{\abovecaptionskip}{-0.2cm}
\centering \includegraphics[width=0.38\textwidth]{fig/frame.pdf}
\captionsetup{font={footnotesize}}
\caption{{Illustration of each frames.}}\label{frame}
\end{figure}
\section{System Overview}
The definition of each frame in our system is illustrated in Fig. \ref{frame}. Following typical LIO frame definitions, $\{L\}$ and $\{I\}$ correspond to the LiDAR and IMU frames respectively, while $\{G\}$ is a local-vertical reference frame whose origin coincides with the initial IMU position. $\{O\}$ represents the other odometry frame, whose origin is denoted as $\{M\}$. $^L\bm{p}$ denotes the position of a point in the LiDAR frame.
The definitions of some important and frequently used symbols are shown in Table \ref{symbol_definition}.
\begin{table}[h]
\centering
\scriptsize
\caption{
Important Symbols Definition
}
\label{symbol_definition}
\begin{tabular}{cl} \hline \specialrule{0em}{1.5pt}{1.5pt}
\textbf{Symbols} & \textbf{Definition} \\ \midrule
$t_k,t_{k+1}$ & The sample time of $\mathit{k}$-th and its next IMU measurement\\
$\tau_i,\tau_{i+1}$ & The scan time of $\mathit{i}$-th and its next scan\\
$I_k,I_i$ & The IMU frame at time $t_k$ and $\tau_{i}$ \\
$L_k,L_{k+1}$ & The LiDAR frame at time $t_k$ and $t_{k+1}$ \\
$O_k,O_{k+1}$ & The other odometry frame at time $t_k$ and $t_{k+1}$ \\
$\mathbf{x},\hat{\mathbf{x}},\bar{\mathbf{x}}$ & The true, predicted and updated value of x \\
$\mathbf{\hat{x}}^{\kappa}$& The $\kappa$-th update of $\bm{x}$ in the iterated Kalman filter\\
$\tilde{\mathbf{x}}$ & The error between true x and its estimation value $\bar{\mathbf{x}}$\\
$^{A}_{B}q,{^{A}p_{B}}$ & \makecell{The rotation (represented by a quaternion, with corresponding \\rotation matrix $^{A}_{B}R$) and translation from frame A to B} \\
\hline \specialrule{0em}{1.5pt}{1.5pt}
\end{tabular}
\vspace{-0.8cm}
\end{table}
\subsection{State Vector} The state estimated in our system contains the current IMU state $\mathbf{x}_{I}$, the extrinsic parameters between LiDAR and IMU, $^{I}\mathbf{x}_L$, and the transformation from the other odometry frame to the IMU frame, $^{I}\mathbf{x}_O$. At time $t_k$, the state is written as:
\begin{flalign}
\mathbf{x}_k =& \ [\bm{x}_{I,k}^{\top}\quad \bm{x}_L^{\top} \quad \bm{x}_{O}^{\top}]^{\top} & \\[1.5mm] \label{state vector}
\mathbf{x}_{I,k} =&\ [^{G}_{I,k}\bar{\bm{q}}^{\top} \quad
^G\bm{p}^{\top}_{I,k} \quad
^G\bm{v}^{\top}_{I,k} \quad
\bm{b}^{\top}_{g,k} \quad
\bm{b}^{\top}_{a,k} \quad
\bm{g}_{k}\ ]^{\top}
& \\[1.5mm]
\mathbf{x}_{L} =&\ [^{I}_{L}\bar{\bm{q}}^{\top} \quad ^{I}\bm{p}^{\top}_{L}]^{\top} , \quad \mathbf{x}_{O} =\ [^{I}_{O}\bar{\bm{q}}^{\top} \quad ^{I}\bm{p}^{\top}_{O}]^{\top}
\end{flalign}
$^G\bm{v}_{I}$ is the velocity of the IMU in the global frame, $\bm{b}_{g,k}$ and $\bm{b}_{a,k}$ represent the gyroscope and accelerometer biases respectively, $\bm{g}$ is the gravity vector in frame $\{G\}$, and $\mathbf{x}_L = \{^{I}_{L}\bar{\bm{q}},\ ^{I}\bm{p}_{L}\}$ and $\mathbf{x}_O = \{^{I}_{O}\bar{\bm{q}},\ ^{I}\bm{p}_{O}\}$ denote the transformations from the LiDAR frame $\{L\}$ and the other odometry frame $\{O\}$ to the IMU frame $\{I\}$.
\subsection{IMU Propagation}
Since the IMU measurements are affected by a bias $\bm{b}$ and zero-mean Gaussian noise $\bm{n}$ \cite{yang2019degenerate}, they can be modeled as:
\begin{eqnarray}\label{IMU measurement model}
\bm{\omega}_{m}(t) &=& ^I\bm{\omega}(t) + \bm{b}_g(t) + \bm{n}_g(t) \\
\bm{a}_m(t) &=& {^{I(t)}_G\bm{R}(^G\bm{a}_I(t) + {^G\bm{g}})} + \bm{b}_a(t)+ \bm{n}_a(t)
\end{eqnarray}
where $\bm{\omega}_m(t)$ and $\bm{a}_m(t)$ are the raw measurements, $^I\bm{\omega}(t)$ is the angular velocity of the IMU in the local frame $\{I\}$, and $^G\bm{g}$ and $^G\bm{a}_I(t)$ are the gravitational acceleration and the IMU acceleration expressed in the global frame.
The IMU kinematics are the same as in \cite{li2013high}; to keep the presentation concise, we do not repeat them here.
To propagate the covariance matrix from time $t_k$ to $t_{k+1}$, we use the linearized discrete-time model, following \cite{mourikis2007multi}:
\begin{align}
\label{discrete-time-model} \tilde{\bm{x}}_{k+1} &= \bm{\Phi}_{k}\tilde{\bm{x}}_k + \bm{G}\bm{n}_k
\end{align}
where $\bm{\Phi}_{k}$ is the linearized system state transition matrix and $\bm{n}_k = [\bm{n}_{g} \quad\bm{n}_{wg}\quad\bm{n}_{a}\quad\bm{n}_{wa}]$ is the system noise. The error state is defined as $\tilde{\bm{x}} = \bm{x} - \hat{\bm{x}}$ for all variables except the quaternion, for which it is defined by the relation $q = \hat{q} \otimes \delta q$. As in \cite{li2013high}, the symbol $\otimes$ denotes quaternion multiplication, and the error quaternion is defined as $\delta q \simeq [\frac{1}{2}\bm{\delta \theta}^{\top}\quad 1]^{\top}$.
Denoting the covariance of $\bm{n}_k$ by $\bm{Q}_k$, we can propagate the state covariance from $t_k$ to $t_{k+1}$ as:
\begin{align}
\bm{P}_{k+1} = \bm{\Phi}\bm{P}_{k}\bm{\Phi}^{\top} +\bm{G}\bm{Q}_k\bm{G}^{\top}
\end{align}
where $\bm{\Phi} = {\rm diag}(\bm{\Phi}_{I}, \bm{\Phi}_{O})$ and $\bm{G} = [\bm{G}^{\top}_{I},\bm{G}^{\top}_{O}]^{\top}$. $\bm{\Phi}_{I}$ and $\bm{G}_{I}$ indicate the parts related to all variables other than the extrinsic between the other odometry and the IMU, and are the same as defined in \cite{xu2022fast}. Moreover, $\bm{\Phi}_{O} = \bm{I}_{6\times6}$ and $\bm{G}_{O} = \bm{0}_{6\times12}$.
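As an illustration, one propagation step can be written compactly as follows (a minimal sketch with schematic matrix dimensions; the function name and block assembly are ours and do not correspond to the actual implementation):
\begin{verbatim}
import numpy as np

def propagate_covariance(P, Phi_I, Q, G_I):
    # One step of P <- Phi P Phi^T + G Q G^T, with the
    # odometry-extrinsic blocks Phi_O = I_6 and G_O = 0_{6x12}.
    n = Phi_I.shape[0]            # error-state dim. without ^I x_O
    Phi = np.eye(n + 6)
    Phi[:n, :n] = Phi_I           # Phi = diag(Phi_I, I_6)
    G = np.vstack([G_I, np.zeros((6, G_I.shape[1]))])
    return Phi @ P @ Phi.T + G @ Q @ G.T
\end{verbatim}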
\begin{figure}
\setlength{\belowcaptionskip}{-0.6cm}
\setlength{\abovecaptionskip}{-0.3cm}
\centering \includegraphics[width=0.43\textwidth]{fig/system_overview.pdf}
\captionsetup{font={footnotesize}}
\caption{{Framework of the proposed DAMS-LIO.}}\label{framework}
\end{figure}
\subsection{Measurement Model}
1) \textbf{LiDAR measurement}:
We model the LiDAR measurements as in \cite{xu2022fast}. After motion compensation of the scan sampled at time $\tau_i$, we denote the $\mathit{j}$-th point in the local LiDAR frame by $^{L}\bm{p}_j$. Through backward propagation, the point can be converted to a scan-end measurement corresponding to the IMU measurement at $t_k$. Meanwhile, since each point should lie on a small plane patch in the map, we get
\begin{small}
\begin{align} \label{LiDAR model}
\bm{0} = {^{G}}\bm{u}_j^{\top}(^{G}_{I_k}\bm{R} ({^{I}_L\bm{R}}(^{L}\bm{p}_j + {^{L}\bm{n}_j})+{^{I}\bm{p}_L})+{^{G}\bm{p}_{I_k}} -{^{G}\bm{q}_j} )
\end{align}
\end{small}
where $^{G}\bm{q}_j$ is a point on the small plane and $^{G}\bm{u}_j$ is the normal of the plane. $^{I}\bm{R}_L$ and $^{G}\bm{R}_{I_k}$ are the rotation matrices corresponding to $^{I}_{L}\bm{q}$ and $^{G}_{I_k}\bm{q}$. $^{L}\bm{n}_j$ is the ranging and beam-directing noise of the point $^{L}\bm{p}_j$. (\ref{LiDAR model}) can also be summarized in a more compact form as
\begin{equation}\label{LiDAR h}
{\bm{z}_l} = \bm{0} = {\bm{h}_l}(\bm{x}_k,{^{L}\bm{p}_j + {^{L}\bm{n}_j}})
\end{equation}
To linearize the measurement model for the update, we approximate it by its first-order expansion at $\hat{\bm{x}}_k$:
\begin{equation} \label{LiDAR r}
\bm{r}_l = \bm{0} - {\bm{h}_l}(\bm{x}_k,{^{L}\bm{p}_j}) \simeq H_L {\bm{\tilde{x}_k}} + {\bm{v}_l}
\end{equation}
where ${\bm{v}_l} \sim \mathit{N}(\bm{0},{\bm{R}_l})$ is Gaussian noise arising from the raw measurement noise ${^{L}\bm{n}_j}$. $H_L$ is the Jacobian matrix of the residual $\bm{r}_l$ with respect to the error state ${\bm{\tilde{x}_k}}$, which is given as
\begin{align}
H_L &= {^{G}\bm{u}_j^{\top}}\begin{bmatrix}H_{L1}
\ \bm{I}_3\ \bm{0}_{3\times12}& H_{L2}\quad ^G\hat{\bm{R}}_{\bm{I}_k}\quad\bm{0}_{3\times6}
\end{bmatrix} \label{LiDAR jacobian1}\\
H_{L1} &= {^G\hat{\bm{R}}_{\bm{I}_k}}\lfloor {^L\bm{p}_j+{^I\hat{\bm{p}}_L}}\times\rfloor, H_{L2} = {^G\hat{\bm{R}}_{\bm{I}_k}}\lfloor{^L\bm{p}_j}\times\rfloor \label{LiDAR jacobian2}
\end{align}
2) \textbf{The other odometry measurement}:
Since the odometries publish at different frequencies, we perform linear interpolation to obtain the other odometry's poses at the times of the estimated states $t_{k-1}$ and $t_{k}$, and denote the corresponding frames as $O_{k-1}$ and $O_k$. We define the transformation matrix $^{A}T_B$ from A to B as $^{A}T_B = [ ^{A}\bm{R}_{B} , ^{A}\bm{p}_{B} ;
\bm{0}_{1\times3} , 1 ]$. Then, according to Fig. \ref{frame}, we have the transformation relationship
\begin{align}\label{transformation definition}
^{O_{k-1}}T_{O_{k}} = (^{G}T_{I_{k-1}} {^{I}T_{O}})^{-1}(^{G}T_{I_{k}} {^{I}T_{O}})
\end{align}
where $^{O_{k-1}}T_{O_{k}} = {^{G}T_{O_{k-1}}^{-1}}{^{G}T_{O_{k}}}$ is the relative pose measurement calculated by the other odometry. Hence, based on (\ref{transformation definition}), we can easily obtain the measurement model $\bm{z}_O= \begin{bmatrix} \bm{z}_r \\ \bm{z}_p\\\end{bmatrix}$ as
\begin{small}
\begin{align}
\bm{z}_r &= ({^{G}\bm{R}_{I_{k-1}}}{^{I}\bm{R}_O})^{\top}{{^{G}\bm{R}_{I_{k}}}{^{I}\bm{R}_O}} \label{odometry_meas1}\\
\begin{split}
\label{odomety_meas2}
\bm{z}_p &= ({^{G}\bm{R}_{I_{k-1}}}{^{I}\bm{R}_O})^{\top}(({^{G}\bm{R}_{I_{k}}} - {^{G}\bm{R}_{I_{k-1}}}){^{I}\bm{p}_O} +{^{G}\bm{p}_{I_{k}}} - {^{G}\bm{p}_{I_{k-1}}})
\end{split}
\end{align}
\end{small}
We assume the state at $t_{k-1}$ is known; thus, similarly to (\ref{LiDAR h}) and (\ref{LiDAR r}), we have the residual of the odometry measurement as $\bm{r}_O = \begin{bmatrix}\bm{r}_r\\ \bm{r}_p\end{bmatrix} \simeq \begin{bmatrix} H_{O_r} {\bm{\tilde{x}_k}} + {\bm{v}_r}\\H_{O_p} {\bm{\tilde{x}_k}} + {\bm{v}_p} \end{bmatrix}$, where $\bm{r}_r$\footnote{For rotations, the minus operation is defined in the Lie group sense.} and $\bm{r}_p$ are the rotation and translation errors calculated as in (\ref{LiDAR r}), and ${\bm{v}_r} \sim \mathit{N}(\bm{0},{\bm{R}_r})$ and ${\bm{v}_p} \sim \mathit{N}(\bm{0},{\bm{R}_p})$ are the corresponding Gaussian noises.
Thus, based on (\ref{odometry_meas1}) and (\ref{odomety_meas2}), the Jacobian matrices of the rotation and translation residuals with respect to the error state are
\begin{align}
H_{O_r} &=\begin{bmatrix} ^I\hat{\bm{R}}_O^{\top} & \bm{0}_{3\times3}& \bm{0}_{3\times18}& H_{O_{r1}} & \bm{0}_{3\times3}
\end{bmatrix} \\
H_{O_p} &=\begin{bmatrix} H_{O_{p1}} &
H_{O_{p2}}
& \bm{0}_{3\times18}&
H_{O_{p3}}
& H_{O_{p4}}
\end{bmatrix}
\end{align}
where
\begin{align}
H_{O_{r1}} &=\bm{I}_3 - {{^I}\hat{\bm{R}}{^{\top}}_O} {^G}\hat{\bm{R}}{^{\top}}_{I_k} {^G}\hat{\bm{R}}_{I_{k-1}} {^I}\hat{\bm{R}}_O \\
H_{O_{p1}} &=-^I\hat{\bm{R}}{^{\top}}{_O} {^G}\hat{\bm{R}}{^{\top}}{_{I_{k-1}}} {^G}\hat{\bm{R}}_{I_k}\lfloor{^I}\hat{\bm{p}}_O\times\rfloor \\
H_{O_{p2}} &= ({^G}\hat{\bm{R}}_{I_{k-1}}{^I\hat{\bm{R}}{_O}})^{\top}\\
H_{O_{p3}} &=\lfloor {^I}\hat{\bm{R}}^{\top}_O {^G}\hat{\bm{R}}{^\top}{_{I_{k-1}}} \hat{\bm{P}}_1 \times\rfloor
\end{align}
\begin{align}
\hat{\bm{P}}_1 &=({^G}\hat{\bm{R}}_{I_k}-{^G}\hat{\bm{R}}_{I_{k-1}}) {^I\hat{\bm{p}}_O} + {^G\hat{\bm{p}}_{I_k}} -{^G\hat{\bm{p}}_{I_{k-1}}}\\
H_{O_{p4}} &=({^G}\hat{\bm{R}}{_{I_{k-1}}}{^I\hat{\bm{R}}{_O}})^{\top} ({^G}\hat{\bm{R}}{_{I_k}} - {^G}\hat{\bm{R}}_{I_{k-1}} )
\end{align}
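For concreteness, the relative-pose measurement (\ref{transformation definition}) can be assembled from two interpolated odometry poses as follows (a hypothetical sketch using $4\times4$ homogeneous matrices; the function name is ours, and noise handling is omitted):
\begin{verbatim}
import numpy as np

def relative_pose_measurement(T_G_Okm1, T_G_Ok):
    # ^{O_{k-1}}T_{O_k} = (^G T_{O_{k-1}})^{-1} ^G T_{O_k},
    # split into rotation and translation measurements.
    T = np.linalg.inv(T_G_Okm1) @ T_G_Ok
    z_r = T[:3, :3]   # rotation part, cf. z_r
    z_p = T[:3, 3]    # translation part, cf. z_p
    return z_r, z_p
\end{verbatim}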
\subsection{Degeneration-Aware Update}
Following \cite{xu2022fast}, we have the following error state:
\begin{equation} \label{prior}
\bm{x}_k \boxminus \hat{\bm{x}}{_k} = ({\hat{\bm{x}}^{\kappa}}_k \boxplus {\tilde{\bm{x}}^{\kappa}}_k )\boxminus {\hat{\bm{x}}_k} = {\hat{\bm{x}}^{\kappa}}\boxminus {\hat{\bm{x}}_k} + {\bm{M}^{\kappa}} {\tilde{\bm{x}}^{\kappa}}_k
\end{equation}
where $\boxplus/\boxminus$ denote the plus and minus operators on the Lie group \cite{sola2018micro}. ${\bm{M}^{\kappa}}$ is the partial derivative of $({\hat{\bm{x}}^{\kappa}}_k \boxplus {\tilde{\bm{x}}^{\kappa}}_k )\boxminus {\hat{\bm{x}}_k}$ with respect to ${\tilde{\bm{x}}^{\kappa}}_k$ evaluated at zero:
\begin{equation}
\begin{split}
{\bm{M}^{\kappa}} = {\rm diag}(\bm{A}({\delta{^G{\bm{\theta}}_{I_k}}})^{-\top},\bm{I}_{15\times15},\bm{A}({\delta{^I{\bm{\theta}}_{L_k}}})^{-{\top}}, \\
\bm{I}_{3\times3},\bm{A}({\delta{^I{\bm{\theta}}_{O_k}}})^{-{\top}},\bm{I}_{3\times3})
\end{split}
\end{equation}
where $\bm{A}(\cdot)$ is defined in \cite{he2021kalman} and ${\delta{^X{\bm{\theta}}_{Y}}} = {^X{\hat{\bm{R}}^{\kappa}}_Y}\boxminus{^X{\bm{\hat{R}}}_Y}$.
Substituting (\ref{prior}) into the first-order approximation of the measurement models above, the problem can be summarized as:
\begin{equation}
\underset{{\tilde{\bm{x}}^{\kappa}}_k}{{\rm min}}(\parallel {\hat{\bm{x}}^{\kappa}}\boxminus {\hat{\bm{x}}_k}\parallel^2_{\hat{\bm{P}}_k^{-1}} + \sum\parallel \bm{z}^{\kappa} + {H^{\kappa}}{{\tilde{\bm{x}}^{\kappa}}_k} \parallel^2_{\bm{R}^{-1}})
\end{equation}
To prevent the noise of the pose measurements from degrading the LiDAR odometry in well-conditioned cases, and to robustify the odometry in degenerate conditions, we exploit an agile update scheme.
Theoretically, the eigenvalues corresponding to the degenerate dimensions are exactly zero, while in practice they are typically small values due to data noise and limited computational accuracy. Hence, we first calculate the eigenvalues $\{\lambda_i\}$ of the Hessian matrix $\bm{H}^{\top}_L\bm{H}_L$ and adopt the heuristic method in \cite{ding2021degeneration} to assess the quality of the geometric features of the scene. If an eigenvalue is smaller than a threshold, we can infer the existence of degeneration.
If the LIO is well-conditioned and we have $m$ LiDAR measurements, then ${{\bm{z}^\kappa}} = \bm{z}^{\kappa}_l=[{{\bm{z}^{\kappa}_{l1}}},\dots,{{\bm{z}^{\kappa}_{lm}}}]^{\top}$, $H ={H^{\kappa}_L}=[H^{\kappa}_{l1},\dots, H^{\kappa}_{lm}]^{\top}$ and $\bm{R}=\bm{R}_L={\rm diag}(\bm{R}_{l1},\dots,\bm{R}_{lm})$. Otherwise, if degeneration is detected, ${{\bm{z}^\kappa}} = [\bm{z}^{\kappa}_l,{\bm{z}^{\kappa}_r} ,{{\bm{z}^{\kappa}_p}}]^\top$, $H = [{H^{\kappa}_L} ,{H^{\kappa}_{O_r}} ,{H^{\kappa}_{O_p}} ]^\top$ and $\bm{R} ={\rm diag}(\bm{R}_L,\bm{R_{O_r}},\bm{R_{O_p}})$.
Then we can perform the iterated Kalman filter update in the same way as in \cite{xu2022fast}.
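A minimal sketch of this degeneration-aware switch is given below (the threshold value, function names, and block layout are illustrative assumptions, not our exact implementation):
\begin{verbatim}
import numpy as np

def is_degenerate(H_L, eig_threshold):
    # Degeneration test on the spectrum of H_L^T H_L.
    lam = np.linalg.eigvalsh(H_L.T @ H_L)
    return lam.min() < eig_threshold

def stack_measurements(r_l, H_L, R_L, r_o=None, H_O=None, R_O=None):
    # LiDAR-only residuals in the well-conditioned case;
    # LiDAR + odometry-pose residuals when degenerate.
    if r_o is None:
        return r_l, H_L, R_L
    m, k = R_L.shape[0], R_O.shape[0]
    R = np.block([[R_L, np.zeros((m, k))],
                  [np.zeros((k, m)), R_O]])
    return np.concatenate([r_l, r_o]), np.vstack([H_L, H_O]), R
\end{verbatim}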
\section{Cramér–Rao Lower Bound Theorem}
In this section, we furnish the reader with further insight into the proposed fusion method and compare its accuracy with that of the purely LiDAR-based method using the Cramér-Rao Lower Bound (CRLB). The CRLB is a lower bound on the variance of an estimator, which is often used to evaluate the performance of data fusion methods \cite{domhof2017multi} \cite{blanc2007data}. Following \cite{kowalski2019crlb}, the CRLB is calculated by taking the inverse of the Fisher information matrix:
\begin{equation} \label{crlb_formation}
CRLB = {\mathit{\bm{J}}^{-1}} = ({\bm{H}^{\top}}{\bm{R}^{-1}}{\bm{H}})^{-1}
\end{equation}
where $\bm{H}$ and $\bm{R}$ correspond to the Jacobian and covariance matrices of the measurements in Section \uppercase\expandafter{\romannumeral3}. The smaller the CRLB, the higher the accuracy the system can achieve in theory.
For the needs of the analysis, the following lemmas \cite{horn2012matrix} are required.
\textit{Lemma 1}: The inverse of a positive definite matrix is also positive definite.
\textit{Lemma 2}: A real symmetric matrix $A$ is positive definite if there exists a real nonsingular matrix $B$ such that $A=BB^{\top}$.
\textit{Lemma 3}: Let $A$ be a positive definite matrix and $B$ an $m\times n$ real matrix. Then ${B^{\top}AB}$ is positive definite if $B$ has full column rank, $\mathit{r}(B)=n$.
Based on the lemmas above, we now calculate and compare the CRLBs of the different methods.
1) \textbf{Purely LiDAR-based method}:
Since the purely LiDAR-based method does not use odometry pose measurements, the extrinsic parameters related to the odometry are removed from the state vector. Moreover, for simplicity of calculation, we ignore the variables whose corresponding blocks in $H$ are zero. Thus, based on (\ref{LiDAR jacobian1}) and (\ref{LiDAR jacobian2}), the measurement Jacobian matrix of the purely LiDAR-based method is rewritten as
\begin{small}
\begin{equation} \label{purely-LiDARH}
\bm{H}_{li} = {^{G}\bm{u}_j^{\top}}\begin{bmatrix}\bm{H}_{L1}
\; \bm{I}_3 \; \bm{H}_{L2} \; ^G_{\bm{I}_k}\hat{\bm{R}} \end{bmatrix} =\begin{bmatrix}\bm{H}_{pose}\quad\bm{H}_{extrinsic}\end{bmatrix}
\end{equation}
\end{small}
We substitute (\ref{purely-LiDARH}) into (\ref{crlb_formation}) and write the result in block form:
\begin{equation}\label{purely-LiDARJ}
\mathit{\mathbf{J}}_{li} = \begin{bmatrix}
\bm{U} & \bm{B} \\ \bm{B}^{\top} & \bm{C}\\
\end{bmatrix}
\end{equation}
Since the inverse covariance matrix of the LiDAR noise is real symmetric and positive definite, and the stacked $H_{li}$ has full column rank, we can conclude from Lemma 3 and Lemma 1 that $\mathit{\mathbf{J}}_{li}$ and $\mathit{\mathbf{J}}{_{li}}^{-1}$ are positive definite, denoted $\mathit{\mathbf{J}}{_{li}}^{-1} > 0$. Then, based on the inversion formula for block matrices, we can easily obtain the part corresponding to the estimated pose as
\begin{small}
\begin{equation} \label{crlb1}
{\rm CRLB}_{li} = ({\bm{U} - \bm{B}{\bm{C}^{-1}}\bm{B}^{\top}})^{-1}
\end{equation}
\end{small}
\begin{table*}\small
\caption{Accuracy comparison of each method in various challenging environments.}\label{ate compare}
\centering
\begin{tabular}{lcccccccc:cccccc}
\hline\hline
Dataset & \multicolumn{8}{c}{CERBERUS} & \multicolumn{6}{c}{M2DGR} \\
& \multicolumn{2}{c}{anymal1} & \multicolumn{2}{c}{anymal2} & \multicolumn{2}{c}{anymal3} & \multicolumn{2}{c}{anymal4} & \multicolumn{2}{c}{gate01} & \multicolumn{2}{c}{gate03} & \multicolumn{2}{c}{street03} \\
\hline
& max & mean & max & mean & max & mean & max & mean & max & mean & max & mean & max & mean \\
\hline
LOCUS & 1.61 & 0.49 & -\tnote{1} & - & 1.22 & 0.26 & 2.69 & 0.67 & 13.09 & 6.28 & 20.31 & 6.56 & - & - \\
LVI-SAM & 8.18 & 1.11 & 1.86 & 0.44 & 2.87 & 0.45& 1.20& 0.36 & 10.01 & 4.28 & 22.93 & 9.06 & 19.14 & 9.67 \\
LIO-SAM & 23.9 & 10.3 & - & - & - & - & 23.1 & 8.61 &13.27 & 6.68 & 12.27 & 3.30 & 12.86 & 7.52 \\
VINS-MONO & 5.93 & 2.34 & 11.20 & 1.55 &4.40 & 1.48 &2.00 & 0.72 & 6.92 & 3.75 & 12.75 & 8.21 &14.46 & 4.61 \\
DAMS-LIO & \textbf{0.67} &\textbf{0.20} & \textbf{0.35} & \textbf{0.14} & \textbf{0.73} & \textbf{0.17} & \textbf{1.09} & \textbf{0.27} & \textbf{2.73} & \textbf{0.94} & \textbf{3.38} & \textbf{2.24} & \textbf{1.69} & \textbf{0.47} \\
\hline\hline
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[1] "-" means that the method fails. The units for all values are meters.
\end{tablenotes}
\vspace{-0.5cm}
\end{table*}
2) \textbf{Pose-fusion method}:
Following the same procedure as above, the measurement Jacobian matrix is
\begin{small}
\begin{equation} \label{pfH}
\bm{H}_{pf} = \begin{bmatrix}\bm{H}_{pose}&\bm{H}_{extrinsic}&\bm{0}_{3\times6}\\\bm{H}_{pose1}&\bm{0}_{3\times6}&\bm{H}_{extrinsic1}
\\\bm{H}_{pose2}&\bm{0}_{3\times6}&\bm{H}_{extrinsic2}
\end{bmatrix}
\end{equation}
\end{small}
Then we substitute (\ref{pfH}) into (\ref{crlb_formation}) and write it in the same block form as (\ref{purely-LiDARJ}):
\begin{small}
\begin{equation}
\mathit{\mathbf{J}}_{pf} = \begin{bmatrix}
\bm{U}+\bm{F} & \bm{B}& \bm{D} \\ \bm{B}^{\top} & \bm{C} &\bm{0}\\ \bm{D}^{\top} &\bm{0}& \bm{E}
\end{bmatrix}
\end{equation}
\end{small}
Then, we marginalize the odometry-extrinsic-related variables to get the marginalized Fisher information matrix ${\mathit{\mathbf{J}}_{pfmar}}$:
\begin{equation}
{\mathit{\mathbf{J}}_{pfmar}} = \begin{bmatrix}
\bm{U}+\bm{F}-\bm{D}{\bm{E}^{-1}}\bm{D}^{\top} & \bm{B} \\ \bm{B}^{\top} & \bm{C}
\end{bmatrix}
\end{equation}
Following (\ref{crlb1}), the corresponding CRLB for the estimated pose is
\begin{equation}
{\rm CRLB}_{pf} = ({\bm{U} - \bm{B}{\bm{C}^{-1}}\bm{B}^{\top} + \bm{F} - \bm{D}{\bm{E}^{-1}}\bm{D}^{\top}})^{-1}
\end{equation}
3) \textbf{Comparison}:
Consider a case in which only the odometry pose measurement is used (i.e., there are no extrinsic variables between LiDAR and IMU in the state vector); the Fisher information matrix is
\begin{equation}
{\mathit{\mathbf{J}}_{op}} = \begin{bmatrix}
\bm{F} & \bm{D} \\ \bm{D}^{\top} & \bm{E}\\
\end{bmatrix}
\end{equation}
Under this condition, the Schur complement $\begin{small}({\bm{F} - \bm{D}{\bm{E}^{-1}}\bm{D}^{\top}})\end{small}$ is the marginal information matrix of the estimated pose and is therefore positive definite; by Lemma 1, its inverse (the corresponding CRLB) is positive definite as well, i.e., $\!({\bm{F} - \bm{D}{\bm{E}^{-1}}\bm{D}^{\top}}) > 0\!$. According to \cite{fu2021high}, we have
\begin{small}
\begin{align}
\hspace{-2mm}
\!{({\bm{U}-\bm{B}{\bm{C}^{-1}}\bm{B}^{\top} + \bm{F} - \bm{D}{\bm{E}^{-1}}\bm{D}^{\top}})^{-1}} < (\bm{U} - \bm{B}{\bm{C}^{-1}}\bm{B}^{\top} )^{-1} \!
\end{align}
\end{small}
Then, it can be verified that ${\rm CRLB}_{pf} < {\rm CRLB}_{li}$. Namely, the method that fuses pose measurements and LiDAR points has a smaller lower bound on the covariance of the pose estimate than the method that uses LiDAR points alone. Applied to our scenario, this shows that the proposed sensor-fusion method makes fuller use of the other odometry's measurements than methods using the odometry information only as an initial value, and therefore achieves better accuracy in pose estimation.
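This ordering can also be sanity-checked numerically (a self-contained sketch with random full-rank Jacobians; all names are ours, $\bm{R}=\bm{I}$, and the odometry-extrinsic columns are omitted for simplicity, so only the mechanism that added Fisher information shrinks the marginal CRLB is illustrated):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_pose, n_ext = 6, 6
H_li = rng.standard_normal((40, n_pose + n_ext))  # LiDAR rows
H_od = rng.standard_normal((6, n_pose + n_ext))   # odometry rows
J_li = H_li.T @ H_li             # Fisher information, R = I
J_pf = J_li + H_od.T @ H_od      # pose fusion adds information

def pose_crlb(J):
    # CRLB of the pose block, i.e. the marginal covariance bound.
    return np.linalg.inv(J)[:n_pose, :n_pose]

d = pose_crlb(J_li) - pose_crlb(J_pf)
print(np.all(np.linalg.eigvalsh(d) >= -1e-12))    # True
\end{verbatim}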
\begin{figure}
\setlength{\belowcaptionskip}{-0.6cm}
\setlength{\abovecaptionskip}{-0.1cm}
\centering \includegraphics[width=0.42\textwidth]{fig/sim_modify.pdf}
\captionsetup{font={footnotesize}}
\caption{{Simulation validation result. The value of the horizontal coordinate represents the standard deviation of the noise. W and W/O pose mean with and without pose measurement.}}\label{simulation}
\end{figure}
\section{Experiments}
In this section, we fully evaluate the proposed DAMS-LIO method with both simulation and real-world experiments. The simulation experiments are mainly used to verify the advantages of the proposed fusion method. We then compare the accuracy and robustness of our method against other state-of-the-art methods on public datasets and demonstrate its computational efficiency.
\subsection{Simulation Validations}
The main purpose of the simulation experiment, which is built with the Gazebo simulator, is to validate the theoretical analysis in Section \uppercase\expandafter{\romannumeral4}. We construct a long corridor to simulate a degenerate environment and collect data with a TurtleBot3 mobile robot equipped with a Velodyne VLP-16 LiDAR and a wheel encoder. As in the OpenVINS simulator \cite{geneva2020openvins}, we obtain the simulated IMU data by interpolating the ground-truth robot trajectory.
\begin{figure}[htp]
\centering
\setlength{\belowcaptionskip}{-0.6cm}
\captionsetup{font={footnotesize}}
\subfigcapskip=-5pt
\subfigure[CERBERUS dataset]{
\label{CERBERUS dataset}
\includegraphics[width=0.4\textwidth]{fig/CERBERUS.pdf}}\vspace{-0.3cm}
\\
\subfigure[M2DGR dataset]{
\label{M2DGR dataset}
\includegraphics[width=0.4\textwidth]{fig/M2DGR.pdf}}\vspace{-0.3cm}
\caption{Illustration of the real-world datasets used in the experiments. (a) shows complex underground scenes in the CERBERUS dataset, including dusty and long tunnel-like degenerate scenes. (b) shows challenging urban scenes in M2DGR, which contain poorly illuminated conditions and image blur caused by sharp turns.}
\label{real-dataset}
\end{figure}
Gaussian noise $\mathit{N}(0,{\sigma}^{2})$ is added to the LiDAR point data. The standard deviation ${\sigma}$ varies from 3 to 9 cm with an interval of 2 cm. We run our method, with and without pose measurements, 20 times each to obtain the absolute trajectory error (ATE), and evaluate the mean and dispersion of the errors in the form of box plots. As shown in Fig. \ref{simulation}, as the noise level increases, the mean and dispersion of the ATE obtained by the method with pose observations are smaller than those without pose measurements. This validates that fusing the odometry pose and the LiDAR point observations simultaneously achieves higher accuracy and a lower covariance bound in pose estimation.
\begin{figure*}[htp]
\setlength{\belowcaptionskip}{-0.4cm}
\centering \includegraphics[width=0.94\textwidth]{fig/map/robust_new.pdf}
\captionsetup{font={footnotesize}}
\caption{Map and trajectory comparison of different sensor-fusion methods in the urban data sequence. Red dashed circles indicate limited LiDAR sensing distance and green dashed circles indicate dropped LiDAR measurements, which severely degrade the performance of LOCUS and LVI-SAM at the end.}
\label{robustness}
\end{figure*}
\subsection{Real-world Experiments}
To further validate the practical performance of our method, we compare it with current state-of-the-art state estimation systems on publicly available and challenging datasets.
To ensure that the various methods can operate, we choose the CERBERUS DARPA Subterranean Challenge dataset \cite{tranzatto2022cerberus} and the M2DGR dataset\footnote{Since these scenarios do not exhibit degeneration of the LIO, we select some periods in each scenario and limit the range of the LiDAR measurements to produce degeneration.} \cite{yin2021m2dgr}, which provide a rich set of sensor types, as shown in Fig. \ref{real-dataset}.
1) \textbf{Accuracy}:
We first use these challenging datasets to evaluate the accuracy of state estimation. We compare our method with a vision-based method (VINS-MONO \cite{qin2018vins}), a LiDAR-based method (LIO-SAM \cite{shan2020lio}), and sensor-fusion methods (LVI-SAM \cite{shan2021lvi}, LOCUS \cite{palieri2020locus}). We use the trajectory output by VINS-MONO as the odometry input for LOCUS and for our method. The EVO package \cite{grupp2017evo} is adopted to calculate the translational part of the ATE against the ground truth. The maximum and mean ATEs are presented in Table \ref{ate compare}. The table shows that our method has a lower trajectory error and a smaller error fluctuation range in each scenario. It can also be seen that LIO-SAM, which does not utilize odometry information, is more likely to fail in the CERBERUS dataset, where a large number of degenerate scenarios exist. In addition, when there is a large error in the input odometry (i.e., the max error of VINS-MONO is large), methods such as LOCUS that use the odometry as the initial value for matching are more likely to fail, while the proposed method still maintains good accuracy.
2) \textbf{Robustness}:
Our key insight is that our method can maintain accurate estimation by fusing LiDAR and pose measurements even with poor observations from the other odometry and the LiDAR, which highlights the robustness of the system. Therefore, we select gate03 in the M2DGR dataset and manually add some difficult scenarios, including restricting the maximum LiDAR sensing distance to 5 m in the 15-20 s and 165-180 s time periods and dropping the LiDAR data at 100-105 s (the parts in the red and green dashed circles in Fig. \ref{robustness}).
The comparison of the trajectory accuracy and map-building results of the three different methods is shown in Fig. \ref{robustness}.
It can be seen that, compared to LVI-SAM and LOCUS, our method is less sensitive to sensor data drops and poor sensor data. In addition, compared to LOCUS, considering both the odometry pose and the LiDAR point observations in the update process allows us to guarantee the accuracy of the estimation despite poor LiDAR data (limited measurement range) and a poor odometry pose input (the result of VINS-MONO is poor at night), while methods like LOCUS, which only use the odometry as an initial value, produce large errors that affect the map-building results.
\begin{figure}
\setlength{\belowcaptionskip}{-0.7cm}
\centering \includegraphics[width=0.34\textwidth]{fig/efficiency.pdf}
\captionsetup{font={footnotesize}}
\caption{{Comparison of the running time of each system.}}\label{running time}
\end{figure}
3) \textbf{Efficiency}:
We compare the computation time taken by the different methods to process the LiDAR data, from the reception of a LiDAR scan until the pose is estimated. All efficiency tests are performed on a laptop (Intel i5-9300 @ 2.4 GHz $\times$ 8) running Ubuntu 16.04 LTS. As shown in Fig. \ref{running time}, our method consistently maintains a lower runtime than the other methods throughout operation.
Since the LiDAR data is published at 10 Hz and the average processing time of our method in Fig. \ref{running time} is mostly below 0.1 s, our system achieves real-time operation.
\section{Conclusions}
Multi-sensor fusion is a promising solution to the problem of state estimation in perceptually-challenging environments, but designing a fusion scheme that balances robustness and accuracy is difficult.
To handle this problem, we propose DAMS-LIO, a degeneration-aware and modular sensor-fusion LiDAR-inertial odometry system that incorporates both LiDAR and odometry pose measurements when degeneration is detected in the LIO system. A CRLB-based theoretical analysis and extensive experiments demonstrate that our method achieves high accuracy, robustness, and efficiency.
In future work, we intend to evaluate the observability information obtained from the odometry and retain only the most informative segments to further improve accuracy and robustness.
\bibliographystyle{IEEEtran}
\section{Introduction}
In this paper we are going to study the zero temperature dynamics of the one
dimensional spin-$\frac{1}{2}$ antiferromagnetic Heisenberg model
\begin{equation}\label{hop}
H \equiv 2 \sum_{x=1}^N \vec{S}(x) \vec{S}(x+1) - 2 B \sum_{x=1}^N S_3 (x),
\end{equation}
in the presence of a uniform external field $B$. The quantities of interest are
the dynamical structure factors at fixed magnetization $M \equiv S/N$:
\begin{eqnarray}\label{dsf}
S_a(\omega,p,M,N) = \sum_{n} \delta\biglb( \omega-(E_n-E_s)\bigrb)
|\langle n|S_a(p)|s\rangle|^2, \quad a=3,+,-.
\end{eqnarray}
They are defined by the transition probabilities $|\langle n|S_a(p)|s\rangle|^2$
from the ground states $|s\rangle\equiv|S,S_3=S\rangle$ in the subspaces with
total spin $S$ and energy $E_s$ to the excited states $|n\rangle$ with energy
$E_n$. The transition operators we are concerned with, are the Fourier
transforms of the single-site spin operators $S_a(x)$:
\begin{equation}\label{fts}
S_a(p) \equiv \frac{1}{\sqrt N}\sum_{x=1}^{N} e^{i p x} S_a(x), \quad a=3,+,-.
\end{equation}
The structure factors (\ref{dsf}) have been investigated before by M\"uller
{\it et al.}\cite{bib1} They performed a complete diagonalization of the
Hamiltonian (\ref{hop}) on small systems ($N\leq 10$) and analysed the
spinwave continua by approximately solving the Bethe {\it ansatz} equations
for the low lying excitations. In particular, they found a lower bound
\begin{equation}\label{lbw}
\omega \ge |\omega_3(p,M)|,
\end{equation}
\begin{equation}\label{lw3}
\omega_3(p,M) = 2 D \sin \frac{p}{2} \sin \frac{p-p_3(M)}{2},
\end{equation}
for the excitations contributing to the longitudinal structure factor
$S_3(\omega,p,M)$. The constant $D$ on the right hand side of (\ref{lw3}) is
fixed by the magnetization curve:\cite{bib2}
\begin{equation}\label{mc}
B(M)=2 D \sin \pi M.
\end{equation}
The lower bound vanishes at $p=0$ and at the field dependent momentum
\begin{equation}\label{fdm}
p_3(M) = \pi (1-2M),
\end{equation}
signalling the emergence of zero frequency modes ('soft modes') in the spectrum
of excitation energies. The analysis of the spinwave continua relevant for the
transverse structure factors $S_{\pm}(\omega,p,M)$ leads to the
following approximate lower
bounds
\begin{equation}\label{flb}
\omega \ge \omega_{\pm}(p,M),
\end{equation}
for the excitations produced by the raising and lowering operators $S_{+}(p),
S_{-}(p)$, respectively:
\begin{eqnarray}\label{rlor}
\omega_{+}(p,M) = 2 D \left[\sin \frac{p}{2} \cos \left(\frac{p}{2}-\pi
M\right) - \sin \pi M\right] \quad \mbox{ for } \quad p_1(M) \le p \le
\pi,
\end{eqnarray}
and
\begin{equation}\label{alor}
\omega_{-}(p,M) = |\omega_3(\pi-p,M)|
\quad \mbox{ for } \quad 0 \le p \le \pi.
\end{equation}
Both bounds vanish at $p=\pi$ and at $p=p_1(M)=2 \pi M$. The softmodes at the
field dependent momenta $p_j(M), j=1,3$ produce characteristic structures in
the momentum dependence of the corresponding static structure
factors.\cite{bib3,bib4} It is the purpose of this paper to analyse
the singularities in the static
structure factors and the infrared singularities in the dynamical structure
factors (\ref{dsf}) at the softmode momenta. In Sec. II we review our method to
compute the excitation energies and transition probabilities for finite rings
($N \le 36$). The finite-size dependence of the lowest excitation energy at the
soft mode momenta is analysed by
solving the Bethe {\it ansatz} equations on
large systems ($N \le 2048 $).
The critical
behavior of the static structure factors at the softmode momenta
$p=p_a(M),\;a=1,3$ and fixed magnetization $M=1/4$ is investigated in Sec. III
based on a numerical computation of the ground state on rings with
$N=12,16,...,32,36$ sites. In Sec. IV, we demonstrate how infrared
singularities emerge in a finite-size scaling analysis of the dynamical
structure factors in the euclidean time representation. Finally, in
Sec. V we compare our numerical results with the predictions of
conformal field theory.
\section{Softmodes in the excitation spectrum.}
An approximate scheme to determine low lying excitation energies and transition
probabilities has been proposed in Ref.[\onlinecite{bib5}]. It starts from the
recursion algorithm,\cite{bib6} which generates a tridiagonal
matrix. Eigenvalues and eigenvectors of this matrix yield the exact excitation
energies and transition probabilities. There are, however, two sources of
numerical error in this scheme. The orthogonality of the states produced by
the recursion algorithm is lost more and more with an increasing number of
steps, due to rounding errors. Moreover, the iteration has to be truncated
before the Hilbert space is exhausted.
Nevertheless the method yields good results for the lowest $10$ excitations
-- provided that these contain the dominant part of the spectral
distribution. This condition is satisfied for the excitations in
$S_a(\omega,p,M,N),\, a=3,+$. For $S_-(\omega,p,M,N)$ near the softmode
momentum $p_1(M)$, however, this is not the case. In Table I we compare the
low lying excitations for $S_-(\omega,p,M,N),\,M=1/4, p=\pi$ and $p=\pi/2 -
2\pi/16$ on a ring with $N=16$ sites, as they follow from an exact
diagonalization (upper part of Table I) and the recursion algorithm (lower
part of Table I), respectively.
At $p=\pi$, $76.95\%$ of the spectral weight is found in the first excitation.
Energy and relative spectral weight of the first excitation are reproduced
within 13 digits. The following $7$ excitations can be identified term by
term with decreasing accuracy for the energies and the relative spectral
weights.
The situation is different for $p_-=\pi/2-2\pi/16 $, which can be seen in
the right hand part of Table I. The exact result yields large spectral
weights -- marked by an asterisk -- for the 1st ($19.55\%$), the 15th
($18.33 \%$) and the 20th ($13.80 \%$) excitation. The recursion method
reproduces the energy and spectral weight of the first excitation within 13
digits. The two other excitations with large spectral weight -- marked by
an asterisk -- are only in rough agreement with the
exact result. We found, however, that this inaccuracy has no effect on the
dynamical structure factors in the euclidean time representation
(\ref{ietl}). The latter will be investigated in Sec. IV. In Figs. 1(a),(b),
(c) we present the momentum dependence of the excitation energies in the
dynamical structure factors $S_a(\omega,p,M=1/4,N=28)$ as they follow from
the recursion method. The size of the symbols measures the relative spectral
weight $|\langle n| S_a(p)|s\rangle|^2/S_a(p,M,N)$. The normalization is
given by the static structure factors:
\begin{equation}\label{gbts}
S_a(p,M,N) = \int_{\omega_a(p,M,N)}^{\infty} d\omega\; S_a(\omega,p,M,N),
\quad a=3,+,-.
\end{equation}
There is a strict relation between the static transverse structure factors:
\begin{equation}\label{tias}
S_-(p,M,N) = S_+(p,M,N) + 2 M.
\end{equation}
It should be noted that $S_+(p,M,N) \approx 0$ for $p < p_1(M)$ [cf. Fig.
3(b)], which implies that the absolute spectral weight $|\langle n
|S_+(p)|s\rangle|^2$ is almost zero for $p<p_1(M)$.
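The relation (\ref{tias}) follows directly from the equal-time commutation relations; we add the short (standard) derivation for completeness:
\begin{eqnarray}
S_-(p,M,N)-S_+(p,M,N) &=&
\langle s|S_+(-p)S_-(p)|s\rangle - \langle s|S_-(p)S_+(-p)|s\rangle \nonumber\\
&=& \frac{1}{N}\sum_{x,y} e^{ip(y-x)} \langle s|[S_+(x),S_-(y)]|s\rangle
= \frac{2}{N}\sum_{x}\langle s|S_3(x)|s\rangle = 2M. \nonumber
\end{eqnarray}
Here completeness of the eigenstates, $S_{\pm}(p)^{\dagger}=S_{\mp}(-p)$, the parity invariance of the ground state (which allows one to replace $p$ by $-p$ in the second expectation value), and the commutator $[S_+(x),S_-(y)]=2S_3(x)\delta_{xy}$ together with $\sum_x\langle s|S_3(x)|s\rangle = S = MN$ have been used.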
The solid curves represent the lower bounds (\ref{lw3}),(\ref{rlor}) and
(\ref{alor}) obtained from the analysis of the spinwave continua.\cite{bib1}
The emergence of the softmode at $p=p_3(M=1/4)=\pi/2$ in the longitudinal
case [Fig. 1(a)] is clearly visible. Note that there are some excitations
with small spectral weights below the bound (\ref{lw3}) (for $p>3\pi/4$).
We do not know whether these spectral weights will survive in the
thermodynamical limit.
The lowest excitations in the transverse cases [Figs. 1(b) and 1(c)] are
found at $p=\pi$ and at the field dependent momenta
\begin{equation}\label{tfd}
p_1^\pm(M) = p_1(M) \pm \frac{2 \pi}{N} .
\end{equation}
We have analysed the finite-size dependence of the lowest excitation energies:
\begin{mathletters}\label{lee}
\begin{eqnarray}
\omega_3\biglb(p_{3}(M),M,N\bigrb) &=&
E\biglb(p=p_s+p_{3}(M),M=S/N,N\bigrb) - E(p_s,M=S/N,N),
\label{lee1} \\
\omega_1\biglb(\pi,M,N\bigrb) &=&
E\biglb(p=p_s+\pi,M=(S+1)/N,N\bigrb)-E(p_s,M=S/N,N),
\label{lee2} \\
\omega_{\pm}\biglb(p=p_1^\pm(M),M,N\bigrb) &=&
E\biglb(p_s+p_1^\pm(M),M=(S\pm1)/N,N\bigrb) - E(p_s,M=S/N,N),
\label{lee3}
\end{eqnarray}
\end{mathletters}
$p_s$ denotes the ground-state momentum in the sector with total spin
$S$; $p_s=0$ if $N+2S$ is a multiple of $4$, $p_s=\pi$ otherwise.
The lowest energy eigenvalues $E(p,M,N)$ with momentum $p$ and spin $S$ were
computed on large systems ($N \le 2048 $) by solving the Bethe {\it ansatz}
equations. The extrapolation of the energy differences (\ref{lee}) to
the thermodynamical limit
\begin{mathletters}\label{tttl}
\begin{eqnarray}
\lim_{N\to\infty}N\omega_3(p_3(M),M,N)= \Omega_3(M), \quad & &
\lim_{N\to\infty}N\omega_1(\pi,M,N)= \Omega_1(M),\label{tttl1}\\
\lim_{N\to\infty}N\omega_{\pm}\biglb(p_1^\pm(M),M,N\bigrb)
&=& \Omega_1^\pm(M),\label{tttl2}
\end{eqnarray}
\end{mathletters}
obey the following relations:
\begin{equation}\label{omeg1}
\Omega_1^\pm(M) = \Omega_3(M) \pm \Omega_1(M) .
\end{equation}
Together with the spinwave velocity $v(M)$
\begin{equation}\label{geschw}
2 \pi v(M)=\lim_{N \to \infty} N[E(p_s+2\pi/N,M,N)-E(p_s,M,N)],
\end{equation}
they define the scaled energy gaps:
\begin{mathletters}\label{teta}
\begin{eqnarray}
2 \theta_a(M) &=& \frac{\Omega_a(M)}{\pi v(M)}, \quad a=3,1,
\label{tetaa} \\
2 \theta_1^\pm(M) &=& \frac{\Omega_1^\pm(M)}{\pi v(M)}
= 2 [\theta_3(M)\pm\theta_1(M)] \label{tetein}.
\end{eqnarray}
\end{mathletters}
The $M$-dependence of the quantities $\theta_a(M),
\; a=3,1$, is shown in Fig. 2. It turns out that
\begin{equation}\label{quot}
2 \theta_1(M) = \frac{1}{2 \theta_3(M)}
\end{equation}
in accord with the analytical result of Bogoliubov, Izergin and
Korepin \cite{bib7}.
In the limit $M \rightarrow 1/2$ one finds $2 \theta_3(M) = 1+2M$.\cite{bik}
The dotted line in Fig. 2 near $M = 0$ indicates the logarithmic singularity
\begin{equation}\label{celim2}
2 \theta_3(M) \stackrel{M\to0}{\longrightarrow}
1+\left(\ln\frac{1}{M^2}\right)^{-1},
\end{equation}
which was obtained by Bogoliubov, Izergin and Korepin \cite{bik} by a
perturbative approach to the Bethe {\it ansatz} equations.
\section{Critical behavior of the static structure factors at the softmode
momenta}
The static structure factors of the antiferromagnetic Heisenberg model in the
presence of a magnetic field have been investigated in a previous numerical
study on systems up to $N=28$.\cite{bib4} Meanwhile we have extended the system
size to $N=32$ and $N=36$ at fixed magnetization $M=1/4$. We find the following
features:
\begin{enumerate}
\item The transverse structure factor at momentum $p=\pi$ diverges for
$N \to \infty$. A power law fit
\begin{equation}\label{powlaw}
S_1(\pi,M,N) \stackrel{N\to\infty}{\longrightarrow} 0.503
N^{1-\eta_1(M)},
\end{equation}
to the finite system results for $N=36,32,28$ leads to the value
$\eta_1(M=1/4)= 0.65$ for the critical exponent (a minimal sketch of
such a fit is given after this list). The same exponent
governs the approach to the singularity in the momentum $p$:
\begin{equation}\label{mompe}
S_1(p,M,\infty) \stackrel{p\to\pi}{\longrightarrow}
0.316 \left(1-\frac{p}{\pi}\right)^{\eta_1(M)-1}.
\end{equation}
The finite-size dependence (\ref{powlaw}) is shown in Fig. 3(a).
The momentum dependence can be seen in Fig. 3(b), where we have
plotted $S_1(p,M=\frac{1}{4},N)$ versus
$(1-\frac{p}{\pi})^{\eta_1(M)-1}$,
using the critical exponent determined in Fig. 3(a).
\item The approach to the field dependent soft mode $p_1(M)=2\pi M$
in the transverse structure factor is shown in the upper left $[p\to
p_1(M)-0]$ and the lower right $[p \to p_1(M)+0]$ insets of Fig. 3(b).
The numerical data behave as
\begin{equation}\label{powf2}
S_1\biglb(p\to p_1(M)\pm 0,M,\infty\bigrb)\sim
\left | 1 - \frac{p}{p_1(M)}\right | ^{\eta_1^\pm(M) -1}.
\nonumber
\end{equation}
if the critical exponents are chosen to be $\eta_1^+(M=1/4)=2.17$ ,
$\eta_1^-(M=1/4)=0.8 ... 1.2$. The uncertainty in
$\eta_1^-(M=1/4)$ reflects an instability in the fit to the numerical
data. Note, that the right hand side of (\ref{powf2}) diverges for
$\eta_1^-(M=1/4)<1$ but converges for $\eta_1^-(M=1/4)>1$. An
unambiguous determination of $\eta_1^-(M=1/4)$ demands for much larger
systems than $N=36$.
\item The finite-size dependence of the longitudinal structure factors
at $p=p_3(M)$
\begin{equation}\label{powf3l}
S_3\biglb(p_3(M),M,N\bigrb) \stackrel{N\to\infty}
{\longrightarrow} -0.124 N^{1-\eta_3(M)} + 0.308,
\end{equation}
is shown in Fig. 4(a) for $M=1/4,\; p=p_3(M)=\pi/2$.
A power law fit to the finite system results with $N=36,32,28$ yields:
$\eta_3(M=1/4)=1.51$. The same exponent governs the approach to the
singularity from the left:
\begin{equation}\label{fleft}
S_3\biglb(p\to p_3(M)-0,M,N\bigrb)
\stackrel{N\to\infty}{\longrightarrow}
-0.312\left(1-\frac{p}{p_3(M)}\right)^{\eta_3(M)-1} + 0.322,
\end{equation}
as is demonstrated in Fig. 4(b). It is not so easy to decide whether a
different exponent is needed to describe the approach to the
singularity from the right. In the inset of Fig. 4(b) we plot the
approach from the right versus
$|1-p/p_3(M)|^{\eta_3(M=1/4)-1}$.
The Fourier transform of the singularities in the static structure
factors determines the large distance behavior of the corresponding
spin spin correlators:
\begin{mathletters}\label{ldbe}
\begin{eqnarray}\label{ldbe1}
\langle s|S_1(0) S_1(x)|s\rangle \;&&
\stackrel{x\to\infty}{\longrightarrow} \;
\cos(\pi x) \frac{A_1(M)}{x^{\eta_1(M)}}
+ \cos[p_1(M)x]
\left(\frac{A_1^+(M)}{x^{\eta_1^+(M)}} +
\frac{A_1^-(M)}{x^{\eta_1^-(M)}}\right), \\
\langle s|S_3(0)S_3(x)|s\rangle-\langle
s|S_3(0)|s\rangle^2 \;
&& \stackrel{x\to\infty}{\longrightarrow} \;
\cos [p_3(M) x] \frac{A_3(M)}{x^{\eta_3(M)}}. \label{ldbe2}
\end{eqnarray}
\end{mathletters}
Conformal field theory \cite{bib9} predicts a relation between the
critical exponents $\eta(M)$ in (\ref{ldbe}) and the
scaled energy gaps (\ref{teta}):\cite{bib7,bib8}
\begin{mathletters}\label{reled}
\begin{eqnarray}
2 \theta_a(M) &=& \eta_a(M), \quad a=3,1, \\
2 \theta_1^\pm(M)&=& \eta_1^\pm(M).
\end{eqnarray}
\end{mathletters}
A derivation of (\ref{reled}) is presented in Appendix A. A
comparison of the left and right hand sides of (\ref{reled}) is
given in Table II.
\end{enumerate}
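As referenced in item 1 above, the two-parameter power-law fits such as (\ref{powlaw}) reduce to linear regression in log-log coordinates. A minimal sketch (the data values below are placeholders, not our numerical results):
\begin{verbatim}
import numpy as np

# Finite-size data S_1(pi, M=1/4, N); placeholder values.
N = np.array([28.0, 32.0, 36.0])
S = np.array([1.60, 1.68, 1.75])

# Fit S ~ a * N^(1 - eta) via linear regression of log S on log N.
slope, log_a = np.polyfit(np.log(N), np.log(S), 1)
eta = 1.0 - slope
print("eta =", round(eta, 2), " a =", round(np.exp(log_a), 3))
\end{verbatim}
Three-parameter fits with an additive constant, such as (\ref{powf3l}), require a nonlinear least-squares routine instead.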
\section{Finite-size scaling analysis of the infrared singularities}
The Euclidean time representation
\begin{eqnarray}\label{ietl}
S_a(\tau,p,M,N) = \int^{\infty}_{\omega_a(p,M,N)} d\omega \;
e^{-\omega\tau}S_a(\omega,p,M,N), \quad a=3,+,-,
\end{eqnarray}
is most suited to study finite-size effects in the dynamical structure factors
(\ref{dsf}). The singularities in the static structure factors
$S_a(\tau=0,p,M,N)$ at the softmode momenta originate from the infrared
singularities in the dynamical structure factors. In the combined limit
\begin{equation}\label{ctcl}
\tau \to \infty, \qquad N\to\infty,
\end{equation}
keeping fixed the `scaling' variables
\begin{equation}\label{scvar}
z_a(p,M)\equiv \tau \omega_a(p,M,N), \quad a=3,+,- ,
\end{equation}
the low frequency part at the softmode momenta $p=\pi,\;p=p_1(M)\pm
2\pi/N,\;p=p_3(M)$ is projected out. We therefore expect to see direct
signatures of the infrared singularities here. Let us assume that the emergence of
the infrared singularities on finite systems can be described by a finite-size
scaling ansatz:
\begin{equation}\label{fsans}
S_a(\omega,p,M,N) = \omega^{-2\alpha_a(p,M)}
g_a\biglb(\omega/\omega_a(p,M,N),n_a(p,M,N)\bigrb),\quad a=3,+,-.
\end{equation}
The scaling functions $g_a$ are supposed to depend only on the scaled
excitation energies $\omega/\omega_a(p,M,N)$ and the variable
\begin{equation}\label{hvar}
n_a(p,M,N) = [p-p_a(M)]N/(2\pi),
\end{equation}
which describes the approach to the softmode momenta. The ansatz (\ref{fsans})
induces the following finite-size scaling behavior of the euclidean time
representation (\ref{ietl}) in the combined limit (\ref{ctcl}) and
(\ref{scvar}):
\begin{equation}\label{bitl}
\tau^{1-2\alpha_a(p,M)} S_a(\tau,p,M,N) =
G_a\biglb(z_a(p,M),n_a(p,M,N)\bigrb) \exp[-z_a(p,M)].
\end{equation}
The two scaling functions on the right hand sides of equations
(\ref{fsans}) and (\ref{bitl}) are related via:
\begin{equation}
G_a(z,n) = z^{1-2 \alpha_a} \int_1^{\infty} dx \; e^{-(x-1)z}\,
g_a(x,n), \quad a=3,+,- .
\end{equation}
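
The relation between $g_a$ and $G_a$ is easily evaluated numerically. The
following Python sketch does this for an assumed model form of the scaling
function, chosen purely for illustration; it is not the scaling function
extracted from our data.
\begin{verbatim}
# Sketch: evaluate G(z) = z**(1 - 2*alpha) * int_1^inf e^{-(x-1) z} g(x) dx
# for a model scaling function.  g(x) = x**(-2*alpha) is an assumption.
import numpy as np
from scipy.integrate import quad

alpha = 0.23                          # trial exponent

def g(x):
    return x**(-2.0 * alpha)

def G(z):
    val, _ = quad(lambda x: np.exp(-(x - 1.0) * z) * g(x), 1.0, np.inf)
    return z**(1.0 - 2.0 * alpha) * val

for z in (0.5, 1.0, 2.0):
    print(z, G(z))
\end{verbatim}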
Based on our numerical results for $S_a(\tau,p,M,N)$ at
$M=\frac{1}{4}$, $a=3,+$, $N=16,20,\ldots,36$ and $a=-$,
$N=16,20,\ldots,32$
at the softmode momenta we will now
test the validity of the finite-size scaling ansatz (\ref{bitl}).
Let us start with the longitudinal structure factor at the softmode
$p=p_3(M=1/4) =\pi/2$. In this case the variable (\ref{hvar}) is
$n_3(p=\pi/2,M=1/4) = 0$. The left hand side of (\ref{bitl}) versus the
scaling variable $z_3(p=\pi/2, M=1/4)$ is shown in Fig. 5(a) for the
trial values $\alpha_3(p=\pi/2,M=1/4)=0.22,\,0.23,\,0.234$. For $z_3
\ge 0.4$ [inset of Fig. 5(a)] the finite system results coincide best if
\begin{equation}\label{zwodrei}
\alpha_3(p=\pi/2, M=1/4) = 0.23 .
\end{equation}
Therefore, this is the expected critical exponent for the infrared singularity
in the longitudinal structure factor. Deviations from this value for
$\alpha_3$ on the left hand side of (\ref{bitl}) obviously lead to a violation
of finite-size scaling. It is remarkable that finite-size scaling
[with the exponent $\alpha_3(p=\pi/2, M=1/4) = 0.23$] persists for all values
$z_3 \ge 0.4$. In the limit $z_3 \to \infty$ the first excitation alone
survives and we can conclude for the finite-size dependence of the transition
probability:
\begin{equation}\label{trprob}
|\langle n=1|S_3(p=\pi/2)|s\rangle|^2 \stackrel{N\to\infty}
{\longrightarrow}
N^{2\alpha_3-1}.
\end{equation}
In other words, the critical exponent $\alpha_3$ for the infrared singularity
can be read off from the finite-size dependence of the transition probability for
the first excitation. Indeed, this feature is predicted by conformal field
theory\cite{bib7} [cf. Eq. (A9) in appendix A].
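
The collapse test used above can be set up as in the following Python
sketch. The arrays standing in for $S_a(\tau,p,M,N)$ and $\omega_a(p,M,N)$
are synthetic placeholders; in our analysis they come from the numerical
diagonalization data.
\begin{verbatim}
# Sketch of the finite-size scaling collapse: for a trial exponent alpha,
# plot tau**(1 - 2*alpha) * S(tau, N) against z = tau * omega(N) for
# several N and tune alpha until the curves overlap.
import numpy as np
import matplotlib.pyplot as plt

alpha = 0.23                                 # trial exponent to be tuned
taus = np.linspace(0.1, 5.0, 50)
for N in (16, 20, 24, 28, 32, 36):
    omega_N = 2.0 * np.pi / N                # stand-in for omega_a(p, M, N)
    S = np.exp(-omega_N * taus) * taus**(2 * alpha - 1)   # stand-in data
    z = taus * omega_N
    plt.plot(z, taus**(1.0 - 2.0 * alpha) * S, label=f"N={N}")
plt.xlabel("z"); plt.legend(); plt.show()
\end{verbatim}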
Next we turn to the infrared singularities of the transverse structure factors
$S_{\pm}(\omega,p=\pi,M=1/4)$. As can be seen from Fig. 5(b), finite-size
scaling is found for the following choice of the critical exponents:
\begin{mathletters}\label{choice}
\begin{eqnarray}
\alpha_+(p=\pi,M=1/4) &=& 0.69,\\
\alpha_-(p=\pi,M=1/4) &=& 0.66 .
\end{eqnarray}
\end{mathletters}
In contrast to the longitudinal case, finite-size scaling can be observed here
for all values of the scaling variables $z_+, z_-$.
Finally we present in Figs. 6(a) and 6(b) the tests of finite-size scaling for
the transverse structure factors $S_{\pm}(\tau, p=\pi/2 \pm 2\pi/N, M=1/4, N)$
as we approach the field-dependent soft mode $p_1(M=1/4)=\pi/2$ from the left
$(p=\pi/2-2\pi/N)$ and from the right $(p=\pi/2+2\pi/N)$, respectively. The
critical exponents are found to be
\begin{mathletters}\label{rtce}
\begin{eqnarray}
\alpha_+(p=\pi/2+2\pi/N,M=1/4) &=& -0.20, \\
\alpha_-(p=\pi/2-2\pi/N,M=1/4) &=& -0.05.
\end{eqnarray}
\end{mathletters}
Finite-size scaling works quite well for $S_+$ for large and small values of
the scaling variable $z_+$ as can be seen from the inset in Fig. 6(a). This
is not the case for $S_-$. Here finite-size scaling breaks down for small
values of $z_-$ as is demonstrated in the inset of Fig. 6(b). The critical
exponent $\alpha_-(p=\pi/2-2\pi/N,M=1/4)=-0.05$ results from the finite-size
scaling analysis for large values of $z_-$, where the transition
probability for the first excitation is projected out and has the
following finite-size dependence:
\begin{equation}\label{fexa}
|\langle n=1|S_-(p=\pi/2-2\pi/N)|s\rangle|^2
\stackrel{N\to\infty}{\longrightarrow} N^{2\alpha_- -1}.
\end{equation}
\section{Discussion and conclusions}
In the presence of a uniform field, the one-dimensional antiferromagnetic
Heisenberg model is critical in the following sense:
The excitation spectrum is gapless at the momenta $p=0, \; p=\pi, \;
p=p_3(M)=\pi(1-2M)$ and $p=p_1(M)=2\pi M$.
In this paper we have tried to answer the following question: Is
conformal field theory applicable to describe the low-energy
excitations at these momenta? To answer this question we have determined:
\begin{enumerate}
\item the scaled energy gaps $2 \theta(M)$, defined through
(\ref{lee})--(\ref{teta}),
\item the critical exponents $\eta(M)$ for the singularities
(\ref{mompe}), (\ref{powf2}), and (\ref{fleft})
in the static structure factors,
\item the exponents $\alpha(M)$ for the infrared singularities
(\ref{fsans})
in the dynamical structure factors.
\end{enumerate}
A compilation of the various critical quantities for $M=1/4$ is given in
Table II.
The predictions of conformal field theory are reviewed in appendix A.
In particular the following relation is expected to hold:
\begin{equation}\label{ceis}
2 \theta(M) = \eta(M) = 2[1-\alpha(p,M)].
\end{equation}
Looking at Table II we find:
\begin{itemize}
\item[(a)] The critical quantities $2 \theta_3(M=1/4),\;
\eta_3(M=1/4)$ and $2-2\alpha_3(p=\pi/2,M=1/4)$ agree within the
numerical uncertainty. Moreover, the critical exponent
$\alpha_3(p=\pi/2, M=1/4)$ also governs the finite-size dependence
of the transition probability for the lowest excitation
(\ref{trprob}). We therefore conclude that the excitations
in the longitudinal
structure factors at the softmode $p_3(M)=\pi(1-2M)$ are correctly
described by conformal field theory.
\item[(b)] The critical quantities $2 \theta_1(M=1/4),
\eta_1(M=1/4), 2-2\alpha_+(p=\pi,M=1/4), 2-2\alpha_-(p=\pi,M=1/4) $
agree within numerical uncertainties.
In both cases the finite-size dependence of the
transition probability for the lowest excitation is in accord with the
prediction of conformal field theory.
\item[(c)] The critical quantities $2\theta_1^+(M=1/4)$ and
$\eta_1^+(M=1/4)$ agree within numerical uncertainties and deviate
by about $15 \%$ from the exponent $2[1-\alpha_+
(p=\pi/2+2\pi/N,M=1/4)]$.
\item[(d)] The scaled energy gap $2\theta_1^-(M=1/4)$ agrees, within the
large numerical uncertainty, with the critical exponent
$\eta_1^-(M=1/4)$, but deviates by more than a factor of 2 from the
exponent $2[1-\alpha_-(p=\pi/2-2\pi/N,M=1/4)]$, which we extracted from the
finite-size scaling analysis of the infrared singularity in
the transverse structure factor $S_-$ at the softmode
$p=p_1(M)-2\pi/N,\;M=1/4$. It was demonstrated
in Fig. 6(b) that finite-size scaling only works for large values of
the variable $z_-$, where the first excitation alone contributes.
Therefore, the
exponent $2[1-\alpha_-(p=\pi/2-2\pi/N,M=1/4)]$ is fixed by the
finite-size behavior (\ref{fexa}) of the transition probability for the
first excitation. This exponent is definitely different from the
scaled energy gap $2\theta_1^-(M=1/4)$.
\end{itemize}
It is worthwhile to note that in the cases (a), (b) and (c), where we find
agreement of our numerical results with the prediction (\ref{ceis}) of
conformal field theory, the spectral weight of the excitations is concentrated
at low frequencies. This can be seen directly for the case (b) ($p=\pi$) in
the left hand part of Table I. In contrast, the right hand part of Table I
shows the widespread distribution of the spectral weight for case (d). Here
we were not able to establish the identity (\ref{ceis}).
\section*{Acknowledgement}
We are indebted to Prof. K. Fabricius, who made available to us the exact
numerical results in the upper part of Table I.
We thank Prof. G. M\"uller for helpful comments on this paper.
M. Karbach gratefully acknowledges support by the Max Kade Foundation.
C. Gerhardt was supported by the Graduiertenkolleg `Feldtheoretische
und numerische Methoden in der Elementarteilchen Physik und
Statistischen Physik'.
\begin{appendix}
Attention-based deep learning models for natural language processing (NLP) have shown promise for a variety of machine translation and natural language understanding tasks. For word-level, sequence-to-sequence tasks such as translation, paraphrasing, and text summarization, attention-based models allow a single token ($e.g.$, a word or subword) in a sequence to be represented as a combination of all tokens in the sequence \citep{luong2015effective}. The distributed context allows attention-based models to infer rich representations for tokens, leading to more robust performance. One such model is the Transformer, which features a multi-headed self- and cross-attention mechanism that allows many different representations to be learned for a given token in parallel \citep{vaswani2017attention}. The encoder and decoder arms each contain several identical subunits that are chained together to learn embeddings for tokens in the source and target vocabularies.
Though the Transformer works well across a variety of different language pairs, such as (English, German) and (English, French), it consists of a large number of parameters and relies on a significant amount of data and extensive training to accurately pick up on syntactic and semantic relationships. Previous studies have shown that an NLP model's performance improves with the ability to learn underlying grammatical structure of a sentence \citep{kuncoro2018lstms, linzen2016assessing}.
In addition, it has been shown that simultaneously training models for machine translation, part of speech (POS) tagging, and named entity recognition provides a slight improvement over baseline on each task for small datasets \citep{niehues2017exploiting}. Inspired by these previous efforts, we propose to utilize the syntactic features that are inherent in natural language sequences, to enhance the performance of the Transformer model.
We suggest a modification to the embeddings fed into the Transformer architecture that allows tokens input to the encoder to attend not only to other tokens but also to syntactic features including POS, case, and subword position. These features are identified using a separate model (for POS) or are directly specified (for case and subword position) and are appended to the one-hot vector encoding for each token. Embeddings for the tokens and their features are learned jointly during the Transformer training process. As the embeddings are passed through the layers of the Transformer, the representation for each token is synthesized using a combination of word and syntactic features.
We evaluate the proposed model on English to German (EN-DE) translation on the WMT '14 dataset. For the EN-DE translation task, we utilize multiple syntactic features including POS, case, and subword tags that denote the relative position of subwords within a word \citep{sennrich2016linguistic}. Like POS, case is a categorical feature, which can allow the model to distinguish common words from important ones. Subword tags can help bring cohesion among subwords of a complex word (say, ``amalgamation'') so that their identity as a unit is not compromised by tokenization. We demonstrate with a number of experiments that incorporating these features improves translation performance on the EN-DE task. We show that the feature-rich syntax-infused Transformer uniformly outperforms the baseline Transformer in BLEU score as a function of the training data size. Examining the attention weights learned by the proposed model further justifies the effectiveness of incorporating syntactic features.
We also experiment with this modification of embeddings on the BERT\textsubscript{BASE} model on a number of General Language Understanding Evaluation (GLUE) benchmarks and observe considerable improvement in performance on multiple tasks. With the addition of POS embeddings, the BERT\textsubscript{BASE + POS} model outperforms BERT\textsubscript{BASE} on 4 out of 8 downstream tasks.
To summarize, our main contributions are as follows:
\begin{enumerate}
\item We propose a modification to the trainable embeddings of the Transformer model, incorporating explicit syntax information, and demonstrate superior performance on EN-DE machine translation task.
\item We modify pretrained BERT\textsubscript{BASE} embeddings by feeding in syntax information and find that the performance of BERT\textsubscript{BASE + POS} outperforms BERT\textsubscript{BASE} on a number of GLUE benchmark tasks.
\end{enumerate}
\section{Background}
\subsection{Baseline Transformer}
The Transformer consists of encoder and decoder modules, each containing several subunits that act sequentially to generate abstract representations for words in the source and target sequences \citep{vaswani2017attention}. As a preprocessing step, each word is first divided into subwords of length less than or equal to that of the original word \citep{sennrich2015neural}. These subwords are shared between the source and target vocabularies.
For all $m \in \{ 1,\: 2,\: \ldots, \: M\}$, where $M$ is the length of the source sequence, the encoder embedding layer first converts subwords $\mathbf{x}_m$ into embeddings $\mathbf{e}_m$:
\begin{align}
\mathbf{e}_m = \mathbf{E}\mathbf{x}_m
\end{align}
where $\mathbf{E} \in \mathbb{R}^{D\times N}$ is a trainable matrix with each column constituting the embedding of one subword in the vocabulary, $N$ is the total number of subwords in the shared vocabulary, and $\mathbf{x}_m \in \{0, 1\}^N : \sum_i x_{mi} = 1$ is a one-hot vector corresponding to subword $m$. These embeddings are passed sequentially through six encoder subunits. Each of these subunits features a self-attention mechanism that allows subwords in the input sequence to be represented as a combination of all subwords in the sequence. Attention is accomplished using three sets of weights: the key, query, and value matrices ($\mathbf{K}$, $\mathbf{Q}$, and $\mathbf{V}$, respectively). The key and query matrices interact to score each subword in relation to other subwords, and the value matrix gives the weights to which the score is applied to generate the output embedding of a given subword. Stated mathematically,
\begin{align}
\begin{split}
\mathbf{K} &= \mathbf{H}\mathbf{W}_K\\
\mathbf{Q} &= \mathbf{H}\mathbf{W}_Q\\
\mathbf{V} &= \mathbf{H}\mathbf{W}_V\\
\mathbf{A} &= \mbox{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^\top}{\sqrt{\rho}}\right)\mathbf{V}
\end{split}
\label{eq:attention}
\end{align}
where $\mathbf{H} = [\mathbf{h}_1 \: \mathbf{h}_2 \: \cdots \: \mathbf{h}_M]^\top \in \mathbb{R}^{M\times D}$ are the $D$-dimensional embeddings for a sequence of $M$ subwords indexed by $m$; $\mathbf{W}_K$, $\mathbf{W}_Q$, and $\mathbf{W}_V$ all $\in \mathbb{R}^{D\times P}$ are the projection matrices for keys, queries, and values, respectively; $\rho$ is a scaling constant (here, taken to be $P$) and $\mathbf{A} \in \mathbb{R}^{M \times P}$ is the attention-weighted representation of each subword. Note that these are subunit-specific -- a separate attention-weighted representation is generated by each subunit and passed on to the next. Moreover, for the first layer, $\mathbf{h}_m \coloneqq \mathbf{e}_m$.
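
As an illustration, the attention computation of Eq.~(\ref{eq:attention})
can be written in a few lines of NumPy; this is a single-head sketch with
randomly initialized weights, not our training code.
\begin{verbatim}
# NumPy sketch of scaled dot-product attention: one head, random weights,
# for illustration only.
import numpy as np

M, D, P = 5, 512, 64                 # sequence length, model dim, head dim
rng = np.random.default_rng(0)
H = rng.normal(size=(M, D))          # subword embeddings h_1 ... h_M
W_K, W_Q, W_V = (rng.normal(size=(D, P)) for _ in range(3))

K, Q, V = H @ W_K, H @ W_Q, H @ W_V
scores = Q @ K.T / np.sqrt(P)        # rho = P, as in the text
w = np.exp(scores - scores.max(axis=1, keepdims=True))
w /= w.sum(axis=1, keepdims=True)    # row-wise softmax
A = w @ V                            # attention-weighted representations
\end{verbatim}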
The final subunit then passes its information to the decoder, that also consists of six identical subunits that behave similarly to those of the encoder. One key difference between the encoder and decoder is that the decoder not only features self-attention but also cross-attention; thus, when generating new words, the decoder pays attention to the entire input sequence as well as to previously decoded words.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{featuresum.png}
\caption{Formation of attention matrices ($\mathbf{K}$, $\mathbf{Q}$, and $\mathbf{V}$) with syntactic information. The left column shows the word embedding matrix; the embedding matrices for the various features are shown on top.
Embeddings for the chosen features are either concatenated or summed together (denoted by $\oplus$) and finally concatenated to the word embeddings. Matrix multiplication with learned weights results in $\mathbf{K}$, $\mathbf{Q}$, and $\mathbf{V}$.
The attention matrices are double shaded to indicate the mix of word and syntax information.}
\label{fig:subwordconcat}
\end{figure}
\subsection{BERT}
While the Transformer is able to generate rich representations of words in a sequence by utilizing attention, its decoder arm restricts it to be task-specific. The word embeddings learned by the Transformer encoder, however, can be fine-tuned to perform a number of different downstream tasks. Bidirectional encoder representations from Transformers (BERT) is an extension of the Transformer model that allows for such fine-tuning. The BERT model is essentially a Transformer encoder (with number of layers $l$, embedding dimension $D$, and number of attention heads $\alpha$) which is pre-trained using two methods: masked language modeling (MLM) and next-sentence prediction (NSP). Subsequently, a softmax layer is added, allowing the model to perform various tasks such as classification, sequence labeling, question answering, and language inference. According to \citep{devlin2018bert}, BERT significantly outperforms previous state-of-the-art models on a range of NLP tasks, including those in the GLUE benchmark \citep{wang2018glue}.
\section{Model}
\subsection{Syntax-infused Transformer}
Syntax is an essential feature of grammar that facilitates generation of coherent sentences. For instance, POS dictates how words relate to one another ($e.g.$, verbs represent the actions of nouns, adjectives describe nouns, etc.). Studies have shown that when trained for a sufficiently large number of steps, NLP models can potentially learn underlying patterns about text like syntax and semantics, but this knowledge is imperfect \citep{jawahar2019does}. However, works such as \citep{kuncoro2018lstms, linzen2016assessing} show that NLP models that acquire even a weak understanding of syntactic structure through training demonstrate improved performance relative to baseline. Hence, we hypothesize that explicit prior knowledge of syntactic information can benefit NLP models in a variety of tasks.
To aid the Transformer in more rapidly acquiring and utilizing syntactic information for better translation, we ($i$) employ a pretrained model\footnote{https://spacy.io/} to tag words in the source sequence with their POS, ($ii$) identify the case of each word, and ($iii$) identify the position of each subword relative to other subwords that are part of the same word (subword tagging). We then append trainable syntax embedding vectors to the token embeddings, resulting in a combined representation of syntactic and semantic elements.
Specifically, each word in the source sequence is first associated with its POS label according to syntactic structure. After breaking up words into their corresponding subwords (interchangeably denoted as tokens), we assign each subword the POS label of the word from which it originated. For example, if the word \texttt{sunshine} is broken up into subwords \texttt{sun}, \texttt{sh}, and \texttt{ine}, each subword would be assigned the POS \texttt{NOUN}. The POS embeddings are then extracted from a trainable embedding matrix using a look-up table, in a manner similar to that of the subword embeddings (see Figure \ref{fig:subwordconcat}). The POS embeddings $\mathbf{f}_m^{P}$ of each subword (indexed by $m$) are then concatenated with the subword embeddings $\mathbf{e}_m \in \mathbb{R}^{D-d}$ to create a combined embedding where $d$ is the dimension of the feature embedding.
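
A minimal sketch of this step is shown below; the spaCy tagger is the kind
of pretrained model we employ, while \texttt{bpe\_split} is a hypothetical
stand-in for the actual BPE segmentation.
\begin{verbatim}
# Sketch: tag words with a pretrained POS tagger and propagate each
# word's POS label to its subwords.  bpe_split is a toy stand-in for
# the real subword segmentation.
import spacy

nlp = spacy.load("en_core_web_sm")

def bpe_split(word):
    return [word[:3], word[3:]] if len(word) > 3 else [word]

subword_pos = []
for token in nlp("The sunshine was bright"):
    for sub in bpe_split(token.text):
        subword_pos.append((sub, token.pos_))
# e.g. ("sun", "NOUN"), ("shine", "NOUN"), ...
\end{verbatim}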
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{bertmodel.png}
\caption{The BERT\textsubscript{BASE + POS} model. Token embeddings are combined with trainable POS embeddings and fed into the BERT encoder. The final embedding of the [CLS] token is fed into a softmax classifier for downstream classification tasks. The model is illustrated as taking in a pair of sequences, but single-sequence classification is also possible.}
\label{fig:bertpos}
\end{figure}
In a similar manner, we incorporate case and subword position features. For case, we use a binary element $z_m^c \in \{0, 1\}$ to look up a feature embedding $\mathbf{f}_m^c$ for each subword, depending on whether the original word is capitalized. For subword position, we use a categorical element $z_m^s \in \{B, M, E, O\}$ to identify a feature embedding $\mathbf{f}_m^s$ for each subword depending on whether the subword is at the beginning ($B$), middle ($M$), or end ($E$) of the word; if the subword comprises the full word, it is given a tag of $O$. These are then added onto the POS embedding. Mathematically, in the input stage, $\mathbf{h}_m$ becomes:
$$[\mathbf{e}_m^\top \: \mathbf{f}_m^\top]^\top = \mathbf{h}_m' \in \mathbb{R}^{D}$$
where $\mathbf{f}_m = \mathbf{f}_m^{P} \oplus \mathbf{f}_m^{c} \oplus \mathbf{f}_m^{s} \in \mathbb{R}^d$ is the learned embedding for the syntactic features of subword $m$ in the sequence of $M$ subwords and $\oplus$ denotes either the concatenation or summation operation.
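
In code, the combined embedding can be formed as in the following PyTorch
sketch; the vocabulary size, the tag-set sizes, and the helper
\texttt{embed} are illustrative assumptions, not our exact implementation.
\begin{verbatim}
# PyTorch sketch of h'_m = [e_m ; f_m], with f_m = f_POS + f_case + f_sub
# summed in a shared d-dimensional space.
import torch
import torch.nn as nn

D, d = 512, 20
word_emb = nn.Embedding(37000, D - d)  # subword vocabulary (assumed size)
pos_emb  = nn.Embedding(18, d)         # POS tag set (assumed size)
case_emb = nn.Embedding(2, d)          # capitalized or not
sub_emb  = nn.Embedding(4, d)          # subword tag: B / M / E / O

def embed(w, p, c, s):                 # integer id tensors for subwords
    f = pos_emb(p) + case_emb(c) + sub_emb(s)
    return torch.cat([word_emb(w), f], dim=-1)   # shape (..., D)
\end{verbatim}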
We conjecture that our syntax-infused Transformer model can boost translation performance by injecting grammatical relationships, without having to learn them from examples.
\subsection{Syntax-infused BERT}
Adding syntactic features to the BERT model is a natural extension of the above modification to the Transformer. As mentioned above, embeddings trained by BERT can be utilized for a variety of downstream tasks. We hypothesize that infusing BERT with syntactic features is beneficial in many of these tasks, especially those involving semantic structure.
Many of the datasets on which we evaluate our modified BERT model are low-resource (as few as 2.5k sentences) relative to those on which we evaluate the syntax-infused Transformer; hence, we choose to utilize only POS as a syntactic feature for BERT. We consider two approaches for combining POS features with the pre-trained embeddings in BERT, a model we denote as BERT\textsubscript{BASE + POS}: (1) addition of the trainable POS embedding vector of dimension $d=D$ to the token embedding and (2) concatenation of the POS embedding with the token embedding. To make a fair comparison with BERT\textsubscript{BASE}, the input dimension $D$ of the encoder must match that of BERT\textsubscript{BASE} ($D=768$). Thus, if option 2 is used, the concatenated embedding must be passed through a trainable affine transformation with weight matrix of size $(D+d) \times D$. While this option provides a more robust way to merge POS and word embeddings, it requires learning a large matrix, which is problematic for downstream tasks with very little training data. Hence, to facilitate training for these tasks and to standardize the comparison across different downstream tasks, we choose to use the first approach. Therefore, for a given token, its input representation is constructed by summing the corresponding BERT token embeddings with POS embeddings (see Figure \ref{fig:bertpos}).
Mathematically, the input tokens $\mathbf{h}_m' \in \mathbb{R}^D$ are given by $\mathbf{h}_m' = \mathbf{e}_m + \mathbf{f}_m^P$, where $\mathbf{e}_m$ is the BERT token embedding and $\mathbf{f}_m^P$ is the POS embedding for token $m$. For single-sequence tasks, $m = 1, 2, \ldots, M$, where $M$ is the number of tokens in the sequence, while for paired-sequence tasks, $m = 1, 2, \ldots, M_1 + M_2$, where $M_1$ and $M_2$ are the number of tokens in each sequence. As is standard with BERT, for downstream classification tasks, the final embedded representation $\mathbf{\hat{y}}_{CLS}$ of the first token (denoted as [CLS]) is passed through a softmax classifier to generate a label.
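
Schematically, the input construction of BERT\textsubscript{BASE + POS}
looks as follows; this is a sketch rather than the exact fine-tuning code,
and the POS tag-set size is an assumption.
\begin{verbatim}
# Sketch of the BERT_BASE+POS input: the token embedding is summed with
# a trainable POS embedding of the same dimension D = 768.
import torch
import torch.nn as nn

D = 768
token_emb = nn.Embedding(30522, D)  # BERT WordPiece vocabulary size
pos_emb = nn.Embedding(18, D)       # trainable POS embeddings (assumed size)

def bert_pos_input(token_ids, pos_ids):
    return token_emb(token_ids) + pos_emb(pos_ids)
\end{verbatim}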
\section{Datasets and Experimental Details}
For translation, we consider the WMT '14 EN-DE dataset. The WMT '14 dataset consists of 4.5M training sentences. Validation is performed on newstest2013 (3000 sentences) and testing is on the newstest2014 dataset (2737 sentences, \citep{zhang2019improving}). Parsers that infer syntax from EN sentences are typically trained on a greater number and variety of sentences and are therefore more robust than parsers for other languages. Since one of the key features of our models is to incorporate POS features into the source sequence, we translate \textit{from} EN \textit{to} DE. While incorporating all linguistic features described above is generally beneficial to NLP models, adding features may compromise the model by restricting the number of dimensions allocated to word embeddings, which still play the primary role. We consider this tradeoff in greater detail below.
\subsection{Machine translation}
We train both the baseline and syntax-infused Transformer for 100,000 steps. All hyperparameter settings of the baseline Transformer, including embedding dimensions of the encoder and decoder, match those of \citep{vaswani2017attention}. We train the syntax-infused Transformer model using 512-dimensional embedding vectors. In the encoder, $D - d = 492$ dimensions are allocated to word embeddings and $d = 20$ to feature embeddings (chosen by hyperparameter tuning). In the decoder, all 512 dimensions are used for word embeddings (since we are interested only in decoding words, not word-POS pairs).
The model architecture consists of six encoder and six decoder layers, with eight heads for multi-headed attention. Parameters are initialized with Glorot \citep{glorot2010understanding}. We use a dropout rate of 0.1 and batch size of 4096. We utilize the Adam optimizer to train the model with $\beta_1 = 0.9$ and $\beta_2 = 0.998$; gradients are accumulated for two batches before updating parameters. A label-smoothing factor of 0.1 is employed.
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Data & Number of & Baseline & Syntax-infused \\
Fraction & Sentences & Transformer &Transformer \\
\hline
1\% & 45k & 1.10 & \textbf{1.67} \\ \hline
5\% & 225k & 8.51 & \textbf{10.50} \\ \hline
10\%& 450k & 16.28 & \textbf{17.28} \\ \hline
25\%& 1.1M & 22.72 & \textbf{23.24} \\ \hline
50\%& 2.25M& 25.41 & \textbf{25.74} \\ \hline
100\%&4.5M & 28.94 & \textbf{29.64} \\ \hline
\end{tabular}
\caption{BLEU scores for different proportions of the data for baseline Transformer vs syntax-infused Transformer for the EN-DE task on newstest2014.}
\label{tab:hrmtvsdatasize}
\end{table}
\begin{figure}[t]
\centering
\begin{subfigure}{0.23\textwidth}
\centering
\includegraphics[width=\textwidth, height=\textwidth]{sent458_mod_base.pdf}
\caption{Baseline (EN-DE)}
\end{subfigure}
\begin{subfigure}{0.225\textwidth}
\centering
\includegraphics[width=\textwidth, height=\textwidth]{sent458.pdf}
\caption{Syntax-infused (EN-DE)}
\end{subfigure}
\caption{Comparison of attention for example sentences translated by baseline and POS Transformer models (obtained from the last layer). Rows depict the attention score for a given target subword to each of the subwords in the source sequence. In syntax-infused models for EN-DE translation, we find that attention is more widely distributed across subwords. For instance, the subword ``Vater'' (the German word for ``father'') attends mostly to the nearby subwords ``his'' and ``father'' in the base model while ``Vater'' also attends to the more distant words ``Bwelle'' (a person) and ``escorting'' in the syntax-infused model. This suggests that the syntax-infused model is able to better connect disparate parts of a sentence to aid translation. Note that the number of rows in the baseline and syntax-infused Transformer are different because each produces different predictions.}
\label{fig:attention}
\end{figure}
The context and size of the EN-DE translation dataset are quite different from those of the datasets on which POS tagging methods are typically trained, implying that the POS tagging model may not generalize well. Hence, we include not only POS but also case and subword tag features. The training procedure is identical to that of \citep{vaswani2017attention} except that, for the syntax-infused Transformer, the dimension $d$ of the features $\mathbf{f}_m$ is chosen to be 20 by a grid search over the range 8 to 64.
\subsection{Natural language understanding}
The General Language Understanding Evaluation (GLUE) benchmark \citep{wang2018glue} is a collection of different natural language understanding tasks evaluated on eight datasets: Multi-Genre Natural Language Inference (MNLI), Quora Question Pairs (QQP), Question Natural Language Inference (QNLI), Stanford Sentiment Treebank (SST-2), The Corpus of Linguistic Acceptability (CoLA), The Semantic Textual Similarity Benchmark (STS-B), Microsoft Research Paraphrase Corpus (MRPC), and Recognizing Textual Entailment (RTE). For a summary of these datasets, see \citep{devlin2018bert}. We use POS as the syntactic feature for BERT for these tasks. Aside from the learning rate, we use identical hyperparameter settings to fine-tune both the BERT\textsubscript{BASE} and BERT\textsubscript{BASE + POS} models for each task. This includes a batch size of 32 and 3 epochs of training for all tasks. For each model, we also choose a task-specific learning rate among the values $\{5, 4, 3, 2\} \times 10^{-5}$, which is standard for BERT\textsubscript{BASE}.
\begin{table*}[!h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline \bf System & \bf MNLI & \bf QQP & \bf QNLI & \bf SST-2 & \bf CoLA & \bf STS-B & \bf MRPC & \bf RTE & \bf Average \\
& 392k & 363k & 108k & 67k & 8.5k & 5.7k & 3.5k & 2.5k & - \\ \hline
Pre-OpenAI SOTA & 80.6/80.1 & 66.1 & 82.3 & 93.2 & 35.0 & 81.0 & 86.0 & 61.7 & 74.0 \\ \hline
BiLSTM+ELMo+Attn & 76.4/76.1 & 64.8 & 79.8 & 90.4 & 36.0 & 73.3 & 84.9 & 56.8 & 71.0 \\ \hline
OpenAI GPT & 82.1/81.4 & 70.3 & 87.4 & 91.3 & 45.4 & 80.0 & 82.3 & 56.0 & 75.1 \\ \hline
BERT\textsubscript{BASE} & 84.6/83.4 & 71.2 & 90.5 & 93.5 & 52.1 & 85.8 & 88.9 & 66.4 & 79.6 \\ \hline
BERT\textsubscript{BASE + POS} & 84.4/83.3 & \textbf{71.4} & 90.4 & \textbf{93.9} & \textbf{52.9} & 85.5 & 88.8 & \textbf{66.9} & \textbf{79.7} \\ \hline
\hline
\end{tabular}
\end{center}
\caption{GLUE test results scored using the GLUE evaluation server. The number below each task denotes the number of training examples. The scores in bold denote the tasks for which BERT\textsubscript{BASE + POS} outperforms BERT\textsubscript{BASE}.}
\label{tab:glueresults}
\end{table*}
\section{Experimental Results}
\subsection{Machine translation}
We evaluate the impact of infusing syntax into the baseline Transformer for the EN-DE translation task. We add three features, namely POS, subword tags, and case, to aid the Transformer model in learning underlying patterns in the sentences.
With more than one feature, there are multiple ways to incorporate feature embeddings into the word embeddings. For a fair comparison with the Transformer baseline, we use a total of 512 dimensions to represent both the word embeddings and the feature embeddings. One important tradeoff is that as the dimensionality of the syntax information increases, the dimensionality available for the actual word embeddings decreases. Since POS, case, and subword tags can take only a limited number of values, dedicating a high dimensionality to each feature proves detrimental, as we found experimentally. We find that the total feature dimension for which the gain in BLEU score is maximized is 20 (found through grid search). This means that (1) each feature embedding can be given 20 dimensions and the three embeddings summed together, or (2) the feature embeddings can be concatenated to each other such that their total dimensionality is 20. Therefore, in order to efficiently learn the feature embeddings while not sacrificing word embedding dimensionality, we find that summing the embeddings for the three different features with $d=20$ and concatenating the sum to word embeddings of dimension $D-d=492$ gives the maximum translation performance. We also find that incorporating a combination of only two features among \{POS, case, subword tags\} does not perform as well as having all three features.
In Table \ref{tab:hrmtvsdatasize}, we vary the proportion of data used for training and observe the performance of both the baseline and syntax-infused Transformer. The syntax-infused model markedly outperforms the baseline model, offering an improvement of 0.57, 1.99, 1.00, 0.52, 0.33, and 0.70 points, respectively, for 1, 5, 10, 25, 50, and 100\% of the data. It is notable that the syntax-infused model translates the best relative to the baseline when only a fraction of the dataset is used for training. Specifically, the maximum improvement is 1.99 BLEU points when only 5\% of the training data is used. This shows that explicit syntax information is most helpful under limited training data conditions. As shown in Figure \ref{fig:attention}(a)-(b), the syntax-infused model is better able to capture connections between tokens that are far apart yet semantically related, resulting in improved translation performance. In addition, Table \ref{tab:translationexamples} shows a set of sample German predictions made by the baseline and syntax-infused Transformer.
\begin{table*}[t]
\begin{center}
\begin{tabular}{|p{5.15cm}|p{5.15cm}|p{5.15cm}|}
\hline \bf Reference & \bf Baseline Transformer & \bf Syntax-infused Transformer \\
\hline Parken in Frankfurt k{\"o}nnte bald empfindlich teurer werden . & Das Personal war sehr freundlich und hilfsbereit . & \textcolor{blue}{Parken in Frankfurt} \textcolor{blue}{k{\"o}nnte bald} sp{\"u}rbar \textcolor{blue}{teurer} sein .
\\ \hline
Die zur{\"u}ckgerufenen Modelle wurden zwischen dem 1. August und 10. September hergestellt .
&
Zwischen August 1 und September 10.
&
Die \textcolor{blue}{zur{\"u}ckgerufenen Modelle wurden zwischen dem 1. August und 10. September} gebaut
\\ \hline
Stattdessen verbrachte Bwelle Jahre damit , seinen Vater in {\"u}berf{\"u}llte Kliniken und Hospit{\"a}ler zu begleiten , um dort die Behandlung zu bekommen , die sie zu bieten hatten .
&
Stattdessen verbrachte Bwelle Jahre damit , seinen Vater mit {\"u}ber f{\"u}llten Kliniken und Krankenh{\"a}usern zu beherbergen .
&
Stattdessen verbrachte Bwelle Jahre damit , seinen Vater zu {\"u}berf{\"u}llten Kliniken und Krankenh{\"a}usern zu begleiten , \textcolor{blue}{um} jede \textcolor{blue}{Behandlung zu bekommen , die sie bekommen} konnten .
\\ \hline
Patek kann gegen sein Urteil noch Berufung ein legen .
&
Patek kann noch seinen Satz an rufen .
&
Patek mag sein \textcolor{blue}{Urteil noch Berufung ein legen} .
\\ \hline
\end{tabular}
\end{center}
\caption{Translation examples of baseline Transformer vs. syntax-infused Transformer on the EN-DE dataset. The text highlighted in blue represents words correctly predicted by the syntax-infused model but not by the baseline Transformer.}
\label{tab:translationexamples}
\end{table*}
\begin{table*}[t]
\begin{center}
\begin{tabular}{|p{8.5cm}|p{4.8cm}|p{2.15cm}|}
\hline \bf Sentence 1 & \bf Sentence 2 & \bf True label \\ \hline
The Qin (from which the name China is derived) established the approximate boundaries and basic administrative system that all subsequent dynasties were to follow . & Qin Shi Huang was the first Chinese Emperor . & Not entailment \\ \hline
In Nigeria, by far the most populous country in sub-Saharan Africa, over 2.7 million people are infected with HIV . & 2.7 percent of the people infected with HIV live in Africa . & Not entailment \\ \hline
\end{tabular}
\end{center}
\caption{Examples of randomly chosen sentences from the RTE dataset (for evaluation of entailment between pairs of sentences) that were misclassified by BERT\textsubscript{BASE} and correctly classified by BERT\textsubscript{BASE + POS}.}
\label{tab:bertexamples}
\end{table*}
\subsection{Natural language understanding}
The results obtained for the BERT\textsubscript{BASE + POS} model on the GLUE benchmark test set are presented in Table \ref{tab:glueresults}. BERT\textsubscript{BASE + POS} outperforms BERT\textsubscript{BASE} on 4 out of the 8 tasks. The improvements range from marginal to significant, with a maximum improvement of 0.8 points of the POS model over BERT\textsubscript{BASE} on CoLA. Fittingly, CoLA is a task which assesses the linguistic structure of a sentence, which is explicitly informed by POS embeddings. Moreover, BERT\textsubscript{BASE + POS} outperforms BERT\textsubscript{BASE} on tasks that are concerned with evaluating semantic relatedness. For examples of predictions made on the RTE dataset, see Table \ref{tab:bertexamples}.
\section{Related Works}
Previous work has sought to improve the self-attention module to aid NLP models. For instance, \citep{yang2018modeling} introduced a Gaussian bias to model locality, enhancing the model's ability to capture local context while also maintaining long-range dependencies. Instead of absolute positional embeddings, \citep{shaw2018self} experimented with relative positional embeddings, i.e., distances between sequence elements, and found that this led to a marked improvement in performance.
Adding linguistic structure to models like the Transformer can be thought of as a way of improving the attention mechanism. The POS and subword tags act as a form of relative positional embedding by enforcing the sentence structure. \citep{li2018multi} encouraged different attention heads to learn different information, such as position and representation, by introducing a disagreement regularization. In order to model the local dependency between words more efficiently, \citep{im2017distance} introduced a distance measure between words and incorporated it into self-attention.
Previous literature has also sought to incorporate syntax into deep learning NLP models. \citep{bastings2017graph} used syntactic dependency tree information in a bidirectional RNN for translation by modeling the trees with Graph Convolutional Networks (GCNs) \citep{kipf2016semi}. Modeling source-side syntax by linearizing parse trees has significantly helped Chinese-English translation \citep{li2017modeling}. Adding a syntax-based distance constraint on the attention module, to generate a more semantic context vector, has also proven to work for translation systems on the Chinese-English as well as English-German tasks.
These works affirm that adding syntax information can help NLP models translate better from one language to another and achieve better performance.
\section{Conclusions}
We have augmented the Transformer network with syntax information for machine translation. The improvements of the syntax-infused Transformer were highest when only a subset of the training data was used. We then distinguished the syntax-infused and baseline Transformer models by interpreting their attention visualizations. Additionally, we find that the syntax-infused BERT model performs better than the baseline on a number of GLUE downstream tasks.
It is an open question whether the efficiency of these sophisticated models can be further improved by creating an architecture that models language structure more inherently than end-to-end models do. Future work may extend in this direction.
A \emph{partial order} of a set $X$ is a reflexive, antisymmetric, and transitive binary relation on $X$. When the relation is total, the order is called a \emph{linear order}, since it captures the idea of ordering the elements of $X$ in a line. In contrast, to capture the idea of ordering a set $X$ in a circle, it is well known that a ternary relation is needed.
A partial order can be graphically represented by a transitive oriented graph. In particular, the oriented graph associated to a linear order is a transitive oriented tournament. There is no obvious way for representing graphically a partial cyclic order. In \cite{Alles1991}, the authors introduce certain subclass of cyclic orders that can be represented by means of oriented graphs. However, in the literature there is no custom manner to represent any partial cyclic order.
In this paper we introduce a definition for oriented $3$--hypergraph and a notion of transitivity, in order to represent partial cyclic orders. Just as it occurs with linear orders and transitive tournaments, with our definition, it happens that the transitive oriented $3$--hypertournament is unique. Among the study of intersection graphs, the class of permutation graphs has received lots of attention (see, for instance, \cite{Golumbic1980} and references therein). In particular, an old result by Pnueli, Lempel and Even~\cite{Pnueli1971} characterizes permutation graphs in terms of comparability graphs. In this work we define the $3$--hypergraph associated to cyclic permutation, and cyclic comparability $3$--hypergraphs.
Using these notions we extend the result mentioned above, by characterizing cyclic permutation $3$--hypergraphs in terms of cyclic comparability $3$--hypergraphs (Theorem~\ref{thm:comparability}). In order to prove Theorem~\ref{thm:comparability} we first explore its oriented version (Theorem~\ref{thm:permutation}).
One of the most important problems in the study of cyclic orders is the extendability of cyclic orders to complete (or total) cyclic orders (see \cite{Alles1991,Megiddo1976}). That is, whether the orientation of each triplet in the cyclic order can be derived from a total cyclic order. Examples of cyclic orders that are not extendable are well known~\cite{Alles1991}. In this context, we exhibit a class of cyclic orders which are totally extendable (Theorem~\ref{thm:main_nes}).
\section{Transitive Oriented $3$--Hypergraphs and cyclic orders}
\label{sec:cyclic}
Let $X$ be a set of cardinality $n$. A \emph{partial cyclic order} of $X$ is a ternary relation $T\subset X^{3}$ which is \emph{cyclic}: $(x,y,z)\in T\Rightarrow (y,z,x)\in T$; \emph{asymmetric}: $(x,y,z)\in T\Rightarrow(z,y,x)\not\in T$; and \emph{transitive}: $(x,y,z),(x,z,w)\in T\Rightarrow(x,y,w)\in T$. If in addition $T$ is \emph{total}: for each $x\neq y\neq z\neq x$, either $(x,y,z)\in T$ or $(z,y,x)\in T$, then $T$ is called a \emph{complete cyclic order}. Partial cyclic orders have been studied in several papers; see \cite{Megiddo1976, Alles1991, Novak1982, Novotny1983} for excellent references.
A \emph{cyclic ordering} of $X$ is an equivalence class, $[\phi ]$, of the set of linear orderings (i.e. bijections $\phi :[n]\rightarrow X$, where $[n]$ denotes the set $\{ 1,2,\dots ,n\}$) with respect to the \emph{cyclic equivalence relation} defined as: $\phi \sim \psi $, if and only if there exists $k\leq n$ such that $\phi (i)=\psi (i+k)$ for every $i\in \lbrack n]$, where $i+k$ is taken (mod $n$). Note that complete cyclic orders and cyclic orderings correspond to the same concept in different contexts. We use the word \textquotedblleft order\textquotedblright \ to refer to a ternary relation, and the word \textquotedblleft ordering\textquotedblright \ to refer to the cyclic equivalence class.
For the remainder of this paper we will denote each cyclic ordering $[\phi]$ in cyclic permutation notation, $(\phi(1)\, \phi(2)\, \ldots\,\phi(n))$. For example, there are two different cyclic orderings of $X=\{u,v,w\}$, namely $(u\, v\, w)$ and $(u\, w\, v)$, where $(u\, v\, w)=(v\, w\, u)=(w\,u\, v)$ and $(u\, w\, v)=(v\, u\, w)=(w\, v\, u)$.
\subsection{Transitive and Self Transitive oriented 3-hypergraphs}
We use the standard definition of $3$--hypergraph, to be a pair of sets
$H=(V(H),E(H))$ where $V(H)$ is the vertex set of $H$, and
the edge set of $H$ is $E(H)\subseteq {{V(H)}\choose{3}}$.
\begin{defi}
Let $H$ be a $3$--hypergraph. An \emph{orientation} of $H$ is an assignment of exactly one of the two possible cyclic orderings to each edge. An orientation of a $3$--hypergraph is called an \emph{oriented $3$--hypergraph}, and we will denote the oriented edges by $O(H)$.
\end{defi}
\begin{defi} \label{def:transitive}
An oriented $3$--hypergraph $H$ is \emph{transitive} if whenever $(u\, v\, z)$ and $(z\, v\, w) \in O(H)$ then $(u\, v\, w) \in O(H)$ (this implies $(u\, w\, z) \in O(H)$).
\end{defi}
Note that every $3$-subhypergraph of a transitive oriented $3$-hypergraph is transitive, a fact that we will use throughout the rest of the paper. Also, there is a natural correspondence between partial cyclic orders and transitive oriented $3$--hypergraphs.
A transitive oriented $3$--hypergraph, $H$, with $E(H)={\binom{{V(H)}}{{3}}}$ is called a \emph{$3$--hypertournament}. Let $TT_n^{3}$ be the oriented $3$--hypergraph with $V(TT_n^{3})=[n]$ and $E(TT_n^{3})= {\binom{{[n]}}{{3}}}$, where the orientation of each edge is the one induced by the cyclic ordering $(1\, 2\,\dots\, n)$. Clearly $TT_{n}^{3}$ is a transitive $3$--hypertournament on $n$ vertices. It is important to note that every transitive $3$--hypertournament on $n$ vertices is isomorphic to $TT_{n}^{3}$, which allows us to hereafter refer to \emph{the} transitive $3$--hypertournament on $n$ vertices. This fact is implicit in the literature on cyclic orders, and it is indeed the reason why total, cyclic, asymmetric and transitive ternary relations are called complete cyclic orders (or circles as in \cite{Alles1991}).
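
The transitivity condition of Definition~\ref{def:transitive} is easy to
check by brute force on small examples. The following Python sketch builds
$TT_n^3$ and verifies it, storing each oriented edge as the rotation of the
triple that starts with its smallest element.
\begin{verbatim}
# Sketch: build TT_n^3 and check the transitivity definition by brute force.
from itertools import combinations, permutations

def canon(t):                     # canonical rotation of a cyclic triple
    a, b, c = t
    return min([(a, b, c), (b, c, a), (c, a, b)])

def TT(n):                        # orientations induced by (1 2 ... n)
    return {t for t in combinations(range(1, n + 1), 3)}

def is_transitive(O, V):
    return all(canon((u, v, w)) in O
               for u, v, z, w in permutations(V, 4)
               if canon((u, v, z)) in O and canon((z, v, w)) in O)

print(is_transitive(TT(6), range(1, 7)))    # True
\end{verbatim}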
Given a (non oriented) $3$--hypergraph the complement is naturally defined. For an oriented $3$--hypergraph it is not clear how to define its oriented complement. However, for $H$ a spanning oriented subhypergraph of $TT_n^3$, we define its \emph{complement} as the oriented $3$--hypergraph $ \overline{H}$ with $V( \overline{H})=V(TT_n^{3})$ and $O(\overline{H})=O(TT_n^3)\setminus O(H)$.
\begin{defi}
An oriented $3$--hypergraph $H$ which is a spanning subhypergraph of $TT_n^3$ is called \emph{self-transitive} if it is transitive and its complement is also transitive.
\end{defi}
The following lemma, which we will refer to as the \emph{evenness property}, is not difficult to prove and is left to the reader.
\begin{lem}\label{eveness}
Let $H$ be a self transitive $3$--hypergraph, then all its induced $3$--subhypergraphs with four vertices necessarily have an even number of hyperedges.
\end{lem}
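
The evenness property is also easy to verify computationally; the sketch
below checks it for the hypergraph of a cyclic permutation, anticipating
the construction of the next subsection.
\begin{verbatim}
# Brute-force check of the evenness property on the hypergraph of the
# cyclic permutation (1 4 2 5 3): every 4-vertex induced subhypergraph
# must have an even number of edges.
from itertools import combinations

perm = (1, 4, 2, 5, 3)
n, pos = len(perm), {v: i for i, v in enumerate(perm)}
E = {(i, j, k) for i, j, k in combinations(sorted(perm), 3)
     if (pos[j] - pos[i]) % n < (pos[k] - pos[i]) % n}

for quad in combinations(sorted(perm), 4):
    assert sum(t in E for t in combinations(quad, 3)) % 2 == 0
print("evenness holds on all 4-vertex subhypergraphs")
\end{verbatim}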
\subsection{The oriented $3$--hypergraph associated to a cyclic permutation}
\label{sec:cyclic_permutation}
In analogy with the graph associated to a linear permutation, we now define the $3$--hypergraph associated to a cyclic permutation and study some of its properties.
A \emph{cyclic permutation} is a cyclic ordering of $[n]$. This is, an equivalence class $[\phi]$ of the set of bijections $\phi:[n] \rightarrow [n]$, in respect to the cyclic equivalence relation. As mentioned before, a cyclic permutation $[\phi]$ will be denoted by $(\phi(1)\, \phi(2)\,\ldots\,\phi(n))$, and its \emph{reversed permutation} $[\phi^{\prime }]$ corresponds to $(\phi(n)\, \phi(n-1)\, \ldots\,\phi(1))$.
Let $[\phi]$ be a cyclic permutation. Three elements $i, j, k \in [n]$, with $i< j< k$, are said to be in \emph{clock-wise order} in respect to $[\phi]$ if there is $\psi\in [\phi]$ such that $\psi^{-1}(i)<\psi^{-1}(j)<\psi^{-1}(k)$; otherwise the elements $i,j,k$ are said to be in \emph{counter-clockwise order} with respect to $[\phi]$.
\begin{defi}
The \emph{oriented $3$--hypergraph $H_{[\phi ]}$ associated to a cyclic permutation} $[\phi ]$ is the hypergraph with vertex set $V(H_{[\phi ]})=[n]$ whose edges are the triplets $\{i,j,k\}$, with $i<j<k$, which are in clockwise order in respect to $[\phi ]$, and whose edge orientations are induced by $[\phi ]$.
\end{defi}
It can be easily checked that for any cyclic permutation $[\phi]$, the associated oriented $3$--hypergraph $H_{[\phi]}$ is a transitive oriented $3$--hypergraph naturally embedded in $TT_{n}^{3}$. Moreover, the complement of $H_{[\phi ]}$ is precisely $H_{[\phi ^{\prime }]}$, where $[\phi ^{\prime }]$ is the reversed cyclic permutation of $[\phi ]$. So, for every cyclic permutation $[\phi ]$, the $3$--hypergraph $H_{[\phi ]}$ is self-transitive.
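
The following self-contained Python sketch builds $H_{[\phi]}$ for a small
cyclic permutation and confirms that both it and its complement are
transitive.
\begin{verbatim}
# Sketch: construct H_[phi] and verify self-transitivity.  Edges (i j k)
# with i < j < k in clockwise order are stored as sorted triples.
from itertools import combinations, permutations

def canon(t):
    a, b, c = t
    return min([(a, b, c), (b, c, a), (c, a, b)])

def is_transitive(O, V):
    return all(canon((u, v, w)) in O
               for u, v, z, w in permutations(V, 4)
               if canon((u, v, z)) in O and canon((z, v, w)) in O)

perm = (1, 4, 2, 5, 3)
n, pos = len(perm), {v: i for i, v in enumerate(perm)}
H = {(i, j, k) for i, j, k in combinations(sorted(perm), 3)
     if (pos[j] - pos[i]) % n < (pos[k] - pos[i]) % n}
H_bar = set(combinations(sorted(perm), 3)) - H

print(is_transitive(H, perm), is_transitive(H_bar, perm))   # True, True
\end{verbatim}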
A transitive $3$--hypergraph $H$ is called a \emph{cyclic permutation $3$--hypergraph} if there is a cyclic permutation $[\phi]$ such that $H_{[\phi]} \cong H$. Next we shall prove that the self--transitive property characterizes the class of oriented cyclic permutation $3$--hypergraphs. For a vertex $v \in V(H)$ of an oriented $3$--hypergraph $H$, the \emph{link} of $v$, denoted by $link_H (v)$, is the oriented graph with vertex and arc sets equal to: $V(link_H (v))= V(H)\setminus \{ v\}$, and $O(link_H (v))= \{ (uw) | \: \exists \: (v\,u\,w) \in O(H)\}$.
\begin{lem}
\label{link} Let $H$ be a self--transitive $3$--hypergraph. Then, $link_H (v)$ is a self transitive oriented graph, for any $v\in V(H)$.
\end{lem}
\begin{proof}
Let $(v\, v_i\, v_j)$ and $(v\, v_j\, v_k) \in O(H)$; by transitivity $(v\, v_i\, v_k) \in O(H)$. Hence, if $link_H(v)$ contains the arcs $(v_i \, v_j)$ and $(v_j \, v_k)$, then the transitivity of $H$ implies that $(v_i\, v_k)$ is also an arc of $link_H(v)$, so $link_H(v)$ is transitive. Applying the same argument to $\overline{H}$, whose link at $v$ is the complement of $link_H(v)$, shows that this complement is transitive as well.
\end{proof}
\begin{thm}
\label{thm:permutation} $H$ is an oriented cyclic permutation $3$--hypergraph if and only if $H$ is self transitive.
\end{thm}
\begin{proof}
If there is a cyclic permutation $[\phi]$ such that $H_{[\phi]} \cong H$, then clearly $H$ is self transitive.
Conversely, let $H$ be a self transitive oriented $3$--hypergraph with $n$ vertices. We proceed by induction on the number of vertices, in order to prove that $H$ is an oriented cyclic permutation $3$--hypergraph. The statement is obvious for $n=3$.
Assume $n\geq 4$, and label the vertices of $H$ by $V(H)=\{1,2,\ldots,n\}$. Consider the $3$--hypergraph obtained from $H$ by removing the vertex $n$, that is, $H\setminus \{n\}$. Since $H\setminus \{n\}$ is a self transitive $3$--hypergraph, by the induction hypothesis there is a cyclic permutation $[\varphi]$ of $[n-1]$ such that $(H\setminus \{n\})\cong H_{[\varphi]}$.
Consider now the oriented graph $link_H (n)$. By Lemma~\ref{link}, $link_H (n)$ is a self transitive oriented graph, and therefore there is a linear permutation $\psi$ such that $link_H (n)\cong H_{\psi}$.
In the remainder of the proof we will show that the cyclic equivalence class of $\psi$ is precisely $[\varphi]$, and that $[\varphi]$ can be extended to a cyclic permutation $[\phi]$ of $[n]$ in such a way that $H\cong H_{[\phi]}$.
\textbf{Claim 1.} $\psi \in [\varphi]$
Since $[\varphi]$ is a cyclic permutation, without loss of generality we may assume that $\varphi(1)=\psi(1)$. Next we will prove that $\varphi(i)=\psi(i)$ for every $i\in [n-1]$. By contradiction, let $j\in [n-1]$ be the first integer such that $\varphi(j)\neq \psi(j)$. Then, the cyclic ordering of $\{\varphi(1),\varphi(j),\psi(j)\}$ induced by $[\varphi]$ is $(\varphi(1)\,\varphi(j)\,\psi(j))$. Hence, either $(\varphi(1)\,\varphi(j)\,\psi(j))\in O(H)$ or $(\varphi(1)\,\psi(j)\,\varphi(j))\in O(H)$. Without loss of generality we may suppose the first case; then one of the following holds: $\varphi(1)< \varphi(j)< \psi(j)$ or $\psi(j)< \varphi(1)< \varphi(j)$ or $\varphi(j)< \psi(j)<\varphi(1)$. Assume first $\varphi(1)< \varphi(j)< \psi(j)$, and note that $(\varphi(j)\,\psi(j))\not\in O(link_H (n))$; thus $(n\,\psi(j)\, \varphi(j))\in O(H)$, and by transitivity $(n\,\varphi(j)\,\varphi(1))\in O(H)$, which is a contradiction since $(\varphi(1)\,\varphi(j)), (\varphi(1)\,\psi(j))\in O(link_H (n))$. The other cases follow by similar arguments, so the claim holds.
Consider now the cyclic permutation
$(\psi(1)\,\psi(2)\,\ldots\,\psi(n-1)\,n)$ and call it $[\phi]$.
\textbf{Claim 2.} $H\cong H_{[\phi]}$.
Recall $(H\setminus \{n\})\cong H_{[\varphi]}$. Since $\psi \in [\varphi]$ then the orientations of all edges in $H\setminus \{n\}$ are induced by $[\phi]$. It remains to prove that the orientations of all edges containing $n$, are induced by $[\phi]$. To see this, note that the orientations of such edges are given by $link_H (n)$, and for every $j\in [n-1]$ it happens that $\phi^{-1}(j)<\phi^{-1}(n)=n$. Thus the orientations of all edges containing $n$ are induced by $[\phi]$, which completes the proof.
\end{proof}
\subsection{Cyclic comparability $3$--hypergraphs}
\label{sec:compa}
One of the classic results in the study of permutation graphs is their characterization in terms of comparability graphs, also referred to in the literature \cite{Golumbic1980} as transitively oriented graphs. The aim of this section is to prove an equivalent result for cyclic permutations and cyclic comparability $3$--hypergraphs.
\begin{defi}
A $3$--hypergraph, $H$, is called a \emph{cyclic comparability hypergraph} if it admits a transitive orientation.
\end{defi}
\begin{lem}
\label{lem:union2}Let $H$ be a cyclic comparability $3$--hypergraph such that $\overline{H}$ is also a cyclic comparability $3$--hypergraph. Let $H_{o}$ and $\overline{H_{o}}$ be any transitive oriented $3$--hypergraphs whose underlying $3$--hypergraphs are $H$ and $\overline{H}$ respectively. Then the union $H_{o} \cup \overline{H_{o}}$ is isomorphic to $TT_n^3$.
\end{lem}
\begin{proof}
Clearly $H_{o} \cup \overline{H_{o}}$ is a complete $3$--hypergraph; thus, we only need to verify that $H_{o} \cup \overline{H_{o}}$ is transitive. Observe that transitivity in $3$--hypergraphs is a local condition; therefore it is sufficient to check that transitivity holds for every set of four vertices.
Let $F$ be any comparability $3$--hypergraph with four vertices such that $\overline{F}$ is also a comparability $3$--hypergraph. We shall prove that $F$ is transitive by giving a cyclic order of its vertices.
If $F$ has no edges, or four edges, transitivity follows. So we may assume that both $F$ and $\overline{F}$ have two edges each, since if either of them had three edges then, by the evenness condition (Lemma~\ref{eveness}), it could not be oriented transitively.
We may assume without loss of generality that $E(F)=\{\{v_i, v_j, v_k\}, \{v_i, v_j, v_l\}\}$ and $E(\overline{F})=\{\{v_i, v_k, v_l\}, \{v_j, v_k, v_l\}\}$. Then for the orientation of $F$ we may assume $O(F)=\{(v_i\; v_j\; v_k), (v_i\; v_j\; v_l) \}$ (otherwise we may relabel the vertices) and $O(\overline{F})=\{(v_i\; v_k\; v_l), (v_j\;v_k\; v_l)\}$ or $O(\overline{F})=\{(v_i\; v_l\; v_k), (v_j\; v_l\; v_k)\}$.
Assume $O(\overline{F})=\{(v_i\; v_k\; v_l), (v_j\;v_k\; v_l)\}$. Out of all six possible pairings of the four edges in $F \cup \overline{F}$, only two have opposite orders along their common pair of vertices, namely $\{(v_i\; v_j\; v_k), (v_i\; v_k\; v_l)\}$ and $\{(v_i\; v_j\; v_l), (v_j\; v_k\; v_l)\}$. By transitivity, the first pairing indicates that the orientations of the two other edges are induced from the cyclic order $(v_i\; v_j\; v_k\; v_l)$. For the case $O(\overline{F})=\{(v_i\; v_l\; v_k), (v_j\; v_l\; v_k)\}$ the argument is the same.
\end{proof}
An un-oriented $3$--hypergraph $H$ is called a \emph{cyclic permutation $3$--hypergraph} if $H$ can be oriented in such a way that the resulting oriented $3$--hypergraph is an oriented cyclic permutation $3$--hypergraph.
The following Theorem is a direct Corollary of Lemma \ref{lem:union2} and Theorem \ref{thm:permutation}.
\begin{thm}\label{thm:comparability}
A $3$-- hypergraph $H$ is a cyclic permutation $3$--hypergraph if and only if $H$ and $\overline{H}$ are cyclic comparability $3$--hypergraphs.
\end{thm}
\subsection{Total extendability of a certain class of cyclic orders}
It is a well known fact ~\cite{Alles1991, Megiddo1976} that, in contrast to
linear orders, not every cyclic order can be extended to a complete cyclic
order, which in our setting means that not every transitive $3$--hypergraph
is a subhypergraph of a transitive $3$--hypertournament.
The decision whether a cyclic order is totally extendable is known to be
NP-complete, and examples of cyclic orders that are not extendable are well
known~\cite{Alles1991, Megiddo1976}. In ~\cite{Alles1991} the authors
exhibit some classes of cyclic orders which are totally extendable.
As a direct consequence of Lemma \ref{lem:union2} we can state the following
theorem, which gives a sufficient condition for the total extendability of
cyclic orders.
\begin{thm}
\label{thm:main_nes} Let $T$ be a partial cyclic order on $X$. If the complement relation $\overline{T}$ is a cyclic order, then $T$ is totally extendable.
\end{thm}
\begin{proof}
Let $H_{T}$ and $H_{\overline T}$ be the transitive oriented
$3$--hypergraphs associated to the cyclic orders $T$ and $\overline
T$, respectively. Then by Lemma \ref{lem:union2}, $H_{T} \cup
H_{\overline T} \cong TT_{n}^{3}$, so that there is a cyclic
ordering of the elements in $X$ that induces all orientations of
edges in $H_{T}$; that is, $T$ is totally extendable.
\end{proof}
\section{Conclusions}
The proofs of all theorems on oriented $3$-hypergraphs in this paper follow naturally from the definition of transitivity and, not surprisingly, all concepts defined for oriented $3$-hypergraphs are reminiscent of the corresponding notions for oriented graphs; such evidence allows us to suggest that perhaps many parts of the theory of transitive graphs can be extended to transitive $3$-hypergraphs.\\
In particular, we are interested in the extension of the concept of perfection to hypergraphs. That was indeed the reason why we started the exploration of the subject of transitive $3$-hypergraphs with the extension of the concepts of comparability graphs and permutation graphs, which are two important classes of perfect graphs \cite{Golumbic1980}.
\medskip
\noindent The authors would like to thank the support from Centro de
Innovaci\'on Matem\'atica A.C.
\section{Introduction}\label{Intro}
\IEEEPARstart{A}{utomatic} control systems have had a wide impact in multiple fields, including finance, robotics, manufacturing, and automobiles. Decision automation has gained relatively little attention, especially when compared to decision support systems where the primary aim is to aid humans in the decision-making process. In practice, decision automation systems often do not eliminate human decision makers entirely but rather optimize decision making in specific instances where the automation system can surpass human performance. In fact, human decision makers play a very important role in the selection of models, determining the set of rules, and developing methods that automate the decisions. Nonetheless, decision automation systems remain indispensable in applications where humans are unable to make rational decisions, whether because of the sheer complexity of the system, the enormity of the set of alternatives, or the massive amount of data that must be processed.
Our focus in this paper is to develop a framework that automates decisions for post-disaster recovery of communities. Designing such a framework is ambitious given that it should ideally possess several key properties such as the ability to incorporate sources of uncertainty in the models, information gained at periodic intervals during the recovery process, current policies of the decision-maker, and multiple decision objectives under resource constraints \cite{ress2}. Our framework possesses these desired properties; in addition, our framework uses reasonable computational resources even for massive problems, has the \emph{lookahead} property, and does not suffer from \emph{decision fatigue}.
Civil infrastructure systems, including building infrastructure, power, transportation, and water networks, play a major role in the welfare of any community. The interdependence between the post-hazard recovery of these networks and community welfare, including the issue of food security, has been studied in \cite{iEMSs,icasp,saeedinfra}. In this study, we focus on electric power networks (EPNs) because almost all other infrastructure systems rely heavily on the availability of this network. A stochastic model characterizes the damage to the components of the EPN after an earthquake; similarly, the repair times associated with the repair actions are also given by a stochastic model.
The assignment of limited resources, including repair crews composed of humans and machines, to the damaged components of the EPN after a hazard can be posed as the generalized assignment problem (as defined in \cite{nphard}), which is known to be \mbox{NP-hard}. Several heuristic methods have been demonstrated in the literature to address this problem \cite{heuristic}.
\textbf{Our Contribution:}
Instead of these classical methods, we employ Markov decision processes (MDPs) for the representation and solution of our stochastic decision-making problem, which naturally extends its appealing properties to our framework. In our framework, the solution to the assignment problem formulated as an MDP is computed in an \emph{online} fashion using an approximate dynamic programming method known as \emph{rollout} \cite{online,rollout}. This approach addresses the \emph{curse of dimensionality} associated with large state spaces \cite{adp}. Furthermore, in our framework, the massive action space is handled by using a linear belief model, where a small number of candidate actions are used to estimate the parameters in the model based on a least-squares solution. Our method also employs adaptive sampling inspired by solutions to multi-armed bandit problems to carefully expend the limited simulation budget---a limit on the simulation budget is often a constraint while dealing with large real-world problems. Our approach successfully addresses the goal of developing a technique to deal with problems where the state and action spaces of the MDP are jointly exceptionally large.
\section{THE ASSIGNMENT PROBLEM}
\subsection{Problem Setup: The Gilroy Community}
The description in this section comes mainly from \cite{saeed}; we give a complete description here for the sake of being self-contained. We describe the EPN of Gilroy, California, which provides the context for our assignment problem. We also briefly discuss the earthquake model, the EPN restoration model, and the computational challenges associated with the assignment problem.
\subsubsection{Network Characterization}\label{test1}
Gilroy is a moderately-sized growing city located approximately 50 km south of the city of San Jose with a population of 48,821 at the time of the 2010 census \cite{Gilroy1}. The study area is divided into 36 gridded rectangles to define the community and encompasses a 41.9~km\textsuperscript{2} area of Gilroy with a population of 47,905. The average number of people per household in Gilroy in 2010 was 3.4, greater than the state and county averages \cite{harnish}. A heat map of the population in the grid is shown in Fig.~\ref{fig1} \cite{ress1}. This model has a resolution that is sufficient to study the methodology at the community level under hazard events. The community is susceptible to severe earthquakes on the San Andreas Fault (SAF).
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Population}
\caption{Map of Gilroy's population over the defined grid}
\label{fig1}
\end{figure}
The modeled EPN of Gilroy within the defined boundary is shown in Fig.~\ref{fig2}. A 115 kV transmission line supplies the Llagas power substation, which provides electricity to the distribution system. The distribution line components are placed at intervals of 100~m and modeled from the power substation to the centers of the urban grid rectangles. If a component of the EPN is damaged, then along with the damaged EPN component, all the EPN components dependent on the damaged component are rendered nonfunctional or unavailable. If at least one EPN component serving a particular gridded rectangle is unavailable, the entire population of the gridded rectangle does not have electricity.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{EPN}
\caption{The modeled electric power network of Gilroy}
\label{fig2}
\end{figure}
\subsubsection{Seismic Hazard Simulation}\label{test2}
In this study, we assume that a seismic event of moment magnitude $Mw=6.9$ occurs at the closest points on the SAF projection to downtown Gilroy with an epicentral distance of approximately 12 km\cite{saeed}; this event is similar to the devastating Loma Prieta earthquake of 1989 near Gilroy \cite{Loma}. Ground motion prediction equations (GMPEs) determine the conditional probability of exceeding the ground motion intensity at specific geographic locations within Gilroy given a fault rupture mechanism and epicentral distance for the earthquake \cite{saeedinfra}. We use the Abrahamson et al. \cite{abrahamson} GMPE to estimate the intensity measures (peak ground acceleration) throughout Gilroy.
\subsubsection{Fragility and Restoration Assessment of EPN}\label{test3}
Based on the ground-motion intensities using the above seismic model, we use seismic fragility curves presented in HAZUS-MH\cite{hazus} to calculate the damages to the components of the EPN.
Repair crews, replacement components, and equipment are considered as available units of resources to restore the damaged components of the EPN following the hazard. One unit of resource (RU) is required to repair each damaged component \cite{ouyang}. To restore the EPN, we use the restoration times based on exponential distributions synthesized from HAZUS-MH, as summarized by expected repair times in Table~\ref{T1}.
\begin{table}[h]
\caption{Expected repair times (Unit: days)}\label{T1}
\resizebox{\linewidth}{!}{
\begin{tabular}{llllll}
\hline
&Damage States\\
\hline
Component & Undamaged & Minor & Moderate & Extensive & Complete \\
\hline
Electric sub-station &0 & 1 & 3 & 7 & 30 \\
Transmission line component &0& 0.5 & 1 & 1 & 2 \\
Distribution line component &0& 0.5 & 1 & 1 & 1 \\
\hline
\end{tabular}
}
\end{table}
\subsubsection{Challenges}\label{chal}
The total number of modeled EPN components is equal to 327, denoted by $L$. On average, about 60\% of these components are damaged after the simulated earthquake event. At each decision epoch $t=0,1,2,\ldots\,$, the decision maker has to select the assignment of RUs to the damaged components; each component cannot be assigned more than one RU. Note that the symbol $t$ is used to denote a discrete index representing the decision epoch and is not to be confused with the actual time for recovery. Let the total number of damaged components at any $t$ be represented by $M_t$, and let the total number of RUs be equal to $N$, where $N \ll M_t$ (typically, the number of resource units for repair is significantly less than the number of damaged components). Then, the total number of possible choices for the assignment at any $t$ is ${M_t \choose N}$. For 196 damaged components and 29 RUs (15\% of the damaged components), the number of possible choices at the first decision epoch is approximately $10^{34}$. In addition, all RUs are reassigned when one component gets repaired, so the total number of choices at the second decision epoch is ${195 \choose 29} \approx 10^{34}$.
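To put these counts in perspective, the binomial coefficients quoted above are easy to check directly; a minimal Python snippet (ours) follows.
\begin{verbatim}
from math import comb

M, N = 196, 29                   # damaged components, resource units
print(f"{comb(M, N):.3e}")       # first decision epoch, on the order of 1e34
print(f"{comb(M - 1, N):.3e}")   # after one repair, still on the order of 1e34
\end{verbatim}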
Note that the repair time associated with a damaged component will depend on the level of damage, as determined from the fragility analysis described in Section~\ref{test3}. This repair time is random and is exponentially distributed with expected repair times shown in Table~\ref{T1}. Therefore, the outcomes of the repair actions are also random. It is difficult for a human decision maker to anticipate the outcome of repair actions when the outcomes are uncertain; therefore, planning with foresight is difficult. In fact, the problem is difficult to such an extent that the assignment of RUs at the first decision epoch is itself challenging. Further, an additional layer of complexity arises because the level of damage at each location is itself specified by a probabilistic model \cite{hazus}.
Because of the extraordinarily large number of choices, stochastic initial conditions, and the stochastic behavior of the outcome of the repair actions, our problem has a distinct flavor compared to the generalized assignment problem, and the classical heuristic solutions are not well-suited to this problem. In addition to dealing with these issues, the decision maker has to incorporate the dynamics and the sequential nature of decision making during recovery; thus, our problem represents a stochastic sequential decision-making problem. Last, we would also like our solution to admit most of the desirable properties previously discussed in Section~\ref{Intro}. Our framework addresses \emph{all} these issues.
\subsection{Problem Formulation}
In this section, we briefly discuss MDPs and the simulation-based representation pertaining to our problem, previously described in \cite{case}, and repeated here for the sake of continuity and completeness. We then specify the components of the MDP for our problem.
\subsubsection{MDP Framework and Simulation-Based Representation}
An MDP is a controlled stochastic dynamical process, widely used to solve disparate decision-making problems. In the simplest form, it can be represented by the 4-tuple $\langle S,A,T,R \rangle$. Here, $S$ represents the set of \emph{states}, and $A$ represents the set of \emph{actions}. The state makes a transition to a new state at each decision epoch (represented by discrete-index $t$) as a result of taking an action. Let $s,s' \in S$ and $a \in A$; then $T$ is the state transition function, where $T(s,a,s')=P(s'\mid s,a)$ is the probability of transitioning to state $s'$ after taking action $a$ in state $s$, and $R$ is the reward function, where $R(s,a,s')$ is the reward received after transitioning from $s$ to $s'$ as a result of action $a$. In our problem, $|S|$ and $|A|$ are finite; $R$ is real-valued and a stochastic function of $s$ and $a$ (deterministic function of $s$, $a$, and $s'$). Implicit in our presentation are also the following assumptions \cite{Puterman}: First-order Markovian dynamics (history independence), stationary dynamics (transition function is not a function of absolute time), and full observability of the state space (outcome of an action in a state might be random, but the state reached is known after the action is completed). The last assumption simplifies our presentation in that we do not need to take actions specifically to reinforce or modify our belief about the underlying state. We assume that recovery actions (decisions) can be taken indefinitely as needed, e.g., until all the damaged components are repaired (infinite-horizon planning). In this setting, we define a \emph{stationary policy} as a mapping $\pi: S \rightarrow A$. Our objective is to find an optimal policy $\pi^*$. For the infinite-horizon case, $\pi^*$ is defined as
\begin{equation}\label{opt}
\pi^*=\arg\max_{\pi} V^{\pi}(s_0),
\end{equation}
where
\begin{equation}\label{val}
V^\pi(s_0)=E\left\lbrack\sum_{t=0}^{\infty}\gamma^{\,t}R(s_t,\pi(s_t),s_{t+1})\middle| s_0\right\rbrack
\end{equation}
is called the \emph{value function} for a fixed policy $\pi$, and $\gamma \in (0,1]$ is the discount factor. Note that in~\eqref{opt} we maximize over policies $\pi$, where at each decision epoch $t$ the action taken is $a_t=\pi(s_t)$. Stationary optimal policies are guaranteed to exist for the discounted infinite-horizon optimization criterion \cite{howard}. To summarize, our framework is built on discounted infinite-horizon discrete-time MDPs with finite state and action spaces, though the role of $\gamma$ is somewhat tangential in our application.
We now briefly explain the simulation-based representation of an MDP \cite{Fern}. Such a representation serves well for large state, action, and outcome spaces, which is a characteristic feature of many real-world problems; it is infeasible to represent $T$ and $R$ in a simple matrix form for such problems. A simulation-based representation of an MDP is a 4-tuple $\langle S,A,\tilde R,\tilde T \rangle$, where $S$ and $A$ are as before. Here, $\tilde R$ is a stochastic real-valued function that stochastically returns a reward when input $s$ and $a$ are provided, where $a$ is the action applied in state $s$; $\tilde T$ is a \emph{simulator}, which stochastically returns a state sample $s'$ when state $s$ and action $a$ are provided as inputs. We can think of $\tilde R$ and $\tilde T$ as callable library functions that can be implemented in any programming language.
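As a concrete toy illustration (ours, not the EPN model), the pair $\tilde T$, $\tilde R$ can be as simple as two callable Python samplers:
\begin{verbatim}
import random

def T_sim(s, a):
    # sample s' ~ P(.|s,a): the action advances the state
    # with probability 0.8, otherwise the state is unchanged
    return min(s + a, 3) if random.random() < 0.8 else s

def R_sim(s, a, s_next):
    # deterministic function of (s, a, s'): reward on reaching state 3
    return 1.0 if s_next == 3 else 0.0

s = 0
s_next = T_sim(s, 1)
print(s_next, R_sim(s, 1, s_next))
\end{verbatim}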
\subsubsection{MDP Specification for EPN Recovery Problem}\label{probform}
\hfill\\
\textbf{States:} Let $s_t$ denote the state of our MDP at discrete decision epoch $t$: $s_t=(s_t^1,\ldots,s_t^{L}, \rho_t^{1},\ldots,\rho_t^{L})$, where $s_t^l$ is the damage state of the $l$th EPN component (the possible damage states are Undamaged, Minor, Moderate, Extensive, and Complete, as shown in Table~\ref{T1}), and $\rho_t^{l}$ is the remaining repair time associated with the $l$th component, where $l \in \{1,\ldots,L\}$. The state transition, and consequently the calculation of $\rho_t^{l}$ and $s_t^{l}$ at each $t$, is explained in the description of simulator $\tilde T$ below.\\
\textbf{Actions:} Let $a_t$ denote the repair action to be carried out at decision epoch $t$: $a_t=(a_t^1, \ldots, a_t^{L})$, and $a_t^l \in \{0,1\}~\forall l,t$. When $a_t^l=0$, no repair work is to be carried out at $l$th component. Conversely, when $a_t^l=1$, repair work is carried out at the $l$th component. Note that $\sum_{l}a_t^l = N$, and $a_t^l=0$ for all $l$ where $s_t^l$ is equal to Undamaged. Let $D_t$ be the set of all damaged components before a repair action $a_t$ is performed. Let $\mathcal{P}(D_t)$ be the powerset of $D_t$. The total number of possible choices at any decision epoch $t$ is given by $|\mathcal{P}_N(D_t)|$, where
\begin{equation}
\mathcal{P}_N(D_t)=\{C \in \mathcal{P}(D_t): |C|=N\},
\end{equation}
$|D_t|=M_t$, and $|\mathcal{P}_N(D_t)|= {M_t \choose N}$.\\
\textbf{Initial State:} The stochastic damage model, previously described in Sections~\ref{test2} and \ref{test3}, is used to calculate the initial damage state $s_0^{l}$. Once the initial damage states of the EPN components are known, depending on the type of the damaged EPN component, the repair times $\rho_0^{l}$ associated with the damaged components are calculated using the mean restoration times provided in Table~\ref{T1}.\\
\textbf{Simulator $\tilde T$:} Given $s_t$ and $a_t$, $\tilde T$ gives us the new (stochastic) state $s_{t+1}$. We define a \emph{repair completion} as the instant when at least one of the locations where repair work is carried out is fully repaired. The decision epochs occur at these repair-completion times. A damaged component is fully repaired when the damage state of the component changes from any of the four damage states (except the Undamaged state) in Table~\ref{T1} to the Undamaged state. Let us denote the \emph{inter-completion} time by $r_t$, which is the time duration between decision epochs $t$ and $t+1$, and let $\Delta_t=\{\rho_t^l : a_t^l=1,\ \rho_t^l>0\}$ be the set of remaining repair times at the locations where repair work is carried out. Then $r_t=\min\Delta_t$, and $\rho_{t+1}^l = \max(\rho_t^l - r_t, 0)$ for every $l$ with $a_t^l=1$, whereas $\rho_{t+1}^l = \rho_t^l$ otherwise. Note that it is possible in principle for the repair work at two or more locations to be completed simultaneously, though this virtually never happens in simulation or in practice. When a damaged component is in any of the Minor, Moderate, Extensive, or Complete states, it can only transition directly to the Undamaged state. Instead of modeling the effect of repair via inter-transitions among damage states, the same effect is captured by the remaining repair time $\rho_t$.
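A minimal Python sketch of one such transition (our own rendering of the rules above, with made-up remaining repair times):
\begin{verbatim}
def step(rho, action):
    # rho[l]: remaining repair time of component l (0 = undamaged);
    # action[l]: 1 if an RU is assigned to component l
    worked = [rho[l] for l in range(len(rho))
              if action[l] == 1 and rho[l] > 0]
    r_t = min(worked)                       # inter-completion time
    new_rho = [max(rho[l] - r_t, 0) if action[l] == 1 else rho[l]
               for l in range(len(rho))]
    return r_t, new_rho

# -> (1.0, [2.0, 0.0, 0.0, 6.0]): the second component completes
print(step([3.0, 1.0, 0.0, 7.0], [1, 1, 0, 1]))
\end{verbatim}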
Once a damaged component is restored to the Undamaged state, the RUs previously assigned to it become available for reassignment to other damaged components. Moreover, the RUs at remaining locations, where repair work is unfinished, are also available for reassignment---the repair of a component is \emph{preemptive}. It is also possible for a RU to remain at its previously assigned unrepaired location if we choose so. Because of this reason, preemption of repair work during reassignment is not a restrictive assumption; on the contrary, it allows greater flexibility to the decision maker for planning. Preemptive assignment is known to be particularly useful when an infrastructure system is managed by a central authority, an example of which is EPN \cite{ress2}.
Even if the same assignment is applied repeatedly to the same system state (let us call this the \emph{current} system state), the system state at the subsequent decision epoch could be different because different components might be restored in the current system state, owing to random repair times; i.e., our simulator $\tilde T$ is stochastic. When $M_t$ eventually becomes less than or equal to $N$ because of the sequential application of the repair actions (say at decision epoch $t_a$), the extra RUs are retired so that we have $M_t=N~\forall\, t\geq t_a+1$, and the assignment problem is trivial. The evolution of the state of the community as a result of the nontrivial assignments is therefore given by $(s_0,\ldots,s_{t_a})$.\\
\textbf{Rewards:} We define two \emph{reward functions} corresponding to two different objectives:
In the first objective, the goal is to minimize the days required to restore electricity to a certain fraction ($\zeta$) of the total population ($p$); recall that for our region of study in Gilroy, $p=47905$. We capture this objective by defining the corresponding reward function as follows:
\begin{equation}\label{rew1}
R_1(s_t,a_t,s_{t+1})= r_t,
\end{equation}
where we recall that $r_t$ is the inter-completion time between the decision epochs $t$ and $t+1$.
Let $\hat t_{c}$ denote the decision epoch at which the outcome of repair action $a_{\hat t_{c}-1}$ results in the restoration of electricity to $\zeta \cdot p$ number of people. The corresponding state reached resulting from action $a_{\hat t_{c}-1}$ is $s_{\hat t_{c}}$, called the \emph{goal state} for the first objective.
In the second objective, the goal is to maximize the sum (over all the discrete decision epochs $t$) of the product of the total number of people with electricity ($n_t$) after the completion of a repair action $a_t$ and the \emph{per-action time}, defined as the time required ($r_t$) to complete the repair action $a_t$, divided by the total number of days ($t_{\text{tot}}$) required to restore electricity to $p$ people. We capture this objective by defining our second reward function as:
\begin{equation}\label{rew2}
R_2(s_t,a_t,s_{t+1})=\frac{n_t \cdot r_t}{t_{\text{tot}}}.
\end{equation}
The terms in \eqref{rew2} have been carefully selected so that the product of the terms $n_t$ and $r_t/t_{\text{tot}}$ captures the \emph{impact} of automating a repair action at each decision epoch $t$, in the spirit of maximizing electricity benefit in a minimum amount of time. Let $\tilde t_{c}$ denote the decision epoch at which the outcome of repair action $a_{\tilde t_{c}-1}$ results in the restoration of electricity to the entire population. Then the corresponding goal state is $s_{\tilde t_{c}}$.
Note that both $\hat t_{c}$ and $\tilde t_{c}$ need not belong to the set $\{0,\ldots, t_a-1\}$, i.e., both $s_{\hat t_{c}}$ and $s_{\tilde t_{c}}$ need not be reached only with a nontrivial assignment. Also, note that our reward function is stochastic because the outcome of each action is random.\\
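For concreteness, with made-up numbers (ours): if a repair action takes $r_t=2.5$ days, leaves $n_t=30000$ people with electricity, and the full restoration takes $t_{\text{tot}}=120$ days, then \eqref{rew1} gives $R_1=2.5$ while \eqref{rew2} gives $R_2=30000\cdot 2.5/120=625$.\\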
\textbf{Discount factor $\gamma$:} A natural consequence of sequential decision making is the problem of \emph{intertemporal choice} \cite{intertemporal}. The problem consists in balancing the rewards and costs at different decision epochs so that the uncertainty in the future choices can be accounted for. To deal with the problem, the MDP model, specifically for our formulation, accommodates a discounted utility, which has been the preferred method of tackling this topic for over a century. In this study, the discount factor $\gamma$ is fixed at 0.99. We have selected a value closer to one because of the use of sophisticated stochastic models described in Sections~\ref{test2} and \ref{test3}; the uncertainty in the outcome of the future choices is modeled precisely via these models, and therefore we can evaluate the value of the decisions several decision-epochs in the future accurately to estimate the impact of the current decision. In our framework, it is possible to select a value closer to zero if the decision automation problem demands the use of simpler models. Moreover, the discounting can be done based on $r_t$---the \emph{real} time required for repair in days (the inter-epoch time)---rather than the number of decision epochs, but this distinction is practically inconsequential for our purposes because of our choice of $\gamma$ being very close to one.
Next, we highlight the salient features of our MDP framework; in particular, we discuss the successful mitigation of the challenges previously discussed in Section~\ref{chal}.
Recall that we have a probability distribution for the initial damage state of the EPN components for a simulated earthquake. We generate multiple samples from this distribution to initialize $s_0$ and optimize the repair actions for each of the initial states separately. The outcomes of the optimized repair actions for each initial state constitute a distinct stochastic unfolding of recovery events (recovery path or recovery trajectory). We average over these recovery paths to evaluate the performance of our methods. In our framework, as long as sufficient samples (with respect to some measure of dispersion) are generated, we can appropriately deal with the probabilistic damage-state model.
Our sequential decision-making formulation also includes modeling the uncertainty in the outcome of repair actions. Thus, our framework can handle both stochastic initial conditions and stochastic repair actions.
We have formulated the impact of the current decisions on the future choices with exponential discounting. In addition, our sequential decision-making framework addresses the issue of making restoration decisions in stages, where feedback (information) gathered at each stage can play an important role in successive decision making. This is essentially a closed-loop design to compute decisions at each decision epoch.
Finally, we have defined the second reward function to account for multiple objectives (benefit of electricity ($n_t$) and per-action repair time ($r_t/t_{\text{tot}}$)) without relaxing the constraint on the number of resources.
In the next section, we address the computational difficulties associated with solving the problem, show how to account for the current preferences and policies of the decision maker, and discuss the lookahead property.
\section{PROBLEM SOLUTION}
\subsection{MDP Solution: Exact Methods}\label{sol}
A solution to an MDP is an optimal policy $\pi^*$. There are several methods to exactly compute $\pi^*$; here, we discuss the \emph{policy iteration} algorithm because it bears some relationship with the \emph{rollout} method, which we describe later.
Suppose that we have access to a nonoptimal policy $\pi$. The value function for this policy $\pi$ in \eqref{val} can be written as
\begin{equation}\label{bellu}
V^\pi(s)= R(s,\pi(s))+\gamma \sum_{s'} P(s'\mid s,\pi(s))\cdot V^\pi(s')~\forall s \in S,
\end{equation}
where $V^\pi$ can be calculated iteratively using \emph{Bellman's update equation} or by solving a linear program \cite{belllin}. This calculation of $V^\pi$ is known as the policy \emph{evaluation} step of the policy iteration algorithm.
The $Q$ value function of policy $\pi$ is given by
\begin{equation}\label{Qval}
Q_{\pi}(s,a)=R(s,a)+\gamma \sum_{s'} P(s'\mid s,a)\cdot V^\pi(s'),
\end{equation}
which is the expected discounted reward in the future after starting in some state $s$, taking action $a$, and following policy $\pi$ thereafter.
An improved policy $\pi'$ can be calculated as
\begin{equation}\label{roll}
\pi'(s_t)=\arg\max_{a_t}Q_\pi(s_t,a_t).
\end{equation}
The calculation of an improved policy in \eqref{roll} is known as the policy \emph{improvement} step of the policy iteration algorithm. Even if the policy $\pi'$ defined in \eqref{roll} is nonoptimal, it is a \emph{strict} improvement over $\pi$ \cite{howard}. This result is called the \emph{policy improvement theorem}. Note that the improved policy $\pi'$ is generated by solving, at each state $s$, an optimization problem with $Q_\pi(s,\cdot)$ as the objective function. In the policy iteration algorithm, to compute the optimal policy $\pi^*$, the policy evaluation and improvement steps are repeated iteratively until the policy improvement step does not yield a strict improvement.
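To make the two steps concrete, the following self-contained Python sketch (toy transition and reward numbers of our own, not the EPN model) runs policy iteration on a two-state, two-action MDP, with evaluation by the linear solve implied by \eqref{bellu} and improvement by the maximization in \eqref{roll}.
\begin{verbatim}
import numpy as np

gamma = 0.99
# P[a][s][s'] and R[s][a] for a 2-state, 2-action toy MDP
P = np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])

pi = np.zeros(2, dtype=int)
while True:
    # evaluation: solve V = R_pi + gamma * P_pi V
    P_pi = P[pi, np.arange(2)]
    R_pi = R[np.arange(2), pi]
    V = np.linalg.solve(np.eye(2) - gamma * P_pi, R_pi)
    # improvement: greedy with respect to Q(s, a)
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    new_pi = Q.argmax(axis=1)
    if np.array_equal(new_pi, pi):
        break
    pi = new_pi
print(pi, V)
\end{verbatim}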
Unfortunately, algorithms to compute the exact optimal policy are intractable for even moderate-sized state and action spaces. Each iteration of the policy evaluation step requires $\mathcal{O}(|S|^3)$ time using a linear program and $\mathcal{O}(|S||A|)$ time using Bellman's update for a given $\pi$.\footnote{If the policy evaluation step is done using the Bellman's update with a given $\pi$, instead of solving a linear program, the algorithm is called a \emph{modified} policy iteration; conventionally, the term \emph{policy iteration} is used only when the policy evaluation step is performed by solving a linear program.} In the previous example from Section~\ref{chal}, where the total number of damaged components after the initial shock is equal to 196, for the five damage states in Table~\ref{T1} and two repair actions (repair and no-repair), $|S|=5^{196}$ and $|A|=2^{196}$. Note that our state and action spaces are jointly massive. In our case, and for other large real-world problems, calculating an exact solution is practically impossible; even enumerating and storing these values in a high-end supercomputer equipped with state-of-the-art hardware is impractical.
\subsection{Rollout: Dealing with Massive $S$}
We now motivate the rollout algorithm \cite{rollout} in relation to our simulation-based framework and the policy iteration algorithm.
When dealing with large $S$ and $A$, approximation techniques have to be employed given the computational intractability of the exact methods. A general framework of using approximation within the policy iteration algorithm is called \emph{approximate policy iteration}---rollout algorithms are classified under this framework \cite{lagoudakis2003reinforcement}. In rollout algorithms, usually the policy evaluation step is performed approximately using Monte Carlo sampling and the policy improvement step is exact. The policy improvement step is typically exact, at some computational cost, because approximating the policy improvement step requires the use of sophisticated techniques tailored to the specific problem being solved by rollout to avoid poor solution quality. A novel feature of our work is that we approximate both the policy improvement and policy evaluation steps. The approximation to the policy improvement step is explained in Section~\ref{lin}.
The policy evaluation step is approximated as follows. An implementable (in a programming sense) stochastic function (simulator) $SimQ(s_t,a_t,\pi,h)$ is defined in such a way that its expected value is $Q_\pi(s_t,a_t,h)$, where $Q_\pi(s_t,a_t,h)$ denotes a finite-horizon approximation of $Q_\pi(s_t,a_t)$, and $h$ is a finite number representing horizon length. In the rollout algorithm, $Q_\pi(s_t,a_t,h)$ is calculated by simulating action $a_t$ in state $s_t$ and thereafter following $\pi$ for another $h-1$ decision epochs, which represents the approximate policy evaluation step. This is done for candidate actions $a_t \in A(s_t)$, where $A(s_t)$ is the set of all the possible actions in the state $s_t$. A finite-horizon approximation $Q_\pi(s_t,a_t,h)$ is unavoidable because, in practice, it is of course impossible to simulate the system under policy $\pi$ for an infinite number of epochs. Recall, however, that $V^\pi(s_t)$, and consequently $Q_\pi(s_t,a_t)$, is defined over the infinite horizon. It is easy to show the following result \cite{Fern}:
\begin{equation}\label{Qapprox}
\left | Q_\pi(s_t,a_t)- Q_\pi(s_t,a_t,h)\right |\leq\frac{\gamma^{\,h}R_{\text{max}}}{1-\gamma},
\end{equation}
where $R_{\text{max}}$ is the largest value of the reward function (either $R_1$ or $R_2$).
The approximation error in \eqref{Qapprox} reduces exponentially fast as $h$ grows. Therefore, the $h$-horizon calculation appropriately approximates the infinite-horizon version, for we can always choose $h$ sufficiently large such that the error in \eqref{Qapprox} is arbitrarily small. The algorithm for rollout and the simulator are presented in Algorithms~\ref{rollout} and \ref{sim}, respectively, where $\alpha=|A(s_t)|$, $a_{t,i} \in A(s_t)$ (here $i \in \{1,\ldots,\alpha\}$), and $\beta$ is the total number of samples available to estimate $Q_\pi(s_t,a_t,h)$. Algorithm~\ref{rollout} is also called a \emph{uniform} rollout algorithm because $\beta$ samples are allocated to each action $a_t$ in $A(s_t)$ uniformly. In essence, rollout uses Monte-Carlo simulations in the policy evaluation step to calculate approximate $Q$ values; the quality of the approximation is often practically good enough even for small $h$.
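To give a feel for \eqref{Qapprox} with numbers of our choosing: for $\gamma=0.99$ and rewards normalized so that $R_{\text{max}}=1$, a horizon of $h=1000$ bounds the approximation error by $0.99^{1000}/0.01 \approx 4.3\times 10^{-3}$, whereas $h=500$ only guarantees an error of about $0.66$.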
\begin{algorithm}
\caption{\textbf{Uniform\_Rollout}($\pi, h, \beta, s_t, A(s_t)$)}
\label{rollout}
\begin{algorithmic}
\For{$i=1$ to $\alpha$}
\For{$j=1$ to $\beta$}
\State $ Q^{i,j} \gets \textbf{SimQ}(s_t,a_{t,i},\pi,h)$ \Comment{See algorithm 2}
\EndFor
\State $ Q_t(i) \gets \emph{Average}(Q^{i,j}$) \Comment{With respect to $j$}
\EndFor
\State $k \gets \arg\max_{i} Q_t(i)$
\State \Return $a_{t,k}$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Simulator \textbf{SimQ}$(s_t, a_{t,i}, \pi, h)$}
\label{sim}
\begin{algorithmic}
\State $t'=0$
\State $s'_0\gets s_t$
\State $s'_{t'+1}\gets \tilde T(s'_{t'},a_{t,i})$
\State $r \gets \tilde R(s'_{t'},a_{t,i},s'_{t'+1})$
\For{$\lambda=1$ to $h-1$}
\State $s'_{t'+1+\lambda} \gets \tilde T(s'_{t'+\lambda},\pi(s'_{t'+\lambda}))$
\State $r\gets r+ \gamma^{\,\lambda}\tilde R(s'_{t'+\lambda},\pi(s'_{t'+\lambda}),s'_{t'+1+\lambda})$
\EndFor
\State \Return $r$
\end{algorithmic}
\end{algorithm}
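For illustration, a direct Python transcription of Algorithms~\ref{rollout} and~\ref{sim} (ours, run on the toy simulation-based MDP sketched earlier, with an arbitrary random base policy) looks as follows:
\begin{verbatim}
import random

GAMMA, GOAL = 0.99, 5

def T_sim(s, a):
    return min(s + a, GOAL) if random.random() < 0.8 else s

def R_sim(s, a, s2):
    return 1.0 if s2 == GOAL else 0.0

def base_policy(s):
    return random.choice([0, 1])        # deliberately weak base policy

def sim_q(s, a, h):                      # Algorithm 2
    s2 = T_sim(s, a)
    total, disc = R_sim(s, a, s2), GAMMA
    for _ in range(h - 1):
        a2 = base_policy(s2)
        s3 = T_sim(s2, a2)
        total += disc * R_sim(s2, a2, s3)
        s2, disc = s3, disc * GAMMA
    return total

def uniform_rollout(s, actions, h=20, beta=200):   # Algorithm 1
    q = {a: sum(sim_q(s, a, h) for _ in range(beta)) / beta
         for a in actions}
    return max(q, key=q.get)

print(uniform_rollout(0, actions=[0, 1]))  # almost always returns 1
\end{verbatim}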
Rollout fits well in the paradigm of online planning. In online planning, the optimal action is calculated only for the current state $s_t$, reducing the computational effort associated with a large state space. Similarly, in our problem, we need to calculate repair actions for the current state of the EPN without wasting computational resources on computing repair actions for the states that are never encountered during the recovery process. Therefore, the property of online planning associated with Algorithm~\ref{rollout} is important for recovery, and even if the policy $\pi$ (called the \emph{base policy} in the context of Algorithm~\ref{rollout}) is applied repeatedly (``rolled out'') for $h-1$ decision epochs, we focus only on the \emph{encountered} states as opposed to dealing with \emph{all} the possible states (cf., \eqref{bellu}). In essence, for the recovery problem, rollout can effectively deal with large sizes of the state space because the calculation of the policy is amortized over time.
Consider the following example. In the context of online planning, for the sake of argument suppose that the action space has only a single action. Even for such a superficially trivial example, the outcome space can be massive. However, the representation of the problem in our framework limits the possible outcomes for any $(s,a)$ pair to $N$, bypassing the problem with the massive outcome space.
We can use existing policies of expert human decision makers as the base policy in the rollout algorithm. The ability of rollout to incorporate such policies is reflected by its interpretation as one step of policy iteration, which itself starts from a nonoptimal policy $\pi$. In fact, rollout as described here is a ``one-step lookahead'' approach
(here, one-step lookahead means one application of policy improvement) \cite{rollout}. Despite the stochastic nature of the recovery problem, the \emph{uniform} rollout algorithm (as defined by Algorithm~\ref{rollout}) computes the expected future impact of every action to determine the optimized repair action at each $t$. Because the policy evaluation step is approximate, rollout cannot guarantee a \emph{strict} improvement over the base policy; however, the solution obtained using rollout is never worse than that obtained using the base policy \cite{rollout} because we can always choose the value of $h$ and $\beta$ such that the rollout solution is no worse than the base policy solution \cite{Dimitrakakis2008b}. In practice, compared to the \emph{accelerated policy gradient} techniques, rollout requires relatively few simulator calls (Algorithm~\ref{sim}) to compute equally good near-optimal actions \cite{accelerate}.
\subsection{Linear Belief Model: Dealing with Massive $A$}\label{lin}
The last remaining major bottleneck with the rollout solution proposed above is that for any state $s_t$, to calculate the repair action, we must compute the $\arg\max$ of the $Q$ function at $s_t$. This involves evaluating the $Q$ values for candidate actions and searching over the space of feasible actions. Because of online planning, we no longer deal with the entire action space $A$ but merely $A(s_t)$. For the example previously discussed in Section~\ref{chal}, even though this is a reduction from $2^{196}$ to ${196\choose29}$, the required computation after the reduction remains substantial.
Instead of rolling out all $a_t \in A(s_t)$ exhaustively, we train a set of parameters of a linear belief model (explained below) based on a small subset of $A(s_t)$, denoted by $\tilde A(s_t)$. The elements of $\tilde A(s_t)$, denoted by $\tilde a_t$, are chosen randomly, and the size of the set $\tilde A(s_t)$, denoted by $\tilde \alpha$, is determined in accordance with the simulation budget available at each decision epoch $t$. The simulation budget $B$ at each decision epoch will vary according to the computational resources employed and the run-time of Algorithm~\ref{sim}. Thereafter, $a_t$ is calculated using the estimated parameters of the linear belief model.
Linear belief models are popular in several fields, especially in drug discovery \cite{free}. Given an action $\tilde a_{t,i}$ selected from $\tilde A(s_t)$, the linear belief model can be represented as
\begin{equation}\label{linsum}
\tilde Q^{i,j} = \sum_{n=1}^{N}\sum_{m=1}^{M_t} \mathbf{X}_{mn}\cdot \Theta_{mn} + \eta,
\end{equation}
where
\begin{equation}\label{mod}
\mathbf{X}_{mn}=
\begin{cases}
1 & \text{if \textit{n}th RU is assigned to \textit{m}th location}\\
0 & \text{otherwise,}
\end{cases}
\end{equation}
$i \in \{1,\ldots,\tilde\alpha\}$, $j \in \{1,\ldots,\beta\}$, $\tilde Q^{i,j}$ are the $Q$ values corresponding to $\tilde a_{t,i}$ obtained with Algorithm~\ref{sim}, and $\eta$ represents noise.
Let $\tilde Q^i=\frac{1}{\beta}\sum_{j=1}^{\beta}\tilde{Q}^{i,j}$.
In this formulation, each parameter $\Theta_{mn}$ additively captures the impact on the $Q$ value of assigning a RU (indexed by $n$) to a damaged component (indexed by $m$). In particular, the contribution of each parameter is assumed to be independent of the presence or absence of the other parameters (see the discussion at the end of this section). Typically, linear belief models include an additional parameter: the constant intercept term $\Theta_0$ so that \eqref{linsum} would be expressed as
\begin{equation}\label{linsum2}
\tilde Q^{i,j} = \Theta_0 + \sum_{n=1}^{N}\sum_{m=1}^{M_t} \mathbf{X}_{mn}\cdot \Theta_{mn} + \eta.
\end{equation}
However, our model excludes $\Theta_0$ because it would carry no corresponding physical significance, unlike the other parameters.
The linear belief model in~\eqref{linsum} can be equivalently written as
\begin{equation}\label{mat}
\mathbf{y}=\mathbf{H}\cdot\theta+\eta,
\end{equation}
where $\mathbf{y}$ (of size $\tilde \alpha \times 1$) is a vector of the $\tilde Q^i$ values calculated for all the actions $\tilde a_t \in \tilde A(s_t)$, $\mathbf{H}$ (of size $\tilde \alpha \times (M_t \cdot N)$) is a binary matrix where the entries are in accordance with \eqref{linsum}, \eqref{mod}, and the choice of set $\tilde A(s_t)$, $\theta$ (of size $(M_t \cdot N) \times 1$) is a vector of parameters $\Theta_{mn}$, and $\eta$ (of size $\tilde \alpha \times 1$) is the noise vector. The simulation budget $B$ at each decision epoch is divided among $\tilde \alpha$ and $\beta$ such that ${B=\tilde\alpha\cdot\beta}$. In essence, based on the $\tilde a_t \in \tilde A(s_t)$---which corresponds to the assignment of $N$ RUs to $M_t$ damaged components according to~\eqref{mod}---the matrix $\mathbf{H}$ is constructed. The vector $\mathbf{y}$ is constructed by computing the $Q$ values corresponding to $\tilde a_t$ according to Algorithm~\ref{sim}.
We estimate the parameter vector $\hat{\theta}$ by solving the least squares problem of minimizing $\|\mathbf{y}-\mathbf{H}\hat{\theta}\|_2$ with respect to $\hat{\theta}$. We chose a least squares solution to estimate $\hat{\theta}$ because least-squares solutions are well-established numerical solution methods, and if the noise is an uncorrelated Gaussian error, then $\hat{\theta}$ estimated by minimizing $\|\mathbf{y}-\mathbf{H}\hat{\theta}\|_2$ is the \emph{maximum likelihood estimate}. In our framework, the rank of $\mathbf{H}$ is $(M_t\cdot N)-(N-1)$. Therefore, the estimated parameter vector $\hat \theta$, which consists of parameters $\hat \Theta_{mn}$ and is calculated using the ordinary least squares solution, is not unique and admits an infinite number of solutions \cite{kailath}. Even though $\hat{\theta}$ is not unique, $\mathbf{\hat y}$ defined by the equation $\mathbf{\hat y}=\mathbf{H}\cdot \hat \theta$ is unique; moreover, the value of $\norm{\mathbf{y}-\mathbf{H}\cdot \hat \theta}_2^2$ is unique. We can solve our least squares problem uniquely using either the Moore-Penrose pseudo-inverse or singular value decomposition by calculating the minimum-norm solution \cite{chong}. In this work, we have used the Moore-Penrose pseudo-inverse. Note that ${\tilde{\alpha}\gg (M_t\cdot N)-(N-1)}$ (the number of rows of the matrix $\mathbf{H}$ is much greater than its rank).
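A small numerical sketch of this fit (toy sizes and synthetic data of our own): the matrix $\mathbf{H}$ is binary and rank deficient, and the Moore-Penrose pseudo-inverse returns the minimum-norm least-squares solution.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
a_tilde, M_t, N = 200, 10, 3     # sampled actions, damaged comps, RUs

H = np.zeros((a_tilde, M_t * N)) # each row encodes one assignment
for i in range(a_tilde):
    for n, m in enumerate(rng.choice(M_t, size=N, replace=False)):
        H[i, m * N + n] = 1.0

theta_true = rng.normal(size=M_t * N)
y = H @ theta_true + 0.01 * rng.normal(size=a_tilde)

theta_hat = np.linalg.pinv(H) @ y  # minimum-norm least squares
# theta_hat itself is not unique, but the fit H @ theta_hat is
print(np.linalg.norm(y - H @ theta_hat))
\end{verbatim}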
Once the parameters $\hat{\Theta}_{mn}$ are estimated, the optimum assignment of the RUs is calculated successively (one RU at a time) depending on the objective in \eqref{rew1} and \eqref{rew2}. In the calculation of the successive optimum assignments of RU in Algorithm~\ref{rolloutwbelief}, let $\hat m$ denote the assigned location at each RU assignment step; then all the estimated parameters corresponding to $\hat m$ (denoted by parameters $\hat \Theta_{\hat m,index}$, where $index \in \{1,\ldots,N\}$) are set to $\infty$ or $-\infty$ depending on \eqref{rew1} and \eqref{rew2}, respectively. This step ensures that only a single RU is assigned at each location. This computation is summarized in Algorithm~\ref{rolloutwbelief}. Similar to Algorithm~\ref{rollout}, the assignment of $\beta$ samples to every action in $\tilde A(s_t)$ is uniform.
\begin{algorithm}
\caption{\textbf{Uniform\_Rollout w/ Linear\_Belief} ($\pi, h, \beta, s_t, \mathbf{H}, \tilde A(s_t)$)}
\label{rolloutwbelief}
\begin{algorithmic}
\State \textbf{Initialize} $a_t=[\mathbf{0}]$
\For{$i=1$ to $\tilde \alpha$}
\For{$j=1$ to $\beta$}
\State $ \tilde Q^{i,j} \gets \textbf{SimQ}(s_t,\tilde a_{t,i},\pi,h)$ \Comment{See algorithm 2}
\EndFor
\State $ y(i) \gets \emph{Average}(\tilde Q^{i,j}$) \Comment{With respect to $j$}
\EndFor
\State $\hat \theta\gets\textbf{OLS}( y,\mathbf{H})$ \Comment{Ordinary least squares solution}
\For{$k=1$ to $N$} \Comment{RU assignment step begins}
\State $ (\hat m, \hat n)\gets\arg\min_{m,n}\hat\theta $ \Comment{Min for~\eqref{rew1} and max for~\eqref{rew2}}
\State $a_t^{\hat m}\gets1$
\For{$index=1$ to $N$}
\State $\hat \Theta_{\hat m,index} \gets \infty$ \Comment{$-\infty$ for~\eqref{rew2}}
\EndFor
\EndFor
\State \Return $a_t$
\end{algorithmic}
\end{algorithm}
Our Algorithm~\ref{rolloutwbelief} has several subtleties, as summarized in the following discussion.
The use of linear approximation for dynamic programming is not novel in its own right (it was first proposed by Bellman et al. \cite{bellmanlin}). The only similarity between the typical related methods (described in \cite{lagoudakis2003reinforcement}) and our approach is that we are fitting a linear function over the rollout values---the belief model is a function approximator for the $Q$ value function in Algorithm~\ref{rollout}---whereas the primary difference is explained next.
Most of the error and convergence analyses for MDPs use the max-norm ($\mathcal{L}_\infty$ norm) to guarantee performance; in particular, the performance guarantee on the policy improvement step in \eqref{roll} and the computation of $a_t$ using rollout in Algorithm~\ref{rollout} are two examples. It is possible to estimate the parameters $\hat \theta$ to optimize the $\mathcal{L}_\infty$ norm by solving the resultant optimization problem using linear programming (see \cite{Stiefel}). The influence of estimating $\hat \theta$ to optimize the $\mathcal{L}_\infty$ norm, when a linear function approximator is used to approximate the $Q$ value function, on the error performance of any algorithm that falls in the general framework of approximate policy iteration is analyzed in \cite{Guestrin}.\footnote{Instead of formulating the approximation of the $Q$ value function as a regression problem, it is also possible to pose the $Q$ value function approximation as a classification problem.\cite{lagoudakis2003reinforcement}} Our approach is different from such methods because in our setting, the least squares solution optimizes the $\mathcal{L}_2$ norm, which we found to be advantageous.
Indeed, our solution shows promising performance. Three commonly used statistics to validate the use of the linear-belief model and the least squares solution in Algorithm~\ref{rolloutwbelief} are as follows: residual standard error (RSE), R-squared ($R^2$), and F-statistic.
The RSE for our model is $10^{-5}$, which indicates that the linear model satisfactorily fits the $Q$ values computed using rollout.
The $R^2$ value for our model is 0.99, which indicates that the computed features/predictors ($\hat \theta$) can effectively predict the $Q$ values.
The F-statistic is 4 (away from 1) for a large $\tilde \alpha$ ($\tilde \alpha=10^6$; whereas, at each $t$, the rank of $\mathbf{H}$ is never greater than 5850), which indicates that the features/predictors defined in \eqref{linsum} and \eqref{mod} are statistically significant. We can increase the number of predictors by including the interactions between the current predictors at the risk of overfitting the $Q$ values with the linear model \cite{occam}. As the authors in \cite{lagoudakis2003reinforcement} aptly point out, ``increasing expressive power can lead to a surprisingly worse performance, which can make feature engineering a counterintuitive and tedious task.''
\subsection{Adaptive Sampling: Utilizing Limited Simulation Budget}
Despite implementing best software practices to code fast simulators and deploying the simulators on modern supercomputers, the simulation budget $B$ is a precious resource, especially for massive real-world problems. A significant amount of research has been done in the simulation-based optimization literature \cite{jia, sg, hx, sg2} to manage simulation budget. The related methods have also been demonstrated on real-world problems \cite{gp, case}.
A classic simulation-based approach such as optimal computing budget allocation \cite{Chen2000} is not employed here to manage the budget; instead, the techniques in our study are inspired by solutions to multi-armed bandit problems \cite{bandit,dar,Auer,ucb}, which are topical in the computer science and artificial intelligence community, especially in research related to \emph{reinforcement learning}. The problem of expending limited resources (managing a budget) is studied in reinforcement learning, although in a completely different context, where a few optimal choices must be selected among a large number of options to optimize a stochastic objective function.
It has been our observation that two independent research communities---simulation-based optimization and computer science---have worked on similar problems in isolation. In this work, our solutions have been inspired by the latter approach and will serve to bridge the gap between the work in the two research communities.
Algorithm~\ref{rollout}, and consequently also Algorithm~\ref{rolloutwbelief}, is not only directly dependent upon the speed of Algorithm~\ref{sim} (simulator) but also requires an accurate $Q$ value function estimate to guarantee performance. Therefore, typically a huge sampling budget in the form of large $\beta$ is allocated uniformly to every action $\tilde a_t \in \tilde A(s_t)$. This naive approach decreases the value of $\tilde \alpha$ (which is the size of the set $\tilde A(s_t)$);\footnote{Note that $B$ is fixed and depends on the simulator runtime and the computational platform on which the algorithm runs. Recall that $B=\tilde \alpha \cdot \beta$, and the larger the value of $\beta$ required to guarantee performance, the smaller the value of $\tilde \alpha$.} consequently, the parameter vector $\theta$ is trained on a smaller number of $Q$ values. In practice, we would like to get a rough estimate of the $Q$ value associated with every action in the set $\tilde A(s_t)$ and adaptively spend the remaining simulation budget in refining the accuracy of the $Q$ values corresponding to the best-performing actions; this is the \emph{exploration vs. exploitation} problem in optimal learning and simulation optimization problems\cite{WSC}. Spending the simulation budget $B$ in a nonuniform, adaptive fashion in the estimation of the $Q$ value function would not only train the parameter vector $\theta$ on a larger size of the set $\tilde A(s_t)$ via the additive model in \eqref{linsum} but also train the parameters $\Theta_{mn}$ on $Q$ values corresponding to superior actions (this is because in an adaptive scheme, $B$ is allocated in refining the accuracy of only those actions that show promising performance), consequently refining the accuracy of the parameters. The nonuniform allocation of simulation budget is the experiential learning component of our method, which further enhances Algorithm~\ref{rolloutwbelief}.
An interesting closed-loop sequential method pertaining to drug discovery that bears some resemblance to the experiential learning component of our method is described in \cite{knowledge}, where the alternatives (actions are called alternatives in their work) are selected adaptively using \emph{knowledge gradient} (KG). Further, in their work, KG is combined with a linear-belief model, and the results are demonstrated on a moderate-sized problem. Unfortunately, the algorithms proposed in \cite{knowledge} are not directly applicable to our problem because the algorithms in \cite{knowledge} necessitate sampling over the actions in $A(s_t)$, instead of $\tilde A(s_t)$.
Instead of uniformly allocating $\beta$ samples to each action in Algorithm~\ref{rollout}, nonuniform allocation methods have been explored in the literature to manage the rollout budget \cite{Dimitrakakis2008b}. An analysis of performance guarantees for nonuniform allocation of the rollout samples remains an active area of research \cite{dimitri2008a}. However, we extend the ideas in \cite{Dimitrakakis2008b} and \cite{dimitri2008a}, pertaining to nonuniform allocation, to Algorithm~\ref{rolloutwbelief} based on the theory of \emph{multi-armed bandits}.
In bandit problems, the agent has to sequentially allocate resources among a set of bandits, each one having an unknown reward function, so that a \emph{bandit objective} \cite{bandit} is optimized. There is a direct overlap between managing $B$ and the resource allocation problem in multi-armed bandit theory; the allocation of the simulation budget $B^*$ defined by the equation $B^*=B-\tilde\alpha$ sequentially to the state-action pair $(s_t,\tilde a_t)$ during rollout is equivalent to a variant of the classic multi-armed bandit problem \cite{Dimitrakakis2008b}.
In this study, we consider two bandit objectives: probably approximately correct (PAC) and cumulative regret. In the \emph{PAC} setting, the goal is to allocate budget $B^*$ sequentially so that we find a near-optimal ($\epsilon$ of optimal) action $\tilde a_t$ with high probability ($1-\delta$) when the budget $B^*$ is exhausted. Algorithm~\ref{rollout} is PAC optimal when $h$ and $\beta$ are selected in accordance with the \emph{fixed algorithm} in \cite{dimitri2008a}. For our decision-automation problem, the value of $\beta$ required to guarantee performance is typically large. Nonuniform allocation algorithms like \emph{median elimination} are PAC optimal \cite{dar} (the median elimination algorithm is asymptotically optimal, so no other nonuniform resource-allocation algorithm can outperform the median elimination algorithm in the worst case). However, the choice of $(\epsilon,\delta)$ for the PAC objective is arbitrary; therefore, the PAC objective is not well-suited to our decision automation problem. Further, the parameters of the median elimination algorithm that guarantee performance are directly dependent on the $(\epsilon,\delta)$ pair.
The second common objective function in bandit problems mentioned earlier, \emph{cumulative regret}, is well-suited to our problem. During the optimization of \emph{cumulative regret}, the budget $B^*$ is allocated sequentially in such a way that when the budget is exhausted, the expected total reward is very close to the best possible reward (called minimizing the cumulative regret). An algorithm in \cite{Auer} called \emph{UCB1} minimizes the cumulative regret; in fact, no other algorithm can achieve a better cumulative expected regret (in the sense of scaling law). Usually, cumulative regret is not an appropriate objective function to be considered in nonuniform rollout allocation \cite{ucb} because almost all common applications require finding the (approximately) best action $a_t$, whereas in our problem, we would like to allocate the budget nonuniformly so that the parameter vector $\hat \theta$ in Algorithm~\ref{rolloutwbelief} is estimated in the most efficient way. Therefore, it is natural to allocate the computing budget so that the expected cumulative reward over all the $\tilde a_t$ ($Q$ values in the vector $\mathbf{y}$ in Algorithm~\ref{rolloutwbelief}) is close to the optimal value.
Based on the simulator runtime, the underlying computational platform, and the actual time provided by the decision maker to our automation system, suppose that we fix $B$ and in turn the size of the set $\tilde A(s_t)$. We exhaust a budget of $\tilde \alpha$ samples (one per action) from $B$ on getting rough estimates of the $Q$ value function for the entire set $\tilde{A}(s_t)$; the remaining budget $B-{\tilde\alpha}$ (denoted by $B^*$) is allocated adaptively using the UCB1 algorithm. This scheme of adaptively managing $B^*$ in Algorithm~\ref{rolloutwbelief} is summarized in Algorithm~\ref{rolloutwbeliefa}.
Algorithm~\ref{rolloutwbeliefa} alleviates the shortcomings of Algorithm~\ref{rolloutwbelief} by embedding the experiential learning component using the UCB1 algorithm. The UCB1 algorithm assumes that the rewards lie in the interval [0,1]. Satisfying this condition is trivial in our case because the rewards are bounded and thus can always be normalized so that they lie in the interval [0,1]; it is important to implement the normalization of $\tilde R$ in Algorithm~\ref{sim} when we use Algorithm~\ref{rolloutwbeliefa}. In Algorithm~\ref{rolloutwbeliefa}, not only is $B^*\gg\beta$, but we can also select $\tilde \alpha$ larger than that in Algorithm~\ref{rolloutwbelief} and train the parameter vector $\theta$ on a larger size of the set $\tilde A(s_t)$, which in turn will yield better estimates of $\hat \theta$. Note that Algorithm~\ref{rolloutwbeliefa} does not merely manage the budget $B^*$ adaptively (adaptive rollout), but it also handles massive action spaces through the linear belief model described in Section~\ref{lin} (this is because Algorithm~\ref{rolloutwbeliefa} is Algorithm~\ref{rolloutwbelief} with the UCB1 step appended).
In essence, Algorithm~\ref{rolloutwbeliefa} has three important steps. First, $Q$ values corresponding to $\tilde \alpha$ actions in the set $\tilde A(s_t)$ are computed. Second, the estimates of the $Q$ values corresponding to the most promising actions are refined by nonuniform allocation of the simulation budget using the UCB1 algorithm. Last, based on the ordinary least squares estimate $\hat \theta$, the RUs are assigned sequentially exactly as in Algorithm~\ref{rolloutwbelief} described in Section~\ref{lin}.
\begin{algorithm}
\caption{\textbf{Adaptive\_Rollout w/ Linear\_Belief} ($\pi, h, B^*, s_t, \mathbf{H}$)}
\label{rolloutwbeliefa}
\begin{algorithmic}
\State \textbf{Initialize} $a_t=[\mathbf{0}]$
\For{$i=1$ to $\tilde \alpha$}
\State $ \tilde y(i) \gets \textbf{SimQ}(s_t,\tilde a_{t,i},\pi,h)$ \Comment{See Algorithm~\ref{sim}}
\EndFor
\State $Count\gets\tilde \alpha$
\State $Count_{i}\gets[\mathbf{1}]$ \Comment{Counts the number of samples assigned to the $i$th action}
\While{$B^*$ is not zero} \Comment{UCB1 step}
\For{$i=1$ to $\tilde \alpha$}
\State $ d(i) \gets \tilde y(i)+\sqrt{\frac{2\ln (Count)}{Count_i(i)}}$
\EndFor
\State $\tau\gets\arg\max_i d$
\State $Count_i(\tau)\gets Count_i(\tau)+1$
\State $Count\gets Count+1$
\State $ \tilde y(\tau) \gets \frac{(Count_i(\tau)-1)\cdot \tilde y(\tau)+ \textbf{SimQ}(s_t,a_{t,\tau},\pi,h)}{Count_i(\tau)}$
\State $B^*\gets B^*-1$
\EndWhile
\State $\hat \theta=\textbf{OLS}( \tilde y,\mathbf{H})$ \Comment{Ordinary least squares solution}
\For{$k=1$ to $N$}
\State $ (\hat m, \hat n)\gets\arg\max_{m,n}\hat\Theta $ \Comment{Min for~\eqref{rew1} and max for~\eqref{rew2}}
\State $a_t^{\hat m}\gets1$
\For{$index=1$ to $N$}
\State $\hat \Theta_{\hat m,index} \gets -\infty$ \Comment{$\infty$ for~\eqref{rew1}}
\EndFor
\EndFor
\State \Return $a_t$
\end{algorithmic}
\end{algorithm}
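To make the UCB1 step concrete, the following Python sketch implements the budget-allocation loop of Algorithm~\ref{rolloutwbeliefa} in isolation. It is only a sketch: the function \texttt{sim\_q} is a hypothetical stand-in for \textbf{SimQ} and is assumed to return rewards already normalized to $[0,1]$.
\begin{verbatim}
import math
import random

def sim_q(i):
    # Placeholder simulator; replace with SimQ(s_t, a_i, pi, h).
    return random.random()

def ucb1_allocate(n_actions, budget):
    # One rough estimate per action, as in the first loop of the algorithm.
    y = [sim_q(i) for i in range(n_actions)]
    counts = [1] * n_actions
    total = n_actions
    while budget > 0:
        # UCB1 index: empirical mean plus exploration bonus.
        tau = max(range(n_actions),
                  key=lambda i: y[i] + math.sqrt(2.0 * math.log(total) / counts[i]))
        reward = sim_q(tau)
        counts[tau] += 1
        total += 1
        # Running-average update of the Q estimate for action tau.
        y[tau] = ((counts[tau] - 1) * y[tau] + reward) / counts[tau]
        budget -= 1
    return y, counts
\end{verbatim}
The returned vector corresponds to $\tilde y$ in Algorithm~\ref{rolloutwbeliefa} and is subsequently passed to the ordinary least squares step.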
\section{Simulation Results: Modeling Gilroy Recovery}\label{simres}
We simulate 25 different damage scenarios (stochastic initial conditions) for each of the figures presented in this section. Calculation of the recovery for a single damage scenario is computationally expensive. Nevertheless, multiple initial conditions are generated to deal with the stochastic earthquake model as discussed in Section~\ref{probform}. For both Objective 1 and Objective 2, corresponding to $R_1$ and $R_2$ respectively, there is a distinct recovery path for each initial damage scenario. For Objective 1, we do not show the recovery trajectories explicitly; we are only interested in the number of days it takes to provide maximum benefit in the sense of optimizing $R_1$, so the results are presented as cumulative moving average plots. For Objective 2, the recovery computed using Algorithm~\ref{rolloutwbelief} and Algorithm~\ref{rolloutwbeliefa} outperforms the base policy in every single scenario.
There are several candidates for the base policy to be used in the simulation; for a detailed discussion of these candidates in post-hazard recovery planning, see \cite{ress1}. For the simulations presented in this study, a random base policy is used. The total number of RUs is capped at 15\% of the damaged components for each scenario. The maximum number of damaged components in any scenario encountered in this study is 205, i.e., the size of the assignment problem at any $t$ is less than $10^{37}$. The simulators have a runtime of $10^{-5}$~s when $h=1$, and this runtime varies with the parameter $h$: the deeper we roll out the base policy in any variation of the rollout algorithm, the larger the simulation time per action and the smaller the action space covered to train our parameters.
For Algorithm~\ref{rolloutwbelief} and the computational platform used (AMD EPYC 7451, 2.3 GHz, 96 cores), the value of $\beta$ is capped at 100 and the value of $\tilde \alpha$ is capped at $10^6$. Note that it is possible to parallelize Algorithm~\ref{rolloutwbelief} at two levels. The recovery of each damage scenario can be computed on a different processor, and the results then averaged. Further, Algorithm~\ref{rolloutwbelief} offers the opportunity to parallelize over $\tilde A(s_t)$, because a uniform budget can be allocated to a separate processor to return the average $Q$ value for each $\tilde a(s_t)$. In contrast, the allocation of the budget $B^*$ in Algorithm~\ref{rolloutwbeliefa} is sequential, and only a single $Q$ value corresponding to the allocated sample is evaluated (see the UCB1 step in Algorithm~\ref{rolloutwbeliefa}). Based on the updated $Q$ value (the calculation of $\tilde y(\tau)$ in Algorithm~\ref{rolloutwbeliefa}), further allocation continues until the budget $B^*$ is exhausted. Therefore, barring the rough estimates at the first iteration, the allocation in Algorithm~\ref{rolloutwbeliefa} cannot be parallelized. However, just as with Algorithm~\ref{rolloutwbelief}, each processor can compute the recovery for a distinct initial condition ($s_0$) separately. Because of this reduced parallelization, Algorithm~\ref{rolloutwbeliefa} computes its solutions, though high-quality, at a slower rate. For our simulations, $B^*\leq9 \cdot 10^5$ and $\tilde \alpha\leq 10^5$ in Algorithm~\ref{rolloutwbeliefa}.
Fig.~\ref{fig3} compares the performance of Algorithm~\ref{rolloutwbelief} with the base policy for Objective 1. For the simulations, $\zeta=0.8$; the goal is to calculate recovery actions so that 80\% of the population has electricity in minimum time. The figure depicts the cumulative moving average of the number of days required to achieve Objective 1, computed by averaging the days required to reach the threshold over the total number of scenarios shown on the X-axis of Fig.~\ref{fig3}; the cumulative moving average is used to smooth the data. As the number of scenarios increases, so as to represent the stochastic behaviour of the earthquake model accurately, our algorithm saves about half a day over the recovery computed using the base policy. We achieve this performance at scale, without any restriction on the number of workers, whereas all our earlier related work (see \cite{ress2,iEMSs,icasp}, \cite{ress1}, and \cite{case}) put a cap on the number of RUs; moreover, this performance is achieved on a local computational machine.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Obj1Uni}
\caption{A cumulative moving average plot for the number of days required to provide electricity to 80\% of the population with respect to the total number of scenarios using Algorithm~\ref{rolloutwbelief}.}
\label{fig3}
\end{figure}
Fig.~\ref{fig4} compares the performance of Algorithm~\ref{rolloutwbelief} with the base policy for Objective 2. The recovery paths (trajectories) for both the base policy and Algorithm~\ref{rolloutwbelief} are computed by averaging 25 different recoveries over different initial conditions. A recovery path represents the number of people who have electricity after a given amount of time (days) as a result of the recovery actions. Evaluating the performance of our algorithm in meeting Objective 2 (defined in Section~\ref{probform}) boils down to calculating the area under the curve of our plots, normalized by the total time for the recovery (12 days). The area is the sum of the products of the number of people who have electricity after the completion of each repair action ($n_t$) and the time, in days, required to complete that action (the inter-completion time $r_t$). A larger value of this area ($\sum_{t}n_t\cdot r_t$) normalized by the total time to recovery ($t_{\text{tot}}$) represents a situation where a greater number of people benefitted from the recovery actions. Normalizing the area ($\sum_{t}n_t\cdot r_t$) by the total time to recovery ($t_{\text{tot}}$) is important because the time required to finish the recovery using the base policy and rollout with linear belief can differ. It is evident by visual inspection of the figure that recovery with Algorithm~\ref{rolloutwbelief} yields more benefit than its base counterpart; however, calculating $(\sum_{t}n_t\cdot r_t)/t_{\text{tot}}$ for the plots becomes necessary when the recovery curves achieved by the algorithms intersect at several points (see \cite{ress1}), a behaviour commonly seen with the rollout algorithm because of its lookahead property.
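For reference, the benefit metric just described amounts to the following computation; the sketch below assumes a step-function representation of the recovery curve, and the numerical values are purely illustrative.
\begin{verbatim}
def normalized_benefit(n, r):
    """n[t]: people with electricity after the t-th completed repair;
    r[t]: inter-completion time, in days, of that repair."""
    t_tot = sum(r)                                # total time to recovery
    return sum(nt * rt for nt, rt in zip(n, r)) / t_tot

# Toy usage: three repairs, each restoring power to more people.
print(normalized_benefit(n=[1000, 2500, 4000], r=[2.0, 1.5, 3.0]))
\end{verbatim}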
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Obj2Uni}
\caption{Average recovery path (over 25 recovery paths) using the base policy and uniform rollout with linear belief for Objective 2.}
\label{fig4}
\end{figure}
Fig.~\ref{fig5} compares the performance of Algorithm~\ref{rolloutwbeliefa} with the base policy for Objective 1. Again, we set $\zeta=0.8$. Relative to Algorithm~\ref{rolloutwbelief}, Algorithm~\ref{rolloutwbeliefa} improves the performance by another half day, so that its recovery actions save one full day over the base policy in meeting the objective. Adaptively allocating $B^*$ using UCB1, even though slower in runtime, achieves better performance than Algorithm~\ref{rolloutwbelief} with a smaller simulation budget. Ultimately, the choice between Algorithm~\ref{rolloutwbeliefa} and Algorithm~\ref{rolloutwbelief} will be dictated by the urgency of the recovery actions demanded from the automation framework and by the computational platform deployed.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Obj1Non}
\caption{A cumulative moving average plot for the number of days required to provide electricity to 80\% of the population with respect to the total number of scenarios using Algorithm~\ref{rolloutwbeliefa}.}
\label{fig5}
\end{figure}
Fig.~\ref{fig6} compares the performance of Algorithm~\ref{rolloutwbeliefa} with the base policy for Objective 2. Algorithm~\ref{rolloutwbeliefa} shows substantial improvement over the recovery calculated using the base policy and over that calculated using Algorithm~\ref{rolloutwbelief} in Fig.~\ref{fig4}. This is ascertained by calculating the area under the respective curves and normalizing it by the total time to recovery. Even though a direct comparison between the recoveries of the two algorithms is not entirely appropriate owing to the stochastic initial conditions, random repair times, and a random base policy, it is worth noting again that Algorithm~\ref{rolloutwbeliefa} outperforms Algorithm~\ref{rolloutwbelief} at a lower simulation budget. Minimizing the cumulative regret when allocating $B^*$ during parameter training yields better recovery actions at each decision epoch. Because the entire framework is closed-loop, Algorithm~\ref{rolloutwbeliefa} (which uses both experiential and anticipatory learning) and Algorithm~\ref{rolloutwbelief} (which uses only anticipatory learning) exploit small improvements at each decision epoch $t$ and provide an enhanced recovery. Essentially, the small improvements squeezed out at the earlier stages set a better platform for these algorithms to further exploit the anticipatory and experiential learning components later in the recovery.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Obj2Non}
\caption{Performance comparison of adaptive rollout w/ linear belief vs. base policy for the second objective.}
\label{fig6}
\end{figure}
\section{Conclusion}
In this work, we presented a novel, systematic approach to MDPs that have jointly massive finite state and action spaces. When the action space consists of a large number of discrete actions, the method of choice has been to embed these actions in continuous action spaces \cite{dulac2015deep}, where deep reinforcement learning techniques have shown promising performance for $|A|\approx10^6$. In contrast, in this study we present a unique approach to the problem, where the size of the discrete action space we consider is significantly larger than that in \cite{dulac2015deep}.
We studied an intricate real-world problem, modeled it in our framework, and demonstrated the powerful applicability of our algorithm on this challenging problem. The community recovery problem is a stochastic combinatorial decision-making problem, and the solution to such decision-making problems is critically tied to the welfare of communities in the face of ever-increasing natural and anthropogenic hazards. Our modeling of the problem is general enough to accommodate the uncertainty in the hazard models and in the outcomes of repair actions. Ultimately, we would like to test the techniques developed in this work on other real-world problems, e.g., large recommender systems (such as those used by YouTube and Amazon) and large industrial control systems.
\textbf{Ongoing Work:}
In our work on post-hazard community management (see \cite{ress2,iEMSs,icasp,emi}, \cite{ress1}, and \cite{case}), including this study, we have focused on obtaining solutions using a single base policy. Currently, we are developing a framework that leverages the availability of multiple base policies in the aftermath of hazards. Two algorithms are particularly appealing in this regard: parallel rollout and policy switching \cite{Chang}. In parallel rollout, just as in \cite{knowledge}, the optimization is done over the entire set $A(s_t)$. In our ongoing work, we are formulating a non-preemptive stochastic scheduling framework in which the size of the set $A(s_t)$ grows linearly with the number of RUs, which circumvents the issue of large action spaces. In addition, we are exploring heuristic search algorithms to guide the stochastic search, i.e., to adaptively select the samples of the parallel rollout algorithm. In that framework, we consider several interconnected infrastructure systems in a community simultaneously, such as building structures, EPN, WN, and food retailers, and we compute the post-hazard recovery of the community.
\section{Introduction}
Let $X$ be a compact Hausdorff space and let $C(X)$ be the Banach algebra of complex-valued continuous functions on $X$. We say that $F : C(X) \to C(X)$ is \emph{entire} (in the sense of Lorch) if it is Fr\'{e}chet differentiable at every point $w \in C(X)$ and its differential is given by a multiplication operator $L_w(h) = F'(w) h$, for some $F'(w) \in C(X)$ (see \cite{Lo1943} for details). We denote the set of entire functions by $\mathcal{H}\bigl(\CX\bigr)$ and make it into a unital algebra with the usual operations. It is well known that $F \in \mathcal{H}\bigl(\CX\bigr)$ if and only if it admits a power series expansion
\begin{equation}
F(w) = \sum_{n=0}^{\infty} a_n w^n, \qquad w \in C(X),
\end{equation}
where $a_n \in C(X)$ for all $n \geq 0$, $\limsup_n \norm{a_n}^{1/n} = 0$ and the series converges in norm for each fixed $w \in C(X)$.
To any entire function $F$, we may associate the map $X \times \mathbb{C} \to \mathbb{C}$ defined by
\begin{equation} \label{Associate_Function}
(x,z) \mapsto \sum_{n=0}^{\infty} a_n(x) z^n\ \Bigl(= F\left(z 1_{C(X)}\right)(x)\Bigr),
\end{equation}
which is easily seen to be continuous on $X \times \mathbb{C}$ and holomorphic with respect to $z$ for $x \in X$ fixed. On the other hand, it is obvious that the above map uniquely determines $F$. By a customary abuse of notation, we also write $F$ for the map in \eqref{Associate_Function}; it should be clear from the context which case we are referring to.
We say that $F \in \mathcal{H}\bigl(\CX\bigr)$ has a \emph{root} in $C(X)$, if there exists $w \in C(X)$ such that $F(x,w(x)) = 0$ for all $x \in X$. If $X$ is a locally connected compact Hausdorff space, it was observed by Miura and Niijima \cite{MN2003} that $C(X)$ is algebraically closed, i.e., every monic polynomial with coefficients in the algebra has at least one root in the algebra, if and only if $X$ is hereditarily unicoherent (see also Honma and Miura \cite{HM2007}). We recall that $X$ is said to be hereditarily unicoherent, if the intersection $A \cap B$ is connected for all closed connected subsets $A, B$ of $X$. A short, but accurate introduction to the state of the art in monic algebraic equations can be found in Kawamura and Miura \cite{KM2009}.
However, if we consider more general functions in $\mathcal{H}\bigl(\CX\bigr)$, the existence of continuous roots is no longer guaranteed, even if $X$ is as simple as the unit interval. For example, the function $F(x,z) = x^2 z - x$ does not have a root in $C([0,1])$: any root $w$ would have to satisfy $w(x) = 1/x$ for $x \in (0,1]$, which is unbounded as $x \to 0$. We now introduce two phenomena that arise in the preceding example and have a strong relation with the existence of solutions of the equation $F(w)=0$.
\begin{definition} \label{Def_Degeneracy}
Let $X$ be a compact Hausdorff space. A function $F \in \mathcal{H}\bigl(\CX\bigr)$ is said to be \emph{degenerate} at $x_0 \in X$ if the map $z \mapsto F(x_0,z)$ is constant; otherwise, it is said to be \emph{nondegenerate} at $x_0$.
\end{definition}
\begin{definition} \label{Def_AsymptoticZero}
Let $X$ be a compact Hausdorff space, let $Y \subset X$ be a connected subset and $x_0 \in \overline{Y}\setminus Y$. A function $w \in C(Y)$ is said to be an \emph{asymptotic root} of $F \in \mathcal{H}\bigl(\CX\bigr)$ if $F(x,w(x)) = 0$ for all $x \in Y$ and
\begin{equation}
\lim_{x \to x_0} w(x) = \infty, \quad x \in Y.
\end{equation}
\end{definition}
The aim of this paper is to prove that if $X$ is a connected, locally connected, hereditarily unicoherent compact Hausdorff space, then any nowhere degenerate function $F \in \mathcal{H}\bigl(\CX\bigr)$ with no asymptotic roots, satisfying $F(x_0,z_0) = 0$, has at least one root $w \in C(X)$ such that $w(x_0) = z_0$. It is easily seen that monic polynomials are nondegenerate at every point of $X$ and do not have asymptotic roots. Consequently, our result generalizes that of Miura and Niijima \cite{MN2003}.
It is important to mention that Gorin and S\'{a}nchez Fern\'{a}ndez \cite{GS1977} studied the case where $X$ is a connected, locally connected, hereditarily unicoherent, compact metric space and showed that any nowhere degenerate function $F \in \mathcal{H}\bigl(\CX\bigr)$ with no asymptotic arcs, satisfying the condition $F(x_0,z_0)=0$, has at least one root $w\in C(X)$ such that $w(x_0)=z_0$ (for a definition of asymptotic arc, see \cite{GS1977}). In our work, we do not assume that $X$ is a first-countable space.
\section{Existence of Roots}
We start by pointing out a very useful lemma, which arises naturally from Rouch\'{e}'s Theorem.
\begin{lemma} \label{Lemma:Algebrization}
Let $X$ be a compact Hausdorff space, $F \in \mathcal{H}\bigl(\CX\bigr)$ and pick $x_0 \in X$ such that the map $z \mapsto F(x_0,z)$ has a zero $z_0$ of multiplicity $n$. Then, there exist an open disk $D_r(z_0)$ and a neighborhood $V$ of $x_0$ such that
\begin{equation}
F(x,z) = P(x,z)\:G(x,z), \qquad (x,z) \in V \times D_r(z_0),
\end{equation}
where $P(x,z) = z^n + a_1(x)z^{n-1}+\ldots+a_n(x)$ is a monic polynomial with coefficients in $C(V)$ satisfying $P(x_0,z) = (z-z_0)^n$ and $G$ never vanishes in $V \times D_r(z_0)$.
\end{lemma}
\begin{proof}
Set $r>0$ such that the map $z \mapsto F(x_0,z)$ has no roots in $\overline{D_r(z_0)}\setminus \{z_0\}$ and write $\Gamma = \{z \in \mathbb{C}\: :\:|z-z_0|=r\}$. Also, write $m = \min_{\Gamma} |F(x_0,z)| > 0$. By a standard compactness argument, we can find a neighborhood $V$ of $x_0$ such that $|F(x,z) - F(x_0,z)|<m$ for all $x \in V$ and $z \in \Gamma$. Then, an application of Rouch\'{e}'s Theorem shows that $z \mapsto F(x,z)$ has exactly $n$ zeros in $D_r(z_0)$, counting multiplicities, whenever $x \in V$.
For any $x \in V$, we denote the zeros of $z \mapsto F(x,z)$ in $D_r(z_0)$ by $z_1(x), \ldots, z_n(x)$, taken in any order and we define
\begin{equation}
P(x,z) = \bigl(z-z_1(x)\bigr)\ldots\bigl(z-z_n(x)\bigr) = z^n + a_1(x)z^{n-1}+\ldots+a_n(x).
\end{equation}
Obviously, we have $P(x_0,z) = (z-z_0)^n$. Now, consider the power sums
\begin{equation}
s_k(x) = \sum_{i=1}^{n} \bigl(z_i(x)\bigr)^k, \qquad k \geq 0.
\end{equation}
Since $z_1(x), \ldots, z_n(x)$ are the zeros of $z \mapsto F(x,z)$ in the interior of $\Gamma$, it is well known (and easily verified) that
\begin{equation}
s_k(x) = \frac{1}{2\pi i} \int_{\Gamma}\:z^k\: \frac{\frac{\partial F}{\partial z} (x,z)}{F(x,z)}\: dz.
\end{equation}
Consequently, $s_k \in C(V)$ for all $k \geq 0$. It is also well known that the functions $s_k$ are connected to the functions $a_k$ via the so-called Newton identities. Therefore, the continuity of $a_k$ for $1 \leq k \leq n$ can be established by an easy induction.
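Explicitly, in our normalization the Newton identities read
\begin{equation}
a_k = -\frac{1}{k}\,\bigl(s_k + a_1 s_{k-1} + \ldots + a_{k-1} s_1\bigr), \qquad 1 \leq k \leq n,
\end{equation}
so that each $a_k$ is a polynomial expression in $s_1,\ldots,s_k$ and $a_1,\ldots,a_{k-1}$.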
Finally, for $(x,z) \in V \times D_r(z_0)$, define $G(x,z)$ as the quotient $F(x,z)/P(x,z)$ whenever $P(x,z)\neq 0$; since, for each fixed $x \in V$, the maps $z \mapsto F(x,z)$ and $z \mapsto P(x,z)$ have the same zeros in $D_r(z_0)$, counting multiplicities, the singularities of $z \mapsto G(x,z)$ are removable, and $G$ extends to a nonvanishing function on $V \times D_r(z_0)$.
\end{proof}
Before going any further, we need some topological remarks. A good exposition of such facts can be found in \cite{MN2003}, a great deal of which we reproduce for completeness. Let $X$ be a connected topological space. A point $p \in X$ separates the distinct points $a, b \in X\setminus\{p\}$ if there exist disjoint open sets $A$ and $B$ such that $a \in A$, $b \in B$ and $X\setminus\{p\} = A \cup B$. If the point $p$ belongs to every connected closed subset of $X$ containing $a$ and $b$, we say that $p$ cuts $X$ between $a$ and $b$. If $X$ is a locally connected and connected compact Hausdorff space, then $p$ cuts $X$ between $a$ and $b$ if and only if $p$ separates the points $a$ and $b$ (cf. \cite[Theorem 3-6]{HY1988}).
If $X$ is a connected compact Hausdorff space, there exists a minimal connected closed subset, with respect to set inclusion, containing both $a$ and $b$ (cf. \cite[Theorem 2-10]{HY1988}). If $X$ is hereditarily unicoherent, such a minimal set is unique and we denote it by $E[a,b]$. Clearly, every point in $E[a,b]\setminus\{a,b\}$ cuts $X$ between $a$ and $b$. Therefore, if we assume that $X$ is also locally connected, such points also separate $a$ and $b$. We define the separation order $\preceq$ in $E[a,b]$ the following way: for distinct points $p, q \in E[a,b]$, we say that $p \prec q$ if $p = a$ or $p$ separates $a$ and $q$. Then, we write $p \preceq q$ if $p=q$ or $p \prec q$. Such choice makes $E[a,b]$ into a totally ordered space (cf. \cite[Theorem 2-21]{HY1988}). If we define the order topology in $E[a,b]$ the usual way, then it coincides with the induced topology in $E[a,b]$ (cf. \cite[Theorem 2-25]{HY1988}). Also, by \cite[Theorem 2-26]{HY1988}, every non-empty subset of $E[a,b]$ has a least upper bound, i.e., $E[a,b]$ is order-complete.
To avoid repetitions, we assume henceforth that $X$ is a connected, locally connected, hereditarily unicoherent compact Hausdorff space, unless stated otherwise.
\begin{lemma}\label{Lemma:E[a,b]} The following two properties hold:
i-) Any connected subset of $X$ containing $a$ and $b$, must contain $E[a,b]$.
ii-) An arbitrary intersection of connected subsets of $X$ is either empty or connected.
\end{lemma}
\begin{proof}
The first part is a direct consequence of the fact that any point in the set $E[a,b]\setminus\{a,b\}$ separates $a$ and $b$. For the second part, let $\{M_\alpha\}$ be a collection of connected subsets of $X$ and suppose that $\cap_\alpha M_\alpha$ has at least two points. Given any pair of distinct points $a, b \in \cap_\alpha M_\alpha$, we must have $E[a,b] \subset M_\alpha$ for all $\alpha$, whence we obtain $E[a,b] \subset \cap_\alpha M_\alpha$. The connectedness of $\cap_\alpha M_\alpha$ is now obvious.
\end{proof}
The above lemma will be used very often later.
\begin{lemma} \label{Lemma:Extension}
Let $D \subset X$ be connected and $x^* \in \overline{D} \setminus D$. Suppose that the function $F \in \mathcal{H}\bigl(\CX\bigr)$ is nondegenerate at $x^*$ and consider $w \in C(D)$ such that $F(x,w(x)) = 0$ for all $x \in D$. Then, there exists the limit
\begin{equation}
\lim_{x \to x^*} w(x),\quad x \in D,
\end{equation}
in the Riemann sphere.
\end{lemma}
\begin{proof}
Denote the Riemann sphere by $\widehat{\mathbb{C}} = \mathbb{C} \cup \{\infty\}$ and let $\{U_\alpha\}_{\alpha \in I}$ be a local basis at $x^*$ consisting of connected open sets. It is readily seen that the family $\mathcal{F} = \bigl\{\overline{w(D \cap U_\alpha)}\: :\:\alpha \in I\bigr\}$ is a filterbase in $\widehat{\mathbb{C}}$. Since the latter is compact, $\mathcal{F}$ has at least one accumulation point, i.e.,
\begin{equation}
\mathcal{F}_{ac} = \bigcap_{\alpha \in I} \overline{w(D \cap U_\alpha)} \neq \emptyset.
\end{equation}
Next, by Lemma \ref{Lemma:E[a,b]}, it is easy to see that $D \cap U_\alpha$ is connected for all $\alpha \in I$ and the continuity of $w$ implies that $\overline{w(D \cap U_\alpha)}$ is also connected. Suppose that $\mathcal{F}_{ac}$ is not connected, i.e., there exist disjoint open sets $A, B \subset \widehat{\mathbb{C}}$ such that $\mathcal{F}_{ac} \subset A \cup B$, $\mathcal{F}_{ac} \cap A \neq \emptyset$ and $\mathcal{F}_{ac} \cap B \neq \emptyset$. Note that we can write
\begin{equation}
\bigcap_{\alpha \in I} \overline{w(D \cap U_\alpha)}\ \cap \ \bigl(\widehat{\mathbb{C}}\setminus(A \cup B)\bigr) = \mathcal{F}_{ac} \ \cap \ \bigl(\widehat{\mathbb{C}}\setminus(A \cup B)\bigr) = \emptyset
\end{equation}
and accordingly, the compactness of $\widehat{\mathbb{C}}$ implies the existence of a finite set of indices $\alpha_1, \ldots,\alpha_n \in I$ such that $\overline{w(D \cap U_{\alpha_1})} \cap \ldots \cap \overline{w(D \cap U_{\alpha_n})} \cap \bigl(\widehat{\mathbb{C}}\setminus(A \cup B)\bigr) = \emptyset$. Since $\mathcal{F}$ is a filterbase, we can find $\beta \in I$ such that $\overline{w(D \cap U_\beta)} \subset \overline{w(D \cap U_{\alpha_1})} \cap \ldots \cap \overline{w(D \cap U_{\alpha_n})}$ and thus, $\overline{w(D \cap U_\beta)} \subset A \cup B$. However, as $\mathcal{F}_{ac} \subset \overline{w(D \cap U_\beta)}$, we must have $\overline{w(D \cap U_\beta)} \cap A \neq \emptyset$ and $\overline{w(D \cap U_\beta)} \cap B \neq \emptyset$. Hence, $\overline{w(D \cap U_\beta)}$ cannot be connected, which is absurd.
We assume, towards a contradiction, that $\mathcal{F}_{ac}$ contains at least two points. Let $\epsilon>0$ be arbitrary and let $z^* \in \mathcal{F}_{ac}$, $z^* \neq \infty$. Pick $\delta > 0$ and a neighborhood $U_\gamma$ of $x^*$ with $\gamma\in I$ such that $|F(x,z) - F(x^*,z^*)|<\epsilon$ whenever $x \in U_\gamma$ and $|z-z^*|<\delta$. Since $z^* \in \overline{w(D \cap U_\gamma)}$, there exists $x_\gamma \in D \cap U_\gamma$ such that $|w(x_\gamma) - z^*|<\delta$, whence we obtain that $|F(x_\gamma,w(x_\gamma)) - F(x^*,z^*)|<\epsilon$. Given that $F(x_\gamma,w(x_\gamma)) = 0$, we must have $|F(x^*,z^*)| < \epsilon$. Since $\epsilon$ is arbitrary, $F(x^*,z^*) = 0$. Therefore, any finite point of $\mathcal{F}_{ac}$ is a root of $z \mapsto F(x^*,z)$. Since $F$ is nondegenerate at $x^*$, $z \mapsto F(x^*,z)$ is a non-constant entire function and therefore has at most countably many roots. As a result, $\mathcal{F}_{ac}$ is at most countable. Since it is also a non-empty,
connected subset of $\widehat{\mathbb{C}}$, we get our desired contradiction and conclude that $\mathcal{F}_{ac}$ reduces to a single point. Then, it is straightforward to see that such point must be the limit of $w(x)$ as $x \to x^*$.
\end{proof}
We now prove the main result of the paper.
\begin{theorem} \label{Thm:Main}
Let $F \in \mathcal{H}\bigl(\CX\bigr)$ be a nowhere degenerate function, having no asymptotic roots and assume that there exist $x_0 \in X$ and $z_0 \in \mathbb{C}$ such that $F(x_0, z_0) = 0$. Then there exists $w \in C(X)$ such that $w(x_0) = z_0$ and $F(x,w(x))=0$ for all $x \in X$.
\end{theorem}
\begin{proof}
Let $\mathfrak{D}$ be the set of pairs $(D,w)$, where $D \subset X$ is a connected subset containing $x_0$, $w \in C(D)$, $w(x_0) = z_0$ and $F(x,w(x))=0$ for all $x \in D$. The family $\mathfrak{D}$ is not empty, as it contains the pair $(D_0,w_0)$, where $D_0=\{x_0\}$ and $w_0:D_0 \to \mathbb{C}$ is defined by $w_0(x_0)=z_0$. We define a partial order in $\mathfrak{D}$ as follows: we write $(D_1,w_1) \leq (D_2,w_2)$ if $D_1 \subset D_2$ and $w_2|_{D_1} = w_1$.
Let $\{(D_\alpha,w_\alpha)\}_{\alpha \in I}$ be a chain in $\mathfrak{D}$. Set $\widetilde{D} = \bigcup_\alpha D_\alpha$ and define $\widetilde{w}:\widetilde{D} \to \mathbb{C}$ by $\widetilde{w}(x) = w_\alpha(x)$, if $x \in D_\alpha$. It is obvious that $\widetilde{D}$ is a connected subset of $X$ containing $x_0$ and $\widetilde{w}$ is a well defined function such that $\widetilde{w}(x_0) = z_0$ and $F(x,\widetilde{w}(x))=0$ for all $x \in \widetilde{D}$.
We subsequently prove that $\widetilde{w}$ is continuous on $\widetilde{D}$. Let $\widetilde{x} \in \widetilde{D}$ be arbitrary and consider a local basis $\{U_\beta\}_{\beta \in J}$ at $\widetilde{x}$ consisting of connected open sets. The family $\mathcal{F} = \bigl\{\overline{\widetilde{w}(\widetilde{D} \cap U_\beta)}\: :\:\beta \in J\bigr\}$ may be regarded as a filterbase in $\widehat{\mathbb{C}}$. If we denote its set of accumulation points by $\mathcal{F}_{ac} = \bigcap_{\beta} \overline{\widetilde{w}(\widetilde{D} \cap U_\beta)}$, it is obvious that $\widetilde{w}(\widetilde{x}) \in \mathcal{F}_{ac}$, since $\widetilde{x} \in \widetilde{D} \cap U_\beta$ for all $\beta \in J$.
We show that $\widetilde{w}(\widetilde{D} \cap U_\beta)$ is connected for all $\beta \in J$. Suppose on the contrary that there exist two disjoint open sets $A, B \subset \widehat{\mathbb{C}}$ such that $\widetilde{w}(\widetilde{D} \cap U_\beta) \subset A\cup B$, $\widetilde{w}(\widetilde{D} \cap U_\beta) \cap A \neq \emptyset$ and $\widetilde{w}(\widetilde{D} \cap U_\beta) \cap B \neq \emptyset$. Pick $\xi_A \in \widetilde{w}(\widetilde{D} \cap U_\beta) \cap A$ and $\xi_B \in \widetilde{w}(\widetilde{D} \cap U_\beta) \cap B$. Then, we can find $x_A, x_B \in \widetilde{D} \cap U_\beta$ such that $\widetilde{w}(x_A) = \xi_A$ and $\widetilde{w}(x_B) = \xi_B$. Note that
\begin{equation}
x_A \in \Biggl(\bigcup_{\alpha \in I} D_\alpha \Biggr) \cap U_\beta = \bigcup_{\alpha \in I} (D_\alpha \cap U_\beta)
\end{equation}
and accordingly, there exists an index $\alpha_1 \in I$ such that $x_A \in D_{\alpha_1} \cap U_\beta$. Similarly, there exists $\alpha_2 \in I$ such that $x_B \in D_{\alpha_2} \cap U_\beta$. Since $\{(D_\alpha,w_\alpha)\}_{\alpha \in I}$ is a chain, we may assume $D_{\alpha_1} \subset D_{\alpha_2}$. In that case, $x_A, x_B \in D_{\alpha_2} \cap U_\beta$, whence we derive that $E[x_A,x_B] \subset D_{\alpha_2} \cap U_\beta$, by an application of Lemma \ref{Lemma:E[a,b]}. Observe that $\widetilde{w}(E[x_A,x_B]) = w_{\alpha_2}(E[x_A,x_B])$ is connected; however, $\widetilde{w}(E[x_A,x_B]) \subset A \cup B$, $\xi_A \in \widetilde{w}(E[x_A,x_B]) \cap A$ and $\xi_B \in \widetilde{w}(E[x_A,x_B]) \cap B$, which is clearly impossible. We have reached a contradiction, which proves the connectedness of $\widetilde{w}(\widetilde{D} \cap U_\beta)$ for all $\beta \in J$. Therefore, $\overline{\widetilde{w}(\widetilde{D} \cap U_\beta)}$ is also connected and an analogous argument to that of Lemma \ref{Lemma:Extension} shows that $\mathcal{F}_{ac}$ must be connected as well.
Also, by reviewing the techniques introduced in the proof of Lemma \ref{Lemma:Extension}, it is straightforward to see that any finite point of $\mathcal{F}_{ac}$ is a zero of the non-constant entire function $z \mapsto F(\widetilde{x},z)$, which shows that $\mathcal{F}_{ac}$ is at most countable. Since it is also non-empty and connected, it must reduce to a single point, which in this case is obviously $\widetilde{w}(\widetilde{x})$. Then, it is easy to conclude that $\widetilde{w}$ is continuous at $\widetilde{x}$.
A standard application of Zorn's Lemma shows that $\mathfrak{D}$ has a maximal element, which we denote by $(D^*,w^*)$. We wish to prove that $D^*=X$.
We first show that $D^*$ is closed. Suppose, on the contrary, that there exists $x^* \in \overline{D^*}\setminus D^*$. A direct application of Lemma \ref{Lemma:Extension} shows that $w^*(x)$ has a limit in the Riemann sphere as $x \to x^*$ ($x \in D^*$), which cannot be infinity by the assumption on the non-existence of asymptotic roots for $F$. Therefore, $w^*$ has a continuous extension $\widetilde{w}^*$ to $D^* \cup \{x^*\}$. Note that the map $x \mapsto F(x,\widetilde{w}^*(x))$ vanishes on $D^*$ and is continuous on the connected set $D^* \cup \{x^*\}$, whence we deduce that $F(x,\widetilde{w}^*(x)) = 0$ for all $x \in D^* \cup \{x^*\}$. Consequently, we have proven that $(D^*,w^*) < (D^* \cup \{x^*\},\widetilde{w}^*)$, which contradicts the maximality of $(D^*,w^*)$.
Finally, suppose that $D^* \neq X$, i.e., there exists $y \in X\setminus D^*$. Since, as noted above, $E[x_0,y]$ is order-complete with respect to the separation order, there exists a least upper bound $m$ of $E[x_0,y] \cap D^*$. Since $D^*$ is closed, it is easy to see that $m \in D^*$; moreover, we have the inclusions $E[x_0,m] \subset D^*$ (by Lemma \ref{Lemma:E[a,b]}) and $E[m,y]\setminus\{m\} \subset X \setminus D^*$. By taking into account that $F(m,w^*(m))=0$ and $F$ is nowhere degenerate, we can use Lemma \ref{Lemma:Algebrization} to find an open disk $D_r(w^*(m))$ and a neighborhood $V$ of $m$ such that $F(x,z)=P(x,z)\:G(x,z)$ for all $(x,z) \in V \times D_r(w^*(m))$, where $P$ is a monic polynomial with coefficients in $C(V)$ and $G$ is free of zeros in $V \times D_r(w^*(m))$. Without loss of generality, we may assume that $V$ is connected, and then we select $y_1 \in E[m,y]\setminus\{m\}$ such that $E[m,y_1] \subset V$. Since $E[m,y_1]$ is a totally ordered and order-complete space, we can find $w_1 \in C(E[m,y_1])$ such that $P(x,w_1(x))=0$ for all $x \in E[m,y_1]$, by \cite[Theorem 3]{DP1964}. Also, given that $P(m,z)$ is a power of $(z-w^*(m))$ (see Lemma \ref{Lemma:Algebrization}), we must have $w_1(m) = w^*(m)$. By the continuity of $w_1$, we can pick $\bar{y} \in E[m,y_1]\setminus\{m\}$ such that $w_1(E[m,\bar{y}]) \subset D_r(w^*(m))$. Now, we write $\widetilde{D} = D^* \cup E[m,\bar{y}]$ and consider the function $\widetilde{w}:\widetilde{D} \to \mathbb{C}$ defined by
\begin{equation}
\widetilde{w}(x) = \begin{cases}
w^*(x), & x \in D^*;\\
w_1(x), & x \in E[m,\bar{y}].
\end{cases}
\end{equation}
It is easy to see that $D^*\setminus\{m\}$ and $E[m,\bar{y}]\setminus\{m\}$ are both open in $\widetilde{D}$, whence it may be inferred that $\widetilde{w}$ is continuous on $\widetilde{D}$. We prove that $F(x, \widetilde{w}(x))=0$ for all $x \in \widetilde{D}$. The result is obvious for $x \in D^*$. On the other hand, if $x \in E[m,\bar{y}]$, then it is straightforward to see that $\widetilde{w}(x) \in D_r(w^*(m))$ (recall the choice of $\bar{y}$) and consequently, we have $F(x,\widetilde{w}(x)) = P(x,\widetilde{w}(x))\:G(x,\widetilde{w}(x)) = 0$. Thus, we have shown that $(D^*,w^*) < (\widetilde{D},\widetilde{w})$, which contradicts the maximality of $(D^*,w^*)$. The proof is now complete.
\end{proof}
\begin{remark}
Note that we have assumed that $X$ is connected in the preceding theorem, while Miura and Niijima \cite{MN2003} have shown that such a restriction is unnecessary for $C(X)$ to be algebraically closed. Can we drop the connectedness hypothesis in Theorem \ref{Thm:Main}? Not completely. The connected components of a locally connected space are open. Hence, if we can find a root of $F$ in $C(X_\lambda)$ for every connected component $X_\lambda$ of $X$, we easily conclude that $F$ has a root in $C(X)$. If $F$ is nowhere degenerate and has no asymptotic roots, this can be done by Theorem \ref{Thm:Main}, \emph{provided that $F(x_0,z_0) = 0$ for some $x_0 \in X_\lambda$ and $z_0 \in \mathbb{C}$}. Such a condition is not always met for arbitrary functions $F \in \mathcal{H}\bigl(\CX\bigr)$ (e.g., take $F$ to be a suitable exponential function in one connected component of $X$). However, if $F$ is a non-constant monic polynomial, it is trivially fulfilled and we may recover the results from \cite{MN2003}.
\end{remark}
\begin{remark}
The restrictions imposed on $F$ in the hypotheses of Theorem \ref{Thm:Main} are not necessary for the existence of roots. For example, consider the algebra $C([0,1])$ and define $F_1(x,z) = \exp(xz) - 1$. It is clearly degenerate at $x_0=0$. Moreover, the function $\omega:(0,1] \to \mathbb{C}$ defined by $\omega(x) = 2\pi i x^{-1}$ is an asymptotic root of $F_1$. However, $F_1$ obviously has the zero function as a root.
\end{remark}
To finish this paper, we introduce two examples showing how the presence of degeneracy and asymptotic roots can interfere with the existence of roots.
\begin{example} Recall that $F$ is degenerate at $x_0 \in X$ if $z \mapsto F(x_0,z)$ is a constant map. Obviously, if it is not the zero map, $F$ cannot have any root. On the other hand, let $X=[0,1]$ and write $h(x)=\sin(1/x)$. Consider the function
\begin{equation}
F(x,z) = \begin{cases}
x\bigl(\exp z - \exp {h(x)}\bigr), & 0 < x \leq 1; \\
0, & x = 0.
\end{cases}
\end{equation}
It can be easily verified that $F\in \mathcal{H}\bigl(\CX\bigr)$. Also, note that $F$ is degenerate at $x_0=0$ and $z \mapsto F(0,z)$ is the zero function. Suppose that $w \in C(X)$ is a root of $F$. Then, $F(x,w(x))=0$ for all $x\in[0,1]$ implies that $w(x)=h(x)+2k(x)\pi i$ for $x \in (0,1]$, where $k(x) \in \mathbb{Z}$. By continuity, $k(x)$ must be constant, which yields $w(x)=\sin(1/x)+2k\pi i$ for all $x \in (0,1]$. Since this function does not have a continuous extension to the interval $[0,1]$, we have reached a contradiction. Moreover, although the function $g(x) = \sin(1/x)+2k\pi i$ satisfies $F(x,g(x))=0$ for all $x \in (0,1]$, it does not have a limit in the Riemann sphere as $x \to 0$. Therefore, the hypothesis of nondegeneracy is also essential for Lemma \ref{Lemma:Extension}.
\end{example}
\begin{example}
Let $X=[0,1]$. Consider the function $\varphi(z)=z\exp(-z)$ and any continuous curve $\omega:[0,1)\to \mathbb{C}$ such that $\omega(0)=0$, $\omega(x)=(1-x)^{-1}$ for $1/2\leq x<1$ and its image avoids the point $1$ (the zero of $\varphi'$). Define the function
\begin{equation}
F(x,z) = \begin{cases}
\varphi(z) - \varphi(\omega(x)), & 0 \leq x < 1; \\
\varphi(z), & x = 1.
\end{cases}
\end{equation}
It can be easily seen that $F \in \mathcal{H}\bigl(\CX\bigr)$ and is nowhere degenerate; however, $\omega$ is an asymptotic root of $F$. Suppose that $w\in C(X)$ is a root of $F$. Then, we must have $\varphi(w(x)) = \varphi(\omega(x))$ for all $x \in [0,1)$. We prove that the set $A = \{x\in [0,1)\:|\: w(x)=\omega(x)\}$ is open and closed in $[0,1)$. The second assertion is obvious from the continuity of $w-\omega$. On the other hand, if $w(x_0) = \omega(x_0) = z_0$, we have that $\varphi$ is locally injective at $z_0$ (since $\varphi'(\omega(x)) \neq 0$ for all $x \in [0,1)$). Since $\varphi(w(x)) = \varphi(\omega(x))$, the continuity of $w$ and $\omega$ implies that such functions must coincide in a neighborhood of $x_0$, proving that $A$ is open in $[0,1)$. Next, note that $0\in A$. Since $[0,1)$ is connected, we conclude that $A=[0,1)$. However, this means that $w(x) \to \infty$ as $x \to 1$, which is clearly absurd.
\end{example}
\bibliographystyle{amsplain}
\chapter{Introduction}
\indent
\input{introduction}
\chapter{The CAPTAIN Detector}
\indent
\input{detector}
\chapter{Neutrons}
\indent
\input{neutrons}
\chapter{Neutrinos}
\indent
\input{neutrinos}
\chapter{Conclusions}
\indent
\input{conclusions}
\newpage
\section{Cryostats}
The CAPTAIN project utilizes two cryostats for TPC development. The
first is a small, 1700L vacuum jacketed cryostat provided by UCLA for
the effort. It is being modified at LANL to provide features to
accommodate and test mini-CAPTAIN. The vacuum jacket is ~60.25
inches in diameter, and the vessel is ~64.4 inches in height.
The primary CAPTAIN cryostat is a 7700L vacuum insulated liquid argon
cryostat which will house the final TPC. It is an ASME Section VIII,
Division 1 U stamped vessel making operation at several national (or
international) laboratories more straightforward. The outer shell of the
cryostat is 107.5 inches in diameter, and it is ~115 inches tall. The
vessel is designed with a thin (3/16 inch) inner vessel to minimize
heat leak to the argon. All instrumentation and cryogenics are made
through the vessel top head. The vessel also has side ports allowing
optical access to the liquid argon volume for the laser calibration
system or other instrumentation. A work deck is to be mounted on the
top head to provide safe worker access to the top ports of the
cryostat. A baffle assembly will be included in the cold gas above
the liquid argon to mitigate radiation heat transfer from the
un-insulated top head. Figure~\ref{CAPTAIN} shows a schematic of the CAPTAIN
cryostat, TPC, and work deck.
\section{Cryogenics}
Liquid argon (LAr) serves as target and detection medium for the CAPTAIN detector. The argon must
stay in the form of a stable liquid and must remain minimally contaminated by
impurities such as oxygen and water. This is to prevent the loss of
drifting electrons to these
electronegative molecules.
It must also stay sufficiently free of contaminants such as nitrogen to
avoid absorption of the scintillation light.
The maximum drift distance is 100.0 cm for the full CAPTAIN detector and 32.0 cm for the prototype.
To achieve a sufficiently long drift-distance for electrons,
the O$_2$ contamination is
required to be smaller than
750 ppt for mini-CAPTAIN and 240 ppt for CAPTAIN.
The argon received at
Los Alamos from industry has an oxygen level of at most 2.3 ppm.
Quenching and absorption of
scintillation light are demonstrated~\cite{ref:WArP, ref:FNAL-N2} to be
negligible when the N$_2$
contamination is smaller than 2 ppm.
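As a rough consistency check of these requirements, one can use the common rule of thumb that the electron lifetime in liquid argon scales inversely with the O$_2$ concentration, $\tau\,[\mu\mathrm{s}] \approx 300/\rho_{\mathrm{O}_2}\,[\mathrm{ppb}]$ (an approximation; the attachment rate depends on the drift field and temperature), together with the drift velocity of 1.6 mm/$\mu$s quoted in the TPC section below:
\begin{verbatim}
# Electron lifetimes implied by the purity requirements above,
# compared with the maximum drift times (a sketch; assumes the
# rule of thumb tau[us] ~ 300 / rho_O2[ppb]).
V_DRIFT = 1.6  # mm/us at 500 V/cm
for name, rho_ppb, drift_mm in [("CAPTAIN", 0.240, 1000.0),
                                ("mini-CAPTAIN", 0.750, 320.0)]:
    tau = 300.0 / rho_ppb            # electron lifetime [us]
    t_drift = drift_mm / V_DRIFT     # maximum drift time [us]
    print(f"{name}: tau ~ {tau:.0f} us, ~{tau / t_drift:.1f}x the max drift")
\end{verbatim}
In both cases the implied lifetime is about twice the maximum drift time.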
The cryogenics system must receive liquid argon from a
commercial vendor, test its purity, and further purify it. Figure~\ref{fig:cryo} shows the basic design.
Cryogenic pumps and filter vessels purify the liquid in the detector by removing
electronegative contaminants. Cryogenic controls monitor and regulate the state of the argon in the
detector.
Commercial analytic instruments are used to characterize the oxygen and water
contaminant levels in the argon.
The CAPTAIN liquid argon delivery and purification design is
based on the experience of the
MicroBooNE experiment~\cite{ref:microboone}
and the
Liquid Argon Purity Demonstration (LAPD)~\cite{ref:lapd},
both based at Fermilab.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.6\textwidth]{purification.pdf}
\end{center}
\caption{The design of purification system~\cite{ref:microboone}~\cite{ref:lapd}.}
\label{fig:cryo}
\end{figure}
The CAPTAIN TPC has a liquid argon volume of 7.5 m$^3$, which is equivalent to
6300 m$^3$ of argon gas at STP. Assuming a bulk O$_2$ contamination level of
1.0 ppm, this corresponds to 8.253 grams of O$_2$ in the total volume of the
detector.
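The following back-of-the-envelope sketch reproduces this estimate; it assumes a liquid argon density of 1.396 g/cm$^3$ and that the 1.0 ppm contamination is a molar fraction, neither of which is stated explicitly above.
\begin{verbatim}
V_LAR = 7.5e6                 # liquid argon volume [cm^3]
RHO_LAR = 1.396               # assumed liquid argon density [g/cm^3]
M_AR, M_O2 = 39.948, 31.998   # molar masses [g/mol]

mol_ar = V_LAR * RHO_LAR / M_AR     # ~2.6e5 mol of argon
m_o2 = 1.0e-6 * mol_ar * M_O2       # grams of O2 at 1 ppm (molar)
print(f"{m_o2:.2f} g of O2")        # ~8.4 g, in line with the quoted ~8.25 g
\end{verbatim}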
The current design for the vessel that will
hold the two filter media has a total volume of $\sim$80.0 liters, or $\sim$40.0 liters
for each filter material.
The dual filter system consists of a bed of molecular sieve
(208604-5KG Type 4A) to remove moisture and another bed of activated copper
material (CU-0226 S 14 X 28) to remove oxygen.
Experience from LAPD shows this should be sufficient.
Both of these filter materials can be reactivated, after reaching
saturation, by flowing a mixture of argon gas with 2.5\% hydrogen gas at an elevated temperature.
The design
utilizes a 10-12 gal/min capacity commercial centrifugal pump.
Magnetic coupling prevents contamination through shaft seals.
The pump will be mounted with the filter vessel on a single skid in
order to achieve portability.
A sintered metal filter is used to remove dust from the
liquid argon prior to its delivery to the cryovessel.
\section{Electronics}
The electronic components for the TPC are identical to those of the
MicroBooNE experiment at FNAL~\cite{ref:microboone}. A block diagram of the
electronics is in Figure \ref{fig:electronics}. The front-end mother board is designed
with twelve custom CMOS Application Specific Integrated Circuits
(ASIC). Each ASIC reads out 16 channels from the TPC. The mother
board is mounted directly on the TPC wire planes and is designed to be
operated in liquid argon. The output signals from the mother board
are transmitted through the cold cables to the cryostat feed-thru to
the intermediate amplifier board. The intermediate amplifier is
designed to drive the differential signals through long cable lengths
to the 64 channel receiver ADC board. The digital signal is then
processed in an FPGA on the Front End Module (FEM) board. All signals
are transmitted via fiber optics from a transmit module to the data
acquisition computer.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.6\textwidth]{electronics.eps}
\end{center}
\caption{The MicroBooNE electronics chain to be used for CAPTAIN~\cite{ref:microboone}.}
\label{fig:electronics}
\end{figure}
\section{TPC}
\subsection{CAPTAIN TPC}
The TPC consists of a field cage in a hexagonal shape with a mesh
cathode plane on the bottom of the hexagon and a series of four wire
planes on the top with a mesh ground plane.
The apothem of the TPC
is 100 cm and the drift length between the anode
and cathode is 100 cm.
In the direction of the electron drift, there are four wire planes.
In order, they are the grid, U, V, and collection (anode) planes.
The construction material of the TPC is FR4 glass fiber
composite.
All wire planes have 75 $\mu$m diameter copper beryllium wire
spaced 3 mm apart and the plane separation is 3.125 mm. Each wire
plane has 667 wires.
The U and V planes detect the induced signal
when the electron passes through the wires.
The U and V wires are
oriented $\pm$60 degrees with respect to the anode wires.
The anode wires
measure the coordinate in the direction of the track, and U and V are
orthogonal to the track. The third coordinate is determined by the
drift time to the anode plane.
The field cage is realized in two modules: the
drift cage module, and the wire plane module. The wire plane module
incorporates a 2.54 cm thick FR4 structural component that supports
the load of the four wire planes so that the wire tension is
maintained. The field cage is double-sided, gold-plated, copper-clad
FR4 arranged with 5 mm wide traces separated by 1 cm. A resistive
divider chain provides the voltage for each trace. The design voltage
gradient on the divider chain is 500 V/cm when 50 kV is applied to the
cathode.
The electrons from the ionization event are collected on the
anode plane.
The U and V planes detect signals via induction and are made
transparent to electrons via biasing.
The drift velocity of the
electrons at 500 V/cm is 1.6~mm/$\mu$s; see Figure~\ref{fig:tpc}.
\begin{figure}[!htb]
\begin{center}
\includegraphics[width=0.6\textwidth]{TPCfigure.png}
\end{center}
\caption{Component detailed view of the CAPTAIN TPC.}
\label{fig:tpc}
\end{figure}
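The geometry and drift velocity above fix the readout time scale; the short sketch below computes the maximum drift times implied for the CAPTAIN and prototype drift lengths (100 cm and 32 cm, respectively).
\begin{verbatim}
V_DRIFT = 1.6  # mm/us at 500 V/cm
for name, d_mm in [("CAPTAIN", 1000.0), ("prototype", 320.0)]:
    print(f"{name}: {d_mm / V_DRIFT:.0f} us full drift")
# CAPTAIN: 625 us full drift; prototype: 200 us full drift
\end{verbatim}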
\subsection{Prototype TPC}
The prototype TPC is a smaller version of the CAPTAIN TPC. The drift
length is 32 cm and the apothem is 50 cm. Each
wire plane has 337 wires. The prototype is designed to test the
mechanical construction details of the TPC, the cold electronics, and
the back-end data acquisition system prior to the construction of the
full scale CAPTAIN. It also allows the early development of the data
acquisition software so that CAPTAIN can produce data as soon as the
hardware is ready. It will also provide the needed operational
experience to run the full scale CAPTAIN.
\section{Photon Detection System}
By detecting the scintillation light produced during interactions in the CAPTAIN detector, the photon detection system provides valuable information. Simulations show that detection of several photoelectrons per MeV for a minimum ionizing particle (MIP) in a TPC with a field of 500 V/cm improves the projected energy resolution of the detector by 10-20\%.
Such improvement stems from the anti-correlation between the production of scintillation
photons and ionization electrons, a phenomenon which has been conclusively observed
to improve calorimetry already in liquid xenon~\cite{ref:conti}.
Hints of it have already been seen
in our own re-analysis of older liquid argon data that included simultaneous measurements
of light and charge yields at the same electric fields~\cite{ref:doke}.
If confirmed by CAPTAIN, it will increase the utility of the photon detection
systems of other experiments such as LBNE, as well as argon-based dark matter
detectors. Just as with the charge signal, the amount of light produced by a
particle traversing argon is a function of the energy deposited.
The scintillation light can also be used to determine the energy of neutrons from time of flight when the experiment is placed in a neutron beamline,
since it gives the time of the interaction with a few-nanosecond resolution.
Liquid argon scintillates at a wavelength of 128 nm, which unfortunately is readily absorbed by most photodetector window materials. It is thus necessary to shift the light to the visible. The photon detection system is composed of a wavelength shifter covering a large area of the detector and a number of photodetectors to collect the visible light. The baseline CAPTAIN photon detection system uses tetraphenyl butadiene (TPB) as a wavelength shifter and sixteen Hamamatsu R8520-500 photomultiplier tubes (PMT) for light detection. The R8520 is a compact PMT approximately $1'' \times 1'' \times 1''$ in size with a borosilicate glass window and a special bialkali photocathode capable of operation at liquid argon temperatures (87 K). It has a 25\% quantum efficiency at 340 nm. TPB is the most commonly used wavelength shifter for liquid argon detectors and has a conversion efficiency of about 120\% when evaporated in a thin film. It has a re-emission spectrum that peaks at about 420 nm \cite{bibTPB}. The TPB will be coated on a thin piece of acrylic in front of the PMTs. Eight PMTs will be located on top of the TPC volume and eight on the bottom. This will provide a minimum detection of 2.2 photoelectrons per MeV for a MIP. The amount detected will increase if the entire top and bottom surfaces are coated with TPB.
The PMTs will use a base with cryogenically compatible discrete components. The cable from the base to the cryostat feedthrough is Gore CXN 3598 with a 0.045" diameter to reduce the overall heat load. The PMT signals will be digitized at 250 MHz using two 8-channel CAEN V1720 boards.
The digitizers are readout through fiber optic cables by a data acquisition system written for the MiniCLEAN experiment \cite{bibGastler}.
The CAPTAIN detector will serve as a test platform for the evaluation of
alternative photon detection system designs.
The design will allow testing options in an operating TPC
with cosmic muons and in various beamlines.
Such options include other wavelength shifting films,
acrylic light guides or
doped panels coupled to the photodetectors, and other types of
photodetectors such as SiPMTs, larger cryogenic PMTs,
and avalanche photodiode arrays.
In addition, CAPTAIN can test methods of calibrating the
photon detection system with the laser calibration system
or alternatively a series of UV and blue LEDs.
\section{Laser Calibration System}
The first measurement of photoionization of liquid argon was performed by Sun et al.~\cite{ref:SunLaser}.
Using a frequency-quadrupled Nd:YAG laser to generate 266 nm light, the authors demonstrated that the ionization
was proportional to the square of the laser intensity. The ionization potential of liquid argon is 13.78 eV,
slightly lower than the energy of 3 photons from a 266 nm quadrupled Nd:YAG laser. The ability to create
well-defined ionization tracks within a liquid argon TPC provides an excellent calibration source that can
be used to measure the electron lifetime in situ and to determine the drift field within the TPC itself.
Significant progress has been made in this field and is documented in Rossi et al.~\cite{ref:RossiLaser}.
The CAPTAIN TPC provides an excellent test bed for a future LBNE laser calibration system.
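As a simple check of the three-photon arithmetic: each 266 nm photon carries 4.66 eV (see Table~\ref{tab:laser}), so three photons carry $3 \times 4.66~\mathrm{eV} = 13.98~\mathrm{eV}$, just above the 13.78 eV ionization potential of liquid argon.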
\begin{table}[!htb]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Wavelength & 1064 nm & 532 nm & 266 nm \\
\hline
Pulse Energy & 850 mJ & 400 mJ & 90 mJ \\
Pulse Duration & 6 ns & 4.3 ns & 3 ns \\
Peak Power & 133 MW & 87 MW & 28 MW \\
Peak Intensity & 1500 GW/cm$^2$ & 985 GW/cm$^2$ & 317 GW/cm$^2$ \\
Photon Energy & 1.17 eV & 2.33 eV & 4.66 eV \\
Photon Flux & $8 \times 10^{30}~\gamma/(\mathrm{s\,cm^2})$ & $2.6 \times 10^{30}~\gamma/(\mathrm{s\,cm^2})$ & $0.42 \times 10^{30}~\gamma/(\mathrm{s\,cm^2})$ \\
\hline
\end{tabular}
\caption{Laser parameters.}
\label{tab:laser}
\end{center}
\end{table}
To avoid surface irregularities that may disperse the laser beam, the CAPTAIN TPC will employ optical
access on the sides of the detector (Fig.~\ref{CAPTAIN}). An existing
LANL Quantel ``Brilliant B'' Nd:YAG laser will be used to
ionize the liquid argon. The laser parameters are given in Table~\ref{tab:laser}. The laser and mirrors are in hand,
and the design of the mirror mounting system on the TPC frame has begun. The design seeks to be
flexible and to allow several paths through the liquid argon, including paths parallel and at an angle to the wire
plane. This will allow us to determine the electron lifetime within the CAPTAIN TPC.
\section{Special Run Modes}
\subsection{Tests of Doping Liquid Argon to Improve Light Output}
Previous research \cite{bibKubota1, bibKubota2, bibPollman} suggests
that there is a potential benefit to doping liquid argon with xenon or
other wavelength shifting compounds to improve the collection of
photons in a LAr TPC with little effect on the ionization readout.
The possible advantages would be to shorten the triplet state lifetime
for the scintillation photons from 1.6 microseconds and possibly shift
the scintillation light from 128 nm to 178 nm (for xenon) or higher
(for other compounds). A shift in scintillation light wavelength
would have a large impact: the Rayleigh scattering cross-section is
proportional to $1/\lambda^4$ (so the scattering length grows as $\lambda^4$), resulting in less scattering and better
time resolution in a large detector. Higher wavelength
scintillation light would also open up the possible use of other
photodetectors and remove the need for wavelength shifting coatings
such as TPB. CAPTAIN would serve as an ideal detector to study how
much xenon or dopant would be needed to speed up the triplet
lifetime. The literature has studied a broad range of levels from
several ppm to $\sim$1\% but only in small detectors with a poor
ability to determine the final mixture. How the xenon or dopant
remains in the LAr over time could be examined along with the
ability to achieve uniform mixing in a large detector.
With the TPC, the effect of concentration on the drift
velocity and electrostatic properties would be examined. Finally,
the best method for introducing the xenon or dopant could be developed.
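Taking the $\lambda^4$ scaling of the Rayleigh scattering length at face value, and neglecting the wavelength dependence of the refractive index, shifting the scintillation light from 128 nm to 178 nm would lengthen the scattering length by a factor of roughly $(178/128)^4 \approx 3.7$.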
\section{Running at NuMI}
LBNE will measure neutrino oscillation phenomena with a baseline of 1300 km
using, primarily, the first oscillation maximum. At that baseline, the neutrino
energies in the first maximum range from $\sim$1.5 to 5 GeV.
Neutrino cross-sections are poorly understood on any nuclear target in this
energy regime. For argon, the ArgoNEUT collaboration has produced the first
and only inclusive cross-section measurement in the energy regime important
for LBNE with 379 events~\cite{ref:argoneut} integrated over the neutrino
spectrum produced by the NuMI low-energy tune.
LBNE must use the {\it full} CC cross-section for the oscillation analysis and thus
must have robust methods to determine the neutrino energy.
A detailed study of interactions in the energy regime corresponding to the LBNE
first maximum is crucial. {\it The experiment simply will not work without it.}
Figure~\ref{fig:zellerformaggio} shows the state of the art for the exclusive channel:
$\nu_{\mu} p \rightarrow \mu^{-} p \pi^{+} \pi^{0} $ on a free nucleon~\cite{ref:forzel}.
Clearly more data are crucial in an era of precision neutrino physics.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5, angle=0]{two-pi-2.eps}
\caption{\label{fig:zellerformaggio} Existing measurements of the
$\nu_{\mu} p \rightarrow \mu^{-} p \pi^{+} \pi^{0} $ cross-section as a function of
neutrino energy reproduced from Figure 24 of Formaggio and Zeller~\cite{ref:forzel}. }
\end{center}
\end{figure}
\subsection{On-axis running in NuMI}
The NuMI beamline was constructed for the MINOS experiment and
will be running with the medium energy tune to support the No$\nu$a
and Miner$\nu$a experiments. The medium energy tune will provide
an intense neutrino beam with a broad peak between approximately
1 and 10 GeV.
The NuMI running of the CAPTAIN detector in both a neutrino and
antineutrino beam is an integral part of understanding the neutrino
cross sections needed by LBNE, and liquid argon detectors in general,
to interpret neutrino oscillation signals. Measurements of CAPTAIN in an
on-axis position in the NuMI beam are complementary to low-energy neutrino
measurements
made using MicroBooNE; moreover, with a fiducial mass approximately 20
times larger than the ArgoNEUT detector, CAPTAIN will contain the hadronic
system for a significant fraction of events. CAPTAIN will make high
statistics measurements of neutrino interactions and cross-sections in
a broad neutrino energy range, from pion production threshold to deep
inelastic scattering.
The fine resolution of the detector will allow detailed studies of low
energy protons that are often invisible in other neutrino detector
technologies. In addition, liquid argon TPCs give good separation
between pions, protons and muons over a broad momentum range. With
these characteristics, CAPTAIN will make detailed studies of
charged-current (CC) and neutral-current (NC) inclusive and exclusive
channels in the important and poorly understood neutrino energy range
where baryon resonances dominate. The first oscillation maximum
energy at LBNE (2--5 GeV)
is similarly dominated by baryon resonances.
The exclusive channels CAPTAIN will measure include single charged and
neutral pion production and single photon production. Measurements
near strange production threshold will also be made.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4, angle=0]{GENIEModesCAPTAINNuMI.pdf}
\caption{\label{CAPTAINandNUMI} Each graphic shows the furthest distance any charged particles or electromagnetic
showers travelled in the detector from the selection of contained events. The upper left figure shows quasielastic events. The upper right figure shows the same from events created via nucleon resonances. The lower figure shows the same from events created by deep inelastic scattering. The detector contains about 10\% of all interactions, where ``contained'' is defined in the text.}
\end{center}
\end{figure}
Since the NuMI running comes after the neutron running at LANSCE,
we will employ the neutron-interaction identification techniques developed
in CAPTAIN in the reconstruction of the energy of the hadronic system.
In addition to the physics program for this detector, running in a neutrino beam with
similar characteristics to the proposed LBNE beam will provide validation of its
technology, including exclusive particle reconstruction and identification, shower
reconstruction, and reconstruction of higher energy neutrino events with significant particle multiplicities.
Finally, in order to accommodate neutrino running, a simulation of neutrino interactions with argon with a neutrino
energy spectrum of the on-axis, medium-energy NuMI tune was carried out.
It was determined that the current geometry
would contain 10\% of all neutrino events, where containment is defined to be ``all but lepton and neutrons.''
Depending on the achieved POT, this will yield roughly 400,000 contained events per year. Distributions of the particle
that travels the furthest from the vertex (with the exception of neutrons and leptons) are shown in Figure~\ref{CAPTAINandNUMI}.
\subsection{Off-axis running in NuMI}
We may be able to deploy the CAPTAIN detector in front of the No$\nu$a near detector in an off-axis
position. The off-axis NuMI beam provides a relatively narrow neutrino energy peak at about 2 GeV.
Such running would provide a wealth of information at a specific neutrino energy close to the LBNE
oscillation maximum. While interesting in its own right, the information would serve as a valuable
lever arm for interpreting the broad-band on-axis data. Studies of this option are just beginning.
\section{Running at the SNS}
The measurement of the time evolution of the energy and flavor spectrum of neutrinos from supernovae can
revolutionize our understanding of neutrino properties and supernova physics, and can discover or tightly constrain
non-standard neutrino interactions. LBNE has the capability to make precise measurements of supernova
neutrinos. For example, collective neutrino oscillations imprint distinctive signatures on the time evolution
of the neutrino spectrum that depend, in a dramatic fashion, on the neutrino mass hierarchy and mixing
angle $\theta_{13}$. Current knowledge of the neutrino-argon cross sections and interaction products at
the relevant energies ($<$50 MeV), a range in which there are no neutrino measurements, limits the
ability of detectors to extract information on neutrino properties from a supernova neutrino burst. Cascades
of characteristic de-excitation gamma rays are expected to be associated with different interaction channels,
which could enable flavor tagging and background reduction. Currently the ability of LArTPC detectors to
observe these gamma rays (and accurately reconstruct their energy) is very uncertain.
CAPTAIN will afford a nearly unique opportunity
to measure key neutrino-nuclear cross sections in both
the charged and neutral current channels that would allow us to make better use of the supernova neutrino
burst signal.
The observation of 11 (mostly) anti-neutrino events from SN1987A confirmed the general model of supernova explosions, demonstrating that the bulk of the gravitational binding
energy, $8 \times 10^{52}$ ergs, is released in the form of neutrinos. The LBNE detector, with
34 kt of fiducial mass and an argon target, would detect more than 1000 events from a supernova at 10 kpc.
There are four processes that can be used to detect supernova neutrinos in a liquid argon detector:
\begin{equation}
\nu_e + ^{40}Ar \rightarrow e^{-} + ^{40}K^{*}
\end{equation}
\begin{equation}
\bar{\nu_{e}} + ^{40}Ar \rightarrow e^{+} + ^{40}Cl^{*}
\end{equation}
\begin{equation}
\nu_x + e^{-} \rightarrow \nu_x + e^{-}
\end{equation}
\begin{equation}
\nu_x + ^{40}Ar \rightarrow ^{40}Ar^{*} + \gamma
\end{equation}
The vast majority of these neutrinos would be detected via the first process above.
Though small in number, the elastic scattering events preserve the neutrino direction, enabling localization of the supernova, while the neutral-current reaction
allows for a calibration of the total energy released in neutrinos.
While the elastic scattering cross section has been measured, the
charged current reactions in argon have only theoretical predictions.
In addition, while elastic scattering can also be measured in water
Cherenkov detectors, the argon CC interaction is unique in
that it has a large cross-section and has the potential to provide a
better handle on the energy of the initial neutrino.
The estimated theoretical uncertainty in the cross section calculations
is stated to be 7\%; however, the authors note that a small change in the
Q value of the excited argon state could lead to a 10--15\% change in the cross section.
It has recently been realized that the evolution of the neutrinos as they leave the
protoneutron star surface is more complicated than previously believed. Collective
oscillations of the late-time (approximately 10--20 seconds after the neutronization burst)
neutrinos lead to a spectral swap~\cite{ref:duan2010}
shown in Figure \ref{SNswap}. The figure shows the probability that a neutrino of species x would survive without oscillating to another species as it propagates through the neutrinosphere. This survival probability is shown as a function of the neutrino energy (x-axis) and the emission angle from the protoneutron star (y-axis). The left panel is for the normal mass hierarchy and the right panel is for an inverted neutrino mass hierarchy. What can be seen is that in the normal hierarchy, all neutrinos with energy less than 10 MeV oscillate to a different species and all those above 10 MeV survive as the same species. For an inverted hierarchy it is just the opposite (the low-energy neutrinos survive without oscillation). Since the temperature (or energy spectrum) of the neutrinos is dependent upon the neutrino flavor, this spectral swap could be observed in a detector that is sensitive to electron neutrinos.
Extracting the physics from the detection of a Galactic supernova depends upon understanding of the neutrino argon cross sections, the ability to distinguish the neutrino charged current reaction given above from the anti-neutrino charged current reaction (which leads to an excited Cl nucleus), the ability to isolate the neutral current reaction, and the energy resolution of the detector.
To extract the neutrino physics from the detection of a supernova burst one needs to convert the measured electron neutrino spectrum to a source flux (as a function of energy) and to measure the complete (over all neutrino species) neutrino energy distribution. This will require accurate knowledge of the charged current cross section for converting Ar to K and the neutral current cross section for creating the excited $^{40}Ar$ state. In addition one needs to clearly tag such events, which can be done by detecting (and measuring the energy of) the de-excitation gamma rays as the excited states of K, Ar or Cl decay.
We propose to run the CAPTAIN (Cryogenic Apparatus for Precision Tests of Argon Interactions with Neutrinos) LAr TPC at the SNS to:
\begin{itemize}
\item Measure the neutrino argon charged current cross sections in the energy region of interest.
\item Investigate the capability of a liquid argon TPC to measure the de-excitation gamma rays from the excited nuclear states of Ar, Cl, and K.
\item Measure the energy resolution of a liquid argon TPC at low energies in a realistic environment similar to operation of LBNE at the surface, to demonstrate that one can properly tag the events.
\end{itemize}
The Spallation Neutron Source in Oak Ridge, TN, provides a high-intensity source of neutrinos from stopped pions in a mercury target. The energy spectrum of the neutrinos from a stopped pion source is well known,
has an endpoint near 50 MeV, and covers the energy range of supernova burst neutrinos.
Figure \ref{nspectrum} shows the SNS neutrino energy spectra and that of supernova burst neutrinos.
The timing characteristics of this source will help reduce the neutron background, though shielding will
most likely be required.
The interaction rates in argon are shown in Figure \ref{SNSEventRates} as a function of distance and argon mass.
A five-ton detector would measure thousands of events per year if sited sufficiently close to the target. Neutrino interaction cross-sections in argon and interaction product distributions, including de-excitation gammas, could be measured for the first time. Such an experiment would also be valuable for understanding the response of a LArTPC detector to neutrinos in this energy range.
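To make the scale of Figure \ref{SNSEventRates} concrete, the following back-of-the-envelope Python sketch estimates the charged-current event rate; the flux is the value quoted later in the text for 30 m from the SNS target, while the flux-averaged cross section is an assumed order-of-magnitude placeholder, not a measured number:
\begin{verbatim}
# Back-of-the-envelope CC event rate for a 5-ton LAr detector at the SNS.
N_A   = 6.022e23
flux  = 4.7e6          # nu / cm^2 / s at 30 m from the target
sigma = 2.6e-40        # cm^2, assumed flux-averaged nu_e-Ar CC cross section
targets = 5.0e6 / 40.0 * N_A             # 40Ar nuclei in 5 tons
rate = flux * sigma * targets * 3.15e7   # events per year
print(f"~{rate:.0f} CC events per year") # a few thousand, cf. the figure
\end{verbatim}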
\begin{figure}[!htb]
\begin{center}
\includegraphics[scale=0.75,angle=0]{SNswap.eps}
\caption{\label{SNswap} Spectral swap of the neutrino energy spectrum between normal hierarchy (left panel) and inverted hierarchy (right panel). The y-axis is the emission angle of the neutrino, which is not observable. The observed energy spectrum is the projection of these figures onto the x-axis. Figure from reference \cite{ref:duan2010}.}
\end{center}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\includegraphics[scale=0.75,angle=0]{nspectrum.eps}
\caption{\label{nspectrum} Neutrino energy spectra from supernova burst neutrinos (solid lines) and from the stopped pion source at the SNS. Figure from reference \cite{ref:Bolozdynya2013}.}
\end{center}
\end{figure}
\begin{figure}[!htb]
\begin{center}
\includegraphics[scale=0.5,angle=0]{SNSEventRates.eps}
\caption{\label{SNSEventRates} Estimated event counts per year in argon as a function of detector mass and distance from the SNS target \cite{ref:Bolozdynya2013}.}
\end{center}
\end{figure}
\section{Stopped Pion Source at the BNB}
The Booster Neutrino Beam (BNB) at FNAL was designed and built as a conventional neutrino
beam with a decay region to produce pion-decay-in-flight neutrinos for the MiniBooNE experiment and
will run to support the MicroBooNE experiment. Due to the short decay region, it can also serve
as a source of neutrinos from stopped pions in the target, horn, and surrounding structures. It therefore
could perform neutrino measurements similar to those outlined in the above section on the SNS.
The maximum beam power of the BNB is 32 kW, while that of the SNS is $\sim$1 MW.
While the SNS has a factor of 30 higher beam power, it is possible the CAPTAIN detector at the BNB could
be built as close as 10 m to the absorber. At the SNS, it is likely the detector would be at least 20 to 30 meters away from
the target. The BNB stopped-pion flux at 10 m from the absorber is estimated to be $2\times 10^{6}~\nu/cm^2/s$,
compared with $4.7\times 10^{6}~\nu/cm^2/s$ \cite{BNBFlux} at 30 m from the SNS target.
The BNB may be a competitive choice for carrying out measurements of low-energy neutrinos.
Further studies will be required to determine
neutron background rates and if it becomes a limiting factor in how close the detector can be to the BNB
absorber.
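The comparison can be made explicit with a one-line estimate (a sketch; the quoted fluxes already fold in beam power, duty factor, and geometry):
\begin{verbatim}
# Sanity check of the BNB-vs-SNS flux comparison quoted above.
bnb_flux_10m = 2.0e6   # nu / cm^2 / s at 10 m from the BNB absorber
sns_flux_30m = 4.7e6   # nu / cm^2 / s at 30 m from the SNS target
print(f"BNB/SNS flux ratio: {bnb_flux_10m / sns_flux_30m:.2f}")  # ~0.43
# Despite ~30x lower beam power, the shorter baseline (1/r^2 scaling)
# keeps the BNB flux within a factor of a few of the SNS value.
\end{verbatim}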
\section{Other Neutrino Possibilities}
There are other neutrino running possibilities. For example, space may be available in the SciBooNE hall
in the Booster Neutrino Beamline at Fermilab. While preliminary studies do not demonstrate a significant
benefit to MicroBooNE from running a 5-ton near detector, the situation could change if any anomalies arise
in MicroBooNE's first data.
Other spallation neutron sources exist around the world. If running at SNS or the BNB becomes problematic,
there may be opportunities at other facilities.
\section{Physics Importance}
A detailed understanding of neutron interactions with argon is crucial for the success of LBNE. They impact two major LBNE missions: low-energy neutrino detection, important for supernova neutrino studies, and neutrino oscillation studies with medium-energy neutrinos.
In the first case, neutron spallation on argon nuclei is an important channel for the production of isotopes that comprise the background to the detection of low-energy neutrinos, for example, those from supernova bursts.
Studying neutron spallation of the argon nucleus with a well-characterized neutron beam is therefore compelling. Additionally,
the neutral-current interactions of supernova neutrinos on argon nuclei will leave them in excited states.
This interaction can be well simulated by bombarding argon with
fast neutrons.
The detection and identification of de-excitation events following neutron-argon interactions is an important step in establishing whether or not neutral-current interactions of supernova neutrinos are detectable in a LAr TPC.
\begin{figure}
\begin{center}
\includegraphics[scale=0.4,angle=0]{PionRatio}
\caption{\label{PionRatio} Number of pions produced by n--n and n--p interactions and their ratios, from~\cite{ref:Brooks}.}
\end{center}
\end{figure}
In the second case, neutrons are an important component of the hadronic system in medium-energy neutrino interactions with argon.
Charged hadrons are well-measured and neutral pions will decay quickly into
gamma-rays that point back to the neutrino vertex. Neutrons, on the other hand,
will travel some distance from the neutrino vertex before interacting and will
complicate the reconstruction of these events.
Besides this, the study of neutron-induced pion production and spallation events in
LAr in terms of their topology, of the multiplicity and identity of the visible particles in
the final state and of their kinematic properties, is important for LBNE because similar
events will be produced by neutrino and antineutrino interactions.
Additionally,
at the near site, neutrons will comprise an important in-time background to
neutrino detection. Measuring neutron interactions as a function of neutron
energy up to relatively high kinetic energies is thus important. With these
topics in mind, we have developed a neutron running program with CAPTAIN
in addition to measurements at neutrino beams. Such measurements have not been previously performed; they are unique to this program.
For few-GeV neutrinos (antineutrinos), delta production is the principal source of pion production.
In neutral-current interactions, only $\Delta^+$ and $\Delta^0$ will be produced.
However, the neutrino cross section is larger by a factor of $\sim 2$ because of differing interference effects due to the opposite helicity of neutrinos and antineutrinos.
The final charge state of the pions produced in neutral current interactions is important for identifying the relative neutrino and antineutrino flux.
Pions are readily absorbed and can change their charge state via final state interactions, so it is also important to characterize the final state interactions in argon.
Figure \ref{PionRatio} left (reproduced from~\cite{ref:Brooks}) shows the cross-section
for pion production from 450 MeV neutrons at various angles from different nuclei scaled by $A^{2/3}$.
The upper 5 distributions are for $\pi^-$ while the one at the bottom is for $\pi^+$.
The large dominance of $\pi^-$ over $\pi^+$ might be surprising but is qualitatively understandable if one considers pion production as proceeding via delta production as shown in Figure \ref{PionRatio} right.
The figure on the right illustrates the asymmetry in pion production by neutrons on a N=Z nucleus via the delta resonance.
In reality, the observed ratio $\pi^-/\pi^+$ is always less than 11 indicating the
importance of the role played by final state interactions.
It will be important to separate the pions coming from deltas formed in the nucleus from those formed on the incident neutron as the former better reflect the deltas formed via neutrinos (antineutrinos).
The CAPTAIN program takes advantage of the proximity of the Los
Alamos Neutron Science Center (LANSCE) to the CAPTAIN commissioning hall. LANSCE
has a beamline with a well-characterized
neutron energy spectrum with an endpoint close to 800 MeV kinetic energy (Figure~\ref{NeutronFluxWNR}).
The energy of the incoming neutrons can be determined on an event-by-event basis by measuring their time of flight.
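As an illustration of the time-of-flight technique, the short Python sketch below converts an arrival time into a neutron kinetic energy using relativistic kinematics; the 20 m flight path is an assumed illustrative value, not the actual 15R geometry:
\begin{verbatim}
# Time-of-flight -> neutron kinetic energy (relativistic), as used for
# event-by-event energy tagging.
import math

M_N = 939.565   # neutron rest mass [MeV]
C   = 0.299792  # speed of light [m/ns]

def kinetic_energy(tof_ns, path_m=20.0):
    beta = path_m / (C * tof_ns)
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return M_N * (gamma - 1.0)

print(f"{kinetic_energy(90.0):.0f} MeV")  # a 90 ns neutron -> ~460 MeV
\end{verbatim}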
\begin{figure}
\begin{center}
\includegraphics[scale=0.4,angle=0]{WNRNeutronSpectra}
\caption{\label{NeutronFluxWNR} The neutron flux at the LANSCE WNR facility. It is anticipated that CAPTAIN would run in the 15R beamline (i.e., 15 degrees off center).}
\end{center}
\end{figure}
It is worth noting, while the goal for LBNE is to install the far detector deep underground,
the current approved scope for LBNE has the far detector on the surface.
For surface operations,
collection and analysis of neutron data are absolutely critical.
First, at the surface, there is a significant cosmic-ray
neutron flux that will impinge on the detector. The spectrum shown in Figure~\ref{NeutronFluxWNR}
is quite similar to the cosmic-ray spectrum, but much more intense. With a few days of running, we will measure the neutron production of isotopes such as $^{40}Cl$ that constitute an important background to supernova neutrino detection. Surface running also presents a challenge to the long-baseline neutrino program. High-energy cosmogenically produced neutrons can produce events
with neutral pions that can mimic the electron neutrino appearance signal for LBNE.
Currently, simulations show that $\sim$10\% of the electron neutrino appearance background could come from fast neutrons with complicated final-state interactions (FSI), but uncertainties are large and must be measured prior to finalizing surface shielding requirements and photon system specifications.
In the following we briefly describe the measurements that we plan to perform using CAPTAIN
in the LANSCE neutron beam.
\section{High-intensity neutron running}
The neutron flux shown in Figure~\ref{NeutronFluxWNR} is much more intense than that
produced by cosmic-ray interactions at the LBNE far detector site, so a single day of running
will produce years' worth of neutron spallation events in CAPTAIN.
We will run in an integrated mode where we expose the detector to the full intensity of the
beam, close the shutter, and observe the decay of isotopes such as $ ^{40}Cl$.
We are currently investigating making measurements in neutron beamlines with dedicated
detectors such as GEANIE (GErmanium Array for Neutron Induced Excitations)
for high-precision measurements of production cross-sections that will be input to
simulations in LBNE and will cross-check the measurements made in CAPTAIN to
determine the efficiency.
\section{Low-intensity neutron running}
Low-intensity running allows us to correlate specific topologies with neutron kinetic energy via
time of flight.
Although they are named ``low-energy neutron run'' and ``high-energy neutron run,'' both
measurements will be performed at the same time, given the wide range of the continuous
energy spectrum of the incoming neutrons.
\subsection{Low-energy neutron run}
The goal of a low intensity, low-energy neutron run is to measure an excitation spectrum in $^{40}Ar$ and to study the reconstruction capabilities of $^{40}Ar^*$ de-excitation events in a liquid argon TPC.
The detection of a galactic supernova neutrino burst in LBNE requires the capability to tag and identify the following charged-current (CC) and neutral-current (NC) interactions:
\begin{equation}
\begin{split}
\nu_e+^{40}Ar\rightarrow e^- +^{40}K^* \quad \mathrm{(CC)},\\
\nu_x+^{40}Ar\rightarrow \nu_x +^{40}Ar^* \quad \mathrm{(NC)}.
\end{split}
\end{equation}
In order to gain insight into the neutral-current interaction we propose to place the CAPTAIN detector in the neutron beam at LANSCE and measure:
\begin{equation}
n+^{40}Ar\rightarrow n +^{40}Ar^*
\end{equation}
This interaction will provide a test bed for identifying $^{40}Ar^*$ de-excitation events inside a liquid Argon TPC. The relationship between neutron-induced and neutrino-induced interactions on $^{40}Ar$ will be investigated by comparing the neutron beam data to Monte Carlo simulations of both types of events.
\subsection{High-energy neutron run}
The goal of a low-intensity, high-energy neutron run is to study neutron-induced events in order to characterize their topology, multiplicity, and the identity of the visible particles produced, along with their kinematic properties. We plan to compare our results with existing models, improve them, and use them to simulate LBNE events.
The experiment will use neutrons above $\sim 400$ MeV, where pion production can occur and the relevant events can be clearly seen in a LAr TPC. The differential cross section of pion production on C, Al, Cu, and W has been measured. The observed $A^{2/3}$ dependence of the cross sections shows that these cross sections can be readily predicted in argon, even if the final states are very uncertain.
In LAr the $\pi^-$ will be absorbed on an argon nucleus leading to a variety of multi-nucleon final states.
Brooks et al.\ \cite{ref:Brooks} did not observe anything beyond the pion, so measurements with CAPTAIN will provide further information on details of the interaction.
CAPTAIN will also measure spallation events in the neutron beam and try to measure the effect of these events on LBNE electron neutrino appearance backgrounds.
\section{Run Plans}
We anticipate neutron running in early FY15. The 2015 run cycle begins in August of 2014 and continues
through early calendar 2015. Proposals are due to the LANSCE PAC in mid-April of 2014, so we will
prepare our proposals during FY14.
\section{Introduction}
Vortex pinning by material defects \cite{CampbellEvetts_2006} determines the
phenomenological properties of all technically relevant (type II)
superconducting materials, e.g., their dissipation-free transport or magnetic
response. Similar applies to the pinning of dislocations in metals
\cite{Kassner_2015} or domain walls in magnets \cite{Gorchon_2014}, with the
commonalities found in the topological defects of the ordered phase being
pinned by defects in the host material: these topological defects are the
vortices \cite{Abrikosov_1957}, dislocations \cite{Burgers_1940}, or domain
walls \cite{Bloch_1932, LandauLifshitz_1935} appearing within the respective
ordered phases---superconducting, crystalline, or magnetic. The theory
describing the pinning of topological defects has been furthest developed in
superconductors, with the strong pinning paradigm
\cite{Labusch_1969,LarkinOvch_1979} having been developed considerably during the
last decade \cite{Koopmann_2004, Thomann_2012, Willa_2015_PRL, Buchacek_2019}.
In its simplest form, it boils down to the setup involving a single vortex
subject to one defect and the cage potential \cite{ErtasNelson_1996,
Vinokur_1998} of other vortices. While still exhibiting a remarkable
complexity, it produces quantitative results which benefit the comparison
between theoretical predictions and experimental findings
\cite{Buchacek_2019_exp}. So far, strong pinning has focused on isotropic
defects, with the implicit expectation that more general potential shapes
would produce small changes. This is not the case, as first demonstrated by
Buchacek et al.\ \cite{Buchacek_2020} in their study of correlation effects
between defects that can be mapped to the problem of a string pinned to an
anisotropic pinning potential. In the present work, we generalize strong
pinning theory to defect potentials of arbitrary shape. We find that this
simple generalization has pronounced (geometric) effects near the onset of
strong pinning that even change the growth of the pinning force density
$F_\mathrm{pin} \propto (\kappa - 1)^\mu$ with increasing pinning strength
$\kappa > 1$ in a qualitative manner, changing the exponent $\mu$ from $\mu =
2$ for isotropic defects \cite{Labusch_1969,Koopmann_2004} to $\mu = 5/2$ for
general anisotropic pinning potentials.
The pinning of topological defects poses a rather complex problem that has been
attacked within two paradigms, weak-collective- and strong pinning. These have
been developed in several stages: originating in the sixties of the last
century, weak pinning and creep \cite{LarkinOvch_1979} has been further
developed with the discovery of high temperature superconductors as a subfield
of vortex matter physics \cite{Blatter_1994}. Strong pinning was originally
introduced by Labusch \cite{Labusch_1969} and by Larkin and Ovchinnikov
\cite{LarkinOvch_1979} and has been further developed recently with several
works studying critical currents \cite{Koopmann_2004}, current--voltage
characteristics \cite{Thomann_2012, Thomann_2017}, magnetic field penetration
\cite{Willa_2015_PRL, Willa_2016, Gaggioli_2022}, and creep \cite{Buchacek_2018,
Buchacek_2019, Gaggioli_2022}; results on numerical simulations involving strong pins have
been reported in Refs.\ \onlinecite{Kwok_2016, Willa_2018a, Willa_2018b}. The
two theories come together at the onset of strong pinning: an individual
defect is qualified as weak if it is unable to pin a vortex, i.e., a vortex
traverses the pin smoothly. Crossing a strong pin, however, the vortex
undergoes jumps that mathematically originate in bistable distinct vortex
configurations, `free' and `pinned'. Quantitatively, the onset of strong
pinning is given by the Labusch criterion $\kappa = 1$, with the Labusch
parameter $\kappa \equiv \max[-e_p^{\prime \prime}]/\Cbar \sim f_p/\xi\Cbar$,
the dimensionless ratio of the negative curvature $e_p^{\prime\prime}$ of the
isotropic pinning potential and the effective elasticity $\Cbar$ of the
vortex lattice. Strong pinning appears for $\kappa > 1$, i.e., when the
lattice is soft compared to the curvatures in the pinning landscape.
So far, the strong pinning transition at $\kappa = 1$ has been described for
defects with isotropic pinning potentials; it can be mapped
\cite{Koopmann_2004} to the magnetic transition in the $h$-$T$
(field--temperature) space, with the strong-pinning phenomenology at $\kappa >
1$ corresponding to the first-order Ising magnetic transition at $T < T_c$ and
the critical point at $T = T_c$ corresponding to the strong pinning transition
at $\kappa = 1$. The role of the reduced temperature $T/T_c$ is then assumed
by the Labusch parameter $\kappa$ and the bistabilities associated with the
ferromagnetic phases at $T/T_c < 1$ translate to the bistable pinned and free
vortex states at $\kappa > 1$, with the bistability disappearing on
approaching the critical point, $T/T_c =1$ and $\kappa = 1$, respectively.
A first attempt to account for correlations between defects has been done in
Ref.\ \onlinecite{Buchacek_2020}. The latter analysis takes into account the
enhanced pinning force excerted by pairs of isotropic defects that can be cast
in the form of {\it anisotropic effective} pinning centers. Besides shifting
the onset of strong pinning to $\kappa = 1/2$ (with $\kappa$ defined for one
individual defect), the analysis unravelled quite astonishing (geometric)
features that appeared as a consequence of the symmetry reduction in the
pinning potential. In the present paper, we take a step back and study the
transition to strong pinning for anisotropic defect potentials $e_p({\bf R})$,
with $\mathbf{R}$ a planar coordinate, see Fig.\ \ref{fig:3D_setup}. Note that
collective effects of many weak defects can add up to effectively strong pins
that smoothen the transition at $\kappa = 1$, thereby turning the strong
pinning transition into a weak-to-strong pinning crossover.
We find that the onset of strong pinning proceeds quite differently when going
from the isotropic defect to the anisotropic potential of a generic defect
without special symmetries and further on to a general random pinning
landscape. The simplest comparison is between an isotropic and a uniaxially
anisotropic defect, acting on a vortex lattice that is directed along the
magnetic field ${\bf B} \parallel {\bf e}_z$ chosen parallel to the $z$-axis;
for convenience, we place the defect at the origin of our coordinate system
${\bf r} = ({\bf R}, z)$ and have it act only in the $z = 0$-plane. In this
setup, see Fig.\ \ref{fig:3D_setup}, the pinning potential $e_p({\bf R})$ acts
on the {\it nearest} vortex with a force ${\bf f}_p({\bf R}) = -\nabla_{\bf R}
e_p|_{z=0}$ attracting the vortex to the defect; the presence of the {\it
other vortices} constituting the lattice renormalizes the vortex elasticity
$\Cbar$. With the pinning potential acting in the $z = 0$ plane, the vortex
is deformed with a pronounced cusp at $z=0$, see Fig.\ \ref{fig:3D_setup}; we
denote the tip position of the vortex where the cusp appears by $\tilde{\bf R}$, while
the asymptotic position of the vortex at $z \to \pm \infty$ is fixed at
$\bar{\bf R}$. With this setup the problem can be reduced to a planar one, with the
tip coordinate $\tilde{\bf R}$ and the asymptotic coordinate $\bar{\bf R}$ determining the
location and full shape (and hence the pinning force) of the vortex line.
\begin{figure}[t]
\includegraphics[width = 1.\columnwidth]{figures/vortex_illustration}
\caption{Sketch of a vortex interacting with a defect located at the origin.
The vortex approaches the asymptotic position $\bar{\bf R}$ at $z \to \pm
\infty$ and is attracted to the defect residing at the origin; the cusp
at $z=0$ defines the tip position $\tilde{\bf R}$ and its angle quantifies the
pinning strength.}
\label{fig:3D_setup}
\end{figure}
In the case of an {\it isotropic} pin, e.g., produced by a point-like defect
\cite{Thomann_2012}, strong pinning first appears on a circle of finite radius
$R_m \sim \xi$, typically of order of the vortex core radius $\xi$, see left
panel of Fig.\ \ref{fig:Ras-plane}(a). This is owed to the fact that, given
the radial symmetry, the Labusch criterion $\kappa =
\max_R[-e_p^{\prime\prime}(R)]/\Cbar = 1$ is satisfied on a circle $R = R_m$
where the (negative) curvature $-e_p^{\prime\prime} >0$ is maximal.
Associated with the radius $R_m$ where the tip is located at $\kappa = 1$,
$\tilde{R}(\kappa = 1) \equiv \tilde{R}_m = R_m$, there is an asymptotic vortex position
$\bar{R}(\kappa = 1) = \bar{R}_m > \tilde{R}_m$. Increasing the Labusch parameter beyond
$\kappa = 1$, the circle of radius $\bar{R}_m$ transforms into a ring $\bar{R}_- <
\bar{R} < \bar{R}_+$ of finite width. Vortices placed inside the ring at small
distances $\bar{R} < \bar{R}_-$ near the defect are qualified as `pinned', while
vortices at large distances $\bar{R} > \bar{R}_+$ away from the pin are described as
`free', see right panel in Fig.\ \ref{fig:Ras-plane}(a); physically, we denote
a vortex configuration as `free' when it is smoothly connected to the
asymptotic undeformed state, while a `pinned' vortex is localized to a finite
region around the defect. Vortices placed inside the bistable ring at $\bar{R}_-
< \bar{R} < \bar{R}_+$ acquire two possible states, pinned and free (colored
magenta in Fig.\ \ref{fig:Ras-plane}, the superposition of red (pinned state)
and blue (free state) colors).
The onset of strong pinning for the {\it uniaxially anisotropic} defect
proceeds in several stages. Let us consider an illustrative example and
assume a defect with an anisotropy aligned with the axes and a steeper
potential along $x$. In this situation, strong pinning as defined by the
criterion $\kappa_m = 1$, with a properly generalized Labusch parameter
$\kappa_m$, appears out of two points $(\pm \bar{x}_m,0)$ where the Labusch
criterion $\kappa_m = 1$ is met first, see Fig.\ \ref{fig:Ras-plane}(b) left.
Increasing $\kappa_m > 1$ beyond unity, two bistable domains spread around
these points and develop two crescent-shaped areas (with their large extent
along $\bar{y}$) in asymptotic $\bar{\bf R}$-space, see Fig.\ \ref{fig:Ras-plane}(b)
right. Vortices with asymptotic positions within these crescent-shaped
regions experience bistability, while outside these regions the vortex state
is unique. Classifying the bistable solutions as `free' and `pinned' is not
possible, with the situation resembling the one around the gas--liquid
critical point with a smooth crossover (from blue to white to red) between
phases. With $\kappa_m$ increasing further, the cusps of the crescents
approach one another. As the arms of the two crescents touch and merge at
a sufficiently large value of $\kappa_m$, the topology of the bistable area
changes: the two merged crescents now define a ring-like geometry and separate
$\bar{\bf R}$-space into an inside region where vortices are pinned, an outside
region where vortices are free and the bistable region with pinned and
free states inside the ring-like region. As a result, the pinning geometry
of the isotropic defect is recovered, though with the perfect ring replaced by
a deformed ring with varying width. Using the language describing a
thermodynamic first-order transition, the cusps of the crescents correspond to
critical points while its boundaries map to spinodal lines; the merging of
critical points changing the topology of the bistable regions of the pinning
landscape goes beyond the standard thermodynamic analogue of phase diagrams.
\begin{figure}[t]
\includegraphics[width = 1.\columnwidth]{figures/Ras-plane}
\caption{Illustration of bistable regions in asymptotic $\bar{\bf R}$-space
for a vortex pinned to a defect located at the origin. (a) For an
isotropic defect (Lorentzian shape with $\kappa = 1,~1.5$), pinning
appears at $\kappa = 1$ along a ring with radius $\bar{R}_m$, with the
red area corresponding to pinned states and free states colored in
blue. With increasing pinning strength $\kappa$, see right panel at
$\kappa = 1.5$, a bistable region (in magenta) appears in a ring
geometry, with vortices residing inside, $\bar{R} < \bar{R}_-$, being pinned
and vortices outside, $\bar{R} > \bar{R}_+$, remaining free. Vortices with
asymptotic positions inside the ring ($\bar{R}_- < \bar{R} < \bar{R}_+$)
exhibit bistable states, pinned and free. The dashed circle $\bar{R}_0$
marks the crossing of pinned and free branches, see Fig.\
\ref{fig:e_pin}. (b) For a uniaxially anisotropic defect, see Eq.\
\eqref{eq:uniax_potential_formal} with $\epsilon = 0.3$ and largest
(negative) curvature along $x$, pinning appears in two points $(\pm
\bar{x}_m,0)$ along the $x$-axis. As the pinning strength increases
beyond unity, see right panel, bistable regions (magenta) develop in a
crescent-shape geometry. Pinned- and free-like states are smoothly
connected as indicated by the crossover of colors (see Sec.\
\ref{sec:Bas} for the precise description of coloring in terms of an
`order parameter'). As $\kappa_m$ further increases, the cusps of the
two crescents merge on the $y$-axis, changing the topology of the
$\bar{\bf R}$-plane through separation into inner and outer regions (not
shown). A ring-like bistable region appears as in $(\mathrm{a})$,
with the inner (outer) region corresponding to unique vortex states
that are pinned (free), while vortices residing inside the ring-shaped
domain exhibit bistable states, pinned and free.}
\label{fig:Ras-plane}
\end{figure}
The bistable area is defining the trapping area where vortices get pinned to
the defect; this trapping area is one of the relevant quantities determining
the pinning force density $F_\mathrm{pin}$, the other being the jumps in
energy associated with the difference between the bistable states
\cite{Labusch_1969, Koopmann_2004}, see the discussion in Secs.\
\ref{sec:F_pin_gen}, \ref{sec:F_pin_iso}, and \ref{sec:F_pin_anis} below. It is
the change in the bistable- and hence trapping geometry that modifies the
exponent $\mu$ in $F_\mathrm{pin} \propto (\kappa - 1)^\mu$, replacing the
exponent $\mu = 2$ for isotropic defects by the new exponent $\mu = 5/2$ for
general anisotropic pinning potentials.
While the existence of bistable regions $\mathcal{B}_{\Ras}$ in the space of asymptotic
vortex positions $\bar{\bf R}$ is an established element of strong pinning theory by
now, in the present paper, we introduce the new concept of unstable domains
$\mathcal{U}_{\Rti}$ in tip-space. The two coordinates $\tilde{\bf R}$ and $\bar{\bf R}$ represent dual
variables in the sense of the thermodynamic analog, with the asymptotic
coordinate $\bar{\bf R}$ corresponding to the driving field $h$ in the Ising model
and the tip position $\tilde{\bf R}$ replacing the magnetic response $m$; from a
thermodynamic perspective it is then quite natural to change view by going
back and forth between intensive ($h$) and extensive ($m$) variables. In tip
space $\tilde{\bf R}$, the onset of pinning appears at isolated points $\tilde{\bf R}_m$ that
grow into ellipses as $\kappa$ is increased beyond unity. These ellipses
describe {\it unstable areas} $\mathcal{U}_{\Rti}$ in the $\tilde{\bf R}$-plane across which vortex
tips jump when flipping between bistable states; they relate to the {\it
bistable crescent-shaped} areas $\mathcal{B}_{\Ras}$ in asymptotic space through the force
balance equation; the latter determines the vortex shape with elastic and
pinning forces compensating one another. The unstable regions $\mathcal{U}_{\Rti}$ in tip
space are actually more directly accessible than the bistable regions $\mathcal{B}_{\Ras}$
in asymptotic space and play an equally central role in the discussion of the
strong pinning landscape.
The simplification introduced by the concept of unstable domains $\mathcal{U}_{\Rti}$ in tip
space $\tilde{\bf R}$ is particularly evident when going from individual defects as
described above to a generic pinning landscape. Here, we focus on a model
pinning potential landscape (or short pinscape) confined to the
two-dimensional (2D) $\mathbf{R}$ plane at $z=0$; such a pinscape can be
produced, e.g., by defects that reside in the $z = 0$ plane. The pinned
vortex tip $\tilde{\bf R}$ then still resides in the $z=0$ plane as well and the strong
pinning problem remains two-dimensional. For a 2D random pinscape, unstable
ellipses appear sequentially out of different (isolated) points and at
different pinning strength $\kappa_m$; their assembly defines the unstable
area $\mathcal{U}_{\Rti}$, with each newly appearing ellipse changing the topology of
$\mathcal{U}_{\Rti}$, specifically, its number of components. Increasing $\kappa_m$, the
ellipses first grow in size, then deform away from their original elliptical
shapes, and finally touch and merge in a hyperbolic geometry. Such mergers
change, or more precisely reduce, the number of components in $\mathcal{U}_{\Rti}$ and hence
correspond again to topological transitions as described by a change in the
Euler characteristic $\chi$ associated with the shape of $\mathcal{U}_{\Rti}$. Furthermore,
these mergers tend to produce $\mathcal{U}_{\Rti}$ shapes that are non-simply connected,
again implying a topological transition in $\mathcal{U}_{\Rti}$ with a change in $\chi$.
Such non-simply connected parts of $\mathcal{U}_{\Rti}$ separate the tip space into `inner'
and `outer' regions that allows to define proper `pinned' states (localized
near a potential minimum) in the `inner' of $\mathcal{U}_{\Rti}$, while `free' states
(smoothly connected to asymptotically undeformed vortices) occupy the regions
outside of $\mathcal{U}_{\Rti}$.
The discussion below is dominated by three mathematical tools: for one, it is
the Hessian matrix $\mathrm{H}({\bf R})$ of the pinning potential
\cite{Buchacek_2020,Willa_2022} $e_p({\bf R})$, its eigenvalues
$\lambda_\pm({\bf R})$ and eigenvectors ${\bf v}_\pm({\bf R})$, its
determinant $\mathrm{det}[\mathrm{H}]({\bf R})$ and trace $\mathrm{tr}[\mathrm{H}]({\bf R})$.
The Hessian matrix involves the curvatures $\mathrm{H}_{ij} =
\partial_i\partial_j e_p({\bf R})$, $i, j \in \{x, y\}$, of the pinning
potential, that in turn are the quantities determining strong pinning, as can
be easily conjectured from the form of the Labusch parameter $\kappa \propto
-e_p^{\prime\prime}$ for the isotropic defect. The second tool is the
Landau-type expansion of the total pinning energy near the strong-pinning
onset around $\tilde{\bf R}_m$ at $\kappa_m = 1$ (appearance of a critical point) as
well as near merging around $\tilde{\bf R}_s$ at $\kappa(\tilde{\bf R}_s) \equiv \kappa_s = 1$
(disappearance of a pair of critical points); the standard manipulations as
they are known from the description of a thermodynamic first-order phase
transition produce most of the new results. Third, the topological structure
of the unstable domain $\mathcal{U}_{\Rti}$ associated with a generic 2D pinning landscape,
i.e., its components and their connectedness, is conveniently described
through its Euler characteristic $\chi$ with the help of Morse theory.
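As an illustration of how the Hessian analysis works in practice, the Python sketch below locates the strong-pinning onset points for an anisotropic Lorentzian defect by scanning for the most negative Hessian eigenvalue; both the model potential (with $\epsilon = 0.3$) and the identification $\kappa(\mathbf{R}) = -\lambda_-(\mathbf{R})/\Cbar$ are illustrative assumptions consistent with the isotropic limit, not the paper's exact definitions:
\begin{verbatim}
# Scan the plane for the most negative Hessian eigenvalue lambda_- of an
# anisotropic Lorentzian e_p(x, y); assuming kappa(R) = -lambda_-(R)/Cbar,
# strong pinning sets in where -lambda_- is largest.
import numpy as np

eps = 0.3

def e_p(x, y):  # units xi = e_p = 1; steeper along x
    return -1.0 / (1.0 + ((1.0 + eps) * x**2 + y**2) / 2.0)

h = 1e-4
def hessian(x, y):  # central finite differences
    dxx = (e_p(x+h, y) - 2*e_p(x, y) + e_p(x-h, y)) / h**2
    dyy = (e_p(x, y+h) - 2*e_p(x, y) + e_p(x, y-h)) / h**2
    dxy = (e_p(x+h, y+h) - e_p(x+h, y-h)
           - e_p(x-h, y+h) + e_p(x-h, y-h)) / (4*h**2)
    return np.array([[dxx, dxy], [dxy, dyy]])

pts = [(x, y) for x in np.linspace(-4, 4, 161)
              for y in np.linspace(-4, 4, 161)]
xm, ym = max(pts, key=lambda p: -np.linalg.eigvalsh(hessian(*p))[0])
print(f"onset at (x, y) = (+/-{abs(xm):.2f}, {ym:.2f})")
# -> two points on the x-axis, as described in the text
\end{verbatim}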
The structure of the paper is as follows: In Section \ref{sec:intro}, we
briefly introduce the concepts of strong pinning theory with a focus on the
isotropic defect. The onset of strong pinning by a defect of arbitrary shape
is presented in Sec.\ \ref{sec:arb_shape}; we start with a translation and
extension of the strong pinning ideas from the isotropic situation to a
general anisotropic one, that leads us to the Hessian analysis of the pinning
potential as our basic mathematical tool. Close to onset, we find (using a
Landau-type expansion, see Sec.\ \ref{sec:ell_expansion}) that the unstable
(Sec.\ \ref{sec:Uti}) and bistable (Sec.\ \ref{sec:Bas}) domains are
associated with minima of the determinant of the Hessian curvature matrix and
assume the shape of an ellipse and a crescent, respectively. Due to the
anisotropy, the geometry of the trapping region depends non-trivially on the
Labusch parameter and the critical exponent for the pinning force is changed
from $\mu=2$ to $\mu=5/2$, see Sec.\ \ref{sec:F_pin_anis}. The analytic
solution of the strong pinning onset for a weakly uniaxial defect presented in
Sec.\ \ref{sec:uniax_defect} leads us to define new hyperbolic points
associated with saddle points of the determinant of the Hessian curvature
matrix. These hyperbolic points describe the merging of unstable and bistable
domains, see Sec.\ \ref{sec:hyp_expansion}, and allow us to relate the new
results for the anisotropic defect to our established understanding of
isotropic defects. In a final step, we extend the local perspective on the
pinscape, as acquired through the analysis of minima and saddles of the
determinant of the Hessian curvature matrix, to a global description in terms
of the topological characteristics of the unstable domain $\mathcal{U}_{\Rti}$: in Sec.\
\ref{sec:2D_landscape}, we discuss strong pinning in a two-dimensional pinning
potential of arbitrary shape, e.g., as it appears when multiple pinning defects
overlap (though all located in one plane). We follow the evolution of the
unstable domain $\mathcal{U}_{\Rti}$ with increasing pinning strength $\kappa_m$ and express
its topological properties through the Euler characteristic $\chi$; the latter
is related to the local differential properties of the pinscape's curvature,
its minima, saddles, and maxima, through Morse theory. Finally, in Appendix
\ref{sec:eff_1D}, we map the two-dimensional Landau-type theories (involving
two order parameters) describing onset and merging, to effective
one-dimensional Landau theories and rederive previous results following
standard statistical mechanics calculations as they are performed in the
analysis of the critical point in the van der Waals gas.
\section{Strong pinning theory}\label{sec:intro}
We start with a brief introduction to strong pinning theory, keeping a focus
on the transition region at moderate values of $\kappa > 1$. We consider an
isotropic defect (Sec.\ \ref{sec:iso_def}) and determine the unstable and
bistable ring domains for this situation in Sec.\ \ref{sec:U-B-domains}. We
derive the general expression for the pinning force density $\mathbf{F}_\mathrm{pin}$ in Sec.\
\ref{sec:F_pin_gen}, determine the relevant scales of the strong pinning
characteristic near the crossover in Sec.\ \ref{sec:sp_char}, and apply the
results to derive the scaling $\mathbf{F}_\mathrm{pin} \propto (\kappa - 1)^2$ for the isotropic
defect (Sec.\ \ref{sec:F_pin_iso}). In Sec.\ \ref{sec:Landau}, we relate the
strong pinning theory for the isotropic defect to the Landau mean-field
description for the Ising model in a magnetic field.
\subsection{Isotropic defect}\label{sec:iso_def}
The standard strong-pinning setup involves a vortex lattice directed along $z$
with a lattice constant $a_0$ determined by the induction $B = \phi_0/a_0^2$
that is interacting with a dilute set of randomly arranged defects of density
$n_p$. This many-body problem can be reduced \cite{Koopmann_2004, Willa_2016,
Buchacek_2019} to a much simpler effective problem involving an elastic string
with effective elasticity $\Cbar$ that is pinned by a defect potential
$e_p({\bf R})$ acting in the origin, as described by the energy function
\begin{equation}\label{eq:en_pin_tot}
e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R}) = \frac{\Cbar}{2}(\tilde{\bf R}-\bar{\bf R})^2 + e_p(\tilde{\bf R})
\end{equation}
depending on the tip- and asymptotic coordinates $\tilde{\bf R}$ and $\bar{\bf R}$ of the
vortex, see Fig.\ \ref{fig:3D_setup}. The energy (or Hamiltonian)
$e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})$ of this setup involves an elastic term and the pinning
energy $e_p({\bf R})$ evaluated at the location $\tilde{\bf R}$ of the vortex tip. We
denote the depth of the pinning potential by $e_p$. A specific example is the
point-like defect that produces an isotropic pinning potential which is
determined by the form of the vortex \cite{Thomann_2012} and assumes a
Lorentzian shape $e_p(R) = -e_p/(1+ R^2/2\xi^2)$ with $R = \abs{{\bf R}}$; in
Sec.\ \ref{sec:arb_shape} below, we will consider pinning potentials of
arbitrary shape $e_p({\bf R})$ but assume a small (compared to the coherence
length $\xi$) extension along $z$. `Integrating out' the vortex lattice, the
remaining string or vortex is described by the effective elasticity $\bar{C}
\approx \nu \varepsilon (a_0^2/\lambda_{\rm\scriptscriptstyle L})
\sqrt{c_{66}c_{44}(0)} \sim \varepsilon \varepsilon_0/a_0$. Here,
$\varepsilon_0 = (\phi_0/4\pi \lambda_{\rm\scriptscriptstyle L})^2$ is the
vortex line energy, $\lambda_{\rm\scriptscriptstyle L}$ denotes the London
penetration depth, $\varepsilon < 1$ is the anisotropy parameter for a
uniaxial material \cite{Blatter_1994}, and $\nu$ is a numerical constant, see Refs.\
\onlinecite{Kwok_2016, Willa_2018b}.
\begin{figure}
\includegraphics[width = 1.\columnwidth]{figures/self_consistent_inset.pdf}
\caption{ Graphical illustration~\cite{Buchacek_2019} of the
self-consistent solution of the microscopic force-balance equation
Eq.\ \eqref{eq:force_balance} for a Lorentzian potential with $\kappa
= 2.5$. The vortex coordinates $\tilde{x}$ and $\bar{x}$ are expressed in
units of $\xi$. When moving the asymptotic vortex position $\bar{x}$
across the bistable interval $[\bar{x}_-,\bar{x}_+]$, we obtain three
solutions describing pinned $\tilde{x}_\mathrm{p} \lesssim \xi$, free
$\tilde{x}_\mathrm{f}$ close to $\bar{x}$, and unstable $\tilde{x}_\mathrm{us}$
states; they define the corresponding pinned (red), free (blue), and
unstable (black dotted) branches. The tip-positions at the edges of
the bistable interval denoted by $\tilde{x}_\mathrm{p+}$ and
$\tilde{x}_\mathrm{f-}$ denote jump points where the vortex tip turns
unstable, see Eq.\ \eqref{eq:der_force_balance}; they are defined by
the condition $f^\prime_p (\tilde{x}_\mathrm{p+}) = f^\prime_p
(\tilde{x}_\mathrm{f-}) = \Cbar$ (black solid dots). The associated
positions $\tilde{x}_\mathrm{f+}$ and $\tilde{x}_\mathrm{p-}$ denote the tip
landing points after the jump (open circles); they are given by the
second solution of Eq.\ \eqref{eq:force_balance} at the same
asymptotic position $\bar{x}$. The open red/blue circles and the cross
mark the positions of metastable minima and the unstable maximum in
Fig.\ \ref{fig:e_pin}. The lower right inset shows the weak-pinning
situation at $\kappa < 1$, here implemented with a larger $\Cbar$,
where the tip solution $\tilde{x}$ is unique for all $\bar{x}$.}
\label{fig:self-cons-sol}
\end{figure}
The most simple pinning geometry is for a vortex that traverses the defect
through its center. Given the rotational symmetry of the isotropic defect, we
choose a vortex that impacts the defect in a head-on collision from the left
with asymptotic coordinate $\bar{\bf R} = (\bar{x},0)$ and increase $\bar{x}$ along the
$x$-axis; finite impact parameters $\bar{y} \neq 0$ will be discussed later. The
geometry then simplifies considerably and involves the asymptotic vortex
position $\bar{x}$ and the tip position $\tilde{x}$ of the vortex, reducing the
problem to a one-dimensional one; the full geometry of the deformed string can
be determined straightforwardly \cite{Willa_2016} once the tip position $\tilde{x}$
has been found. The latter follows from minimizing \eqref{eq:en_pin_tot} with
respect to $\tilde{x}$ at fixed asymptotic position $\bar{x}$ and leads to the
non-linear equation
\begin{equation}\label{eq:force_balance}
\Cbar(\tilde{x}-\bar{x})=-\partial_x e_p|_{x=\tilde{x}} = f_p(\tilde{x}).
\end{equation}
This can be solved graphically, see Fig.\ \ref{fig:self-cons-sol}, and
produces either a single solution or multiple solutions---the appearance of
multiple tip solutions is the signature of strong pinning. The relevant
parameter that distinguishes the two cases is found by taking the derivative
of \eqref{eq:force_balance} with respect to $\bar{x}$ that leads to
\begin{equation}\label{eq:der_force_balance}
\partial_{\bar{x}} \tilde{x} = \frac{1}{1-f_p'(\tilde{x})/\Cbar},
\end{equation}
where prime denotes the derivative, $f'_p(x) =\partial_x f_p(x) = -
\partial_x^2 e_p(x)$. Strong pinning involves vortex instabilities, i.e.,
jumps in the tip coordinate $\tilde{x}$, that appear when the denominator in
\eqref{eq:der_force_balance} vanishes; this leads us to the strong pinning
parameter $\kappa$ first introduced by Labusch \cite{Labusch_1969},
\begin{equation}\label{eq:Lab_par}
\kappa = \max_{\tilde{x}} \frac{f'_p(\tilde{x})}{\Cbar} = \frac{f'_p(\tilde{x}_m)}{\Cbar},
\end{equation}
with $\tilde{x}_m$ defined as the position of maximal force derivative $f_p'$, i.e.,
$f_p''(\tilde{x}_m) = 0$, or maximal negative curvature $-e_p''$ of the defect
potential. Defining the force scale $f_p \equiv e_p/\xi$ and estimating the
force derivative or curvature $f_p^\prime = -e_p^{\prime\prime} \sim f_p/\xi$
produces a Labusch parameter $\kappa \sim e_p/\Cbar\xi^2$; for the Lorentzian
potential, we find that $f_p^\prime(\tilde{x}_m) = e_p /4 \xi^2$ at $\tilde{x}_m =
\sqrt{2}\,\xi$ and hence $\kappa = e_p/4\Cbar\xi^2$. We see that strong
pinning is realized for either large pinning energy $e_p$ or small effective
elasticity $\Cbar$.
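These Lorentzian expressions are easily verified symbolically; the following short sympy sketch reproduces $\tilde{x}_m = \sqrt{2}\,\xi$ and $f_p^\prime(\tilde{x}_m) = e_p/4\xi^2$:
\begin{verbatim}
# Symbolic check of the Lorentzian strong-pinning scales quoted above.
import sympy as sp

x, xi, ep = sp.symbols('x xi e_p', positive=True)
e_pot = -ep / (1 + x**2 / (2 * xi**2))
f_p = -sp.diff(e_pot, x)                   # pinning force
fp1 = sp.diff(f_p, x)                      # force derivative (curvature)
print(sp.solve(sp.diff(fp1, x), x))        # -> [sqrt(2)*xi]
print(sp.simplify(fp1.subs(x, sp.sqrt(2)*xi)))  # -> e_p/(4*xi**2)
\end{verbatim}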
As follows from Fig.\ \ref{fig:self-cons-sol} (inset), for $\kappa < 1$ (large
$\Cbar$) the solution to Eq.\ \eqref{eq:force_balance} is unique for all
values of $\bar{x}$ and pinning is weak, while for $\kappa > 1$ (small $\Cbar$),
multiple solutions appear in the vicinity of $\tilde{x}_m$ and pinning is strong.
These multiple solutions appear in a finite interval $\bar{x} \in
[\bar{x}_-,\bar{x}_+]$ and we denote them by $\tilde{x} = \tilde{x}_\mathrm{f},
\tilde{x}_\mathrm{p}, \tilde{x}_\mathrm{us}$, see Fig.\ \ref{fig:self-cons-sol}; they
are associated with free (weakly deformed vortex with $\tilde{x}_\mathrm{f}$ close
to $\bar{x}$), pinned (strongly deformed vortex with $\tilde{x}_\mathrm{p} < \xi$),
and unstable vortex states.
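The graphical construction of Fig.~\ref{fig:self-cons-sol} can be reproduced numerically; the minimal Python sketch below (units $\xi = e_p = 1$, $\kappa = 2.5$, and an asymptotic position $\bar{x} = -4.5$ chosen inside the bistable interval) finds the three tip solutions of Eq.~\eqref{eq:force_balance}:
\begin{verbatim}
# Tip solutions of the force-balance equation Cbar*(xt - xa) = f_p(xt)
# for a Lorentzian defect; units xi = e_p = 1, so kappa = 1/(4*Cbar).
import numpy as np
from scipy.optimize import brentq

kappa = 2.5
Cbar = 1.0 / (4.0 * kappa)

def f_p(x):                       # pinning force f_p = -de_p/dx
    return -x / (1.0 + x**2 / 2.0)**2

def g(x, xa):                     # force-balance residual
    return Cbar * (x - xa) - f_p(x)

xa = -4.5                         # inside the bistable interval
grid = np.linspace(-8.0, 2.0, 4001)
roots = [brentq(g, a, b, args=(xa,))
         for a, b in zip(grid[:-1], grid[1:]) if g(a, xa) * g(b, xa) < 0]
print(roots)  # three tips: free (~-4.0), unstable (~-1.7), pinned (~-0.5)
\end{verbatim}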
Inserting the solutions $\tilde{x}(\bar{x}) = \tilde{x}_\mathrm{f}(\bar{x}),
\tilde{x}_\mathrm{p}(\bar{x}), \tilde{x}_\mathrm{us}(\bar{x})$ of Eq.\
\eqref{eq:force_balance} at a given vortex position $\bar{x}$ back into the pinning
energy $e_{\mathrm{pin}}(\tilde{x};\bar{x})$, we find the energies of the corresponding branches,
\begin{equation}\label{eq:e_pin^i}
e^\mathrm{i}_\mathrm{pin} (\bar{x}) \equiv e_\mathrm{pin}[\tilde{x}_\mathrm{i}(\bar{x});\bar{x}],
\quad \mathrm{i} = \mathrm{f,p,us}.
\end{equation}
The pair $e_p(\tilde{x})$ and $e^\mathrm{i}_\mathrm{pin}(\bar{x})$ of energies in
tip- and asymptotic spaces then has its correspondence in the force: associated
with $f_p(\tilde{x})$ in tip space are the force branches
$f^\mathrm{i}_\mathrm{pin}(\bar{x})$ in asymptotic $\bar{x}$-space defined as
\begin{equation}\label{eq:f_pin^i}
f^\mathrm{i}_\mathrm{pin}(\bar{x}) = f_p[\tilde{x}_\mathrm{i}(\bar{x})],
\quad \textrm{i} = \mathrm{f,p,us}.
\end{equation}
Using Eq.\ \eqref{eq:force_balance}, it turns out that the force $f_{\mathrm{pin}}$
can be written as the total derivative of $e_{\mathrm{pin}}$,
\begin{equation}\label{eq:f_pin}
f_\mathrm{pin}(\bar{x}) = - \frac{d e_\mathrm{pin}[\tilde{x}(\bar{x});\bar{x}]}{d\bar{x}}.
\end{equation}
The multiple branches $e^\mathrm{i}_\mathrm{pin}$ and
$f^\mathrm{i}_\mathrm{pin}$ associated with a strong pinning situation at
$\kappa > 1$ are shown in Figs.\ \ref{fig:e_pin} and \ref{fig:f_pin}$(\mathrm{b})$.
\begin{figure}
\includegraphics[width = 1.\columnwidth]{figures/e_pin.pdf}
\caption{ Multi-valued pinning energy landscape
$e_{\mathrm{pin}}^\mathrm{i}(\bar{x})$ for a defect producing a Lorentzian-shaped
potential with $\kappa = 2.5$; the branches
$\mathrm{i}=\mathrm{p,f,us}$ correspond to the pinned (red), free
(blue), and unstable (black dotted) vortex states. The bistability
extends over the intervals $|\bar{x}| \in \left[\bar{x}_-, \bar{x}_+\right]$
where the different branches coexist; pinned and free vortex branches
cut at the branch crossing point $\bar{x}=\bar{x}_0$. A vortex traversing
the defect from left to right assumes the free and pinned states
marked with thick colored lines and undergoes jumps $\Delta
e_\mathrm{pin}^\mathrm{fp}$ and $\Delta e_\mathrm{pin}^\mathrm{pf}$ in
energy (vertical black solid lines) at the boundaries $-\bar{x}_-$ and
$\bar{x}_+$. The asymmetric occupation of states produces a finite
pinning force density $\mathbf{F}_\mathrm{pin}$. Inset: Total energy $e_{\mathrm{pin}}(\tilde{x};\bar{x})$
versus vortex tip position $\tilde{x}$ for a fixed vortex position $\bar{x}$
(vertical dashed line in the main figure). The points
$\tilde{x}_\mathrm{f}$, $\tilde{x}_\mathrm{p}$, and $\tilde{x}_\mathrm{us}$ mark the
free, pinned, and unstable solutions of the force-balance equation
\eqref{eq:force_balance}; they correspond to local minima and the
maximum in $e_{\mathrm{pin}}(\tilde{x};\bar{x})$ and are marked with corresponding
symbols in Fig.\ \ref{fig:self-cons-sol}.}
\label{fig:e_pin}
\end{figure}
\subsection{Unstable and bistable domains $\mathcal{U}_{\tilde{\bf R}}$ and
$\mathcal{B}_{\bar{\bf R}}$}\label{sec:U-B-domains}
Next, we identify the unstable (in $\tilde{x}$) and bistable (in $\bar{x}$) domains of
the pinning landscape that appear as signatures of strong pinning when
$\kappa$ increases beyond unity. Figure \ref{fig:f_pin}(a) shows the force
profile $f_p(\tilde{x})$ as experienced by the tip coordinate $\tilde{x}$. A vortex
passing the defect on a head-on trajectory from left to right undergoes a
forward jump in the tip from $-\tilde{x}_\mathrm{f-}$ to $-\tilde{x}_\mathrm{p-}$;
subsequently, the tip follows the pinned branch until $\tilde{x}_\mathrm{p+}$ and
then returns back to the free state with a forward jump from
$\tilde{x}_\mathrm{p+}$ to $\tilde{x}_\mathrm{f+}$. The {\it jump positions}
(later indexed by a subscript `$\mathrm{jp}$') are determined by the two
solutions of the equation
\begin{equation}\label{eq:uti_iso_j}
f_p'(x)\Big|_{-\tilde{x}_\mathrm{f-}, \tilde{x}_\mathrm{p+}} = \Cbar
\end{equation}
that involves the curvature of the pinning potential $e_p(x)$; the {\it
landing positions} $-\tilde{x}_\mathrm{p-}$ and $\tilde{x}_\mathrm{f+}$ (later indexed
by a subscript `$\mathrm{lp}$'), on the other hand, are given by the second
solution of the force-balance equation \eqref{eq:force_balance} that involves
the driving term $\Cbar(\tilde{x}-\bar{x})$ and hence depends on the asymptotic
position $\bar{x}$. Finally, the positions in asymptotic space $\bar{x}$ where the
vortex tip jumps are obtained again from the force balance equation
\eqref{eq:force_balance},
\begin{eqnarray}\label{eq:xas_pm}
\bar{x}_- &=& \tilde{x}_\mathrm{f-} - f_p(\tilde{x}_\mathrm{f-})/\Cbar, \\ \nonumber
\bar{x}_+ &=& \tilde{x}_\mathrm{p+} - f_p(\tilde{x}_\mathrm{p+})/\Cbar.
\end{eqnarray}
Note that the two pairs of tip jump and landing positions,
$\tilde{x}_\mathrm{p+},~\tilde{x}_\mathrm{f+}$ and $\tilde{x}_\mathrm{f-},~\tilde{x}_\mathrm{p-}$,
are associated with only two asymptotic positions $\bar{x}_+$ and $\bar{x}_-$.
Let us generalize the geometry and consider a vortex moving parallel to
$\bar{x}$, impacting the defect at a finite distance $\bar{y}$. We then have to
extend the above discussion to the entire $z=0$ plane, see Fig.\
\ref{fig:f_pin}. For an isotropic defect, the jump- and landing points now
define jump circles with radii $\tilde{R}_\mathrm{jp}$ given by $\tilde{R}_\mathrm{f-} =
\tilde{x}_\mathrm{f-}$ and $\tilde{R}_\mathrm{p+} = \tilde{x}_\mathrm{p+}$ (solid circles in
Fig.\ \ref{fig:f_pin}$(\mathrm{c})$) and landing circles with radii
$\tilde{R}_\mathrm{lp}$ given by $\tilde{R}_\mathrm{f+} = \tilde{x}_\mathrm{f+}$,
$\tilde{R}_\mathrm{p-} = \tilde{x}_\mathrm{p-}$ (dashed circles in Fig.\
\ref{fig:f_pin}$(\mathrm{c})$). Their combination defines an unstable ring
$\tilde{R}_\mathrm{p+} < \tilde{R} < \tilde{R}_\mathrm{f-}$ in tip space where tips cannot
reside. The existence of unstable domains $\mathcal{U}_{\tilde{\bf R}}$ in tip space is
a signature of strong pinning.
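The branch structure just described is easily traced numerically. The
following minimal Python sketch (an illustration, not part of the analytical
development) assumes the Lorentzian pinning energy $e_p(\tilde{x}) =
-e_p/(1+\tilde{x}^2/2\xi^2)$ in units $e_p = \xi = 1$, maps tip to asymptotic
positions via the force-balance equation \eqref{eq:force_balance}, and
extracts the unstable ring in tip space together with the bistable interval
in asymptotic space:
\begin{verbatim}
# Sketch: free/pinned/unstable branches of a Lorentzian pin
# (assumed model potential, units e_p = xi = 1).
import numpy as np

kappa = 2.5                    # Labusch parameter
Cbar = 1.0/(4.0*kappa)         # from kappa = e_p/(4 Cbar xi^2)

def f_p(x):                    # pinning force f_p = -e_p'(x)
    return -x/(1.0 + x**2/2.0)**2

xt = np.linspace(-8.0, 8.0, 200001)   # tip positions
xa = xt - f_p(xt)/Cbar                # asymptotic positions
# stable tips: d(xa)/d(xt) = 1 - f_p'(xt)/Cbar > 0
stable = np.gradient(xa, xt) > 0.0
ring = (~stable) & (xt > 0.0)
print("unstable ring:", xt[ring].min(), xt[ring].max())
print("bistable interval:", xa[ring].min(), xa[ring].max())
\end{verbatim}
The stability of a tip solution is read off from the sign of
$d\bar{x}/d\tilde{x} = 1 - f_p'(\tilde{x})/\Cbar \propto \Cbar + e_p''(\tilde{x})$,
which is negative exactly on the unstable branch.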
\begin{figure}
\includegraphics[width = 1.\columnwidth]{figures/force-scape.pdf}
\caption{(a) and (b): Force profiles $f_p(\tilde{x})$ and $f_\mathrm{pin}
(\bar{x})$ in tip- and asymptotic coordinates for a Lorentzian-shaped
potential with $\kappa = 2.5$. The tip of a vortex moving from left to
right along the $x$-axis approaches the defect on the free branch
(thick blue line), undergoes a jump (arrow) from $-\tilde{x}_\mathrm{f-}$ to
$-\tilde{x}_\mathrm{p-}$, follows the pinned branch (red) until
$\tilde{x}_\mathrm{p+}$, and then jumps back (arrow) to the free (blue)
state at $\tilde{x}_\mathrm{f+}$. Extending these jump positions to the
$(\tilde{x},\tilde{y})$-plane, see (c), defines jump (solid) and landing
(dashed) circles, with the jump circles enclosing an unstable domain
$\mathcal{U}_{\tilde{\bf R}}$ characteristic of strong pinning. The force
profile $f_\mathrm{pin} (\bar{x})$ in $(\mathrm{b})$ includes free
(blue), pinned (red), and unstable branches (black dotted). (d)
Extending the bistable intervals $[-\bar{x}_+,-\bar{x}_-]$ and
$[\bar{x}_-,\bar{x}_+]$ to the $[\bar{x},\bar{y}]$-plane defines a bistable ring
$\mathcal{B}_{\bar{\bf R}}$ (magenta), again a strong pinning characteristic.
The dashed circle of radius $\bar{R}_0$ in (d) marks the branch crossing
point. Vortices passing the defect with a finite impact parameter
$\bar{y} \neq 0$ move on a straight line in asymptotic space, see (d);
the associated trajectory in tip space is nontrivial, see (c), and
undergoes jumps at pinning (circle $\tilde{R}_\mathrm{f-}$) and
depinning (circle $\tilde{R}_\mathrm{p+}$).
}
\label{fig:f_pin}
\end{figure}
Figures \ref{fig:f_pin}$(\mathrm{b})$ and $(\mathrm{d})$ show the
corresponding results in asymptotic coordinates $\bar{x}$ and $\bar{\bf R}$,
respectively. The pinning force $f_\mathrm{pin}(\bar{x}) = f_p[\tilde{x}(\bar{x})]$ shown
in $(\mathrm{b})$ is simply an `outward tilted' version of $f_p(\tilde{x})$, with
$S$-shaped overhangs that generate bistable intervals $[-\bar{x}_+, -\bar{x}_-]$ and
$[\bar{x}_-, \bar{x}_+]$. Extending them to the asymptotic $\bar{\bf R}$-plane with radii
$\bar{R}_- \equiv \bar{x}_-$ and $\bar{R}_+ \equiv \bar{x}_+$, see Fig.\
\ref{fig:f_pin}$(\mathrm{d})$, we obtain a ring $\bar{R}_- < \bar{R} < \bar{R}_+$ that
marks the location of bistability. Again, the appearance of bistable domains
$\mathcal{B}_{\bar{\bf R}}$ in asymptotic space is a signature of strong pinning.
The sizes of both the unstable and bistable rings depend on the Labusch
parameter $\kappa$; the rings emerge from circles with radii $\tilde{R} = \tilde{x}_m$ and
$\bar{R} = \bar{x}_m = \tilde{x}_m - f_p(\tilde{x}_m)/\Cbar$ at $\kappa = 1$ and grow in
radius and width when $\kappa$ increases. The unstable and bistable domains
$\mathcal{U}_{\tilde{\bf R}}$ and $\mathcal{B}_{\bar{\bf R}}$ (see Ref.\
\onlinecite{Buchacek_PhD}) will exhibit interesting non-trivial behavior as a
function of $\kappa$ when generalizing the analysis to defect potentials of
arbitrary shape.
\subsubsection{Alternative strong pinning formulation}\label{sec:alt_sp}
An alternative formulation of strong pinning physics is centered on the local
differential properties of the pinning energy $e_{\mathrm{pin}}(\tilde{x}; \bar{x})$,
i.e., its extremal points in $\tilde{x}$ at different values of the asymptotic
coordinate $\bar{x}$. We start from Eq.\ \eqref{eq:en_pin_tot} restricted to
one dimension and rearrange terms to arrive at the expression
\begin{equation}\label{eq:e_pin_eff}
e_{\mathrm{pin}}(\tilde{x};\bar{x}) = e_\mathrm{eff}(\tilde{x}) - \Cbar \bar{x}\>\tilde{x} +\Cbar \bar{x}^2/2
\end{equation}
with the effective pinning energy
\begin{equation}\label{eq:e_eff}
e_\mathrm{eff}(\tilde{x}) = e_p(\tilde{x}) + \Cbar \tilde{x}^2/2
\end{equation}
involving both pinning and elastic terms. Equation \eqref{eq:e_pin_eff}
describes a particle at position $\tilde{x}$ subject to the potential
$e_\mathrm{eff}(\tilde{x})$ and the force term $f \> \tilde{x} = -\Cbar\bar{x} \> \tilde{x}$,
see also Ref.\ \onlinecite{Willa_2022}. The potential $e_\mathrm{eff}(\tilde{x})$
can trap two particle states if there is a protecting maximum with negative
curvature $\partial_{\tilde{x}}^2 e_\mathrm{eff} = \partial_{\tilde{x}}^2 e_{\mathrm{pin}} < 0$,
preventing its escape from the metastable state at forces $f = \pm \Cbar \bar{x}$
with $\bar{x} \in [\bar{x}_+,\bar{x}_-]$; the maximum in $e_{\mathrm{pin}}$ at $\tilde{x}_\mathrm{us}$
then separates two minima in $e_{\mathrm{pin}}$ defining distinct branches with different
tip coordinates $\tilde{x}_\mathrm{p}$ and $\tilde{x}_\mathrm{f}$, see the inset of Fig.\
\ref{fig:e_pin}.
As the asymptotic position $\bar{x}$ approaches the boundaries $\bar{x}_\pm$, one of
the minima joins up with the maximum to define an inflection point with
\begin{equation}\label{eq:e_eff_jp}
[\partial_{\tilde{x}}^2 e_\mathrm{eff}]_{\xti_{\mathrm{jp}}} = [\partial_{\tilde{x}}^2 e_{\mathrm{pin}}]_{\xti_{\mathrm{jp}}} = 0,
\end{equation}
that corresponds to the instability condition \eqref{eq:uti_iso_j} where the
vortex tip jumps; the persistent second minimum in $e_{\mathrm{pin}}(\tilde{x};\bar{x})$ defines
the landing position $\xti_{\mathrm{lp}}$ and the condition for a flat inflection point
$[\partial_{\tilde{x}} e_{\mathrm{pin}}]_{\xti_{\mathrm{jp}}} = 0$ defines the associated asymptotic
coordinate $\pm \bar{x}_\pm$.
Finally, strong pinning vanishes at the Labusch point $\kappa = 1$, with the
inflection point in $e_\mathrm{eff}(\tilde{x})$ coalescing with the second minimum
at $\tilde{x}_m$, hence
\begin{eqnarray}\label{eq:e_eff_m}
[\partial_{\tilde{x}}^2 e_\mathrm{eff}]_{\tilde{x}_m} &=& 0 \quad \textrm{and}\\
\nonumber
[\partial_{\tilde{x}}^3 e_\mathrm{eff}]_{\tilde{x}_m} &=& [\partial_{\tilde{x}}^3 e_p]_{\tilde{x}_m} = 0.
\end{eqnarray}
Note the subtle use of $e_{\mathrm{pin}}$ versus $e_\mathrm{eff}$ versus $e_p$ in the
above discussion: as we go to higher derivatives, first the asymptotic
coordinate $\bar{x}$ becomes irrelevant in the second derivative $\partial_{\tilde{x}}^2
e_{\mathrm{pin}} = \partial_{\tilde{x}}^2 e_\mathrm{eff}$ and then all of the elastic
response, i.e., $\Cbar$, drops out of the third derivative $[\partial_{\tilde{x}}^3
e_{\mathrm{pin}}] = [\partial_{\tilde{x}}^3 e_p]$.
The above alternative formulation of strong pinning turns out to be helpful in
several discussions below, e.g., in the derivation of the strong pinning
characteristics near the transition in Secs.\ \ref{sec:sp_char} and
\ref{sec:ell_expansion} and in the generalization of the instability condition
to an anisotropic defect in Sec.\ \ref{sec:arb_shape}; furthermore, it provides
an inspiring link to the Landau theory of phase transitions discussed below in
Sec.\ \ref{sec:Landau}.
\subsection{Pinning force density $\mathbf{F}_\mathrm{pin}$}\label{sec:F_pin_gen}
Next, we determine the pinning force density $\mathbf{F}_\mathrm{pin}$ at strong pinning,
assuming a random homogeneous distribution of pins with a small density $n_p$,
$n_p a_0\xi^2 \ll 1$, see Refs.\ \onlinecite{Willa_2016, Buchacek_2019}. The
derivation of $\mathbf{F}_\mathrm{pin}$ is conveniently done in asymptotic $\bar{\bf R}$ coordinates
where vortex trajectories follow simple straight lines. Vortices approach
the pin by following the free branch until its termination, jump to the pinned
branch to again follow this to its termination, and finally jump back to the
free branch. This produces an asymmetric pinned-branch occupation
$p_{c}(\bar{\bf R})$ that leads to the pinning force density (we assume vortices
approaching the defect along $\bar{x}$ from the left; following convention, we
include a minus sign)
\begin{align}\label{eq:F_pin_vec}
\mathbf{F}_c &= - n_p \int \frac{d^2\bar{\bf R}}{a_0^2}\bigl[
p_{c}(\bar{\bf R})\mathbf{f}^\mathrm{p}_\mathrm{pin}(\bar{\bf R}) + (1-p_{c}(\bar{\bf R}))
\mathbf{f}^\mathrm{f}_\mathrm{pin}(\bar{\bf R})\bigr]\nonumber\\
&= - n_p \int \frac{d^2\bar{\bf R}}{a_0^2} p_{c}(\bar{\bf R})
[\partial_{\bar{x}}\Delta e^\mathrm{fp}_\mathrm{pin}(\bar{\bf R})]\,\mathbf{e}_{\bar{x}},
\end{align}
with the energy difference $\Delta e^\mathrm{fp}_\mathrm{pin}(\bar{\bf R}) =
e^\mathrm{f}_\mathrm{pin}(\bar{\bf R}) - e^\mathrm{p}_\mathrm{pin}(\bar{\bf R})$ and
$\mathbf{e}_{\bar{x}}$ the unit vector along $\bar{x}$; the $\bar{y}$-component of the
pinning force density vanishes due to the antisymmetry in
$f_{\mathrm{pin},\bar{y}}$. For the isotropic defect, the jumps $\Delta
e^\mathrm{fp}_\mathrm{pin} (\bar{\bf R})$ in energy appearing upon changing branches
are independent of angle and the average in \eqref{eq:F_pin_vec} separates in
$\bar{x}$ and $\bar{y}$ coordinates; note that the energy jumps are no longer
constant for an anisotropic defect and hence such a separation does not occur.
Furthermore, i) all vortices approaching the defect within the transverse
length $|\bar{y}| < \bar{R}_-$ get pinned, see Fig.\ \ref{fig:f_pin}(d), while those
passing further away follow smooth (weak pinning) trajectories that do not
undergo jumps and hence do not contribute to the pinning force, and ii) all
vortices that get pinned contribute the same force that is most easily
evaluated for a head-on vortex--defect collision on the $\bar{x}$-axis with
$p_c(\bar{x}) = \Theta(\bar{x} + \bar{x}_-) - \Theta(\bar{x} - \bar{x}_+)$ and
\begin{align}\label{eq:f_pin_av_def}
\langle f_\mathrm{pin} \rangle &= - \!\! \int_{-a_0/2}^{a_0/2} \frac{d\bar{x}}{a_0} \>
\bigl[ p_{c}(\bar{x}) f^\mathrm{p}_\mathrm{pin}(\bar{x}) + (1-p_{c}(\bar{x}))
f^\mathrm{f}_\mathrm{pin}(\bar{x})\bigr]\nonumber\\
&= \frac{\Delta e^\mathrm{fp}_\mathrm{pin}(-\bar{x}_-) +
\Delta e^\mathrm{pf}_\mathrm{pin}(\bar{x}_+)}{a_0},
\end{align}
where we have replaced $-\Delta e^\mathrm{fp}_\mathrm{pin}(\bar{x}_+)$ by $\Delta
e^\mathrm{pf}_\mathrm{pin}(\bar{x}_+) > 0$. Hence, the average pinning force
$\langle f_\mathrm{pin} \rangle$ is given by the jumps in the pinning energy
$e_\mathrm{pin}^\mathrm{i}(\bar{x})$ associated with different branches
$\mathrm{i} = \mathrm{p,f}$, see Fig.\ \ref{fig:e_pin}.
Finally, accounting for trajectories with finite impact parameter
$|\bar{y}| < \bar{R}_-$, we arrive at the result for the pinning force density
$\mathbf{F}_\mathrm{pin}$ acting on the vortex system,
\begin{equation}\label{eq:F_pin}
F_\mathrm{pin} = n_p \frac{2\bar{R}_-}{a_0} \langle f_\mathrm{pin} \rangle
= n_p \frac{2 \bar{R}_-}{a_0} \frac{\Delta e^\mathrm{fp}_\mathrm{pin} +
\Delta e^\mathrm{pf}_\mathrm{pin}}{a_0},
\end{equation}
where the factor $2\bar{R}_-/a_0$ accounts for the averaging of the
pinning force along the $\bar{y}$-axis. As strong pins act independently, a
consequence of the small defect density $n_p$, the pinning force density is
linear in the defect density, $F_\mathrm{pin} \propto n_p$. If pinning is
weak, i.e., $\kappa < 1$, we have no jumps, $\langle f_\mathrm{pin} \rangle =
0$, and $F_\mathrm{pin}|_\mathrm{strong} = 0$. A finite pinning force then
only arises from correlations between pinning defects and scales in density as
\cite{LarkinOvch_1979,Koopmann_2004} $F_\mathrm{pin}|_\mathrm{weak} \propto n_p^{2}$. This
contribution to the pinning force density $\mathbf{F}_\mathrm{pin}$ persists beyond $\kappa =
1$; hence, while the strong pinning onset at $\kappa = 1$ can be formulated in
terms of a transition, weak pinning evolves into strong pinning through a
smooth crossover.
Knowing the pinning force density $F_\mathrm{pin}$, the motion of the vortex
lattice follows from the bulk dynamical equation
\begin{equation}\label{eq:macroscopic_force_balance}
\eta \mathbf{v} = \mathbf{F}_{\rm \scriptscriptstyle
L}(\mathbf{j})-\mathbf{F}_\mathrm{pin}.
\end{equation}
Here, $\eta = B H_{c2}/\rho_n c^2$ is the Bardeen-Stephen viscosity
\cite{Bardeen_1965} (per unit volume; $\rho_n$ is the normal state
resistivity) and $\mathbf{F}_{\rm \scriptscriptstyle L} = \mathbf{j} \times
\mathbf{B}/c$ is the Lorentz force density driving the vortex system. The
pinning force density $\mathbf{F}_\mathrm{pin}$ is directed along
$\mathbf{v}$, in our case along $x$.
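For instance, setting $\mathbf{v} = 0$ in Eq.\
\eqref{eq:macroscopic_force_balance} yields the critical (depinning) current
density $j_c = c\,F_\mathrm{pin}/B$ (for $\mathbf{j} \perp \mathbf{B}$) where
the Lorentz force just compensates the pinning force; at larger drives, the
lattice moves with velocity $v = [F_{\rm \scriptscriptstyle L}(j) -
F_\mathrm{pin}]/\eta$.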
Next, we determine the strong pinning characteristics $\bar{x}_-$, $\bar{x}_+$,
$\tilde{x}_{\mathrm{f}\pm}$, $\tilde{x}_{\mathrm{p}\pm}$, $\Delta
e^\mathrm{fp}_\mathrm{pin}$ and $\Delta e^\mathrm{pf}_\mathrm{pin}$ as a
function of the Labusch parameter $\kappa$ close to the strong pinning
transition, i.e., $\kappa \gtrsim 1$.
\subsection{Strong pinning characteristics near the transition} \label{sec:sp_char}
Near the strong pinning transition at $\kappa \gtrsim 1$, we can derive
quantitative results for the strong pinning characteristics by expanding the
pinning energy $e_{\mathrm{pin}}(\tilde{x};\bar{x})$ in $\tilde{x}$ at fixed $\bar{x}$; this is
reminiscent of the Landau expansion of the free energy $f(\phi,h)$ in the order
parameter $\phi$ at a fixed field $h$ in a thermodynamic transition, see Sec.\
\ref{sec:Landau} below for a detailed discussion.
We expand $e_{\mathrm{pin}}(\tilde{x};\bar{x})$ in $\tilde{x}$ around the point of first instability
$\tilde{x}_m$ by introducing the relative tip and asymptotic positions $\tilde{u} = \tilde{x}
- \tilde{x}_m$ and $\bar{u} = \bar{x} - \bar{x}_m$ and make use of our alternative strong
pinning formulation summarized in Sec.\ \ref{sec:alt_sp}. At $\tilde{x}_m$ and
close to $\kappa = 1$, we have $[\partial_{\tilde{x}}^2 e_{\mathrm{pin}}]_{\tilde{x}_m} =
[\partial_{\tilde{x}}^2 e_p]_{\tilde{x}_m} + \Cbar = \Cbar (1-\kappa)$ and
$[\partial_{\tilde{x}}^3 e_{\mathrm{pin}}]_{\tilde{x}_m} = 0$, hence,
\begin{eqnarray}\label{eq:e_pin_expans}
e_\mathrm{pin}(\tilde{x};\bar{x}) \approx \frac{\Cbar}{2}
(1-\kappa)\> \tilde{u}^{2} + \frac{\gamma}{24} \>
\tilde{u}^{4} - \Cbar \bar{u} \tilde{u},
\end{eqnarray}
where we have introduced the shape parameter $\gamma = [\partial^4_{\tilde{x}}
e_p]_{\tilde{x}_m}$ describing the quartic term in the expansion and we have made
use of the force balance equation \eqref{eq:force_balance} to rewrite
$f_p(\tilde{x}_m) = \Cbar (\tilde{x}_m - \bar{x}_m)$; furthermore, we have dropped all
irrelevant terms that do not depend on $\tilde{u}$.
We find the jump and landing positions $\xti_{\mathrm{jp}}$ and $\xti_{\mathrm{lp}}$ by exploiting the
differential properties of $e_{\mathrm{pin}}(\tilde{x})$ at fixed $\bar{x}$: as discussed
above, the vortex tip jumps at the boundaries $\bar{x}_\pm$ of the bistable
regime, where $e_{\mathrm{pin}}$ develops a flat inflection point at $\xti_{\mathrm{jp}}$ with one
minimum joining up with the unstable maximum and the second minimum at the
landing position $\xti_{\mathrm{lp}}$ staying isolated. Within our fourth-order expansion
the jump positions at (de)pinning are placed symmetrically with respect to the
onset at $\tilde{x}_m$,
\begin{equation}\label{eq:jp_pos}
\tilde{x}_\mathrm{p+} = \tilde{x}_m + \tilde{u}_\mathrm{jp}, ~~~
\tilde{x}_\mathrm{f-} = \tilde{x}_m - \tilde{u}_\mathrm{jp}
\end{equation}
and imposing the condition $[\partial_{\tilde{u}}^2e_{\mathrm{pin}}]_{\xti_{\mathrm{jp}}} =
\Cbar(1-\kappa) + \gamma\,\tilde{u}_\mathrm{jp}^2/2 = 0$ (that is
equivalent to the jump condition $f_p'[\tilde{x}_\mathrm{f-}] =
f_p'[\tilde{x}_\mathrm{p+}] = \Cbar$ of Eq.\ \eqref{eq:uti_iso_j}, see also Fig.\
\ref{fig:self-cons-sol}), we find that
\begin{equation}\label{eq:ujp}
\tilde{u}_\mathrm{jp} \approx
- \sqrt{\frac{2\Cbar}{\gamma}} (\kappa-1)^{1/2}.
\end{equation}
In order to find the (symmetric) landing positions, it is convenient to shift
the origin of the expansion to the jump position, $\tilde{u} \to \tilde{u} - \uti_{\mathrm{jp}} \equiv
\tilde{u}'$, and define the jump distance $\Delta \tilde{u}$,
\begin{eqnarray}\label{eq:lp_pos}
\tilde{x}_\mathrm{f+} = \tilde{x}_\mathrm{p+} + \Delta\tilde{u}, ~~~ \tilde{x}_\mathrm{p-} =
\tilde{x}_\mathrm{f-} - \Delta\tilde{u}.
\end{eqnarray}
At the jump position, the linear and quadratic terms in $\tilde{u}'$ vanish,
resulting in the expansion (up to an irrelevant constant)
\begin{equation}\label{eq:e_pin_expans_jp}
e_\mathrm{pin}(\tilde{x}_\mathrm{p+} + \tilde{u}';\bar{x}_+) \approx
\frac{\gamma}{6} \uti_{\mathrm{jp}} \tilde{u}^{\prime\, 3}
+ \frac{\gamma}{24} \tilde{u}^{\prime\, 4}
\end{equation}
and similarly at $\tilde{x}_\mathrm{f-}$ and $\bar{x}_-$ for a left-moving vortex. This
expression is minimal at the landing position $\tilde{x}_\mathrm{f+}$, i.e., at
$\tilde{u}' = \Delta \tilde{u}$, $[\partial_{\tilde{u}'} e_\mathrm{pin}]_{\Delta \tilde{u}} = 0$,
and we find the jump distance
\begin{equation}\label{eq:j_dist}
\Delta\tilde{u} = - 3 \tilde{u}_\mathrm{jp}.
\end{equation}
Inserting this result back into \eqref{eq:e_pin_expans_jp}, we obtain the jump
in energy $\Delta e_{\mathrm{pin}}^\mathrm{pf} = e_\mathrm{pin}(\tilde{x}_\mathrm{p+};\bar{x}_+) -
e_\mathrm{pin}(\tilde{x}_\mathrm{f+};\bar{x}_+)$,
\begin{equation}\label{eq:d_epin^pf}
\Delta e_{\mathrm{pin}}^\mathrm{pf} (\bar{x}_+) \approx \frac{\gamma}{72}(\Delta\tilde{u})^4
\approx \frac{9\Cbar^2}{2\gamma}(\kappa - 1)^2,
\end{equation}
and similarly at $\bar{x}_-$. Note that all these results have been obtained
without explicit knowledge of the asymptotic coordinates $\bar{x}_\pm$ where
these tip jumps are triggered. The latter follow from the force equation
\eqref{eq:force_balance} that corresponds to the condition
$[\partial_{\tilde{x}}e_{\mathrm{pin}}]_{\xti_{\mathrm{jp}}} = 0$ for a flat inflection point. Using the
expansion \eqref{eq:e_pin_expans} of the pinning energy, we find that
\begin{equation}\label{eq:bs_pos}
\bar{x}_{\pm} - \bar{x}_m = \mp \frac{2}{3} \tilde{u}_\mathrm{jp}(\kappa - 1)
= \pm \frac{2}{3} \sqrt{\frac{2\Cbar}{\gamma}} (\kappa - 1)^{3/2}.
\end{equation}
The pair $\bar{x}_m$ and $\tilde{x}_m$ of asymptotic and tip positions depends on the
details of the potential; while $\tilde{x}_m$ derives solely from the shape
$e_p(\tilde{x})$, $\bar{x}_m$ as given by \eqref{eq:force_balance} involves $\Cbar$
and shifts $\propto (\kappa - 1)$. For a Lorentzian potential, we find that
\begin{equation}\label{eq:xti_xas_m}
\tilde{x}_m = \sqrt{2}\xi, \quad \bar{x}_m = 2\sqrt{2} \xi + \sqrt{2}\xi (\kappa - 1).
\end{equation}
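The second relation follows from $\bar{x}_m = \tilde{x}_m - f_p(\tilde{x}_m)/\Cbar$
upon inserting the force $f_p(\tilde{x}_m) = -e_p/(2\sqrt{2}\,\xi) =
-\sqrt{2}\,\kappa\Cbar\xi$ of the Lorentzian-shaped potential.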
The shape coefficient is $\gamma = 3e_p/4\xi^4$ and the Labusch parameter is
given by $\kappa = e_p/4\Cbar\xi^2$ (hence $\Cbar^2/\gamma = e_p/12
\kappa^2$), providing us with the results
\begin{equation}\label{eq:ujp_depin}
\tilde{u}_\mathrm{jp} \approx -\xi \, [2(\kappa-1)/3]^{1/2} \mathrm{~~and~~}
\Delta e_{\mathrm{pin}}^\mathrm{pf} \approx \frac{3}{8} e_p (\kappa - 1)^2.
\end{equation}
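These asymptotics are readily verified numerically; the following sketch
(same Lorentzian model and units as in the sketch of Sec.\
\ref{sec:U-B-domains}) compares the jump position obtained from the exact
condition \eqref{eq:uti_iso_j} with the leading-order result:
\begin{verbatim}
# Numerical check of Eq. (ujp_depin) near onset (sketch;
# assumed Lorentzian model, units e_p = xi = 1).
import numpy as np
from scipy.optimize import brentq

def fp1(x):                   # f_p'(x) for the Lorentzian
    return (1.5*x**2 - 1.0)/(1.0 + x**2/2.0)**3

kappa = 1.01
Cbar = 1.0/(4.0*kappa)
xm = np.sqrt(2.0)             # inflection point x_m
# pinned-side jump position from f_p'(x) = Cbar
xjp = brentq(lambda x: fp1(x) - Cbar, 0.5, xm)
print(xjp - xm)                          # numerical u_jp
print(-np.sqrt(2.0*(kappa - 1.0)/3.0))   # asymptotics above
\end{verbatim}
The two numbers agree up to corrections of higher order in $\kappa - 1$.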
\subsection{Pinning force density for the isotropic defect}\label{sec:F_pin_iso}
Using the results of Sec.\ \ref{sec:sp_char} in the expression
\eqref{eq:F_pin} for the pinning force density, we find, to leading order in
$\kappa -1$,
\begin{equation}\label{eq:F_pin_iso_result}
F_\mathrm{pin} = 9 n_p \frac{2\bar{x}_m}{a_0} \frac{\Cbar^2}{\gamma a_0}
(\kappa - 1)^2.
\end{equation}
The scaling $F_\mathrm{pin} \sim n_p (\xi/a_0)^2 f_p (\kappa - 1)^2$ (with
$\Cbar \xi^2/e_p \sim 1/\kappa$, up to a numerical factor) derives uniquely from
the scaling $\propto (\kappa - 1)^2$ of the energy jumps in \eqref{eq:d_epin^pf},
as the asymptotic trapping length $\bar{x}_- \sim \xi$ remains finite as $\kappa
\to 1$ for the isotropic defect; this will change for the anisotropic defect.
\subsection{Relation to Landau's theory of phase transitions}\label{sec:Landau}
The expansion \eqref{eq:e_pin_expans} of the pinning energy $e_\mathrm{pin}
(\tilde{x}; \bar{x})$ around the inflection point $\tilde{x}_m$ of the force takes the same
form as the Landau free energy of a phase transition\cite{Koopmann_2004},
\begin{eqnarray}\label{eq:f_phi}
f(\phi;h) &=& \frac{r_0}{2}(T/T_c-1)\phi^2 +u\phi^4 - h\phi,
\end{eqnarray}
with the straightforward transcription $\tilde{u} \leftrightarrow \phi$,
$\Cbar (1-\kappa) \leftrightarrow r_0 (T/T_c - 1)$, $\gamma /24
\leftrightarrow u$ and the conjugate field $\Cbar \bar{u}
\leftrightarrow h$. The functional \eqref{eq:f_phi} describes a one-component
order parameter $\phi$ driven by $h$, e.g., an Ising model with magnetization
density $\phi$ in an external magnetic field $h$. This model develops a
mean-field transition with a first-order line in the $h$--$T$ phase diagram
that terminates in a critical point at $T=T_c$ and $h=0$. The translation to strong
pinning describes a strong pinning region at large $\kappa$ that terminates
(upon decreasing $\kappa$) at $\kappa = 1$. The ferromagnetic phases with
$\phi = \pm \sqrt{r_0(1-T/T_c)/4u}$ correspond to pinned and unpinned states,
the paramagnetic phase at $T > T_c$ with $\phi = 0$ translates to the unpinned
domain at $\kappa < 1$. The spinodals associated with the hysteresis in the
first-order magnetic transition correspond to the termination of the free and
pinned branches at $\bar{x}_\pm$; indeed, the flat inflection points appearing in
$e_{\mathrm{pin}}(\tilde{x}; \bar{x})$ at the boundaries of the bistable region $\mathcal{B}_{\Ras}$ as discussed in
Sec.\ \ref{sec:U-B-domains} correspond to the disappearance of metastable
magnetic phases in \eqref{eq:f_phi} at the spinodals of the first-order
transition where $\partial_\phi f (\phi; h) = \partial_\phi^2 f (\phi; h) =
0$. When including correlations between defects, the unpinned phase at $\kappa
< 1$ transforms into a weakly pinned phase that continues beyond $\kappa = 1$
into the strongly pinned phase; with such correlations included, the
transition at the strong pinning onset $\kappa = 1$ turns into a
weak-to-strong pinning crossover.
\section{Anisotropic defects}\label{sec:arb_shape}
Let us generalize the above analysis to make it fit for the ensuing discussion
of an arbitrary pinning landscape or, for short, pinscape. Central to the
discussion are the unstable and bistable domains $\mathcal{U}_{\tilde{\bf R}}$ and
$\mathcal{B}_{\bar{\bf R}}$ in tip- and asymptotic space. The boundary of the
unstable domain $\mathcal{U}_{\tilde{\bf R}}$ in tip space is determined by the jump
positions of the vortex tip. The latter follow from the local differential
properties of $e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})$ at fixed asymptotic coordinate $\bar{\bf R}$; for
the isotropic defect, this was the appearance of an inflection point
$[\partial_{\tilde{x}}^2 e_{\mathrm{pin}}(\tilde{x},\bar{x})] = 0$, see Eq.\ \eqref{eq:e_eff_jp}. In
$[\partial_{\tilde{x}}^2 e_{\mathrm{pin}}(\tilde{x},\bar{x})] = 0$, see Eq.\ \eqref{eq:e_eff_jp}. In
generalizing this condition to the anisotropic situation, we have to study
the Hessian matrix of $e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})$ defined in Eq.\
\eqref{eq:en_pin_tot},
\begin{equation}\label{eq:Hessian}
\bigl[\mathrm{Hess}\bigl[e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})|_{\bar{\bf R}}\bigr]\bigr]_{ij}
= \Cbar \delta_{ij} + \mathrm{H}_{ij}(\tilde{\bf R})
\end{equation}
with
\begin{equation}\label{eq:Hessian_e_p}
\mathrm{H}_{ij}(\tilde{\bf R}) =
\partial_{\tilde{x}_i} \partial_{\tilde{x}_j} e_p(\tilde{\bf R})
\end{equation}
the Hessian matrix associated with the defect potential $e_p(\tilde{\bf R})$. The
vortex tip jumps when the pinning landscape $e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})$ at fixed $\bar{\bf R}$
opens up in an unstable direction, i.e., develops an inflection point; this
happens when the lower eigenvalue $\lambda_-(\tilde{\bf R}) < 0$ of the Hessian matrix
$\mathrm{H}_{ij}(\tilde{\bf R})$ matches up with $\Cbar$,
\begin{align}\label{eq:match_LC}
\lambda_-(\tilde{\bf R}) + \Cbar = 0,
\end{align}
and strong pinning appears at the location where this happens first, say at
the point $\tilde{\bf R}_m$, implying that the eigenvalue $\lambda_-(\tilde{\bf R})$ has a
minimum at $\tilde{\bf R}_m$. Furthermore, the eigenvector $\mathbf{v}_-(\tilde{\bf R}_m)$
associated with the eigenvalue $\lambda_-(\tilde{\bf R}_m)$ provides the unstable
direction in the pinscape $e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})$ along which the vortex tip
escapes.
Defining the reduced curvature function
\begin{align}\label{eq:red_curv_kappa}
\kappa(\tilde{\bf R}) \equiv \frac{-\lambda_-(\tilde{\bf R})}{\Cbar},
\end{align}
we find the generalized Labusch parameter
\begin{align}\label{eq:gen_Lab}
\kappa_m \equiv \kappa(\tilde{\bf R}_m),
\end{align}
and the Labusch criterion takes the form
\begin{align}\label{eq:gen_Lab_crit}
\kappa_m = 1.
\end{align}
The latter has to be read as a double condition: i) find the location $\tilde{\bf R}_m$
where the negative eigenvalue $\lambda_-(\tilde{\bf R})$ is largest in magnitude, from
which ii) one obtains the critical elasticity $\Cbar$ where strong pinning
sets in.
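In practice, this double condition is conveniently evaluated on a grid; the
following minimal sketch (assuming an illustrative anisotropic Lorentzian
pinscape, units $e_p = \xi = 1$) locates $\tilde{\bf R}_m$ and the critical
elasticity $\Cbar_c = -\lambda_-(\tilde{\bf R}_m)$ at onset:
\begin{verbatim}
# Sketch: locate R_m and the onset elasticity for a model
# pinscape (assumed anisotropic Lorentzian, e_p = xi = 1).
import numpy as np

def e_p(x, y, eps=0.1):       # 10% uniaxial anisotropy
    return -1.0/(1.0 + (x**2 + (1.0 + eps)*y**2)/2.0)

x = np.linspace(-4.0, 4.0, 801); h = x[1] - x[0]
X, Y = np.meshgrid(x, x, indexing="ij")
E = e_p(X, Y)
Exx = np.gradient(np.gradient(E, h, axis=0), h, axis=0)
Eyy = np.gradient(np.gradient(E, h, axis=1), h, axis=1)
Exy = np.gradient(np.gradient(E, h, axis=0), h, axis=1)
lam = 0.5*(Exx + Eyy) - np.sqrt(0.25*(Exx - Eyy)**2 + Exy**2)
i, j = np.unravel_index(np.argmin(lam), lam.shape)
# symmetry-related minima exist; argmin picks one of them
print("R_m =", (X[i, j], Y[i, j]), "; Cbar_c =", -lam.min())
\end{verbatim}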
A useful variant of the strong pinning condition \eqref{eq:match_LC} is
provided by the representation of the determinant of the Hessian matrix,
\begin{align}\label{eq:det_Hessian}
D(\tilde{\bf R}) &\equiv \mathrm{det}\bigl\{\mathrm{Hess}\bigl[e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})|_{\bar{\bf R}}\bigr]\bigr\},
\end{align}
in terms of its eigenvalues $\lambda_\pm(\tilde{\bf R})$, $D(\tilde{\bf R}) = [\Cbar +
\lambda_-(\tilde{\bf R})] [\Cbar + \lambda_+(\tilde{\bf R})]$; near onset, the second factor
$\Cbar + \lambda_+(\tilde{\bf R})$ stays positive and the strong pinning onset appears
in the point $\tilde{\bf R}_m$ where $D(\tilde{\bf R})$ has a minimum which touches zero for the
first time, i.e., the two conditions $\nabla D(\tilde{\bf R})|_{\tilde{\bf R}_m} = 0$ and
$D(\tilde{\bf R}_m) = 0$ are satisfied simultaneously. The latter conditions make sure
that the minima of $\lambda_-(\tilde{\bf R})$ and $D(\tilde{\bf R})$ line up at $\tilde{\bf R}_m$. Note
that the Hessian determinant $D(\tilde{\bf R})$ does not depend on the asymptotic
coordinate $\bar{\bf R}$ as it involves only second derivatives of
$e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})$.
The Labusch criterion defines the situation where jumps of vortex tips appear
for the first time in the isolated point $\tilde{\bf R}_m$. Increasing the pinning
strength, e.g., by decreasing the elasticity $\Cbar$ for a fixed pinning
potential $e_p({\bf R})$ (alternatively, the pinning scale $e_p$ could be
increased at fixed $\Cbar$), the condition \eqref{eq:match_LC} is satisfied on
the boundary of a finite domain and we can define the unstable domain
$\mathcal{U}_{\tilde{\bf R}}$ through (see also Ref.\ \onlinecite{Buchacek_PhD})
\begin{align}\label{eq:def_calU}
\mathcal{U}_{\tilde{\bf R}} = \left\{ \tilde{\bf R}~~|~~\lambda_-(\tilde{\bf R}) + \Cbar \leq 0 \right\}.
\end{align}
Once the latter has been determined, the bistable domain $\mathcal{B}_{\bar{\bf R}}$
follows straightforwardly from the force balance equation
\begin{align}\label{eq:gen_force_balance}
\Cbar (\tilde{\bf R} - \bar{\bf R}) = {\bf f}_p(\tilde{\bf R}) = {\bf f}_\mathrm{pin}(\bar{\bf R}),
\end{align}
i.e.,\cite{Buchacek_PhD}
\begin{align}\label{eq:def_calB}
\mathcal{B}_{\bar{\bf R}} = \left\{ \bar{\bf R} = \tilde{\bf R} -{\bf f}_p(\tilde{\bf R})/\Cbar~~|~~\tilde{\bf R} \in
\mathcal{U}_{\tilde{\bf R}}\right \}.
\end{align}
In a last step, one then evaluates the energy jumps appearing at the boundary
of $\mathcal{B}_{\bar{\bf R}}$ and proper averaging produces the pinning force density
$\mathbf{F}_\mathrm{pin}$.
Let us apply the above generalized formulation to the isotropic situation.
Choosing cylindrical coordinates $(r,\varphi)$, the Hessian matrix
$\mathrm{H}_{ij}$ is already diagonal; close to the inflection point $\tilde{R}_m$,
where $e_p'''(\tilde{R}_m) = 0$, the eigenvalues are $\lambda_-(\tilde{R}) =
e_p''(\tilde{R}) < 0$ and $\lambda_+(\tilde{R}) = e_p'(\tilde{R})/\tilde{R} > 0$, producing
results in line with our discussion above.
\subsection{Expansion near strong pinning onset}\label{sec:ell_expansion}
With our focus on the strong pinning transition near $\kappa(\tilde{\bf R}_m) = 1$, we
can obtain quantitative results using the expansion of the pinning energy
$e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})$, Eq.\ \eqref{eq:en_pin_tot}, close to $\tilde{\bf R}_m$, cf.\ Sec.\
\ref{sec:sp_char}. Hence, we construct the Landau-type pinning energy
corresponding to \eqref{eq:f_phi} for the case of an anisotropic pinning
potential, i.e., we generalize \eqref{eq:e_pin_expans} to the two-dimensional
situation.
When generalizing the strong pinning problem to the anisotropic situation, we
are free to define local coordinate systems $(\tilde{u},\tilde{v})$ and $(\bar{u}, \bar{v})$
in tip- and asymptotic space centered at $\tilde{\bf R}_m$ and $\bar{\bf R}_m$, where the
latter is associated with $\tilde{\bf R}_m$ through the force balance equation
\eqref{eq:gen_force_balance} in the original laboratory system. Furthermore,
we fix our axes such that the unstable direction coincides with the $u$-axis,
i.e., the eigenvector ${\bf v}_-(\tilde{\bf R}_m)$ associated with $\lambda_-(\tilde{\bf R}_m)$
points along $u$; as a result, the mixed term $\propto \tilde{u} \tilde{v}$ is absent
from the expansion. Keeping all potentially relevant terms up to fourth order
in $\tilde{u}$ and $\tilde{v}$ in the expansion, we then have to deal with an expression
of the form
\begin{align}\nonumber
&e_\mathrm{pin}(\tilde{\bf R}; \bar{\bf R}) =
\frac{\Cbar+\lambda_-}{2} \, \tilde{u}^2
+ \frac{\Cbar + \lambda_+}{2}\, \tilde{v}^2 -\Cbar\,\bar{u} \tilde{u} - \Cbar\, \bar{v} \tilde{v} \nonumber \\
&\quad+\frac{a}{2}\, \tilde{u} \tilde{v}^2 + \frac{a'}{2}\, \tilde{u}^2 \tilde{v} + \frac{b'}{6}\, \tilde{u}^3
+ \frac{b''}{6}\, \tilde{v}^3 \label{eq:e_pin_expans_ani_orig} \\ \nonumber
&\qquad+ \frac{\alpha}{4}\, \tilde{u}^2\tilde{v}^2 + \frac{\beta}{6}\, \tilde{u}^3\tilde{v}
+\frac{\beta''}{6}\, \tilde{u}\tilde{v}^3 +\frac{\gamma}{24}\, \tilde{u}^4 + \frac{\gamma''}{24}\, \tilde{v}^4,
\end{align}
with $\lambda_\pm = \lambda_\pm(\tilde{\bf R}_m)$,
\begin{align}\label{eq:coord_uv}
\tilde{\bf R} &= \tilde{\bf R}_m + \delta\tilde{\bf R}, \quad \delta\tilde{\bf R} = (\tilde{u},\tilde{v}), \\ \nonumber
\bar{\bf R} &= \bar{\bf R}_m + \delta\bar{\bf R}, \quad \delta\bar{\bf R} = (\bar{u},\bar{v}),
\end{align}
and coefficients given by the corresponding derivatives of $e_p(\bf R)$, e.g.,
$a \equiv \partial_u\partial_v^2 e_p({\bf R})|_{\tilde{\bf R}_m}$, $\dots$, $\gamma''
\equiv \partial_v^4 e_p({\bf R})|_{\tilde{\bf R}_m}$. As we are going to see, the
primed terms in this expansion vanish due to the condition of a minimal
Hessian determinant at the onset of strong pinning, while double-primed terms
will turn out irrelevant to leading order in the small distortions $\tilde{u}$ and
$\tilde{v}$.
The first term in \eqref{eq:e_pin_expans_ani_orig} drives the strong pinning
transition as it changes sign when $\lambda_- = -\Cbar$. Making use of the
Labusch parameter $\kappa_m$ defined in \eqref{eq:gen_Lab}, we can replace
(see also \eqref{eq:e_pin_expans})
\begin{equation} \label{eq:one-kappa}
\Cbar +\lambda_- \to \Cbar(1-\kappa_m).
\end{equation}
In our further considerations below, the quantity $\kappa_m - 1
\ll 1$ acts as the small parameter; it assumes the role of the distance
$1-T/T_c$ to the critical point in the Landau expansion of a thermodynamic
phase transition.
The second term in \eqref{eq:e_pin_expans_ani_orig} stabilizes the theory
along the $v$ direction as $\Cbar + \lambda_+ > 0$ close to the Labusch point,
while the sign of the cubic term $a\,\tilde{u}\tilde{v}^2/2$ determines the direction of
the instability along $u$, i.e., to the right ($a > 0$) or left ($a < 0$). The
quartic terms $\propto \alpha, \gamma >0$ bound the pinning energy at large
distances, while the term $\propto \beta$ determines the skew angle in the shape
of the unstable domain $\mathcal{U}_{\Rti}$, see below. Finally, we have used the force
balance equation \eqref{eq:gen_force_balance} in the derivation of the driving
terms $\Cbar \, \bar{u} \tilde{u}$ and $\Cbar \, \bar{v} \tilde{v}$.
The parameters in \eqref{eq:e_pin_expans_ani_orig} are constrained by the
requirement of a minimal determinant $D(\tilde{\bf R})$ at the strong
pinning onset $\tilde{\bf R} = \tilde{\bf R}_m$ and $\kappa_m = 1$, i.e., its gradient has to vanish,
\begin{equation}
\mathbf{\nabla}_{\tilde{\bf R}}\,D(\tilde{\bf R})\big|_{\tilde{\bf R}_m} = 0,
\end{equation}
and its Hessian $\mathrm{Hess}[D(\tilde{\bf R})]$ has to satisfy the relations
\begin{align}\label{eq:det_min2D}
\mathrm{det}\bigl[\mathrm{Hess}\bigl[ D(\tilde{\bf R}) \bigr]\bigr] \big|_{\tilde{\bf R}_m} &> 0,\\
\mathrm{tr} \bigl[\mathrm{Hess}\bigl[ D(\tilde{\bf R}) \bigr]\bigr] \big|_{\tilde{\bf R}_m} &> 0.
\label{eq:det_tr2D}
\end{align}
Making use of the expansion \eqref{eq:e_pin_expans_ani_orig}, the determinant
$D(\tilde{\bf R})$ reads
\begin{equation}\label{eq:full_det}
D(\tilde{\bf R}) = \big\{[\partial_{\tilde{u}}^2 e_{\mathrm{pin}}][\partial_{\tilde{v}}^2 e_{\mathrm{pin}}] - [\partial_{\tilde{u}}
\partial_{\tilde{v}} e_{\mathrm{pin}}]^2\big\}_{\tilde{\bf R}}
\end{equation}
with
\begin{align}\nonumber
&\partial_{\tilde{u}}^2 e_{\mathrm{pin}} =
\Cbar\left(1\!-\!\kappa_m\right) + a' \tilde{v} + b' \tilde{u} + \alpha \tilde{v}^2\!/2 + \beta \tilde{u} \tilde{v}
+ \gamma \tilde{u}^2\!/2,
\\ \nonumber
&\partial_{\tilde{v}}^2 e_{\mathrm{pin}} =
\Cbar + \lambda_+ + a \tilde{u} + b''\tilde{v} + \alpha\tilde{u}^2/2 +\beta''\tilde{u}\tilde{v} + \gamma''\tilde{v}^2/2,
\\ \nonumber
&\partial_{\tilde{u}} \partial_{\tilde{v}} e_{\mathrm{pin}} = a\tilde{v} + a'\tilde{u} + \alpha\tilde{u}\tilde{v} + \beta\tilde{u}^2/2
+ \beta''\tilde{v}^2/2,
\end{align}
and produces the gradient
\begin{align}\label{eq:grad_D}
\mathbf{\nabla}_{\tilde{\bf R}}\,D(\tilde{\bf R})\Big|_{\tilde{\bf R}_m} = (\Cbar + \lambda_+)(b',a'),
\end{align}
hence the primed parameters indeed vanish, $a' = 0$ and $b' = 0$.
The Hessian then takes the form
\begin{align}\label{eq:Hess_D}
\mathrm{Hess}\bigl[ D(\tilde{\bf R}) \bigr]\Big|_{\tilde{\bf R}_m} &= (\Cbar + \lambda_+)
\begin{bmatrix}
\gamma &~~~ \beta\\
\beta &~~~ \delta
\end{bmatrix}
\end{align}
at the Labusch point $\kappa_m = 1$, where we have introduced the parameter
\begin{equation}\label{eq:delta}
\delta \equiv \alpha -\frac{2a^2}{\Cbar}\frac{1}{1+ \lambda_{+}/\Cbar}.
\end{equation}
The stability conditions \eqref{eq:det_min2D} and \eqref{eq:det_tr2D}
translate, respectively, to
\begin{equation}\label{eq:detHD}
\gamma \delta - \beta^2 > 0
\end{equation}
(implying $\delta > 0$) and
\begin{equation}\label{eq:trHD}
\gamma + \delta > 0.
\end{equation}
The Landau-type theory \eqref{eq:e_pin_expans_ani_orig} involves the two
`order parameters' $\tilde{u}$ and $\tilde{v}$ and is driven by the dual coordinates
$\bar{u}$ and $\bar{v}$. This $n=2$ theory involves a soft order parameter $\tilde{u}$
and a stiff one, $\tilde{v}$, allowing us to integrate out $\tilde{v}$ and reformulate the
problem as an effective one-dimensional Landau theory \eqref{eq:eff_landau_1D}
of the van der Waals kind---the way of solving the strong pinning problem near
onset in this 1D formulation is presented in Appendix \ref{sec:eff_1D_onset}.
\subsection{Unstable domain $\mathcal{U}_{\Rti}$}\label{sec:Uti}
Next, we determine the unstable domain $\mathcal{U}_{\Rti}$ in tip space as defined in
\eqref{eq:def_calU}. We will find that, up to quadratic order, the boundary
of $\mathcal{U}_{\Rti}$ has the shape of an ellipse with the semiaxes lengths scaling as
$\sqrt{\kappa_m-1}$.
\subsubsection{Jump line $\mathcal{J}_\mathrm{\Rti}$}\label{sec:Jti}
We find the unstable domain $\mathcal{U}_{\tilde{\bf R}}$ by determining its boundary
$\partial \mathcal{U}_{\Rti}$ that is given by the set of jump positions $\Rti_{\mathrm{jp}}$ making up the
jump line $\mathcal{J}_\mathrm{\Rti}$. The boundary $\partial \mathcal{U}_{\Rti}$ is determined by the condition
$\Cbar + \lambda_- = 0$ or, equivalently, the vanishing of the determinant
\begin{equation}\label{eq:Rjp}
D(\Rti_{\mathrm{jp}}) \equiv 0.
\end{equation}
The latter condition guarantees the existence of an unstable direction
parallel to the eigenvector $\mathbf{v}_-(\Rti_{\mathrm{jp}})$ associated with the
eigenvalue $\lambda_-(\Rti_{\mathrm{jp}})$ where the energy \eqref{eq:e_pin_expans_ani_orig}
turns flat, cf.\ our discussion in Sec.\ \ref{sec:U-B-domains}. The edges of
the unstable domain $\mathcal{U}_{\Rti}$ therefore correspond to a line of inflection points
in $e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})$ along which one of the bistable tip configurations of
the force balance equation \eqref{eq:gen_force_balance} coalesces with the
unstable solution. Near the onset of strong pinning, the unstable domain
$\mathcal{U}_{\Rti}$ is closely confined around the point $\tilde{\bf R}_m$ where
$\mathbf{v}_-(\tilde{\bf R}_m) \parallel \hat{\mathbf{u}}$. The unstable direction
$\mathbf{v}_-(\Rti_{\mathrm{jp}})$ is therefore approximately homogeneous within the
unstable domain $\mathcal{U}_{\Rti}$ and is parallel to the $u$ axis. This fact will be of
importance later, when determining the topological properties of the unstable
domain $\mathcal{U}_{\Rti}$.
Inspection of the condition \eqref{eq:Rjp} with $D(\tilde{\bf R})$ given by Eq.\
\eqref{eq:full_det} shows that the components of $\delta\tilde{\bf R}_\mathrm{jp}$
scale as $\sqrt{\kappa_m - 1}$: in the product $[\partial_{\tilde{u}}^2e_{\mathrm{pin}}]
[\partial_{\tilde{v}}^2e_{\mathrm{pin}}]$, the first factor involves the small constant
$\Cbar (1 - \kappa_m)$ plus quadratic terms (as $a' = 0$ and $b' = 0$), while
the second factor comes with the large constant $\Cbar + \lambda_+$ plus
corrections. The leading term in $[\partial_{\tilde{u}}\partial_{\tilde{v}}e_{\mathrm{pin}}]$ is
linear in $\tilde{v}$ with the remaining terms providing corrections. To leading
order, the condition of vanishing determinant then produces the quadratic form
\begin{equation}\label{eq:quadratic_form}
[\gamma\,\tilde{u}^2 + 2\beta\,\tilde{u}\tilde{v} + \delta\, \tilde{v}^2]_{\Rti_{\mathrm{jp}}}
= 2\Cbar\left(\kappa_m-1\right).
\end{equation}
With $\gamma$ and $\delta$ positive, this form is associated with an elliptic
geometry of extent $\propto \sqrt{\kappa_m - 1}$. For later convenience, we
rewrite Eq.\ \eqref{eq:quadratic_form} in matrix form
\begin{equation}\label{eq:matrix_eq_jp}
\delta\Rti_{\mathrm{jp}}^\mathrm{T} M_\mathrm{jp}\, \delta\Rti_{\mathrm{jp}} = \Cbar (\kappa_m - 1)
\end{equation}
with
\begin{align}\label{eq:ellipse_jp}
M_\mathrm{jp} &= \begin{bmatrix} \gamma/2 &~~~ \beta/2\\
\beta/2 &~~~ \delta/2
\end{bmatrix}
\end{align}
and $\mathrm{det} M_\mathrm{jp} = (\gamma\delta -\beta^2)/4 >0$, see Eq.\ \eqref{eq:detHD}.
The jump line $\mathcal{J}_\mathrm{\Rti}$ can be expressed in the parametric form
\begin{equation}\label{eq:uti_jp}
\begin{split}
\tilde{u}_\mathrm{jp}(|\tilde{v}| < \tilde{v}_c) &= -\frac{1}{\gamma}\Bigl[\beta\tilde{v}\\
&\pm \sqrt{2\gamma\Cbar(\kappa_m-1)- (\gamma\delta -\beta^2) \tilde{v}^2} \Bigr],
\end{split}
\end{equation}
with
\begin{equation}\label{eq:vti_c}
\tilde{v}_c = \sqrt{2\gamma\,\Cbar(\kappa_m - 1)/(\gamma\delta -\beta^2)}
\end{equation}
and is shown in Fig.\ \ref{fig:ellipses} for the example of an anisotropic
potential inspired by the uniaxial defect in Sec.\ \ref{sec:uniax_defect} with
10 \% anisotropy. The associated unstable domain $\mathcal{U}_{\Rti}$ assumes a compact
elliptic shape, with the parameter $\beta$ describing the ellipse's skew.
Comparing with the isotropic defect, this ellipse assumes the role of the ring
bounded by solid lines in Fig.\ \ref{fig:f_pin}(c), see Sec.\
\ref{sec:topology} for a discussion of its different topology.
An additional result of the above discussion concerns the terms that we need to keep
in the expansion of the pinning energy \eqref{eq:e_pin_expans_ani_orig}: indeed,
dropping corrections amounts to dropping the terms with double-primed coefficients,
and we find that the simplified expansion
\begin{align}\label{eq:e_pin_expans_ani}
&e_\mathrm{pin}(\tilde{\bf R}; \bar{\bf R}) =
\frac{\Cbar}{2} (1 - \kappa_m) \, \tilde{u}^2
+ \frac{\Cbar + \lambda_+}{2}\, \tilde{v}^2
+\frac{a}{2}\, \tilde{u} \tilde{v}^2 \nonumber \\
&\quad+\frac{\alpha}{4}\, \tilde{u}^2\tilde{v}^2
+\frac{\beta}{6}\, \tilde{u}^3\tilde{v}
+\frac{\gamma}{24}\, \tilde{u}^4
-\Cbar\,\bar{u} \tilde{u} - \Cbar\, \bar{v} \tilde{v}
\end{align}
produces all of our desired results to leading order.
\begin{figure}
\includegraphics[width = 1.\columnwidth]{figures/ellipses.pdf}
\caption{Jump line $\mathcal{J}_\mathrm{\Rti}$ (solid red/blue, see Eq.\
\eqref{eq:matrix_eq_jp}) and landing line $\mathcal{L}_\mathrm{\Rti}$ (dashed red/blue,
see Eq.\ \eqref{eq:matrix_eq_lp}) in tip space $\tilde{\bf R}$ (in units of
$\xi$), with the ellipse $\mathcal{J}_\mathrm{\Rti}$ representing the edge $\partial \mathcal{U}_{\Rti}$
of the unstable domain $\mathcal{U}_{\Rti}$. We choose parameters $\kappa_m - 1=
10^{-2}$, with $\lambda_- = -0.25 \,e_p/\xi^2, \lambda_+ = 0.05
\,e_p/\xi^2$, and $a = 0.07 \,e_p/\xi^3$, $\alpha = 0.1\, e_p/\xi^4,
\beta = 0, \gamma = 0.75 \,e_p/\xi^4$ inspired by the choice of the
uniaxial defect with 10 \% anisotropy in Sec.\ \ref{sec:uniax_defect};
the dotted ellipse shows the effect of a finite skew parameter $\beta
= 0.05\, e_p/\xi^4$ on the jump ellipse $\mathcal{J}_\mathrm{\Rti}$. Along the edges of
$\mathcal{U}_{\Rti}$, one of the stable tip configurations coalesces with the
unstable solution of \eqref{eq:gen_force_balance} and the total
pinning energy $e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})$ develops an inflection line in the
tip coordinate $\tilde{\bf R}$. Crosses correspond to the contact points
\eqref{eq:contact_points} between the two ellipses $\mathcal{J}_\mathrm{\Rti}$ and $\mathcal{L}_\mathrm{\Rti}$.
Blue and red colors identify different types of vortex deformations
upon jump and landing. Pairs of solid and open circles connected via
long arrows are, respectively, examples of pairs of jumping- and
landing tip positions for vortices approaching the defect from the left (top)
and right (bottom), see Fig.\ \ref{fig:f_pin}(c) for the isotropic
problem's counterpart. The unstable direction $\mathbf{v}_-(\Rti_{\mathrm{jp}})$,
shown as short black arrows for different points on the ellipse,
always points along the $u$-direction and is parallel to the tangent
vector of the unstable ellipse at the contact points
\eqref{eq:contact_points}.}
\label{fig:ellipses}
\end{figure}
\subsubsection{Landing line $\mathcal{L}_\mathrm{\Rti}$}\label{sec:Lti}
We find the landing positions $\Rti_{\mathrm{lp}}$ by extending the discussion of the
isotropic situation in Sec.\ \ref{sec:sp_char} to two dimensions: we shift the
origin of the expansion \eqref{eq:e_pin_expans_ani} to the jump point $\Rti_{\mathrm{jp}}$
and find the landing point $\Rti_{\mathrm{lp}} = \Rti_{\mathrm{jp}} + \Delta \tilde{\bf R}$ by minimizing the total
energy $e_{\mathrm{pin}}(\Delta\tilde{\bf R})$ at the landing position. Below, we use $\Delta
\tilde{\bf R}$ both as a variable and as the jump distance to avoid introducing more
coordinates.
We exploit the differential properties of $e_{\mathrm{pin}}$ at the jump and landing
positions. At landing, $e_{\mathrm{pin}}(\Rti_{\mathrm{jp}}+\Delta\tilde{\bf R})$ has a minimum, hence, the
configuration is force free, in particular along $\tilde{v}$,
\begin{align}\nonumber
\partial_{\tilde{v}} e_{\mathrm{pin}} (\Rti_{\mathrm{jp}}+\Delta \tilde{\bf R}) &\approx
[\partial_{\tilde{v}}\partial_{\tilde{u}} e_{\mathrm{pin}}]_{\Rti_{\mathrm{jp}}} \Delta\tilde{u}\\ \nonumber
&\qquad + [\partial_{\tilde{v}}^2 e_{\mathrm{pin}}]_{\Rti_{\mathrm{jp}}} \Delta\tilde{v} = 0,
\end{align}
from which we find that $\Delta\tilde{u}$ and $\Delta\tilde{v}$ are related via
\begin{equation}\label{eq:dv-du}
\Delta \tilde{v} \approx -\frac{[\partial_{\tilde{v}}\partial_{\tilde{u}} e_{\mathrm{pin}}]_{\Rti_{\mathrm{jp}}}}
{[\partial_{\tilde{v}}^2 e_{\mathrm{pin}}]_{\Rti_{\mathrm{jp}}}} \Delta\tilde{u}.
\end{equation}
Here, we have dropped higher order terms in the expansion, assuming that the
jump is mainly directed along the unstable $u$-direction---indeed, using the
expansion \eqref{eq:e_pin_expans_ani}, we find that
\begin{equation}\label{eq:dv-mall}
\Delta \tilde{v} \approx -\frac{a \vti_{\mathrm{jp}}} {\Cbar +\lambda_+}\, \Delta\tilde{u}
\propto \sqrt{\kappa_m - 1}\> \Delta\tilde{u}.
\end{equation}
Note that we cannot interchange the roles of $\tilde{u}$ and $\tilde{v}$ in this force
analysis, as higher order terms in the expression for the force along $\tilde{u}$
cannot be dropped.
At the jump position $\Rti_{\mathrm{jp}}$, the state is force-free, i.e., the derivatives
$[\partial_{\tilde{u}} e_{\mathrm{pin}}]_{\Rti_{\mathrm{jp}}}$ and $[\partial_{\tilde{v}} e_{\mathrm{pin}}]_{\Rti_{\mathrm{jp}}}$ vanish,
and the Hessian determinant vanishes as well. Therefore, the expansion of
$e_{\mathrm{pin}}(\Rti_{\mathrm{jp}}+\Delta\tilde{\bf R})$ has no linear terms and the second order terms
$[\partial_{\tilde{u}}^2 e_{\mathrm{pin}}]_{\Rti_{\mathrm{jp}}} \Delta\tilde{u}^2/2 + [\partial_{\tilde{u}}
\partial_{\tilde{v}} e_{\mathrm{pin}}]_{\Rti_{\mathrm{jp}}} \Delta\tilde{u} \Delta\tilde{v} + [\partial_{\tilde{v}}^2
e_{\mathrm{pin}}]_{\Rti_{\mathrm{jp}}} \Delta\tilde{v}^2/2$ combined with Eq.\ \eqref{eq:dv-du} can be
expressed through the Hessian determinant, $\{[\partial_{\tilde{u}}^2
e_{\mathrm{pin}}][\partial_{\tilde{v}}^2 e_{\mathrm{pin}}] - [\partial_{\tilde{u}}\partial_{\tilde{v}}
e_{\mathrm{pin}}]^2\}_{\Rti_{\mathrm{jp}}} \Delta\tilde{u}^2/2 = 0$, which vanishes as well. Therefore, the
expansion of $e_{\mathrm{pin}}$ around $\Rti_{\mathrm{jp}}$ starts at third order in $\Delta \tilde{\bf R}
\approx (\Delta \tilde{u}, 0)$ and takes the form (we make use of \eqref{eq:dv-mall},
dropping terms $\propto \Delta\tilde{v}$ and a constant)
\begin{equation}\label{eq:epin_at_jp}
e_{\mathrm{pin}}(\Rti_{\mathrm{jp}} + \Delta \tilde{\bf R}) \approx
\frac{1}{6} \bigl(\gamma \uti_{\mathrm{jp}} + \beta \vti_{\mathrm{jp}} \bigr) \Delta\tilde{u}^3
+\frac{\gamma}{24} \Delta\tilde{u}^4.
\end{equation}
Minimizing this expression with respect to $\Delta\tilde{u}$ (as $e_{\mathrm{pin}}$ is minimal
at $\Rti_{\mathrm{lp}}$), we obtain the result
\begin{equation}\label{eq:du}
\Delta \tilde{u} \approx - 3 (\gamma\uti_{\mathrm{jp}} + \beta\vti_{\mathrm{jp}})/\gamma.
\end{equation}
Making use of the quadratic form \eqref{eq:matrix_eq_jp}, we can show that the
equation for the landing position $\tilde{\bf R}_\mathrm{lp} = \tilde{\bf R}_\mathrm{jp} +
\Delta\tilde{\bf R}$ can be cast into a similar quadratic form (with $\delta\Rti_{\mathrm{lp}}$
measured relative to $\tilde{\bf R}_m$)
\begin{equation}\label{eq:matrix_eq_lp}
\delta\Rti_{\mathrm{lp}}^\mathrm{T} M_\mathrm{lp}\, \delta\Rti_{\mathrm{lp}} = \Cbar (\kappa_m - 1),
\end{equation}
but with the landing matrix now given by
\begin{equation}\label{eq:ellipse_lp}
M_\mathrm{lp} = \frac{1}{4} M_\mathrm{jp} +
\begin{bmatrix} 0 & 0\\
0 & ~~~\displaystyle{\frac{3}{4}\Bigl(\frac{\delta}{2}
- \frac{\beta^2}{2\gamma}\Bigr)}
\end{bmatrix}.
\end{equation}
In the following, we will refer to the solutions of Eq.\
\eqref{eq:matrix_eq_lp} as the `landing' or `stable' ellipse $\Rti_{\mathrm{lp}}$ and
write the jump distance in a parametric form involving the
shape $\uti_{\mathrm{jp}}(\tilde{v})$ in Eq.\ \eqref{eq:uti_jp} of the jumping ellipse,
\begin{align}
&\Delta \tilde{u}(\tilde{v}) = -3\left[\gamma\, \tilde{u}_\mathrm{jp}(\tilde{v})
+ \beta\, \tilde{v}\right]/\gamma,\label{eq:delta_rx}\\
&\Delta \tilde{v}(\tilde{v}) = - \left[a/(\Cbar + \lambda_+)\right]\, \tilde{v}\,
\Delta \tilde{u}(\tilde{v})\label{eq:delta_ry}.
\end{align}
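For concreteness, the two lines are easily generated from the matrices
$M_\mathrm{jp}$ and $M_\mathrm{lp}$; the following Python sketch uses the
illustrative parameter values quoted in the caption of Fig.\
\ref{fig:ellipses} (units $e_p = \xi = 1$):
\begin{verbatim}
# Sketch: jump and landing ellipses, Eqs. (matrix_eq_jp) and
# (matrix_eq_lp); illustrative parameters, e_p = xi = 1.
import numpy as np

km1 = 1e-2                         # kappa_m - 1
lam_m, lam_p = -0.25, 0.05
a, alpha, beta, gamma = 0.07, 0.1, 0.0, 0.75
Cbar = -lam_m/(1.0 + km1)          # kappa_m = -lambda_-/Cbar
delta = alpha - 2.0*a**2/(Cbar + lam_p)
M_jp = np.array([[gamma/2, beta/2], [beta/2, delta/2]])
M_lp = M_jp/4 + np.diag([0.0, 0.75*(delta/2 - beta**2/(2*gamma))])

def ellipse(M, n=361):             # points with R^T M R = Cbar*km1
    t = np.linspace(0.0, 2.0*np.pi, n)
    w, V = np.linalg.eigh(M)       # principal axes of M
    return (V*np.sqrt(Cbar*km1/w)) @ np.vstack([np.cos(t), np.sin(t)])

uj, vj = ellipse(M_jp)             # jump line J
ul, vl = ellipse(M_lp)             # landing line L
\end{verbatim}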
The landing line derived from \eqref{eq:matrix_eq_lp} is displayed as a dashed
line in Fig.\ \ref{fig:ellipses}. Two tip jumps connected by an arrow are
shown for illustration, with solid dots marking the jump position
$\tilde{\bf R}_\mathrm{jp}$ of the tip and open dots its landing position
$\tilde{\bf R}_\mathrm{lp}$; they describe tip jumps for a vortex approaching the
unstable ellipse from the left (upper pair) and from the
right (lower pair). The different topologies associated with jumping and landing
showing up for the isotropic defect in Fig.\ \ref{fig:f_pin}(c) (two
concentric circles) and for the generic onset in Fig.\ \ref{fig:ellipses} (two
touching ellipses) will be discussed later.
Inspecting the matrix equation \eqref{eq:matrix_eq_lp}, we can gain several
insights into the landing ellipse $\mathcal{L}_\mathrm{\Rti}$: (i) the matrix $M_\mathrm{jp}/4$ on
the right-hand side of \eqref{eq:ellipse_lp} corresponds to an ellipse with
the same geometry as for $\mathcal{J}_\mathrm{\Rti}$ but double in size, (ii) the second matrix,
with vanishing off-diagonal and $M_{uu}$ entries, leaves the
size doubling of the stable ellipse $\mathcal{L}_\mathrm{\Rti}$ at $\tilde{v} = 0$ unchanged, and (iii)
its finite $M_{vv}$ component exactly counterbalances the doubling along the
$v$-direction encountered in (i), cf.\ the definition \eqref{eq:ellipse_jp} of
$M_\mathrm{jp}$, up to a term proportional to the skew parameter $\beta$
accounting for deviations of the semiaxis from the $v$-axis. Altogether, the
stable ellipse $\mathcal{L}_\mathrm{\Rti}$ extends with a double width along the $u$-axis and
smoothly overlaps with the unstable ellipse at the two contact points
$\tilde{v}_{c,\pm}$. The latter are found by imposing the condition $\Delta \tilde{u} =
\Delta \tilde{v} = 0$ in Eqs.\ \eqref{eq:delta_rx} and \eqref{eq:delta_ry}; we find
them located (relative to $\tilde{\bf R}_m$) at
\begin{align}
\delta\tilde{\bf R}_{c,\pm} &= \pm \left(-\beta/\gamma, 1\right)\,\tilde{v}_{c},
\label{eq:contact_points}
\end{align}
with the endpoint coordinate $\tilde{v}_c$ given in Eq.\ \eqref{eq:vti_c}, and mark
them with crosses in Fig.\ \ref{fig:ellipses}. As anticipated, the contact
points are offset with respect to the $v$-axis for a finite
skew parameter $\beta$. At these points, the unstable and the stable tip
configurations coincide and the vortex tip undergoes no jump. Furthermore, the
vector tangent to the jump (or landing) ellipse is parallel to the
$u$-direction at the contact points. To see this, we consider
\eqref{eq:uti_jp} and find that
\begin{align}\label{eq:tangent_y}
\frac{\partial \tilde{u}}{\partial \tilde{v}}\Big|_{\tilde{v}\to\pm\tilde{v}_{c}}
&\approx \pm\frac{\sqrt{\gamma\delta-\beta^2}}{\gamma}\,
\frac{\tilde{v}}{\sqrt{\tilde{v}_c^2 - \tilde{v}^2}} \to \pm\infty,
\end{align}
hence, the corresponding tangents $\partial_{\tilde{u}} \tilde{v}$ vanish.
The asymptotic positions $\bar{\bf R}$ where the vortex tips jump and land belong to
the boundary of the bistable region $\mathcal{B}_{\Ras}$; for the isotropic case in Fig.\
\ref{fig:f_pin}(d) these correspond to the circles with radii $\bar{R}_-$
(pinning) and $\bar{R}_+$ (depinning), with jump/landing radii
$\tilde{R}_\mathrm{f-}(\bar{R}_-)$/$\tilde{R}_\mathrm{p-}(\bar{R}_-)$ and
$\tilde{R}_\mathrm{p+}(\bar{R}_+)$/$\tilde{R}_\mathrm{f+}(\bar{R}_+)$, respectively, see
Fig.\ \ref{fig:f_pin}(c). For the anisotropic defect, we have only a single
jump/landing event at one asymptotic position $\bar{\bf R}$ that we are going to
determine in the next section.
\subsection{Bistable domain $\mathcal{B}_{\Ras}$}\label{sec:Bas}
The set of asymptotic positions $\bar{\bf R}$ corresponding to the tip positions
$\tilde{\bf R}_\mathrm{jp}$ along the edges of $\mathcal{U}_{\Rti}$ forms the boundary $\partial\mathcal{B}_{\Ras}$
of the bistable domain $\mathcal{B}_{\Ras}$; they are related through the force-balance
equation \eqref{eq:gen_force_balance}, with every vortex tip position
$\tilde{\bf R}_\mathrm{jp} \in \partial \mathcal{U}_{\Rti}$ defining an associated asymptotic
position $\bar{\bf R}(\tilde{\bf R}_\mathrm{jp}) \in \partial\mathcal{B}_{\Ras}$.
At the onset of strong pinning, the bistable domain corresponds to the
isolated point $\bar{\bf R}_m$, related to $\tilde{\bf R}_m$ through
\eqref{eq:gen_force_balance}. Beyond the Labusch point, $\mathcal{B}_{\Ras}$ expands out of
$\bar{\bf R}_m$ and its geometry is found by evaluating the force balance equation
\eqref{eq:gen_force_balance} at a given tip position $\tilde{\bf R}_\mathrm{jp} \in
\partial\mathcal{U}_{\Rti}$, $\bar{\bf R}(\tilde{\bf R}_\mathrm{jp}) = \tilde{\bf R}_\mathrm{jp} - \mathbf{f}_p
(\tilde{\bf R}_\mathrm{jp})/\Cbar \in \partial\mathcal{B}_{\Ras}$. Using the expansion
\eqref{eq:e_pin_expans_ani} for $e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})$, this force equation can be
expressed as $\nabla_\mathbf{R} e_\mathrm{pin} (\mathbf{R};\bar{\bf R}) \big|_{\tilde{\bf R}}
= 0$, or explicitly (recall that we measure $\bar{\bf R} = \bar{\bf R}_m +(\bar{u},\bar{v})$
relative to $\bar{\bf R}_m$),
\begin{align}\nonumber
\Cbar \bar{u} &= \Cbar(1-\kappa_m) \tilde{u} + \frac{a}{2}\tilde{v}^2
+ \frac{\gamma}{6}\tilde{u}^3 + \frac{\beta}{2}\tilde{u}^2 \tilde{v}
+ \frac{\alpha}{2}\tilde{u} \tilde{v}^2,\\
\Cbar \bar{v} &= (\Cbar + \lambda_+) \tilde{v} + a\,\tilde{u} \tilde{v} + \frac{\beta}{6}\tilde{u}^3
+ \frac{\alpha}{2}\tilde{u}^2 \tilde{v}.
\label{eq:asymptotic_positions}
\end{align}
Inserting the results for the jump ellipse $\mathcal{J}_\mathrm{\Rti}$, Eq.\ \eqref{eq:uti_jp}, into
Eqs.\ \eqref{eq:asymptotic_positions}, we find the crescent-shape bistable domain
$\mathcal{B}_{\Ras}$ shown in Fig.\ \ref{fig:bananas}; let us briefly derive the origin of this shape.
\begin{figure}
\includegraphics[width = 1.\columnwidth]{figures/bananas_zoom_out.pdf}
\caption{(a) Bistable domain $\mathcal{B}_{\Ras}$ in asymptotic $\bar{\bf R}$-space
measured in units of $\xi$; the same parameters as in Fig.\
\ref{fig:ellipses} have been used. Note the different scaling of the
axes in $\kappa_m - 1$; the right panel (b) shows $\mathcal{B}_{\Ras}$ in
isotropic scales. The bistable domain $\mathcal{B}_{\Ras}$ is elongated along the
transverse direction $\bar{v}$ and narrow/bent along the unstable
direction $\bar{u}$, giving $\mathcal{B}_{\Ras}$ its peculiar crescent-like shape. The
branch crossing line $\bar{\bf R}_0$, see \eqref{eq:x_0_line}, is shown as a
dashed black line. Black crosses mark the cusps of $\mathcal{B}_{\Ras}$ and are
associated with the contact points of $\mathcal{U}_{\Rti}$ through the force balance
equation \eqref{eq:gen_force_balance}; they correspond to critical
end-points in the thermodynamic Ising analogue, while the boundaries
$\partial \mathcal{B}_{\Ras}$ map to spinodals. Blue and red colors identify
different characters of vortex tip configurations as quantified
through the `order parameter' $\tilde{u}$ of the Landau expansion (at
$\beta = 0$), see text, while magenta is associated with the bistable
area $\mathcal{B}_{\Ras}$; the blue and red branches extend to the far side of the
crescent and terminate in the blue and red colored boundaries
$\partial\mathcal{B}_{\Ras}^\mathrm{b}$ and $\partial\mathcal{B}_{\Ras}^\mathrm{r}$, respectively.
Thin horizontal lines show vortex trajectories that proceed smoothly
in asymptotic space, see also Fig.\ \ref{fig:f_pin}(d). Blue and red
dots mark the asymptotic positions associated with vortex tip jumps
that happen at the exit of $\mathcal{B}_{\Ras}$; they correspond to the pairs of tip
positions in Fig.\ \ref{fig:ellipses}. (b) Bistable domain $\mathcal{B}_{\Ras}$ in
isotropic scaled coordinates $\bar{u}$ and $\bar{v}$ showing the `true'
shape of $\mathcal{B}_{\Ras}$. Vortices impacting on the bistable domain with an
angle $|\theta|\leq \theta^\ast$ undergo a single jump on the far side
of $\mathcal{B}_{\Ras}$, with the pinning force density directed along $u$ and
scaling as $\mathbf{F}_\mathrm{pin}^{\parallel}\propto (\kappa-1)^{5/2}$. Vortices
crossing $\mathcal{B}_{\Ras}$ at large angles close to $\pi/2$ jump either never,
once, or twice; at $\theta = \pi/2$ the pinning force density is
small, $\mathbf{F}_\mathrm{pin}^{\perp}\propto (\kappa-1)^{3}$, and directed along
$v$.}
\label{fig:bananas}
\end{figure}
Solving \eqref{eq:asymptotic_positions} to leading order, $\Cbar
\bar{u}^{\scriptscriptstyle (0)} \approx (a/2) \tilde{v}^2$ and $\Cbar
\bar{v}^{\scriptscriptstyle (0)} \approx (\Cbar + \lambda_+) \tilde{v}$, we find the
parabolic approximation
\begin{align}\label{eq:parabola_x}
\bar{u}^{\scriptscriptstyle (0)} &\approx \frac{a}{2\Cbar}
\frac{1}{(1 + \lambda_+/\Cbar)^2}\,\bar{v}^{{\scriptscriptstyle (0)}\,2},
\end{align}
telling us that the extent of $\mathcal{B}_{\Ras}$ scales as $(\kappa_m - 1)$ along
$\bar{u}$ and $\propto (\kappa_m - 1)^{1/2}$ along $\bar{v}$, i.e., we find a flat
parabola opening towards positive $\bar{u}$ for $a > 0$, see Fig.\ \ref{fig:bananas}.
In order to find the width of $\mathcal{B}_{\Ras}$, we have to solve
\eqref{eq:asymptotic_positions} to the next higher order, $\bar{u} =
\bar{u}^{\scriptscriptstyle (0)} + \bar{u}^{\scriptscriptstyle (1)}$; for $\beta =
0$, we find the correction
\begin{equation}\label{eq:duas}
\bar{u}^{\scriptscriptstyle (1)} = (1-\kappa_m) \tilde{u} + \frac{\gamma}{6\Cbar}\tilde{u}^3
+ \frac{\alpha}{2\Cbar}\tilde{u} \tilde{v}^2
\end{equation}
that produces a $\bar{v} \leftrightarrow -\bar{v}$ symmetric crescent.
Inserting the two branches \eqref{eq:uti_jp} of the jump ellipse, we arrive
at the width of the crescent that scales as $(\kappa_m-1)^{3/2}$. The correction
to $\bar{v}$ is $\propto (\kappa_m - 1)$ and we find the closed form
\begin{align}\label{eq:dvas}
\bar{v} &\approx [1+ (\lambda_+ + a\tilde{u})/\Cbar]\> \tilde{v}
\end{align}
with a small antisymmetric (in $\tilde{u}$) correction. For a finite $\beta \neq
0$, the correction $\bar{u}^{\scriptscriptstyle (1)}$ picks up an additional term
$(\beta/2\Cbar)\,\tilde{u}^2 \tilde{v}$ that breaks the $\bar{v} \leftrightarrow -\bar{v}$
symmetry and the crescent is distorted.
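As an illustration (and not part of the analytic derivation), the boundary
$\partial\mathcal{B}_{\Ras}$ is easily traced numerically by feeding a parametrization of the
jump ellipse into the map \eqref{eq:asymptotic_positions}. The Python sketch
below does so for the symmetric case $\beta = 0$, assuming the jump ellipse in
the form $\gamma\,\tilde{u}_\mathrm{jp}^2 = \delta\,(\tilde{v}_c^2 - \tilde{v}^2)$ with $\tilde{v}_c^2 =
2\Cbar(\kappa_m - 1)/\delta$; all parameter values are placeholders chosen
for illustration only.
\begin{verbatim}
import numpy as np

# Illustrative sketch: trace the crescent boundary of B_as at beta = 0.
# All parameter values are placeholders, illustration only.
Cbar, lam, a, gamma, alpha, kappa = 1.0, 0.5, 1.0, 6.0, 3.0, 1.02
delta = alpha - 2 * a**2 / (Cbar + lam)        # cf. Eq. (delta)

vc = np.sqrt(2 * Cbar * (kappa - 1) / delta)   # cusp ordinate, tip space
vt = np.linspace(-vc, vc, 401)                 # tilde-v along the ellipse

def arc(sign):
    # one branch of the jump ellipse, gamma*ut^2 = delta*(vc^2 - vt^2),
    # mapped to asymptotic space via Eqs. (asymptotic_positions), beta = 0
    ut = sign * np.sqrt(np.maximum(delta / gamma * (vc**2 - vt**2), 0))
    ub = ((1 - kappa) * ut + a * vt**2 / (2 * Cbar)
          + gamma * ut**3 / (6 * Cbar) + alpha * ut * vt**2 / (2 * Cbar))
    vb = (1 + (lam + a * ut) / Cbar) * vt + alpha * ut**2 * vt / (2 * Cbar)
    return ub, vb

ub_p, vb_p = arc(+1)
ub_m, vb_m = arc(-1)
# extent ~ (kappa-1)^{1/2} along v, width ~ (kappa-1)^{3/2} along u
print(vb_p.max() - vb_p.min(), np.abs(ub_p - ub_m).max())
\end{verbatim}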
Viewing the boundary $\partial\mathcal{B}_{\Ras}$ as a parametric curve in the variable
$\tilde{v}$ with $\tilde{u} = \tilde{u}_\mathrm{jp}(\tilde{v})$ given by Eq.\ \eqref{eq:uti_jp},
we obtain the boundary $\partial\mathcal{B}_{\Ras}$ in the form of two separate arcs that
define the crescent-shaped domain $\mathcal{B}_{\Ras}$ in Fig.\ \ref{fig:bananas}(a). The
two arcs merge in two cusps at $\bar{\bf R}_{c,\pm}$ that are associated with the touching
points \eqref{eq:contact_points} in dual space and derive from Eqs.\
\eqref{eq:asymptotic_positions}; measured with respect to $\bar{\bf R}_m$, these cusps
are located at
\begin{eqnarray}\label{eq:cusps}
\delta\bar{\bf R}_{c,\pm} &=& (\bar{u}_c,\pm \bar{v}_c) \\ \nonumber
&\approx& \left[\left(a/2\Cbar\right)\tilde{v}^2_c,\,
\pm (1 + \lambda_+/\Cbar)\tilde{v}_c\right].
\end{eqnarray}
The coloring in Fig.\ \ref{fig:bananas} indicates the characters `red' and
`blue' of the vortex states; these are defined in terms of the `order
parameter' $\tilde{u}-\tilde{u}_m(\bar{v})$ of the Landau functional
\eqref{eq:e_pin_expans_ani} that changes sign at the branch crossing line Eq.\
\eqref{eq:x_0_line}, with the shift
\begin{equation}\label{eq:shift}
\tilde{u}_m(\bar{v}) = -\frac{\beta}{\gamma} \tilde{v}(\bar{v})
\approx -\frac{\beta}{\gamma} \frac{\bar{v}}{1 + \lambda_+/\Cbar},
\end{equation}
$\tilde{u}_m(\bar{v}) = 0$ for our symmetric case with $\beta = 0$ in Fig.\
\ref{fig:bananas}. Going beyond the cusps (or critical points) at
$\bar{\bf R}_{c,\pm}$, the two states smoothly crossover between `red' and `blue'
(indicated by the smooth blue--white--red transition), as known for the van
der Waals gas (or Ising magnet) above the critical point. Within the bistable
region $\mathcal{B}_{\Ras}$, both `red' and `blue' states coexist and we color this region
in magenta.
The geometry of the bistable domain $\mathcal{B}_{\Ras}$ is very different from the
ring-shaped geometry of the isotropic problem discussed in Sec.\
\ref{sec:iso_def}, see Fig.\ \ref{fig:f_pin}(d); in the discussion of the
uniaxial anisotropic defect below, we will learn how these two geometries are
interrelated. Comparing the overall dimensions of the crescent with the
ring in Fig.\ \ref{fig:f_pin}(d), we find the following scaling behavior in
$\kappa_m -1$: while the crescent $\mathcal{B}_{\Ras}$ grows along $\bar{v}$ as
$(\kappa_m-1)^{1/2}$, the isotropic ring involves the characteristic size
$\xi$ of the defect, $\bar{R}_- \sim \xi$ and hence its extension along $\bar{v}$ is
a constant. On the other hand, the scaling of the crescent's and the ring's
width is the same, $\propto (\kappa_m - 1)^{3/2}$. The different scaling of
the transverse width then will be responsible for the new scaling of the
pinning force density, $\mathbf{F}_\mathrm{pin} \propto (\kappa_m -1 )^{5/2}$.
\subsection{Comparison to isotropic situation}\label{sec:discussion}
Let us compare the unstable domains $\mathcal{U}_{\Rti}$ for the isotropic and anisotropic
defects in Figs.\ \ref{fig:f_pin}(c) and \ref{fig:ellipses}, respectively. In
the isotropic example of Sec.\ \ref{sec:iso_def}, the jump- and
landing-circles $\tilde{R}_\mathrm{jp}(\bar{R})$ and $\tilde{R}_{\mathrm{lp}}(\bar{R})$ are
connected to different phases, e.g., free (colored in blue at
$\tilde{R}_\mathrm{jp} = \tilde{R}_\mathrm{f-}$) and pinned (colored in red at
$\tilde{R}_{\mathrm{lp}} =\tilde{R}_\mathrm{p-}$) associated with $\bar{R}_-$. Furthermore,
the topology is different, with the unstable ring domain separating the two
distinct phases, free and pinned ones. As a result, a second pair of jump-
and landing-positions associated with the asymptotic circle $\bar{R}_{+}$ appears
along the vortex trajectory of Fig.\ \ref{fig:f_pin}(c); these are located
at the radii $\tilde{R}_\mathrm{jp} = \tilde{R}_\mathrm{p+}$ and $\tilde{R}_\mathrm{lp} =
\tilde{R}_\mathrm{f+}$ and describe the depinning process from the pinned branch
back to the free branch (while the previous pair at radii $\tilde{R}_\mathrm{f-}$
and $\tilde{R}_\mathrm{p-}$ describes the pinning process from the free to the
pinned branch). The pinning (at $\bar{R}_-$) and depinning (at $\bar{R}_+$)
processes in the asymptotic coordinates are shown in Fig.\
\ref{fig:f_pin}(d). The bistable area $\mathcal{B}_{\Ras}$ with coexisting free and pinned
states has a ring-shape as well (colored in magenta, the superposition of blue
and red); the two pairs of jump and landing points in tip space have collapsed
to two pinning and depinning points in asymptotic space.
In the present situation describing the strong pinning onset for a generic
anisotropic potential, the unstable domain $\mathcal{U}_{\Rti}$ grows out of an isolated
point (in fact, $\tilde{\bf R}_m$) and assumes the shape of an ellipse that is simply
connected; as a result, a vortex incident on the defect undergoes only a
single jump, see Fig.\ \ref{fig:ellipses}. The bistable domain $\mathcal{B}_{\Ras}$ is
simply connected as well, but now features two cusps at the end-points of the
crescent, see Fig.\ \ref{fig:bananas}. The bistability again involves two
states, but we cannot associate them with separated pinned and free
phases---we thus denote them by `blue'-type and `red'-type. The two states
approach one another further away from the defect and are distinguishable only
in the region close to bistability; in Fig.\ \ref{fig:bananas}, this is
indicated with appropriate color coding. Note that the Landau-type expansion
underlying the coloring in Fig.\ \ref{fig:bananas} fails at large distances;
going beyond a local expansion near $\tilde{\bf R}_m$, the distortion of the vortex
vanishes at large distances and the red/blue colors fade away to approach
`white'.
\subsection{Topology}\label{sec:topology}
The different topologies of unstable and bistable regions appearing in the
isotropic and anisotropic situations are owed to the circular symmetry of the
isotropic defect; we will recover the ring-like topology for the anisotropic
situation later when describing a uniaxially anisotropic defect at larger
values of the Labusch parameter $\kappa_m$. Indeed, such an increase in pinning
strength will induce a change in topology with two crescents facing one
another joining into a ring-like shape.
Let us discuss the consequences of the different topologies that we
encountered for the isotropic and anisotropic defects in the discussion above.
Specifically, the precise number and position of the contact points have an
elegant topological explanation. When a vortex tip touches the edges $\Rti_{\mathrm{jp}}$ of
the unstable domain there are two characteristic directions: one is given by
the unstable eigenvector $\mathbf{v}_-(\Rti_{\mathrm{jp}})$ discussed in Sec.\ \ref{sec:Uti}
along which the tip will jump initially. The second is the tangent vector to
the boundary $\partial \mathcal{U}_{\Rti}$ of the unstable domain, i.e., to the unstable
ellipse. While the former is approximately constant and parallel to the
unstable $u$-direction along $\Rti_{\mathrm{jp}}$, the latter winds around the ellipse
exactly once after a full turn around $\mathcal{U}_{\Rti}$. The contact points
$\tilde{\bf R}_{c,\pm}$ of the unstable and stable ellipses then coincide with those
points on the ellipse where the tangent vector is parallel and anti-parallel
to $\mathbf{v}_-$; at these points, the tip touches the unstable ellipse but
does not undergo a jump any more. Given the different winding numbers of
$\mathbf{v}_-$ and of the tangent vector, there are exactly two points along
the circumference of $\mathcal{U}_{\Rti}$ where the tangent vector is parallel/anti-parallel
to the $u$-direction; these are the points found in \eqref{eq:contact_points}.
This argument remains valid as long as the contour $\partial\mathcal{U}_{\Rti}$ is not
deformed to cross/encircle the singular point of the $\mathbf{v}_-(\Rti_{\mathrm{jp}})$ field
residing at the defect center.
The same arguments allow us to understand the absence of contact points in the
isotropic scenario: For an isotropic potential, the winding number
$n_{\scriptscriptstyle \mathcal{U}}$ of the tangent vector around $\mathcal{U}_{\Rti}$
remains unchanged, i.e., $n_{\scriptscriptstyle \mathcal{U}} = \pm 1$, while
the unstable direction $\mathbf{v}_-$ is pointing along the radius and thus
acquires a unit winding number as well. Indeed, the two directions, tangent
and jump, then rotate simultaneously and do not wind around each other after a
full rotation, explaining the absence of contact points in the isotropic
situation.
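This counting is easily verified numerically; the minimal sketch below (with a
placeholder ellipse, illustration only) counts the sign changes of the cross
product between the tangent vector and the jump field, once for a constant
field (two contact points) and once for a radial field (none):
\begin{verbatim}
import numpy as np

# Count the points on an ellipse where the tangent is (anti)parallel
# to a given jump-direction field; placeholder geometry.
t = np.linspace(0, 2 * np.pi, 3600, endpoint=False) + 1e-3
x, y = 2.0 * np.cos(t), 1.0 * np.sin(t)        # ellipse (boundary of U)
tx, ty = -2.0 * np.sin(t), 1.0 * np.cos(t)     # tangent vector

def parallel_points(vx, vy):
    cross = tx * vy - ty * vx                  # zero when (anti)parallel
    return int(np.sum(cross[:-1] * cross[1:] < 0))

print(parallel_points(np.ones_like(t), np.zeros_like(t)))  # constant: 2
print(parallel_points(x, y))                               # radial:   0
\end{verbatim}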
\subsection{Energy jumps}\label{sec:de_pin}
Within strong pinning theory, the energy jump $\Delta e_\mathrm{pin}$
associated with the vortex tip jump between bistable vortex configurations at
the boundaries of $\mathcal{B}_{\Ras}$ determines the pinning force density $\mathbf{F}_\mathrm{pin}$ and the
critical current $j_c$, see Eqs.\ \eqref{eq:F_pin} and
\eqref{eq:macroscopic_force_balance}. Formally, the energy jump $\Delta
e_\mathrm{pin}$ is defined as the difference in energy $e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})$ at
fixed asymptotic position $\bar{\bf R} \in \partial\mathcal{B}_{\Ras}$ between vortex
configurations with tips in the jump ($\Rti_{\mathrm{jp}}(\bar{\bf R})$) and landing ($\Rti_{\mathrm{lp}}(\bar{\bf R}) =
\Rti_{\mathrm{jp}}(\bar{\bf R}) + \Delta\tilde{\bf R}$) positions,
\begin{multline}\label{eq:energy_jump_def}
\Delta e_\mathrm{pin}(\bar{\bf R} \in \partial \mathcal{B}_{\Ras}) \equiv e_\mathrm{pin}[\Rti_{\mathrm{jp}}(\bar{\bf R}); \bar{\bf R}]\\
- e_\mathrm{pin}[\Rti_{\mathrm{lp}}(\bar{\bf R}); \bar{\bf R}].
\end{multline}
In Sec.\ \ref{sec:Lti} above, we have found that the jump $\Delta\tilde{\bf R}$ is
mainly forward directed along $u$. Making use of the expansion
\eqref{eq:epin_at_jp} of $e_{\mathrm{pin}}$ at $\Rti_{\mathrm{jp}}$ and the result \eqref{eq:du} for
the jump distance $\Delta\tilde{u}$, we find the energy jumps $\Delta
e_\mathrm{pin}$ in tip- and asymptotic space in the form (cf.\ with the
isotropic result Eq.\ \eqref{eq:d_epin^pf}),
\begin{align}\label{eq:energy_jump}
\Delta e_\mathrm{pin}(\bar{\bf R}) &\approx \frac{\gamma}{72}\Delta\tilde{u}^4 \approx
\left(\frac{9}{8\gamma^3}\right)
\left[\gamma\, \uti_{\mathrm{jp}}(\tilde{v}) + \beta\, \tilde{v} \right]^4\\
&\approx\left(\frac{9}{8\gamma^3}\right)\left[(\gamma\delta - \beta^2)
\left(\tilde{v}_c^2 - \tilde{v}^2\right)\right]^2
\nonumber\\
& \approx\left(\frac{9}{8\gamma^3}\right)
\left[\frac{(\gamma\delta - \beta^2)}{(1+\lambda_+/\Cbar)^2}
\left(\bar{v}_c^2 - \bar{v}^2\right)\right]^2. \nonumber
\end{align}
Here, we have used the parametric shape $\uti_{\mathrm{jp}}(\tilde{v})$ in Eq.\ \eqref{eq:uti_jp}
for the jumping ellipse as well as \eqref{eq:asymptotic_positions} to lowest
order, $\tilde{v} \approx \bar{v}/(1 + \lambda_+ / \Cbar)$, to relate the tip and
asymptotic positions in the last equation. The energy jump
\eqref{eq:energy_jump} scales as $(\kappa_m - 1)^2$ and is shown in
Fig.\ \ref{fig:energies}. It depends on the $v$ coordinate of the asymptotic
(or tip) position only and vanishes at the cusps $\bar{\bf R}_{c,\pm}$, see Eq.\
\eqref{eq:cusps} (or at the touching points $\tilde{\bf R}_{c,\pm}$, see Eq.\
\eqref{eq:contact_points}). To order $(\kappa_m - 1)^2$, the energy jumps are
identical at the left and right edges of the bistable domain $\mathcal{B}_{\Ras}$.
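As a quick numerical illustration of Eq.\ \eqref{eq:energy_jump} (placeholder
parameters, $\beta = 0$), the sketch below evaluates $\Delta e_\mathrm{pin}(\bar{v})$
along the edge and confirms that it is maximal at $\bar{v} = 0$ and vanishes at
the cusps $\pm\bar{v}_c$:
\begin{verbatim}
import numpy as np

# Energy jump along the crescent edge, Eq. (energy_jump) at beta = 0;
# placeholder parameters, illustration only.
Cbar, lam, a, gamma, alpha, kappa = 1.0, 0.5, 1.0, 6.0, 3.0, 1.02
delta = alpha - 2 * a**2 / (Cbar + lam)
s = 1 + lam / Cbar
vbc = s * np.sqrt(2 * Cbar * (kappa - 1) / delta)   # cusp, Eq. (cusps)

vb = np.linspace(-vbc, vbc, 5)
de = 9 / (8 * gamma**3) * (gamma * delta / s**2 * (vbc**2 - vb**2))**2
print(de)   # maximal at vb = 0, vanishes at the cusps +/- vbc
\end{verbatim}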
\begin{figure}
\includegraphics[width = 1.0\columnwidth]{figures/energies_simple.pdf}
\caption{Energy jump $\Delta e_\mathrm{pin}$ along the edges of the
bistable domain $\mathcal{B}_{\Ras}$ as a function of the transverse coordinate
$\bar{v}$; we have used the same parameters as in Fig.\
\ref{fig:ellipses}. The energy jump vanishes at the cusps
$\pm\bar{v}_{c}$, as the bistable tip configurations become identical and
their energies turn equal.}
\label{fig:energies}
\end{figure}
Following the two bistable branches and the associated energy jumps between
them to the inside of $\mathcal{B}_{\Ras}$, the latter vanish along the branch crossing line
$\bar{\bf R}_0$. In the thermodynamic analogue, this line corresponds to the
first-order equilibrium transition line that is framed by the spinodal lines;
for the isotropic defect, this is the circle with radius $\bar{R}_0 = x_0$ framed
by the spinodal circles with radii $\bar{R}_\pm$, see Figs.\ \ref{fig:e_pin} and
\ref{fig:f_pin}(d). For the anisotropic defect with $\beta = 0$, this line is
trivially given by the centered parabola of $\mathcal{B}_{\Ras}$, see Eq.\
\eqref{eq:parabola_x}, and hence
\begin{equation}
\bar{u}_0 \approx \frac{a}{2\Cbar}\frac{1}{(1+ \lambda_+/\Cbar)^2} \bar{v}_0^2.
\label{eq:x_0_line}
\end{equation}
The result for a finite skew parameter $\beta \neq 0$ is given by Eq.\
\eqref{eq:x_0_line_beta} in Appendix \ref{sec:eff_1D_onset}.
\subsection{Pinning force density}\label{sec:F_pin_anis}
The pinning force density $F_\mathrm{pin}$ is defined as the average force
density exerted on a vortex line as it moves across the superconducting
sample. For the isotropic case described in Sec.\ \ref{sec:F_pin_iso}, the
individual pinning force $\mathbf{f}_\mathrm{pin}(\bar{\bf R}) = - \nabla_{\bar{\bf R}}
e_{\mathrm{pin}}(\bar{\bf R})$, see Eq.\ \eqref{eq:f_pin}, is directed radially and the force
density $F_\mathrm{pin}$ is given by the (constant) energy jump $\Delta
e_\mathrm{pin}\propto (\kappa-1)^2$ on the edge $\partial\mathcal{B}_{\Ras}$ of the bistable
domain and the transverse length $t_\perp \sim \xi$, hence, $\mathbf{F}_\mathrm{pin}\propto
t_\perp \Delta e_\mathrm{pin}$ scales as $(\kappa-1)^2$.
For an anisotropic defect, the pinning force depends on the vortex direction
of motion $\hat{\mathbf{v}} = (\cos\theta,\sin\theta)$ relative to the axis of
the bistable region: we choose angles $-\pi/2\leq\theta\leq\pi/2$ measured
from the unstable direction $\bar{u}$, i.e., vortices incident from the left; the
case of larger impact angles $|\theta| > \pi/2$ corresponds to vortices
incident from the right and can be reduced to the previous case by inverting
the sign of the parameter $a$ in the expansion \eqref{eq:e_pin_expans_ani},
i.e., the curvature of the parabola \eqref{eq:parabola_x}; within our
leading-order analysis, the results remain the same. The pinning force is no longer
directed radially but depends on $\theta$; furthermore, the energy jump
\eqref{eq:energy_jump} is non-uniform along the boundary $\partial\mathcal{B}_{\Ras}$.
In spite of these complications, we can perform some simple scaling estimates
as a first step: let us assume a uniform distribution of identical anisotropic
defects, all with their unstable direction pointing along $x$. The jumps in
energy still scale as $\Delta e_\mathrm{pin} \propto (\kappa_m-1)^2$, however,
the trapping distance is no longer finite at onset but grows from zero as $\kappa_m -
1$ increases. Due to their elongated shapes, the bistable domains $\mathcal{B}_{\Ras}$
exhibit different extensions along the $y$ and $x$ directions, i.e., $\propto
\bar{v}_{c} \propto \sqrt{\kappa_m - 1}$ along $y$ and $\propto \bar{u}_{c} \propto
(\kappa_m - 1)$ along $x$, respectively. These simple considerations then
suggest that the pinning force density exhibits a scaling $\mathbf{F}_\mathrm{pin} \propto
(\kappa_m-1)^\mu$ with $\mu > 2$, different from the setup with isotropic
defects. Even more, vortices moving along the $x$ or $y$ directions,
respectively, will experience different forces $\mathbf{F}_\mathrm{pin}^{\parallel}$ and
$\mathbf{F}_\mathrm{pin}^{\perp}$ scaling as
\begin{equation}\label{eq:fpin_scaling}
\mathbf{F}_\mathrm{pin}^{\parallel}\propto (\kappa_m-1)^{5/2}, \quad \mathbf{F}_\mathrm{pin}^{\perp}\propto (\kappa_m-1)^{3}
\end{equation}
near the onset of strong pinning. While such uniform anisotropic defects could
be created artificially, a more realistic scenario will involve defects that
are randomly oriented and an additional averaging over angles $\theta$ has to
be performed; this will be done at the end of this section.
We first determine the magnitude and orientation of the pinning force density
$\mathbf{F}_\mathrm{pin}(\theta)$ as a function of the vortex impact angle
$\theta$ for randomly positioned but uniformly oriented (along $x$) defects of
density $n_p$. The pinning force density is given by the average over
relative positions between vortices and defects (with a minus sign following
convention; $\mathcal{V}_{\Ras}$ denotes the vortex lattice unit cell),
\begin{eqnarray}\label{eq:formal_pinning_force}
&&\mathbf{F}_\mathrm{pin}(\theta) = -n_p
\int_{\mathcal{V}_{\Ras}\setminus\mathcal{B}_{\Ras}} \!\! \frac{\mathrm{d}^2\bar{\bf R}}{a_0^2}\,
\mathbf{f}_\mathrm{pin}(\bar{\bf R}) \\ \nonumber
&&\quad -
n_p \int_{\mathcal{B}_{\Ras}} \!\!\! \frac{\mathrm{d}^2 \bar{\bf R}}{a_0^2} \left[p_\mathrm{b}(\bar{\bf R};\theta)\,
\mathbf{f}^\mathrm{b}_\mathrm{pin}(\bar{\bf R}) + p_\mathrm{r}(\bar{\bf R};\theta)\,
\mathbf{f}^\mathrm{r}_\mathrm{pin}(\bar{\bf R})\right].
\end{eqnarray}
Outside of the bistable domain, i.e., in $\mathcal{V}_{\Ras}\setminus\mathcal{B}_{\Ras}$, a single stable
vortex tip configuration exists and the pinning force
$\mathbf{f}_\mathrm{pin}(\bar{\bf R})$ is uniquely defined. Inside $\mathcal{B}_{\Ras}$, the
branch occupation functions $p_\mathrm{b,r}(\bar{\bf R};\theta)$ are associated with
the `blue' and the `red' vortex configurations with different tip positions
$\tilde{\bf R}^\mathrm{b,r}(\bar{\bf R})$, cf.\
Figs.\ \ref{fig:ellipses} and \ref{fig:bananas}. The pinning forces
$\mathbf{f}^\mathrm{b,r}_\mathrm{pin}(\bar{\bf R})$ are evaluated for the
corresponding vortex tip positions and are defined as
\begin{equation}
\mathbf{f}^\mathrm{b,r}_\mathrm{pin}(\bar{\bf R}) = -\mathbf{\nabla}_{\bar{\bf R}}
e_\mathrm{pin}[\tilde{\bf R}^\mathrm{b,r}(\bar{\bf R});\bar{\bf R}].
\end{equation}
Let us now study how vortex lines populate the bistable domain as a function
of the impact angle $\theta$. Examining Fig.\ \ref{fig:bananas}, we can
distinguish between two different angular regimes: a \emph{frontal}-impact
regime at angles away from $\pi/2$, $|\theta| \leq \theta^\ast$, where
all the vortices that cross the bistable domain undergo exactly one jump on
the far edge of $\mathcal{B}_{\Ras}$, see the blue dot and blue boundary
$\partial\mathcal{B}_{\Ras}^\mathrm{b}$ in Fig.\ \ref{fig:bananas}; and a \emph{transverse}
regime for angles $\theta^\ast \leq |\theta| \leq \pi/2$, where
vortices crossing the bistable domain undergo either no jump, one jump, or two. The
angle $\theta^\ast$ is given by the (outer) tangent of the bistable domain at
the cusps $\bar{\bf R}_{c,\pm}$; making use of the lowest order approximation
\eqref{eq:parabola_x} of the crescent's geometry, we find that
\begin{align}
\tan (\theta^\ast) &=
\frac{\partial \bar{v}^{\scriptscriptstyle (0)}}
{\partial \bar{u}^{\scriptscriptstyle (0)}} \Big|_{\bar{v}_c}
= \frac{(\Cbar + \lambda_+)}{a}
\sqrt{\frac{\gamma\delta -\beta^2}{2\gamma\Cbar(\kappa_m -1)}},
\end{align}
implying that $\pi/2 - \theta^\ast \propto \sqrt{\kappa_m - 1}$ is small,
\begin{align}
\theta^\ast \approx \pi/2 - \frac{a}{(\Cbar + \lambda_+)}
\sqrt{\frac{2\gamma\Cbar(\kappa_m -1)}{\gamma\delta -\beta^2}}.
\end{align}
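The following short numerical sketch (placeholder parameters, illustration
only) evaluates $\theta^\ast$ for several values of $\kappa_m$ and confirms
the closing of the transverse window $\propto\sqrt{\kappa_m - 1}$:
\begin{verbatim}
import numpy as np

# Crossover angle theta* between frontal and transverse impact regimes;
# placeholder parameters, illustration only.
Cbar, lam, a, gamma, alpha, beta = 1.0, 0.5, 1.0, 6.0, 3.0, 0.0
delta = alpha - 2 * a**2 / (Cbar + lam)

for kappa in (1.01, 1.04, 1.16):
    tan_t = (Cbar + lam) / a * np.sqrt((gamma * delta - beta**2)
                                       / (2 * gamma * Cbar * (kappa - 1)))
    print(kappa, np.pi / 2 - np.arctan(tan_t))  # window ~ sqrt(kappa-1)
\end{verbatim}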
\subsubsection{Impact angles $|\theta| < \theta^\ast$}\label{sec:F_par}
For a frontal impact with $|\theta| < \theta^\ast$,
vortices occupy the `blue' branch and remain there throughout
the bistable domain $\mathcal{B}_{\Ras}$ until its termination on the
far edge $\partial\mathcal{B}_{\Ras}^\mathrm{b}$, see Fig.\ \ref{fig:bananas}, implying that
$p_\mathrm{b}(\bar{\bf R}\in\mathcal{B}_{\Ras}) = 1$ and $p_\mathrm{r}(\bar{\bf R}\in\mathcal{B}_{\Ras}) = 0$,
independent of $\theta$. As a consequence, the pinning force
$\mathbf{F}_\mathrm{pin}$ does not depend on the impact angle and
is given by the expression
\begin{equation*}
\mathbf{F}^{<}_\mathrm{pin} = -n_p \! \int_{\mathcal{V}_{\Ras}\setminus\mathcal{B}_{\Ras}} \!\!\!\!
\frac{\mathrm{d}^2\bar{\bf R}}{a_0^2}\, \mathbf{f}_\mathrm{pin}(\bar{\bf R})
- n_p \! \int_{\mathcal{B}_{\Ras}} \!\!\!\! \frac{\mathrm{d}^2\bar{\bf R}}{a_0^2}\,
\mathbf{f}^\mathrm{b}_\mathrm{pin}(\bar{\bf R}).
\end{equation*}
Next, Gauss' formula tells us that for a function $e(\mathbf{x})$, we can
transform
\begin{equation}\label{eq:gauss_theorem}
\int_\mathcal{V} \mathrm{d}^n x \,\mathbf{\nabla} e(\mathbf{x})
= \int_{\partial\mathcal{V}}\mathrm{d}^{n-1}\, \mathbf{S}_\perp \,e(\mathbf{x}),
\end{equation}
with the surface element $\mathrm{d}^{n-1}\, \mathbf{S}_\perp$ oriented
perpendicular to the surface and pointing outside of the domain $\mathcal{V}$.
In applying \eqref{eq:gauss_theorem} to the first integral of
$\mathbf{F}^{<}_\mathrm{pin}$, we can drop the contribution from the
outer boundary $\partial\mathcal{V}_{\Ras}$ since we assume a compact defect potential. The
remaining contribution from the crescent's boundary $\partial\mathcal{B}_{\Ras}$ joins up
with the second integral but with an opposite sign, as the two terms involve
the same surface but with opposite orientations. Altogether, we then arrive
at the expression
\begin{multline}\label{eq:intermediate_flux_fpin}
\mathbf{F}^{<}_\mathrm{pin} = n_p \int_{\partial \mathcal{B}_{\Ras}^{\mathrm{b}}}
\frac{\mathrm{d}\, \mathbf{S}_\perp}{a_0^2}
\left(e^\mathrm{b}_\mathrm{pin}(\bar{\bf R}) - e_\mathrm{pin}(\bar{\bf R})\right)\\
+ n_p \int_{\partial \mathcal{B}_{\Ras}^{\mathrm{r}}} \frac{\mathrm{d}\, \mathbf{S}_\perp}{a_0^2}
\left(e^\mathrm{b}_\mathrm{pin}(\bar{\bf R}) - e_\mathrm{pin}(\bar{\bf R})\right),
\end{multline}
where we have separated the left and right borders $\partial \mathcal{B}_{\Ras}^{\mathrm{r,b}}$
of the bistable domain. Due to continuity, the stable vortex energy
$e_\mathrm{pin}(\bar{\bf R})$ will be equal to $e_\mathrm{pin}^\mathrm{b}(\bar{\bf R})$ on
the left border $\partial \mathcal{B}_{\Ras}^{\mathrm{r}}$ and equal to
$e_\mathrm{pin}^\mathrm{r}(\bar{\bf R})$ on the right border $\partial
\mathcal{B}_{\Ras}^{\mathrm{b}}$. The expression \eqref{eq:intermediate_flux_fpin} for
$\mathbf{F}^{<}_\mathrm{pin}$ then reduces to
\begin{align}\label{eq:fpin_frontal}
\mathbf{F}^{<}_\mathrm{pin} &= n_p \int_{\partial \mathcal{B}_{\Ras}^{\mathrm{b}}}
\frac{\mathrm{d}\, \mathbf{S}_\perp}{a_0^2} \left(e^\mathrm{b}_\mathrm{pin}(\bar{\bf R})
- e^\mathrm{r}_\mathrm{pin}(\bar{\bf R})\right)\nonumber\\
&= n_p \int_{-\bar{v}_{c}}^{\bar{v}_{c}}\frac{\mathrm{d}\bar{v}}{a_0}\,
\frac{\Delta e_\mathrm{pin}(\bar{v})}{a_0} \left[1,-\partial{\bar{u}}/\partial{\bar{v}}\right]
\nonumber\\
&=n_p \left[\frac{2\bar{v}_{c}}{a_0}\frac{\langle\Delta e_\mathrm{pin}\rangle}{a_0},\, 0\right]
\equiv [F^{\parallel}_\mathrm{pin}, 0]
\end{align}
with $\langle\Delta e_\mathrm{pin}\rangle$ the average energy jump evaluated
along the $v$-direction. The force $\mathbf{F}^{<}_\mathrm{pin}$ is
aligned with the unstable direction along $u$, with the $v$-component vanishing
due to the antisymmetry in $\bar{v} \leftrightarrow-\bar{v}$ of the derivative
$\partial{\bar{u}} /\partial{\bar{v}}$, and is independent of $\theta$ for $|\theta|
< \theta^*$.
\subsubsection{Impact angle $|\theta| = \pi/2$}\label{sec:F_perp}
Second, let us find the pinning force density
$\mathbf{F}^{\pi/2}_\mathrm{pin}$ for vortices moving along the (positive)
$v$-direction, $\theta = \pi/2$. As follows from Fig.\ \ref{fig:bananas},
vortices occupy the blue branch and jump to the red one upon hitting the lower
half of the boundary $\partial\mathcal{B}_{\Ras}^\mathrm{b}$; vortices that enter $\mathcal{B}_{\Ras}$ but
do not cross $\partial\mathcal{B}_{\Ras}^\mathrm{b}$ undergo no jump and hence do not
contribute to $\mathbf{F}^{\pi/2}_\mathrm{pin}$. As vortices in the red branch
proceed upwards, they jump back to the blue branch upon crossing the red
boundary $\partial\mathcal{B}_{\Ras}^\mathrm{r}$. While jumps appear on all of the lower
half of $\partial\mathcal{B}_{\Ras}^\mathrm{b}$, a piece of the upper boundary
$\partial\mathcal{B}_{\Ras}^\mathrm{r}$ that contributes with a second jump is cut away (as
vortices to the left of $\bar{u}^{\scriptscriptstyle (0)} +
\bar{u}^{\scriptscriptstyle (1)}$ do not change branch from blue to red). The
length $\Delta\bar{v}$ of this interval scales as $\Delta\bar{v}/\bar{v}_c \propto
(\kappa_m - 1)^{1/4}$; ignoring this small jump-free region, we determine
$\mathbf{F}^{\pi/2}_\mathrm{pin}$ assuming that vortices contributing to
$\mathbf{F}^{\pi/2}_\mathrm{pin}$ undergo a sequence of two jumps, from blue
to red on the lower half $\partial\mathcal{B}_{\Ras}^\mathrm{b<}$ and back from red to blue
on the upper half $\partial\mathcal{B}_{\Ras}^\mathrm{r>}$ of the boundary $\partial\mathcal{B}_{\Ras}$.
Repeating the above analysis, we find that the $u$-components in
$\mathbf{F}^{\pi/2}_\mathrm{pin}$ arising from the blue and red boundaries now
cancel, while the $v$-components add up,
\begin{align}\label{eq:fpin_perp}
\mathbf{F}^{\pi/2}_\mathrm{pin}
&=n_p \int_{\partial \mathcal{B}_{\Ras}^{\mathrm{b<}}} \frac{\mathrm{d}\, \mathbf{S}_\perp}{a_0^2}
\left(e^\mathrm{b}_\mathrm{pin}(\bar{\bf R}) - e^\mathrm{r}_\mathrm{pin}(\bar{\bf R})\right)\nonumber\\
&+n_p \int_{\partial \mathcal{B}_{\Ras}^{\mathrm{r>}}} \frac{\mathrm{d}\, \mathbf{S}_\perp}{a_0^2}
\left(e^\mathrm{r}_\mathrm{pin}(\bar{\bf R}) - e^\mathrm{b}_\mathrm{pin}(\bar{\bf R})\right)\nonumber\\
&= 2 n_p \int_0^{\bar{v}_{c}} \frac{\mathrm{d}\bar{v}}{a_0}\,
\frac{\Delta e_\mathrm{pin}(\bar{v})}{a_0} \left[0,\partial{\bar{u}}/\partial{\bar{v}}\right]\\
\nonumber
&= n_p \left[0,\frac{2\bar{v}_{c}}{a_0}\frac{\langle\Delta e_\mathrm{pin}
\partial_{\bar{v}} \bar{u} \rangle}{a_0}\right] \equiv [0,F^{\perp}_\mathrm{pin}].
\end{align}
Making use of the result \eqref{eq:energy_jump} for $\Delta
e_\mathrm{pin}(\bar{v})$ in Eqs.\ \eqref{eq:fpin_frontal} and \eqref{eq:fpin_perp},
we find explicit expressions for the pinning force densities for impacts
parallel and perpendicular to the unstable direction $u$,
\begin{align}\label{eq:fpin_explicit_par}
F_\mathrm{pin}^{\parallel} &\approx \left(\frac{9n_p}{8\,a_0^2\gamma^3}\right)
\!\int_{-\bar{v}_c}^{\bar{v}_c} \!\!\!\!\!\! \mathrm{d}\bar{v}
\left[\frac{\gamma\delta -\beta^2}{(1+\lambda_+/\Cbar)^2}
\left(\bar{v}_c^2 - \bar{v}^2\right)\right]^2
\\ \nonumber
&=\frac{24}{5} n_p \frac{\sqrt{2\Cbar/\gamma}}{a_0}
\frac{\Cbar^2}{\gamma a_0} \frac{\gamma (1 + \lambda_+/\Cbar)}
{\sqrt{\gamma\delta -\beta^2}} (\kappa_m - 1)^{5/2}
\end{align}
and
\begin{align}\label{eq:fpin_explicit_perp}
F_\mathrm{pin}^\perp &\approx
3 \frac{\Cbar^2}{\gamma a_0} \frac{\gamma a/a_0}{\gamma\delta - \beta^2}(\kappa_m - 1)^3,
\end{align}
that confirm the scaling estimates of Eq.\ \eqref{eq:fpin_scaling}. Here, we
have made use of the definition \eqref{eq:cusps} of $\bar{v}_{c}$ and have
brought the final result into a form similar to the isotropic result
\eqref{eq:F_pin_iso_result} (with the length $\sqrt{\Cbar/\gamma}$ and the
force $\Cbar^2/\gamma a_0$, equal to $\xi/\sqrt{3}\kappa$ and $e_p/12\kappa^2$
for a Lorentzian potential). The result \eqref{eq:fpin_explicit_par} provides
the pinning force density $\mathbf{F}_\mathrm{pin} =
[F_\mathrm{pin}^{\parallel},0]$ for all impact angles $|\theta| \leq
\theta^\ast$ (note that \eqref{eq:fpin_explicit_par} depends on the curvature
$a$ of the crescent via $\delta$, Eq.\ \eqref{eq:delta}, that involves $a^2$
only, but higher-order corrections will introduce an asymmetry between left-
and right moving vortices). Within the interval $\theta^\ast < \theta <
\pi/2$, the longitudinal force $F_{\mathrm{pin},u}$ along $u$ decays to zero
and the transverse force $F_{\mathrm{pin},v}$ along $v$ becomes finite,
assuming the value \eqref{eq:fpin_explicit_perp} at $\theta = \pi/2$. The two
force components have been evaluated numerically over the entire angular
regime and the results are shown in Fig.\ \ref{fig:forces_angle}: when moving
away from the angle $\theta = \pi/2$, the transition from the blue to the red
boundary is moving upwards, with the relevant boundary turning fully blue at
$\theta = \theta^\ast$, thus smoothly transforming \eqref{eq:fpin_perp} into
\eqref{eq:fpin_frontal} (we have adopted the approximation of dropping the
jump-free interval $\Delta \bar{v}$ that moves up and becomes smaller as $\theta$
decreases from $\pi/2$ to $\theta^\ast$).
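The exponents in Eqs.\ \eqref{eq:fpin_explicit_par} and
\eqref{eq:fpin_explicit_perp} can also be recovered by direct numerical
quadrature of Eqs.\ \eqref{eq:fpin_frontal} and \eqref{eq:fpin_perp} with the
leading-order crescent geometry; a sketch with placeholder parameters and
$n_p = a_0 = 1$:
\begin{verbatim}
import numpy as np

# Scaling check for Eqs. (fpin_explicit_par) and (fpin_explicit_perp)
# by quadrature of Eqs. (fpin_frontal) and (fpin_perp); placeholder
# parameters, n_p = a_0 = 1, beta = 0, illustration only.
Cbar, lam, a, gamma, alpha = 1.0, 0.5, 1.0, 6.0, 3.0
delta = alpha - 2 * a**2 / (Cbar + lam)
s = 1 + lam / Cbar

def forces(kappa):
    vbc = s * np.sqrt(2 * Cbar * (kappa - 1) / delta)
    vb = np.linspace(-vbc, vbc, 20001)
    dv = vb[1] - vb[0]
    de = 9 / (8 * gamma**3) * (gamma * delta / s**2
                               * (vbc**2 - vb**2))**2
    dudv = a / (Cbar * s**2) * vb          # slope of Eq. (parabola_x)
    F_par = np.sum(de) * dv                # Eq. (fpin_frontal)
    F_perp = 2 * np.sum((de * dudv)[vb > 0]) * dv   # Eq. (fpin_perp)
    return F_par, F_perp

(F1, G1), (F2, G2) = forces(1.01), forces(1.04)
print(np.log(F2 / F1) / np.log(4.0))   # -> 2.5, i.e. (kappa-1)^(5/2)
print(np.log(G2 / G1) / np.log(4.0))   # -> 3.0, i.e. (kappa-1)^3
\end{verbatim}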
\begin{figure}[t]
\includegraphics[width = 1.\columnwidth]{figures/Pinning_force.pdf}
\caption{Top: scaled pinning force densities $F_{\mathrm{pin},u}$ and
$F_{\mathrm{pin},v}$ versus impact angle $\theta$; we have used the
same parameters as in Fig.\ \ref{fig:ellipses}. The longitudinal
(along $u$) force $F_{\mathrm{pin},u}$ remains constant and equal to
$F_\mathrm{pin}^\parallel$ for all angles $|\theta| < \theta^\ast$,
while the transverse (along $v$) component $F_{\mathrm{pin},v}$
vanishes in this regime. The longitudinal force drops and vanishes
over the narrow interval $\theta^\ast < |\theta| < \pi/2$, while the
transverse force $F_{\mathrm{pin},v}$ increases up to
$F_\mathrm{pin}^\perp$. Bottom: critical force density $F_c$
(directed along the Lorentz force $\mathbf{F}_{\rm \scriptscriptstyle
L} = \mathbf{j} \wedge\mathbf{B}/c$) versus angle $\varphi$ of the
Lorentz force; the dashed line shows the upper bound $F_c <
F_\mathrm{pin}^\perp/ \sin(\varphi)$.}
\label{fig:forces_angle}
\end{figure}
\subsubsection{Anisotropic critical force density $\mathbf{F}_c$}\label{sec:F_c}
When the vortex system is subjected to a current density $\mathbf{j}$, the
associated Lorentz force $\mathbf{F}_{\rm \scriptscriptstyle L}(\varphi) =
\mathbf{j} \wedge \mathbf{B}/c$ directed along $\varphi$ pushes the vortices
across the defects. When $\mathbf{F}_{\rm \scriptscriptstyle L}$ is directed
along $u$, we have $\mathbf{F}_\mathrm{pin} = [F_\mathrm{pin}^\parallel,0]$
and the vortex system gets immobilized at force densities $F_{\rm
\scriptscriptstyle L} < F_c = F_\mathrm{pin}^\parallel$ (or associated current
densities $\mathbf{j}_c$). When $\mathbf{F}_{\rm \scriptscriptstyle L}$ is
directed away from $u$, the driving component along $v$ has to be compensated
by a finite pinning force $F_{\mathrm{pin},v}$ that appears only for angles
$\theta^\ast < \theta < \pi/2$. Hence, the angles of force and motion,
$\varphi$ associated with the Lorentz force $\mathbf{F}_{\rm
\scriptscriptstyle L}(\varphi)$ and $\theta$ providing the direction of the
pinning force $\mathbf{F}_\mathrm{pin}(\theta)$, are different. We find them,
along with the critical force density $\mathbf{F}_c(\varphi)$, by solving the
dynamical force equation \eqref{eq:macroscopic_force_balance} at vanishing
velocity $\mathbf{v} = 0$,
\begin{equation}\label{eq:Lor-pin-force}
\mathbf{F}_c(\varphi) = \mathbf{F}_\mathrm{pin}(\theta)
\end{equation}
resulting in a critical force density
\begin{equation}\label{eq:Fc}
F_c(\varphi) = \sqrt{F_{\mathrm{pin},u}^2(\theta) + F_{\mathrm{pin},v}^2(\theta)}
\end{equation}
with angles $\varphi$ and $\theta$ related via
\begin{equation}\label{eq:varphi}
\tan \varphi = \frac{F_{\mathrm{pin},v}(\theta)}{F_{\mathrm{pin},u}(\theta)}.
\end{equation}
Since $F_{\mathrm{pin},v}(\theta< \theta^\ast) = 0$, the entire interval
$\theta < \theta^\ast$ is compressed to $\varphi = 0$ and it is the narrow
regime $\theta^\ast < \theta < \pi/2$ that determines the angular
characteristic of the critical force density $F_c(\varphi)$. The critical
force density $F_c(\varphi)$ is peaked at $\varphi = 0$ as shown in Fig.\
\ref{fig:forces_angle} (with a correspondingly sharp peak in $j_c$ at right
angles). Combining Eqs.\ \eqref{eq:Fc} and \eqref{eq:varphi}, we can derive a
simple expression bounding the function $F_c(\varphi)$,
\begin{equation}\label{eq:bound_Fc}
F_c(\varphi) = F_{\mathrm{pin},v}(\theta)\sqrt{1+\cot^2(\varphi)} \leq
\frac{F_\mathrm{pin}^\perp}{\sin(\varphi)},
\end{equation}
that traces $F_c(\varphi)$ over a wide angular region, see the dashed line in
Fig.\ \ref{fig:forces_angle}. At small values of $\varphi$, the angular
dependence of $F_{\mathrm{pin},v}(\theta)$ can no longer be ignored; it finally
cuts off the divergence $\propto 1/\sin(\varphi)$ at the value $F_c(\varphi
\to 0) \to F_\mathrm{pin}^\parallel$.
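The construction of $F_c(\varphi)$ out of the components
$F_{\mathrm{pin},u}(\theta)$ and $F_{\mathrm{pin},v}(\theta)$ can be
illustrated numerically; since the crossover within $\theta^\ast < \theta <
\pi/2$ is known only numerically, the sketch below replaces it by a simple
linear toy interpolation and merely demonstrates Eqs.\
\eqref{eq:Fc}--\eqref{eq:bound_Fc} (all values are placeholders):
\begin{verbatim}
import numpy as np

# Construction of F_c(phi) from Eqs. (Fc) and (varphi). The crossover
# of F_u, F_v in theta* < theta < pi/2 is modeled by a linear toy
# interpolation -- illustration of the construction only.
F_par, F_perp, theta_star = 1.0, 0.3, 1.45     # placeholder values

th = np.linspace(theta_star, np.pi / 2, 2001)
x = (th - theta_star) / (np.pi / 2 - theta_star)
Fu, Fv = F_par * (1 - x), F_perp * x           # toy crossover model

Fc = np.hypot(Fu, Fv)                          # Eq. (Fc)
phi = np.arctan2(Fv, Fu)                       # Eq. (varphi)
bound = F_perp / np.clip(np.sin(phi), 1e-9, None)   # Eq. (bound_Fc)
print(Fc.max(), bool(np.all(Fc <= bound + 1e-9)))   # peak F_par; bounded
\end{verbatim}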
\subsubsection{Isotropized pinning force density $F_\mathrm{pin}$}\label{sec:F_ang-av}
In a last step, we assume an ensemble of equal anisotropic defects that are
uniformly distributed in space and randomly oriented. In this situation, we
have to perform an additional average over the instability directions
$\hat{\mathbf{u}}_i$ associated with the different defects $i = 1, \dots, N$.
Neglecting the modification of $\mathbf{F}_\mathrm{pin}(\theta)$ away from
$[F_\mathrm{pin}^\parallel,0]$ in the small angular regions $\theta^\ast <
|\theta| < \pi/2$, we find that the force along any direction
$\hat{\mathbf{R}}$ has the magnitude
\begin{eqnarray}\label{eq:av_force}
F_\mathrm{pin} &\approx& \frac{1}{N}\sum_{i=1}^N |(F_\mathrm{pin}^\parallel
\hat{\mathbf{u}}_i) \cdot \hat{\mathbf{R}}|
\\ \nonumber
&\approx& F^{\parallel}_\mathrm{pin} \int_{-\pi/2}^{\pi/2}
\frac{\mathrm{d}\theta}{\pi} \, \cos\theta = \frac{2}{\pi} F_\mathrm{pin}^{\parallel}.
\end{eqnarray}
As a result of the averaging over the angular directions, the pinning force
density is now effectively isotropic and directed against the velocity
$\mathbf{v}$ of the vortex motion.
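The angular average in Eq.\ \eqref{eq:av_force} is readily checked by sampling
random defect orientations:
\begin{verbatim}
import numpy as np

# Angular average of Eq. (av_force) over randomly oriented defects.
rng = np.random.default_rng(1)
theta = rng.uniform(-np.pi / 2, np.pi / 2, 10**6)   # instability axes
print(np.mean(np.abs(np.cos(theta))), 2 / np.pi)    # both ~ 0.63662
\end{verbatim}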
\section{Uniaxial defect}\label{sec:uniax_defect}
In Sec.\ \ref{sec:arb_shape}, we have analyzed the onset of strong pinning for
an arbitrary potential and have determined the shape of the unstable and
bistable domains $\mathcal{U}_{\Rti}$ and $\mathcal{B}_{\Ras}$---with their elliptic and crescent forms,
they look quite different from their ring-shaped counterparts for the
isotropic defect in Figs.\ \ref{fig:f_pin}(c) and (d). In this section, we
discuss the situation for a weakly anisotropic defect with a small uniaxial
deformation quantified by the small parameter $\epsilon$ in order to
understand how our previous findings, the results for the isotropic defect and
those describing the strong-pinning onset, relate to one another.
Our weakly deformed defect is described by equipotential lines that are nearly
circular but slightly elongated along $y$, implying that pinning is strongest
in the $x$-direction. We will find that the unstable (bistable) domain $\mathcal{U}_{\Rti}$
($\mathcal{B}_{\Ras}$) for the uniaxially anisotropic defect starts out with two ellipses
(crescents) on the $x$-axis as $\kappa_m$ crosses unity. With increasing
pinning strength, i.e., $\kappa_m$, these ellipses (crescents) grow and deform
to follow the equipotential lines, with the end-points approaching one another
until they merge on the $\pm y$-axis. These merger points, which we denote as
$\tilde{\bf R}_s$ and $\bar{\bf R}_s$, define a second class of important points (besides the
onset points $\tilde{\bf R}_m$ and $\bar{\bf R}_m$) in the buildup of the strong pinning
landscape: while the onset points $\tilde{\bf R}_m$ are defined as minima of the
Hessian determinant $D(\tilde{\bf R})$, the merger points $\tilde{\bf R}_s$ turn out to be
associated with saddle points of $D(\tilde{\bf R})$. Pushing across the merger of the
deformed ellipses (crescents) by further increasing the Labusch parameter
$\kappa_m$, the unstable (bistable) domains $\mathcal{U}_{\Rti}$ ($\mathcal{B}_{\Ras}$) undergo a change
in topology, from two separated areas to a ring-like geometry as it appears
for the isotropic defect, see Figs.\ \ref{fig:f_pin}(c) and (d), thus
explaining the interrelation of our results for isotropic and anisotropic
defects.
With this analysis, we thus show how the strong pinning landscape for the
weakly uniaxial defect will finally assume the shape and topology of the
isotropic defect as the pinning strength $\kappa_m$ overcomes the anisotropy
$\epsilon$. Furthermore, this discussion will introduce the merger points $\tilde{\bf R}_s$ as a
second type of characteristic points of strong pinning landscapes that we will
further study in section \ref{sec:hyp_expansion} using a Landau-type expansion as
done in section \ref{sec:ell_expansion} above; we will find that the geometry of
the merger points $\tilde{\bf R}_s$ is associated with hyperbolas, as that of the onset points
was associated with ellipses.
Our uniaxially anisotropic defect is described by the stretched (along the
$y$-axis) Lorentzian
\begin{equation}\label{eq:uniax_potential_formal}
e_p(\tilde{x},\tilde{y}) = -e_p\left(1+\frac{\tilde{x}^2}{2\xi^2}
+ \frac{\tilde{y}^2}{2\xi^2\left(1 + \epsilon\right)^2}\right)^{-1},
\end{equation}
with equipotential lines described by ellipses
\begin{equation}\label{eq:equipotential_lines}
\frac{\tilde{x}^2}{\xi^2} + \frac{\tilde{y}^2}{\xi^2\left(1 + \epsilon\right)^2} = \text{const},
\end{equation}
and the small parameter $0 < \epsilon \ll 1$ quantifying the degree of
anisotropy. At fixed radius $\tilde{R}^2 = \tilde{x}^2 + \tilde{y}^2$, the potential
\eqref{eq:uniax_potential_formal} assumes maxima in energy and in negative
curvature on the $x$-axis, and corresponding minima on the $y$-axis. Along
both axes, the pinning force is directed radially towards the origin
and the Labusch criterion \eqref{eq:gen_Lab} for strong pinning is determined
solely by the curvature along the radial direction. At the onset of strong
pinning, the unstable and bistable domains then first emerge along the $x$-axis
at the points $\tilde{\bf R}_m=(\pm\sqrt{2}\xi,0)$ and $\bar{\bf R}_m = (\pm 2\sqrt{2}\xi,0)$
when
\begin{equation}
\kappa_m = \frac{e_p}{4\Cbar\xi^2} = 1.
\end{equation}
Upon increasing the pinning strength $\kappa_m$, e.g., via softening of the
vortex lattice as described by a decrease in $\Cbar$, the unstable and
bistable domains $\mathcal{U}_{\Rti}$ and $\mathcal{B}_{\Ras}$ expand away from these points, and
eventually merge along the $y$-axis at $\tilde{\bf R}_s = (0, \pm
\sqrt{2}\xi(1+\epsilon))$, $\bar{\bf R}_s = (0, \pm 2\sqrt{2}\xi(1+\epsilon))$ when
\begin{equation}\label{eq:uniax_merging}
\kappa_s = \frac{e_p}{4\Cbar\xi^2(1+\epsilon)^2} = \frac{\kappa_m}{(1+\epsilon)^2} = 1,
\end{equation}
i.e., for $\kappa_m = (1 + \epsilon)^2$. The evolution of the strong pinning
landscape from onset to merging takes place in the interval $\kappa_m
\in [1, (1+ \epsilon)^2]$; pushing $\kappa_m$ beyond this interval,
we will analyze the change in topology and appearance of non-simply connected
unstable and bistable domains after the merging.
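These two thresholds are conveniently checked numerically; the minimal sketch
below (placeholder units $e_p = \xi = 1$) tunes the pinning strength via
$\Cbar$ and prints the pair $(\kappa_m, \kappa_s)$ at onset and at merging:
\begin{verbatim}
# Onset and merger thresholds for the uniaxial defect, cf. Eq.
# (uniax_merging); placeholder units e_p = xi = 1, eps = 0.1.
ep, xi, eps = 1.0, 1.0, 0.1

for Cbar in (0.25, 0.25 / (1 + eps)**2):     # tune pinning via Cbar
    kappa_m = ep / (4 * Cbar * xi**2)
    kappa_s = kappa_m / (1 + eps)**2
    print(kappa_m, kappa_s)   # onset: (1, <1); merger: ((1+eps)^2, 1)
\end{verbatim}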
The quantity determining the shape of the unstable domain $\mathcal{U}_{\Rti}$ is the
Hessian determinant $D(\tilde{\bf R})$ of the total vortex energy $e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})$,
see Eqs.\ \eqref{eq:det_Hessian} and \eqref{eq:en_pin_tot}, respectively. At
onset, the minimum of $D(\tilde{\bf R})$ touches zero for the first time; with
increasing $\kappa_m$, this minimum drops below zero and the condition
$D(\tilde{\bf R}) = 0$ determines the unstable ellipse that expands in $\tilde{\bf R}$-space.
Viewing the function $D(\tilde{\bf R})$ as a height function of a landscape in the
$\tilde{\bf R}$ plane, this corresponds to filling this landscape, e.g., with water, up
to the height level $D = 0$ with the resulting lake representing the unstable
domain. In the present uniaxially symmetric case, a pair of unstable ellipses
grow simultaneously, bend around the equipotential line near the radius $\sim
\sqrt{2}\xi$ and finally touch upon merging on the $y$-axis. In our geometric
interpretation, this corresponds to the merging of the two (water-filled)
valleys that happens in a saddle-point of the function $D(\tilde{\bf R})$ at the height
D = 0$. Hence, the merger points $\tilde{\bf R}_s$ correspond to saddles in $D(\tilde{\bf R})$
with
\begin{equation}\label{eq:det_saddle2Da}
D(\tilde{\bf R}_s) = 0,\quad \mathbf{\nabla}_{\tilde{\bf R}}\,D(\tilde{\bf R})\big|_{\tilde{\bf R}_s} = 0,
\end{equation}
and
\begin{equation}\label{eq:det_saddle2Db}
\mathrm{det}\bigl[\mathrm{Hess}\bigl[ D(\tilde{\bf R}) \bigr]\bigr] \big|_{\tilde{\bf R}_s} < 0,
\end{equation}
cf.\ Eq.\ \eqref{eq:det_min2D}.
In our calculation of $D(\tilde{\bf R})$, we exploit that the Hessian in
\eqref{eq:det_Hessian} does not depend on the asymptotic position $\bar{\bf R}$ and
we can set it to zero,
\begin{align}
D(\tilde{\bf R})
&=\mathrm{det}\bigl\{\mathrm{Hess}[\Cbar\tilde{R}^2/2 + e_{p}^{\scriptscriptstyle (i)}(\tilde{R})
+ \delta e_p(\tilde{\bf R})]\bigr\},
\end{align}
where we have split off the anisotropic correction $\delta
e_p(\tilde{\bf R}) = e_p(\tilde{\bf R}) - e_p^{\scriptscriptstyle (i)}(\tilde{R})$ away from the
isotropic potential $e_p^{\scriptscriptstyle (i)}(\tilde{R})$ with $\epsilon = 0$.
In the following, we perform a perturbative analysis around the isotropic
limit valid in the limit of weak anisotropy $\epsilon \ll 1$; this motivates
our use of polar (tip) coordinates $\tilde{R}$ and $\tilde{\phi}$.
The isotropic contribution $\mathrm{H}^{\scriptscriptstyle (i)}$ to the
Hessian matrix $\mathrm{H}$ is diagonal with components
\begin{align}\nonumber
\mathrm{H}^{\scriptscriptstyle (i)}_{\tilde{R}\tilde{R}}(\tilde{R})
&\equiv \partial_{\tilde{R}}^2 [\Cbar\tilde{R}^2/2 + e_{p}^{\scriptscriptstyle (i)}(\tilde{R})]\\
\label{eq:uniax_Hrr}
&=\Cbar + \partial_{\tilde{R}}^2 e_{p}^{\scriptscriptstyle (i)}(\tilde{R})
\end{align}
and
\begin{align} \nonumber
\mathrm{H}^{\scriptscriptstyle (i)}_{\tilde{\phi}\pti}(\tilde{R})
&\equiv(\tilde{R}^{-2}\partial^2_{\tilde{\phi}\pti} + \tilde{R}^{-1}\partial_{\tilde{R}})
[\Cbar\tilde{R}^2/2 + e_p^{\scriptscriptstyle (i)}(\tilde{R})] \\
&= \Cbar - f_p^{\scriptscriptstyle (i)}(\tilde{R})/\tilde{R}.
\label{eq:uniax_Hpp}
\end{align}
The radial component $\mathrm{H}^{\scriptscriptstyle
(i)}_{\tilde{R}\tilde{R}}\propto(\kappa_m - 1)$ vanishes at onset, while
$\mathrm{H}^{\scriptscriptstyle (i)}_{\tilde{\phi}\pti}$ remains finite, positive, and
approximately constant.
\begin{figure}[t]
\includegraphics[width = 1.\columnwidth]{figures/quasi-ring-ellipse.pdf}
\caption{Unstable and bistable domains close to the onset of strong
pinning for a uniaxial defect \eqref{eq:uniax_potential_formal}
centered at the origin, with $\epsilon = 0.1$ and $\kappa_m - 1
=0.01$. The pinning potential is steepest at angles $\tilde{\phi} = 0,\, \pi$
and least steep at $\tilde{\phi} =\pm\pi/2$, hence strong pinning is realized
first in a small interval around $\tilde{\phi} = 0,\, \pi$ (solid black dots)
where $\kappa_m(\tilde{\phi}) \geq 1$. (a) The unstable domain $\mathcal{U}_{\Rti}$ in tip
space is bounded by red/blue solid lines (jump lines $\mathcal{J}_\mathrm{\Rti}$, see Eq.\
\eqref{eq:uniax_Rjp}); dashed lines mark the associated landing lines
$\mathcal{L}_\mathrm{\Rti}$, see \eqref{eq:uniax_Rlp}. (b) Focus on the unstable domain near
$\tilde{\phi} = 0$ in polar coordinates $\tilde{R}$ and $\tilde{\phi}$. The jumping
(solid) and landing (dashed) lines have the approximate shape of
ellipses, see Eq.\ \eqref{eq:ellipse-small}, in agreement with our
analysis of Sec.\ \ref{sec:Uti}. (c) The bistable domain $\mathcal{B}_{\Ras}$ in
asymptotic space involves symmetric crescents centered at $\bar{\phi} = 0,\,
\pi$ and a narrow width $\propto (\kappa_m(\bar{\phi}) - 1)^{3/2}$, see Eq.\
\eqref{eq:uniax_R_bas}, in agreement with the analysis of Sec.\
\ref{sec:Bas}. (d) Focus on the bistable domain at $\bar{\phi} = 0$ in
polar coordinates $\bar{R}$ and $\bar{\phi}$. Red/blue colors indicate
different vortex configurations as quantified through the order
parameter $\tilde{R} - \tilde{R}_m(\tilde{\phi})$.}
\label{fig:quasi-ring-ellipse}
\end{figure}
The anisotropic component $\delta e_p(\tilde{\bf R})$ introduces corrections
$\propto\epsilon$; these significantly modify the radial entry of the full
Hessian while leaving its azimuthal component $\mathrm{H}_{\tilde{\phi}\pti}$
approximately unchanged; the off-diagonal entries of the full Hessian scale as
$\epsilon$ and hence contribute in second order of $\epsilon$ to $D(\tilde{\bf R})$.
As a result, the sign change in the determinant
\begin{equation}
D(\tilde{\bf R}) \approx \mathrm{H}_{\tilde{R}\tilde{R}}(\tilde{\bf R})
\mathrm{H}_{\tilde{\phi}\pti}(\tilde{R})
+ \mathcal{O}\left(\epsilon^2\right),
\end{equation}
is determined by
\begin{equation}
\mathrm{H}_{\tilde{R}\tilde{R}}(\tilde{\bf R})
= \mathrm{H}^{\scriptscriptstyle (i)}_{\tilde{R}\tilde{R}}(\tilde{R})
+ \partial^2_{\tilde{R}}\delta e_p(\tilde{\bf R})
\end{equation}
for radii close to $\tilde{R}_m$ with $\delta\tilde{R} = \tilde{R} - \tilde{R}_m \approx
\mathcal{O}(\sqrt{\kappa_m - 1})$. We expand the potential
\eqref{eq:uniax_potential_formal} around the isotropic part
$e_p^{\scriptscriptstyle (i)}(\tilde{R})$,
\begin{equation}\label{eq:dep_quad}
\delta e_p(\tilde{\bf R}) \approx -\epsilon\,[\partial_{\tilde{R}}
e_p^{\scriptscriptstyle (i)}(\tilde{R})]\tilde{R}\sin^2\tilde{\phi},
\end{equation}
and additionally expand both $e_p^{\scriptscriptstyle (i)}(\tilde{R})$ and $\delta
e_p(\tilde{\bf R})$ around $\tilde{R}_m$, keeping terms $\propto \epsilon\,
\sqrt{(\kappa_m-1)}$. The radial entry of the anisotropic Hessian matrix then
assumes the form
\begin{multline}
\mathrm{H}_{\tilde{R}\tilde{R}}(\tilde{\bf R})\approx \Cbar \, [1-\kappa_m(\tilde{\phi})] \\
+ \gamma \, [ \delta\tilde{R}^2/2 - \epsilon\,\sin^2{\tilde{\phi}}\,\tilde{R}_m \delta\tilde{R}]
\end{multline}
with $\gamma = \partial^4_{\tilde{R}}e_p^{\scriptscriptstyle (i)}(\tilde{R})|_{\tilde{R}_m}$
and the angle-dependent Labusch parameter
\begin{equation}\label{eq:kappa_quad}
\kappa_m(\tilde{\phi}) \equiv \frac{\max_{\tilde{R}}[-\partial_{\tilde{R}}^2 e_p(\tilde{R},\tilde{\phi})|_{\tilde{\phi}}]}{\Cbar}
= \kappa_m - 2\epsilon\sin^2\tilde{\phi}.
\end{equation}
The edges of the unstable region $\mathcal{U}_{\Rti}$ can then be obtained by imposing the
condition $\mathrm{H}_{\tilde{R}\tilde{R}}(\tilde{\bf R}) = 0$; the solutions of the
corresponding quadratic equation define the jump positions
$\tilde{R}_\mathrm{jp}(\tilde{\phi})$ (or boundaries $\partial\mathcal{U}_{\Rti}$),
\begin{align}\label{eq:uniax_Rjp}
\tilde{R}_\mathrm{jp}(\tilde{\phi}) \approx \tilde{R}_m(\tilde{\phi}) \pm \delta \tilde{R}(\tilde{\phi}).
\end{align}
These are centered around the (`large') ellipse defined by
\begin{equation}\label{eq:ellipse-large}
\tilde{R}_m(\tilde{\phi}) = \tilde{R}_m (1 +\epsilon\sin^2\tilde{\phi})
\end{equation}
and separated by (cf.\ Eq.\ \eqref{eq:ujp})
\begin{align}\label{eq:uniax_drmax}
2\, \delta \tilde{R}(\tilde{\phi}) = \sqrt{\frac{8\Cbar}{\gamma}(\kappa_m(\tilde{\phi}) - 1)}
\end{align}
along the radius. Making use of the form \eqref{eq:kappa_quad} of
$\kappa_m(\tilde{\phi})$ and assuming a small value of $\kappa_m > 1$ near onset, we
obtain the jump line in the form of a (`small') ellipse centered at $[\pm
\tilde{R}_m,0]$,
\begin{equation}\label{eq:ellipse-small}
\gamma\, \delta \tilde{R}^2 + \epsilon \Cbar\,\tilde{\phi}^2 = \Cbar(\kappa_m - 1).
\end{equation}
\begin{figure}[t]
\includegraphics[width = 1.\columnwidth]{figures/quasi-ring-hyperbola.pdf}
\caption{Unstable and bistable domains before merging for a uniaxial
defect \eqref{eq:uniax_potential_formal} centered at the origin, with
$\epsilon = 0.1$ and $1 - \kappa_s \approx 0.01$. Strong pinning is
realized everywhere but in a small interval around $\tilde{\phi} = \pm\pi/2$
where $\kappa_m(\tilde{\phi}) < 1$. (a) The unstable domain $\mathcal{U}_{\Rti}$ in the tip
plane is bounded by the solid red/blue jump lines $\mathcal{J}_\mathrm{\Rti}$, see Eq.\
\eqref{eq:uniax_Rjp} and involves two strongly bent ellipses
originating from angles $\tilde{\phi} = 0,\, \pi$ (black dots) and approaching
one another close to $\tilde{\phi} =\pm\pi/2$ (black crosses); red/blue dashed
lines are landing points as given by Eqs.\ \eqref{eq:uniax_Rlp}. (b)
Focus (in polar coordinates $\tilde{R},\, \tilde{\phi}$) on the tips of the
unstable domain near $\tilde{\phi} = \pi/2$. (c) The bistable domain $\mathcal{B}_{\Ras}$
in the asymptotic space consists of thin symmetric crescents (colored
in magenta) originating from $\bar{\phi} = 0,\, \pi$, with the delimiting
black solid lines given by Eq.\ \eqref{eq:uniax_R_bas}. (d) Focus on
the cusps of the bistable domain close to $\bar{\phi} = \pi/2$ in polar
coordinates $\bar{R},\,\bar{\phi}$. Red/blue colors indicate different
vortex configurations as quantified through the order parameter $\tilde{R}
- \tilde{R}_m(\bar{\phi})$.}
\label{fig:quasi-ring-hyperbola}
\end{figure}
Hence, we find that the anisotropic results are obtained from the isotropic
ones by replacing the circle $\tilde{R}_m$ by the ellipse $\tilde{\bf R}_m(\tilde{\phi})$ and
substituting $\kappa \to \kappa_m(\tilde{\phi})$ in the width \eqref{eq:ujp}, see
Figs.\ \ref{fig:quasi-ring-ellipse}(a) and (b) evaluated for small values
$\kappa_m - 1 = 0.01$ and $\epsilon = 0.1$.
Analogously, the boundaries of the bistable domain $\mathcal{B}_{\Ras}$ can be found by
applying the same substitutions to the result \eqref{eq:bs_pos}, see Figs.\
\ref{fig:quasi-ring-ellipse}(c) and (d),
\begin{align}\label{eq:uniax_R_bas}
\bar{R}(\bar{\phi}) \approx \bar{R}_m(\bar{\phi}) \pm \delta \bar{R}(\bar{\phi})
\end{align}
with $\bar{R}_m(\bar{\phi}) = \bar{R}_m (1 +\epsilon\sin^2\bar{\phi})$ and the width
\begin{align}\label{eq:uniax_dras}
2\, \delta \bar{R}(\bar{\phi}) = \frac{2}{3}\sqrt{\frac{8\Cbar}{\gamma}}(\kappa_m(\bar{\phi}) - 1)^{3/2}.
\end{align}
The landing line $\mathcal{L}_{\tilde{\bf R}}$ is given by (see Eq.\ \eqref{eq:j_dist}
and note that the jump point is shifted by $\uti_{\mathrm{jp}}$ away from $\tilde{x}_m$, see Eq.\
\eqref{eq:jp_pos})
\begin{align}\label{eq:uniax_Rlp}
\tilde{R}_\mathrm{lp}(\tilde{\phi}) \approx \tilde{R}_m(\tilde{\phi}) \mp 2\,\delta \tilde{R}(\tilde{\phi}).
\end{align}
An additional complication is the finite angular extension of the unstable and
bistable domains $\mathcal{U}_{\Rti}$ and $\mathcal{B}_{\Ras}$; these are limited by the condition
$\kappa_m (\phi_{\max})=1$, providing us with the constraint
\begin{equation}
\tilde{\phi}_{\max} = \bar{\phi}_{\max} \approx \pm \sqrt{\frac{\kappa_m-1} {2\epsilon}}
\end{equation}
near the strong pinning onset with $(\kappa_m-1)\ll\epsilon$. The resulting
domains $\mathcal{U}_{\Rti}$ have characteristic extensions of scale
$\propto\sqrt{\kappa_m-1}$, see Fig.\ \ref{fig:quasi-ring-ellipse}.
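For completeness, a short numerical sketch (placeholder parameters,
illustration only) evaluates the jump lines \eqref{eq:uniax_Rjp} over the
strong-pinning interval $|\tilde{\phi}| \leq \tilde{\phi}_{\max}$:
\begin{verbatim}
import numpy as np

# Jump lines R_jp(phi) of the uniaxial defect near onset, Eqs.
# (uniax_Rjp)-(uniax_drmax); placeholder parameters.
Cbar, xi, eps, kappa_m, gamma = 1.0, 1.0, 0.1, 1.01, 1.0
Rm = np.sqrt(2) * xi

phi_max = np.sqrt((kappa_m - 1) / (2 * eps))   # angular extension
phi = np.linspace(-phi_max, phi_max, 201)
kap = kappa_m - 2 * eps * np.sin(phi)**2       # Eq. (kappa_quad)
Rm_phi = Rm * (1 + eps * np.sin(phi)**2)       # Eq. (ellipse-large)
dR = 0.5 * np.sqrt(8 * Cbar / gamma * np.maximum(kap - 1, 0))
R_out, R_in = Rm_phi + dR, Rm_phi - dR         # boundaries of U
print(dR.max(), phi_max)   # width ~ sqrt(kappa_m - 1) at phi = 0
\end{verbatim}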
\begin{figure}[t]
\includegraphics[width = 1.\columnwidth]{figures/quasi-ring-hyperbola-merged.pdf}
\caption{Unstable and bistable domains for a uniaxial defect
\eqref{eq:uniax_potential_formal} after merging, with $\epsilon = 0.1$
and $\kappa_s - 1 \approx 0.01$. (a) The unstable domain $\mathcal{U}_{\Rti}$ in tip
plane is enclosed between the jump lines $\mathcal{J}_\mathrm{\Rti}$ (solid red/blue, see Eq.\
\eqref{eq:uniax_Rjp}) and takes the shape of a deformed ring with a
wider (narrower) width at strongest (weakest) pinning near the solid
dots (crosses). Red/blue dashed lines mark the landing positions
$\mathcal{L}_\mathrm{\Rti}$ of the vortex tips and are given by Eq.\ \eqref{eq:uniax_Rlp}.
(b) Focus on the narrowing in the unstable domain close to the merger
points (crosses) at $\tilde{\phi} = \pi/2$ in the polar coordinates
$\tilde{R}, \, \tilde{\phi}$. (c) The bistable domain $\mathcal{B}_{\Ras}$ in asymptotic
space is a narrow ring (colored in magenta) thicker (thinner) at points of
strongest (weakest) pinning near $\bar{\phi} = 0,\ \pi$ ($\bar{\phi} = \pm\pi/2$);
black lines correspond to Eq.\ \eqref{eq:uniax_R_bas}. (d) Focus on
the constriction in the bistable domain close to $\bar{\phi} = \pi/2$ in
polar coordinates $\bar{R}, \, \bar{\phi}$. Red/blue colors indicate
different vortex configurations as quantified through the order parameter
$\tilde{R} - \tilde{R}_m(\bar{\phi})$.}
\label{fig:quasi-ring-hyperbola-merged}
\end{figure}
Close to merging (marked by crosses in the figure) at $\phi = \pm\pi/2$, we
define the deviation $\delta\phi = \pi/2 - \phi$ with $\delta\phi \ll 1$, and
imposing the condition $\kappa_m(\phi_{\max})=1$, we find
\begin{equation}
\delta\tilde{\phi}_{\max} = \delta\bar{\phi}_{\max} \approx \sqrt{1 -\frac{\kappa_m-1} {2\epsilon}}
\approx \sqrt{\frac{1 - \kappa_s}{2\epsilon}}.
\end{equation}
The corresponding geometries of $\mathcal{U}_{\Rti}$ and $\mathcal{B}_{\Ras}$ are shown in Fig.\
\ref{fig:quasi-ring-hyperbola} for $1 -\kappa_s\approx 0.01$ and $\epsilon =
0.1$. Finally, $\delta\tilde{\phi}_{\max}$ vanishes at merging for $\kappa_s = 1$ (or
$\kappa_m -1 \approx 2\epsilon$), in agreement, to order $\epsilon$, with the
exact result \eqref{eq:uniax_merging}.
Pushing the Labusch parameter beyond the merger with $\kappa_s > 1$ or
$\kappa_m > (1+\epsilon)^2 \approx 1 + 2\epsilon$, the unstable and bistable
regimes $\mathcal{U}_{\Rti}$ and $\mathcal{B}_{\Ras}$ change their topology: they develop a (non-simply
connected) ring-like geometry with separated inner and outer edges that are a
finite distance apart in the radial direction at all angles $\tilde{\phi}$ and $\bar{\phi}$.
The situation after the merger is shown in Fig.\
\ref{fig:quasi-ring-hyperbola-merged} for $\kappa_s - 1 \approx 0.01$ and
$\epsilon = 0.1$, with the merging points $\tilde{\bf R}_s$ and $\bar{\bf R}_s$ marked by
crosses.
The merging of the unstable domains at the saddle point $\tilde{\bf R}_s$ is a general
feature of irregular pinning potentials. In the next section, we will analyze
the behavior of the unstable domains close to a saddle point $\tilde{\bf R}_s$ of the
Hessian determinant $D(\tilde{\bf R})$ and obtain a universal description of their
geometry close to this point. We will see that the geometry associated with
this merger is of a hyperbolic type described by $\gamma \tilde{u}^2 + \delta
\tilde{v}^2 = 2\Cbar (\kappa_s -1)$, $\gamma >0$ and $\delta < 0$ (assuming no
skew). The change in topology then is driven by the sign change in $\kappa_s -
1$: before merging, $\kappa_s < 1$, the hyperbola is open along the
unstable (radial) direction $\tilde{u}$, thus separating the two unstable regions,
while after merging, $\kappa_s > 1$, the hyperbola is open along the
transverse direction $\tilde{v}$, with the ensuing passage defining the single,
non-simply connected, ring-like unstable region.
\section{Merger points}\label{sec:merger}
The merging of unstable and bistable domains is a general feature of irregular
pinning potentials that is relevant beyond the simple example of a weakly
anisotropic uniaxial defect discussed above. Indeed, while the exact
geometries of $\mathcal{U}_{\Rti}$ and $\mathcal{B}_{\Ras}$ depend on the precise shape of the pinning
potential, their behavior close to merging is universal. Below, we will study
this universal behavior by generalizing the expansions of Sec.\
\ref{sec:arb_shape} to saddle points $\tilde{\bf R}_s$ of the determinant $D(\tilde{\bf R})$.
As with the onset of strong pinning, the merger of two domains induces a
change in topology in the unstable and bistable domains; we will discuss these
topological aspects of onsets and mergers in Secs.\ \ref{sec:topology_hyp} and
\ref{sec:2D_landscape} below.
\subsection{Expansion near merger}\label{sec:hyp_expansion}
Following the strategy of Sec.\ \ref{sec:arb_shape}, we expand the energy
functional around a saddle point $\tilde{\bf R}_s$ of the determinant $D(\tilde{\bf R})$ in
order to obtain closed expressions for the unstable and bistable domains at
merging. In doing so, we again define local coordinate systems $(\tilde{u},\tilde{v})$
and $(\bar{u},\bar{v})$ in tip- and asymptotic space centered at $\tilde{\bf R}_s$ and
$\bar{\bf R}_s$, where the latter is associated with $\tilde{\bf R}_s$ through the force
balance equation \eqref{eq:gen_force_balance} in the original laboratory
system. Furthermore, we fix our axes such that $D(\tilde{\bf R}_s)$ is a local maximum
along the (unstable) $u$- and a local minimum along the (stable) $v$-direction
of the saddle; aligning the axes with the eigenvectors of the (symmetric)
Hessian matrix eliminates the mixed term $\propto \tilde{u}\tilde{v}$ from the
expansion. Moreover, the vanishing slopes at the saddle point, see
\eqref{eq:det_saddle2Da}, imply the absence of terms $\propto \tilde{u}^3$ and
$\propto \tilde{u}^2\tilde{v}$ in the expansion. Dropping higher-order terms
(corresponding to double-primed terms in \eqref{eq:e_pin_expans_ani_orig}), we
arrive at the expression
\begin{align}\label{eq:e_pin_expans_hyp}
&e_\mathrm{pin}(\tilde{\bf R}; \bar{\bf R}) =
\frac{\Cbar}{2} (1-\kappa_s) \, \tilde{u}^2
+ \frac{\Cbar + \lambda_{+,s}}{2}\, \tilde{v}^2 +\frac{a_s}{2}\, \tilde{u} \tilde{v}^2 \nonumber \\
&\quad+\frac{\alpha_s}{4}\, \tilde{u}^2\tilde{v}^2
+\frac{\beta_s}{6}\, \tilde{u}^3\tilde{v}
+\frac{\gamma_s}{24}\, \tilde{u}^4
-\Cbar\bar{u}\tilde{u} - \Cbar\bar{v} \tilde{v},
\end{align}
with $\kappa_s \equiv -\lambda_-(\tilde{\bf R}_s)/\Cbar,\ \lambda_{+,s} \equiv
\lambda_+(\tilde{\bf R}_s)$ and the remaining coefficients defined in analogy to Eq.\
\eqref{eq:e_pin_expans_ani}.
The most important term in the expansion \eqref{eq:e_pin_expans_hyp} is the
curvature term $\Cbar (1-\kappa_s)\, \tilde{u}^2 /2$ along the unstable direction
$u$. As before in Sec.\ \ref{sec:Uti}, see Eq.\ \eqref{eq:e_pin_expans_ani},
the coefficient $(1 - \kappa_s)$ changes sign at some value of the pinning
strength and will serve as the small parameter in our considerations. The
higher-order terms in the expansion \eqref{eq:e_pin_expans_hyp} are
constrained by the saddle condition \eqref{eq:det_saddle2Db}, implying that
(cf.\ \eqref{eq:Hess_D} and \eqref{eq:detHD})
\begin{equation}\label{eq:det_saddle2D_explicit}
\gamma_s\delta_s - \beta_s^2 < 0
\end{equation}
with
\begin{equation}\label{eq:delta_s}
\delta_s \equiv \alpha_s - \frac{2 a_s^2}{\Cbar + \lambda_{+,s}}
\end{equation}
(for the saddle point there is no condition on the trace of the Hessian). The
mapping of the two-dimensional pinning energy \eqref{eq:e_pin_expans_hyp} to
an effective one-dimensional Landau theory \eqref{eq:eff_landau_1D_hyp} of the
van der Waals kind is discussed in Appendix \ref{sec:eff_1D_merging}, both
before and after merging.
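The expansion \eqref{eq:e_pin_expans_hyp} is also convenient for a direct
numerical exploration of the unstable domain. The following minimal Python
sketch (the coefficient values are assumptions taken from the caption of Fig.\
\ref{fig:hyp-duo}; $\Cbar \approx 0.25\, e_p/\xi^2$ follows from $\kappa_s
\approx 1$ and $\lambda_{-,s} = -0.25\, e_p/\xi^2$) evaluates the determinant
of the tip Hessian of \eqref{eq:e_pin_expans_hyp} on a grid and flags the
region $D < 0$ near the saddle,
\begin{verbatim}
import numpy as np

# assumed coefficients (units of e_p and xi)
Cbar, lam_p = 0.25, 0.0
kappa_s = 1.01                      # after merging; use 0.99 before merging
a, alpha, beta, gamma = 0.035, -0.025, 0.0, 0.68

def hessian_det(u, v):
    """Determinant D of the tip Hessian of the expanded pinning energy;
    the elastic terms are linear in (u, v) and drop out of the Hessian."""
    e_uu = Cbar*(1 - kappa_s) + 0.5*alpha*v**2 + beta*u*v + 0.5*gamma*u**2
    e_vv = (Cbar + lam_p) + a*u + 0.5*alpha*u**2
    e_uv = a*v + alpha*u*v + 0.5*beta*u**2
    return e_uu*e_vv - e_uv**2

u, v = np.meshgrid(np.linspace(-1, 1, 401), np.linspace(-1, 1, 401))
unstable = hessian_det(u, v) < 0    # boolean mask of the unstable domain
print(unstable.sum())               # grid points inside U near the saddle
\end{verbatim}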
\subsection{Unstable domain $\mathcal{U}_{\Rti}$}\label{sec:Uti_merger}
\begin{figure}[t]
\includegraphics[width = 1.\columnwidth]{figures/hyperbolae_duo.pdf}
\caption{Jump lines $\mathcal{J}_\mathrm{\Rti}$ (solid red/blue) and landing lines $\mathcal{L}_\mathrm{\Rti}$
(dashed red/blue) in tip space $\tilde{\bf R}$ (in units of $\xi$), with the
hyperbola $\mathcal{J}_\mathrm{\Rti}$ defining the edge $\partial \mathcal{U}_{\Rti}$ of the unstable
domain $\mathcal{U}_{\Rti}$, before (a) and after (b) merging, for $1- \kappa_s= \pm
0.01$. Parameters are $\lambda_{-,s} = -0.25 \,e_p/\xi^2,
\lambda_{+,s} = 0$, and $a_s \approx 0.035\,e_p/\xi^3$, $\alpha_s =
-0.025\,e_p/\xi^4$, $\beta_s = 0$, $\gamma_s \approx 0.68
\,e_p/\xi^4$. A finite skew parameter $\beta_s = 0.025 e_p/\xi^4$
tilts the hyperbola away from the axes (dotted curves). Crosses
correspond to the vertices \eqref{eq:hyp_vertices} and
\eqref{eq:hyp_vertices_merged} of the hyperbola before and after
merging. Pairs of solid and open circles connected via long arrows
are examples of pairs of jumping- and landing tip positions. After
merging, see (b), the unstable domain $\mathcal{U}_{\Rti}$ is connected along the
$\tilde{v}$-axis, dividing the tip coordinate plane into two separate
regions. The jumping and landing hyperbolas coincide at their vertices
before merging, see (a), but not thereafter, see (b), where the
jumping and landing hyperbolas are separated (vertices on $\mathcal{L}_\mathrm{\Rti}$ are
marked with open red/blue stars) and no contact point is present.
Note the rotation by 90 degrees of the unstable direction with
respect to Figs.\ \ref{fig:quasi-ring-hyperbola}(b) and
\ref{fig:quasi-ring-hyperbola-merged}(b).}
\label{fig:hyp-duo}
\end{figure}
\subsubsection{Jump line $\mathcal{J}_\mathrm{\Rti}$}\label{sec:hyp_Jti}
The boundary of the unstable domain $\mathcal{U}_{\Rti}$ is determined by the jump condition
$D(\Rti_{s,\mathrm{jp}}) = 0$. Making use of the expansion \eqref{eq:e_pin_expans_hyp} and
keeping only terms quadratic in $\tilde{u},\tilde{v}$, the edges $\delta\Rti_{s,\mathrm{jp}} = (\uti_{s,\mathrm{jp}},\vti_{s,\mathrm{jp}})$
of $\mathcal{U}_{\Rti}$ (measured relative to $\tilde{\bf R}_s$) are given by the solutions of the quadratic
form (cf.\ \eqref{eq:quadratic_form})
\begin{equation}\label{eq:quadratic_form_hyp}
[\gamma_s\,\tilde{u}^2 + 2 \beta_s\,\tilde{u}\tilde{v} + \delta_s\, \tilde{v}^2]_{\Rti_{s,\mathrm{jp}}}
= 2 \Cbar(\kappa_s - 1).
\end{equation}
Equation \eqref{eq:quadratic_form_hyp} describes a hyperbola (centered at
$\tilde{\bf R}_s$) as its associated determinant is negative, see Eq.\
\eqref{eq:det_saddle2D_explicit}. Again, \eqref{eq:quadratic_form_hyp} can be
cast in the form of a matrix equation
\begin{equation}\label{eq:matrix_eq_jp_hyp}
\delta\Rti_{s,\mathrm{jp}}^T M_{s, \mathrm{jp}}\delta\Rti_{s,\mathrm{jp}} = \Cbar(\kappa_s - 1),
\end{equation}
with $M_{s,\mathrm{jp}}$ given by
\begin{align}\label{eq:hyperbola_jp}
M_{s,\mathrm{jp}} &= \begin{bmatrix} \gamma_s/2 &~~~ \beta_s/2\\
\beta_s/2 &~~~ \delta_s/2
\end{bmatrix}
\end{align}
with $\mathrm{det} M_{s,\mathrm{jp}} = (\gamma_s\delta_s - \beta_s^2)/4 < 0$. As
shown in Fig.\ \ref{fig:hyp-duo}, the geometry of the unstable domain $\mathcal{U}_{\Rti}$
changes drastically when $1- \kappa_s$ changes sign. Before merging, i.e., for
$1 - \kappa_s > 0$, the unstable domain (top and bottom regions in Fig.\
\ref{fig:hyp-duo}(a)) is disconnected along the stable $v$-direction and the
two red/blue branches of the hyperbola \eqref{eq:quadratic_form_hyp} describe
the tips of $\mathcal{U}_{\Rti}$. When $\kappa_s$ goes to unity, the tips of the unstable
domain merge at the saddle point $\tilde{\bf R}_s$. After merging, the unstable domain
extends continuously from the top to the bottom in Fig.\ \ref{fig:hyp-duo}(b)
with a finite width along the unstable $u$-direction, similarly to the
isotropic case shown in Fig.\ \ref{fig:f_pin}(c). Correspondingly, the two
(red and blue) branches of the hyperbola \eqref{eq:quadratic_form_hyp} now
describe the edges of $\mathcal{U}_{\Rti}$.
Solving the quadratic equation \eqref{eq:quadratic_form_hyp} before merging,
i.e., $1 - \kappa_s >0$, we find solutions $\uti_{s,\mathrm{jp}}(\tilde{v})$ away from a gap
along the stable $v$-direction,
\begin{multline}\label{eq:uti_jp_hyp}
\uti_{s,\mathrm{jp}}(|\tilde{v}| \geq \tilde{v}_{s,c}) = -\frac{1}{\gamma_s}\Bigl[\beta_s \tilde{v}\\
\pm \sqrt{2\gamma_s\Cbar(\kappa_s-1)- (\gamma_s\delta_s - \beta_s^2)\tilde{v}^2} \Bigr],
\end{multline}
i.e., Eq.\ \eqref{eq:uti_jp_hyp} has real solutions in the (unbounded)
interval $|\tilde{v}| \geq \tilde{v}_{s,c}$, with
\begin{equation}\label{eq:vti_sc}
\tilde{v}_{s,c} = \sqrt{2\gamma_s\Cbar(1-\kappa_s)/|\gamma_s\delta_s - \beta_s^2|}.
\end{equation}
For the uniaxial defect \eqref{eq:uniax_potential_formal} before merging, this
gap corresponds to a splitting of $\mathcal{U}_{\Rti}$ along the stable angular direction,
producing two separated domains as shown in Fig.\
\ref{fig:quasi-ring-hyperbola}(a). The coordinates
$\left(\uti_{s,\mathrm{jp}}(\pm\tilde{v}_{s,c}), \pm \tilde{v}_{s,c}\right)$ give the positions of the
vertices $\delta\tilde{\bf R}^<_{s,c,\pm}$ (relative to $\tilde{\bf R}_s$) of the hyperbola before
merging,
\begin{align}\label{eq:hyp_vertices}
\delta\tilde{\bf R}^<_{s,c,\pm} &= \pm \left(-\beta_s/\gamma_s, 1\right)\,\tilde{v}_{s,c}.
\end{align}
These are marked as black crosses in Fig.\ \ref{fig:hyp-duo}(a) (note the
rotation in the geometry as compared with Fig.\
\ref{fig:quasi-ring-hyperbola}(a)). We denote the distance between these
vertices by $\delta v^<$, defining a gap of width $\propto
\sqrt{1-\kappa_s}$ given by
\begin{equation}
\delta v^< = 2|\delta\tilde{\bf R}^<_{s,c,\pm}| = 2 \sqrt{2\left(\gamma_s+\frac{\beta_s^2}{\gamma_s}\right)
\frac{\Cbar(1-\kappa_s)}{|\gamma_s\delta_s - \beta_s^2|}}.
\end{equation}
After merging, i.e., for $\kappa_s - 1 > 0$, the (local) topology of $\mathcal{U}_{\Rti}$
has changed as the gap along $v$ closes and reopens along the unstable
$u$-direction; as a result, the two separated domains of $\mathcal{U}_{\Rti}$ have merged.
The two branches of the hyperbola derived from \eqref{eq:quadratic_form_hyp}
are now parametrized as
\begin{multline}\label{eq:vti_jp_hyp}
\vti_{s,\mathrm{jp}}(|\tilde{u}| \geq \tilde{u}_{s,e}) = -\frac{1}{\delta_s}\Bigl[\beta_s\tilde{u} \\
\pm \sqrt{2\delta_s\Cbar(\kappa_s-1)- (\gamma_s\delta_s - \beta_s^2)\tilde{u}^2} \Bigr],
\end{multline}
with
\begin{equation}
\tilde{u}_{s,e} = \sqrt{2|\delta_s|\Cbar(\kappa_s - 1)/|\gamma_s\delta_s - \beta_s^2|}.
\end{equation}
The corresponding unstable domain is shown in Fig.\ \ref{fig:hyp-duo}(b). For
the uniaxial defect \eqref{eq:uniax_potential_formal} after merging, this gap
now corresponds to the finite width of $\mathcal{U}_{\Rti}$ along the radial direction, as
shown in Fig.\ \ref{fig:quasi-ring-hyperbola-merged}(a). The coordinates
$\left(\pm\tilde{u}_{s,e}, \vti_{s,\mathrm{jp}}(\pm \tilde{u}_{s,e})\right)$ for the vertices
$\tilde{\bf R}^>_{s,e,\pm}$ read
\begin{align}\label{eq:hyp_vertices_merged}
\delta \tilde{\bf R}^>_{s,e,\pm}
&= \pm\left(1, -\frac{\beta_s}{\delta_s}\right)\,\tilde{u}_{s,e}
\end{align}
and correspond to the points of closest approach between the branches of the
hyperbola \eqref{eq:quadratic_form_hyp}; these are again marked as black
crosses in Fig.\ \ref{fig:hyp-duo}(b) but are no longer associated with
critical points (we index these extremal points by `e'). Their distance
$\delta u^>$ is given by
\begin{equation}
\delta u^>=2|\delta\tilde{\bf R}^>_{s,e,\pm}|
= 2\sqrt{2\left(|\delta_s|+\frac{\beta_s^2}{|\delta_s|}\right)
\frac{\Cbar(\kappa_s - 1)}{|\gamma_s\delta_s - \beta_s^2|}},
\end{equation}
i.e., the smallest width in $\mathcal{U}_{\Rti}$ grows as $\propto\sqrt{\kappa_s - 1}$.
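As an illustration, the two scales $\delta v^<$ and $\delta u^>$ are easily
evaluated from the expansion coefficients; a minimal Python sketch (parameter
values assumed as in Fig.\ \ref{fig:hyp-duo}) reads
\begin{verbatim}
import numpy as np

# assumed coefficients (units of e_p and xi)
Cbar, a, alpha, beta, gamma = 0.25, 0.035, -0.025, 0.0, 0.68
delta = alpha - 2*a**2/Cbar         # Eq. (delta_s) with lambda_{+,s} = 0
D2 = abs(gamma*delta - beta**2)     # |gamma_s delta_s - beta_s^2|

for kappa_s in (0.99, 1.01):        # before / after merging
    if kappa_s < 1:                 # gap along v, cf. Eq. (vti_sc)
        vc = np.sqrt(2*gamma*Cbar*(1 - kappa_s)/D2)
        print('gap dv =', 2*vc*np.sqrt(1 + beta**2/gamma**2))
    else:                           # smallest width along u
        ue = np.sqrt(2*abs(delta)*Cbar*(kappa_s - 1)/D2)
        print('width du =', 2*ue*np.sqrt(1 + beta**2/delta**2))
\end{verbatim}
with both scales vanishing as the square root of the distance to the merger.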
As discussed above and shown in Fig.\ \ref{fig:hyp-duo}, the solutions of the
quadratic form \eqref{eq:quadratic_form_hyp} before and after merging are
unbounded for every value of $\kappa_s - 1$. As a consequence, neglecting the
higher order terms in the determinant $D(\tilde{\bf R})$ is valid only in a narrow
neighborhood of the saddle $\tilde{\bf R}_s$, where the boundaries of $\mathcal{U}_{\Rti}$ have the
shape of a hyperbola. Away from the saddle, these higher order terms are
relevant in determining the specific shapes of the unstable and bistable domains,
e.g., the ring-like structures of $\mathcal{U}_{\Rti}$ and $\mathcal{B}_{\Ras}$ in Figs.\
\ref{fig:quasi-ring-hyperbola} and \ref{fig:quasi-ring-hyperbola-merged}.
\subsubsection{Landing line $\mathcal{L}_\mathrm{\Rti}$}\label{sec:hyp_Lti}
To find the second bistable vortex tip configuration $\Rti_{s,\mathrm{lp}}$ associated to the
edges of $\mathcal{B}_{\Ras}$ before and after merging, we repeat the steps of Sec.\
\ref{sec:Lti}. For the jump vector $\Delta\tilde{\bf R}_s = \Rti_{s,\mathrm{lp}} -\Rti_{s,\mathrm{jp}}$, we find the
result
\begin{align}
&\Delta \tilde{u}_s(\tilde{v}) = -3\left(\gamma_s\, \uti_{s,\mathrm{jp}}(\tilde{v})
+ \beta_s\, \tilde{v}\right)/\gamma_s,\label{eq:delta_rxh}\\
&\Delta \tilde{v}_s(\tilde{v}) = - \left[a_s/(\Cbar + \lambda_{s,+})\right]\tilde{v}\,
\Delta \tilde{u}_s(\tilde{v}), \label{eq:delta_ryh}
\end{align}
%
cf.\ Eqs.\ \eqref{eq:delta_rx} and \eqref{eq:delta_ry} above. Here, we make
use of the parametrization for the jump coordinate $\uti_{s,\mathrm{jp}}(\tilde{v})$ in
\eqref{eq:uti_jp_hyp} before merging; after merging, the above result is still
valid but should be expressed in terms of the parametrization $\vti_{s,\mathrm{jp}}(\tilde{u})$ in
Eq.\ \eqref{eq:vti_jp_hyp}.
The landing positions $\Rti_{s,\mathrm{lp}} = \Rti_{s,\mathrm{jp}} + \Delta\tilde{\bf R}_s$ arrange along the
branches $\mathcal{L}_\mathrm{\Rti}$ of a hyperbola in $\tilde{\bf R}$-space that are described by the
matrix equation
\begin{equation}\label{eq:matrix_eq_lp_hyp}
\delta\Rti_{s,\mathrm{lp}}^\mathrm{T} M_{s,\mathrm{lp}}\, \delta\Rti_{s,\mathrm{lp}} = \Cbar (\kappa_s - 1),
\end{equation}
with the landing matrix now given by
\begin{equation}\label{eq:hyperbola_lp}
M_\mathrm{s,lp} = \frac{1}{4} M_{s,\mathrm{jp}} +
\begin{bmatrix} 0 & 0\\
0 & ~~~\displaystyle{\frac{3}{4}\Bigl(\frac{\delta_s}{2}
- \frac{\beta_s^2}{2\gamma_s}\Bigr)}
\end{bmatrix}
\end{equation}
with $\mathrm{det} M_\mathrm{s,lp} = (\gamma_s\delta_s -\beta_s^2)/16 < 0$.
Before merging, the vertices of the landing and jumping hyperbolas coincide
and the jump \eqref{eq:delta_rxh}--\eqref{eq:delta_ryh} vanishes at these
points. Moreover, as for the contact points \eqref{eq:contact_points} close
to onset of strong pinning, the tangent to the jumping and landing hyperbolas
at the vertices is parallel to the $u$-direction, as is visible in Fig.\
\ref{fig:hyp-duo}(a).
For $\kappa_s = 1$, the tips of $\mathcal{U}_{\Rti}$ merge and both the jumping and landing
hyperbolas coincide at $\tilde{\bf R}_s$. After merging, i.e., for $\kappa_s - 1 > 0$,
the condition $\Delta \tilde{u}_s = \Delta \tilde{v}_s = 0$ cannot be realized along the
hyperbola \eqref{eq:quadratic_form_hyp} and the jumping and landing lines
separate completely; as a result, both the jumping distance $\Delta\tilde{\bf R}_s$ as
well as the jump in energy $\Delta e_{\mathrm{pin}}$ are always finite (see also Appendix
\ref{sec:eff_1D_merging}). Indeed, after merging the landing hyperbola
\eqref{eq:matrix_eq_lp_hyp} has vertices
\begin{equation}
\delta\tilde{\bf R}_{s,v,\pm}
= \pm\left(1, -\frac{\gamma_s \beta_s}{(4\gamma_s \delta_s-3\beta_s^2)}\right)
\,\tilde{u}_{s,v},
\end{equation}
with
\begin{equation}
\tilde{u}_{s,v} = \sqrt{\frac{2\Cbar(\kappa_s-1) (4\gamma_s\delta_s-3\beta_s^2)}
{\gamma_s(\gamma_s\delta_s -\beta_s^2)}}
\end{equation}
different from those of the jumping hyperbola in \eqref{eq:hyp_vertices_merged}.
At these points, the tangents to the jumping and landing hyperbolas are
parallel to the $v$-direction, as is visible in Fig.\ \ref{fig:hyp-duo}(b).
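The geometric relation between jump and landing lines is conveniently
cross-checked numerically. The following minimal Python sketch (an
illustration with assumed parameters as in Fig.\ \ref{fig:hyp-duo}, before
merging) constructs landing points from jump points via
\eqref{eq:delta_rxh}--\eqref{eq:delta_ryh} and tests the landing-hyperbola
equation \eqref{eq:matrix_eq_lp_hyp},
\begin{verbatim}
import numpy as np

# assumed coefficients (units of e_p and xi), before merging
Cbar, lam_p, kappa_s = 0.25, 0.0, 0.99
a, alpha, beta, gamma = 0.035, -0.025, 0.0, 0.68
delta = alpha - 2*a**2/(Cbar + lam_p)

# points on the upper branch of the jump hyperbola, Eq. (uti_jp_hyp)
vc = np.sqrt(2*gamma*Cbar*(1 - kappa_s)/abs(gamma*delta - beta**2))
v = np.linspace(vc, 1.2*vc, 5)
u_jp = -(beta*v + np.sqrt(2*gamma*Cbar*(kappa_s - 1)
                          - (gamma*delta - beta**2)*v**2))/gamma

# jump vector and landing points, Eqs. (delta_rxh)-(delta_ryh)
du = -3*(gamma*u_jp + beta*v)/gamma
dv = -(a/(Cbar + lam_p))*v*du
u_lp, v_lp = u_jp + du, v + dv

# landing points satisfy the landing-hyperbola equation to leading order
m22 = delta/8 + 0.75*(delta/2 - beta**2/(2*gamma))   # lower entry of M_lp
res = (gamma/8)*u_lp**2 + 2*(beta/8)*u_lp*v_lp + m22*v_lp**2
print(res/(Cbar*(kappa_s - 1)))     # ~1 (exact at the vertex)
\end{verbatim}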
In Sec.\ \ref{sec:topology_hyp} below, we will take a step back from
the local analysis of the unstable domain $\mathcal{U}_{\Rti}$ close to a saddle point
$\tilde{\bf R}_s$ and consider the evolution of its geometry across the merging
transition from a global perspective using specific examples. Elaborating on
the analysis of Sec.\ \ref{sec:topology}, we will provide a simple argument
explaining the absence of contact points between jump and landing lines after
merging. Furthermore, we discuss the two possible roles of mergers as changing
the number of components of $\mathcal{U}_{\Rti}$ or changing the connectivity of $\mathcal{U}_{\Rti}$
between simply and non-simply connected areas. Before doing so, we discuss
the behavior of the bistable region $\mathcal{B}_{\Ras}$ close to merging.
\subsection{Bistable domain $\mathcal{B}_{\Ras}$}\label{sec:hyp_Bas}
\begin{figure}[t]
\includegraphics[width = 1.\columnwidth]{figures/hyp_bananas_duo.pdf}
\caption{Bistable domain $\mathcal{B}_{\Ras}$ in asymptotic space $\bar{\bf R}$ before (a)
and after (b) merging, for $1 - \kappa_s=\pm0.01$ and parameters as in
Fig.\ \ref{fig:hyp-duo}. (a) Before merging, the bistable domain $\mathcal{B}_{\Ras}$
consists of two parts, corresponding to the two unstable regions $\mathcal{U}_{\Rti}$ in
Fig.\ \ref{fig:hyp-duo}(a). These terminate in the cusps at
$\bar{\bf R}_{s,c,\pm}^<$ that approach one another along the dashed parabola
\eqref{eq:x_0_line_hyp} to merge at $\kappa_s = 1$. Red/blue colors
indicate different vortex configurations as quantified through the
order parameter $\tilde{u} - \tilde{u}_m(\bar{v})$, while magenta is associated to
the bistable region $\mathcal{B}_{\Ras}$. Colored dots mark the asymptotic positions
associated to the pairs of jump positions in Fig.\
\ref{fig:hyp-duo}(a). (b) After merging, the bistable domain is
continuously connected; the cusps/critical points have vanished and
the dashed parabola turns into the branch crossing line. The black
crosses now mark the positions of strongest pinching of $\mathcal{B}_{\Ras}$, the
colored dots mark the asymptotic positions associated to the pairs of
tip positions in Fig.\ \ref{fig:hyp-duo}(b).}
\label{fig:hyp-bananas-duo}
\end{figure}
The set of asymptotic positions corresponding to $\mathcal{U}_{\Rti}$ before and after
merging, i.e., the bistable domain $\mathcal{B}_{\Ras}$, can be found by systematically
repeating the steps in Sec.\ \ref{sec:Bas}. Applying the force balance
equation $\nabla_\mathbf{R} e_\mathrm{pin}(\mathbf{R};\bar{\bf R})\Big|_{\tilde{\bf R}}=0$ to
the energy expansion \eqref{eq:e_pin_expans_hyp}, we find the counterpart of
Eqs.\ \eqref{eq:asymptotic_positions},
\begin{align}\nonumber
\Cbar \bar{u} &= \Cbar(1-\kappa_s) \tilde{u} + \frac{a_s}{2}\tilde{v}^2
+ \frac{\gamma_s}{6}\tilde{u}^3 + \frac{\beta_s}{2}\tilde{u}^2 \tilde{v}
+ \frac{\alpha_s}{2}\tilde{u} \tilde{v}^2,\\
\Cbar \bar{v} &= (\Cbar + \lambda_{s,+}) \tilde{v} + a_s\,\tilde{u} \tilde{v} + \frac{\beta_s}{6}\tilde{u}^3
+ \frac{\alpha_s}{2}\tilde{u}^2 \tilde{v},
\label{eq:asymptotic_positions_hyp}
\end{align}
relating tip and asymptotic positions close to merging. As for the unstable
domain, the topology of $\mathcal{B}_{\Ras}$ depends on the sign of $1- \kappa_s$. The
bistable domain $\mathcal{B}_{\Ras}$ before merging is shown in Fig.\
\ref{fig:hyp-bananas-duo}(a) for $1 - \kappa_s = 0.01$. It
consists of two parts, corresponding to the two pieces of $\mathcal{U}_{\Rti}$ for $1 - \kappa_s
> 0$, that terminate at the cusps $\bar{\bf R}_{s,c,\pm}^<$. The latter are related
to the vertices $\tilde{\bf R}_{s,c,\pm}^<$ of the jumping hyperbola through the force
balance equation \eqref{eq:asymptotic_positions_hyp},
\begin{equation}\label{eq:cusps_hyp}
\delta\bar{\bf R}_{s,c,\pm}^< \approx \left[\left(a_s/2\,\Cbar\right)\,\tilde{v}^2_{s,c},\,
\pm\left(1 + \lambda_{s,+}/\Cbar\right)\tilde{v}_{s,c}\right].
\end{equation}
For finite values of $(1-\kappa_s)$, the cusps are separated by a distance
$2|\delta{\bar{\bf R}}_{s,c,\pm}^<|\approx 2\left(1 + \lambda_{s,+} /\Cbar\right) \tilde{v}_{s,c} \propto
\sqrt{1-\kappa_s}$. They approach one another along the parabola
\begin{equation}
\bar{u}_{s,0} \approx \frac{a_s}{2\Cbar}\frac{1}{(1+ \lambda_{s,+}/\Cbar)^2} \bar{v}_{s,0}^2,
\label{eq:x_0_line_hyp}
\end{equation}
see the black dashed line in Fig.\ \ref{fig:hyp-bananas-duo}, with
higher-order corrections appearing at finite skew $\beta_s \neq 0$.
After merging, this line lies within $\mathcal{B}_{\Ras}$ and defines the branch
crossing line, cf.\ Eq.\ \eqref{eq:x_0_line}.
After merging, when $\kappa_s - 1 > 0$, the cusps have vanished and the edges
have rearranged to define a connected bistable region, see Fig.\
\ref{fig:hyp-bananas-duo}(b). The extremal points of the two edges are found
by evaluating the force balance equation \eqref{eq:asymptotic_positions_hyp}
at the vertices $\tilde{\bf R}_{s,e,\pm}^>$, Eq.\ \eqref{eq:hyp_vertices_merged}, to
lowest order,
\begin{equation}\label{eq:cusps_hyp_merged}
\delta\bar{\bf R}_{s,e,\pm}^> \approx \frac{\beta_s}{\delta_s}
\left[\frac{a_s}{2\,\Cbar}\frac{\beta_s}{\delta_s}\,\tilde{u}_{s,e}^2,\,
\mp\left(1 + \frac{\lambda_{s,+}}{\Cbar}\right)\,\tilde{u}_{s,e}\right].
\end{equation}
For finite values of $(\kappa_s - 1)$, these points are separated by a
distance $2|\delta\bar{\bf R}_{s,e,\pm}^>|\approx 2\left(1 + \lambda_{s,+} / \Cbar\right)
(\beta_s/\delta_s)\tilde{u}_{s,e}\propto\sqrt{\kappa_s -1}$. Note that the extremal
points $\bar{\bf R}_{s,e,\pm}^>$ are no longer associated to cusps or critical points
as these have disappeared in the merging process. When the skew parameter
vanishes as in Fig.\ \ref{fig:hyp-bananas-duo}, $\beta_s = 0$, higher-order
terms in $(\kappa_s - 1)$ in the force-balance equation
\eqref{eq:asymptotic_positions_hyp} become relevant in determining the
positions $\bar{\bf R}_{s,e,\pm}^>$, separating them along the unstable
$u$-direction. In this case, we obtain a different scaling for their distance,
i.e., $|\delta\bar{\bf R}_{s,e,\pm}^>| \propto\left(\kappa_s - 1\right)^{3/2}$.
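The force-balance map \eqref{eq:asymptotic_positions_hyp} is equally simple to
trace numerically; a minimal Python sketch (assumed parameters as before)
images the upper branch of the jump line into asymptotic space, where it
follows one edge of $\mathcal{B}_{\Ras}$ terminating in the cusp \eqref{eq:cusps_hyp},
\begin{verbatim}
import numpy as np

# assumed coefficients (units of e_p and xi), before merging
Cbar, lam_p, kappa_s = 0.25, 0.0, 0.99
a, alpha, beta, gamma = 0.035, -0.025, 0.0, 0.68
delta = alpha - 2*a**2/(Cbar + lam_p)

# upper branch of the jump hyperbola in tip space
vc = np.sqrt(2*gamma*Cbar*(1 - kappa_s)/abs(gamma*delta - beta**2))
v = np.linspace(vc, 2*vc, 200)
u = -(beta*v + np.sqrt(2*gamma*Cbar*(kappa_s - 1)
                       - (gamma*delta - beta**2)*v**2))/gamma

# force-balance map (u, v) -> (ubar, vbar), Eq. (asymptotic_positions_hyp)
ubar = (1 - kappa_s)*u + (0.5*a*v**2 + gamma/6*u**3
                          + 0.5*beta*u**2*v + 0.5*alpha*u*v**2)/Cbar
vbar = ((Cbar + lam_p)*v + a*u*v + beta/6*u**3 + 0.5*alpha*u**2*v)/Cbar

print(ubar[0], vbar[0])   # cusp position, cf. Eq. (cusps_hyp)
\end{verbatim}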
\subsection{Topological aspect of mergers}\label{sec:topology_hyp}
In order to discuss the topological aspect of a merger, it is convenient to
consider some specific examples. In Sec.\ \ref{sec:uniax_defect}, we have
analyzed the case of a uniaxial defect with a quadrupolar anisotropy $\delta
e_p \propto \epsilon \sin^2\tilde{\phi}$ in the pinning potential, see \eqref{eq:dep_quad},
which produced a degenerate onset at the symmetric points $[\pm\tilde{x}_m,0]$. Here, we
choose again a weakly anisotropic defect centered at the origin but with a dipolar
deformation $\delta e_p \propto \epsilon \cos\tilde{\phi}$ that results in an angle-dependent
Labusch parameter
\begin{equation}\label{eq:kappa_dip}
\kappa_m(\tilde{\phi}) = \kappa_m - \epsilon\cos\tilde{\phi},
\end{equation}
cf.\ Eq.\ \eqref{eq:kappa_quad}. The strong pinning onset of such a defect then
appears at an isolated point on the negative $x$-axis, with the unstable
ellipse $\mathcal{U}_{\Rti}$ deforming with increasing $\kappa_m$ into a horseshoe that is
open on the positive $x$-axis---the closing of the horseshoe to produce a
ring, see Fig.\ \ref{fig:top_horseshoe}, then corresponds to the local merger
shown in Fig.\ \ref{fig:hyp-duo}. With this example in mind, we can repeat the
discussion in Sec.\ \ref{sec:topology}: The unstable eigenvector
$\mathbf{v}_-(\mathbf{R}_\mathrm{jp})$ points radially outwards from the
origin over the entire horseshoe, including the merging region at positive
$x$. On the other hand, the tangent to the boundary $\partial\mathcal{U}_{\Rti}$ rotates
forward and back along the horseshoe as shown in Fig.\ \ref{fig:top_horseshoe}
(we attribute a direction to $\partial\mathcal{U}_{\Rti}$ with the convention of following
the boundary with the unstable region on the left); in fact, over most of the
boundary, the tangent is simply orthogonal to $\mathbf{v}_-$, with both
vectors rotating together when going along $\partial\mathcal{U}_{\Rti}$. At the ends of the
horseshoe, however, the tangent locally aligns parallel (anti-parallel) to
$\mathbf{v}_-$ and the two vectors rotate (anti-clockwise) with respect to one
another, with the total winding equal to $2\pi$. After the merger, this
winding has disappeared, with the resulting ring exhibiting no winding in the
tangent fields on the inner/outer boundary; as a result, the contact points
between the jump and landing lines have disappeared.
Furthermore, the merger changes the topology of $\mathcal{U}_{\Rti}$ from the
simply-connected horseshoe to the non-simply connected ring, while the number
of components in $\mathcal{U}_{\Rti}$ has not changed. Note that the change in the relative
winding is not due to crossing the singularity of the vector field
$\mathbf{v}_-$ as alluded to in Sec.\ \ref{sec:topology}---rather, it is the
merger of the horseshoe tips that rearranges the boundaries of $\mathcal{U}_{\Rti}$ and makes
them encircle the singularity.
\begin{figure}
\includegraphics[width = 1.\columnwidth]{figures/horseshoe.pdf}
\caption{Left: Unstable region $\mathcal{U}_{\Rti}$ for a defect with dipolar asymmetry.
Upon the onset of strong pinning, an unstable ellipse appears to the
left of the defect center (black solid dot). With increasing pinning
strength (decreasing $\Cbar$) the ellipse grows and deforms into a
horseshoe geometry. The unstable eigenvector field $\mathbf{v}_-$ (red
arrows) points radially outward away from the defect center. The
tangent field to the boundary $\partial\mathcal{U}_{\Rti}$ (black arrows) follows
the unstable direction at an angle of $\pi/2$ over most of
$\partial\mathcal{U}_{\Rti}$, with the exception of the two turning points where the
tangent rotates by $\pi$ with respect to $\mathbf{v}_-$, producing a
relative winding of $2\pi$. Right: After the merger of the turning
points the unstable region $\mathcal{U}_{\Rti}$ changes topology and assumes the
shape of a ring. The windings of the tangent field with respect to the
eigenvector-field $\mathbf{v}_-$ vanish separately for both
boundaries of $\mathcal{U}_{\Rti}$.
}
\label{fig:top_horseshoe}
\end{figure}
In the above example, we have discussed a merger that changes the
connectedness of $\mathcal{U}_{\Rti}$. On the other hand, as we are going to show, a merger
might leave the connectedness of $\mathcal{U}_{\Rti}$ unchanged, while modifying the number
of components, i.e., the number of disconnected parts, in $\mathcal{U}_{\Rti}$. Let us
again consider a specific example in the form of an anisotropic defect with a
warped well shape, producing several (in general successive) onsets and
mergers; in Fig.\ \ref{fig:top_three}, we consider a situation with three
onset points and subsequent individual mergers. After the onset, the three
ellipses define an unstable region $\mathcal{U}_{\Rti}$ with three disconnected parts that
are each simply connected. This configuration is characterized by the number
of components, $C = 3$. As two of the three ellipses merge, the
number of components of $\mathcal{U}_{\Rti}$ reduces to $C = 2$, the next merger generates a
horseshoe that is still simply-connected with $C = 1$. The final merger
produces a ring; while the number of components remains unchanged, $C = 1$,
the unstable area assumes a non-simply connected shape with a `hole'; we
associate the index $H = 1$ with the appearance of this hole within $\mathcal{U}_{\Rti}$. In
physics terms, the last merger producing a hole in $\mathcal{U}_{\Rti}$ is associated with
the appearance of a pinned state; the unstable ring separates stable tip
positions that are associated with pinned and free vortex configurations
residing at small and large radii, respectively.
\begin{figure}[t]
\includegraphics[width = 1.\columnwidth]{figures/top_three.pdf}
\caption{The unstable domain $\mathcal{U}_{\Rti}$ starting out with $C = 3$ components in
(a) changes topology in three steps: after the first (b) and second
(c) mergers the number of components $C$ has changed from three in (a)
to two in (b) to one in (c), leading to a horseshoe shape of $\mathcal{U}_{\Rti}$.
The third merger closes the horseshoe to produce the ring
geometry in (d) characterized by the coefficients $C = 1$ and $H = 1$ ($H$
denotes the number of `holes' in $\mathcal{U}_{\Rti}$); the Euler characteristic
$\chi = C - H$ changes by unity in every merger. }
\label{fig:top_three}
\end{figure}
Defining the (topological) characteristic $\chi \equiv C - H$, we see that
$\chi$ changes by unity at every onset and merger, either through an increase
(for an onset) or decrease (for a merger) in the number of components $C \to
C\pm 1$, or through the appearance of a hole (in a merger) $H \to H+1$.
Indeed, the quantity $\chi$ is known as the Euler characteristic of a manifold
and describes its global topological properties; it generalizes the well-known
Euler characteristic of a polyhedron to surfaces and manifolds\cite{Nakahara_2003}, see Sec.\
\ref{sec:2D_landscape} below. Finally, Morse theory \cite{NashSen_2011}
connects the Euler characteristic with the local differential properties
(minima, maxima, saddles) of that manifold, hence establishing a connection
between local onsets and mergers (at minima and saddles of $D(\tilde{\bf R})$) and the
global properties of $\mathcal{U}_{\Rti}$ such as the appearance of new pinned states. In
Sec.\ \ref{sec:2D_landscape} below, we consider the general case of a random
pinning landscape in two dimensions and discuss the connection between local
differential and global topological properties of $\mathcal{U}_{\Rti}$ in the light of Morse
theory---the topology of bistable domains $\mathcal{B}_{\Ras}$ then follows trivially.
\section{$\mathcal{U}_{\Rti}$ of a two-dimensional pinscape}\label{sec:2D_landscape}
We consider a two-dimensional pinning landscape $e_p(\mathbf{R})$, e.g., as
produced by a superposition of several (anisotropic Lorentzian) defects
residing in the $z = 0$ plane. In Figs.\ \ref{fig:3_defects} and
\ref{fig:2_defects_maxima}, we analyse two specific cases with $n = 3$ and $n
= 2$ defects as given in Eq.\ \eqref{eq:uniax_potential_formal} with $\epsilon
= 0.1$ and positions listed in Tables \ref{table1} and \ref{table2}; these
produce unstable landscapes $\mathcal{U}_{\Rti}$ of already considerable complexity, see
Figs.\ \ref{fig:3_defects}(a) and \ref{fig:2_defects_maxima}(a). Our defects
are compact with $e_p(\mathbf{R}) \to 0$ vanishing at $R \to \infty$; as a
result, $e_{\mathrm{pin}}$ becomes flat at infinity. Note that a dense assembly of
uniformly distributed individual defects produces a random Gaussian pinning
landscape, as has been shown in Ref.\ \onlinecite{Willa_2022}.
Here, we are interested in the evolution of the unstable and bistable domains
$\mathcal{U}_{\Rti}$ and $\mathcal{B}_{\Ras}$ associated with the 2D pinning landscape $e_{\mathrm{pin}}$; we focus
on the unstable domain $\mathcal{U}_{\Rti}$, with the properties of the bistable domain
$\mathcal{B}_{\Ras}$ following straightforwardly from the solution of the force balance
equation \eqref{eq:force_balance}. Unlike the analysis above that is centered
on special points of $\mathcal{U}_{\Rti}$, ellipses near onset and hyperbolas near mergers,
here, we are interested in the global properties of the unstable region
produced by a generic (though still two-dimensional) pinscape.
As discussed in Sec.\ \ref{sec:arb_shape} above, the unstable region $\mathcal{U}_{\Rti}$
associated with strong pinning is determined by the condition $D(\tilde{\bf R}) = 0$ of
vanishing Hessian determinant, more precisely, by the competition between the
lowest eigenvalue $\lambda_-(\tilde{\bf R})$ of the Hessian matrix $\mathrm{H}_{ij}$ of
the pinning potential $e_p(\mathbf{R})$ and the effective elasticity $\Cbar$,
see Eq.\ \eqref{eq:def_calU}. In order to avoid interference with the
second eigenvalue $\lambda_+(\tilde{\bf R})$ of the Hessian matrix, we consider the
shifted (by $\Cbar$) curvature function
\begin{equation}\label{eq:def_Lambda}
\Lambda_{\Cbar}(\tilde{\bf R}) \equiv \Cbar + \lambda_-(\tilde{\bf R}),
\end{equation}
i.e., the relevant factor of the determinant $D(\tilde{\bf R}) = [\Cbar +
\lambda_-(\tilde{\bf R})] [\Cbar + \lambda_+(\tilde{\bf R})]$. The condition
\begin{equation}\label{eq:vanish_Lambda}
\Lambda_{\Cbar}(\tilde{\bf R}) = 0
\end{equation}
then determines the boundaries of $\mathcal{U}_{\Rti}$.
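In practice, the curvature function \eqref{eq:def_Lambda} is easily evaluated
numerically for a given pinscape. The following minimal Python sketch (an
illustration that, for simplicity, replaces the anisotropic defects of Eq.\
\eqref{eq:uniax_potential_formal} by isotropic Lorentzian wells at the
positions of Table \ref{table2}; the value of $\Cbar$ is an assumption)
computes $\lambda_-(\tilde{\bf R})$ from a finite-difference Hessian and flags the
unstable domain,
\begin{verbatim}
import numpy as np

def e_p(x, y):
    """Toy pinscape: two isotropic Lorentzian wells standing in for the
    anisotropic defects of the text."""
    e = 0.0
    for x0, y0 in [(-1.32, 0.33), (1.48, -0.76)]:
        e += -1.0/(1.0 + (x - x0)**2 + (y - y0)**2)
    return e

def lambda_minus(x, y, h=1e-3):
    """Lower eigenvalue of the Hessian of e_p via finite differences."""
    exx = (e_p(x + h, y) - 2*e_p(x, y) + e_p(x - h, y))/h**2
    eyy = (e_p(x, y + h) - 2*e_p(x, y) + e_p(x, y - h))/h**2
    exy = (e_p(x + h, y + h) - e_p(x + h, y - h)
           - e_p(x - h, y + h) + e_p(x - h, y - h))/(4*h**2)
    return 0.5*(exx + eyy) - np.sqrt((0.5*(exx - eyy))**2 + exy**2)

Cbar = 0.2   # assumed elasticity; decreasing Cbar floods the landscape
x, y = np.meshgrid(np.linspace(-3, 3, 201), np.linspace(-3, 3, 201))
Lam = Cbar + lambda_minus(x, y)     # shifted curvature function
print((Lam < 0).sum())              # grid points inside the unstable domain
\end{verbatim}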
\begin{table}
\caption{\label{table1} Positions and relative weights of 3 uniaxially
anisotropic Lorentzian defects in Fig.\ \ref{fig:3_defects} as given by Eq.\
\eqref{eq:uniax_potential_formal}.}
\vskip 3 pt
\begin{tabular}{l | c c c}
& $~x/\xi$ & ~~$y/\xi$ & ~~weight\\
\hline
defect~\#1~~ & $1.14$ & $1.07$ & 0.65\\
defect~\#2~~ & $-0.98$ & $-0.19$ & 1\\
defect~\#3~~ & $0.20$ & $-0.67$ & 1 \\
\end{tabular}
\end{table}
\begin{table}
\caption{\label{table2} Positions and relative weights of 2 uniaxially
anisotropic Lorentzian defects in Fig.\ \ref{fig:2_defects_maxima} as given by
Eq.\ \eqref{eq:uniax_potential_formal}.}
\vskip 3 pt
\begin{tabular}{l | c c c}
& $~x/\xi$ & ~~$y/\xi$& ~~weight \\
\hline
defect~\#1~~ & $-1.32$ & $0.33$ & 1\\
defect~\#2~~ & $1.48$ & $-0.76$ & 1\\
\end{tabular}
\end{table}
The above problem can be mapped to the problem of cutting a surface, where
$\Lambda_{\Cbar}(\tilde{\bf R})$ is interpreted as a height-function over
$\mathbb{R}^2$ that is cut at zero level; the elasticity $\Cbar$ then plays
the role of a shift parameter that moves the surface $\Lambda_{\Cbar}(\tilde{\bf R})$
downwards in height with decreasing $\Cbar$ (which corresponds to increasing
the relative pinning strength of the pinscape in physical terms). As $\Cbar$
is decreased to the point where it compensates the absolute {\it minimum} of
$\lambda_-(\tilde{\bf R}) < 0$, i.e., $\Cbar + \lambda_-(\tilde{\bf R}_m) = 0$, strong pinning
sets in locally at $\tilde{\bf R}_m$ for the first time in the form of an unstable
ellipse $\mathcal{U}_{\Rti}$, see Fig.\
\ref{fig:3_defects}(b) for our specific example with three defects; the
Labusch parameter $\kappa(\tilde{\bf R})$ evaluated at the point $\tilde{\bf R}_m$ defines
$\kappa_m$, the parameter tuned in Fig.\ \ref{fig:3_defects}. Decreasing
$\Cbar$ further, this ellipse grows and deforms, while other local {\it
minima} of $\lambda_-(\tilde{\bf R})$ produce new disconnected parts of $\mathcal{U}_{\Rti}$, a
situation illustrated in Fig.\ \ref{fig:3_defects}(c) where four `ellipses'
have appeared around (local) minima (blue filled dots). A further increase in
pinning strength (decrease in $\Cbar$) continuous to deform these `ellipses'
and adds three new ones. As the first {\it saddle} drops below the zero level
(red cross), two components merge and the number of components decreases; in
Fig.\ \ref{fig:3_defects}(d), we have three below-zero saddles and only
four components remain, $C = 4$. In Fig.\ \ref{fig:3_defects}(e), four
further {\it saddles} have dropped below zero level; three of the associated
mergers reduce $C$ from four to one, while the fourth produces a single
non-simply connected component, i.e., $C = 1$ with a hole, increasing the
number of holes $H$ from zero to one.
The last merger leading to (f) finally leaves $C = 1$ but cuts the stable
region inside the ring into two, increasing the number of holes to $H = 2$.
\begin{figure}[t]
\includegraphics[width = 1.\columnwidth]{figures/3_defects_horizontal.pdf}
\caption{(a) Grayscale image of the pinning potential landscape
$e_p(\tilde{\bf R})$, with the three diamonds marking the positions of the
defects. (b)--(f) Shifted curvature function $\Lambda_{\Cbar}(\tilde{\bf R})$
versus tip position $\tilde{\bf R}$ for increasing values of $\kappa_m$
(decreasing $\Cbar$) as we proceed from (b) to (f). We make use of
the topographic interpretation with positive values of
$\Lambda_{\Cbar}$ marked as landmass (greenish colors, with low/high
elevation in dark/light green) and negative values of
$\Lambda_{\Cbar}$ constituting $\mathcal{U}_{\Rti}$ in flat light blue (height
levels are shown by thin black lines). The pinscape in (a) produces a
curvature landscape with $7$ minima (solid dots), $4$ maxima (open
dots), and $10$ saddles (crosses). Several unstable regions $\mathcal{U}_{\Rti}$
appear (solid dots turn blue) and merge (crosses turn red) to change
the topology of $\mathcal{U}_{\Rti}$. The Euler characteristic $\chi(\mathcal{U}_{\Rti}) = m - s
+ M = 1 - 0 + 0 = 1$ in (b) changes to $\chi(\mathcal{U}_{\Rti}) = 4$ in (c) and
(d), drops to $\chi(\mathcal{U}_{\Rti}) = 0$ in (e) and $\chi(\mathcal{U}_{\Rti}) = -1$ in (f); indeed,
$\mathcal{U}_{\Rti}$ in (f) has one component $C = 1$ and two holes $H = 2$,
reproducing $\chi(\mathcal{U}_{\Rti}) = C - H = -1$.
}
\label{fig:3_defects}
\end{figure}
This sequence of onsets and mergers is conveniently described in the
topographic language introduced in Sec.\ \ref{sec:uniax_defect} that
interprets stable tip regions as land mass (green with bright regions
indicating higher mountains in Fig.\ \ref{fig:3_defects}) and unstable regions
as lakes (flat blue with (below-water) height levels indicated by thin black
lines), with the height $\Lambda_{\Cbar} = 0$ defining the water level. The
sequence (b) to (f) then shows the flooding of the landscape as pinning
increases ($\Cbar$ decreasing), with white dot minima turning blue at strong
pinning onsets and white cross saddles turning red at mergings; maxima in the
landscape are shown as black open circles. Note that we distinguish critical
points (minima, saddles) residing below (blue and red) and above (white) water
level. Similarly, a (local) maximum above sea level (black open dot) turns
into a blue open dot as it drops below sea level; such an event is missing in
Fig.\ \ref{fig:3_defects} but can be produced with other configurations of
defects, see Fig.\ \ref{fig:2_defects_maxima} where the curvature landscape
for two defects is shown.
The above discussion relates the local differential properties of the function
$\Lambda_{\Cbar}(\tilde{\bf R}) < 0$, minima and saddles, to the global topological
properties of $\mathcal{U}_{\Rti}$, its number of components $C(\mathcal{U}_{\Rti})$ and holes $H(\mathcal{U}_{\Rti})$.
This connection between local and global properties is conveniently discussed
within Morse theory \cite{NashSen_2011}. Before presenting a general
mathematical formulation, let us discuss a simple heuristic argument producing
the result relevant in the present context; in doing so, we make use of the
above topographic language.
Starting with the {\it minima} of the function $\Lambda_{\Cbar}(\tilde{\bf R})$, a new
disconnected component appears in $\mathcal{U}_{\Rti}$ whenever the minimum drops below sea
level as $\Cbar$ is decreased, producing an increase $C \to C+1$. With the
further decrease of $\Cbar$, these disconnected regions expand and merge
pairwise whenever a {\it saddle} point of $\Lambda_{\Cbar}(\tilde{\bf R})$ goes below
sea level, thereby inducing a change in the topology of $\mathcal{U}_{\Rti}$ by either
reducing the number of components $C \to C-1$ (keeping $H$ constant) or
leaving it unchanged (changing $H \to H + 1$), see, e.g., the example with the
horseshoe closing up on itself in Sec.\ \ref{sec:topology_hyp}. The below
sea-level minima and saddles of $\Lambda_{\Cbar}(\tilde{\bf R})$ can naturally be
identified with the vertices and edges of a graph; the edges in the graph then
define the boundaries of the graph's faces (the same way as the vertices are
the boundaries of the edges). For a connected graph, Euler's formula then
tells us that the number $V$ of vertices, $E$ of edges, and $F$ of faces are
constrained via $V - E + F = 1$ (not counting the outer face extending to
infinity) and a graph with $C$ components satisfies the relation $C = V - E +
F$ as follows from simple addition.
We have already identified minima and saddles of $\Lambda_{\Cbar}(\tilde{\bf R}) < 0$
with vertices and edges of a graph; denoting the number of below sea-level
minima and saddles by $m$ and $s$, we have $V = m$ and $E = s$. It remains to
express the number $F$ of faces in terms of critical points of the surface
$\Lambda_{\Cbar}(\tilde{\bf R}) < 0$. Indeed, the faces of our graph are associated
with maxima of the function $\Lambda_{\Cbar}(\tilde{\bf R})$: following the boundaries
of a face, we cross the corresponding saddles with the function
$\Lambda_{\Cbar}(\tilde{\bf R})$ curving upwards away from the edges, implying that the
faces of our graph include maxima of $\Lambda_{\Cbar}(\tilde{\bf R})$. These maxima
manifest in two possible ways: either the face contains a single below
sea-level maximum or a single above sea-level landscape. The above sea-level
landscape comprises at least one maximum but possibly also includes other
extremal points that we cannot analyse with our knowledge of the below
sea-level function $\Lambda_{\Cbar}(\tilde{\bf R}) < 0$ only; we therefore call the
above sea-level landscape a (single) hole. The appearance of a {\it single}
maximum or hole is owed to the fact that faces are not split by a below
sea-level saddle as these have already been accounted for in setting up the
graph.
Let us denote the number of (below sea-level) maxima by $M$ and the number of
holes by $H$, then $F = H + M$. Combining this last expression with Euler's
formula and regrouping topological coefficients $C(\mathcal{U}_{\Rti})$ and $H(\mathcal{U}_{\Rti})$ on one
side and extremal points $m[\Lambda_{\Cbar}(\tilde{\bf R})]$, $s[\Lambda_{\Cbar}(\tilde{\bf R})]$,
and $M[\Lambda_{\Cbar}(\tilde{\bf R})]$ on the other, we arrive at the Euler
characteristic $\chi \equiv C- H$ and its representation through local
differential properties,
\begin{equation}\label{eq:def_chi_Euler_Morse}
\chi(\mathcal{U}_{\Rti}) \equiv [C - H]_{\mathcal{U}_{\Rti}} = [m - s + M]_{\Lambda_{\Cbar}(\tilde{\bf R}) < 0}.
\end{equation}
The result \eqref{eq:def_chi_Euler_Morse} follows rigorously from the
Euler-Poincar\'e theorem\cite{Nakahara_2003, NashSen_2011} in combination with
Morse's theorem \cite{NashSen_2011}, with the former expressing the Euler
characteristic $\chi(\mathcal{U}_{\Rti})$ through the so-called Betti numbers $b_i(\mathcal{U}_{\Rti})$,
\begin{equation}\label{eq:def_chi_Betti}
\chi(\mathcal{U}_{\Rti}) \equiv \sum_{i=0}^{2} (-1)^i b_i(\mathcal{U}_{\Rti}),
\end{equation}
where the $i$-th Betti number $b_i(\mathcal{U}_{\Rti}) = \mathrm{Dim}[H_i(\mathcal{U}_{\Rti})]$ is given
by the dimension or rank of the $i$-th (singular) homology group $H_i(\mathcal{U}_{\Rti})$.
In colloquial terms, the Betti numbers $b_i$ count the number of `holes' in
the manifold with different dimensions $i$: the zeroth Betti number gives the
number of components $b_0 = C$ of $\mathcal{U}_{\Rti}$, the first Betti number $b_1 = H$
counts the holes, and the second Betti number refers to cavities, here $b_2 =
0$ for our open manifold. Hence, we find that the Euler characteristic is
given by the number of components and holes in $\mathcal{U}_{\Rti}$,
\begin{equation}\label{eq:chi_CH}
\chi(\mathcal{U}_{\Rti}) = C(\mathcal{U}_{\Rti}) - H(\mathcal{U}_{\Rti}),
\end{equation}
in agreement with the discussion in Sec.\ \ref{sec:topology_hyp} and
\eqref{eq:def_chi_Euler_Morse}.
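Both sides of \eqref{eq:def_chi_Euler_Morse} are readily verified on a
synthetic example. The Python sketch below (a minimal illustration; the tilted
Mexican-hat profile is an assumed stand-in for $\Lambda_{\Cbar}$) possesses one
below sea-level minimum, one below sea-level saddle, and no below sea-level
maximum, i.e., $m - s + M = 0$; a flood-fill count of components and holes of
the sublevel set indeed returns $C = 1$ and $H = 1$, hence $\chi = 0$,
\begin{verbatim}
import numpy as np
from scipy import ndimage

# synthetic curvature function: a tilted Mexican hat whose sublevel set
# Lam < 0 is a ring-shaped 'lake' enclosing a central 'island'
x, y = np.meshgrid(np.linspace(-2, 2, 401), np.linspace(-2, 2, 401))
Lam = (x**2 + y**2 - 1.0)**2 - 0.2 + 0.1*x

unstable = Lam < 0
C = ndimage.label(unstable)[1]      # number of components ('lakes')

lab, n = ndimage.label(~unstable)   # components of the stable 'landmass'
edge = set(lab[0, :]) | set(lab[-1, :]) | set(lab[:, 0]) | set(lab[:, -1])
H = n - len(edge - {0})             # landmass pieces not touching the frame
print(C, H, C - H)                  # -> 1 1 0, i.e., chi = C - H = 0
\end{verbatim}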
\begin{figure}[t]
\includegraphics[width = 1.\columnwidth]{figures/2_defects_horizontal.pdf}
\caption{(a) Grayscale image of the pinning potential landscape
$e_p(\tilde{\bf R})$, with the two diamonds marking the positions of the
defects. (b)--(f) Shifted curvature function $\Lambda_{\Cbar}(\tilde{\bf R})$
(in topographic coloring, see caption of Fig.\ \ref{fig:3_defects})
versus tip position $\tilde{\bf R}$ for increasing values of $\kappa_m$ as we
proceed from (b) to (f). The pinscape in (a) produces a curvature
landscape with $6$ minima (solid dots), $4$ maxima (open dots), and
$9$ saddles (crosses). Upon increasing $\kappa_m$, several unstable
regions $\mathcal{U}_{\Rti}$ appear (solid dots turn blue) and merge (crosses turn
red) to change the topology of $\mathcal{U}_{\Rti}$. The Euler characteristic
$\chi(\mathcal{U}_{\Rti}) = m - s + M = 1 = C$ in (b), remains $\chi(\mathcal{U}_{\Rti}) = 1$ in
(c), but with $C = 2$ and $H = 1$, changes to $\chi(\mathcal{U}_{\Rti}) = -1$ in
(d), and $\chi(\mathcal{U}_{\Rti}) = -3$ with one component $C = 1$ and four holes
$H = 4$ in (e). In going from (e) to (f) two of the maxima (black
open dots turn blue) drop below zero, producing a characteristic
$\chi(\mathcal{U}_{\Rti}) = 6 - 9 + 2 = -1$; indeed, $\mathcal{U}_{\Rti}$ in (f) has one component
$C = 1$ and two holes $H = 2$, reproducing $\chi(\mathcal{U}_{\Rti}) = C - H = -1$.}
\label{fig:2_defects_maxima}
\end{figure}
Morse theory\cite{NashSen_2011} then provides a connection between the
topological properties of the manifold $\mathcal{U}_{\Rti}$ and the local differential
properties of the surface $\Lambda_{\Cbar}(\tilde{\bf R}) < 0$ defining it: with $C_i$
the number of critical points with index $i$ of the surface
$\Lambda_{\Cbar}(\tilde{\bf R}) < 0$ (the index $i$ counts the number of negative
eigenvalues of the Hessian matrix evaluated at the critical point), the Euler
characteristic $\chi(\mathcal{U}_{\Rti})$ relates the manifold's topology to the number and
properties of critical points,
\begin{equation}\label{eq:chi_Morse_C}
\chi(\mathcal{U}_{\Rti}) = \sum_{i=0}^{2} (-1)^i C_i(\Lambda_{\Cbar} <0).
\end{equation}
For our 2D manifold, the coefficients $C_i$ count the minima, $C_0 = m$, the
saddles, $C_1 = s$, and the maxima, $C_2 = M$; hence,
\begin{equation}\label{eq:chi_Morse_msM}
\chi(\mathcal{U}_{\Rti}) = [m - s + M]_{\Lambda_{\Cbar} < 0}
\end{equation}
and the combination with \eqref{eq:chi_CH} produces the result
\eqref{eq:def_chi_Euler_Morse} anticipated above.
Summarizing, knowing the number of critical points $m$, $M$, and $s$ of the
seascape, i.e., its {\it local differential properties}, we can determine the
global topological aspects of the pinning landscape via the evaluation of the
Euler characteristic $\chi(\mathcal{U}_{\Rti})$ with the help of Eq.\
\eqref{eq:chi_Morse_msM}. The latter then informs us about the number $C$ of
unstable domains in $\mathcal{U}_{\Rti}$ where locally pinned states appear and the number
of holes $H$ in $\mathcal{U}_{\Rti}$ where globally distinct pinned states show up.
Furthermore, the outer boundaries of the lakes, of which we have $C$
components, are to be associated with instabilities of the free vortex state,
while inner boundaries (or boundaries of holes, which count $H$ elements) tell
about instabilities of pinned states; hence, the Betti numbers $C$ and $H$
count different types of instabilities. It would then be desirable to
determine the separate topological coefficients $C$ and $H$
individually---unfortunately, $\chi(\mathcal{U}_{\Rti})$ as derived from local differential
properties provides us only with the difference $C-H$ between locally and
globally pinned areas and not their individual values. Nevertheless, using
Morse theory, we could connect our discussion of local differential properties
of the pinning landscape in Secs.\ \ref{sec:ell_expansion} and
\ref{sec:hyp_expansion} with the global pinning properties of the pinning
energy landscape as expressed through the topology of the unstable domain
$\mathcal{U}_{\Rti}$.
Regarding our previous examples, the isotropic and uniaxial defects, we remark
that for the latter the two simultaneous mergers on the $y$-axis produce a
reduction $C = 2 \to 1$ and an increase $H = 0 \to 1$, and hence a jump
from $\chi = 2$ to $\chi = 0$ in one step, as expected for two
simultaneous mergers. The symmetry of the isotropic defect produces a
(degenerate) critical line at $\tilde{R}_m$ rather than a critical point; adding a
small perturbation $\propto x^3$ breaks this symmetry and produces the
horseshoe geometry discussed in Sec.\ \ref{sec:topology_hyp} above that is
amenable to the standard analysis.
A last remark is in order concerning the topological properties in dual space,
i.e., of bistable regions $\mathcal{B}_{\Ras}$. Here, the mergers produce another
interesting phenomenon as viewed from the perspective of its thermodynamic
analogue. Indeed, the merger of deformed ellipses in tip-space corresponds to
the merger of cusps in asymptotic space, which translates to the vanishing of
critical points and a smooth continuation of the first-order critical and
spinodal lines in the thermodynamic analogue, see also Sec.\
\ref{sec:hyp_Bas}. We are not aware of a physical example in thermodynamics
that produces such a merger and disappearance of critical points.
\section{Summary and outlook}\label{sec:summary}
Strong pinning theory is a quantitative theory describing vortex pinning in
the dilute defect limit where this complex many-body system can be reduced to
an effective single-pin--single-vortex problem. The accuracy offered by this
theory then allows for a realistic description of the shape of the pinning
potential $e_p(\mathbf{R})$ associated with the defects. While previous work
focused on the simplest case of isotropic defects, here, we have generalized
the strong pinning theory to the description of arbitrary anisotropic pinning
potentials. Surprisingly, going from an isotropic to an anisotropic defect has
quite astonishing consequences for the physics of strong pinning---this
reminds about other physical examples where the removal of symmetries or
degeneracies produces new effects.
While the strong pinning problem is quite a complex one requiring the use of
numerical tools in general, we have identified several generic features that
provide the essential physics of the problem and that are amenable to an
analytic treatment. Specifically, these are the points of strong pinning onset
and the merger points, around which the local expansions of the pinning
potential $e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})$ in the tip coordinate $\tilde{\bf R}$ allow us to find all
the characteristics of strong pinning. In particular, we identify the
instability region $\mathcal{U}_{\Rti}$ in the vortex tip space (with coordinates $\tilde{\bf R}$)
and the bistable region $\mathcal{B}_{\Ras}$ in the space of asymptotic vortex positions
$\bar{\bf R}$ as the main geometric objects that determine the critical pinning force
density $\mathbf{F}_\mathrm{pin}$, from which the critical current density $j_c$, the
technologically most relevant quantity of the superconductor, follows
straightforwardly. While the relevance of the bistable region $\mathcal{B}_{\Ras}$ was
recognized in the past \cite{Labusch_1969,LarkinOvch_1979,Koopmann_2004}, the
important role played by the unstable region $\mathcal{U}_{\Rti}$ went unnoticed so far.
When going from an isotropic defect to an anisotropic one, the strong pinning
onset changes dramatically: while the unstable region $\mathcal{U}_{\Rti}$ grows out of a
circle of radius $\sim \xi$ and assumes the shape of a ring at $\kappa > 1$
for the isotropic situation, for an anisotropic defect the onset appears in a
point $\tilde{\bf R}_m$ and grows in the shape of an ellipse with increasing $\kappa_m
> 1$; the location where this onset appears is given by the Hessian of
$e_{\mathrm{pin}}$, specifically, the point $\tilde{\bf R}_m$ where its determinant touches zero
first, $\mathrm{det}\{\mathrm{Hess}[e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})|_{\bar{\bf R}}]\}_{\tilde{\bf R}_m} = 0$. The boundary of this
ellipse defines the jump positions $\mathcal{J}_\mathrm{\Rti}$ associated with the strong pinning
instabilities; when combined with the landing ellipse $\mathcal{L}_\mathrm{\Rti}$, these two
ellipses determine the jump distance $\delta \tilde{u}$ of the vortex tip, from
which follows the jump in the pinning energy $\Delta e_{\mathrm{pin}} \propto \delta\tilde{u}^4$,
which in turn determines $\mathbf{F}_\mathrm{pin}$ and $j_c$.
The bistable region $\mathcal{B}_{\Ras}$ in asymptotic vortex space comes into play when
calculating the average critical force density $\mathbf{F}_\mathrm{pin}$ opposing the vortex
motion: while the vortex tip undergoes a complex trajectory including jumps,
the vortex motion in asymptotic space $\bar{\bf R}$ is described by a straight line.
As this trivial trajectory in $\bar{\bf R}$-space traverses the bistable region
$\mathcal{B}_{\Ras}$, the vortex tip jumps upon exiting $\mathcal{B}_{\Ras}$, which produces the jump
$\Delta e_{\mathrm{pin}}$ and hence $\mathbf{F}_\mathrm{pin}$. Again, the shape of $\mathcal{B}_{\Ras}$ changes when
going from the isotropic to the anisotropic defect, assuming a ring of finite
width around a circle of radius $\sim\xi$ in the former case, while growing in
the form of a crescent out of a point for the anisotropic defect.
The new geometries associated with $\mathcal{U}_{\Rti}$ and $\mathcal{B}_{\Ras}$ then produce a
qualitative change in the scaling behavior of the pinning force density $\mathbf{F}_\mathrm{pin}
\propto (\kappa_m - 1)^{\mu}$ near onset, with the exponent $\mu$ changing
from $\mu = 2$ to $\mu = 5/2$ when going from the isotropic to the anisotropic
defect. This change is due to the change in the scaling of the geometric size
of $\mathcal{B}_{\Ras}$, with the replacement of the fixed radius $\sim \xi$ of the ring by
the growing size of the crescent $\sim \xi (\kappa_m-1)^{1/2}$ [the exponent
$\mu$ assumes a value $\mu =3$ for trajectories cutting the crescent along its
short dimension of size $\xi (\kappa_m-1)$]. Furthermore, for directed
defects, the pinning force density $\mathbf{F}_\mathrm{pin}(\theta)$ depends on the impact angle
$\theta$ relative to the unstable direction $u$ and is aligned with $u$,
except for a small angular regime close to $\theta = \pi/2$. This results in a
pronounced anisotropy in the critical current density $j_c$ in the vicinity of
the strong pinning onset.
A fundamental difference between the strong pinning onsets in the isotropic
and in the anisotropic case are the geometries of the unstable $\mathcal{U}_{\Rti}$ and
bistable $\mathcal{B}_{\Ras}$ regions: these are non-simply connected for the isotropic case
(rings) but simply connected for the anisotropic defect (ellipse and
crescent). The resolution of this fundamental difference is provided by the
second type of special points, the mergers. Indeed, for a general anisotropic
defect, the strong pinning onset appears at a multitude of points, with
unstable and bistable regions growing with $\kappa_m > 1$ and finally merging
into larger areas. Two examples illustrate this behavior particularly well,
the uniaxial defects with a quadrupolar and a dipolar deformation, see Secs.\
\ref{sec:uniax_defect} and \ref{sec:topology_hyp}. In the first case,
symmetric onset points on the $x$ axis produce two ellipses/crescents that
grow, approach one another, and finally merge in a ring-shaped geometry that
is non-simply connected. In the case of a dipolar deformation, we have seen
$\mathcal{U}_{\Rti}$ grow out of a single point with its ellipse expanding and deforming
around a circle, assuming a horseshoe geometry, that finally undergoes a
merging of the two tips to produce again a ring; a similar scenario unfolds when multiple
$\mathcal{U}_{\Rti}$ domains grow and merge as in Figs.\ \ref{fig:top_three} (a warped
defect) and \ref{fig:2_defects_maxima}(c) (a 2D pinning landscape where four
unstable domains have merged to enclose an `island').
These merger points are once more amenable to an analytic study using a proper
expansion of $e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})$ in $\tilde{\bf R}$ around the merger point $\tilde{\bf R}_s$,
the latter again defined by the local differential properties of the
determinant $\mathrm{det}\{\mathrm{Hess}[e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})|_{\bar{\bf R}}]\}$, this time not a
minimum but a saddle. Rather than elliptic as at onset, at merger points the
geometry is hyperbolic, with the sign change associated with increasing
$\kappa_s \equiv \kappa(\tilde{\bf R}_s)$ across unity producing a reconnection of the jump-
and landing lines $\mathcal{J}_\mathrm{\Rti}$ and $\mathcal{L}_\mathrm{\Rti}$.
While the expansions of $e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})$ are describing the local pinning
landscape near onset and merging (and thus produce generic results), the study
of the {\it combined set} of onset- and merger-points describe the global
topological properties of $\mathcal{U}_{\Rti}$ as discussed in Sec.\ \ref{sec:2D_landscape}:
every new (nondegenerate) onset increases the number of components $C$ in
$\mathcal{U}_{\Rti}$, while every merger either decreases $C$ or increases $H$, the number
of `holes' or `islands' (or nontrivial loops in a non-simply connected region)
in the pinning landscape. It is the `last' merging producing a non-simply
connected domain that properly defines a new pinned state; in our examples
these are the closing of the two deformed ellipses in the uniaxial defect with
quadrupolar deformation and the closing of the horseshoe in the defect with a
dipolar deformation. Formally, the local differential
properties of the curvature function $\Lambda_{\Cbar}(\tilde{\bf R}) = \Cbar +
\lambda_-(\tilde{\bf R})$ [with $\lambda_-(\tilde{\bf R})$ the lower eigenvalue of the Hessian
of $e_p(\tilde{\bf R})$], i.e., its minima, saddles, and maxima, are related to the global
topological properties of $\mathcal{U}_{\Rti}$, as described by its Euler characteristic
$\chi = C - H$, through Morse theory, see Eq.\ \eqref{eq:def_chi_Euler_Morse}.
Such topological structures have recently attracted quite some interest, e.g.,
in the context of Fermi surface topologies and topological Lifshitz
transitions \cite{Volovik_2017, Kane_2022}.
The physics around the onset points as expressed through an expansion of
$e_{\mathrm{pin}}(\tilde{\bf R};\bar{\bf R})$ resembles a Landau theory with $\tilde{\bf R}$ playing the role of
an order parameter and $\bar{\bf R}$ the dual variable corresponding to a driving
field---here, $\bar{\bf R}$ drives the vortex lattice across the defect and $\tilde{\bf R}$
describes the deformation of the pinned vortex. The endpoints of the crescent
$\mathcal{B}_{\Ras}$ correspond to critical end points as they appear in the Landau theory
of a first-order transition line, e.g., the Ising model in an external field
or the van der Waals gas. The boundary lines of $\mathcal{B}_{\Ras}$ correspond to spinodal
lines where phases become unstable, e.g., the termination of
overheated/undercooled phases in the van der Waals gas. The existence of
critical end points tells us that `phases', here in the form of different pinning
branches, are smoothly connected when going around the critical point, as in
the gas--liquid transition of the van der Waals gas. As the `last'
critical point vanishes in a merger, a well defined new phase, here a new
pinned branch, appears.
Perspectives for future theoretical work include the study of correlations
between anisotropic defects (see Ref.\ \onlinecite{Buchacek_2020} addressing
isotropic defects) or the inclusion of thermal fluctuations, i.e., creep (see
Refs.\ \onlinecite{Buchacek_2019} and \onlinecite{Gaggioli_2022}).
Furthermore, our discussion of the extended pinscape in Sec.\
\ref{sec:2D_landscape} has been limited to a two-dimensional pinning
potential. In reality, defects are distributed in all three dimensions, which
considerably complicates the corresponding analysis of a full
three-dimensional disordered pinning potential, with the prospect of
interesting new results.
On the experimental side, there are several possible applications for our
study of anisotropic defects. For a generic anisotropic defect, the inversion
symmetry may be broken. In this case, the pinning force along opposite
directions differs in magnitude, as different jumps are associated with the
boundaries of the bistable region $\mathcal{B}_{\Ras}$ away from onset, i.e., at
sufficiently large values of $\kappa_m$. Upon reversing the current, the different
critical forces then result in a ratchet effect \cite{Villegas_2003,
SouzaSilva_2006}. This leads to the rectification of an ac current and hence a
superconducting diode effect. While for randomly oriented defects the pinning
force is averaged and the symmetry is statistically restored, for specially
oriented defects, the diode effect will survive. Indeed, vortex pinning has
been enhanced by introducing nanoholes into the material \cite{Wang_2013,
Kwok_2016}, and a diode effect has been observed recently \cite{Lyu_2021}.
Generalizing strong pinning theory to this type of defect may then help in
the design of superconducting metamaterials with interesting functionalities.
Furthermore, vortex imaging has always provided fascinating insights into vortex
physics. Recently, the SQUID-on-tip technique has been successful in mapping
out a 2D pinning landscape in a film \cite{Embon_2015} (including the
observation of vortex jumps) that has inspired a new characterization of the
pinscape through its Hessian analysis \cite{Willa_2022}; the adaptation of this
current-driven purely 2D setup to the 3D situation described in the present
paper is an interesting challenge.
Finally, we recap the main benefits of this work in a nutshell: For one, we
have established a detailed connection of the strong pinning transition with
the concept of first-order phase transitions in thermodynamics, with the main
practical result that the scaling of the pinning force density $\mathbf{F}_\mathrm{pin} \propto
(\kappa_m - 1)^\mu$ comes with an exponent $\mu = 5/2$ when working with
generic defects of arbitrary shapes. Second, we have found a mechanism, the
breaking of a defect's inversion symmetry, that produces ratchets and a diode
effect in superconducting materials. Third, we have uncovered the geometric
structure, and its topological features, that underlies strong pinning
theory, including a proper understanding of the appearance of distinguished
pinned states. While understanding these geometric structures seems to be of
rather fundamental/scholarly interest at present, future work may establish
further practical consequences that can be used in the development of
superconducting materials with specific functional properties.
\section*{Acknowledgments}
We thank Tom\'a\v{s} Bzdu\v{s}ek, Gian Michele Graf, and Roland Willa for
discussions and acknowledge financial support of the Swiss National Science
Foundation, Division II.
\section{Appendix A}\label{B}
Here we describe an example illustrating how an adiabatic dephasing Lindbladian, with a slaved dephasing term, naturally arises from stochastic unitary evolutions.
The example is a stochastic variant of Berry's paradigmatic model of adiabatic curvature, namely, a spin 1/2 in a magnetic field \cite{berry}. The evolution equation is
\begin{equation}
\dot\rho=
-i[ \vec{B}\cdot \vec{\sigma},\rho]
\label{berry}\end{equation}
where $\vec{B}\in \mathbb{R}^3$ is a time-dependent magnetic field and $\vec{\sigma}$ is the vector of Pauli matrices. The case considered by Berry is when $\vec{B}$ changes its orientation adiabatically, say with fixed magnitude. We now want to consider the stochastic version of this model, where the {\em magnitude} of $\vec{B}$ is a stochastic variable while its orientation changes smoothly (adiabatically) in time. Formally, this corresponds to replacing $\vec{B}$ in the evolution equation by
\[
\vec{B}\to W_t \vec{B}_0
\]
where $W_t$ is (scalar, biased) white noise. The canonical interpretation of Eq.~(\ref{berry}) as a stochastic differential equation goes through the Ito calculus \cite{ito}. To do so, it is convenient to express white noise in terms of the corresponding Brownian motion
\[
db_t:=W_t dt.
\]
The rules of Ito calculus say that $d\rho$ has to be expanded to first order in $dt$ and to second order in $db$.
This gives the stochastic evolution equation
\[
d\rho =
-i[ H_0,\rho]db-\mbox{$\frac 1 2$} (db)^2 [ H_0,[ H_0,\rho]],
\qquad H_0=\vec{\sigma}\cdot \vec{B}_0
\]
where $\vec{B}_0$ is a smooth (non-stochastic) function of time. In particular,
it follows that the (noise-averaged) state $\rho_a=\mathbb{E}(\rho)$ satisfies the adiabatic Lindblad equation
\[
\dot{ \rho_a} =
{\cal L}(\rho_a)=-i\mu[ H_0,\rho_a]-\mbox{$\frac 1 2$} D [H_0,[H_0,\rho_a]]
\]
where $\mu$ is the bias of the white noise $ \mu=\mathbb{E}(W_t)$ and $D$ its variance $\mathbb{E}(W_tW_s)=D\delta(t-s)$. If $D\neq 0$, this gives a dephasing evolution where the dephasing is slaved to the time dependence of the Hamiltonian.
(A general framework for deriving Lindbladian for general stochastic evolutions is described e.g. in \cite{ref:stochastic}.)
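As a sanity check of this averaging argument, the following minimal Python sketch (ours; the choice $H_0=\sigma_z$ with fixed orientation, the initial state, and all parameter values are illustrative assumptions) propagates an ensemble of stochastic unitary trajectories and compares the averaged coherence with the Lindblad prediction $(\rho_a)_{01}(t)=\frac{1}{2}e^{-2i\mu t-2Dt}$ valid for a static $H_0$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
w = np.array([1.0, -1.0])                   # eigenvalues of H0 = sigma_z
mu, D, dt, nsteps, ntraj = 1.0, 0.3, 1e-3, 2000, 4000

rho = np.full((ntraj, 2, 2), 0.5, complex)  # all start in |+x><+x|
for _ in range(nsteps):
    db = mu*dt + np.sqrt(D*dt)*rng.standard_normal(ntraj)
    ph = np.exp(-1j*np.outer(db, w))        # U = diag(exp(-i w db))
    rho = ph[:, :, None] * rho * ph[:, None, :].conj()   # U rho U^dagger
t = nsteps*dt
print(rho.mean(axis=0)[0, 1])               # noise-averaged coherence
print(0.5*np.exp(-2j*mu*t - 2*D*t))         # Lindblad value; agreement
                                            # up to O(1/sqrt(ntraj))
\end{verbatim}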
\section{Appendix B}\label{A}
Here we outline the proof of Theorem~\ref{theorem}
by evaluating the terms proportional to $\dot\phi$ in Eq.~(\ref{LinResp}).
Let $P_j$ denote the spectral projections for $H$. Since ${\cal L}(P_j)=0$, the spectral projections are instantaneous stationary states.
Let $P=P_0$ denote the projection on the ground state.
We also denote $E_{jk}=\ket{j}\bra{k}$. This is an eigenvector of the Lindbladian with eigenvalue $\lambda_{jk}$, i.e., ${\cal L}(E_{jk})=\lambda_{jk}E_{jk}$.
By the adiabatic theorem the state adheres to the spectral projection, $\rho(t)=P(\phi(t))+O(\dot\phi)$. The first-order correction $\delta\rho$ to the state
satisfies
\begin{equation}\label{dr}{\cal L}(\delta\rho)=\dot{P},\end{equation}
as can be seen from the substitution $\rho=P+\delta\rho$
into the Lindblad Eq.~(\ref{Lind}) and using ${\cal L}(P)=0$.
The correction can be decomposed as $\delta\rho=\delta_\perp \rho + \delta_\parallel \rho$ into parts $\delta_\perp \rho\in\operatorname{\mathrm{Range}} {\cal L}$ and $\delta_\parallel \rho\in\operatorname{\mathrm{Ker}} {\cal L}$, which are orthogonal with respect to the inner product defined by the trace. Note that ${\cal L}$ considered as a map on $\operatorname{\mathrm{Range}} {\cal L}$ is invertible and that $\dot P \in \operatorname{\mathrm{Range}} {\cal L}$. Thus
Eq.~(\ref{dr}) implies $\delta_\perp\rho= {\cal L}^{-1}(\dot P)$, where the inverse ${\cal L}^{-1}$ is well defined.
In fact, since the eigenstates of ${\cal L}$ are $E_{jk}$, one may readily write
\begin{equation}\label{rp} \delta_\perp \rho={\cal L}^{-1}(\dot P)=\sum_{j\neq k}
\frac{ \bra{j}\dot P\ket{k}}{\lambda_{jk}}E_{jk}.\end{equation}
Strictly speaking, we have restricted ourselves here to the case of simple eigenvalues;
more generally,
$\bra{j}\dot P\ket{k}=0$ between degenerate eigenstates, whence
the appropriate reading of the sum~(\ref{rp}) is that such pairs are omitted.
The complementary part $\delta_\parallel \rho$ may be determined as well (cf. \cite{BR93}) and happens to depend on history, but will not be needed.
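As an aside, the eigenvalue relation ${\cal L}(E_{jk})=\lambda_{jk}E_{jk}$ used above is easy to verify numerically; in the following minimal sketch (ours) we assume, purely for illustration, the dephasing form of ${\cal L}$ derived in Appendix~A, for which $\lambda_{jk}=-i\mu(\varepsilon_j-\varepsilon_k)-\frac{1}{2}D(\varepsilon_j-\varepsilon_k)^2$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
mu, D, n = 1.0, 0.4, 4
e = np.sort(rng.uniform(0.0, 3.0, n))   # generic nondegenerate spectrum
H = np.diag(e)                          # work directly in the eigenbasis

def Lind(rho):                          # -i mu [H,rho] - D/2 [H,[H,rho]]
    comm = H @ rho - rho @ H
    return -1j*mu*comm - 0.5*D*(H @ comm - comm @ H)

for j in range(n):
    for k in range(n):
        E = np.zeros((n, n), complex); E[j, k] = 1.0
        lam = -1j*mu*(e[j] - e[k]) - 0.5*D*(e[j] - e[k])**2
        assert np.allclose(Lind(E), lam*E)
\end{verbatim}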
Now, $\rho$ carries two contributions to the response Eq.~(\ref{LinResp}), of which the leading one, $\mathrm{Tr}(P F_\mu)$, equals
$\partial_\mu\mathrm{Tr}(PH)=\partial_\mu\varepsilon_0$. This term is not proportional to $\dot \phi$ and does not concern us (note that
it vanishes when the spectrum is independent of $\phi$, c.f. observation~8 after the theorem).
As for the first-order correction $\delta\rho$, two contributions
arise in turn through $F_\mu={\partial H\over\partial\phi^\mu}=\partial_\mu{\sum\varepsilon_j P_j}$. The first, $\sum (\partial_\mu\varepsilon_j) P_j$, lies in $\operatorname{\mathrm{Ker}} {\cal L}$ and matches $\delta_\parallel \rho$.
This term does not concern us either and again vanishes when $\partial_\mu\varepsilon_j$ does.
The other part $\sum \varepsilon_j \partial_\mu P_j$ lies in $\operatorname{\mathrm{Range}} {\cal L}$
and gives the requisite linear response term of the expectation value $\langle F_\mu\rangle= \mathrm{Tr}(\rho\, \partial_\mu H)$:
\begin{equation}
\sum_{i} \varepsilon_i \mathrm{Tr} \bigl((\partial_\mu P_i) \, \delta_\perp\rho\bigr)
=\sum_{i\neq 0} (\varepsilon_i-\varepsilon_0) \mathrm{Tr} \bigl((\partial_\mu P_i) \, \delta_\perp\rho\bigr),
\label{lin-res}
\end{equation}
where we used $\partial_\mu\sum_i P_i=0$. Eq.~(\ref{lin-res}) can now be written as $\sum_{i}(\varepsilon_i-\varepsilon_0) \mathcal{A}_i$, where
\begin{equation}
\mathcal{A}_i=\sum_{j\neq k}
\bra{k} \partial_\mu P_i\ket{j}
\frac{ \bra{j}\dot P\ket{k}}{\lambda_{jk}}.
\label{lin-res2}
\end{equation}
Using
\begin{equation}
\bra{j}\dot P\ket{k}=\bra{j}P_j\dot PP_k\ket{k}=\left(\delta_{k,0}+\delta_{j,0}\right)\bra{j}\dot P\ket{k}
\label{Aa}
\end{equation}
it follows that the double sum in Eq. (\ref{lin-res2}) reduces to the single sum
\begin{equation}
\mathcal{A}_i=\sum_{j\neq 0} \Bigl(\frac{\bra{0}\partial_\mu P_i\ket{j}\,\bra{j}\dot P\ket{0}}{\lambda_{j0}} + c.c. \Bigr).
\label{lin-res3}
\end{equation}
Since $\overline{\lambda_{kj}}=\lambda_{jk}$, $\mathcal{A}_i$ is manifestly real, as it must be.
Using the fact that (recall that $i\neq 0$)
\begin{equation}
\bra{0}\partial_\mu P_i\ket{j}
=\bra{0}P\partial_\mu P_i\ket{j}=-\bra{0}(\partial_\mu P) P_i\ket{j}=-\delta_{ij} \bra{0}\partial_\mu P \ket{j}
\end{equation}
we finally find
\begin{align}
\sum_{j\neq 0} (\varepsilon_j-\varepsilon_0) \mathcal{A}_j&=-\sum_{j\neq 0} \ {\frac {\varepsilon_j-\varepsilon_0} {\lambda_{j0}}}\, \mathrm{Tr}\bigl(P (\partial_\mu P) P_j \dot P\bigr) + c.c. \nonumber\\
&=\sum_{j\neq 0} \ \frac 1 {i+\gamma_{j0}}\, \mathrm{Tr}\bigl((\partial_\mu P) P_j \dot P\bigr) + c.c.,
\label{lin-res4}
\end{align}
where $\gamma_{j0}\ge 0$ is the dimensionless characterization of the spectral data of Eq.~(\ref{gammajk}).
A simplification occurs when $\gamma_{j0}$ is independent of $j$. This is, of course,
automatically the case for a two-level system, where $j$ takes the single value $j=1$. (A similar simplification occurs when there is one dominant $1/\gamma_{j0}$.) The sum over $j$ can now be carried out explicitly
\begin{equation}
\langle F_\mu\rangle=
\sum (\varepsilon_j-\varepsilon_0)\mathcal{A}_j=
\frac {\gamma-i}{1+\gamma^2} \,\mathrm{Tr} \big((\partial_\mu P)P_\perp \dot P\big)
+ c.c.
\label{lin-res-Final}
\end{equation}
Writing $\dot P= \sum_\nu (\partial_\nu P)\, \dot \phi^\nu$ we obtain the expression in the theorem.
\chapter{Introduction}
\section{Goals}
In this work we provide algorithms\footnote{See~(\ref{eq:AlgI}) in Theorem~\ref{thm:II} and~(\ref{eq:AlgII}) in Theorem~\ref{thm:III}.} approximating the bivector $\displaystyle \partial_{\ell_1} s_{(x)} \wedge \partial_{\ell_2} s_{(x)}$ and the integral $\displaystyle \int_P \big|\partial_{\ell_1} s(x) \wedge \partial_{\ell_2} s(x) \big|dx$ of a smooth map $s:\Omega \to \mathbb{E}_n$ (that we loosely call `surface'), where
\begin{itemize}
\item {\color{dgreen} $\mathbf{\mathbb{E}_n}$}\index{symboles}{$\mathbb{E}_n$} is an $n$-dimensional Euclidean space;
\item $\Omega$ is an open subset of the Euclidean plane $\mathbb{E}_2$;
\item $P\subset\Omega$ is a compact polygon;
\item for every $v\in \mathbb{E}_2$, $\displaystyle \partial_v s_{(x)}= \lim_{\epsilon \to 0}\frac{1}{\epsilon} \big[s(x+\epsilon v)-s(x)\big]$;
\item $\{\ell_1, \ \ell_2\}$ is an orthonormal basis in $\mathbb{E}_2$;
\item $\wedge$ is the outer product in the Euclidean Clifford algebra $\mathbb{G}_n$ associated\footnote{See Sections~\ref{sec:Eucl struct} and~\ref{sec:GA and En}.} to $\mathbb{E}_n$.
\end{itemize}
In particular, if $\displaystyle \partial_{\ell_1} s_{(x)} \wedge \partial_{\ell_2} s_{(x)}\ne 0$, then the bivector $\displaystyle \partial_{\ell_1} s_{(x)} \wedge \partial_{\ell_2} s_{(x)}$ can represent\footnote{See Section~\ref{sec:point lines}} the direction of the tangent plane to the surface $s$ at point $s(x)$ (or the normal vector, if $s:\Omega \to \mathbb{E}_3$ and if we consider\footnote{See Section~\ref{sec:cross product}.} the cross product $\partial_{\ell_1} s_{(x)} \bm{\times} \partial_{\ell_2} s_{(x)}$).
Our algorithms use information from triangles in $\mathbb{E}_n$ inscribed\footnote{This means that the vertices of the triangles are images $s(x)$ of vertices of some nondegenerate triangles in $\Omega$ (see also Section~\ref{sec:surfaces}).} in the surface~$s$. Thus, Algorithm~(\ref{eq:AlgI}) allows one to recover the tangent plane direction from every sequence of inscribed triangles converging to the point $s(x)$; this result is obtained by approximating\footnote{See~(\ref{eq:Alg0}) in Theorem~\ref{thm:I}.} Jacobian determinants of smooth transformations $f:\Omega \to \mathbb{E}_2$ at points $x\in \Omega$ through nondegenerate triangles converging to point $x$. Algorithm~(\ref{eq:AlgI}) can also estimate the norm of $\displaystyle \partial_{\ell_1} s_{(x)} \wedge \partial_{\ell_2} s_{(x)}$, and thus, when $s$ is globally injective, Algorithm~(\ref{eq:AlgII}) can approximate the area of portions of the surface $s$ from every sequence of inscribed triangular\footnote{This means that all faces of the polyhedron are inscribed triangles.} polyhedra uniformly convergent to that portion\footnote{See Remark~\ref{rem:triangulations}.}.
In particular, we apply Algorithm~(\ref{eq:AlgI}) to the triangulation of a circular cylinder of the famous Schwarz\footnote{Hermann Amandus Schwarz (1843-1921).} area paradox\footnote{See Chapter~\ref{cha:local Schwarz}.}, showing that the approximating inscribed balanced mean bivectors\footnote{See Section~\ref{sec:surfaces}.} do converge to the tangent bivectors without any restriction on the approximating triangular mesh.
As a matter of fact, by using Algorithms~(\ref{eq:AlgI}) and~(\ref{eq:AlgII}) we can restore analogies\footnote{Compare, for instance, Proposition~\ref{prop: approxim dot c} and Theorem~\ref{thm:II}.} between the limit vector $\dot{c}_{(\chi)}$ of a smooth curve $c:I\to \mathbb{E}_n$ and the limit bivector $\displaystyle \partial_{\ell_1} s_{(x)} \wedge \partial_{\ell_2} s_{(x)}$ of a smooth surface $s:\Omega\to \mathbb{E}_n$; such analogies are lost, according to the Schwarz paradox, if we try to approximate tangents or surface area via the usual algorithms applied to arbitrary inscribed triangular polyhedra.
\section{Warnings}\label{sec:warnings}
The aim of this work is to describe Algorithms~(\ref{eq:AlgI}) and~(\ref{eq:AlgII}) as simply as possible; thus, our intention here is not to provide the most general hypotheses under which such algorithms work; neither do we want to generalize them here to $k$-manifolds immersed in $n$-dimensional Euclidean spaces, or to Riemannian manifolds, nor do we want to introduce a Stieltjes-like $k$-measure in $\mathbb{E}_n$ generalizing Theorem~\ref{thm:I}. Such generalizations will be examined in forthcoming works.
The main theorems are stated and proved using Geometric Algebra. However, the reader will be provided with formulas to translate them into the lengthy Cartesian coordinate formalism.
Finally, we apologize if some calculations may appear tedious or pedantic to readers well acquainted with Geometric Algebra, but this work is addressed to a broader audience.
\section{Notations I}
In this work we consider it important to distinguish the different types of mathematical objects in our formulas; therefore, we use the following conventions:
\begin{itemize}
\item lower-case Greek letters stand for real numbers;
\item lower-case Latin letters stand for vectors in some Euclidean space $\mathbb{E}_n$ (with the exceptions of letters $i,\ j,\ k,\ m,\ n$, representing integer indexes);
\item capital Latin letters stand for bivectors or generic $k$-vectors;
\item capital Greek letters stand for sets;
\item capital bold Greek letters stand for $n$-uples or arrays of real numbers (with $n>1$).
\end{itemize}
\section{Historical notes}
As two distinct points on a sufficiently smooth curve converge to the same point, the line passing through those two points assumes a well defined position. In particular, when such a local phenomenon is globally injective and uniform, we can approximate the length of the curve by the lengths of the line segments joining a finite number of consecutive points on the curve. The idea that a similar phenomenon may occur to triangles inscribed in a sufficiently smooth surface is probably what suggested to Serret\footnote{Joseph Alfred Serret (1819-1885).} (see \cite{Ser1879}) the following definition of area\footnote{Our translation: ``Let a portion of a curved surface be bounded by a contour C; we will call area of that surface the limit S to which converges the area of an inscribed polyhedral surface whose faces are triangles and which is bounded by a polygonal contour F having C as limit.''}:
\bigskip
\textit{\large Soit une portion de surface courbe termin\'ee par un contour C; nous nommerons aire de cette surface la
limite S vers laquelle tend l'aire d'une surface poly\'edrale inscrite form\'ee de faces triangulaires et termin\'ee par un contour polygonal F ayant pour limite le contour~C.}
\bigskip
However, on 20 December 1880, Schwarz wrote to Genocchi\footnote{Angelo Genocchi (1817-1889).} (see \cite{Cas1950}) observing that the area of a curved surface cannot be defined as Serret did. In subsequent letters to Genocchi, Schwarz showed that even the area of a surface as simple as a bounded part of a right circular cylinder cannot be recovered using Serret's definition.
Schwarz even provided examples of sequences of inscribed triangular polyhedra whose areas converge to any given number not less than the area of the cylinder (and even to infinity) as the polyhedra approach the cylinder uniformly\footnote{As a consequence, there also exist sequences of inscribed triangular polyhedra approaching the cylinder whose areas have no limit.}. Such a phenomenon, which may occur for every curved surface (and even for polyhedra\footnote{See \cite{Fre1925}.}), is given the name of {\bf \color{dgreen} Schwarz paradox}
\index{termes}{Schwarz paradox} (or {\bf \color{dgreen} Schwarz phenomenon}).
\index{termes}{Schwarz phenomenon}
That famous paradox apparently destroyed the possibility of defining the area of a smooth surface by analogy with the length of a smooth curve.
Besides, the local interpretation of the Schwarz phenomenon implies that as three noncollinear points on a smooth surface converge to the same point of the surface, the limit position of the plane passing through those three points is not well determined, and can differ from the tangent plane to the surface at the limiting point.
Also, Schwarz's counterexample shows that the limiting position of the secant plane can even be orthogonal to the actual tangent plane.
\bigskip
Two questions naturally arise:
\begin{itemize}
\item what sequences of inscribed triangular polyhedra approaching a surface have areas converging to the area of that surface~?
\item are there algorithms able to recover the area of a surface from every sequence of inscribed triangular polyhedra approaching that surface~?
\end{itemize}
Schwarz showed that those questions are not trivial even for a cylinder.
\bigskip
Many different approaches were used to answer those questions. We cannot summarize such a long and prolific history here\footnote{Suggested readings are~\cite{GanPer2009}, \cite{Ces1956}, \cite{Rad1948} and~\cite{Sak1937}.}; we will just focus on some particular issues which only concern smooth curved surfaces.
\begin{itemize}
\item
Apart from Peano\footnote{Giuseppe Peano (1858-1932).}, all authors\footnote{Another exception is William Henry Young (1863-1942), who used an approach similar to Peano's in~\cite{You1920}.} approximated the area of a curved surface using the areas of triangular polyhedra uniformly approaching the surface.
\begin{itemize}
\item
Most of those authors selected particular inscribed triangular polyhedra constraining the form or the position of the triangular faces with `ad hoc' conditions.
\item
Lebesgue\footnote{Henri Lebesgue (1875-1941).}, instead, freed himself from inscribed polyhedra and artificial geometric conditions; however, his definition of area\footnote{See~\cite{Leb1902}.} is of no help in selecting a sequence of polyhedra whose areas converge to the area of the surface\footnote{We suggest reading Jordan's criticism of Lebesgue's definition of area in \cite{Leb1926} at pages~163--164.}, nor does his definition of area correspond locally to a definition of tangent.
\item
Ge\"{o}cze\footnote{Zo\'ard Ge\"{o}cze (1873-1916).} conjectured\footnote{See \cite{Rad1948}.}, and Mulholland\footnote{H.~P.~Mulholland (we did not find any biographical data about him).} proved in~\cite{Mul1950}, that Lebesgue's area can also be obtained restricting Lebesgue's approach to inscribed polyhedra.
\end{itemize}
\item
Peano freed himself from polyhedra\footnote{Nevertheless, his work has been a key inspiration to us.}, and used his \textit{Calcolo Geometrico} (based on Grassmann exterior algebra\footnote{See~\cite{Pea1887} and~\cite{Pea1888}.}) to define the area through integrals taken on the boundaries of portions of a surface. However, his definition\footnote{See~\cite{Pea1887} on page~164, or \cite{Pea1890} on page~55.} was vague about which portions of a surface one may cut in order to approximate the area of the whole surface.
\end{itemize}
Our Algorithm~(\ref{eq:AlgII}) allows one to consider inscribed triangular polyhedra without any kind of constraint, and uses a slightly modified notion of area. Besides, Algorithm~(\ref{eq:AlgII}) is just a global adaptation of the local Algorithm~(\ref{eq:AlgI}) that approximates tangent planes from every inscribed triangle approaching a point on the surface. Thus, Algorithms~(\ref{eq:AlgI}) and~(\ref{eq:AlgII}) restore many of the analogies between curves and surfaces.
\chapter{Basic notions of Euclidean Clifford algebras}
\section{Motivations}
Theorem~\ref{thm:I}, Theorem~\ref{thm:II} and Theorem~\ref{thm:III} are stated and proved using Euclidean Clifford algebra (i.e. {\bf \color{dgreen} Geometric Algebra}\index{termes}{Geometric Algebra}).
Of course, they can be translated into the Cartesian coordinatewise language as well\footnote{We provide formulas to do it.}; however, we consider the coordinate-free language of Geometric Algebra to be richer and more suitable for algebraically representing geometric properties. Moreover, we discovered Algorithms~(\ref{eq:AlgI}) and~(\ref{eq:AlgII}) while exploring the Schwarz paradox via Geometric Algebra and not via the Cartesian language.
\section{Formal Geometric Algebra}
Following is a brief formal description of Geometric Algebra $\mathbb{G}_n$.
For more details and other approaches, see also~\cite{Art2006},~\cite{Cli1876},~\cite{Del1992},~\cite{DorFonMan2007},~\cite{HesSob1984},~\cite{Hes1999},~\cite{Lou2001},~\cite{Mac2002} or~\cite{Sny2012}.
\medskip
Suppose we have an ordered alphabet of $n$ (distinct) letters {\bf \color{dgreen} $\mathcal{A}_n$}$=\{\ell_1,\dots , \ell_n\}$\index{symboles}{$\mathcal{A}_n$}.
A {\bf \color{dgreen} word}\index{termes}{Word} from this alphabet is a juxtaposition of letters taken from $\mathcal{A}_n$. A word with no letters is considered a word as well; it is named {\bf \color{dgreen} empty word}\index{termes}{Empty word} (or {\bf \color{dgreen}unit})\index{termes}{Unit} and it is given the reserved\footnote{A symbol is called `reserved' if it can never be a letter of any alphabet.} symbol~{\bf \color{dgreen}$\1$}\index{symboles}{$\1$}.
The set of formal finite real linear combinations of words\footnote{We will write real coefficients on left of words.} from $\mathcal{A}_n$ forms a real algebra {\bf \color{dgreen} $\mathbb{G}_n$}\index{symboles}{$\mathbb{G}_n$} if we consider juxtaposition of words as an associative and distributive product among words\footnote{The real coefficients are multiplied among themselves in $\mathbb{R}$.}. Thus, $\1$ is the unit for that product.
Also the empty real linear combination of words is considered an element of such algebra, and it is given the symbol~{\bf \color{dgreen}$\mathbb{ O}$}.\index{symboles}{$\mathbb{ O}$}
The following axioms hold in $\mathbb{G}_n$:
\begin{center}
$
\displaystyle
\hfil
\ell_j \ne \1\ ,
\hfil
\ell_j \ne \mathbb{O}\ ,
\hfil
\mathbb{O} \ne \1\ ,
\hfil
0 W = \mathbb{O}\ ,
\hfil
1 W = W\ ,
\hfil
$
\end{center}
where $j=1,\dots, n$ and $W$ is a word from the alphabet $\mathcal{A}_n$; moreover,
\begin{equation}\label{eq: product axiom}
\ell_i \ell_j =
\left\{
\begin{array}{ll}
-\ell_j \ell_i & \textrm{if } i\ne j \ , \\
\1 & \textrm{if } i=j \ ,
\end{array}
\right.
\end{equation}
where $-\ell_j \ell_i$ abbreviates $(-1)\ell_j \ell_i$.
The complete ordered word $\ell_1 \ell_2 \cdots \ell_n$ is called {\bf \color{dgreen} pseudo-unit}\index{termes}{Pseudo-unit} and is given the reserved symbol~{\bf \color{dgreen}$\mathbb{I}_n$}\index{symboles}{$\mathbb{I}_n$}.
$\mathbb{G}_n$ is then uniquely determined\footnote{Unique up to algebra isomorphisms between real associative algebras with unit.} if we add the final axioms
\begin{center}
$\displaystyle
\hfil
\mathbb{I}_n \ne \mathbb{O}\ ,
\hfil
\mathbb{I}_n \ne \1\ ,
\hfil
\mathbb{I}_n \ne -\1 \ .
\hfil
$
\end{center}
Axiom~(\ref{eq: product axiom}) allows one to reduce every nonempty word from the alphabet $\mathcal{A}_n$ to a unique minimal\footnote{That is, without repeated letters.} ordered word (with sign)
\[
\pm
\ell_{i_1} \ell_{i_2} \cdots \ell_{i_k}\ ,
\]
where $i_1< i_2< \cdots <i_k$. The number $k$ is called {\bf \color{dgreen} grade}\index{termes}{Grade of a word} of the word, and the sign is called {\bf \color{dgreen} orientation}\index{termes}{Orientation of a word} of the word (with respect to the ordered alphabet $\mathcal{A}_n$). Such reductions make $\mathbb{G}_n$ a graded algebra
\[
\mathbb{G}_n
=
\bigoplus_{k=0}^n \mathbb{G}_{n \choose k}\ ,
\]
where~{\bf \color{dgreen}$\displaystyle \bm{\mathbb{G}_{n \choose k}}$}\index{symboles}{$\mathbb{G}_{n \choose k}$}
is the linear subspace of (finite) real combinations of words of grade $k$ (notice that $\displaystyle \mathbb{G}_{n \choose k}$ is not a subalgebra). Each $\displaystyle \mathbb{G}_{n \choose k}$ has (real) dimension ${n \choose k} = \frac{n!}{k!(n-k)!}$, and $\mathbb{G}_n$ has dimension $2^n$.
We can unambiguously identify the algebra of real numbers $\mathbb{R}$ with $\displaystyle \mathbb{G}_{n \choose 0}$, the real number $1$ with unit $\1$, and $0\in \mathbb{R}$ with the empty linear combination $\mathbb{O}$.
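The reduction just described is easily mechanized; the following minimal Python sketch (ours; encoding a basis word as a tuple of letter indices is an implementation convenience, not part of the formalism) reduces the juxtaposition of two basis words using axiom~(\ref{eq: product axiom}):
\begin{verbatim}
def word_product(a, b):
    # Product of two basis words of G_n, each a strictly increasing
    # tuple of letter indices; returns (sign, reduced word).
    word, sign = list(a) + list(b), 1
    changed = True
    while changed:                         # repeat until fully reduced
        changed, i = False, 0
        while i < len(word) - 1:
            if word[i] > word[i + 1]:      # l_i l_j = -l_j l_i (i != j)
                word[i], word[i + 1] = word[i + 1], word[i]
                sign, changed = -sign, True
            elif word[i] == word[i + 1]:   # l_i l_i = 1
                del word[i:i + 2]
                changed = True
            i += 1
    return sign, tuple(word)

print(word_product((1, 2), (2, 3)))   # (1, (1, 3)):  l1l2 l2l3 = l1l3
print(word_product((1, 2), (1, 2)))   # (-1, ()):     (l1l2)^2  = -1
\end{verbatim}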
\section{Euclidean structure of $\mathbb{G}_n$}\label{sec:Eucl struct}
Geometric Algebra $\mathbb{G}_n$ is also called {\bf \color{dgreen} Euclidean Clifford algebra}\index{termes}{Euclidean Clifford algebra}, because it possesses a Euclidean structure intimately tied to its algebraic product\footnote{And because it was introduced by William Kingdon Clifford (1845-1879); see~\cite{Cli1876}.}. As a matter of fact, the symmetric part of the product among elements $x,y \in \mathbb{G}_{n \choose 1}$
\[
\frac{1}{2}
(xy+yx)
\]
\noindent
is always a real number and, as a function of $x$ and $y$, it is a symmetric, positive definite bilinear form in $\mathbb{G}_{n \choose 1}$ (that we denote with the symbol~{\bf \color{dgreen} $\bm{x\cdot y}$}).\index{symboles}{$x\cdot y$}
The $n$ letters of the ordered alphabet $\mathcal{A}_n$ (generating $\mathbb{G}_n$) form an ordered orthonormal basis in $\mathbb{G}_{n \choose 1}$ with respect to the scalar product $x \cdot y$, indeed
\[
\ell_i \cdot \ell_j =
\frac{1}{2}
(\ell_i \ell_j + \ell_j \ell_i )
=
\left\{
\begin{array}{ll}
0 & \textrm{if } i\ne j \ , \\
1 & \textrm{if } i=j \ .
\end{array}
\right.
\]
It is also important to note that the antisymmetric part of the product between $x,y \in \mathbb{G}_{n \choose 1}$
\[
\frac{1}{2}
(xy-yx)
\]
\noindent
is always an element of $\mathbb{G}_{n \choose 2}$, it is given the symbol~{\bf \color{dgreen} $\bm{x\wedge y}$}\index{symboles}{$x\wedge y$}, and is called {\bf \color{dgreen} outer product}\footnote{Or, simply, exterior product.}\index{termes}{Outer product in $\mathbb{G}_n$} because it acts in the graded algebra $\mathbb{G}_n$ as the associative antisymmetric Grassmann exterior product acts on the graded Grassmann algebra $\displaystyle \bigoplus_{k=0}^n \left[\bigwedge^{k} \mathbb{G}_{n \choose 1}\right]$. So, the product of $x,y \in \mathbb{G}_{n \choose 1}$ can be decomposed as
\[
xy = (x\cdot y) \ + \ (x\wedge y)\ ,
\]
and
\[
\ell_i \ell_j =
(\ell_i \cdot \ell_j)
+
(\ell_i \wedge \ell_j)
=
\left\{
\begin{array}{ll}
\ell_i \wedge \ell_j = - \ell_j \wedge \ell_i & \textrm{if } i\ne j \ , \\
1 & \textrm{if } i=j \ .
\end{array}
\right.
\]
\section{The Geometric Algebra associated to an oriented Euclidean space}\label{sec:GA and En}
Here we notice that the construction of $\mathbb{G}_n$ can proceed the other way around as well: given an $n$-dimensional Euclidean space $\mathbb{E}_n$,
there exists a unique\footnote{Up to isometries and orientation.} Geometric Algebra $\mathbb{G}_n$ such that its Euclidean subspace $\mathbb{G}_{n \choose 1}$ is isometric to $\mathbb{E}_n$. Indeed, it suffices to choose an ordered orthonormal basis $\{e_1,\ e_2,\ \dots ,\ e_n\}\subset \mathbb{E}_n$ as the ordered alphabet generating $\mathbb{G}_n$. In this sense we speak of Geometric Algebra associated to the oriented\footnote{The orientation being determined by the order of the orthonormal basis.} Euclidean space~$\mathbb{E}_n$.
A fundamental improvement of Geometric Algebra $\displaystyle \mathbb{G}_n
=\bigoplus_{k=0}^n \mathbb{G}_{n \choose k}$ over Grassmann algebra $\displaystyle \bigoplus_{k=0}^n \left[\bigwedge^{k} \mathbb{E}_n\right]$ is that $\mathbb{G}_n$ has a well defined Euclidean structure such that
\begin{itemize}
\item each subspace $\mathbb{G}_{n \choose k}$ is orthogonal in $\mathbb{G}_n$ to every other $\mathbb{G}_{n \choose j}$, with $k\ne j$;
\item each subspace $\mathbb{G}_{n \choose k}$ has a Euclidean structure, i.e. a symmetric, positive-definite bilinear form (that we will continue to indicate with the dot $\bm{\cdot}$) uniquely determined by the scalar product in $\mathbb{G}_{n \choose 1}$ (usually identified with $\mathbb{E}_n$).
\end{itemize}
For instance, for each $a,b,c,d \in \mathbb{G}_{3 \choose 1}\equiv \mathbb{E}_3$
\[
(a\cdot c) (b\cdot d)- (a\cdot d)(b\cdot c)
\]
is the scalar product $(a\wedge b) \cdot (c\wedge d)$ in $\mathbb{G}_3$ restricted to the subspace $\mathbb{G}_{3 \choose 2}$.
Notice that the one-dimensional subspace $\mathbb{G}_{3 \choose 0}$ (that we identified with $\mathbb{R}$) has the usual product between real numbers as the restriction of the scalar product in~$\mathbb{G}_3$
\begin{center}
$
\big(\alpha\1\big)
\cdot
\big(\beta\1\big)
=
\alpha\beta
=
\big(\alpha\1\big)
\big(\beta\1\big)
$,
\end{center}
while
\begin{center}
$
\big(\alpha\mathbb{I}_n\big)
\cdot
\big(\beta\mathbb{I}_n\big)
=
\alpha\beta
$, and
$
\big(\alpha\mathbb{I}_n\big)
\big(\beta\mathbb{I}_n\big)
=
(-1)^{\frac{n(n-1)}{2}}
\alpha\beta
$.
\end{center}
Roughly speaking, $\mathbb{G}_n$ encodes the scalar product of $\mathbb{E}_n$, its orientation\footnote{Encoded by its pseudo-unit $\mathbb{I}_n$.}, and the Grassmann exterior product on $\displaystyle \bigoplus_{k=0}^n \left[\bigwedge^{k} \mathbb{E}_n\right]$, within its associative and distributive algebraic product; moreover, such encoding reveals many connections between algebra and geometry, making new insights possible.
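For instance, the induced scalar product on $\mathbb{G}_{3 \choose 2}$ displayed above can be checked numerically; here is a minimal Python sketch (ours; \texttt{numpy} and randomly chosen vectors are assumptions of the illustration):
\begin{verbatim}
import numpy as np
from itertools import combinations

def wedge(u, v):
    # components of u ^ v on the orthonormal basis {e_j ^ e_k}, j < k
    n = len(u)
    return np.array([u[j]*v[k] - u[k]*v[j]
                     for j, k in combinations(range(n), 2)])

rng = np.random.default_rng(6)
a, b, c, d = (rng.standard_normal(3) for _ in range(4))
assert np.isclose(wedge(a, b) @ wedge(c, d),
                  (a @ c)*(b @ d) - (a @ d)*(b @ c))
\end{verbatim}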
\section{Notations II}
As we have already said, the Geometric Algebra $\mathbb{G}_n$ is a Euclidean space; we indicate the norm of $X\in\mathbb{G}_n$ with
symbol~{\color{dgreen}$\bm{|X|}$}$=\sqrt{X\cdot X}$.\index{symboles}{$\vert \cdots \vert$}
In order to emphasize the geometric interpretation of elements in a Geometric algebra, we will use the following nomenclature.
Given an orthonormal ordered basis in the $n$-dimensional Euclidean space $\mathbb{E}_n$ (or an ordered alphabet) $\{\ell_1, \dots , \ell_n\}$, then
\begin{itemize}
\item elements of $\mathbb{G}_{n \choose 0}\equiv \mathbb{R}$ are called {\bf \color{dgreen} scalars}\index{termes}{Scalars},
\item elements of $\mathbb{G}_{n \choose 1}\equiv \mathbb{E}_n$ are called {\bf \color{dgreen} vectors}\index{termes}{Vectors},
\item elements of $\mathbb{G}_{n \choose 2}$ are called {\bf \color{dgreen} bivectors}\index{termes}{Bivector},
\item elements of $\mathbb{G}_{n \choose k}$ are called {\bf \color{dgreen} $k$-vectors}\index{termes}{$K$-vectors},
\item elements of $\mathbb{G}_{n \choose n}$ are called {\bf \color{dgreen} pseudo-scalars}\index{termes}{Pseudo-scalars}, and are real multiples of the pseudo-unit $\mathbb{I}_n=\ell_1\cdots\ell_n$.
\end{itemize}
A $k$-vector of the form $\alpha (v_1\wedge \cdots \wedge v_k)$, where $\alpha\in\mathbb{R}$ and each $v_i\in \mathbb{G}_{n \choose 1}$, is called {\bf \color{dgreen} $k$-blade}.\index{termes}{$K$-blade}
Note that in $\mathbb{G}_3$ every bivector is a $2$-blade, while in $\mathbb{G}_4$ the bivector $(\ell_1\ell_2)\ + \ (\ell_3\ell_4)$ is not a $2$-blade.
In order to limit the use of parentheses, we establish the following precedence rules for operations in $\mathbb{G}_n$, listed below with decreasing rank of precedence\footnote{Sometimes we will also use spacing to stress precedence.}:
\begin{enumerate}
\item outer product $\wedge$,
\item product in $\mathbb{G}_n$ among elements of the same grade,
\item scalar product,
\item product between a scalar and a $k$-vector (with $k>0$),
\item sum in $\mathbb{G}_n$.
\end{enumerate}
Thus, for example, $\alpha a \ + \ \beta b$ means $(\alpha a)+(\beta b)$; $\alpha \beta\ x\wedge y$ means $(\alpha \beta)(x\wedge y)$.
However, note also that $\alpha \beta\ x\wedge y = (\alpha x)\wedge (\beta y) =
(\beta x)\wedge (\alpha y)= x\wedge (\alpha\beta y)= \dots$
\section{Inverse of vectors and pseudo-unit in $\mathbb{G}_n$}
In $\mathbb{G}_n$ a vector $v$ is invertible if and only if $v\ne 0$; in this case we have
\[
v^{-1}
=
\frac{1}{|v|^2} v\ ,
\]
\noindent
where
{\bf \color{dgreen} $|v|$}$=\sqrt{v\cdot v}=\sqrt{v^2}$.
In fact $\displaystyle v v^{-1}=\frac{1}{|v|^2} vv=\frac{1}{|v|^2} (v\cdot v + v\wedge v)=1$.
\noindent
In $\mathbb{G}_n$ the pseudo-unit is always invertible, and
\[
(\mathbb{I}_n)^{-1}
=
(\ell_1\cdots\ell_n)^{-1}
=
\ell_n\cdots\ell_1
=
(-1)^{\frac{n(n-1)}{2}}
\mathbb{I}_n \ .
\]
\section{Pseudo-scalars in $\mathbb{G}_2$ and determinants of $2\times 2$ real matrices}\label{sec:determinants}
In $\mathbb{G}_2$ the notions of bivector, $2$-blade and pseudo-scalar coincide.
Let $\{\ell_1,\ \ell_2\}$ be an ordered orthonormal basis in $\mathbb{E}_2\equiv \mathbb{G}_{2\choose 1}$.
For each $x,y\in \mathbb{G}_{2\choose 1}$, we can write $x=\chi_1 \ell_1\ +\ \chi_2\ell_2$ and $y=\zeta_1 \ell_1\ +\ \zeta_2\ell_2$ (where $\chi_i=x\cdot \ell_i$ and $\zeta_i=y\cdot \ell_i$), then
\[
x\wedge y
=
\left(
\chi_1 \zeta_2 - \chi_2\zeta_1
\right)
(\ell_1\wedge\ell_2)
=
\det
\left(
\begin{array}{cc}
\chi_1 & \chi_2 \\
\zeta_1 & \zeta_2
\end{array}
\right)
\ell_1 \ell_2
=
\det
\left(
\begin{array}{cc}
\chi_1 & \chi_2 \\
\zeta_1 & \zeta_2
\end{array}
\right)
\mathbb{I}_2\ ,
\]
and then
\[
(x\wedge y)
(\mathbb{I}_2)^{-1}
=
\det
\left(
\begin{array}{cc}
\chi_1 & \chi_2 \\
\zeta_1 & \zeta_2
\end{array}
\right)
=
(x\wedge y)\cdot \mathbb{I}_2\
.
\]
Notice that the one-dimensional space $\mathbb{G}_{2\choose 2}$ has the following positive definite symmetric bilinear form:
\[
(x\wedge y)\cdot (w\wedge z)
=
(x\cdot w)(y\cdot z)- (x\cdot z)(y\cdot w)\ ,
\]
where $x,y,w,z\in \mathbb{G}_{2\choose 1}$.
It will be useful to note the following property, too.
\begin{proposition}\label{prop:bivector norm ineq}
If $x,y\in \mathbb{E}_2$, then $|x\wedge y| \le |x| \ |y|$.
\end{proposition}
\textit{\textbf{Proof:}} let $\{\ell_1,\ \ell_2\}$ be an ordered orthonormal basis in $\mathbb{E}_2$;
let us write $x=\chi_1 \ell_1\ +\ \chi_2\ell_2$ and $y=\zeta_1 \ell_1\ +\ \zeta_2\ell_2$, so that
\begin{center}
$|x \wedge y|^2 =\left(\chi_1 \zeta_2 - \chi_2\zeta_1 \right)^2$.
\end{center}
The thesis is then achieved by verifying the following equivalence,
\begin{center}
$
\left(\chi_1 \zeta_2 - \chi_2\zeta_1 \right)^2 \le (\chi_1^2 +\chi_2^2)(\zeta_1^2+\zeta_2^2)
\Longleftrightarrow
0 \le (\chi_1\zeta_1 + \chi_2\zeta_2)^2 \ . \ \square
$
\end{center}
In this work the symbol {\bf \color{dgreen} $\bm{\square}$}\index{symboles}{$\square$} indicates the end of a proof.
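Both facts of this section are immediate to confirm numerically; a minimal Python sketch of ours:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
x, y = rng.standard_normal(2), rng.standard_normal(2)
wxy = x[0]*y[1] - x[1]*y[0]                    # (x ^ y) . I_2
assert np.isclose(wxy, np.linalg.det(np.array([x, y])))
assert abs(wxy) <= np.linalg.norm(x)*np.linalg.norm(y)  # |x^y| <= |x||y|
\end{verbatim}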
\section{Exterior product of orthogonal vectors in $\mathbb{E}_2$}
\begin{proposition} If $x,y\in \mathbb{E}_2$ are orthogonal, then
\[
x\wedge y
=
\left\{
\begin{array}{rl}
|x|\ |y| \ \mathbb{I}_2 & \textrm{ if } (x\wedge y)\cdot \mathbb{I}_2 > 0 \\
- |x|\ |y| \ \mathbb{I}_2 & \textrm{ if } (x\wedge y)\cdot \mathbb{I}_2 < 0
\end{array}
\right.
\]
\end{proposition}
\textit{\textbf{Proof:}} let $\{\ell_1,\ \ell_2\}$ be an ordered orthonormal basis in $\mathbb{E}_2$; $x\wedge y = [(x\wedge y)\cdot\mathbb{I}_2] \mathbb{I}_2$; let us write $x=\chi_1 \ell_1\ +\ \chi_2\ell_2$ and $y=\zeta_1 \ell_1\ +\ \zeta_2\ell_2$, so that $x \wedge y = \left( \chi_1 \zeta_2 - \chi_2\zeta_1 \right)\mathbb{I}_2$, then
\begin{center}
$
\displaystyle
|\chi_1 \zeta_2 - \chi_2\zeta_1|^2
=
(\chi_1)^2 (\zeta_2)^2+ (\chi_2)^2(\zeta_1)^2 - \chi_1 \zeta_1 \chi_2\zeta_2 - \chi_1 \zeta_1 \chi_2\zeta_2\ ,
$
\end{center}
\noindent
and orthogonality corresponds to the relation $\chi_1\zeta_1=-\chi_2\zeta_2$, so
\begin{center}
$
\displaystyle
|\chi_1 \zeta_2 - \chi_2\zeta_1|^2
=
(\chi_1)^2 (\zeta_2)^2+ (\chi_2)^2(\zeta_1)^2 + (\chi_1)^2 (\zeta_1)^2 + (\chi_2)^2(\zeta_2)^2
=
|x|^2\ |y|^2 \ \square
$
\end{center}
Two ordered bases\footnote{Or, what is the same thing, two 2-blades $b_1\wedge b_2$, $c_1\wedge c_2$ in $\mathbb{E}_2$.} $\{b_1,b_2\}$ and $\{c_1,c_2\}$ in $\mathbb{E}_2$ are said to be {\bf \color{dgreen} equi-oriented}\index{termes}{Equi-oriented basis in $\mathbb{E}_2$}\index{termes}{Equi-oriented 2-blades in $\mathbb{E}_2$} if
\begin{center}
$(b_1\wedge b_2)\cdot (c_1 \wedge c_2)>0$.
\end{center}
\section{Isometric duality between $\mathbb{G}_{3\choose 1}$ and $\mathbb{G}_{3\choose 2}$: the cross product}\label{sec:cross product}
In $\mathbb{G}_3$ the subspaces $\mathbb{G}_{3\choose 1}$ and $\mathbb{G}_{3\choose 2}$ have the same dimension, and are both Euclidean spaces. Moreover, the correspondence
\[
\mathbb{G}_{3 \choose 1} \ni x \longmapsto {\color{dgreen}\bm{x^*}}=x\mathbb{I}_3 \in \mathbb{G}_{3 \choose 2}\ ,
\]\index{symboles}{$x^*$}
\noindent
is an isometry in $\mathbb{G}_3$, whose inverse is the correspondence
\[
\mathbb{G}_{3 \choose 2}
\ni X \longmapsto {\color{dgreen}\bm{X^{\#}}} = -X \mathbb{I}_3
\in \mathbb{G}_{3 \choose 1} \ .
\]\index{symboles}{$X^{\#}$}
\noindent
This allows one to establish the classical $1:1$ correspondence between a two-dimensional vector subspace of $\mathbb{E}_3$ generated by the two independent vectors
$a,b\in\mathbb{G}_{3\choose 1}$ (i.e. the bivector $a\wedge b \in \mathbb{G}_{3 \choose 2}$) and the vector $(a\wedge b)^\#\in \mathbb{G}_{3 \choose 1}$ that is orthogonal to that subspace; as a matter of fact
\begin{center}
$ \displaystyle
\big[(a\wedge b)^\# \big]\cdot a
=
- \big[(a\wedge b)\mathbb{I}_3 \big]\cdot a
=
-\frac{1}{4}
\big[
(ab-ba)\mathbb{I}_3a + a(ab-ba)\mathbb{I}_3
\big]
=
0\ ,
$
\end{center}
\noindent
as $w\mathbb{I}_3=\mathbb{I}_3w$ for all $w\in \mathbb{G}_{3 \choose 1}$ \Big($\big[(a\wedge b)^\# \big]\cdot b=0$, analogously\Big).
In this sense, the application
\begin{eqnarray*}
\mathbb{G}_{3 \choose 1} \times \mathbb{G}_{3 \choose 1}
& \longrightarrow &
\mathbb{G}_{3 \choose 1} \\
(a,b) & \longmapsto & (a \wedge b)^\# = - (a\wedge b)\mathbb{I}_3\ ,
\end{eqnarray*}
corresponds to the classical cross product $a \times b$.
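A quick numerical confirmation of this correspondence (a sketch of ours; the ordering of the bivector components on $(e_1e_2,\,e_1e_3,\,e_2e_3)$ is an implementation convention):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
a, b = rng.standard_normal(3), rng.standard_normal(3)
c12 = a[0]*b[1] - a[1]*b[0]     # components of a ^ b
c13 = a[0]*b[2] - a[2]*b[0]
c23 = a[1]*b[2] - a[2]*b[1]
# X -> -X I_3 sends e2e3 -> e1, e1e3 -> -e2, e1e2 -> e3:
assert np.allclose(np.array([c23, -c13, c12]), np.cross(a, b))
\end{verbatim}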
\chapter{Geometric Algebra and Geometry}
\section{Point, lines and planes in $\mathbb{E}_n$}\label{sec:point lines}
In an $n$-dimensional real affine Euclidean space~{\bf \color{dgreen} $\mathbb{A}_n$}\index{symboles}{$\mathbb{A}_n$}, if one fixes a point as the origin, the points in $\mathbb{A}_n$ can be identified with vectors in a Euclidean space $\mathbb{E}_n$. With respect to the foregoing identification, we will talk about points, lines and planes in a Euclidean space; that is, vectors in $\mathbb{E}_n$ are considered as points of an affine Euclidean space in which some point (the corresponding origin $\mathbb{O}$) has been fixed\footnote{See, for instance, ...}.
An {\bf \color{dgreen} oriented (linear) direction}\index{termes}{Oriented (linear) direction in $\mathbb{E}_n$} in $\mathbb{E}_n$ is a nonzero vector.
An {\bf \color{dgreen} oriented (planar) direction}\index{termes}{Oriented (planar) direction in $\mathbb{E}_n$} in $\mathbb{E}_n$ is a nonzero $2$-blade in $\mathbb{G}_n$ (the Geometric Algebra associated to $\mathbb{E}_n$).
A {\bf \color{dgreen} line}\index{termes}{Line in $\mathbb{E}_n$} in $\mathbb{E}_n$ (passing through point $x_0\in \mathbb{E}_n$ and parallel to the oriented direction $v\in \mathbb{E}_n$) is the set
\[
\big\{
x\in \mathbb{E}_n \ : \ \exists \tau\in \mathbb{R} \ \ \ x=x_0 + \tau v
\big\}\ .
\]
Considering $\mathbb{E}_n\equiv \mathbb{G}_{n \choose 1}$, the same set can be described through vectors and $2$-blades as follows:
\[
\big\{
x\in \mathbb{G}_{n \choose 1} \ : \ (x-x_0) \wedge v =0
\big\}\ .
\]
Similarly, a {\bf \color{dgreen} plane}\index{termes}{Plane in $\mathbb{E}_n$} in $\mathbb{E}_n$ (passing through point $x_0\in \mathbb{E}_n$ and parallel to the vector space generated by the two independent vectors $u,v\in \mathbb{E}_n$) is the set
\[
\big\{
x\in \mathbb{E}_n \ : \ \exists \mu,\ \nu \in \mathbb{R} \ \ x=x_0 + \mu u + \nu v
\big\}\ ,
\]
\noindent
which can also be described through vectors and $3$-vectors in $\mathbb{G}_n$ as follows:
\[
\big\{
x\in \mathbb{G}_{n \choose 1} \ : \ (x-x_0) \wedge u \wedge v =0
\big\}\ ,
\]
\noindent
characterized by the point $x_0 \in \mathbb{E}_n\equiv \mathbb{G}_{n \choose 1}$ and the oriented (planar) direction $u~\wedge~v~\in~\mathbb{G}_{n \choose 2}$.
\section{Oriented intervals and triangles in $\mathbb{E}_n$}
Geometric Algebra is particularly suited to deal with triangles in Euclidean spaces. In particular, the coordinate-free formalism of Geometric Algebra provides analogies between intervals in $\mathbb{R}$ and triangles in $\mathbb{E}_n$.
Let $\{h_1,\dots , h_n\}$ be an ordered orthonormal basis in the $n$-dimensional Euclidean space $\mathbb{E}_n$.
An {\bf \color{dgreen} interval}\index{termes}{Interval in $\mathbb{E}_n$} in $\mathbb{E}_n$ with {\bf \color{dgreen} extremities}\index{termes}{Extremities of an interval} $a,b\in \mathbb{E}_n$ is the set
\[
\big\{
x\in \mathbb{E}_n \ : \ \ x=\alpha a + \beta b \ , \ \alpha + \beta =1 \ , \ \alpha,\beta \ge 0
\big\}\ .
\]
When we want to attribute an orientation to this set, depending on the order of its extremities, we indicate it with the ordered brackets {\bf \color{dgreen} $\bm{[a,b]}$}\index{symboles}{$[a,b]$} and call it {\bf \color{dgreen} oriented interval}\index{termes}{Oriented interval in $\mathbb{E}_n$} in~$\mathbb{E}_n$. The length of an oriented interval $[a,b]$ is, of course, $|a-b|$.
Analogously, a {\bf \color{dgreen} triangle}\index{termes}{Triangle in $\mathbb{E}_n$} in $\mathbb{E}_n$ with {\bf \color{dgreen} vertices}\index{termes}{Vertices of a triangle} $a,b,c \in \mathbb{E}_n$ is the set
\[
\big\{
x\in \mathbb{E}_n \ : \ \ x=\alpha a + \beta b + \gamma c\ , \ \alpha + \beta +\gamma =1 \ , \ \alpha,\beta,\gamma \ge 0
\big\}\ .
\]
When we want to attribute an orientation to this set, depending on the order of its vertices, we indicate it with the ordered brackets {\bf \color{dgreen} $\bm{[a,b,c]}$}\index{symboles}{$[a,b,c]$} and call it {\bf \color{dgreen} oriented triangle}\index{termes}{Oriented triangle in $\mathbb{E}_n$} in~$\mathbb{E}_n$.
The following oriented triangles
\begin{center}
$
\hfil [b,c,a] \hfil [c,a,b] \hfil
$
\end{center}
\noindent
correspond to the same triangle and have the same orientation as $[a,b,c]$, while the oriented triangles
\begin{center}
$
\hfil [a,c,b] \hfil [c,b,a] \hfil [b,a,c]\hfil
$
\end{center}
\noindent
correspond to the same triangle as $[a,b,c]$, but have opposite orientation with respect to the orientation of $[a,b,c]$.
The oriented intervals $[a,b]$, $[b,c]$ and $[c,a]$ are called the {\bf \color{dgreen} sides}\index{termes}{Sides of an oriented triangle} of the oriented triangle $[a,b,c]$.
A {\bf \color{dgreen} diameter}\index{termes}{Diameter of an oriented triangle} of an oriented triangle is a side of maximal length\footnote{An oriented triangle can have more than one diameter, of course.}.
Given an oriented triangle $[a,b,c]$, we consider the bivector
\begin{center}
$
\displaystyle
{\bf \color{dgreen} \left\langle a;b;c\right\rangle}\index{symboles}{$\left\langle a;b;c\right\rangle$}
=
a\wedge b + b\wedge c + c\wedge a
\in \mathbb{G}_{n \choose 2}\ .
$
\end{center}
Such a bivector is indeed a $2$-blade. In fact, given the oriented triangle $[a,b,c]$, if we define
\begin{center}
$
\hfil
{\bf \color{dgreen} \ell_a}\index{symboles}{$\ell_a$}= c-b
\hfil
{\bf \color{dgreen} \ell_b}\index{symboles}{$\ell_b$}= a-c
\hfil
{\bf \color{dgreen} \ell_c}\index{symboles}{$\ell_c$}= b-a\ ,
\hfil
$
\end{center}
then
\begin{center}
$
\displaystyle
\left\langle a;b;c\right\rangle
=
\ell_a \wedge \ell_b
=
\ell_b \wedge \ell_c
=
\ell_c \wedge \ell_a \ .
$
\end{center}
Notice that
\begin{center}
$
\displaystyle
\left\langle a;b;c\right\rangle
=
\left\langle b;c;a\right\rangle
=
\left\langle c;a;b\right\rangle
=
-\left\langle a;c;b\right\rangle
=
-\left\langle c;b;a\right\rangle
=
-\left\langle b;a;c\right\rangle \ ,
$
\end{center}
that is, $\left\langle a;b;c\right\rangle$ changes sign if we change the orientation of $[a,b,c]$.
The {\bf \color{dgreen} area}\index{termes}{Area of a triangle} of a triangle whose vertices are $a,b,c$ is $\displaystyle \frac{1}{2}\Big|\left\langle a;b;c\right\rangle\Big|$. Moreover, such area can also be expressed using only the scalar product
\begin{center}
$
\displaystyle
\frac{1}{2}
\Big|\left\langle a;b;c\right\rangle\Big|
=
\frac{1}{2}
\sqrt{(\ell_a\wedge \ell_b)\cdot(\ell_a\wedge \ell_b)}
=
\frac{1}{2}
\sqrt{|\ell_a|^2 |\ell_b|^2- (\ell_a\cdot \ell_b)^2}\ .
$
\end{center}
A triangle whose area is zero is called {\bf \color{dgreen} degenerate}\index{termes}{Degenerate triangle}.
\bigskip
Let us now express the bivector $\left\langle a;b;c\right\rangle \in \mathbb{G}_{n \choose 2}$ coordinatewise.
If $\{h_1,\dots , h_n\}$ is an ordered orthonormal basis in the $n$-dimensional Euclidean space $\mathbb{E}_n$, and
\begin{center}
$
\displaystyle
\hfil
a= \sum_{j=1}^n \alpha_j h_j
\hfil
b= \sum_{j=1}^n \beta_j h_j
\hfil
c= \sum_{j=1}^n \gamma_j h_j\ ,
\hfil
$
\end{center}
then
\[
\begin{array}{l}
\phantom{=}
\left\langle a;b;c\right\rangle
=
a\wedge b + b\wedge c + c\wedge a = \\ \\
\displaystyle
=
\sum_{1\le j< k \le n}
\left[
\det
\left(
\begin{array}{cc}
\alpha_j & \alpha_{k} \\
\beta_j & \beta_{k}
\end{array}
\right)
+
\det
\left(
\begin{array}{cc}
\beta_j & \beta_{k} \\
\gamma_j & \gamma_{k}
\end{array}
\right)
+
\det
\left(
\begin{array}{cc}
\gamma_j & \gamma_{k} \\
\alpha_j & \alpha_{k}
\end{array}
\right)
\right]
h_j \wedge h_{k} = \\ \\
\displaystyle
=
(b-a)\wedge (c-a)
=
\sum_{1\le j< k \le n}
\det
\left(
\begin{array}{cc}
\beta_j -\alpha_j & \beta_{k}-\alpha_{k} \\
\gamma_j - \beta_j & \gamma_{k}- \beta_{k}
\end{array}
\right)
h_j \wedge h_{k} \ .
\end{array}
\]
In particular,
\begin{center}
$
\displaystyle
\frac{1}{2}
\Big|\left\langle a;b;c\right\rangle\Big|
=
\frac{1}{2}
\sqrt{
\sum_{1\le j< k \le n}
\left[
\det
\left(
\begin{array}{cc}
\beta_j -\alpha_j & \beta_{k}-\alpha_{k} \\
\gamma_j - \beta_j & \gamma_{k}- \beta_{k}
\end{array}
\right)
\right]^2
}\ ,
$
\end{center}
since $\displaystyle \big\{h_j\wedge h_{k}\big\}_{1\le j< k \le n}$ is an orthonormal basis in $\mathbb{G}_{n \choose 2}$.
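These identities are straightforward to verify numerically; in the following minimal Python sketch (ours), the dimension $n=5$ and the random triangle are illustrative assumptions:
\begin{verbatim}
import numpy as np
from itertools import combinations

def wedge(u, v):
    # components of u ^ v on the orthonormal basis {h_j ^ h_k}, j < k
    n = len(u)
    return np.array([u[j]*v[k] - u[k]*v[j]
                     for j, k in combinations(range(n), 2)])

rng = np.random.default_rng(4)
a, b, c = (rng.standard_normal(5) for _ in range(3))
T = wedge(a, b) + wedge(b, c) + wedge(c, a)      # <a;b;c>
assert np.allclose(T, wedge(b - a, c - a))
la, lb = c - b, a - c
area = 0.5*np.sqrt((la @ la)*(lb @ lb) - (la @ lb)**2)
assert np.isclose(area, 0.5*np.linalg.norm(T))   # area via scalar products
\end{verbatim}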
\section{Reflections in $\mathbb{E}_n$ and mirror vertices in plane triangles}\label{sec:mirror vertices}
Invertible vectors (that is, linear directions) are useful to represent mirror points with respect to those directions.
Let us consider a point $x \in \mathbb{E}_n$ and a direction $v \in \mathbb{E}_n$; if $x$ and $v$ are linearly independent ($x\wedge v \ne 0$), then the mirror image of $x$ with respect to the line passing through $0$ and $v$ is the point
\psset{unit=0.8cm}
\begin{center}
\begin{figure}[!h]
\begin{pspicture}(-4.5,1.9)(7,6.1)
\psdots(1.96,1.82)(5.68,4.1)(3.95,5.71)
\uput[d](1.96,1.82){$0$}
\uput[r](5.68,4.1){$x$}
\uput[u](3.95,5.71){$vxv^{-1}$}
\uput[r](3.2,3.16){$v$}
\psline{->}(1.96,1.82)(3.2,3.16)
\psline{->}(1.96,1.82)(5.68,4.1)
\psline[linestyle=dotted]{->}(1.96,1.82)(3.95,5.71)
\psline[linestyle=dotted](1.24,1.04)(6.2,6.4)
\psline[linestyle=dotted](5.68,4.1)(3.95,5.71)
\end{pspicture}
\end{figure}
\end{center}
\[
\begin{array}{rl}
vxv^{-1}
&
\displaystyle
=
(v\cdot x \ + \ v \wedge x)v^{-1}
=
(v\cdot x \ - \ x \wedge v)v^{-1}
= \\ \\
&
\displaystyle
=
\big[v\cdot x \ - \ (xv- \ x\cdot v)\big] v^{-1}
=
(2 v\cdot x \ - \ xv)v^{-1}
=
2\frac{x\cdot v}{|v|^2}v - x
\end{array}
\]
\noindent
(see~\cite{Lou2001} at page~13 for further details). Note that the foregoing formula works even when $x\wedge v=0$ (that is, $x=\chi v$ for some $\chi \in \mathbb{R}$).
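A two-assertion numerical check of the reflection formula (a sketch of ours):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
x, v = rng.standard_normal(2), rng.standard_normal(2)
mx = 2*(x @ v)/(v @ v)*v - x             # v x v^{-1}
assert np.isclose(np.linalg.norm(mx), np.linalg.norm(x))  # an isometry
s = mx + x                               # = 2 (x.v/|v|^2) v, parallel to v
assert np.isclose(s[0]*v[1] - s[1]*v[0], 0.0)
\end{verbatim}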
Let us consider a nondegenerate oriented triangle $[a,b,c]$ in $\mathbb{E}_2$ (that is, a plane triangle), then each of its (oriented) sides $[a,b]$, $[b,c]$ and $[c,a]$ determines a direction ($\ell_c$, $\ell_a$ and $\ell_b$ respectively). So we can consider the mirror image {\bf \color{dgreen} $\bm{x'}$}\index{symboles}{$x'$} of each vertex $x$ of $[a,b,c]$ with respect to the line passing through its two adjacent vertices. We have that
\[
\begin{array}{l}
\displaystyle
a'
=
c + \ell_a \ell_b \ell_a^{-1} = c + 2 \frac{\ell_a\cdot \ell_b}{|\ell_a|^2}\ell_a -\ell_b
=
-\left[
a
+ 2 b \frac{\ell_b\cdot \ell_a}{|\ell_a|^2}
+ 2 c \frac{\ell_c\cdot \ell_a}{|\ell_a|^2}
\right]\ , \\ \\
\displaystyle
b'
=
a + \ell_b \ell_c \ell_b^{-1} = a + 2 \frac{\ell_b\cdot \ell_c}{|\ell_b|^2}\ell_b -\ell_c
=
-\left[
b
+ 2 c \frac{\ell_c\cdot \ell_b}{|\ell_b|^2}
+ 2 a \frac{\ell_a\cdot \ell_b}{|\ell_b|^2}
\right]\ , \\ \\
\displaystyle
c'
=
b + \ell_c \ell_a \ell_c^{-1} = b + 2 \frac{\ell_c\cdot \ell_a}{|\ell_c|^2}\ell_c -\ell_a
=
-\left[
c
+ 2 a \frac{\ell_a\cdot \ell_c}{|\ell_c|^2}
+ 2 b \frac{\ell_b\cdot \ell_c}{|\ell_c|^2}
\right]\ ,
\end{array}
\]
and we call them {\bf \color{dgreen} mirror vertices}\index{termes}{Mirror vertices of a triangle} of the oriented\footnote{However, they do not depend on the triangle's orientation.} triangle $[a,b,c]$.
\bigskip
Given a nondegenerate oriented plane triangle $[a,b,c]$, it will be useful to define the following point and two directions
$
\displaystyle
\hfill
{\bf \color{dgreen} \bar{a}}\index{symboles}{$\bar{a}$}
=
\frac{1}{2}(a'+a)\ ,
\hfill
{\bf \color{dgreen} u_a}\index{symboles}{$u_a$}
=
\frac{1}{2}(a'-a)\ ,
\hfill
{\bf \color{dgreen} v_a}\index{symboles}{$v_a$}
=
c-\bar{a}\ .
\hfill
$
The above definitions can be generalized to any vertex. Indeed, if $x$ is a vertex of a nondegenerate oriented plane triangle $[x,x_+,x_-]$ one can define
\[
\begin{array}{rl}
\displaystyle
x'
& \displaystyle
=
x_- + \ell_{x} \ell_{x_+} \ell_{x}^{-1} = x_- + 2 \frac{\ell_{x}\cdot \ell_{x_+}}{|\ell_x|^2}\ell_x -\ell_{x_+}
= \\
& \displaystyle
=
-\left[
x
+ 2 (x_+) \frac{\ell_{x_+}\cdot \ell_x}{|\ell_x|^2}
+ 2 (x_-) \frac{\ell_{x_-}\cdot \ell_x}{|\ell_x|^2}
\right]\ , \\ \\
\displaystyle
\bar{x}
& \displaystyle
=
\frac{1}{2}(x'+x)
\ \ ,\ \
u_x
=
\frac{1}{2}(x'-x)
\ \ , \ \
v_x
=
x_- - \bar{x}\ ,
\end{array}
\]
where $\ell_x= (x_- -x_+)$, $\ell_{x_+}=(x- x_-)$ and $\ell_{x_-}=(x_+ - x)$.
So we have that
\begin{center}
$
\displaystyle
\hfill
x = \bar{x} - u_x
\hfill
x_+ = \bar{x} + (v_x -\ell_x)
\hfill
x_- = \bar{x} + v_x
\hfill
x'= \bar{x}+ u_x\ ,
\hfill
$
\end{center}
and we can state the following proposition.
\begin{proposition}
Let $x$ be a vertex of the nondegenerate oriented plane triangle $[x,x_+,x_-]$, then
\begin{enumerate}
\item
$
2 u_x \wedge \ell_x
=
2 \left\langle x;x_+;x_-\right\rangle
=
\left\langle x;x_+;x_-\right\rangle
-
\left\langle x';x_+;x_-\right\rangle\ ;
$
\item $u_x \cdot \ell_x=0$ \big(so that $u_x\wedge \ell_x= \pm |u_x| \ |\ell_x|\ \mathbb{I}_2$\big)\ ;
\item $v_x \wedge \ell_x=0$ \big(so there exists $\tau\in\mathbb{R}$ such that $v_x=\tau\ell_x$ \big)\ .
\end{enumerate}
\end{proposition}
\textbf{\textit{Proof of 1.}} The second equality follows because the mirror image $x'$ reverses the orientation of the triangle, so that $\left\langle x';x_+;x_-\right\rangle=-\left\langle x;x_+;x_-\right\rangle$; the first follows from $u_x=\frac{1}{2}(x'-x)$, noticing that the reflection $\ell_x \ell_{x_+}\ell_x^{-1}$ flips the component of $\ell_{x_+}$ orthogonal to $\ell_x$, whence $\big(\ell_x \ell_{x_+}\ell_x^{-1}\big)\wedge \ell_x = \ell_x\wedge \ell_{x_+}=\left\langle x;x_+;x_-\right\rangle$.

\textbf{\textit{Proof of 2.}} We have that $\displaystyle u_x=\frac{1}{2}\big(x' -x\big) = \frac{1}{2}\big(-\ell_{x_+} + \ell_x \ell_{x_+}\ell_x^{-1}\big)$, and
\begin{center}
$
\begin{array}{l}
\displaystyle
\phantom{=}
u_x \cdot \ell_x
=
\frac{1}{2}
\big(u_x\ell_x + \ell_x u_x\big)
=
\frac{1}{4}
\big(
-\ell_{x_+}\ell_x +\ell_x \ell_{x_+} \ell_x^{-1} \ell_x
-\ell_x \ell_{x_+}+ \ell_x\ell_x \ell_{x_+}\ell_x^{-1}
\big)
=0\ .
\end{array}
$
\end{center}
\textbf{\textit{Proof of 3.}} We have that $\displaystyle v_x= x_- - \frac{1}{2}\big(x'+x \big)= -\frac{1}{2}\big(\ell_{x_+}+ \ell_x \ell_{x_+}\ell_x^{-1}\big)$, and
\begin{center}
$
\begin{array}{l}
\displaystyle
\phantom{=}
\kern-10pt
v_x \wedge \ell_x
=
\frac{1}{2}
\big(v_x\ell_x - \ell_x v_x\big)
=
\frac{1}{4}
\big(
-\ell_{x_+}\ell_x -\ell_x \ell_{x_+} \ell_x^{-1} \ell_x
+\ell_x \ell_{x_+}+ \ell_x\ell_x \ell_{x_+}\ell_x^{-1}
\big)
=0 .\ \square
\end{array}
$
\end{center}
A mirror vertex $x'$ of the oriented triangle $[x,x_+,x_-]$ is said to be {\bf \color{dgreen} balanced}\index{termes}{Balanced mirror vertex of a triangle} if $\bar{x}\in [x_+,x_-]$. In particular, if $[x_+,x_-]$ is a diameter of that triangle, then $x'$ is balanced. This implies that every triangle has at least one balanced mirror vertex.
Owing to the foregoing proposition, balanced mirror vertices can be characterized through lengths.
\begin{proposition}\label{pro:mirror vertex}
A mirror vertex $x'$ of the oriented triangle $[x,x_+,x_-]$ is balanced if and only if $|\ell_x|= |\ell_x-v_x|+ |v_x|$.
\end{proposition}
In the two following figures, the mirror vertices $a'$ and $b'$ are balanced, while the mirror vertex $c'$ is not.
\psset{unit=0.85cm}
\begin{figure}[!h]
\
\hspace{50pt}
\begin{pspicture}(-3,0)(1,6)
\psdots(0,0)(1,3)(-1,3)
\pspolygon(0,0)(1,3)(-1,3)
\psdots(0,6)
\psdots(-2.6,1.8)
\psdots(0,3)
\psdots(-0.8,2.4)
\uput[d](0,0){$a$}
\uput[r](1,3){$b$}
\uput[l](-1,3){$c$}
\uput[u](0,6){$a'$}
\uput[dl](-2.6,1.8){$b'$}
\uput[ur](0,3){$\bar{a}$}
\uput[dl](-0.7,2.4){$\bar{b}$}
\psline[linestyle=dashed](0,0)(0,6)
\psline[linestyle=dashed](1,3)(-2.6,1.8)
\pspolygon[linestyle=dotted](1,3)(0,6)(-1,3)
\pspolygon[linestyle=dotted](-1,3)(-2.6,1.8)(0,0)
\end{pspicture}
\hspace{60pt}
\begin{pspicture}(-3,-3)(3,1)
\psdots(0,0)(3,1)(-3,1)
\pspolygon(0,0)(3,1)(-3,1)
\psdots(-1.8,-2.6)
\psdots(-2.4,-0.8)
\uput[d](0,0){$a$}
\uput[r](3,1){$b$}
\uput[l](-3,1){$c$}
\uput[d](-1.8,-2.6){$c'$}
\uput[l](-2.4,-0.8){$\bar{c}$}
\psline[linestyle=dashed](-3,1)(-1.8,-2.6)
\psline[linestyle=dotted](0,0)(-1.8,-2.6)(3,1)
\end{pspicture}
\end{figure}
\vfill
\chapter{Smooth curves}\label{cha:smooth curves}
In the following sections we describe some classical approximation algorithms for smooth curves in $\mathbb{E}_n$ because of their analogies with our approximation Algorithms~\ref{eq:AlgI} and~\ref{eq:AlgII} for smooth surfaces in $\mathbb{E}_n$.
\section{Approximation through inscribed mean vectors}
Let $I \subseteq \mathbb{R}$ be an open interval. Let $\mathbb{E}_n$ be an $n$-dimensional Euclidean space, and $\{h_1,\dots ,h_n\}$ an orthonormal basis in $\mathbb{E}_n$.
A continuous function $c:I\to \mathbb{E}_n$ will be simply called a {\bf \color{dgreen} curve}.\index{termes}{Curve in $\mathbb{E}_n$}
In this chapter we define $n$ real functions $\gamma_j:I\to \mathbb{R}$ as the $n$ components $\gamma_j=c\cdot h_j$; so, we have that $\forall \tau \in I$ $\displaystyle c(\tau)=\sum_{j=1}^n \gamma_j(\tau) h_j$.
A vector $v\in \mathbb{E}_n$ is said to be {\bf \color{dgreen} inscribed}\index{termes}{Inscribed vector} in the curve $c$ if there exist $\alpha,\beta\in I$, such that $\alpha\ne \beta$ and $v=c(\beta)-c(\alpha)$; in this case the vector $\displaystyle \frac{1}{\beta-\alpha}[c(\beta)-c(\alpha)]$ is called {\bf \color{dgreen} inscribed mean vector}\index{termes}{Inscribed mean vector} in $c$.
A curve $c:I\to \mathbb{E}_n$ is said to be {\bf \color{dgreen} smooth}\index{termes}{Smooth curve} if
\begin{itemize}
\item each $\gamma_j$ has continuous second derivative $\ddot{\gamma}_j$ on $I$, and
\item there exists $\delta >0$ such that $\displaystyle \sup_{\tau\in I}|\ddot{\gamma}_j(\tau)|\le \delta <\infty$ ($\delta$ does not depend on $j=1,\dots, n$).
\end{itemize}
For a smooth curve the following estimate holds for each $\tau, \tau+\epsilon \in I$
\begin{equation}\label{eq:estimate curve}
\big|
\gamma_j(\tau+\epsilon) -\gamma_j(\tau)-\dot{\gamma}_j(\tau) \epsilon
\big|
\le
\frac{\delta}{2} \epsilon^2\ ,
\end{equation}
\noindent
where $\dot{\gamma_j}$ is, of course, the first derivative of $\gamma_j$. Following the Landau notation, we can rewrite the relation~(\ref{eq:estimate curve}) as follows
\[
\gamma_j(\tau+\epsilon) -\gamma_j(\tau)= \dot{\gamma}_j(\tau) \epsilon + O(\epsilon^2)\ .
\]
If $c$ is a smooth curve, we denote $\displaystyle {\bf \color{dgreen} \dot{c}(\tau)}\index{symboles}{$\dot{c}(\tau)$}=\sum_{j=1}^n \dot{\gamma_j}(\tau)\ h_j$.
\begin{proposition}\label{prop: approxim dot c}
If $c:I\to \mathbb{E}_n$ is a smooth curve and $\chi \in I$, then the vector $\dot{c}(\chi)$ is the limit of the inscribed mean vectors $\displaystyle \frac{1}{\beta-\alpha}\big[c(\beta)-c(\alpha)\big]$ as $(\alpha,\beta) \to (\chi,\chi)$ in $\mathbb{R}^2$, that is
\[
\lim_{
\begin{array}{c}
\scriptstyle (\alpha,\beta) \to (\chi,\chi) \\
\scriptstyle \alpha \ne \beta
\end{array}
}
\frac{1}{\beta-\alpha}\big[c(\beta)-c(\alpha)\big]
=
\dot{c}(\chi) \ .
\]
\end{proposition}
The proof of Proposition~\ref{prop: approxim dot c} is routine. Nonetheless we prove it, just because the proof we provide here is a $1$-dimensional version of the proof of Theorem~\ref{thm:II}.
\bigskip
\textit{\textbf{Proof of Proposition~\ref{prop: approxim dot c}:}} let $\alpha \ne \beta$, then
\[
\begin{array}{l}
\displaystyle
\frac{\gamma_j(\beta)-\gamma_j(\alpha)}{\beta - \alpha}
=
\frac{\gamma_j(\beta)-\gamma_j\left(\frac{\alpha+\beta}{2}\right) -\left[\gamma_j(\alpha)-\gamma_j\left(\frac{\alpha+\beta}{2}\right)\right]}{\beta - \alpha}
= \\ \\
\displaystyle
=
\frac{\dot{\gamma_j}\left(\frac{\alpha+\beta}{2}\right)\left(\beta - \frac{\alpha+\beta}{2}\right) + O\left(\left(\beta - \frac{\alpha+\beta}{2}\right)^2\right)}{\beta-\alpha} -\frac{\dot{\gamma_j}\left(\frac{\alpha+\beta}{2}\right)\left(\alpha - \frac{\alpha + \beta}{2}\right)+O\left(\left(\alpha - \frac{\alpha + \beta}{2}\right)^2\right)}{\beta - \alpha}= \\ \\
\displaystyle
=
\dot{\gamma_j}\left(\frac{\alpha+\beta}{2}\right) + O(\beta -\alpha)\ .
\end{array}
\]
So,
\[
\begin{array}{l}
\displaystyle
\left| \frac{1}{\beta -\alpha}\big[c(\beta)-c(\alpha)\big] - \dot{c}(\chi)\right|
=
\left| \sum_{j=1}^n \left[\dot{\gamma_j}\left(\frac{\alpha+\beta}{2}\right)
- \dot{\gamma_j}(\chi)
+ O(\beta -\alpha)\right] h_j\right| \\ \\
\displaystyle
\le
O(\beta -\alpha) +
\sum_{j=1}^n \left| \dot{\gamma_j}\left(\frac{\alpha+\beta}{2}\right)
- \dot{\gamma_j}(\chi) \right|
\end{array}
\]
that goes to $0$ as $(\alpha,\beta)\to(\chi,\chi)$, because each $\gamma_j$ is $C^2(I)$.\ $\square$
\section{Geometric interpretation of the direction $\dot{c}(\chi)$}
If $c$ is a smooth curve and $\dot{c}(\chi) \ne 0$, then $c$ is locally injective and thus, if $\alpha$ and $\beta$ are sufficiently close to $\chi \in I$, every inscribed mean vector $\displaystyle \frac{1}{\beta -\alpha}\big[c(\beta)-c(\alpha)\big]$ is a direction, and Proposition~\ref{prop: approxim dot c} has the following geometric interpretation:
the direction $\dot{c}(\chi)$ is the limit of the inscribed mean directions $\displaystyle \frac{1}{\beta -\alpha}\big[c(\beta)-c(\alpha)\big]$ as $(\alpha,\beta)\to (\chi,\chi)$.
It is well known that if $c$ is smooth, but $\dot{c}(\chi)= 0$, then the foregoing interpretation may be false, even if $c$ is locally injective. A classical example is the cusp in~$\mathbb{E}_2$, $c(\tau)= \tau^2 h_1 + \tau^3 h_2$, and $\chi=0$.
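A quick computation makes this explicit for the cusp: for $\alpha=-\epsilon$, $\beta=\epsilon$ and for $\alpha=0$, $\beta=\epsilon$ we get, respectively,
\[
\frac{1}{2\epsilon}\big[c(\epsilon)-c(-\epsilon)\big]
=
\epsilon^2 h_2
\ \ , \ \
\frac{1}{\epsilon}\big[c(\epsilon)-c(0)\big]
=
\epsilon h_1 + \epsilon^2 h_2\ ;
\]
both inscribed mean vectors converge to $\dot{c}(0)=0$, but their normalized directions converge to $h_2$ and to $h_1$ respectively, so the inscribed mean directions have no limit.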
\section{Estimates of the length of a smooth curve}
If the curve $c:I\to \mathbb{E}_n$ is smooth and $[\alpha,\beta]\subset I$, then the following algorithm
\begin{equation}\label{eq: Algorithm 0}
\sum_{i=0}^k\big|c(\beta_i)-c(\alpha_i)\big|
\end{equation}
can estimate the integral\footnote{That we call {\bf \color{dgreen} length}\index{termes}{Length of a smooth curve} of the curve $c:[\alpha,\beta]\to\mathbb{E}_n$, when $c$ is injective on $[\alpha,\beta]$.} $\displaystyle \int_\alpha^\beta \big|\dot{c}(\tau)\big|\ d\tau$, where
\begin{itemize}
\item $\alpha_0=\alpha$,
\item $\alpha_i < \alpha_{i+1}=\beta_i$ (for $i=0,\dots, k-1$), and
\item $\beta_{k-1}< \beta_k =\beta$;
\end{itemize}
that is, $\big\{[\alpha_i,\beta_i]\big\}_{i=0}^k=\Pi$ is a partition of $[\alpha,\beta]$ with contiguous nonoverlapping intervals; in this sense Algorithm~(\ref{eq: Algorithm 0}) can also be written
\[
\sum_{[\alpha_i,\beta_i]\in \Pi}\big|c(\beta_i)-c(\alpha_i)\big|\ .
\]
More precisely, Algorithm~(\ref{eq: Algorithm 0}) converges to $\displaystyle \int_\alpha^\beta \big|\dot{c}(\tau)\big|\ d\tau$ when the maximal length of intervals in the partition $\Pi$, $\displaystyle \max_{[\alpha_i,\beta_i]\in\Pi}|\beta_i-\alpha_i|$, goes to zero, as we can see from the following elementary estimates:
\begin{eqnarray*}
& &
\left|
\sum_{[\alpha_i,\beta_i]\in \Pi}\big|c(\beta_i)-c(\alpha_i)\big|
-
\int_\alpha^\beta \big|\dot{c}(\tau)\big|\ d\tau
\right|
= \\
& = &
\left|
\sum_{[\alpha_i,\beta_i]\in \Pi}
\left[
\big|c(\beta_i)-c(\alpha_i)\big|
-
\int_{\alpha_i}^{\beta_i} \big|\dot{c}(\tau)\big|\ d\tau
\right]
\right|
\\
& \le &
\sum_{[\alpha_i,\beta_i]\in \Pi}
\left|
\big|c(\beta_i)-c(\alpha_i)\big|
-
\int_{\alpha_i}^{\beta_i} \big|\dot{c}(\tau)\big|\ d\tau
\right|
= \\
\end{eqnarray*}
\begin{eqnarray*}
& = &
\sum_{[\alpha_i,\beta_i]\in \Pi}
\left|
\int_{\alpha_i}^{\beta_i}
\left[
\frac{\big|c(\beta_i)-c(\alpha_i)\big|}{|\beta_i-\alpha_i|}
-
\big|\dot{c}(\tau)\big|
\right]\ d\tau
\right|
\\
& \le &
\sum_{[\alpha_i,\beta_i]\in \Pi}
\int_{\alpha_i}^{\beta_i}
\left|
\frac{\big|c(\beta_i)-c(\alpha_i)\big|}{|\beta_i-\alpha_i|}
-
\big|\dot{c}(\tau)\big|
\right|\ d\tau = (*)\ .
\end{eqnarray*}
As $\Big||v|-|w|\Big|\le |v-w|$ for each $v,w\in \mathbb{E}_n$, we have that
\begin{eqnarray*}
(*) & \le &
\sum_{[\alpha_i,\beta_i]\in \Pi}
\int_{\alpha_i}^{\beta_i}
\left|
\frac{1}{\beta_i-\alpha_i}
\big[c(\beta_i)-c(\alpha_i)\big]
-
\dot{c}(\tau)
\right|\ d\tau
= \\
& = &
\sum_{[\alpha_i,\beta_i]\in \Pi}
\int_{\alpha_i}^{\beta_i}
\left|
\frac{1}{\beta_i-\alpha_i}
\big[c(\beta_i)-c(\alpha_i)-\dot{c}(\alpha_i)(\beta_i-\alpha_i)\big]
+\big[
\dot{c}(\alpha_i)
-
\dot{c}(\tau)
\big]
\right|\ d\tau
= \\
& \le &
\sum_{[\alpha_i,\beta_i]\in \Pi}
\int_{\alpha_i}^{\beta_i}
\left[
\left|
\frac{1}{\beta_i-\alpha_i}
\big[c(\beta_i)-c(\alpha_i)-\dot{c}(\alpha_i)(\beta_i-\alpha_i)\big]
\right|
+
\left|
\dot{c}(\alpha_i)
-
\dot{c}(\tau)
\right|
\right]\ d\tau \ .
\end{eqnarray*}
Then, owing to estimate~(\ref{eq:estimate curve}), we have that
\[
\begin{array}{l}
\displaystyle
\Big|
c(\beta_i)-c(\alpha_i)-\dot{c}(\alpha_i)(\beta_i-\alpha_i)
\Big|
= \\ \\
\displaystyle
=
\left|
\sum_{j=1}^n
\big[\gamma_j(\beta_i)-\gamma_j(\alpha_i)-\dot{\gamma_j}(\alpha_i)(\beta_i-\alpha_i)\big]h_j
\right|
\\ \\
\displaystyle
\le
\sum_{j=1}^n
\Big|
\gamma_j(\beta_i)-\gamma_j(\alpha_i)-\dot{\gamma_j}(\alpha_i)(\beta_i-\alpha_i)
\Big|
\le n \delta (\beta_i-\alpha_i)^2\ .
\end{array}
\]
So we can conclude that
\[
\begin{array}{l}
\displaystyle
\left|
\sum_{[\alpha_i,\beta_i]\in \Pi}\big|c(\beta_i)-c(\alpha_i)\big|
-
\int_\alpha^\beta \big|\dot{c}(\tau)\big|\ d\tau
\right|
\\ \\
\displaystyle
\le
n\delta
\sum_{[\alpha_i,\beta_i]\in \Pi}
(\beta_i-\alpha_i)^2
+
\sum_{[\alpha_i,\beta_i]\in \Pi}
\int_{\alpha_i}^{\beta_i}
\left|
\dot{c}(\alpha_i)
-
\dot{c}(\tau)
\right|\ d\tau \ .
\end{array}
\]
The right-hand side goes to zero as $\displaystyle \max_{[\alpha_i,\beta_i]\in \Pi} |\beta_i-\alpha_i|\longrightarrow 0$, since $c$ is smooth.
\bigskip
Here we provided, as we did before for the proof of Proposition~\ref{prop: approxim dot c}, many details of well-known estimates, just because of their analogies with the estimates for the area of a smooth surface\footnote{See section~\ref{sec:area}.}.
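As an elementary illustration (a worked example, not needed in the sequel), let us apply Algorithm~(\ref{eq: Algorithm 0}) to the half circle
\[
c(\tau)=\cos(\tau)\, h_1 + \sin(\tau)\, h_2
\ \ , \ \
\tau\in[0,\pi]\ ,
\]
with the uniform partition $\alpha_i=i\pi/k$, $\beta_i=(i+1)\pi/k$ (for $i=0,\dots,k-1$): each chord has length $\big|c(\beta_i)-c(\alpha_i)\big|=2\sin\frac{\pi}{2k}$, so
\[
\sum_{i=0}^{k-1}\big|c(\beta_i)-c(\alpha_i)\big|
=
2k\sin\frac{\pi}{2k}
\longrightarrow
\pi
=
\int_0^\pi \big|\dot{c}(\tau)\big|\ d\tau\ ,
\]
as $k\to\infty$.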
\chapter{Smooth surfaces}
\section{Surfaces, inscribed balanced mean bivectors}\label{sec:surfaces}
Let $\Omega$ be an open set in $\mathbb{E}_2$.
A continuous function $s:\Omega \to \mathbb{E}_n$ will be simply called a {\bf \color{dgreen} surface}.\index{termes}{Surface in $\mathbb{E}_n$}
If $\{h_1,\dots , h_n\}$ is an ordered basis in the $n$-dimensional Euclidean space $\mathbb{E}_n$ and $s$ is a surface, we define $n$ real functions $\sigma_j:\Omega \to \mathbb{R}$ as the components $\sigma_j = s \cdot h_j$; so, we have that $\displaystyle\forall x \in \Omega\ \ s(x)=\sum_{j=1}^n \sigma_j(x) h_j$, and $s$ is a surface if and only if each component $\sigma_j$ is continuous.
If $\{\ell_1, \ell_2\}$ is an ordered orthonormal basis in $\mathbb{E}_2$, we can indicate each $x\in\mathbb{E}_2$ as
$\chi_1 \ell_1 + \chi_2 \ell_2 $, where $\chi_1=x\cdot \ell_1$ and $\chi_2=x\cdot \ell_2$.
\begin{exam}\label{exa:cylinder}
A circular right cylinder of radius $\rho$ is a surface. Indeed, it corresponds to the following function $s:\mathbb{E}_2\to \mathbb{E}_3$,
\begin{center}
$
s(x)
=
s(\chi_1 \ell_1 + \chi_2 \ell_2)=\rho \cos(\chi_1)h_1 + \rho \sin(\chi_1)h_2 + \chi_2h_3
$,
\end{center}
\noindent
where $\{\ell_1,\ell_2\}$ is an ordered orthonormal basis in $\mathbb{E}_2$, $\{h_1,h_2,h_3\}$ is an ordered orthonormal basis in $\mathbb{E}_3$.
\end{exam}
A bivector $V\in\mathbb{G}_{n \choose 2}$ is said to be {\bf \color{dgreen} inscribed}\index{termes}{Inscribed bivector in a surface} in a surface $s$ if there exists a nondegenerate ordered plane triangle $[a,b,c]$ contained in $\Omega$, such that
\begin{center}
$
\displaystyle
V
=
\big\langle
s(a);s(b);s(c)
\big\rangle
=
s(a)\wedge s(b)
\ +\
s(b)\wedge s(c)
\ +\
s(c)\wedge s(a)\ .
$
\end{center}
In this case, the ordered triangle $\big[s(a),s(b),s(c)\big]$ is said to be {\bf \color{dgreen} inscribed}\index{termes}{Inscribed ordered triangle on a surface} on the surface $s$, and the following bivector
\begin{center}
$
\displaystyle
\frac{1}{
\left\langle
a;b;c
\right\rangle
\cdot \mathbb{I}_2
}
\big\langle
s(a);s(b);s(c)
\big\rangle
$
\end{center}
\noindent
is called {\bf \color{dgreen} inscribed mean bivector}\index{termes}{Inscribed mean bivector in a surface} in $s$.
\noindent
A plane triangle $\Delta$ of vertices $\{a,b,c\}$ is said to be {\bf \color{dgreen} balanced}\index{termes}{Balanced triangle in a open set of $\mathbb{E}_2$} in the open set~$\Omega$ if
\begin{itemize}
\item $\Delta \subset \Omega$,
\item there exists a balanced mirror vertex $x'\in \{a',b',c'\}$ of $\Delta$ such that the plane triangle of vertices\footnote{See Section~\ref{sec:mirror vertices} for the notations.} $\{x',x_+,x_-\}$ is contained in $\Omega$.
\end{itemize}
An inscribed triangle $\big[s(a),s(b),s(c)\big]$ is said to be {\bf \color{dgreen} balanced}\index{termes}{Inscribed balanced ordered triangle on a surface} on $s$ if the plane triangle $[a,b,c]$ is balanced in $\Omega$. Note that, since $\Omega$ is an open set, every sufficiently small\footnote{That is, with sufficiently small diameter.} inscribed triangle $\big[s(a),s(b),s(c)\big]$ is balanced on $s$.
If $\big[s(a),s(b),s(c)\big]$ is an inscribed balanced ordered triangle on $s$, where $a'$ is a balanced mirror vertex of $[a,b,c]$ (with respect to vertex $a$) such that $[a',b,c]$ is in $\Omega$, then the bivector
\begin{center}
$
\displaystyle
\big[s(a')-s(a)\big]
\wedge
\big[s(c)-s(b)\big]
$
\end{center}
\noindent
is called {\bf \color{dgreen} inscribed balanced bivector}\index{termes}{Inscribed balanced bivector in a surface} in $s$;
in this case the bivector
\begin{center}
$
\displaystyle
\frac{1}{2\left\langle a;b;c\right\rangle\cdot \mathbb{I}_2}
\big[s(a')-s(a)\big]
\wedge
\big[s(c)-s(b)\big]
$
\end{center}
\noindent
is called {\bf \color{dgreen} inscribed balanced mean bivector}\index{termes}{Inscribed balanced mean bivector in a surface} in $s$.
An inscribed balanced bivector can be written in different ways
\[
\begin{array}{l}
\displaystyle
\big[s(a')-s(a)\big]
\wedge
\big[s(c)-s(b)\big]
= \\ \\
=
s(a)\wedge s(b)
\ + \
s(b)\wedge s(a')
\ + \
s(a')\wedge s(c)
\ + \
s(c)\wedge s(a)
= \\ \\
\displaystyle
=
\big\langle
s(a);s(b);s(c)
\big\rangle
+
\big\langle
s(c);s(b);s(a')
\big\rangle
=
\big\langle
s(a);s(b);s(c)
\big\rangle
-
\big\langle
s(a');s(b);s(c)
\big\rangle\ .
\end{array}
\]
It will also be useful to write an inscribed balanced bivector coordinatewise with respect to an ordered orthonormal basis $\{h_1, \dots , h_n\}$ in~$\mathbb{E}_n$; that is, if $\displaystyle\forall x \in \Omega$ $\displaystyle s(x)=\sum_{j=1}^n \sigma_j(x) h_j$, then
\[
\begin{array}{l}
\displaystyle
\phantom{=}
\big[s(a')-s(a)\big]
\wedge
\big[s(c)-s(b)\big]
= \\ \\
\displaystyle
=
\left\{\sum_{j=1}^n \big[\sigma_j(a')-\sigma_j(a)\big] h_j\right\}
\wedge
\left\{\sum_{k=1}^n \big[\sigma_k(c)-\sigma_k(b)\big] h_k\right\}
= \\ \\
\displaystyle
=
\sum_{1\le j<k\le n}
\Big\{
\big[\sigma_j{\scriptstyle(a')} \kern-2pt - \kern-2pt \sigma_j{\scriptstyle(a)}\big]
\big[\sigma_{k}{\scriptstyle(c)} \kern-2pt - \kern-2pt \sigma_{k}{\scriptstyle(b)}\big]
\kern-2pt - \kern-2pt
\big[\sigma_j{\scriptstyle(c)} \kern-2pt - \kern-2pt \sigma_j{\scriptstyle(b)}\big]
\big[\sigma_{k}{\scriptstyle(a')} \kern-2pt - \kern-2pt \sigma_{k}{\scriptstyle(a)}\big]
\Big\}
h_j\wedge h_{k} \ .
\end{array}
\]
Let us now define $n \choose 2$ transformations ${\color{dgreen} \bm{s_{j,k}}}:\Omega \to \mathbb{E}_2$,\index{symboles}{$s_{j,k}$}
\begin{center}
$
\displaystyle
s_{j,k}(x)
=
\sigma_j(x) \ell_1 \ + \ \sigma_{k}(x) \ell_2\ ;
$
\end{center}
then we can rewrite each component
\[
\begin{array}{l}
\displaystyle
\big[\sigma_j(a')-\sigma_j(a)\big]\big[\sigma_{k}(c)-\sigma_{k}(b)\big]
-
\big[\sigma_j(c)-\sigma_j(b)\big]\big[\sigma_{k}(a')-\sigma_{k}(a)\big]
= \\ \\
\displaystyle
=
\Big\{
\big[s_{j,k}(a')-s_{j,k}(a)\big]
\wedge
\big[s_{j,k}(c)-s_{j,k}(b)\big]
\Big\}
\cdot
\mathbb{I}_2\ ,
\end{array}
\]
\noindent
so that an inscribed balanced bivector can be written as
\[
\big[s(a')-s(a)\big]
\wedge
\big[s(c)-s(b)\big]
=
\kern-10pt
\sum_{1\le j<k\le n}
\kern-5pt
\Big\{
\big\{
[s_{j,k}(a')-s_{j,k}(a)]
\wedge
[s_{j,k}(c)-s_{j,k}(b)]
\big\}
\cdot
\mathbb{I}_2
\Big\}
h_j\wedge h_{k} \ .
\]
An inscribed balanced mean bivector can be written in other ways, too; in particular \big(as $\left\langle a;b;c\right\rangle = \left\langle c;b;a'\right\rangle$\big) it corresponds to the following mean of inscribed mean bivectors:
\begin{center}
$
\begin{array}{l}
\displaystyle
\frac{1}{2\left\langle a;b;c\right\rangle\cdot \mathbb{I}_2}
\big[s(a')-s(a)\big]
\wedge
\big[s(c)-s(b)\big]
= \\ \\
\displaystyle
=
\frac{1}{2}
\left\{
\frac{1}{\left\langle a;b;c\right\rangle\cdot \mathbb{I}_2}
\big\langle s(a);s(b);s(c)\big\rangle
+
\frac{1}{\left\langle c;b;a'\right\rangle\cdot \mathbb{I}_2}
\big\langle s(c);s(b);s(a')\big\rangle
\right\} \ .
\end{array}
$
\end{center}
In the case of a surface in space ($s:\Omega \to \mathbb{E}_3$) the foregoing mean can be seen as the mean of the vectors orthogonal to the planes secant to the surface at the points $s(a), s(b), s(c)$ and $s(c), s(b), s(a')$ respectively. However, the most interesting representation of an inscribed balanced mean bivector on $s$ is the following:
\begin{center}
$
\begin{array}{l}
\displaystyle
\frac{1}{2\left\langle a;b;c\right\rangle\cdot \mathbb{I}_2}
\big[s(a')-s(a)\big]
\wedge
\big[s(c)-s(b)\big]
= \\ \\
\displaystyle
=
\frac{1}{\big[ \left\langle a;b;c\right\rangle - \left\langle a';b;c\right\rangle\big]\cdot \mathbb{I}_2}
\Big[
\big\langle s(a);s(b);s(c)\big\rangle
-
\big\langle s(a');s(b);s(c)\big\rangle
\Big] \ ;
\end{array}
$
\end{center}
\noindent
indeed, the previous expression plays for surfaces (in Theorem~\ref{thm:II}) the same role as the classical expression (for the inscribed mean vector)
\begin{center}
$
\displaystyle
\frac{1}{\beta -\alpha}
\big[
c(\beta) - c(\alpha)
\big]
$
\end{center}
\noindent
plays for curves (in Proposition~\ref{prop: approxim dot c}).
\section{Notations III}
Let $\Omega\subseteq \mathbb{E}_2$ be open. Let us give some differential notations for a sufficiently regular function $\psi:\Omega \to \mathbb{R}$. Let $w \in \mathbb{E}_2$
\[
\begin{array}{rcll}
{\bf \color{dgreen} \partial_w\psi(x)}\index{symboles}{$\partial_w\psi(x)$}
& = &
\displaystyle \lim_{\epsilon\to 0} \frac{1}{\epsilon} \big[\psi(x+\epsilon w)-\psi(x)\big] \in \mathbb{R} & \\ \\
{\bf \color{dgreen} \nabla \psi(x)}\index{symboles}{$\nabla \psi(x)$}
& = &
\partial_{\ell_1}\psi(x) \ell_1
+
\partial_{\ell_2}\psi(x) \ell_2
\in \mathbb{E}_2
&
\textrm{ is the gradient vector,}
\\ \\
{\bf \color{dgreen} \bm{H}_\psi(x)}\index{symboles}{$\bm{H}_\psi(x)$}
& = &
\left(
\begin{array}{cc}
\partial_{\ell_1}\partial_{\ell_1}\psi(x)
&
\partial_{\ell_2}\partial_{\ell_1}\psi(x) \\ \\
\partial_{\ell_1}\partial_{\ell_2}\psi(x)
&
\partial_{\ell_2}\partial_{\ell_2}\psi(x)
\end{array}
\right) \in \mathbb{R}^{2 \times 2}
&
\textrm{ is the Hessian matrix,}
\end{array}
\]
\noindent
whose real eigenvalues are indicated as {\bf \color{dgreen} $\lambda_{i,\psi(x)}$}\index{symboles}{$\lambda_{i,\psi(x)}$}
(with $i=1,2$), corresponding to the ordered orthonormal basis of eigenvectors $\{\ell_{1,\psi(x)}, \ell_{2,\psi(x)}\}$ equioriented with a fixed ordered orthonormal basis $\{\ell_1, \ell_2\}$ in $\mathbb{E}_2\equiv \mathbb{G}_{2 \choose 1}$, that is $\ell_{1,\psi(x)}\wedge \ell_{2,\psi(x)} = \ell_1 \wedge \ell_2= \mathbb{I}_2$.
\section{Smooth functions, transformations and surfaces}
Let $\Omega\subseteq \mathbb{E}_2$ be open, a (real) {\bf \color{dgreen} function} $\psi:\Omega \to \mathbb{R}$ is said to be {\bf \color{dgreen} smooth}\index{termes}{Smooth (real) function} if
\begin{itemize}
\item $\psi$ has second-order derivatives that are continuous on $\Omega$ \big(that is, $\psi$ is $C^2(\Omega)$\big),
\item $\displaystyle \sup_{x\in \Omega} \max\big\{ |\lambda_{1,\psi(x)}|, |\lambda_{2,\psi(x)}| \big\}< +\infty$.
\end{itemize}
For a smooth function $\psi$ the following estimate\footnote{That estimate is analogous to estimate~(\ref{eq:estimate curve}), and can be obtained using the Taylor formula with integral remainder.} holds for each $x,y\in \Omega$ such that the interval $[x,y]$ is contained in $\Omega$
\begin{equation}\label{eq:estimate function}
\big|
\psi(y)-\psi(x) - \nabla\psi(x) \cdot (y-x)
\big|
\le
\lambda
|y-x|^2\ ,
\end{equation}
where $\displaystyle \lambda = \sup_{z\in [x,y]} \max\big\{ |\lambda_{1,\psi(z)}|, |\lambda_{2,\psi(z)}| \big\}$.
Following the Landau notation, we can rewrite relation~(\ref{eq:estimate function}) as follows\footnote{Here, for the sake of simplicity, we can suppose that the set $\Omega$ is also convex.}:
\begin{equation}\label{eq:estimate function Landau}
\forall x,y\in \Omega
\hspace{30pt}
\psi(y)-\psi(x) - \nabla\psi(x) \cdot (y-x)
=
O(|y-x|^2)\ .
\end{equation}
In particular, if $z,w\in \mathbb{E}_2$ are such that the interval $[z-w,z+w]$ is contained in $\Omega$, we can also write
\[
\psi(z+w)-\psi(z-w) = 2 \nabla\psi(z) \cdot w \ +\ O(|w|^2)\ .
\]
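Indeed, applying~(\ref{eq:estimate function Landau}) at the points $z\pm w$ (a one-line check),
\[
\psi(z+w)-\psi(z) = \nabla\psi(z)\cdot w + O(|w|^2)
\ \ , \ \
\psi(z-w)-\psi(z) = -\nabla\psi(z)\cdot w + O(|w|^2)\ ,
\]
and subtracting the two relations gives the claim.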
Let $\Omega\subseteq \mathbb{E}_2$ be open, let $\{\ell_1,\ell_2\}$ be an ordered orthonormal basis in $\mathbb{E}_2$, and $\{h_1,\dots h_n\}$ be an ordered orthonormal basis in $\mathbb{E}_n$,
\begin{itemize}
\item a {\bf \color{dgreen} transformation} $f:\Omega \to \mathbb{E}_2$ is called {\bf \color{dgreen} smooth}\index{termes}{Smooth plane transformation} if each of its components $\phi_i=f\cdot \ell_i$ is a smooth (real) function ($i=1,2$);
\item a {\bf \color{dgreen} surface} $s:\Omega \to \mathbb{E}_n$ is called {\bf \color{dgreen} smooth}\index{termes}{Smooth surface} if each of its components $\sigma_j=s\cdot h_j$ is a smooth (real) function ($j=1,\dots , n$).
\end{itemize}
\begin{exam}
The cylinder of example~\ref{exa:cylinder} is a smooth surface. As a matter of fact, if $\{\ell_1,\ell_2\}$ is an ordered orthonormal basis in $\mathbb{E}_2$ and $x=\chi_1\ell_1 + \chi_2\ell_2$, then
\begin{center}
$
\hfil
\sigma_1(x)= \rho\cos\chi_1\ ,
\hfil
\sigma_2(x)= \rho\sin\chi_1\ ,
\hfil
\sigma_3(x)= \chi_2\ ,
\hfil
$
\end{center}
and $\displaystyle \sup_{x\in \mathbb{E}_2} \max_{1\le j \le 3} \max_{1\le i \le 2} |\lambda_{i,\sigma_j(x)}|=\rho$.
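Indeed, the only nonvanishing second-order derivatives are $\partial_{\ell_1}\partial_{\ell_1}\sigma_1(x)=-\rho\cos\chi_1$ and $\partial_{\ell_1}\partial_{\ell_1}\sigma_2(x)=-\rho\sin\chi_1$, so that
\[
\bm{H}_{\sigma_1}(x)
=
\left(
\begin{array}{cc}
-\rho\cos\chi_1 & 0 \\
0 & 0
\end{array}
\right)
\ , \quad
\bm{H}_{\sigma_2}(x)
=
\left(
\begin{array}{cc}
-\rho\sin\chi_1 & 0 \\
0 & 0
\end{array}
\right)
\ , \quad
\bm{H}_{\sigma_3}(x)
=
0\ ,
\]
whose eigenvalues have moduli $|\rho\cos\chi_1|$, $|\rho\sin\chi_1|$ and $0$, never exceeding $\rho$.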
\end{exam}
\chapter{Main results}
\section{The Jacobian determinant}
\begin{theorem}\label{thm:I}
Let $\{\ell_1,\ell_2\}$ be an ordered orthonormal basis in the Euclidean space $\mathbb{E}_2$;
let $\Omega \subseteq \mathbb{E}_2$ be open; let $f:\Omega \to \mathbb{E}_2$ be a smooth plane transformation, then $\forall x \in \Omega$ we have that
\begin{equation}\label{eq:Alg0}
\lim_{
\begin{array}{c}
\scriptstyle (a,b,c)\to (x,x,x) \\
\scriptstyle a\wedge b +b\wedge c + c\wedge a \ne 0
\end{array}
}
\kern-20pt
\frac{\Big\{\big[f(d_{(a,b,c)})-f(a)\big]\wedge \big[f(c)-f(b)\big]\Big\}\cdot \mathbb{I}_2}{2\big(a\wedge b +b\wedge c + c\wedge a\big)\cdot \mathbb{I}_2}
=
\big(\nabla\phi_1(x)\wedge \nabla\phi_2(x)\big)\cdot \mathbb{I}_2
\end{equation}
\noindent
where
\begin{enumerate}
\item $\displaystyle d_{(a,b,c)}
=
a'
=
-\left[
a
+ 2 b \frac{\ell_b\cdot \ell_a}{|\ell_a|^2}
+ 2 c \frac{\ell_c\cdot \ell_a}{|\ell_a|^2}
\right]$ is the mirror vertex of vertex $a$, in the oriented plane triangle~$[a,b,c]$,
\item $a'$ is balanced,
\end{enumerate}
\noindent
and
\begin{itemize}
\item $\mathbb{I}_2= \ell_1 \ell_2=\ell_1\wedge \ell_2$ is the pseudo-unit in the Geometric Algebra $\mathbb{G}_2$ associated to the oriented Euclidean space $\mathbb{E}_2$,
\item $\wedge$ is the outer product in $\mathbb{G}_2$, and $\cdot$ is the scalar product in $\mathbb{G}_2$,
\item $\phi_i = f\cdot \ell_i$ (with $i=1,2$),
\item the limit $(a,b,c)\to (x,x,x)$ is taken in the product topology of $\mathbb{E}_2 \times \mathbb{E}_2 \times \mathbb{E}_2$.
\end{itemize}
\end{theorem}
\begin{rem}
Coordinatewise, the transformation $f$ can be seen as the real transformation $\bm{\Phi}:\tilde{\Omega}\to \mathbb{R}^2$, where
\begin{itemize}
\item $\mathbb{R}^2 \supseteq \tilde{\Omega} \ni (\chi_1,\chi_2) \Longleftrightarrow \chi_1 \ell_1 \ +\ \chi_2 \ell_2 \in \Omega \subseteq \mathbb{E}_2$;
\item $
\displaystyle
\bm{\Phi}(\chi_1,\chi_2)
=
\big(f(\chi_1 \ell_1 \ +\ \chi_2 \ell_2)\cdot \ell_1\ , \ f(\chi_1 \ell_1 \ +\ \chi_2 \ell_2)\cdot \ell_2\big)$
$\displaystyle
\phantom{ \bm{\Phi}(\chi_1,\chi_2)}
=
\big(
\phi_1(\chi_1 \ell_1 \ +\ \chi_2 \ell_2)\ , \ \phi_2(\chi_1 \ell_1 \ +\ \chi_2 \ell_2)
\big)
$;
\end{itemize}
then, we have that
\begin{center}
$
\displaystyle
\left(
\begin{array}{cc}
\partial_{\ell_1}\phi_1(x) & \partial_{\ell_2}\phi_1(x) \\ \\
\partial_{\ell_1}\phi_2(x) & \partial_{\ell_2}\phi_2(x)
\end{array}
\right)
=
\frac{\partial (\phi_1,\phi_2)}{\partial(\chi_1,\chi_2)}
$
\end{center}
is the Jacobian matrix of transformation $\bm{\Phi}$, whose determinant\footnote{See Section~\ref{sec:determinants}.} is
\[
\big(\nabla\phi_1(x)\wedge \nabla\phi_2(x)\big)\cdot \mathbb{I}_2
=
\det
\frac{\partial (\phi_1,\phi_2)}{\partial(\chi_1,\chi_2)}
\ .
\]
\end{rem}
\begin{rem}
It is possible to obtain an analogous result for transformations within $k$-di\-men\-sional Euclidean spaces. However, such a result will be treated in other works, where we will apply it to construct $k$-dimensional Stieltjes-like measures in $\mathbb{E}_n$.
\end{rem}
\begin{rem}
The approximating ratio in the foregoing theorem can be rewritten as a kind of incremental ratio for the function $f:\Omega\subseteq\mathbb{E}_2\to\mathbb{E}_2$
\[
\frac{\Big\{\big[f(a')-f(a)\big]\wedge \big[f(c)-f(b)\big]\Big\}\cdot \mathbb{I}_2}{2\big(a\wedge b +b\wedge c + c\wedge a\big)\cdot \mathbb{I}_2}
=
\frac
{\Big[
\big\langle f(a);f(b);f(c)\big\rangle
-
\big\langle f(a');f(b);f(c)\big\rangle
\Big]\kern-2pt \cdot \kern-2pt \mathbb{I}_2
}
{\big[ \left\langle a;b;c\right\rangle - \left\langle a';b;c\right\rangle\big]\kern-2pt \cdot \kern-2pt \mathbb{I}_2}
\]
where $\left\langle x;y;z\right\rangle=x\wedge y + y\wedge z + z\wedge x$. This strengthens the analogy between the derivative of a function of one real variable and the Jacobian determinant of a transformation of two real variables.
\end{rem}
\begin{rem}
The coordinatewise writing of the previous approximating ratio is more elaborate and lengthy. Let us define $\tilde{\phi}_i(\chi_1,\chi_2)=\phi_i(\chi_1\ell_1+\chi_2\ell_2)$, then $\tilde{\phi}_i:\tilde{\Omega}\to\mathbb{R}$. If we denote $a=\alpha_1\ell_1+\alpha_2\ell_2$, $a'=\alpha'_1\ell_1+\alpha'_2\ell_2$, $b=\beta_1\ell_1+\beta_2\ell_2$, $c=\gamma_1\ell_1+\gamma_2\ell_2$, the thesis of Theorem~\ref{thm:I} becomes
\[
\frac
{\det
\left(
\begin{array}{cc}
\tilde{\phi}_1(\alpha'_1,\alpha'_2)-\tilde{\phi}_1(\alpha_1,\alpha_2)
&
\tilde{\phi}_2(\alpha'_1,\alpha'_2)-\tilde{\phi}_2(\alpha_1,\alpha_2) \\
\tilde{\phi}_1(\gamma_1,\gamma_2)-\tilde{\phi}_1(\beta_1,\beta_2)
&
\tilde{\phi}_2(\gamma_1,\gamma_2)-\tilde{\phi}_2(\beta_1,\beta_2)
\end{array}
\right)}
{2
\det
\left(
\begin{array}{cc}
\beta_1-\alpha_1 & \beta_2-\alpha_2 \\
\gamma_1-\beta_1 & \gamma_2 -\beta_2
\end{array}
\right)}
\longrightarrow
\det
\frac{\partial (\tilde{\phi}_1,\tilde{\phi}_2)}{\partial(\chi_1,\chi_2)}
\]
as $(\alpha_1,\alpha_2,\beta_1,\beta_2,\gamma_1,\gamma_2)\longrightarrow(\chi_1,\chi_2,\chi_1,\chi_2,\chi_1,\chi_2)$ in $\mathbb{R}^6$, where
\[
\alpha'_i
=
-
\left[
\alpha_i
+
2\beta_i
\frac{\scriptstyle (\alpha_1 -\gamma_1)(\gamma_1-\beta_1)+(\alpha_2 -\gamma_2)(\gamma_2-\beta_2)}{\scriptstyle (\gamma_1-\beta_1)^2+(\gamma_2-\beta_2)^2}
+2\gamma_i
\frac{\scriptstyle (\beta_1 -\alpha_1)(\gamma_1-\beta_1)+(\beta_2 -\alpha_2)(\gamma_2-\beta_2)}{\scriptstyle (\gamma_1-\beta_1)^2+(\gamma_2-\beta_2)^2}
\right]\ .
\]
The comparison between the above Cartesian expressions and the Geometric Algebraic ones should suggest why we prefer the Clifford coordinate-free language.
\end{rem}
\textit{\textbf{Proof of Theorem~\ref{thm:I}.}}
As $\mathbb{E}_2$ is locally convex, every sufficiently small triangle is balanced in $\Omega$. Let us write the inscribed balanced bivector with respect to the ordered basis $\{\ell_1, \ell_2\}$,
\begin{center}
$
\begin{array}{l}
\displaystyle
\phantom{=}
\big[f(a')-f(a)\big]\wedge \big[f(c)-f(b)\big]= \\
\displaystyle
=
\Big\{
\big[\phi_1{\scriptstyle(a')}-\phi_1{\scriptstyle(a)}\big]\ell_1
+ \kern-3pt
\big[\phi_2{\scriptstyle(a')}-\phi_2{\scriptstyle(a)}\big]\ell_2
\Big\}
\wedge
\Big\{
\big[\phi_1{\scriptstyle(c)}-\phi_1{\scriptstyle(b)}\big]\ell_1
+ \kern-3pt
\big[\phi_2{\scriptstyle(c)}-\phi_2{\scriptstyle(b)}\big]\ell_2
\Big\}
= \\
\displaystyle
=
\Big\{
\big[\phi_1{\scriptstyle(a')}-\phi_1{\scriptstyle(a)}\big]
\big[\phi_2{\scriptstyle(c)}-\phi_2{\scriptstyle(b)}\big]
-
\big[\phi_2{\scriptstyle(a')}-\phi_2{\scriptstyle(a)}\big]
\big[\phi_1{\scriptstyle(c)}-\phi_1{\scriptstyle(b)}\big]
\Big\}
\mathbb{I}_2
\end{array}
$
\end{center}
Putting $\bar{a}=\frac{1}{2}(a'+a)$, $u_a=\frac{1}{2}(a'-a)$, $v_a=c-\bar{a}$ and $\ell_a=c-b$, we have that
$a'=\bar{a}+u_a$, $a=\bar{a}-u_a$, $c=\bar{a}+v_a$ and $b=\bar{a}-(\ell_a-v_a)$.
As $\mathbb{E}_2$ is locally convex, there exists in $\Omega$ a convex open neighborhood of $x$ where we can use estimate~(\ref{eq:estimate function Landau})
\begin{eqnarray*}
\phi_i(a')-\phi_i(a)
& = &
\phi_i(\bar{a}+u_a)-\phi_i(\bar{a}-u_a)
=
\nabla\phi_i(\bar{a})\cdot (2u_a)\ + \ O\big(|u_a|^2\big) \\ \\
\phi_i(c)-\phi_i(b)
& = &
\phi_i(\bar{a}+v_a)-\phi_i\big(\bar{a}-(\ell_a-v_a)\big)
= \\
& = &
\phi_i(\bar{a}+v_a)-\phi_i(\bar{a})-\Big[\phi_i\big(\bar{a}-(\ell_a-v_a)\big)-\phi_i(\bar{a})\Big]
= \\
& = &
\nabla\phi_i(\bar{a})\cdot v_a \ + \ O\big(|v_a|^2\big) +\nabla\phi_i(\bar{a})\cdot (\ell_a-v_a)\ +\ O\big(|\ell_a-v_a|^2\big)
= \\
& = &
\nabla\phi_i(\bar{a})\cdot \ell_a \ + \ O\big(|v_a|^2\big) \ +\ O\big(|\ell_a-v_a|^2\big) \ .
\end{eqnarray*}
Then,
\[
\begin{array}{l}
\displaystyle
\phantom{=}
\Big\{\big[f(a')-f(a)\big]\wedge \big[f(c)-f(b)\big]\Big\}\cdot \mathbb{I}_2=
\\ \\
\displaystyle
=
\big[\phi_1{\scriptstyle(a')}-\phi_1{\scriptstyle(a)}\big]
\big[\phi_2{\scriptstyle(c)}-\phi_2{\scriptstyle(b)}\big]
-
\big[\phi_2{\scriptstyle(a')}-\phi_2{\scriptstyle(a)}\big]
\big[\phi_1{\scriptstyle(c)}-\phi_1{\scriptstyle(b)}\big]
= \\ \\
\displaystyle
=
\Big[\nabla\phi_1(\bar{a})\cdot (2u_a)\ + \ O\big(|u_a|^2\big)\Big]
\Big[\nabla\phi_2(\bar{a})\cdot \ell_a \ + \ O\big(|v_a|^2\big) \ +\ O\big(|\ell_a-v_a|^2\big)\Big] + \\
\displaystyle
\phantom{=}
-
\Big[\nabla\phi_2(\bar{a})\cdot (2u_a)\ + \ O\big(|u_a|^2\big)\Big]
\Big[ \nabla\phi_1(\bar{a})\cdot \ell_a \ + \ O\big(|v_a|^2\big) \ +\ O\big(|\ell_a-v_a|^2\big)\Big]= \\ \\
\displaystyle
=
\Big[\nabla\phi_1(\bar{a})\cdot (2u_a)\Big]
\Big[\nabla\phi_2(\bar{a})\cdot \ell_a\Big]
-
\Big[\nabla\phi_2(\bar{a})\cdot (2u_a)\Big]
\Big[ \nabla\phi_1(\bar{a})\cdot \ell_a\Big] +\\
\phantom{=}
+
\Big[\nabla\phi_1(\bar{a})\cdot (2u_a)\Big]
\Big[O\big(|v_a|^2\big) \ +\ O\big(|\ell_a-v_a|^2\big)\Big]
+O\big(|u_a|^2\big)\Big[\nabla\phi_2(\bar{a})\cdot \ell_a\Big] + \\
\phantom{=}
+
\Big[\nabla\phi_2(\bar{a})\cdot (2u_a)\Big]
\Big[O\big(|v_a|^2\big) \ +\ O\big(|\ell_a-v_a|^2\big)\Big]
+O\big(|u_a|^2\big)\Big[\nabla\phi_1(\bar{a})\cdot \ell_a\Big] + \\
\phantom{=}
+
O\big(|u_a|^2\big)\Big[O\big(|v_a|^2\big) \ +\ O\big(|\ell_a-v_a|^2\big)\Big] \ .
\end{array}
\]
The first difference is a scalar product between bivectors in $\mathbb{G}_2$ (i.e. pseudo-scalars)
\begin{eqnarray*}
& &
\Big[\nabla\phi_1(\bar{a})\cdot (2u_a)\Big]
\Big[\nabla\phi_2(\bar{a})\cdot \ell_a\Big]
-
\Big[\nabla\phi_2(\bar{a})\cdot (2u_a)\Big]
\Big[ \nabla\phi_1(\bar{a})\cdot \ell_a\Big]
= \\
& = &
\Big(\nabla\phi_1(\bar{a})\wedge \nabla\phi_2(\bar{a})\Big)
\cdot
\big((2u_a)\wedge \ell_a\big)\ .
\end{eqnarray*}
Since $a'$ is a mirror vertex, $u_a$ is orthogonal to $\ell_a$, and we have that
\[
\phantom{=}
\Big(\nabla\phi_1(\bar{a})\wedge \nabla\phi_2(\bar{a})\Big)
\cdot
\big((2u_a)\wedge \ell_a\big)
= \pm 2 |u_a| |\ell_a|
\Big(\nabla\phi_1(\bar{a})\wedge \nabla\phi_2(\bar{a})\Big)
\cdot
\mathbb{I}_2\ ,
\]
\noindent
the sign depending on whether or not the ordered basis $\{ u_a, \ell_a \}$ is equi-oriented with the ordered orthonormal basis $\{\ell_1, \ell_2\}$.
Now, we simply observe that
\[
2\big(a\wedge b +b\wedge c + c\wedge a\big)
=
(2u_a)\wedge \ell_a
=
\pm 2 |u_a| |\ell_a|\ \mathbb{I}_2\ ,
\]
and so we can write
\begin{equation}\label{eq:estimate transf}
\begin{array}{l}
\displaystyle
\phantom{=}
\left|
\frac{\Big\{\big[f(a')-f(a)\big]\wedge \big[f(c)-f(b)\big]\Big\}\cdot \mathbb{I}_2}{2\big(a\wedge b +b\wedge c + c\wedge a\big)\cdot \mathbb{I}_2}
-
\Big(\nabla\phi_1(x)\wedge \nabla\phi_2(x)\Big)
\cdot
\mathbb{I}_2
\right|
\\ \\
\displaystyle
\le
\left|
\Big(\nabla\phi_1(\bar{a})\wedge \nabla\phi_2(\bar{a})\Big)
\cdot
\mathbb{I}_2
-
\Big(\nabla\phi_1(x)\wedge \nabla\phi_2(x)\Big)
\cdot
\mathbb{I}_2
\right| + \\
\displaystyle
\phantom{\le}
+
\big|\nabla\phi_1(\bar{a})\big|
\left|
\frac{O\big(|v_a|^2\big)}{|\ell_a|} + \frac{O\big(|\ell_a-v_a|^2\big)}{|\ell_a|}
\right|
+
\big|\nabla\phi_2(\bar{a})\big|\ \Big|O\big(|u_a|\big)\Big| + \\
\displaystyle
\phantom{\le\ }
+
\big|\nabla\phi_2(\bar{a})\big|
\left|
\frac{O\big(|v_a|^2\big)}{|\ell_a|} + \frac{O\big(|\ell_a-v_a|^2\big)}{|\ell_a|}
\right|
+
\big|\nabla\phi_1(\bar{a})\big| \ \Big|O\big(|u_a|\big)\Big| + \\
\displaystyle
\phantom{\le \ \ \ }
+
O\big(|u_a|\big)\
\left|
\frac{O\big(|v_a|^2\big)}{|\ell_a|} + \frac{O\big(|\ell_a-v_a|^2\big)}{|\ell_a|}
\right|
\end{array}
\end{equation}
By the Cauchy-Schwarz inequality, the triangle inequality and Proposition~\ref{prop:bivector norm ineq}, we have that $\forall t,w,y,z\in \mathbb{E}_2$
\begin{center}
$
\begin{array}{l}
\phantom{=}
\big|(t\wedge w)\cdot \mathbb{I}_2 -(y\wedge z)\cdot \mathbb{I}_2\big|
=
|t\wedge w -y\wedge z|
=
|t\wedge w - t\wedge z + t\wedge z - y\wedge z| \\
\le
\big|t\wedge (w-z)\big| + \big|(t-y) \wedge z\big|
\le
|t| |w-z| + |t-y||z|
\end{array}
$
\end{center}
so
\begin{center}
$
\begin{array}{l}
\displaystyle
\phantom{\le}
\left|
\Big(\nabla\phi_1(\bar{a})\wedge \nabla\phi_2(\bar{a})\Big)
\cdot
\mathbb{I}_2
-
\Big(\nabla\phi_1(x)\wedge \nabla\phi_2(x)\Big)
\cdot
\mathbb{I}_2
\right| \\
\displaystyle
\le
\Big|\nabla\phi_1(\bar{a})\Big| \Big|\nabla\phi_2(\bar{a})-\nabla\phi_2(x)\Big|
+
\Big|\nabla\phi_1(\bar{a})-\nabla\phi_1(x)\Big| \Big|\nabla\phi_2(x)\Big|
\end{array}
$
\end{center}
Moreover, the mirror vertex $a'$ is balanced, so (by Proposition~\ref{pro:mirror vertex}) we have that
\begin{equation}\label{eq:balanced estimate}
\max\left\{\frac{|v_a|}{|\ell_a|}, \frac{|\ell_a-v_a|}{|\ell_a|}\right\}\le 1\ .
\end{equation}
As $f$ is smooth, the theorem is proved; in fact,
\[
\bar{a}\longrightarrow x
\ \ \ \textrm{ and } \ \ \
|u_a|,\ |v_a|,\ |\ell_a-v_a|, |\ell_a| \longrightarrow 0\ ,
\]
as $(a,b,c)\longrightarrow (x,x,x)\ .\ \square$
\bigskip
\begin{rem}\label{rem:relaxing hypothesis}
As we have warned in the introduction\footnote{See Section~\ref{sec:warnings}.}, some hypotheses in the foregoing theorem can be weakened. For instance, we can relax hypothesis~($\textit{2.}$) by imposing that
\begin{center}
$
\displaystyle
\max\left\{\frac{|v_a|}{|\ell_a|}, \frac{|\ell_a-v_a|}{|\ell_a|}\right\}
$
\end{center}
\noindent
be simply bounded, instead of supposing that $a'$ is balanced \big(which is equivalent to~(\ref{eq:balanced estimate})\big). We will show in Section~\ref{sec:not balanced bivector} that even when $d=d_{(a,b,c)}$ is not a mirror vertex of~$[a,b,c]$, it is possible to choose suitable oriented triangles $[d,c,b]$, adjacent to the converging triangles $[a,b,c]$, such that~(\ref{eq:Alg0}) and~(\ref{eq:AlgI}) hold.
\end{rem}
\section{The tangent bivector}
The following theorem is to smooth surfaces as Proposition~\ref{prop: approxim dot c} is to smooth curves.
\begin{theorem}\label{thm:II}
Let $\{\ell_1,\ell_2\}$ be an ordered orthonormal basis in the Euclidean space $\mathbb{E}_2$; let $\{h_1,\dots , h_n\}$ be an ordered orthonormal basis in the $n$-dimensional Euclidean space $\mathbb{E}_n$;
let $\Omega \subseteq \mathbb{E}_2$ be open, and $s:\Omega \to \mathbb{E}_n$ be a smooth surface,
then $\forall x \in \Omega$ we have that
\begin{equation}\label{eq:AlgI}
\begin{array}{c}
\displaystyle
\ \kern-10pt
\lim_{
\begin{array}{c}
\scriptstyle (a,b,c)\to (x,x,x) \\
\scriptstyle a\wedge b +b\wedge c + c\wedge a \ne 0
\end{array}
}
\kern-12pt
\frac{1}{\big[ \big\langle a;b;c\big\rangle - \big\langle d_{(a,b,c)};b;c\big\rangle\big] \kern-3pt \cdot \kern-1pt \mathbb{I}_2}
\Big[
\kern-3pt
\big\langle s(a);s(b);s(c)\big\rangle
-
\big\langle s(d_{(a,b,c)});s(b);s(c)\big\rangle
\kern-3pt
\Big]
\kern-3pt = \\
\displaystyle
=
\partial_{\ell_1} s(x) \wedge \partial_{\ell_2} s(x) \ ,
\end{array}
\end{equation}
where
\begin{itemize}
\item $\displaystyle d_{(a,b,c)}
=a'=
-\left[
a
+ 2 b \frac{\ell_b\cdot \ell_a}{|\ell_a|^2}
+ 2 c \frac{\ell_c\cdot \ell_a}{|\ell_a|^2}
\right]$ is a balanced mirror vertex of the oriented plane triangle~$[a,b,c]$,
\item $\mathbb{I}_2= \ell_1 \ell_2=\ell_1\wedge \ell_2$ is the pseudo-unit in the Geometric Algebra $\mathbb{G}_2$ associated to the oriented Euclidean space $\mathbb{E}_2$,
\item $\wedge$ is the outer product in $\mathbb{G}_k$, and $\cdot$ is the scalar product in $\mathbb{G}_k$ (with $k=2$ or $k=n$),
\item $
\displaystyle
{\bf \color{dgreen} \partial_{\ell_i}s(x)}\index{symboles}{$\partial_{\ell_i}s(x)$}
=
\sum_{j=1}^n \partial_{\ell_i} \sigma_j(x)\ h_j
$ (with $i=1,2$), where $\sigma_j = s \cdot h_j$ (with $j=1,\dots , n$),
\item the limit $(a,b,c)\to (x,x,x)$ is taken in the product topology of $\mathbb{E}_2 \times \mathbb{E}_2 \times \mathbb{E}_2$.
\end{itemize}
\end{theorem}
\textit{\textbf{Proof of Theorem~\ref{thm:II}.}}
The proof is just a coordinatewise application of Theorem~\ref{thm:I}. Let us rewrite
\begin{eqnarray}
& &
\phantom{=}
\displaystyle
\big\langle s(a);s(b);s(c)\big\rangle
-
\big\langle s(a');s(b);s(c)\big\rangle
=
\big[s(a')-s(a)\big]
\wedge
\big[s(c)-s(b)\big]
= \nonumber \\
& &
\displaystyle
=
\left\{
\sum_{j=1}^n
\big[
\sigma_j(a')-\sigma_j(a)
\big]
h_j
\right\}
\wedge
\left\{
\sum_{k=1}^n
\big[
\sigma_k(c)-\sigma_k(b)
\big]
h_k
\right\}
= \nonumber \\
& &
\displaystyle
=
\kern-7pt
\sum_{1\le j< k \le n}
\kern-7pt
\Big\{
\kern-3pt
\big[
\sigma_j(a')-\sigma_j(a)
\big]
\kern-2pt
\big[
\sigma_k(c)-\sigma_k(b)
\big]
-
\big[
\sigma_j(c)-\sigma_j(b)
\big]
\kern-2pt
\big[
\sigma_k(a')-\sigma_k(a)
\big]
\kern-3pt
\Big\}
h_j
\wedge
h_k
= \nonumber \\
& &
\displaystyle
=
\sum_{1\le j< k \le n}
\left\{
\Big\{
\big[
s_{j,k}(a')-s_{j,k}(a)
\big]
\wedge
\big[
s_{j,k}(c)-s_{j,k}(b)
\big]
\Big\}
\cdot \mathbb{I}_2
\right\}
h_j
\wedge
h_k \label{eq:balanced bivector}
\end{eqnarray}
where $s_{j,k}=\sigma_j \ell_1 + \sigma_k \ell_2: \Omega \to \mathbb{E}_2$ are $n\choose 2$ smooth transformations. By Theorem~\ref{thm:I} we have that
\[
\lim_{
\begin{array}{c}
\scriptstyle (a,b,c)\to (x,x,x) \\
\scriptstyle a\wedge b +b\wedge c + c\wedge a \ne 0
\end{array}
}
\kern-10pt
\frac{\Big\{\big[s_{j,k}(a')-s_{j,k}(a)\big]\wedge \big[s_{j,k}(c)-s_{j,k}(b)\big]\Big\}\cdot \mathbb{I}_2}{2\big(a\wedge b +b\wedge c + c\wedge a\big)\cdot \mathbb{I}_2}
=
\big(\nabla\sigma_j(x)\wedge \nabla\sigma_k(x)\big)\cdot \mathbb{I}_2\ .
\]
Then, the thesis follows by observing that
\begin{eqnarray}
& &
\displaystyle
\phantom{=}
\partial_{\ell_1} s(x) \wedge \partial_{\ell_2} s(x)
=
\left(
\sum_{j=1}^n
\partial_{\ell_1}\sigma_j(x) h_j
\right)
\wedge
\left(
\sum_{k=1}^n
\partial_{\ell_2}\sigma_k(x) h_k
\right)
= \nonumber \\
& &
\displaystyle
=
\sum_{1\le j< k \le n}
\big[
\partial_{\ell_1}\sigma_j(x)
\partial_{\ell_2}\sigma_k(x)
-
\partial_{\ell_2}\sigma_j(x)
\partial_{\ell_1}\sigma_k(x)
\big]
h_j
\wedge
h_k
= \nonumber \\
& &
\displaystyle
=
\sum_{1\le j< k \le n}
\Big[
\big(\nabla\sigma_j(x)\wedge \nabla\sigma_k(x)\big)\cdot \mathbb{I}_2
\Big]
h_j
\wedge
h_k \ . \ \square \label{eq:tangent bivector}
\end{eqnarray}
\section{The area}\label{sec:area}
\begin{theorem}\label{thm:III}
Let $P$ be a compact polygon contained in the open set $\Omega\subseteq \mathbb{E}_2$; let $s:\Omega\to\mathbb{E}_n$ be a smooth surface; let $\Pi$ be a partition of $P$ into a finite family of non-overlapping\footnote{Two sets are {\bf \color{dgreen} non-overlapping}\index{termes}{Non-overlapping sets} if the interiors of those two sets have empty intersection.} nondegenerate oriented triangles $[a_i,b_i,c_i]$ all balanced in $\Omega$;
\noindent
let~$\displaystyle||\Pi||=\max_{[a_i,b_i,c_i]\in \Pi}\big\{|\ell_{a_i}|,|\ell_{b_i}|, |\ell_{c_i}|\big\}$; then,
\begin{equation}\label{eq:AlgII}
\lim_{||\Pi||\to 0}
\frac{1}{4}
\sum_{[a_i,b_i,c_i]\in\Pi}
\Big|
\big[s(a_i')-s(a_i)\big]
\wedge
\big[s(c_i)-s(b_i)\big]
\Big|
=
\int_P
\big|\partial_{\ell_1} s(x) \wedge \partial_{\ell_2} s(x) \big|dx\ ,
\end{equation}
\noindent
where $a'_i =
-\left[
a_i
+ 2 b_i \frac{\ell_{b_i}\cdot \ell_{a_i}}{|\ell_{a_i}|^2}
+ 2 c_i \frac{\ell_{c_i}\cdot \ell_{a_i}}{|\ell_{a_i}|^2}
\right]$
is a balanced mirror vertex for $[a_i,b_i,c_i]$.
\end{theorem}
In particular, if $s$ is injective, the above integral represents the area of $s(P) \subset \mathbb{E}_n$.
\begin{rem*}\label{rem:triangulations}
In the hypotheses of the foregoing theorem we do not require the partition~$\Pi$ to be a triangulation\footnote{ A {\bf \color{dgreen} triangulation}\index{termes}{Triangulation of a compact polygon} of $P$ is a partition of $P$ into a finite number of non-overlapping triangles such that no vertex of a triangle is an internal point of a side of another.} of $P$ (indeed, a vertex of a triangle may be interior to a side of an adjacent triangle). However, if~$\Pi$ is a triangulation of~$P$, then the images~$s(x)$ of its vertices~$x$ are vertices of a polyhedron inscribed on~$s$.
\end{rem*}
\textit{\textbf{Proof of Theorem~\ref{thm:III}.}}
Without loss of generality, we can suppose that all triangles $[a_i,b_i,c_i]\in \Pi$ are equi-oriented with $\mathbb{I}_2=\ell_1\wedge \ell_2$. Then,
\[
\begin{array}{l}
\displaystyle
\left|
\sum_{[a_i,b_i,c_i]\in \Pi}
\frac{1}{4}
\Big|
\big[s(a_i')-s(a_i)\big]
\wedge
\big[s(c_i)-s(b_i)\big]
\Big|
-
\int_P
\big|\partial_{\ell_1} s(x) \wedge \partial_{\ell_2} s(x) \big|dx
\right|
= \\ \\
\displaystyle
=
\left|
\sum_{[a_i,b_i,c_i]\in \Pi}
\left[
\frac{1}{4}
\Big|
\big[s(a_i')-s(a_i)\big]
\wedge
\big[s(c_i)-s(b_i)\big]
\Big|
-
\int_{[a_i,b_i,c_i]}
\big|\partial_{\ell_1} s{\scriptstyle(x)} \wedge \partial_{\ell_2} s{\scriptstyle(x)} \big|dx
\right]
\right|
\\ \\
\displaystyle
\le
\sum_{[a_i,b_i,c_i]\in \Pi}
\left|
\frac{1}{4}
\Big|
\big[s(a_i')-s(a_i)\big]
\wedge
\big[s(c_i)-s(b_i)\big]
\Big|
-
\int_{[a_i,b_i,c_i]}
\big|\partial_{\ell_1} s{\scriptstyle(x)} \wedge \partial_{\ell_2} s{\scriptstyle(x)} \big|dx
\right|
= (\#)
\end{array}
\]
Note that the area of each triangle $[a_i,b_i,c_i]$ can be written as~$\displaystyle \frac{|u_{a_i}| |\ell_{a_i}|}{2}$, so
\[
\begin{array}{l}
\displaystyle
(\#)
=
\kern-10pt
\sum_{[a_i,b_i,c_i]\in \Pi}
\left|
\int_{[a_i,b_i,c_i]}
\left[
\frac{
\Big|
\big[s(a_i')-s(a_i)\big]
\wedge
\big[s(c_i)-s(b_i)\big]
\Big|
}{2 |u_{a_i}| |\ell_{a_i}|}
-
\big|\partial_{\ell_1} s{\scriptstyle(x)} \wedge \partial_{\ell_2} s{\scriptstyle(x)} \big|
\right]\ dx
\right|
\\ \\
\displaystyle
\le
\kern-10pt
\sum_{[a_i,b_i,c_i]\in \Pi}
\int_{[a_i,b_i,c_i]}
\left|
\frac{
\Big|
\big[s(a_i')-s(a_i)\big]
\wedge
\big[s(c_i)-s(b_i)\big]
\Big|
}{2 |u_{a_i}| |\ell_{a_i}|}
-
\big|\partial_{\ell_1} s{\scriptstyle(x)} \wedge \partial_{\ell_2} s{\scriptstyle(x)} \big|
\right|\ dx \ .
\end{array}
\]
\noindent
Since $\mathbb{G}_{n \choose 2}$ is a Euclidean space, we have that
$\Big||V|-|W|\Big|\le |V-W|$ for each~$V,W\in \mathbb{G}_{n \choose 2}$, so
\begin{eqnarray*}
& &
\phantom{\le}
\left|
\frac{
\Big|
\big[s(a_i')-s(a_i)\big]
\wedge
\big[s(c_i)-s(b_i)\big]
\Big|
}{2 |u_{a_i}| |\ell_{a_i}|}
-
\big|\partial_{\ell_1} s{\scriptstyle(x)} \wedge \partial_{\ell_2} s{\scriptstyle(x)} \big|
\right|
\\
& &
\kern-20pt
\le
\left|
\frac{1}{2 |u_{a_i}| |\ell_{a_i}|}
\Big\{
\big[s(a_i')-s(a_i)\big]
\wedge
\big[s(c_i)-s(b_i)\big]
\Big\}
-
\partial_{\ell_1} s{\scriptstyle(x)} \wedge \partial_{\ell_2} s{\scriptstyle(x)}
\right| = \\
& &
\displaystyle
\kern-20pt
=
\kern-2pt
\left|
\sum_{1\le j< k \le n}
\kern-5pt
\left\{
\kern-3pt
\Big\{
\frac{1}{\scriptstyle 2 |u_{a_i}| |\ell_{a_i}|}
\big[
s_{j,k}{\scriptstyle (a'_i)} - s_{j,k}{\scriptstyle (a_i)}
\big]
\kern-3pt
\wedge
\kern-3pt
\big[
s_{j,k}{\scriptstyle (c_i)} - s_{j,k}{\scriptstyle (b_i)}
\big]
-
\nabla\sigma_j{\scriptstyle (x)}
\kern-3pt
\wedge
\kern-3pt
\nabla\sigma_k{\scriptstyle (x)}
\kern-2pt
\Big\}
\cdot \mathbb{I}_2
\kern-2pt
\right\}
h_j
\kern-3pt
\wedge
\kern-3pt
h_k
\right| \\
& &
\displaystyle
\kern-20pt
\le
\kern-2pt
\sum_{1\le j< k \le n}
\kern-2pt
\left|
\frac{1}{\scriptstyle 2 |u_{a_i}| |\ell_{a_i}|}
\big[
s_{j,k}{\scriptstyle (a'_i)} - s_{j,k}{\scriptstyle (a_i)}
\big]
\kern-3pt
\wedge
\kern-3pt
\big[
s_{j,k}{\scriptstyle (c_i)} - s_{j,k}{\scriptstyle (b_i)}
\big]
-
\nabla\sigma_j{\scriptstyle (x)}
\kern-3pt
\wedge
\kern-3pt
\nabla\sigma_k{\scriptstyle (x)}
\right| \ ,
\end{eqnarray*}
\noindent
by equations~(\ref{eq:balanced bivector}) and~(\ref{eq:tangent bivector}). Since each $[a_i,b_i,c_i]$ is equi-oriented with $\mathbb{I}_2$, and we are dealing with bivectors that are also pseudo-scalars, we can write
\begin{eqnarray*}
& &
\left|
\frac{1}{2 |u_{a_i}| |\ell_{a_i}|}
\big[
s_{j,k}(a'_i) - s_{j,k}(a_i)
\big]
\kern-3pt
\wedge
\kern-3pt
\big[
s_{j,k}(c_i) - s_{j,k}(b_i)
\big]
-
\nabla\sigma_j(x)
\kern-3pt
\wedge
\kern-3pt
\nabla\sigma_k(x)
\right|
= \\
& &
=
\left|
\frac{\Big\{\big[s_{j,k}(a'_i)-s_{j,k}(a_i)\big]\wedge \big[s_{j,k}(c_i)-s_{j,k}(b_i)\big]\Big\}\cdot \mathbb{I}_2}{2\big(a_i\wedge b_i +b_i\wedge c_i + c_i\wedge a_i\big)\cdot \mathbb{I}_2}
-
\Big(\nabla\sigma_j(x)\wedge \nabla\sigma_k(x)\Big)
\cdot
\mathbb{I}_2
\right| \\
& &
\le
\left|
\Big(\nabla\sigma_j(\bar{a}_i)\wedge \nabla\sigma_k(\bar{a}_i)\Big)
\cdot
\mathbb{I}_2
-
\Big(\nabla\sigma_j(x)\wedge \nabla\sigma_k(x)\Big)
\cdot
\mathbb{I}_2
\right| + \\
& &
\phantom{\le}
+
\big|\nabla\sigma_j(\bar{a}_i)\big|
\left|
\frac{O\big(|v_{a_i}|^2\big)}{|\ell_{a_i}|} + \frac{O\big(|\ell_{a_i}-v_{a_i}|^2\big)}{|\ell_{a_i}|}
\right|
+
\big|\nabla\sigma_k(\bar{a}_i)\big|\ \Big|O\big(|u_{a_i}|\big)\Big| + \\
& &
\phantom{\le\ }
+
\big|\nabla\sigma_k(\bar{a}_i)\big|
\left|
\frac{O\big(|v_{a_i}|^2\big)}{|\ell_{a_i}|} + \frac{O\big(|\ell_{a_i}-v_{a_i}|^2\big)}{|\ell_{a_i}|}
\right|
+
\big|\nabla\sigma_j(\bar{a}_i)\big| \ \Big|O\big(|u_{a_i}|\big)\Big| + \\
& &
\displaystyle
\phantom{\le \ \ \ }
+
O\big(|u_{a_i}|\big)\
\left|
\frac{O\big(|v_{a_i}|^2\big)}{|\ell_{a_i}|} + \frac{O\big(|\ell_{a_i}-v_{a_i}|^2\big)}{|\ell_{a_i}|}
\right| \ ,
\end{eqnarray*}
by inequality~(\ref{eq:estimate transf}). Then, if we sum over the $n \choose 2$ indices $j,k$, and integrate over all of $P$, we obtain quantities that go to zero as $||\Pi||\to 0$. $\square$
\section{The graph of a smooth function}
Let $\{\ell_1,\ell_2\}$ be an ordered orthonormal basis in the Euclidean space $\mathbb{E}_2$; let $\{h_1,h_2,h_3\}$ be an ordered orthonormal basis in the three-dimensional Euclidean space $\mathbb{E}_3$. We can consider $\mathbb{E}_2 \subset \mathbb{E}_3$ by identifying $h_1=\ell_1$ and $h_2=\ell_2$. So when we have a smooth function $\psi:\Omega \to \mathbb{R}$, we can consider the following smooth surface $s:\Omega \to \mathbb{E}_3$
\begin{center}
$
s(x)=s(\chi_1\ell_1 + \chi_2\ell_2)
=
\chi_1\ell_1 + \chi_2\ell_2 + \psi(\chi_1\ell_1 + \chi_2\ell_2) h_3
=
x+\psi(x)h_3 \ .
$
\end{center}
In this case we have that
\begin{eqnarray*}
&\phantom{ = } &
\big[s(a')-s(a)\big] \wedge \big[s(c)-s(b)\big] =\\
& = &
(a'-a)\wedge (c-b) - \Big\{\big[\psi(a')-\psi(a)\big](c-b)-\big[\psi(c)-\psi(b)\big](a'-a)\Big\}\wedge h_3 \ ,
\end{eqnarray*}
\begin{eqnarray*}
\partial_{\ell_1} s{\scriptstyle(x)} \wedge \partial_{\ell_2} s{\scriptstyle(x)}
& = &
\ell_1\wedge\ell_2 + \partial_{\ell_2} \psi(x) \ell_1\wedge h_3 -\partial_{\ell_1}\psi(x) \ell_2\wedge h_3 =
\\
& = &
\ell_1\wedge\ell_2 - \big(\nabla\psi(x)\big)^* \ ,
\end{eqnarray*}
that is to say, $\displaystyle \nabla\psi(x) = h_3 - \big(\partial_{\ell_1} s{\scriptstyle(x)} \wedge \partial_{\ell_2} s{\scriptstyle(x)}\big)^{\#}= h_3 - \big(\partial_{\ell_1} s{\scriptstyle(x)} \times \partial_{\ell_2} s{\scriptstyle(x)}\big)$,
and
\begin{eqnarray*}
\big|\partial_{\ell_1} s{\scriptstyle(x)} \wedge \partial_{\ell_2} s{\scriptstyle(x)} \big|
& = &
\sqrt{1 + \big|\nabla\psi(x)\big|^2}\ .
\end{eqnarray*}
\chapter{The local Schwarz paradox}\label{cha:local Schwarz}
We have seen in Chapter~\ref{cha:smooth curves} (Proposition~\ref{prop: approxim dot c}) that the vector $\dot{c}(\chi)$ is the limit of inscribed mean vectors
\[
\frac{1}{\beta-\alpha}
\big[c(\beta)-c(\alpha)\big] \ ,
\]
\noindent
as $\alpha$ and $\beta$ converge to $\chi$. In this chapter we will verify that the Schwarz Paradox has the following local formulation: inscribed mean bivectors
\begin{center}
$
\displaystyle
\frac{1}{
\left\langle
a;b;c
\right\rangle
\cdot \mathbb{I}_2
}
\big\langle
s(a);s(b);s(c)
\big\rangle
$
\end{center}
\noindent
on a smooth surface $s:\Omega\to \mathbb{E}_n$ (such as a circular right cylinder) may not converge to the bivector $\partial_{\ell_1}s(x)\wedge \partial_{\ell_2}s(x)$, as $a$, $b$ and $c$ converge to the point $x\in \Omega$. On the contrary, we have seen in Theorem~\ref{thm:II} that the corresponding inscribed balanced mean bivectors
\begin{center}
$
\begin{array}{l}
\displaystyle
\frac{1}{2\left\langle a;b;c\right\rangle\cdot \mathbb{I}_2}
\big[s(a')-s(a)\big]
\wedge
\big[s(c)-s(b)\big]
= \\ \\
\displaystyle
=
\frac{1}{\big[ \left\langle a;b;c\right\rangle - \left\langle a';b;c\right\rangle\big]\cdot \mathbb{I}_2}
\Big[
\big\langle s(a);s(b);s(c)\big\rangle
-
\big\langle s(a');s(b);s(c)\big\rangle
\Big] \ ,
\end{array}
$
\end{center}
\noindent
always converge\footnote{As the non-degenerate triangles $[a,b,c]$ converge to the point $x$.} to the bivector $\partial_{\ell_1}s(x)\wedge \partial_{\ell_2}s(x)$. In this chapter we will verify it on the double sequences of isosceles triangles proposed by Schwarz to prove the fallacy of Serret's definition of area.
\section{The Schwarz triangles}
Let us consider the circular right cylinder of Example~\ref{exa:cylinder}. Such a surface is smooth, and for each $x=\chi_1\ell_1 +\chi_2 \ell_2 \in \mathbb{E}_2$
\[
\partial_{\ell_1}s(x)\wedge \partial_{\ell_2}s(x)
=
\rho \cos(\chi_1) h_2\wedge h_3 + \rho \sin(\chi_1) h_3\wedge h_1\ .
\]
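Indeed, a direct computation gives
\[
\partial_{\ell_1}s(x)
=
-\rho\sin(\chi_1)h_1 + \rho\cos(\chi_1) h_2
\ \ , \ \
\partial_{\ell_2}s(x)
=
h_3\ ,
\]
so that $\partial_{\ell_1}s(x)\wedge \partial_{\ell_2}s(x) = -\rho\sin(\chi_1)h_1\wedge h_3 + \rho\cos(\chi_1) h_2\wedge h_3 = \rho \cos(\chi_1) h_2\wedge h_3 + \rho \sin(\chi_1) h_3\wedge h_1$.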
Let us consider the double sequences of oriented triangles $[a_{m,n},b_{m,n},c_{m,n}]$ such that
\begin{center}
$
\displaystyle
\hfil
a_{m,n}
=0=x\ ,
\hfil
b_{m,n}
=\frac{\pi}{m}\ell_1 \ + \ \frac{1}{2n}\ell_2\ ,
\hfil
c_{m,n}
=-\frac{\pi}{m}\ell_1 \ + \ \frac{1}{2n}\ell_2\ .
\hfil
$
\end{center}
Then
\[
x=\lim_{m,n\to \infty} b_{m,n}=\lim_{m,n\to \infty} c_{m,n}=0=a_{m,n}
, \ \ \ \
\langle a_{m,n};b_{m,n};c_{m,n} \rangle= \frac{\pi}{mn}\mathbb{I}_2 \ ,
\]
\noindent
and
\begin{center}
$
\displaystyle
\big\langle s(a_{m,n});s(b_{m,n});s(c_{m,n}) \big\rangle=
2\rho\sin\frac{\pi}{m}
\left[
\rho\left(1-\cos\frac{\pi}{m}\right)h_1\wedge h_2 + \frac{1}{2n}h_2\wedge h_3
\right]
$
\end{center}
\noindent
so that $
\displaystyle
\frac{1}{
\left\langle
a_{m,n};b_{m,n};c_{m,n}
\right\rangle
\cdot \mathbb{I}_2
}
\big\langle
s(a_{m,n});s(b_{m,n});s(c_{m,n})
\big\rangle
$ is asymptotically equivalent to
\begin{center}
$
\displaystyle
\rho^2n\frac{\pi^2}{m^2}h_1\wedge h_2
+
\rho
h_2\wedge h_3 \ ,
$
\end{center}
\noindent
and then
\begin{itemize}
\item $\displaystyle
\kern-5pt
\lim_{m\to\infty}
\frac{1}{
\left\langle
a_{m,m};b_{m,m};c_{m,m}
\right\rangle
\kern-3pt
\cdot
\kern-3pt
\mathbb{I}_2
}
\big\langle
s(a_{m,m});s(b_{m,m});s(c_{m,m})
\big\rangle
\kern-2pt
=
\kern-2pt
\rho h_2 \kern-1pt \wedge \kern-1pt h_3
\kern-2pt
=
\kern-2pt
\partial_{\ell_1}s{\scriptstyle (0)}\wedge \partial_{\ell_2}s{\scriptstyle (0)}\ ,
$
\item
$\displaystyle
\
\kern-8pt
\lim_{m\to\infty}
\kern-2pt
\frac{1}{
\left\langle
a_{m,m^2};b_{m,m^2};c_{m,m^2}
\right\rangle
\kern-2pt
\cdot
\kern-2pt
\mathbb{I}_2
}
\kern-1pt
\big\langle
s(a_{m,m^2});s(b_{m,m^2});s(c_{m,m^2})
\big\rangle
\kern-3pt
=
\kern-2pt
\rho^2 \pi^2 h_1 \kern-2pt \wedge \kern-2pt h_2
+
\rho h_2 \kern-2pt \wedge \kern-2pt h_3 ,
$
\item
and the normalized direction of the mean bivector
$
\displaystyle
\hfil
\frac{1}{
\left\langle
a_{m,m^3};b_{m,m^3};c_{m,m^3}
\right\rangle
\cdot \mathbb{I}_2
}
\left\langle
s(a_{m,m^3});s(b_{m,m^3});s(c_{m,m^3})
\right\rangle
\hfil
$
tends to the planar direction $h_1\wedge h_2$ which is orthogonal to the tangent planar direction $h_2\wedge h_3$.
\end{itemize}
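The asymptotic equivalence used above can be checked from the elementary expansions $\sin\frac{\pi}{m}=\frac{\pi}{m}+O(m^{-3})$ and $1-\cos\frac{\pi}{m}=\frac{\pi^2}{2m^2}+O(m^{-4})$; indeed
\[
2\rho\sin\frac{\pi}{m}\ \rho\left(1-\cos\frac{\pi}{m}\right)\frac{mn}{\pi}
\sim
2\rho^2\ \frac{\pi}{m}\ \frac{\pi^2}{2m^2}\ \frac{mn}{\pi}
=
\rho^2 n\frac{\pi^2}{m^2}
\ \ \textrm{ and } \ \
2\rho\sin\frac{\pi}{m}\ \frac{1}{2n}\ \frac{mn}{\pi}
=
\rho\ \frac{m}{\pi}\sin\frac{\pi}{m}
\longrightarrow
\rho\ ,
\]
where $\frac{mn}{\pi}$ is the reciprocal of $\left\langle a_{m,n};b_{m,n};c_{m,n}\right\rangle\cdot \mathbb{I}_2$.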
On the contrary, if we consider the mirror vertex $\displaystyle a'_{m,n}=\frac{1}{n}\ell_2$ (that is always balanced), we have that the balanced mean bivector
\begin{center}
$
\displaystyle
\frac{1}{2\left\langle a;b;c\right\rangle\cdot \mathbb{I}_2}
\big[s(a')-s(a)\big]
\wedge
\big[s(c)-s(b)\big]
=
\rho \frac{m}{\pi}
\sin\frac{\pi}{m} h_2 \wedge h_3\ ,
$
\end{center}
\noindent
converges to the tangent bivector $\rho h_2\wedge h_3$.
\section{A converging mean bivector not balanced}\label{sec:not balanced bivector}
As we have anticipated in Remark~\ref{rem:relaxing hypothesis}, relation~(\ref{eq:AlgI}) can hold even when $d_{(a,b,c)}$ is not a mirror vertex of $[a,b,c]$.
As before, we consider Schwarz's triangles, but ordered as follows
\begin{center}
$
\displaystyle
\hfil
a_{m,n}
=-\frac{\pi}{m}\ell_1 \ + \ \frac{1}{2n}\ell_2\ ,
\hfil
b_{m,n}
=0=x\ ,
\hfil
c_{m,n}
=\frac{\pi}{m}\ell_1 \ + \ \frac{1}{2n}\ell_2\ .
\hfil
$
\end{center}
As the point $d_{(a_{m,n},b_{m,n},c_{m,n})}=d_{m,n}$ we choose $\displaystyle d_{m,n}=\frac{2\pi}{m}\ell_1$, which is not a mirror vertex of $[a_{m,n},b_{m,n},c_{m,n}]$. However, the following relation still holds
\begin{center}
$
\displaystyle
2\left\langle a_{m,n};b_{m,n};c_{m,n}\right\rangle
=
\big\langle a_{m,n};b_{m,n};c_{m,n}\big\rangle - \big\langle d_{m,n};b_{m,n};c_{m,n}\big\rangle
=
\frac{2\pi}{mn} \mathbb{I}_2\ .
$
\end{center}
Moreover, we compute
\begin{eqnarray*}
& &
\big\langle s(a_{m,n});s(b_{m,n});s(c_{m,n})\big\rangle
-
\big\langle s(d_{m,n});s(b_{m,n});s(c_{m,n})\big\rangle =\\
& = &
\big[s(d_{m,n})-s(a_{m,n})\big]
\wedge
\big[s(c_{m,n})-s(b_{m,n})\big]= \\
& = &
\left\{
\rho \left[\cos\left(\frac{2\pi}{m}\right)-\cos\left(\frac{\pi}{m}\right)\right]h_1
+
\rho \left[\sin\left(\frac{2\pi}{m}\right)+\sin\left(\frac{\pi}{m}\right)\right]h_2
-
\frac{1}{2n}h_3
\right\} \wedge \\
& &
\wedge
\left\{
\rho \left[\cos\left(\frac{\pi}{m}\right)-1\right]h_1
+
\rho \sin\left(\frac{\pi}{m}\right) h_2
+
\frac{1}{2n}h_3
\right\} = \\
& = &
\rho \frac{1}{2n}
\left[\sin\left(\frac{2\pi}{m}\right)+2 \sin\left(\frac{\pi}{m}\right)\right] h_2\wedge h_3
+
\rho \frac{1}{2n}
\left[1- \cos\left(\frac{2\pi}{m}\right)\right] h_3\wedge h_1 \ ,
\end{eqnarray*}
that is asymptotically equivalent (as $m,n\to \infty$) to the bivector
\[
\rho
\frac{2\pi}{mn} h_2\wedge h_3
+
\rho
\frac{\pi^2}{m^2 n} h_3\wedge h_1\ .
\]
So we can conclude that
\begin{center}
$
\displaystyle
\lim_{m,n\to \infty}
\frac{1}{2\left\langle a_{m,n};b_{m,n};c_{m,n}\right\rangle\cdot \mathbb{I}_2}
\big[s(d_{m,n})-s(a_{m,n})\big]
\wedge
\big[s(c_{m,n})-s(b_{m,n})\big]
=
\rho h_2 \wedge h_3\ .
$
\end{center}
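\noindent
Note that $\rho\, h_2\wedge h_3 = \partial_{\ell_1}s(0)\wedge \partial_{\ell_2}s(0)$: even though $d_{m,n}$ is not a mirror vertex, this mean bivector still converges to the tangent bivector at $x=0$.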
\section*{Results}
\textbf{The principle of quantum interference.} HOM interference,
a basic type of quantum interference that reflects the bosonic properties
of a single particle, is generally used to test the quantum properties
of single plasmons \cite{Hong}. In addition to its fundamental importance
within quantum physics, the HOM effect underlies the basic entanglement
mechanism in linear optical quantum computing \cite{Klm} because
two-qubit quantum gates, which form the core of linear optical quantum
computing, can be obtained via classical and quantum interference
(HOM interference) effects followed by a measurement-induced state
projection.
HOM interference can be described as follows: when two indistinguishable
photons enter a 50/50 beam splitter (BS) from different sides at the
same time, according to the exchange symmetry of photons (bosons),
a 50\% chance exists of obtaining two photons in output port 1 (P1);
furthermore, a 50\% chance probability exists of obtaining two photons
in output port 2 (P2). However, the two photons will never be in different
output ports. The twin photon state $|1,1\rangle$ is converted into
a quantum superposition state $1/\sqrt{2}(|2,0\rangle+|0,2\rangle)$.
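This superposition can be derived from the beam-splitter transformation
of the creation operators; with the symmetric 50/50 convention
$\hat{a}^{\dagger}\rightarrow(\hat{a}^{\dagger}+i\hat{b}^{\dagger})/\sqrt{2}$,
$\hat{b}^{\dagger}\rightarrow(i\hat{a}^{\dagger}+\hat{b}^{\dagger})/\sqrt{2}$
(other conventions differ only by local phases), a short check gives
\[
|1,1\rangle=\hat{a}^{\dagger}\hat{b}^{\dagger}|0,0\rangle
\rightarrow
\frac{1}{2}\big(\hat{a}^{\dagger}+i\hat{b}^{\dagger}\big)\big(i\hat{a}^{\dagger}+\hat{b}^{\dagger}\big)|0,0\rangle
=
\frac{i}{2}\big(\hat{a}^{\dagger 2}+\hat{b}^{\dagger 2}\big)|0,0\rangle
=
\frac{i}{\sqrt{2}}\big(|2,0\rangle+|0,2\rangle\big)\ ,
\]
which equals $1/\sqrt{2}(|2,0\rangle+|0,2\rangle)$ up to a global phase:
the cross terms $\hat{a}^{\dagger}\hat{b}^{\dagger}$ cancel, which is
precisely the bunching effect.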
This phenomenon is a signal of photon bunching and can only be explained
using a quantum mechanism \cite{Lim}. Experiments typically control
the arrival times of two photons by adjusting the path-length difference
between them and measure the photon coincidence of P1 and P2. When
two indistinguishable photons completely overlap at the BS, they give
rise to the maximum interference effect, and no coincidence exists.
Visibility is defined as $V_{1}=(C_{max}-C_{min})/C_{max}$, where
$C_{max}$ is the maximum coincidence and $C_{min}$ is the minimum
coincidence. For perfect quantum interference, $C_{min}=0$ and $V_{1}=1$,
whereas for a classical coherent laser, $V_{1}=50\%$. Consequently,
to prove that destructive interference is due to two-photon quantum
interference, the visibility must be greater than 50\%. Here, we used
a modified HOM interferometer (see Figure 1a). We collected the photons
from P2 of the first BS, sent them to the second 50/50 BS, and then
measured the coincidence. According to quantum interference theory,
there should be a 25\% chance of recording a coincidence when HOM interference
occurs and a 12.5\% chance otherwise. In this case, the visibility is
modified as follows: $V_{2}=(C_{max}-C_{min})/C_{min}$. For perfect
quantum interference, $C_{max}=2C_{min}$ and $V_{2}=1$. Our modified
interferometer is capable of reflecting the indistinguishability of
the input particles and can tell us whether these plasmons are bosons.
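To make this bookkeeping explicit (an added step, following directly from the beam-splitter algebra): when HOM interference occurs, the state after the first BS is $1/\sqrt{2}(|2,0\rangle+|0,2\rangle)$, so both photons reach P2 with probability $1/2$; a photon pair entering a single port of the second 50/50 BS splits into different output ports with probability $1/2$, giving a coincidence probability of $\frac{1}{2}\times\frac{1}{2}=25\%$. For fully distinguishable photons, each reaches P2 independently with probability $1/2$, so both arrive there with probability $1/4$ and then split with probability $1/2$, giving $\frac{1}{4}\times\frac{1}{2}=12.5\%$.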
\textbf{Experimental design. }In the current experiment, we chose
a dielectric-loaded SPP waveguide (DLSPPW) \cite{Kumar} to test the
bosonic properties of the single plasmons. A DLSPPW is a typical sub-wavelength
plasmonic waveguide that is formed by placing a dielectric ridge on
top of a thin metal layer. Among the various plasmonic-waveguide structures,
DLSPPWs are promising for enriching the functional portfolio of plasmonics
owing to their dielectric-loading properties, which have been demonstrated
in practice. They can confine the lateral size of propagating modes
to the sub-wavelength scale and simultaneously transmit photons and
electrons in the same component. In addition, because the energy is
mostly confined to the surface of the metal, highly efficient control
of the waveguide-mode characteristics is possible. For example, power-monitoring
\cite{Kumar2} and switching \cite{Kala} elements with high response
speeds have been experimentally demonstrated in DLSPPWs. We used nanofabrication
techniques to prepare our plasmonic waveguide. Specifically, our waveguide
was constructed of polymethyl methacrylate (PMMA) and placed on top
of a 45-nm-thick gold layer deposited on a $SiO_{2}$ substrate. Figure
1b shows a scanning electron microscope (SEM) image of part of the
fabricated sample.
Based on our calculations, the lateral size of the single-mode DLSPPW
for photons at 1,550 nm was 600 nm $\times$ 600 nm \cite{Theory}, because
such a waveguide supports only one fundamental mode (see Figure 2b).
The BS is realized using a directional coupler (DC), which is composed
of two waveguides. In the coupling region, the evanescent fields of
the two waveguide modes couple with each other and exchange energy.
As a result, two new coupling eigenmodes, the symmetric (Figure 2c)
and anti-symmetric (Figure 2d) superpositions of the two waveguide
modes, are generated. Owing to the different effective refractive
indices of these two modes, the beating of the two modes leads to
a BS-like function. By controlling the coupling strength, the amount
of output at the two waveguide ports (the splitting ratio) can be
tuned. Using the engineered waveguide gap, we obtained a coupling
profile with a splitting ratio of approximately 1:1.
\begin{figure}[htb]
\includegraphics[width=8cm]{2} \caption{(a) Three-dimensional simulation of field distribution on our plasmonic
DC structure. (b) Field distribution in single mode plasmonic waveguide
with lateral size of 600 nm $\times$ 600 nm. (c) Field distribution of
symmetric eigenmode in coupling section. (d) Field distribution of
anti-symmetric eigenmode in coupling section. }
\end{figure}
The coupling efficiencies among our SPP circuit, the external source,
and the detectors were particularly crucial because the quantum signals
were weak (approximately 7,000 photon pairs per second in our experiment).
However, it is difficult to directly connect our plasmonic waveguide
to a single-mode optical fiber because its lateral-mode field area
is much smaller than that of the fiber (diameter $6.8\mu m$, 980HP,
Thorlabs Inc.). Therefore, we adopted the alternative adiabatic method
\cite{PBS,Zou} to excite the plasmons using fiber taper \cite{Dong,Tong}.
As Figure 1c shows, the photons in the fiber are adiabatically squeezed
into the microfiber via the taper region and coupled to the plasmon
waveguide when the microfiber approached the waveguide. Owing to the
high efficiency conversion and evanescent field coupling, the ideal
conversion efficiency might have been higher than 99\%. Under the
limitations imposed by the experimental conditions, the efficiency
of our fiber taper coupling system was estimated to be approximately
30\%. Importantly, the alignment direction of the fiber taper was
perpendicular to the collection fiber, thereby avoiding the collection
of directly scattered photons from the end of the fiber taper.
\begin{figure*}[htb]
\includegraphics[width=15cm]{3} \caption{(a) HOM interference of the down-converted photon pairs measured using
a fiber 50/50 BS; the visibility was $95.5\pm1.0\%$ and the optical
coherence length was $162.6\pm5.0\mu m$. (b) Modified HOM interference
of the down-converted photon pairs measured using two fiber 50/50
BSs; the visibility was $96.5\pm3.1\%$, and the optical coherence
length was $173.9\pm5.7\mu m$. (c) Quantum interference of single
plasmons on DLSPPWs: For Sample 1, the visibility was $95.7\pm8.9\%$,
and the optical coherence length was $191.6\pm17.6\mu m$; for Sample
2, the visibility was $93.6\pm6.7\%$, and the optical coherence length
was $193.0\pm13.0\mu m$. All results are at the level of single photons.}
\end{figure*}
\textbf{Quantum-interference results.} The 1,550-nm quantum photon
pairs were generated via the spontaneous parametric down-conversion
(SPDC) \cite{Burn} process of a BBO crystal (Type-II phase matching,
non-collinear) pumped by a 775-nm-wavelength laser (Coherent Inc.;
see Figure 1a). The down-converted twin photons consisted of one photon
in the horizontal (H) polarization and one in the vertical (V) polarization.
The photons were separated into two paths, each of which contained
a prime reflector (PR), a half-wave plate (HWP, 1,550 nm), a long-pass
filter (LP; 830 nm), and a narrow-band filter (IF, 1,550 nm, 8.8 nm
FWHM). After these components, the two photons, which now had the
same polarization, were guided into two separate single-mode fibers.
One of the fiber couplers was installed on a motorized stage to adjust
the optical path.
As shown in Figure 3a, the indistinguishability of the produced photon
pairs was first characterized using a standard HOM interferometer
with a fiber BS. The dip represented the quantum interference of two
photons that arrived at the BS simultaneously, and the coherence of
the photons determined its width. The quantum-interference results
were fit using $N_{HOM}=C\cdot[1-V\cdot e^{-(\Delta\omega\cdot\Delta\tau)^{2}}]$\cite{Hong},
where $N_{HOM}$ is the measured coincidence count, $C$ is a fitting
constant, $V$ is the quantum-interference visibility, $\Delta\omega$
is the bandwidth of the photons, and $\Delta\tau$ is the optical
time delay. For perfect quantum interference of indistinguishable
photon pairs, the visibility should be unity. Here, we obtained a
visibility of $V=95.5\pm1.0\%$ and an optical coherence length of
$c/\Delta\omega=162.6\pm5.0\mu m$, where $c$ is the speed of light.
The deviation of the visibility from $100\%$ was attributed to the
polarization distortion of the photons during propagation in the fiber,
photon-source variability, or both. We also tested the modified quantum
interferometer, in which photons from one output port were divided
using a second fiber BS and detected with two single-photon detectors.
Fitting the experimental results (Figure 3b) using the function $N_{M}=C\cdot[1+V\cdot e^{-(\Delta\omega\cdot\Delta\tau)^{2}}]$,
we obtained a visibility of $96.5\pm3.1\%$ and a coherence length
of $173.9\pm5.7\mu m$. These values were consistent with standard
HOM interference.
Finally, we observed the quantum interference of single plasmons using
the modified quantum interferometer, in which two single photons from
the fiber excited plasmon pairs in separate waveguides, and quantum
interference occurred in the coupling section. We sought to collect
the two plasmons from the two output ports and record their coincidence
with a standard HOM interferometer; to do so, we required two additional
fiber tapers to collect the signal. To avoid this requirement, we
simplified the experimental design by collecting the photons scattered
from P2 using an end-fire-coupled single-mode fiber. Using the second
fiber BS, we divided the collected photons into two ports and measured
the coincidence. Three samples were measured, and the visibilities
were $95.7\pm8.9\%$, $93.6\pm6.7\%$, and $93.1\pm16.5\%$. These
values are well above the classical limitation of $50\%$. The coherence
lengths of the plasmons were also calculated using the experimental
data, yielding $191.6\pm17.6\mu m$, $193.0\pm13.0\mu m$, and $146.4\pm9.4\mu m$,
which were similar values to those of the photons. Our results demonstrate
that although the electron is a fermion, a single plasmon (i.e., the
quasi-particle of a collective electron-density wave) acts as a boson.
The high visibility also suggests that plasmonic structures can be
used in QPICs.
\textbf{Discussion.} In this section, we address the second question:
what is the influence of loss on quantum interference visibility?
The inevitable loss of SPPs attenuates the amplitude of light; thus,
gain materials are often used to compensate for this loss. In addition,
the first-order coherence of the photons is destroyed during absorption
and re-emission processes. It is necessary to determine how this loss
influences second-order quantum interference visibility and under
what conditions these losses are tolerable. The following discussion
provides a detailed account of the two-photon quantum interference
of lossy channels based on our plasmonic DC structure.
The operation of a four-port DC can be described as follows:
\begin{equation}
\begin{pmatrix}b_{1}^{\dagger}\\
b_{2}^{\dagger}
\end{pmatrix}=\begin{pmatrix}r & t\\
t & r
\end{pmatrix}\begin{pmatrix}a_{1}^{\dagger}\\
a_{2}^{\dagger}
\end{pmatrix},
\end{equation}
where $a_{1}^{\dagger}$ and $a_{2}^{\dagger}$ as well as $b_{1}^{\dagger}$
and $b_{2}^{\dagger}$ are the creation operators of the input and
output boson particles, and $r$ and $t$ are the amplitudes of the
reflection and transmission coefficients. The output state of the
input twin-particle state $|1,1\rangle$ is
\begin{equation}
|\Phi\rangle_{out}=\sqrt{2}rt|2,0\rangle+\sqrt{2}rt|0,2\rangle+(r^{2}+t^{2})|1,1\rangle
\end{equation}
multiplied by a normalization factor. Here, we discard the terms that
represent the loss of one or two particles because only the coincidence
counts were recorded in the experiment. The probability of finding
two particles in the same mode (proportional to the HOM interference
visibility) is
\begin{equation}
P=\dfrac{4|rt|^{2}}{4|rt|^{2}+|r^{2}+t^{2}|^{2}}.
\end{equation}
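As a quick sanity check (our illustration), take the standard lossless 50/50 convention $r=1/\sqrt{2}$, $t=i/\sqrt{2}$: then $r^{2}+t^{2}=0$ and Eq. (3) gives $P=1$, i.e., perfect bunching; conversely, a fully transmitting or fully reflecting coupler ($r=0$ or $t=0$) gives $P=0$.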
For a lossless system, the DC is characterized by its classical transmission
and reflection coefficients, $|r|^{2}$ and $|t|^{2}$. Thus, designing
a DC with $|r|^{2}=|t|^{2}$ should optimize quantum interference.
However, for a lossy system, the structures and microscopic transport
process of the DC will determine the second-order quantum coherence.
In our DLSPPW DC, when plasmons were propagated in the coupling region
of the two waveguides, they were in coherent superpositions of symmetric
and anti-symmetric modes (see Figures 2c and 2d). The precise microscopic
losses can be included using the coefficients \cite{Loss}
\begin{eqnarray}
r=\dfrac{e^{in_{2}k_{0}L}}{2}(e^{i\mathrm{Re}(\Delta n)k_{0}L}e^{-\mathrm{Im}(\Delta n)k_{0}L}+1)\\
t=\dfrac{e^{in_{2}k_{0}L}}{2}(e^{i\mathrm{Re}(\Delta n)k_{0}L}e^{-\mathrm{Im}(\Delta n)k_{0}L}-1)
\end{eqnarray}
Here, $\Delta n=n_{1}-n_{2}$, where $n_{1(2)}$ is the effective
refractive index of the symmetric mode (the anti-symmetric mode),
$k_{0}$ is the wave vector in free space, and $L$ is the coupling
length. The imaginary portion of $n_{1(2)}$ corresponds to the propagation
loss of the plasmons and leads to a non-unitary operation matrix for
the DC.
By substituting Eqs. (4) and (5) into Eq. (3), we obtain $P$, which
is related to the loss difference between the two intermediate eigenmodes
($\propto \mathrm{Im}(\Delta n)$) and the coupling length $L$. When
$L$ is sufficiently large, the energy in the eigenmode with higher
loss approximates 0 and can therefore be neglected compared with the
lower-loss eigenmode. In this case, $P$ decreases to 0.5, which corresponds
to a classical random process.
Figure 4 illustrates the relationship between $P$ and $L$ for a lossless
DC (black dots), our DLSPPW DC (blue dots), and a metal-strip DC (red
dots), in which we selected the $L$ that corresponded to a 50/50 splitter.
For a lossless DC, $P=1$ for any selected $L$. In our sample,
$P$ slowly decreased as $L$ increased. This result is because the
difference between $n_{1}$ ($1.318-0.00426i$) and $n_{2}$ ($1.150-0.00437i$)
is small; therefore, we were able to achieve a high interference visibility
for a small $L$. For the metal-strip DC used in \cite{Rei}, $P$
decreased rapidly as $L$ increased because the difference between
$n_{1}$ ($2.036-0.02i$) and $n_{2}$ ($1.841-0.01i$) was much larger,
especially in the imaginary portions.
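To make the content of Figure 4 concrete, the following minimal numerical sketch (ours, not part of the measurement) evaluates Eq. (3) with the coefficients of Eqs. (4) and (5) and the effective indices quoted above; the common propagation prefactor cancels in the ratio $P$, and the sample coupling lengths are purely illustrative.
\begin{verbatim}
import numpy as np

lam0 = 1.55e-6                    # free-space wavelength (m)
k0 = 2 * np.pi / lam0
n1 = 1.318 - 0.00426j             # symmetric eigenmode index (from the text)
n2 = 1.150 - 0.00437j             # anti-symmetric eigenmode index

def P(L):
    """Bunching probability of Eq. (3), with r, t from Eqs. (4)-(5)."""
    beat = np.exp(1j * (n1 - n2) * k0 * L)   # mode-beating factor
    pref = 0.5 * np.exp(1j * n2 * k0 * L)
    r, t = pref * (beat + 1), pref * (beat - 1)
    num = 4 * abs(r * t) ** 2
    return num / (num + abs(r ** 2 + t ** 2) ** 2)

for L in (5e-6, 20e-6, 100e-6):   # illustrative coupling lengths
    print(f"L = {L * 1e6:6.1f} um  ->  P = {P(L):.4f}")
\end{verbatim}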
The influence of loss on quantum coherence defined the limitations
of lossy QPIC devices. The high-order quantum interference of photons
should be considered when designing integrated photonic components
because the microscopic processes of photons in these devices might
deviate from the expected unitary evolution.
In summary, we experimentally demonstrated that single plasmons can
be used as qubits to perform on-chip quantum information processing.
The discussion presented here regarding loss also introduces a platform
for using plasmonic structures to investigate the on-chip quantum-decoherence
phenomenon. Additional investigations should consider using single
plasmons as qubits to carry quantum information and achieve on-chip
linear optical computations or quantum simulations.
\begin{figure}[htb]
\includegraphics[width=8cm]{4} \caption{The relationship between $P$ and the coupling length $L$. The black,
blue, and red dots represent the theoretical calculations for a lossless
DC, our DLSPPW DC and a metal-strip DC \cite{Rei}, respectively.
$P$ decreases as $L$ increases and converges to 50\% for sufficiently
large $L$ in lossy DCs. Here, we used the $L$ values that corresponded
to a 50/50 splitter. }
\end{figure}
\section*{Acknowledgments}
This work was funded by NBRP (grant nos. 2011CBA00200 and 2011CB921200),
the Innovation Funds from the Chinese Academy of Sciences (grant no.
60921091), NNSF (grant nos.11374289, 10934006, 11374288, and 11104261),
and NCET. We thank Prof. Fang-Wen Sun, Bao-Sen Shi and Zheng-Wei Zhou
for useful discussion and Mrs Jun-Yi Xue from Qingdao No.2 High School
of Shandong Province for her help in optical measurement.
\section{Introduction}
\numberwithin{equation}{section}
Historically, gravitational instantons have been explored with motivations similar to those for gauge instantons: to describe the non-perturbative transitions in quantum gravity, and by analytic continuation to produce real-time gravitational backgrounds \cite{GH78, GH79, GH-book}. Dealing with (anti-)self-dual systems makes the task of solving Einstein's equations substantially easier, producing first order equations like the (anti-)self-dual Yang-Mills equations, often related to interesting integrable systems \cite{ACH}. \\
Following applications to homogeneous cosmology, $\mathcal{M}_4$ spaces topologically equivalent to $\mathbb{R} \times \mathcal{M}_3$ of Bianchi type have been extensively explored. For a pedagogical introduction to various cosmological models, see the Lecture Notes by Ellis et al. \cite{lecture}. Out of all Bianchi type models of classes I - IX with vanishing cosmological constant, only Bianchi-IX has been found to exhibit a relationship with quasimodular forms \cite{MM1, MM2}. Modular forms in physics are a consequence of duality properties, resulting either from an invariance or from a relationship between two distinct theories. In the past 30 years, modular and quasimodular forms have emerged mostly in the study of gravity and string theory \cite{MPPV}. Furthermore, we must note that the Bianchi-IX model is a controversial system (possessing both integrable and non-integrable aspects).
A debate followed among various authors involving doubts over statements regarding the chaotic nature of Bianchi-IX dynamics, some of whom simultaneously expressed the opinion that the model might as well be a classical integrable system (in Liouville's sense) \cite{HBC}. Thus the Bianchi-IX cosmological model is a good ``thought laboratory'' for testing theories in order to understand various concepts of integrability. \\
Since the late 70s, many systems have been derived from the self-dual Yang-Mills system via reductions, leading to the belief that all integrable systems may be obtained from such reductions \cite{ACH, PMPetro1, PMPetro2}. The generalized Darboux-Halphen system is one such system, heavily studied over the recent years, although in its classical form the relationship with the Bianchi-IX metric was established rather recently, in the 90s. Such a connection proves to be of use in constructing various interesting Bianchi IX solutions \cite{GibbonsCQG} or in their applications, e.g. in the study of scattering of $SU(2)$ BPS monopoles \cite{PMPetro1, AC}. The Darboux-Halphen system also exhibits Ricci flow that describes the evolution of $SU(2)$-homogeneous 3D geometries and can be seen as a reflection of a hidden symmetry of hyperbolic monopole motion \cite{GibWar}. \\
The plan of this article is as follows: In Sect.2 we will first perform a geometric analysis of the Bianchi-IX metric, directly exploring both the connection-wise and curvature-wise self-dual cases, bringing us to the Euler-Top and classical Darboux-Halphen systems respectively, followed by computation of the general form of the curvature components. Then in Sect.3 we will start with the self-dual Yang-Mills equation and see how, by reduction, it gives rise to the generalized Darboux-Halphen system, which has a close relation to the Euler-Top system. This will be followed by an elaboration on solutions that can be obtained for such a system. A brief note will be made as to how the classical case arises from the generalized one, and we will also explore why we cannot always find a metric that gives rise to the generalized system. We will proceed to list solutions to the various first-order differential equations involved with the system, which effectively yield the metric corresponding to the classical system and possibly the generalized one as well. This will be followed by a derivation and examination of the Ricci flow equations in Sect.4. We will see in Sect.5 how the Chazy equation emerges from the classical Darboux-Halphen system, a result of curvature-wise self-duality, as well as how others like the Ramanujan and Ramamani systems are related to it. This will be followed up by a detailed analysis of integrability of the Bianchi-IX in Sect.6, to see if self-duality implies integrability. Finally we conclude in Sect.7 and try to look for future directions. \\
\textbf{Note added}: Near the completion of this work, the paper \cite{FFM} appeared on the arXiv. The authors, among other things, also address the question of arithmetics and integrability of Bianchi IX gravitational instantons, which have been explored by imposing the self-duality condition on triaxial Bianchi IX metrics and by employing a time-dependent conformal factor. We comment more about this in the final section at the end of the paper.
\section{Geometric analysis}
The Bianchi-IX metric is a general setup for 4D Euclidean spherically symmetric metrics. Under certain settings of its curvature-wise anti-self-dual case, it becomes the Taub-NUT. Naturally, the analysis of its connection and curvature follows the same way as in \cite{CGR1, CGR2}. \\
The metric is written as:
\begin{equation}
\label{bianchiix} ds^2 = \big[ c_1 (r) c_2 (r) c_3 (r) \big]^2 dr^2 + c_1^2 (r) \sigma_1^2 + c_2^2 (r) \sigma_2^2 + c_3^2 (r) \sigma_3^2
\end{equation}
where the variables $\sigma_i$ obey the structure equation:
\begin{equation}
\label{struc} d \sigma^i = - {\varepsilon^i}_{jk} \ \sigma^j \wedge \sigma^k \hspace{1cm} \text{where} \hspace{0.75cm} \sigma^i = - \frac1{r^2} \eta^i_{\mu \nu} x^\mu dx^\nu
\end{equation}
and $i, j, k$ are permutations of the indices $1, 2, 3$. \\
Those solutions that are (anti-)self-dual fall into two categories:
\begin{enumerate}
\item connection wise self-dual
\item curvature wise self-dual
\end{enumerate}
We shall now uncover the systems characterising each category in turn.
\numberwithin{equation}{subsection}
\subsection{Connection-wise self-duality - the Lagrange system}
First we shall compute the spin connections in the same manner as for the Taub-NUT \cite{CGR2}. We can describe the tetrads as:
\vspace{-0.5cm}
\begin{equation}
\begin{split}
e^0 = c_0 (r) dr \hspace{1cm} e^i &= c_i (r) \sigma^i, \hspace{0.5cm} i = 1, 2, 3 \\
\text{where} \hspace{1cm} c_0 (r) &= c_1 (r) c_2 (r) c_3 (r)
\end{split}
\end{equation}
Obviously, $e^0$ produces no connections $( d e^0 = 0 )$. However, for the remaining three:
\begin{equation}
\label{deriv} d e^i = - \frac{\partial_r c_i}{c_0} \ \sigma^i \wedge e^0 - \bigg\{ - {\varepsilon^i}_{jk} \frac{ c_i^2 + c_j^2 - c_k^2 }{2 c_i c_j} \ \sigma^k \wedge e^j - {\varepsilon^i}_{kj} \frac{ c_i^2 + c_k^2 - c_j^2 }{2 c_i c_k} \ \sigma^j \wedge e^k \bigg\}
\end{equation}
Under torsion-free condition the 1st Cartan structure equation $( d e^i = - {\omega^i}_j \wedge e^j )$ gives us the following spin connections:
\vspace{-0.25cm}
\begin{equation}
{\omega^i}_0 = \frac{\partial_r c_i}{c_0} \ \sigma^i \hspace{2cm} {\omega^i}_j = - {\varepsilon^i}_{jk} \frac{ c_i^2 + c_j^2 - c_k^2 }{2 c_i c_j} \ \sigma^k
\end{equation}
This elaborate form for the components of the spin connections makes their anti-symmetric nature evident. If we only consider the cases of (anti-)self-duality, we will have
\[ \omega_{0i} = \pm \frac12 {\varepsilon_{0i}}^{jk} \omega_{jk} = \pm \omega_{jk} \hspace{1cm} \Rightarrow \hspace{1cm} 2 \frac{\partial_r c_i}{c_i} = \mp \varepsilon_{jki} \big( c_j^2 + c_k^2 - c_i^2 \big) \]
\begin{equation}
\label{consd} \therefore \hspace{1cm} \partial_r \big( \ln c_i^2 \big) = \mp \varepsilon_{jki} \big( c_j^2 + c_k^2 - c_i^2 \big)
\end{equation}
To match the linear form of the RHS in the equation above, we parametrize $c_i^2$ (the derivative operator aside) such that
\[ \ln c_i^2 = \ln \Omega_j + \ln \Omega_k - \ln \Omega_i = \ln \bigg( \frac{\Omega_j \Omega_k}{\Omega_i} \bigg) \]
\vspace{-0.25cm}
\begin{equation}
\label{pmtr} \therefore \hspace{1cm} ( c_i )^2 = \frac{\Omega_j \Omega_k}{\Omega_i} \hspace{1cm} \Rightarrow \hspace{1cm} \Omega_i = c_j c_k
\end{equation}
which enable us to decouple the individual parameters into their own equations turning into simpler expressions. This allows us to continue our analysis from (\ref{consd}) as
\[ \partial_r \bigg[ \ln \bigg( \frac{\Omega_j \Omega_k}{\Omega_i} \bigg) \bigg] = \frac{\dot{\Omega}_j}{\Omega_j} + \frac{\dot{\Omega}_k}{\Omega_k} - \frac{\dot{\Omega}_i}{\Omega_i} = \mp \varepsilon_{jki} \bigg( \frac{\Omega_k \Omega_i}{\Omega_j} + \frac{\Omega_i \Omega_j}{\Omega_k} - \frac{\Omega_j \Omega_k}{\Omega_i} \bigg) \]
Adding up with a similar expression for $2 \ \partial_r \big( \ln c_j \big) = \mp \varepsilon_{kij} \big( c_i^2 + c_k^2 - c_j^2 \big)$, we get
\[ \frac{\dot{\Omega}_i}{\Omega_i} + \frac{\dot{\Omega}_k}{\Omega_k} - \frac{\dot{\Omega}_j}{\Omega_j} = \mp \varepsilon_{kij} \bigg( \frac{\Omega_j \Omega_k}{\Omega_i} + \frac{\Omega_i \Omega_j}{\Omega_k} - \frac{\Omega_k \Omega_i}{\Omega_j} \bigg) \]
we find that the self-dual cases of the Bianchi-IX metric (keeping in mind that $\varepsilon_{jki} = \varepsilon_{ijk} = \varepsilon_{kij} = 1$) give us the Lagrange (Euler-top) system
\vspace{-0.5cm}
\[ \begin{split}
\frac{\dot{\Omega}_j}{\Omega_j} + \frac{\dot{\Omega}_k}{\Omega_k} - \frac{\dot{\Omega}_i}{\Omega_i} &= \mp \varepsilon_{jki} \bigg( \frac{\Omega_k \Omega_i}{\Omega_j} + \frac{\Omega_i \Omega_j}{\Omega_k} - \frac{\Omega_j \Omega_k}{\Omega_i} \bigg) \\
& \hspace{0.5cm} + \\
\frac{\dot{\Omega}_i}{\Omega_i} + \frac{\dot{\Omega}_k}{\Omega_k} - \frac{\dot{\Omega}_j}{\Omega_j} &= \mp \varepsilon_{kij} \bigg( \frac{\Omega_j \Omega_k}{\Omega_i} + \frac{\Omega_i \Omega_j}{\Omega_k} - \frac{\Omega_k \Omega_i}{\Omega_j} \bigg) \\
& \hspace{0.5cm} \big{\downarrow}
\end{split} \]
\begin{equation}
\hspace{1cm} \therefore \hspace{1cm} 2 \frac{\dot{\Omega}_k}{\Omega_k} = \mp 2 \frac{\Omega_i \Omega_j}{\Omega_k} \hspace{1cm} \Rightarrow \hspace{1cm} \boxed{\dot{\Omega}_k = \mp \Omega_i \Omega_j}
\end{equation}
where throughout, the derivative (denoted by a dot) is taken with respect to $r$.
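As a brief integrability remark (ours): the Lagrange system admits the first integrals
\[
\frac{d \ }{dr} \big( \Omega_i^2 - \Omega_j^2 \big) = 2 \Omega_i \dot{\Omega}_i - 2 \Omega_j \dot{\Omega}_j = \mp 2 \, \Omega_1 \Omega_2 \Omega_3 \pm 2 \, \Omega_1 \Omega_2 \Omega_3 = 0\ ,
\]
so the differences $\Omega_i^2 - \Omega_j^2$ are conserved and, as for the Euler top, the system is solved in terms of Jacobi elliptic functions.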
\subsection{Curvature-wise self-duality - the classical Darboux-Halphen system}
Since we have already covered connection-wise self-duality, let us explore a stronger version known as curvature-wise self-duality. This emphasizes and expands upon the property of self-duality, generalizing it beyond connection 1-forms. This means that curvature-wise self-duality does not invalidate, but rather implies, connection-wise self-duality \cite{ETH}, and hence part of the dynamical system derived from it should have the same form as the Lagrange system. \\
The Cartan-structure equation for Riemann curvature is
\begin{equation}
\label{cartan2} R_{ij} = d \omega_{ij} + \omega_{im} \wedge {\omega^m}_j
\end{equation}
The self-duality of curvature demands that
\begin{equation}
\label{curvsd} R_{0i} = \frac12 { \varepsilon_{0i} }^{jk} R_{jk} = R_{jk}
\end{equation}
Now, for the LHS and RHS of (\ref{curvsd}), we have
\vspace{-0.5cm}
\begin{align}
\label{lhs} \text{LHS} \hspace{1cm} R_{0i} &= d \omega_{0i} + \omega_{0j} \wedge {\omega^j}_i + \omega_{0k} \wedge {\omega^k}_i \\ \nonumber \\
\label{rhs} \text{RHS} \hspace{1cm} R_{jk} &= d \omega_{jk} + \omega_{j0} \wedge {\omega^0}_k + \omega_{ji} \wedge {\omega^i}_k \nonumber \\
&= d \omega_{jk} - \omega_{0j} \wedge {\omega^0}_k - \omega_{ji} \wedge \omega_{ki}
\end{align}
Now, for (anti-) self-duality of the connection forms, as employed in the previous section, we shall be able to eliminate some of the later terms of (\ref{lhs}) and (\ref{rhs}), since
\begin{equation}
\label{condual} \omega_{ij} = \pm \frac12 { \varepsilon_{ij} }^{k0} \omega_{k0} = \mp \omega_{k0} \hspace{1cm} \Rightarrow \hspace{1cm} \omega_{ji} \pm \omega_{0k} = 0
\end{equation}
This leaves us with the equation shown below and its solution that follows, adapted from the previous subsection.
\vspace{-0.35cm}
\begin{equation}
d \omega_{0i} = \pm d \omega_{jk}
\end{equation}
\vspace{-0.5cm}
\[ \begin{split}
\Rightarrow \hspace{1cm} &\partial_r \bigg( \frac{\partial_r c_i}{c_0} \bigg) dr \wedge \sigma^i + \frac{\partial_r c_i}{c_0} d \sigma^i = \mp \partial_r \bigg( \frac{ c_j^2 + c_k^2 - c_i^2 }{2 c_j c_k} \bigg) dr \wedge \sigma^i \mp \frac{ c_j^2 + c_k^2 - c_i^2 }{2 c_j c_k} \ d \sigma^i \\
\Rightarrow \hspace{1cm} &\partial_r \bigg( \frac{\partial_r c_i}{c_0} \bigg) = \mp \partial_r \bigg( \frac{ c_j^2 + c_k^2 - c_i^2 }{2 c_j c_k} \bigg) \hspace{1cm} \Rightarrow \hspace{1cm} \frac{\partial_r c_i}{c_0} = \mp \varepsilon_{jki} \frac{ c_j^2 + c_k^2 - c_i^2 }{2 c_j c_k} + \lambda_{jk} \\ \\
& \hspace{1cm} \Rightarrow \hspace{1cm} 2 \partial_r \big( \ln c_i \big) = \mp \big( c_j^2 + c_k^2 - c_i^2 \big) + 2 \lambda_{jk} c_j c_k
\end{split} \]
Thus, as before, upon parametrization we shall have
\vspace{-0.5cm}
\begin{align}
\frac{\dot{\Omega}_j}{\Omega_j} + \frac{\dot{\Omega}_k}{\Omega_k} - \frac{\dot{\Omega}_i}{\Omega_i} &= \mp \bigg( \frac{\Omega_k \Omega_i}{\Omega_j} + \frac{\Omega_i \Omega_j}{\Omega_k} - \frac{\Omega_j \Omega_k}{\Omega_i} \bigg) + 2 \lambda_{jk} \Omega_i \\ \nonumber \\
\frac{\dot{\Omega}_i}{\Omega_i} + \frac{\dot{\Omega}_k}{\Omega_k} - \frac{\dot{\Omega}_j}{\Omega_j} &= \mp \bigg( \frac{\Omega_j \Omega_k}{\Omega_i} + \frac{\Omega_i \Omega_j}{\Omega_k} - \frac{\Omega_k \Omega_i}{\Omega_j} \bigg) + 2 \lambda_{ik} \Omega_j
\end{align}
Adding up these two results, just like before, will now give us
\vspace{-0.25cm}
\begin{equation}
\label{dhalpsys} \dot{\Omega}_k = \mp \Omega_i \Omega_j + \lambda_{jk} \Omega_k \Omega_i + \lambda_{ik} \Omega_k \Omega_j
\end{equation}
where setting $\lambda_{ij} = - 1 \hspace{0.25cm} \forall \hspace{0.25cm} i, j$ in (\ref{dhalpsys}) for anti-self-duality proceeds to give us the classical Darboux-Halphen system
\begin{equation}
\therefore \hspace{1cm} \boxed{\dot{\Omega}_k = \Omega_i \Omega_j - \Omega_k \big( \Omega_i + \Omega_j \big)}
\end{equation}
Thus, we can see that the curvature-wise self-duality extends upon the characteristic system of the connection-wise self-duality, making the Darboux-Halphen system a suitable candidate for further development beyond the Lagrange system. Clearly, the first term has included the dynamical aspect of the Lagrange system, as the property of self-duality of the connection 1-forms being preserved, aside from an additive constant involved and was extended to their exterior derivatives. Needless to say, connection-wise self-duality must precede curvature-wise self-duality, and the latter is not possible without ensuring the former.
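A simple exact solution illustrates the flow (our example): the isotropic ansatz $\Omega_1 = \Omega_2 = \Omega_3 = \Omega(r)$ reduces all three equations to
\[
\dot{\Omega} = \Omega^2 - 2 \Omega^2 = - \Omega^2 \qquad \Longrightarrow \qquad \Omega(r) = \frac1{r - r_0}\ ,
\]
with $r_0$ an integration constant. (The general solution, by contrast, is known to possess a movable natural boundary, which is why it is expressed through modular forms.)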
\subsection{Self-dual curvature components}
So far, we have managed to study a great deal about the Bianchi-IX geometry without confronting the work of extracting the curvature components. Now, in this subsection, we will proceed to do exactly that, using the imposed (anti-)self-duality properties at our disposal to make our job easier. But first, we shall prove, and later explicitly confirm in this case, that all curvature-wise self-dual manifolds are Ricci-flat. \\
We recall from \cite{CGR1} that the Riemann curvature tensor for (anti-)self-dual metrics on the vierbein space can be written as:
\begin{equation}
\label{sdcur} R_{abcd} = \mathcal{G}_{ij} (\vec{x}) \eta^{(\pm)i}_{ab} \eta^{(\pm)j}_{cd} \hspace{1cm} i, j = 1, 2, 3; \hspace{0.5cm} a, b, c, d = 0, 1, 2, 3
\end{equation}
This means the Ricci tensor for Euclidean vierbein space is given by
\begin{equation}
\mathbb{R}_{ac} = \delta^{bd} R_{abcd} = \delta^{bd} \mathcal{G}_{ij} (\vec{x}) \eta^i_{ab} \eta^j_{cd} = \mathcal{G}_{ij} (\vec{x}) \delta^{ij} \delta_{ac} = \Big( \text{Tr} \big[ \mathcal{G} (\vec{x}) \big] \Big) \delta_{ac}
\end{equation}
Clearly, the above result implies that the Ricci tensor has only diagonal elements, which allows us to demonstrate that
\[ \mathbb{R}_{aa} = R_{abab} + R_{acac} + R_{adad} \xrightarrow{\text{self-duality}} \pm \big( R_{abcd} + R_{acdb} + R_{adbc} \big) \xrightarrow{\text{Bianchi identity}} 0 \]
\vspace{-0.25cm}
\begin{equation}
\therefore \hspace{1cm} \boxed{ \mathbb{R}_{aa} = 0 } \hspace{1cm} \Rightarrow \hspace{1cm} \boxed{ \text{Tr} \big[ \mathcal{G} (\vec{x}) \big] = 0}
\end{equation}
Showing that curvature-wise self-dual manifolds are undoubtedly Ricci-flat.
\[ \boxed{\boxed{\text{Self-duality} \hspace{0.5cm} \Longrightarrow \hspace{0.5cm} \text{Ricci-flatness}}} \]
Returning to the original co-ordinates, we have:
\begin{equation}
\label{ric} \mathbb{R}_{\mu \nu} = \big( \mathcal{G}_{ij} (\vec{x}) \delta^{ij} \big) \big( \delta_{ac} {e^a}_\mu {e^c}_\nu \big) = \text{Tr} \big[ \mathcal{G} (\vec{x}) \big] g_{\mu \nu} ( \vec{x} )
\end{equation}
But, for a more thorough analysis, it would be better to directly obtain all the curvature components for detailed examination. This can be easily done as we already have the general formula for all the connection components. The results are made easier by keeping the self-duality of the connection forms (\ref{condual}) in mind.
\vspace{-0.25cm}
\[ \begin{split}
R_{0i} &= d \omega_{0i} + \omega_{0j} \wedge \omega_{ji} + \omega_{0k} \wedge \omega_{ki} \nonumber \\
&= d \omega_{0i} + \omega_{0j} \wedge \omega_{0k} - \omega_{0k} \wedge \omega_{0j} \nonumber = d \omega_{0i} + 2 \omega_{0j} \wedge \omega_{0k}
\end{split} \]
\begin{equation}
\therefore \hspace{1cm} R_{0i} = \underbrace{\frac1{c_0 c_i} \bigg( \frac{c_i '}{c_0} \bigg)'}_{R_{0i0i}} e^0 \wedge e^i - \underbrace{\frac1{c_j c_k} \bigg[ \frac{c_i '}{c_0} - 2 \frac{\big( c_j ' \big) \big( c_k ' \big)}{c_0^2} \bigg]}_{- R_{0ijk}} e^j \wedge e^k
\end{equation}
Now curvature wise anti-self-duality means
\begin{equation}
R_{0i0i} = - R_{0ijk} = - R_{jk0i} = R_{jkjk}
\end{equation}
Demanding curvature wise anti-self-duality gives us the differential equation
\begin{equation}
\frac1{c_0 c_i} \bigg( \frac{c_i '}{c_0} \bigg)' = \frac1{c_j c_k} \bigg[ \frac{c_i '}{c_0} - 2 \frac{\big( c_j ' \big) \big( c_k ' \big)}{c_0^2} \bigg]
\end{equation}
Since we have connection wise anti-self-duality rule (\ref{consd}), we can say
\vspace{-0.5cm}
\begin{align}
R_{0i} = \frac{\varepsilon_{ijk}}{c_0 c_i} &\bigg( \frac{c_j^2 + c_k^2 - c_i^2}{2 c_j c_k} \bigg)' e^0 \wedge e^i - \frac{\varepsilon_{ijk}}{c_j c_k} \bigg[ \frac{c_j^2 + c_k^2 - c_i^2}{2 c_j c_k} - 2 \bigg( \frac{c_k^2 + c_i^2 - c_j^2}{2 c_k c_i} \bigg) \bigg( \frac{c_i^2 + c_j^2 - c_k^2}{2 c_i c_j} \bigg) \bigg] e^j \wedge e^k \nonumber \\ \nonumber \\
\therefore \hspace{0.5cm} R_{0i} &= \underbrace{\frac{\varepsilon_{ijk}}{c_0 c_i} \bigg( \frac{c_j^2 + c_k^2 - c_i^2}{2 c_j c_k} \bigg)'}_{R_{0i0i}} e^0 \wedge e^i - \underbrace{\varepsilon_{ijk} \frac{c_i^2 \big( c_j^2 + c_k^2 - c_i^2 \big) - c_i^4 + \big( c_j^2 - c_k^2 \big)^2}{2 c_0^2}}_{- R_{0ijk}} e^j \wedge e^k
\end{align}
Due to curvature wise anti-self-duality being considered, we should have:
\begin{equation}
R_{0i0i} = - R_{0ijk} = \varepsilon_{ijk} \frac{\big( c_i^2 + c_j^2 + c_k^2 \big) \big( c_j^2 + c_k^2 \big) - 2 c_i^4 - 4 c_j^2 c_k^2}{2 c_0^2}
\end{equation}
Thus, we can say that the curvature 2-form is given by
\begin{equation}
\boxed{R_{ab} = \sum_{i = 1}^3 \varepsilon_{ijk} \frac{\big( c_i^2 + c_j^2 + c_k^2 \big) \big( c_j^2 + c_k^2 \big) - 2 c_i^4 - 4 c_j^2 c_k^2}{2 c_0^2} \ \bar{\eta}^i_{ab} \bar{\eta}^i_{cd} e^c \wedge e^d}
\end{equation}
which on comparison with (\ref{sdcur}) tells us that
\vspace{-0.5cm}
\begin{align}
\mathcal{G}_{il} (\vec{x}) &= \varepsilon_{ijk} \frac{\big( c_i^2 + c_j^2 + c_k^2 \big) \big( c_j^2 + c_k^2 \big) - 2 c_i^4 - 4 c_j^2 c_k^2}{2 c_0^2} \delta_{il} \\
\text{Tr} \big[ \mathcal{G} \big] &= \frac{2 \big( c_i^2 + c_j^2 + c_k^2 \big)^2 - 2 \big( c_i^2 + c_j^2 + c_k^2 \big)^2}{2 c_0^2} = 0
\end{align}
Thus, the Ricci tensor, and consequently scalar on vierbein space is given by
\vspace{-0.25cm}
\begin{equation}
\boxed{\mathbb{R}_{ab} = 0, \hspace{1cm} \mathbb{R} = 0}
\end{equation}
confirming what was already proven previously.
\subsection{Special case: The Taub-NUT}
The Taub-NUT is a special case of the Bianchi-IX metric connected to the classical Darboux-Halphen system. It is an exact solution of Einstein's equations, found by Abraham Haskel Taub (1951) and extended to a larger manifold by E. Newman, T. Unti and L. Tamburino (1963), whose names together give the metric its name. It exhibits anti-self-duality, as can be seen from its curvature and $SU(2)$ gauge fields. Recently, we have conducted a detailed study of the Taub-NUT \cite{CGR2}, where we explored its self-duality, geometric properties, conserved quantities and Killing tensors. \\
The condition to set to obtain this metric is $\Omega_1 = \Omega_2 = \Omega \neq \Omega_3$. This consequently makes $c^2_1 = c^2_2 = c^2 = \Omega_3$ and converts the Bianchi-IX metric into the following form:
\begin{equation}
ds^2 = \Omega^2 \Omega_3 \ dr^2 + \Omega_3 \big( \sigma_1^2 + \sigma_2^2 \big) + \frac{\Omega^2}{\Omega_3} \sigma_3^2
\end{equation}
Naturally, we would have to define the parameters and rescale the radial co-ordinate:
\begin{equation}
dr = \frac1{2m} \frac{d \tilde{r}}{\Omega^2} \hspace{0.5cm} \Rightarrow \hspace{0.5cm} ds^2 = \frac{\Omega_3}{\Omega^2} \ \frac{d \tilde{r}^2}{4 m^2} + \Omega_3 \big( \sigma_1^2 + \sigma_2^2 \big) + \frac{\Omega^2}{\Omega_3} \sigma_3^2
\end{equation}
Finally, we can define the parameters as
\begin{equation}
\Omega = \frac{\tilde{r} - m}{2 m} \hspace{1cm} \Omega_3 = \frac{\tilde{r}^2 - m^2}{4 m^2}
\end{equation}
which means that the rescaling equation is
\begin{equation}
dr = \frac{2m}{ ( \tilde{r} - m )^2 } \ d \tilde{r} \hspace{1cm} \Rightarrow \hspace{1cm} r = k - \frac{2m}{ \tilde{r} - m }
\end{equation}
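One can verify directly (an added consistency check) that this parametrization satisfies the first-order system: using $dr = \frac{2m}{(\tilde{r} - m)^2} d\tilde{r}$,
\[
\frac{d \Omega}{dr} = \frac1{2m} \cdot \frac{(\tilde{r} - m)^2}{2m} = \Omega^2\ , \qquad
\frac{d \Omega_3}{dr} = \frac{\tilde{r}}{2m^2} \cdot \frac{(\tilde{r} - m)^2}{2m} = 2 \, \Omega \, \Omega_3 - \Omega^2\ ,
\]
which is the Darboux-Halphen flow with $\Omega_1 = \Omega_2 = \Omega$, up to the overall sign set by the orientation of the radial coordinate ($r \to -r$).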
This finally leads to Taub-NUT metric up to a rescaling conformal factor,
\vspace{-0.25cm}
\begin{equation}
ds^2 = \frac{\tilde{r} + m}{\tilde{r} - m} d \tilde{r}^2 + 4m^2 \frac{\tilde{r} - m}{\tilde{r} + m} \big( d \psi + \cos \theta \ d \phi \big)^2 + \big( \tilde{r}^2 - m^2 \big) \big( d \theta^2 + \sin^2 \theta \ d\phi^2 \big)
\end{equation}
\section{The generalized Darboux-Halphen system}
So far, we have dealt with the classical Darboux-Halphen system that arises as the connection-wise self-dual case of the Bianchi-IX metric. Now we shall proceed to derive a generalized version of this system starting from a different configuration as described in \cite{ACH, dhsstruc}.
\subsection{Reduction of the SDYM equation}
While the classical Darboux-Halphen system is an artifact of connection-wise self-duality of Bianchi-IX, this time we shall start by considering self-dual Yang-Mills equation and execute a reduction process on it.
\begin{equation}
\label{sdym} F_{ab} = - \frac12 {\varepsilon_{ab}}^{cd} F_{cd} \hspace{2cm} F_{ab} = \partial_a A_b - \partial_b A_a - \big[ A_a, A_b \big]
\end{equation}
If we set $A_0 = 0$ and all $A_i = A_i (t)$ only, then (\ref{sdym}) becomes the Nahm equation \cite{Nahm, Hitch, Donald}
\begin{equation}
\label{nahm} F_{0i} = \dot{A}_i \hspace{2cm} F_{ij} = - \big[ A_i, A_j \big] \hspace{1cm} \Rightarrow \hspace{1cm} \dot{A}_i = \frac12 {\varepsilon_i}^{jk} \big[ A_j, A_k \big]
\end{equation}
Now, the $A_i$s are functions from $\mathbb{R}^4$ to a Lie algebra $\mathfrak{g}$ given by
\begin{equation}
\label{fld} A_i = - M_{ij} (t) O_{jk} X_k
\end{equation}
where $O_{ij}$ is an SO(3) matrix, and $X_i$ are the generators of $\mathfrak{sdiff}(S^3)$ satisfying the relation $\big[ X_i, X_j \big] = \varepsilon_{ijk} X_k$. The matrix $M_{ij}$ is given as a sum of symmetric components $M_s$ and anti-symmetric components $M_a$
\vspace{-0.5cm}
\begin{align} \displaybreak[0]
\label{mat} M &= M_s + M_a = P (d + a) P^{-1} \hspace{2cm} \big( M_a \big)_{ij} = {\varepsilon_{ij}}^k \tau_k \\ \nonumber \\
\text{where } \hspace{1cm} &M_s = \left({\begin{array}{ccc}
\Omega_1 & 0 & 0\\
0 & \Omega_2 & 0\\
0 & 0 & \Omega_3
\end{array} } \right) \hspace{1cm}
M_a = \left({\begin{array}{ccc}
0 & \tau_3 & - \tau_2\\
- \tau_3 & 0 & \tau_1\\
\tau_2 & - \tau_1 & 0
\end{array} } \right)
\end{align}
The equation we get on applying (\ref{fld}) and (\ref{mat}) to (\ref{nahm}) is:
\begin{equation}
\dot{M} = \big( \text{Adj} (M) \big)^T + M^T M - \text{Tr} (M) . M
\end{equation}
and taking the diagonal parts gives us the generalized Darboux-Halphen system equations.
\begin{equation}
\label{gendh} \dot{\Omega}_i = \Omega_j \Omega_k - \Omega_i \big( \Omega_j + \Omega_k \big) + \tau^2 \hspace{2cm} \tau^2 = \tau_1^2 + \tau_2^2 + \tau_3^2
\end{equation}
The off-diagonal terms taken together give us
\begin{equation}
\label{teq} \dot{\tau}_i = - \tau_i \big( \Omega_j + \Omega_k \big) \hspace{2cm} \tau_i^2 = \alpha_i^2 \big( \Omega_j - \Omega_i \big) \big( \Omega_i - \Omega_k \big)
\end{equation}
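Before turning to the solutions, a minimal numerical sketch (ours, with hypothetical initial data) integrates the system above and checks the relation $\dot{x}_i = - 2 \Omega_i x_i$ for $x_i = \Omega_j - \Omega_k$ derived in the next subsection; the $\tau^2$ term cancels in the difference.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, u):
    """Generalized Darboux-Halphen flow for (Omega_1..3, tau_1..3)."""
    O, tau = u[:3], u[3:]
    tau2 = np.sum(tau ** 2)
    dO, dtau = np.empty(3), np.empty(3)
    for i in range(3):
        j, k = (i + 1) % 3, (i + 2) % 3
        dO[i] = O[j] * O[k] - O[i] * (O[j] + O[k]) + tau2
        dtau[i] = -tau[i] * (O[j] + O[k])
    return np.concatenate([dO, dtau])

u0 = np.array([1.0, 0.6, 0.3, 0.1, 0.05, 0.02])  # hypothetical data
# keep the interval short to stay clear of movable singularities
sol = solve_ivp(rhs, (0.0, 0.5), u0, rtol=1e-10, atol=1e-12,
                dense_output=True)

u = sol.sol(0.25)
O, du = u[:3], rhs(0.25, u)
# d(Omega_2 - Omega_3)/dt versus -2 Omega_1 (Omega_2 - Omega_3)
print(du[1] - du[2], -2 * O[0] * (O[1] - O[2]))
\end{verbatim}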
Now we shall consider equations that hold for both the classical and the generalized systems, together with their general solutions.
\subsection{Solutions of the generalized system}
While the generalized system has a different set of equations from the classical version owing to the extra term $\tau^2$, if we choose to designate the variables as
\vspace{-0.25cm}
\begin{equation}
x_i = \Omega_j - \Omega_k
\end{equation}
we should obtain equations similar to \cite{dhsstruc}. Then, using the generalized Darboux-Halphen equations, we can write
\vspace{-0.5cm}
\begin{align} \displaybreak[0]
\label{drv1} \dot{x}_i = \dot{\Omega}_j - \dot{\Omega}_k = - 2 \Omega_i x_i \hspace{1cm} &\Rightarrow \hspace{1cm} \Omega_i = - \frac12 \big[ \ln x_i \big]' \\ \nonumber \\
\label{drv2} \Rightarrow \hspace{1cm} \bigg[ \ln \bigg( \frac{x_j}{x_i} \bigg) \bigg]' = \bigg[ \ln &\bigg( - \frac{x_j}{x_i} \bigg) \bigg]' = 2 x_k
\end{align}
This equation applies to both the generalized and the classical Darboux-Halphen systems. If we choose to define a variable $s$ as:
\begin{equation}
s = - \frac{x_2}{x_1} \hspace{1cm} \Rightarrow \hspace{1cm} s - 1 = - \frac{x_2 + x_1}{x_1} = \frac{\Omega_1 - \Omega_2}{\Omega_2 - \Omega_3} = \frac{x_3}{x_1}
\end{equation}
then using (\ref{drv1}) and (\ref{drv2}), we should find as in \cite{fintgradflo, fintgdh} that
\[ \frac{\dot{s}}{s \big( s - 1 \big)} = 2 x_1 \hspace{1cm} \frac{\dot{s}}{s - 1} = - 2 x_2 \hspace{1cm} \frac{\dot{s}}{s} = 2 x_3 \]
\begin{equation}
\label{hw1}
\boxed{\Omega_1 = - \frac12 \bigg[ \ln \bigg( \frac{\dot{s}}{s \big( s - 1 \big)} \bigg) \bigg]' \hspace{0.75cm} \Omega_2 = - \frac12 \bigg[ \ln \bigg( \frac{\dot{s}}{s - 1} \bigg) \bigg]' \hspace{0.75cm} \Omega_3 = - \frac12 \bigg[ \ln \bigg( \frac{\dot{s}}{s} \bigg) \bigg]'}
\end{equation}
Now the off-diagonal anti-symmetric terms, recalling (\ref{teq}), give rise to equations:
\[ \frac{\dot{\tau}_1}{\tau_1} = \frac12 \bigg[ \ln \bigg( \frac{\dot{s}^2}{s ( s - 1 )} \bigg) \bigg]' \hspace{0.5cm}
\frac{\dot{\tau}_2}{\tau_2} = \frac12 \bigg[ \ln \bigg( \frac{\dot{s}^2}{s^2 ( s - 1 )} \bigg) \bigg]' \hspace{0.5cm}
\frac{\dot{\tau}_3}{\tau_3} = \frac12 \bigg[ \ln \bigg( \frac{\dot{s}^2}{s ( s - 1 )^2} \bigg) \bigg]' \]
with the following solutions:
\begin{equation}
\label{hw2}
\boxed{\tau_1 = \kappa_1 \frac{\dot{s}}{\sqrt{s ( s - 1 )}} \hspace{1cm} \tau_2 = \kappa_2 \frac{\dot{s}}{s \sqrt{( s - 1 )}} \hspace{1cm} \tau_3 = \kappa_3 \frac{\dot{s}}{\sqrt{s} ( s - 1 )} }
\end{equation}
where $s$ satisfies the Schwarzian equation given by
\vspace{-0.5cm}
\begin{equation}
\begin{split}
\big\{ s, t \big\} + \frac{\dot{s}^2}2 V (s) &= 0 \\ \\
\text{where} \hspace{0.75cm} \big\{ s, t \big\} := \frac{d \ }{dt} \bigg( \frac{\ddot{s}}{\dot{s}} \bigg) - \frac12 \bigg( \frac{\ddot{s}}{\dot{s}} \bigg)^2 \hspace{1cm} V (s) &= \frac{1 - 4 \kappa_2^4}{s^2} + \frac{1 - 4 \kappa_3^4}{(s - 1)^2} + \frac{ \kappa_2^4 + \kappa_3^4 - \kappa_1^4 - 1}{s(s - 1)}
\end{split}
\end{equation}
Thus, if we concentrate only on the diagonal terms, we are able to express the metric related to the classical Darboux-Halphen system.
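For orientation (a remark we add): in the classical limit $\kappa_1 = \kappa_2 = \kappa_3 = 0$ the potential reduces to
\[
V (s) = \frac1{s^2} + \frac1{(s - 1)^2} - \frac1{s (s - 1)}\ ,
\]
and the Schwarzian equation is solved by a zero-angle Schwarz triangle function; $s(t)$ may then be identified, up to modular transformations, with the elliptic modular function $\lambda$, whose logarithmic derivatives via (\ref{hw1}) reproduce Halphen's classical solution.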
\subsection{Non-existence of a metric for the generalized system}
Naturally, one can suspect that the classical Darboux-Halphen system, and consequently the Bianchi-IX metric, is the result of setting $\tau_i = 0$. For the classical system, we should have the metric coefficients given by the diagonal matrix
\label{coef} h_{class} = \left({\begin{array}{cccc}
\Omega_1 \Omega_2 \Omega_3 & 0 & 0 & 0\\
0 & \dfrac{\Omega_2 \Omega_3}{\Omega_1} & 0 & 0\\
0 & 0 & \dfrac{\Omega_3 \Omega_1}{\Omega_2} & 0\\
0 & 0 & 0 & \dfrac{\Omega_1 \Omega_2}{\Omega_3}
\end{array} } \right)
\end{equation}
Now, we notice that the matrix describing the metric $h$ can be given by
\vspace{-0.5cm}
\begin{equation}
\label{matlat} h_{class} = M_{class}^{-1} \ \text{Adj} \big( M_{class} \big) \hspace{2cm} \text{where } \hspace{0.75cm} M_{class} = \left({\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & \Omega_1 & 0 & 0\\
0 & 0 & \Omega_2 & 0\\
0 & 0 & 0 & \Omega_3
\end{array} } \right)
\end{equation}
where $M_{class}$ is the matrix that produces the classical Darboux-Halphen system. With this in mind, we see that the generalized Darboux-Halphen system (\ref{gendh}) seems to arise from a matrix $M_{gen}$ given as
\begin{equation}
M_{gen} = \left({\begin{array}{cccc}
1 & 0 & 0 & 0\\
0 & \Omega_1 & \tau_3 & - \tau_2\\
0 & - \tau_3 & \Omega_2 & \tau_1\\
0 & \tau_2 & - \tau_1 & \Omega_3
\end{array} } \right)
\end{equation}
However, gauge fields do not always have vierbein or metric counterparts, in the way that curvature and connection components do. This shall be elaborated further as follows. \\
The torsion-free form of the 1st Cartan structure equation is
\vspace{-0.25cm}
\begin{equation}
d e^i = - {\omega^i}_j \wedge e^j
\end{equation}
Further examination reveals that
\vspace{-0.25cm}
\[ \partial_\mu {e^i}_\nu \ dx^\mu \wedge dx^\nu = - \omega^i_{\mu j} {e^j}_\nu \ dx^\mu \wedge dx^\nu \]
\vspace{-0.25cm}
\begin{equation}
\therefore \hspace{1cm} {E_j}^\nu \partial_\mu {e^i}_\nu = - \omega^i_{\mu j}
\end{equation}
Recalling that the spin connections can be expanded as shown below, we can say that
\vspace{-0.25cm}
\begin{align} \displaybreak[0]
\omega_{ij} = \eta^{(+)a}_{ij} A^{(+)a} &+ \eta^{(-)a}_{ij} A^{(-)a} \hspace{2cm} A^{(\pm)a} = A^{(\pm)a}_\mu dx^\mu \\ \nonumber \\
\therefore \hspace{1cm} {E_j}^\nu \partial_\mu {e^i}_\nu &= - \Big[ \eta^{(+)a}_{ij} A^{(+)a}_\mu + \eta^{(-)a}_{ij} A^{(-)a}_\mu \Big]
\end{align}
Now, we can obtain the individual SU(2)$_\pm$ gauge potential function components $A^{(\pm)}_\mu$ in terms of the vierbein components as follows
\vspace{-0.25cm}
\begin{equation}
A^{(\pm)a}_\mu = - \frac14 \eta^{(\pm)a}_{ij} {E_j}^\nu \partial_\mu {e^i}_\nu
\end{equation}
Thus, if we start with a metric or equivalently the vierbeins, we should be able to get the spin-connections and hence gauge fields, and from there go backwards, however, the opposite is not always possible.
Since the generalized Darboux-Halphen system is primarily a product of the reduction of the SDYM gauge fields, it may not always be possible to find a metric or its vierbeins that are related to it. The classical Darboux-Halphen system is a special case where $\tau_i = 0 \hspace{0.25cm} \forall \hspace{0.25cm} i$, for which we have the self-dual Bianchi-IX metric (gravitational instanton).
\numberwithin{equation}{section}
\section{Aspects of Flow equations}
Geometric flows describe the evolution of a metric on a Riemannian manifold along the path parameter, under a general non-linear equation, given a symmetric tensor $S_{ij}$ \cite{flowapp, BBLP}. Usually, a system that exhibits geometric flows satisfies the equation
\vspace{-0.25cm}
\begin{equation}
\frac{d g_{ij}}{d \tau} = S_{ij}
\end{equation}
where $S_{ij}$ is symmetric. A particular category of such flows, known as Ricci flow, has $S_{ij} = - R_{ij}$; such flows do not in general preserve volume elements and are described by the equation:
\vspace{-0.25cm}
\begin{equation}
\frac{d g_{ij}}{d \tau} = - R_{ij}
\end{equation}
The Ricci flow equation, introduced by Richard Hamilton in 1982, was a primary tool in Grigory Perelman's proof of Thurston's geometrization conjecture, the Poincar\'e conjecture being a special case of it. Ricci flow exhibits many similarities with the heat equation: it gives manifolds more uniform geometry, smooths out irregularities, and has proven to be a very useful tool in understanding the topology of arbitrary Riemannian manifolds.
Now, we have already shown that Darboux-Halphen systems are Ricci-flat, which means that such a system is a fixed point of the Ricci flow, as is usual for gravitational instantons, which are extremal points of the Euclidean Einstein-Hilbert action \cite{HSW}. Looking at the Darboux-Halphen equations, we can see that for the Bianchi-IX metric
\vspace{-0.5cm}
\[ \begin{split}
\frac{d \big( c_i^2 \big) }{d \tau \ \ } = c_i^2 \big( &c_j^2 + c_k^2 - c_i^2 - 2 c_j c_k \big) = c_i^2 \big[ \big( c_j - c_k \big)^2 - c_i^2 \big] \\
\frac{d \big( c_0^2 \big) }{d \tau \ \ } = \frac{d \ }{d \tau} &\big( c_1 c_2 c_3 \big)^2 = c_0^2 \bigg\{ c_i^2 + c_j^2 + c_k^2 - 2 \big( c_i c_j + c_j c_k + c_k c_i \big) \bigg\}
\end{split} \]
Thus, we have the following equations:
\begin{equation}
\boxed{\begin{split}
\frac{d \big( c_0^2 \big) }{d \tau \ \ } &= c_0^2 (\vec{x}) \Big[ c_i^2 + c_j^2 + c_k^2 - 2 \big( c_i c_j + c_j c_k + c_k c_i \big) \Big] \\
\frac{d \big( c_i^2 \big)}{d \tau \ \ } &= c_i^2 (\vec{x}) \big[ \big( c_j - c_k \big)^2 - c_i^2 \big]
\end{split}}
\end{equation}
Now if we set $c_0 = 1$ for the co-ordinate rescaling $dt = c_0 (\vec{x}) \ d r$, then we should get
\begin{equation}
\boxed{\begin{split}
c_i^2 + c_j^2 &+ c_k^2 = 2 \big( c_i c_j + c_j c_k + c_k c_i \big) \\
2 \frac{d c_i }{d \tau} &= \frac1{c_j c_k} \big[ \big( c_j - c_k \big)^2 - c_i^2 \big]
\end{split}}
\end{equation}
which matches and re-confirms the results obtained in \cite{BBLP} where the Ricci tensor for such Bianchi-IX geometry is given by the RHS of the above equation, showing that it does exhibit Ricci flow. For a more general Darboux-Halphen system, the result would be of the form:
\vspace{-0.25cm}
\[ \frac{d \big( c_i^2 \big) }{d \tau \ \ } = \ c_i^2 \Big[ c_j^2 + c_k^2 - c_i^2 - 2 \big( \beta_{ij} \ c_i c_j + \beta_{jk} \ c_j c_k + \beta_{ki} \ c_k c_i \big) \Big] \]
where \ \ $2 \beta_{ij} = \lambda_{jk} - \lambda_{ik}, \hspace{0.5cm} 2 \beta_{jk} = \lambda_{ji} + \lambda_{ki}, \hspace{0.5cm} 2 \beta_{ki} = \lambda_{kj} - \lambda_{ij}$
\[ \text{and} \hspace{1cm} \frac{d \big( c_0^2 \big) }{d \tau \ \ } = \frac{d \ }{d \tau} \big( c_1 c_2 c_3 \big)^2 = c_0^2 (\vec{x}) \big[ c_i^2 + c_j^2 + c_k^2 - 2 \big( \alpha_{ij} \ c_i c_j + \alpha_{jk} \ c_j c_k + \alpha_{ki} \ c_k c_i \big) \big] \]
where $2 \alpha_{ij} = \lambda_{ik} + \lambda_{jk}, \hspace{0.5cm} 2 \alpha_{jk} = \lambda_{ji} + \lambda_{ki}, \hspace{0.5cm} 2 \alpha_{ki} = \lambda_{kj} + \lambda_{ij}$
\begin{equation}
\boxed{\begin{split}
\frac{d \big( c_0^2 \big) }{d \tau \ \ } &= c_0^2 (\vec{x}) \big[ c_i^2 + c_j^2 + c_k^2 - 2 \big( \alpha_{ij} \ c_i c_j + \alpha_{jk} \ c_j c_k + \alpha_{ki} \ c_k c_i \big) \big] \\
\frac{d \big( c_i^2 \big) }{d \tau \ \ } &= c_i^2 (\vec{x}) \big[ c_j^2 + c_k^2 - c_i^2 - 2 \big( \beta_{ij} \ c_i c_j + \beta_{jk} \ c_j c_k + \beta_{ki} \ c_k c_i \big) \big]
\end{split}}
\end{equation}
Thus, the self-dual Bianchi-IX metric exhibits Ricci flow, which is otherwise known to be described by the Darboux-Halphen system governing the evolution of $SU(2)$-homogeneous 3D geometries.
\section{Other related systems}
The Darboux-Halphen system has analogues and equivalents in various forms of quadratic and non-linear differential equations. In this section, we will describe them in detail.
\subsection{Ramamani to Darboux-Halphen}
The Ramamani system \cite{Ramamani-thesis, Ramamani-paper} is described by the following differential equations
\vspace{-0.5cm}
\begin{equation}
\label{ramamani}
\begin{split}
q\frac{d{\cal P}}{dq} &= \frac{{\cal P}^2 - {\cal Q}}{4} \\
q\frac{d{\tilde {\cal P}}}{dq} &= \frac{{\tilde {\cal P}}{\cal P} - {\cal Q}}{2} \\
q\frac{d{\cal Q}}{dq} &= {\cal P}{\cal Q} - {\tilde {\cal P}}{\cal Q}
\end{split}
\end{equation}
In a recent paper Ablowitz et al. \cite{ach} showed that Ramamani's system of differential equations is equivalent to a third order scalar nonlinear ODE found by Bureau \cite{bu}, whose solutions are given implicitly by a Schwarzian triangle function. Under a suitable variable transformation, the Ramamani system produces the classical Darboux-Halphen system. \\
Written in terms of $t$ with $q = e^{2 i \pi t}$, so that $q \dfrac{d \ }{dq} = \dfrac1{2 i \pi} \dfrac{d \ }{dt}$, the Ramamani system (\ref{ramamani}) is described by the equations
\vspace{-0.5cm}
\begin{equation}
\label{ramamani2}
\begin{split}
\dot{\mathcal{P}} &= \frac{i \pi}2 \big( {\mathcal{P}}^2 - \mathcal{Q} \big) \\
\dot{\widetilde{\mathcal{P}}} &= i \pi \big( \mathcal{P} \widetilde{\mathcal{P}} - \mathcal{Q} \big) \\
\dot{\mathcal{Q}} &= 2 i \pi \big( \mathcal{P} - \widetilde{\mathcal{P}} \big) \mathcal{Q}
\end{split}
\end{equation}
We convert to Darboux-Halphen variables $\big( {\mathcal P}, \widetilde{{\mathcal P}}, {\mathcal Q} \big) \rightarrow \big( X, Y, Z \big)$ \cite{yurisimonalexey} as follows
\begin{equation}
{\mathcal P} = \frac2{i \pi} X \hspace{1cm} \widetilde{{\mathcal P}} = \frac1{i \pi} \big( 2 X - Y - Z \big) \hspace{1cm} {\mathcal Q} = \frac4{\pi^2} \big( Z - X \big) \big( X - Y \big)
\end{equation}
Naturally, if we apply the above transformation to the Ramamani equations (\ref{ramamani2}), we shall get the classical Darboux-Halphen system equations.
\vspace{-0.5cm}
\[ \begin{split}
\dot{\mathcal P} = \frac2{i \pi} \dot{X} &= \frac{i \pi}2 \bigg\{ - \frac4{\pi^2} X^2 - \frac4{\pi^2} \big( XY + XZ - YZ - X^2 \big) \bigg\} \\
\Rightarrow \hspace{1cm} &- \frac4{\pi^2} \dot{X} = - \frac4{\pi^2} \big( XY + XZ - YZ \big)
\end{split} \]
and hence, we get one Darboux-Halphen equation in the form
\begin{equation}
\dot{X} = X \big( Y + Z \big) - YZ
\end{equation}
For the others, the process is more elaborate although quite straightforward to show.
\vspace{-0.5cm}
\[ \begin{split}
\dot{\widetilde{\mathcal P}} = \frac1{i \pi} \big( 2 \dot{X} - \dot{Y} - \dot{Z} \big) &= i \pi \bigg\{ - \frac4{\pi^2} X^2 + \frac2{\pi^2} X \big( Y + Z \big) - \frac4{\pi^2} \big( XY + XZ - YZ - X^2 \big) \bigg\} \\
\Rightarrow \hspace{1cm} - \frac1{\pi^2} \big( 2 \dot{X} - \dot{Y} - \dot{Z} \big) &= \frac2{\pi^2} \big( \dot{X} + YZ \big) - \frac4{\pi^2} \dot{X}
\end{split} \]
\begin{equation}
\label{dhtype2} \Rightarrow \hspace{1cm} \dot{Y} + \dot{Z} = 2 YZ
\end{equation}
\[ \begin{split}
\dot{\mathcal Q} = \frac4{\pi^2} \big[ \big( \dot{Z} - \dot{X} \big) \big( X &- Y \big) + \big( Z - X \big) \big( \dot{X} - \dot{Y} \big) \big] = \frac8{\pi^2} \big( Y + Z \big) \big(Z - X \big) \big( X - Y \big) \\
\Rightarrow \hspace{1cm} \big( \dot{Z} - Z^2 &\big) \big( X - Y \big) + \big( Y^2 - \dot{Y} \big) \big( Z - X \big) = 0
\end{split} \]
Using (\ref{dhtype2}), we have another DH equation
\begin{equation}
\dot{Y} = Y \big( Z + X \big) - ZX
\end{equation}
Naturally, using (\ref{dhtype2}) again, we should get the final equation
\begin{equation}
\dot{Z} = Z \big( X + Y \big) - XY
\end{equation}
Consequently, we have three sets of equations for the classical Darboux-Halphen system
\[ \boxed{\begin{split}
\dot{X} = X \big( Y + Z \big) - YZ \\
\dot{Y} = Y \big( Z + X \big) - ZX \\
\dot{Z} = Z \big( X + Y \big) - XY
\end{split}} \]
This is, therefore, another system related to the Ramanujan equations \cite{Ramanujan-arith, Ramanujan-collect} via the classical DH system that both converge to. We must note that the Ramamani system gives rise to the self-dual (and not anti-self-dual) Darboux-Halphen equations; inverting the sign of the Halphen variables gives the familiar anti-self-dual system.
\subsection{The Chazy equation}
Now, we shall see how the solution of the Darboux-Halphen system satisfies the Chazy equation \cite{Chazy, qdschzeq}. Let us take the previous result for anti-self-duality and $\lambda_{ij} = -1$ and write it for all values of $i, j, k$
\vspace{-0.5cm}
\begin{equation}
\begin{split}
\dot{\Omega}_1 = \Omega_2 \Omega_3 - \Omega_1 \big( \Omega_2 + \Omega_3 \big) \\
\dot{\Omega}_2 = \Omega_3 \Omega_1 - \Omega_2 \big( \Omega_3 + \Omega_1 \big) \\
\dot{\Omega}_3 = \Omega_1 \Omega_2 - \Omega_3 \big( \Omega_1 + \Omega_2 \big) \\
\end{split}
\end{equation}
Adding up will give
\begin{equation}
\dot{\Omega}_1 + \dot{\Omega}_2 + \dot{\Omega}_3 = - \big( \Omega_1 \Omega_3 + \Omega_2 \Omega_1 + \Omega_3 \Omega_2 \big)
\end{equation}
If we define $y = - 2 \big( \Omega_1 + \Omega_2 + \Omega_3 \big)$, we will have
\vspace{-0.25cm}
\begin{align}
\dot{y} &= 2 \big( \Omega_1 \Omega_3 + \Omega_2 \Omega_1 + \Omega_3 \Omega_2 \big) \\
\ddot{y} &= - 12 \ \Omega_1 \Omega_2 \Omega_3
\end{align}
Thus, the third order derivative will be
\vspace{-0.25cm}
\[ \begin{split}
\dddot{y} &= - 12 \big[ \big\{ \Omega_2 \Omega_3 - \Omega_1 \big( \Omega_2 + \Omega_3 \big) \big\} \Omega_2 \Omega_3 + \Omega_1 \big\{ \Omega_3 \Omega_1 - \Omega_2 \big( \Omega_3 + \Omega_1 \big) \big\} \Omega_3 \\
& \hspace{3cm} + \Omega_1 \Omega_2 \big\{ \Omega_1 \Omega_2 - \Omega_3 \big( \Omega_1 + \Omega_2 \big) \big\} \big] \\
&= 48 \ \Omega_1 \Omega_2 \Omega_3 \big( \Omega_1 + \Omega_2 + \Omega_3 \big) - 12 \ \big( \Omega_2 \Omega_3 + \Omega_3 \Omega_1 + \Omega_1 \Omega_2 \big)^2 \\
&= 2 y \ddot{y} - 3 \dot{y}^2
\end{split} \]
Thus, we obtain the Chazy equation \cite{Chazy}
\begin{equation}
\label{chazy} \boxed{\frac{d^3 y}{dt^3} = 2 y \frac{d ^2 y}{d t^2} - 3 \bigg( \frac{d y}{d t} \bigg)^2}
\end{equation}
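As an independent symbolic check (our addition, not part of the original derivation), the reduction can be verified with a few lines of computer algebra:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
O1, O2, O3 = (sp.Function(f'Omega{i}')(t) for i in (1, 2, 3))

# classical Darboux-Halphen right-hand sides
dh = {O1.diff(t): O2 * O3 - O1 * (O2 + O3),
      O2.diff(t): O3 * O1 - O2 * (O3 + O1),
      O3.diff(t): O1 * O2 - O3 * (O1 + O2)}

y = -2 * (O1 + O2 + O3)

def d(expr):
    """Differentiate, eliminating Omega' via the Darboux-Halphen system."""
    return sp.expand(expr.diff(t).subs(dh))

y1, y2, y3 = d(y), d(d(y)), d(d(d(y)))
print(sp.simplify(y3 - (2 * y * y2 - 3 * y1 ** 2)))   # prints 0
\end{verbatim}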
\numberwithin{equation}{subsection}
\subsection{The Ramanujan equation}
In case of the classical Chazy system, the Ramanujan equations \cite{Ramanujan-arith, Ramanujan-collect} are given by
\vspace{-0.25cm}
\begin{equation}
\label{ram}
\begin{split}
\dot{P} &= \frac{i \pi}6 \big( P^2 - Q \big) \\
\dot{Q} &= \frac{2 i \pi}3 \big( P Q - R \big) \\
\dot{R} &= i \pi \big( P R - Q^2 \big)
\end{split}
\end{equation}
To understand how they are related to the Chazy equation, we shall examine what they imply systematically. From the first equation of (\ref{ram}), we find that
\begin{equation}
\label{a} Q = P^2 - \frac6{i \pi} \dot{P} \hspace{1cm} \Rightarrow \hspace{1cm} Q = Q ( P, \dot{P} )
\end{equation}
Applying (\ref{a}) to the second equation of (\ref{ram}), we get
\[ \dot{Q} = \dot{Q} ( P, \dot{P}, \ddot{P} ) = 2 \bigg( P \dot{P} - \frac3{i \pi} \ddot{P} \bigg) \]
\begin{equation}
\label{b} \Rightarrow \hspace{0.5cm} R = R (P, \dot{P}, \ddot{P} ) = PQ - \frac3{2 i \pi} \dot{Q} = P^3 - \frac9{i \pi} P \dot{P} - \frac9{\pi^2} \ddot{P}
\end{equation}
Finally, using result (\ref{b}) in the last equation of (\ref{ram}), we will get
\[ \dot{R} = i \pi \big( P R - Q^2 \big) = 3 P^2 \dot{P} - \frac9{i \pi} \big( \dot{P}^2 + P \ddot{P} \big) - \frac9{\pi^2} \dddot{P} \]
\begin{equation}
\therefore \hspace{1cm} \dddot{P} + i \pi \big( 3 \dot{P}^2 - 2 P \ddot{P} \big) = 0
\end{equation}
However, we are not there yet. The final step exploits the scaling freedom of this non-linear equation: writing $P = \dfrac{y}{i \pi}$, we arrive at
\begin{equation}
\hspace{1cm} \dddot{y} = 2 y \ddot{y} - 3 \dot{y}^2
\end{equation}
This final result is the classical Chazy equation (\ref{chazy}) we are familiar with.
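The elimination argument above can also be verified symbolically. Here is a minimal sketch (assuming SymPy), using only the first two equations of (\ref{ram}) to eliminate $Q$ and $R$:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
P = sp.Function('P')(t)
c = sp.I * sp.pi

Q = P**2 - 6/c * sp.diff(P, t)                 # from the first equation
R = P*Q - sp.Rational(3, 2)/c * sp.diff(Q, t)  # from the second equation

# residual of the third Ramanujan equation vs. the Chazy combination
residual = sp.diff(R, t) - c*(P*R - Q**2)
chazy = sp.diff(P, t, 3) + c*(3*sp.diff(P, t)**2 - 2*P*sp.diff(P, t, 2))
print(sp.simplify(residual + sp.Rational(9)/sp.pi**2 * chazy))  # 0
\end{verbatim}
The residual works out to exactly $-\frac{9}{\pi^2}$ times the Chazy combination, so it vanishes precisely when the latter does.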
\subsection{Generalized Chazy System}
If we start instead with the generalized Darboux-Halphen equations and set the coefficients to $\alpha_1 = \alpha_2 = \alpha_3 = \frac2n$, we will get the corresponding generalized Chazy equation \cite{fintgradflo}.
\begin{equation}
\boxed{\frac{d^3 y}{dt^3} - 2 y \frac{d ^2 y}{d t^2} + 3 \bigg( \frac{d y}{d t} \bigg)^2 = \frac4{36 - n^2} \bigg( 6 \frac{d y}{d t} - y^2 \bigg)^2}
\end{equation}
The modified Ramanujan system that leads to the above generalized Chazy equation turns out to be:
\vspace{-0.5cm}
\begin{equation}
\label{gram} \begin{split}
\dot{P} &= \frac{i \pi}6 \big( P^2 - Q \big) \\
\dot{Q} &= \frac{2 i \pi}3 \big( P Q - R \big) \\
\dot{R} &= i \pi \bigg[ P R - Q^2 \bigg( 1 - \frac{36}{36 - n^2} \bigg) \bigg]
\end{split}
\end{equation}
The first two equations of (\ref{gram}) are the same as for (\ref{ram}), so the same steps will follow as with (\ref{a}) and (\ref{b}), but for the last step, we will have
\begin{equation}
\dddot{P} + i \pi \big( 3 \dot{P}^2 - 2 P \ddot{P} \big) = - \frac{4 i \pi^3}{36 - n^2} \bigg( P^2 - \frac6{i \pi} \dot{P} \bigg)^2
\end{equation}
Applying the same variable redefinition $P = \dfrac{y}{i \pi}$ as before, we obtain
\begin{equation}
\frac{d^3 y}{dt^3} - 2 y \frac{d ^2 y}{d t^2} + 3 \bigg( \frac{d y}{d t} \bigg)^2 = \frac4{36 - n^2} \bigg( 6 \frac{d y}{d t} - y^2 \bigg)^2
\end{equation}
which is exactly the generalized Chazy equation described before.
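The same symbolic check goes through in the generalized case. A sketch continuing the SymPy computation above (with $n$ kept symbolic; it reuses $P$, $Q$, $R$, and $c$ defined there):
\begin{verbatim}
n = sp.symbols('n')
resid_gen = sp.diff(R, t) - c*(P*R - Q**2*(1 - 36/(36 - n**2)))
gen_chazy = (sp.diff(P, t, 3)
             + c*(3*sp.diff(P, t)**2 - 2*P*sp.diff(P, t, 2))
             + 4*sp.I*sp.pi**3/(36 - n**2) * Q**2)
print(sp.simplify(resid_gen + sp.Rational(9)/sp.pi**2 * gen_chazy))  # 0
\end{verbatim}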
\numberwithin{equation}{section}
\section{Integrability of the Bianchi-IX}
Various conditions can be imposed on a 4-dimensional Riemannian metric: it could be K\"ahler or Einstein, or have an anti-self-dual (ASD) Weyl tensor. The Venn diagram below depicts the various possibilities, each resulting in different field equations in 4 dimensions, with the intersection zones corresponding to particularly interesting conditions.
\vspace{-0.25cm}
\begin{center}
\includegraphics[scale=0.65]{Venn_diag}
\end{center}
If ($e^0, e^1, e^2, e^3$) define the vierbeins on a Riemannian 4-manifold, the basis of self-dual 2-forms is given as
\vspace{-0.25cm}
\begin{equation}
* \eta^i = \eta^i = \eta^i_{ab} e^a \wedge e^b:
\begin{cases}
\eta^1 = e^0 \wedge e^1 + e^2 \wedge e^3 \\
\eta^2 = e^0 \wedge e^2 + e^3 \wedge e^1 \\
\eta^3 = e^0 \wedge e^3 + e^1 \wedge e^2
\end{cases}
\end{equation}
Similarly, the anti-self-dual 2-form basis is given by
\vspace{-0.25cm}
\begin{equation}
* \bar{\eta}^i = - \bar{\eta}^i = \bar{\eta}^i_{ab} e^a \wedge e^b:
\begin{cases}
\bar{\eta}^1 = e^0 \wedge e^1 - e^2 \wedge e^3 \\
\bar{\eta}^2 = e^0 \wedge e^2 - e^3 \wedge e^1 \\
\bar{\eta}^3 = e^0 \wedge e^3 - e^1 \wedge e^2
\end{cases}
\end{equation}
If ${\omega^i}_j$ are the spin connection 1-forms for the self-dual part, then the first Cartan equation is given by
\vspace{-0.25cm}
\begin{equation}
d \eta^i = {\omega^i}_j \wedge \eta^j
\end{equation}
The curvature 2-form is given as usual by the 2nd Cartan structure equation (\ref{cartan2}). It is possible to expand the curvature in terms of $\eta^i$ and $\bar{\eta}^i$ as
\vspace{-0.25cm}
\begin{equation}
R_{ij} = {W_{ij}}^k \eta_k + {\Phi_{ij}}^k \bar{\eta}_k
\end{equation}
where $W_{ijk}$ \& $\Phi_{ijk}$ are unknown coefficients determined by the conditions imposed by the various field equations.
\subsection*{Conditions determining $W_{ijk}$ \& $\Phi_{ijk}$}
The 1st Bianchi identity ${R^i}_j \wedge \eta^j = 0$ implies $W_{ijj} = 0$, which in turn implies that $W_{ijk}$ has 6 independent components: 5 correspond to the SD Weyl tensor and one, the totally anti-symmetric part, to the Ricci scalar. On the other hand, $\Phi_{ijk}$ has 9 components corresponding to the trace-free Ricci tensor.
\begin{enumerate}
\item Iff $W_{ijk} = \Lambda \epsilon_{ijk}$, where $\Lambda$ is a multiple of Ricci scalar, we are in Set A $\Rightarrow$ ASD Weyl.
\item Iff $W_{ijk} = \Lambda \epsilon_{ijk}$ and $\Phi_{ijk} = 0$, then we have ASD Einstein ($A \cap B$).
\item Iff $W_{ijk} = 0 = \Phi_{ijk}$, we are in $(A \cap B \cap C)$ which is hyper-K\"ahler.
\end{enumerate}
Returning to the Bianchi-IX metric (\ref{bianchiix}) and using the parametrization (\ref{pmtr}), it can be written in terms of the basis $(\sigma^1, \sigma^2, \sigma^3)$ of left-invariant forms on $SU(2)$ as
\begin{equation}
\label{bianchiixnew} ds^2 = \big[ \Omega_1 (r) \Omega_2 (r) \Omega_3 (r) \big] dr^2 + \frac{\Omega_2 \Omega_3}{\Omega_1} \sigma_1^2 + \frac{\Omega_3 \Omega_1}{\Omega_2} \sigma_2^2 + \frac{\Omega_1 \Omega_2}{\Omega_3} \sigma_3^2
\end{equation}
where the $\Omega_i, \forall i = 1, 2, 3$ are functions of $r$ and the $\sigma^i$ satisfy the Maurer-Cartan equations. From this form of the metric, the vierbeins can be used to produce the SD 2-forms:
\begin{equation}
\eta^i = \Omega_j \Omega_k \ dr \wedge \sigma^i + \Omega_i \ \sigma^j \wedge \sigma^k \hspace{1cm} i \neq j \neq k
\end{equation}
Therefore, the connection 1-forms $\omega_{ij}$ can be written in terms of arbitrary functions $A_i$ of $r$, $\forall i = 1, 2, 3$ such that
\vspace{-0.25cm}
\begin{equation}
\omega_{12} = \frac{A_3}{\Omega_3} \sigma^3 + (\text{cyclic permutations})
\end{equation}
All $A_i$ components are obtained from the system below:
\begin{equation}
\label{system1} \dot{\Omega}_i = \Omega_j \Omega_k - \Omega_i \big( A_j + A_k \big) \hspace{2cm} i \neq j \neq k = 1, 2, 3
\end{equation}
We will refer to this as the first system in what follows. \\
With the help of Cartan calculus, one can find the curvature 2-forms in terms of derivatives of the $A_i$. Choosing field equations by restricting ourselves to a specific region of the Venn diagram, we will obtain a second system of 1st-order differential equations involving the $A_i$.
If we choose regions outside the top circle, we typically get non-integrable equations. Dancer and Strachan \cite{DS} already showed this for Einstein-K\"ahler ($B \cap C$), while Barrow \cite{B} showed the same for Einstein (Set $B$). However, field equations belonging to the top circle $A$ are integrable, as expected from the heuristic yet concrete argument of Mason \cite{Ma} that self-duality implies integrability. \\
Imposing the vanishing of the ASD Weyl tensor and the scalar curvature results in the following system of equations for the $A_i$, widely known as the Chazy system \cite{AC, Ch}. This system has a long history, having been studied and solved in the 19th century by Brioschi \cite{Br}.
\begin{equation}
\label{system2} \dot{A}_i = A_j A_k - A_i \big( A_j + A_k \big) \hspace{2cm} i \neq j \neq k = 1, 2, 3
\end{equation}
\newpage
We shall now list the following features of the first and second systems:
\begin{enumerate}
\item[i)] If all $A_i = 0$, then the connection of the ASD 2-forms is clearly flat and the metric describes a vacuum. This case was found by Belinsky et al. \cite{BGPP} and Eguchi-Hanson \cite{EH}.
\item[ii)] If $\Omega_i = A_i, \forall \ i$, then the first and second systems of equations are identical, which is precisely the Atiyah and Hitchin's \cite{AH} case.
\item[iii)] If one insists that all $A_i$ be constant in $r$, then without loss of generality two of them must vanish, and the remaining $A_i \neq 0$ reduces the first system to a special case of Painlev\'e-III \cite{Tod}. Moreover, the form $\eta^i$ is covariantly constant in this case, so that one arrives at the Pedersen-Poon scalar-flat K\"ahler metric \cite{PP}.
\item[iv)] There exists a significant conserved quantity
\vspace{-0.25cm}
\begin{equation}
\label{conserved}
Q = \frac{\Omega_1^2}{(A_1 - A_2)(A_1 - A_3)} + \frac{\Omega_2^2}{(A_2 - A_1)(A_2 - A_3)} + \frac{\Omega_3^2}{(A_3 - A_1)(A_3 - A_2)}
\end{equation}
\end{enumerate}
There is a covariance under fractional linear transformations in $r$ \cite{Tod}, which means that the solutions of the second system with $A_1 = A_2 \neq 0$ are conformally related to the Pedersen-Poon case \cite{PP}. \\
Now, for a general solution of the second system, we introduce a new dependent variable $x$ as per Brioschi \cite{Br}
\vspace{-0.25cm}
\begin{equation}
x = \frac{A_1 - A_2}{A_3 - A_2}
\end{equation}
It is now straightforward to show that (\ref{system2}) reduces to the 3rd order ODE for $x$
\vspace{-0.25cm}
\begin{equation}
\dddot{x} = \frac32 \frac{\ddot{x}^2}{\dot{x}} - \frac12 \big( \dot{x} \big)^3 \bigg( \frac1{x^2} + \frac1{x (x - 1)} + \frac1{(x - 1)^2} \bigg)
\end{equation}
A remarkable fact is that this ODE is satisfied by the reciprocal of the elliptic modular function. Since the elliptic modular function has a natural boundary in the $r$-plane, the $A_i$, and hence the $\Omega_i$, have a natural boundary in the $r$-plane whose location depends on the constants of integration. This shows that integrability of the self-duality equations is not equivalent to the Painlev\'e property: the system is integrable despite lacking it. \\
Now we introduce new dependent variables $\rho_i, \forall i = 1, 2, 3$ according to
\vspace{-0.25cm}
\begin{equation}
\Omega_1 = \rho_1 \frac{\dot{x}}{\sqrt{ x (1 - x)}} \hspace{1cm}
\Omega_2 = \rho_2 \frac{\dot{x}}{ x \sqrt{(1 - x)}} \hspace{1cm}
\Omega_3 = \rho_3 \frac{\dot{x}}{\sqrt{x} (1 - x)}
\end{equation}
and switch the independent variable from $r$ to $x$ (i.e., $\dot{x} \equiv \frac{dx}{dr}$), so that the first system becomes
\vspace{-0.25cm}
\begin{equation}
\frac{d \rho_1}{dx} = \frac{\rho_2 \rho_3}{x (1 - x)} \hspace{2cm}
\frac{d \rho_2}{dx} = \frac{\rho_3 \rho_1}x \hspace{2cm}
\frac{d \rho_3}{dx} = \frac{\rho_1 \rho_2}{(1 - x)}
\end{equation}
This system is known to reduce to Painlev\'e VI, with the first integral
\begin{equation}
\gamma = \frac12 \big( \rho_2^2 + \rho_3^2 - \rho_1^2 \big) = const
\end{equation}
which is in fact the conserved quantity (\ref{conserved}), with the new metric being
\begin{equation}
ds ^2 = \frac{\rho_1 \rho_2 \rho_3}{x (1 - x)} \dot{x} \bigg[ \frac{d x^2}{x (1 - x)} + \frac{(\sigma^1 )^2}{\rho_1^2} + \frac{(1 - x)(\sigma^2 )^2}{\rho_2^2} + \frac{x(\sigma^3 )^2}{\rho_3^2} \bigg]
\end{equation}
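Before proceeding, note that the conservation of $\gamma$ follows immediately from the transformed first system:
\[
\frac{d \gamma}{d x} = \rho_2 \frac{d \rho_2}{d x} + \rho_3 \frac{d \rho_3}{d x} - \rho_1 \frac{d \rho_1}{d x} = \rho_1 \rho_2 \rho_3 \bigg( \frac1x + \frac1{1 - x} - \frac1{x (1 - x)} \bigg) = 0
\]
since, over the common denominator $x(1-x)$, the bracket is $(1 - x) + x - 1 = 0$.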
We shall now solve the new version of the first system by forming an equation for $\rho_3$ alone. Due to the existence of the first integral $\gamma$, this will be a second-order equation. To recognize it better, we introduce a new independent variable $z$, given by
\begin{equation}
x = \frac{4 \sqrt{z}}{\big(1 + \sqrt{z} \big)^2}
\end{equation}
and a new dependent variable $V$ given by
\begin{equation}
\rho_3 = \frac zV \frac{d V}{d z} - \frac V{2 (z - 1)} - \frac12 + \frac z{2 V (z - 1)}
\end{equation}
It is not too tedious to show that $V$ satisfies the Painlev\'e-VI equation with parameters ($\alpha, \beta, \gamma, \delta$) in the notation of \cite{Ince}, or ($\frac18, - \frac18, \gamma, \frac12 (1 - 2 \gamma)$) in the notation of \cite{AC}. Thus, we see that the equation for the conformal factor
\begin{equation}
\Theta = \dfrac{\rho_1 \rho_2 \rho_3}{x (1 - x)} \dfrac{d x}{d r}
\end{equation}
has the Painlev\'e property, but it also contains the function $x(r)$, which has a natural boundary. The choice of conformal factor is equivalent to a gauge choice making the Ricci scalar vanish. \\
We have now found the general solution of the metric (\ref{bianchiixnew}) inside the top circle of the figure (the shaded region of $A$). Note also that ASD Bianchi-IX metrics are not always diagonal in the chosen invariant basis of 1-forms. We can always adjust the conformal factor $\Theta$ to make this ASD Bianchi-type metric Einstein. This constitutes a metric for the region $A \cap B$; such quaternionic-K\"ahler-type metrics are diagonal in this basis.
The solution for the conformal factor was found in \cite{Tod95} with $\gamma = \dfrac18$, by writing down the desired condition as a set of equations on $\Theta$ and solving it. After a little algebra, once the dust settles, we get
$\Theta = \dfrac{N}{D^2}$, with
\vspace{-0.5cm}
\begin{equation}
\begin{split}
N &= 2 \rho_1 \rho_2 \rho_3 ( 4x \rho_1 \rho_2 \rho_3 + P ) \\
P &= x \big( \rho_1^2 + \rho_2^2 \big) - (1 - 4 \rho_3^2 ) \big( \rho_2^2 - (1 - x) \rho_1^2 \big) \\
D &= x \rho_1 \rho_2 + 2 \rho_3 \big( \rho_2^2 - (1 - x) \rho_1^2 \big)
\end{split}
\end{equation}
Since the equation for $\rho_3$ is a 2nd-order differential equation, the metric depends on two arbitrary constants. It is worth mentioning that there exist ASD Einstein metrics on $S^4$ which, with appropriate choices of field equations, fill in the general left-invariant metric on $S^3$, much as the 4-dimensional hyperbolic metric fills in the round metric on $S^3$.
\section{Conclusions}
Summarizing, we have seen that there exist two different approaches that lead up to the classical Darboux-Halphen system. One approach starts from the Bianchi-IX metric and considers its anti-self-dual case, while the other starts with a reduction of the self-dual Yang-Mills equation and takes only the diagonal elements of the resulting matrix equation. When we start with SDYM gauge fields, it is clear why we cannot always reliably find a metric or its vierbeins corresponding to the generalized DH system. The classical configuration is a typical prototype where this is possible, as seen from the way we obtained it from the self-dual Bianchi-IX metric.
We have computed the form of the curvature and confirmed that the self-dual cases turn out to be Ricci-flat. We also found that the classical Darboux-Halphen system exhibits Ricci flow for a modified Bianchi-IX system. Finally, we confirmed that this system satisfies the Chazy equation and is strongly related to other systems of differential equations, such as the Ramanujan and Ramamani systems.
It remains a challenge to find other integrable systems of number-theoretic importance. Another useful direction could be to solve DH-type systems (\ref{hw1}) and (\ref{hw2}) using moving-monodromy methods and compare the results. We are also trying to find interesting 1+1-dimensional or 2+1-dimensional DH-type systems that can be solved using the inverse scattering transform and studied to uncover several widely known aspects of integrability. Another testing ground could be various scalar-flat K\"ahler metrics, namely the LeBrun metrics with
$U(1)$ isometry, which contain the Gibbons-Hawking, real heaven, and Burns metrics as special limits and were used to test the bottom-up approach of emergent gravity \cite{Lee:2012px}. Using monodromy-evolving deformation \cite{PRL} on Pleba\'nski-type self-dual Einstein equations, which are in fact the EOM obtained from the 2-dimensional chiral $U(N)$ model in the large-$N$ limit, and studying their integrability might clarify some important issues for the test of emergent gravity \cite{Lee:2012rb}.
\section{APPENDIX}
\subsection*{'t Hooft symbols}
The matrices representing the 't Hooft symbols are given by:
\vspace{-0.25cm}
\begin{align}
\eta^{(+)1} = \left({\begin{array}{cccc}
0 & 0 & 0 & 1\\
0 & 0 & 1 & 0\\
0 & -1 & 0 & 0\\
-1 & 0 & 0 & 0
\end{array} } \right) \hspace{0.5cm}
\eta^{(+)2} = \left({\begin{array}{cccc}
0 & 0 & -1 & 0\\
0 & 0 & 0 & 1\\
1 & 0 & 0 & 0\\
0 & -1 & 0 & 0
\end{array} } \right) \hspace{0.5cm}
\eta^{(+)3} = \left({\begin{array}{cccc}
0 & 1 & 0 & 0\\
-1 & 0 & 0 & 0\\
0 & 0 & 0 & 1\\
0 & 0 & -1 & 0
\end{array} } \right)\\
\eta^{(-)1} = \left({\begin{array}{cccc}
0 & 0 & 0 & -1\\
0 & 0 & 1 & 0\\
0 & -1 & 0 & 0\\
1 & 0 & 0 & 0
\end{array} } \right) \hspace{0.5cm}
\eta^{(-)2} = \left({\begin{array}{cccc}
0 & 0 & -1 & 0\\
0 & 0 & 0 & -1\\
1 & 0 & 0 & 0\\
0 & 1 & 0 & 0
\end{array} } \right) \hspace{0.5cm}
\eta^{(-)3} = \left({\begin{array}{cccc}
0 & 1 & 0 & 0\\
-1 & 0 & 0 & 0\\
0 & 0 & 0 & -1\\
0 & 0 & 1 & 0
\end{array} } \right)
\end{align}
which obey the following relations among themselves:
\vspace{-0.25cm}
\begin{equation}
\sum_{i = 1}^3 \eta^{(\pm)i}_{\mu \nu} \eta^{(\pm)i}_{\lambda \gamma} = \delta_{\mu \lambda} \delta_{\nu \gamma} - \delta_{\mu \gamma} \delta_{\nu \lambda} \pm \varepsilon_{\mu \nu \lambda \gamma}
\end{equation}
\vspace{-0.25cm}
\begin{align}\big[ \eta^{(\pm) i}, \eta^{(\pm) j} \big]_{\mu \nu} = -2 \epsilon^{ijk} &\eta^{(\pm) k}_{\mu \nu} \\ \nonumber\\
\big[ \eta^{(\pm) i}, \eta^{(\mp) j} \big]_{\mu \nu} = 0 \hspace{0.5cm} \Rightarrow \hspace{0.5cm} &\eta^{(\pm) i}_{\mu \rho} \eta^{(\mp) j}_{\rho \nu} = \eta^{(\pm) j}_{\nu \rho} \eta^{(\mp) i}_{\rho \mu}
\end{align}
\vspace{-0.25cm}
\begin{align}
\big\{ \eta^{(\pm) i}, \eta^{(\pm) j} \big\}_{\mu \nu} = -2 \delta^{ij} &\delta_{\mu \nu} \hspace{2cm} \\ \nonumber\\
\big\{ \eta^{(\pm) i}, \eta^{(\mp) j} \big\}_{\mu \nu} = 0 \hspace{0.5cm} \Rightarrow \hspace{0.5cm} &\eta^{(\pm) i}_{\mu \nu} \eta^{(\mp) j}_{\mu \nu} = 0
\end{align}
\begin{equation}
\therefore \hspace{0.5cm} \eta^{(\pm) i}_{\mu \lambda} \eta^{(\pm) j}_{\nu \lambda} = \delta^{ij} \delta_{\mu \nu} + \epsilon^{ijk} \eta^{(\pm) k}_{\mu \nu}
\end{equation}
\begin{equation}
\epsilon_{\mu \nu \lambda \gamma} \eta^{(\pm)i}_{\gamma \sigma} = \mp \big( \delta_{\sigma \lambda} \eta^{(\pm)i}_{\mu \nu} + \delta_{\sigma \mu} \eta^{(\pm)i}_{\nu \lambda} - \delta_{\sigma \nu} \eta^{(\pm)i}_{\mu \lambda} \big)
\end{equation}
\begin{equation}
\therefore \hspace{0.5cm} \epsilon^{ijk} \eta^{(\pm) j}_{\mu \nu} \eta^{(\pm) k}_{\rho \sigma} = \delta_{\mu \sigma} \eta^{(\pm) i}_{\rho \nu} - \delta_{\nu \sigma} \eta^{(\pm) i}_{\rho \mu} + \delta_{\rho \mu} \eta^{(\pm)i}_{\nu \sigma} - \delta_{\rho \nu} \eta^{(\pm)i}_{\mu \sigma}
\end{equation}
\section*{Acknowledgement}
One of the authors (SC) would like to thank Marios Petropoulos for several correspondences during the course of this project, and Thanu Padmanabhan for enlightening discussions on a subject related to this article. The research of RR was supported by FAPESP through Instituto de Fisica, Universidade de Sao Paulo, grant number 2013/17765-0.
|
2,869,038,155,473 | arxiv | \section{Introduction}
Matrix product states (MPS) were first developed by the physics community as compact representations of often intractable wave functions of complex quantum systems~\citep{perez2006matrix,Klumper1993MatrixPG,fannes1992finitely}, in parallel with the equivalent tensor-train decomposition~\citep{Oseledets2011TensorTrainD} developed in applied mathematics for high-order tensors. These tensor network models have been gaining popularity in machine learning especially as means of compressing highly-parameterized models \citep{novikov2015tensorizing,garipov2016ultimate,Yu2017LongtermFU,novikov2014putting}. There has also been recent interest in directly connecting ideas and methods from quantum tensor networks to machine learning \citep{stoudenmire2016supervised,han2018,guo2018matrix,huggins2019towards}. In particular, tensor networks have been used for probabilistic modeling as parameterizations of joint probability tensors \citep{glasser2019expressive,miller2020tensor,stokes2019probabilistic}. But the same problem has also been studied from various other perspectives. Notably, observable operator models \citep{jaeger2000observable} or predictive state representations (PSRs) \citep{SinghPSR} from the machine learning literature and stochastic weighted automata \citep{balle2014methods} are approaches to tackle essentially the same problem. While \citet{Thon2015} provide an overview discussing connections between PSRs and stochastic WA, their connection to MPS has not been extensively explored. At the same time, there exist many variants of tensor network models related to MPS that can be used for probabilistic modeling. \citet{glasser2019expressive} recently provided a thorough investigation of the relative expressiveness of various tensor networks for the \emph{non-uniform} case (where cores in the tensor decomposition need not be identical). However, to the best of our knowledge, similar relationships have not yet been established for the \emph{uniform} case. We address these issues by examining how various quantum tensor networks relate to aforementioned work in different fields, and we derive a collection of results analyzing the relationships in expressiveness between uniform MPS and their various subclasses.
The uniform case is important to examine for a number of reasons. The inherent weight sharing in uniform tensor networks leads to particularly compact models, especially when learning from highly structured data. This compactness becomes especially useful when we consider physical implementations of tensor network models in quantum circuits. For instance, \citet{glasser2019expressive} draw an equivalence between local quantum circuits and tensor networks; network parameters define gates that can be implemented on a quantum computer for probabilistic modeling. Uniform networks have fewer parameters, corresponding to a smaller set of quantum gates and greater ease of implementation on resource constrained near-term quantum computers. Despite the many useful properties of uniformity, the tensor-network literature tends to focus more on non-uniform models. We aim to fill this gap by developing expressiveness relationships for uniform variants.
We expect that the connections established in this paper will also open the door for results and methods in one area to be used in another. For instance, one of the proof strategies we adopt is to develop expressiveness relationships between subclasses of PSRs, and show how they also carry over to equivalent uniform tensor networks. Such cross fertilization also takes place at the level of algorithms. For instance, the learning algorithm for locally purified states (LPS) employed in \citet{glasser2019expressive} does not preserve uniformity of the model across time-steps, or enforce normalization constraints on learned operators. With the equivalence between uniform LPS and hidden quantum Markov models (HQMMs) established in this paper, the HQMM learning algorithm from \citet{adhikary2019expressiveness}, based on constrained optimization on the Stiefel manifold, can be adapted to learn uniform LPS \emph{while enforcing all appropriate constraints}. Similarly, spectral algorithms that have been developed for stochastic process models such as hidden Markov models (HMMs) and PSRs \citep{hsu2012spectral, siddiqi2010reduced, bailly2009grammatical} could also be adapted to learn uniform LPS and uniform MPS models. Spectral algorithms typically come with consistency guarantees, along with rigorous bounds on sample complexity. Such formal guarantees are less common in tensor network methods, such as variants of alternating least squares \citep{Oseledets2011TensorTrainD} or density matrix renormalization group methods \citep{dmrg}. On the other hand, tensor network algorithms tend to be better suited for very high-dimensional data; presenting an opportunity to adapt them to scale up algorithms for stochastic process models.
Finally, one of our key motivations is to simply provide a means of translating between similar models developed in different fields. While prior works \citep{glasser2019expressive,kliesch2014matrix,critch2014algebraic} have noted similarities between tensor networks, stochastic processes and weighted automata, many formal and explicit connections are still lacking, especially in the context of model expressiveness. It is still difficult for practitioners in one field to verify that the model classes they have been working with are indeed used elsewhere, given the differences in nomenclature and domain of application; simply having a dictionary to rigorously translate between fields can be quite valuable. Such a dictionary is particularly timely given the growing popularity of tensor networks in machine learning. We hope that the connections developed in this paper will help bring together complementary advances occurring in these various fields.
\paragraph{Summary of Contributions}
In Section 2, we demonstrate that uniform matrix product states (uMPS) are equivalent to predictive state representations and stochastic weighted automata, when taken in the \emph{non-terminating limit} (where we evaluate probabilities sufficiently far from the end of a sequence). Section 3 presents the known equivalence between uMPS with non-negative parameters, HMMs, and probabilistic automata, and Section 4 shows that another subclass of uMPS called Born machines (BM) \citep{han2018} is equivalent to norm-observable operator models (NOOM) \citep{Zhao2010b} and quadratic weighted automata (QWA) \citep{bailly2011quadratic}. We also demonstrate that uBMs and NOOMs are relatively restrictive model classes in that there are HMMs with no equivalent finite-dimensional uBM or NOOM (HMMs $\nsubseteq$ NOOMs/uBMs). Finally, in Section 5, we analyze a broadly expressive subclass of uMPS known as locally purified states (LPS), demonstrate its equivalence to hidden quantum Markov models, and discuss the open question of how the expressiveness of uLPS relates to that of uMPS. We thus develop a unifying perspective on a wide range of models coming from disparate communities, providing a rigorous characterization of how they are related to one another, as illustrated in Figures~\ref{fig:tensor_network_diagrams} and \ref{fig:model_subsets}. The proofs for all theorems are provided in the Appendix. In our presentation, we routinely point out connections between tensor networks and relevant concepts in physics. However, we note that these models are not restricted to this domain.
\paragraph{Notation}
We use bold-face for matrix and tensor operators (e.g. $\mathbf{A}$), arrows over symbols to denote vectors (e.g. $\vec{x}$), and plain non-bold symbols for scalars. The vector-arrows are also used to indicate vectorization (column-first convention) of matrices. We frequently make use of the ones matrix $\mathbf{1}$ (filled with $1$s) and the identity matrix $\mathds{I}$, as well as their vectorizations $\vec{\mathbf{1}}$ and $\vec{\mathds{I}}$. We use overhead bars to denote complex conjugates (e.g. $\bar{\mathbf{A}}$) and $\dagger$ for the conjugate transpose $(\bar{\mathbf{A}}^T = \mathbf{A}^\dagger)$. Finally, $\text{tr}(\cdot)$ denotes the trace operation applied to matrices, and $\otimes$ denotes the Kronecker product.
\section{Uniform Matrix Product States}
Given a sequence of $N$ observations, where each outcome can take $d_i$ values, the joint probability of any particular sequence $y_1, \ldots, y_N$ can be written using the following tensor-train decomposition, which gives an MPS:
\begin{align}
P(y_1,\ldots,y_N)
~
&\propto
~
\text{MPS}_{y_1,\ldots,y_N} \nonumber\\
&= \mathbf{A}^{[N],y_{N}} \mathbf{A}^{[N-1],y_{N-1}}
~
\dots
~
\mathbf{A}^{[2],y_{2}} \mathbf{A}^{[1],y_1}
\label{eq_mps}
\end{align}
where each $\mathbf{A}^{[i]}$ is a three-mode tensor \emph{core} of the MPS containing $d_i$ slices, with the matrix slice associated with outcome $Y_i$ denoted by $\mathbf{A}^{[i], y_i}$. Each slice is a $D_{i} \times D_{i-1}$ matrix, and the conventional choice (which we use in this paper) of \emph{open boundary conditions}\footnote{An alternate choice, \emph{periodic boundary conditions}, sets $D_0=D_N$ with $D_0, D_N \geq 1$ and uses a trace operation to evaluate the product of matrices in Equation~\ref{eq_mps}. MPS with periodic boundaries are equivalent to the tensor ring decomposition~\citep{Mickelin2018TensorRD}.} is to set $D_0=D_N=1$ (i.e. $\mathbf{A}^{[1],y_1}$ and $\mathbf{A}^{[N],y_N}$ are column and row vectors respectively). MPS with open boundaries are equivalent to tensor train (TT) decompositions, and we will define them over the complex field, a choice common in quantum physics and tensor network settings.
The maximal value of $D = \max_k D_k$ is also called the bond-dimension or the TT-rank \citep{glasser2019expressive} of the MPS.
\footnote{Operational characterizations of the bond dimension have been developed in quantum physics, in terms of entanglement~\citep{eisert2010} or the state space dimension of recurrent many-body dynamics which generate the associated wavefunction~\citep{Schoen2005SequentialGO}.} When the underlying dynamics are fixed (time-invariant), it is natural to take the MPS cores $\mathbf{A}^{[i]}$ to be identical.
In this paper, we will focus on the ``uniform'' case of identical cores, i.e., a uniform MPS or uMPS. uMPS models were first developed in the quantum physics community~\citep{perez2006matrix,2016TangentSM}, although employing a different probabilistic correspondence (Born machines as discussed later) than described below. As we will discuss, this corresponds naturally to Markovian dynamical systems; the notion of \emph{past being independent of future given the present} is encoded by the tensor train structure where each core only has two neighbours. While an MPS is inherently defined with respect to a fixed sequence length, a uMPS can be applied to sequences of arbitrary fixed or infinite length~\citep{Cirac2010InfiniteMP}. As there should be no distinction between the cores at different time steps in a uMPS, a natural representation is to fix two boundary vectors $(\vec{\sigma},~\vec{\rho}_0)$, leading to the following decomposition of the joint probability:
\begin{align}
P(y_1,\dots,y_N) &= \text{uMPS}_{y_1,\dots,y_N}
\nonumber \\
&= \vec{\sigma}^\dagger \mathbf{A}^{y_N} \mathbf{A}^{y_{N-1}}
\dots
\mathbf{A}^{y_2} \mathbf{A}^{y_1}
\vec{\rho}_0
\label{eq_umps}
\end{align}
\begin{figure}
\centering
\includegraphics[scale=0.37]{tn_diagrams.png}
\caption{Tensor network diagrams}
\label{fig:tensor_network_diagrams}
\end{figure}
To explore connections with arbitrary-length PSRs and WFAs, we will particularly focus on the \emph{non-terminating limit}. Consider that if we wished to compute the probability of some subsequence from $t= 1,\ldots,T$ of the $N$-length uMPS ($T < N$), we could compute $P(y_1, \ldots, y_T) = \sum_{y_N}\cdots\sum_{y_{T+1}}P(y_1,\ldots,y_T,y_{T+1},\ldots,y_N)$.
The non-terminating limit is essentially when we consider the uMPS to be infinitely long, i.e., we compute the probability of the subsequence in the limit as $N \to \infty$.
\begin{definition}[Non-terminating uMPS]
A non-terminating uMPS is an infinitely long uMPS where we can compute the probability of any sequence $P(y_1,\ldots,y_T)$ of length $T$ by marginalizing over infinitely many future observations, i.e. $P(y_1,\ldots,y_T) = \lim_{N\to\infty}\sum_{y_N}\cdots\sum_{y_{T+1}}P(y_1,\ldots,y_T,y_{T+1},\ldots,y_N)$.
\end{definition}
This is a natural approach to modeling arbitrary length sequences with Markovian dynamics; intuitively, if given an identical set of tensor cores at each time step, the probability of a sequence should not depend on how far it is from the `end' of the sequence. Similar notions are routinely used in machine learning and physics. In machine learning, it is common to discard the first few entries of sequences as ``burn-in'' to allow systems to reach their stationary distribution; in our case, the burn-in is applied to the end of the sequence instead. The non-terminating limit is also similar to the ``thermodynamic limit'' employed in many-body physics, which marginalizes over an infinite number of future \emph{and} past observations \citep{2016TangentSM}. Such limits reflect the behavior seen in the interior of large systems, and avoid more complicated phenomena which arise near the beginning or end of sequences.
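To make the limit concrete, here is a small illustrative sketch (assuming NumPy; the construction uses a valid non-negative uMPS, anticipating Section 3, and the seed is arbitrary). The marginalized prefix probability computed with an \emph{arbitrary} left boundary converges, as $N$ grows, to the value obtained from the fixed point of the transfer operator:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, d, T = 4, 3, 5            # bond dimension, alphabet size, prefix length

# a simple valid uMPS: non-negative stochastic cores (cf. Section 3)
B = rng.random((n, n)); B /= B.sum(axis=0)       # column-stochastic
C = rng.random((d, n)); C /= C.sum(axis=0)
cores = [np.diag(C[y]) @ B for y in range(d)]    # slices A^y
rho0 = rng.random(n); rho0 /= rho0.sum()         # right boundary
sigma = rng.random(n)                            # arbitrary left boundary

seq = rng.integers(0, d, T)
M = rho0.copy()
for y in seq:                                    # A^{y_T} ... A^{y_1} rho0
    M = cores[y] @ M

tau = sum(cores)                                 # transfer operator (= B here)
for N in [T, T + 5, T + 20, T + 80]:             # marginalize N - T futures
    num = sigma @ np.linalg.matrix_power(tau, N - T) @ M
    den = sigma @ np.linalg.matrix_power(tau, N) @ rho0   # normalization
    print(N, num / den)
print('fixed point value:', np.ones(n) @ M)      # 1^T is the fixed point here
\end{verbatim}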
\subsection{The Many Names of Matrix Product States}
\label{sec_psr}
While connections between MPS and hidden Markov models (HMM) have been widely noted, we point out that non-terminating uMPS models have been studied from various perspectives, and are referred to by different names in the literature, such as stochastic weighted finite automata (stochastic WFA)~\citep{balle2014spectral}, quasi-realizations~\citep{vidyasagar2011complete}, observable operator models (OOM)~\citep{jaeger2000observable}, and (uncontrolled) predictive state representations (PSR)~\citep{SinghPSR}. The latter three models are exactly identical (we just refer to them as uncontrolled PSRs in this paper) and come from the stochastic processes perspective, while stochastic WFA are slightly different in their formulation, in that they are more similar to (terminating) uMPS (see below). \citet{Thon2015} detail a general framework of \emph{sequential systems} to study how PSRs and WFA relate to each other.
\begin{figure}
\centering
\includegraphics[scale=0.34]{stochastic_process_subsets.png}
\caption{Subset relationships between stochastic process models, non-terminating uniform quantum tensor networks, and weighted automata, along with a summary of new relationships established in this paper. The grey area is potentially empty.}
\label{fig:model_subsets}
\end{figure}
\paragraph{Predictive State Representations} We write the stochastic process defined by an $n$--dimensional predictive state representation over a set of discrete observations $\mathcal{O}$ as a tuple $(\mathds{C}^n, \vec{\sigma}, \{{\bf \pmb{\tau}}_y\}_{y\in\mathcal{O}}, \vec{x}_0)$. The initial state $\vec{x}_0 \in \mathds{C}^n$ is normalized, as enforced by the linear evaluation functional $\vec{\sigma}$, i.e., $\vec{\sigma}^\dagger \vec{x}_0 = 1$, and the observable operators are constrained to have normalized marginals over observations $\vec{\sigma}^\dagger\sum_y \pmb{\tau}_y = \vec{\sigma}^\dagger$, i.e., $\vec{\sigma}^\dagger$ is a fixed point of the \emph{transfer operator} $\sum_y {\pmb{\tau}}_y$. The probability of arbitrary length sequences $y_1, \ldots, y_T \in \mathcal{O}^{T}$ is computed as $\vec{\sigma}^\dagger\pmb{\tau}_{y_T} \ldots \pmb{\tau}_{y_1}\vec{x}_0$, which should be non-negative for any sequence. Note that we simply require this to hold for a valid PSR; we do not explicitly enforce constraints to ensure this. This joint probability computation is identical to Equation~\ref{eq_umps}, where evaluation functional $\vec{\sigma}$ and the initial state $\vec{x}_0$ are analogous to the left and right boundary vectors of the uMPS, and the \emph{observable operators} $\pmb{\tau}_y$ correspond to the matrix slices $\mathbf{A}^{y}$. In this sense, both uMPS and PSRs define tensor-train decompositions of joint distributions for a given fixed number of observations $T$. The only difference is that a uMPS does not require its evaluation functional to be the fixed point of its transfer operator. However, as we now discuss, any arbitrary uMPS evaluation functional will eventually converge to the fixed point of its transfer operator in the non-terminating limit. The fixed point then becomes the \emph{effective} evaluation functional of the uMPS in this limit.
Since PSRs were formulated with dynamical systems in mind, we typically consider sequences of \emph{arbitrary} length, whose probabilities are determined via a hidden state which evolves under a time-invariant update rule: the state update conditioned on an observation $y_{t}$ is computed as $\vec{x}_t = \frac{\pmb{\tau}_{y_t}\vec{x}_{t-1}}{\vec{\sigma}^\dagger\pmb{\tau}_{y_t}\vec{x}_{t-1}}$ and the probability of an observation $y_{t}$ is $P(y_t|\vec{x}_{t-1}) = \vec{\sigma}^\dagger\pmb{\tau}_{y_t}\vec{x}_{t-1}$. This allows us to deal more flexibly with arbitrary length sequences as compared to a generic uMPS. This flexibility for arbitrary-length sequences is precisely why we consider non-terminating uMPS: we can compute the conditional probability of a sequence $P(y_{t}|y_{1:t-1})$ by marginalizing over all possible future observations with a relatively simple equation:
\begin{small}
\begin{align}\label{eq:condprob}
P(y_{t}|y_{1:t-1}) &= \frac{\sum_{y_N, \ldots, y_{t+1}} P(y_N, \ldots, y_{t+1}, y_t, y_{t-1}, \ldots y_1)}{\sum_{y_N, \ldots, y_{t}} P(y_N, \ldots, y_t, y_{t-1}, \ldots, y_1)} \nonumber \\
&= \frac{\vec{\sigma}^\dagger \left(\sum_y \pmb{\tau}_y\right)^{N-t} \pmb{\tau}_{y_{t}} \dots \pmb{\tau}_{y_1} \vec{\rho}_0}{\vec{\sigma}^\dagger \left(\sum_y \pmb{\tau}_{y}\right)^{N-t+1} \pmb{\tau}_{y_{t-1}} \dots \pmb{\tau}_{y_1} \vec{\rho}_0}
\end{align}
\end{small}
Thus the \emph{effective} evaluation functional $\vec{\sigma}^\dagger \left(\sum_y \pmb{\tau}_y\right)^{N-t}$ is a function of time and so different at every time step in general. However, if the transfer operator $\pmb{\tau} = \sum_y \pmb{\tau}_y$ has some fixed point $\vec{\sigma}_*^\dagger$, i.e., $\vec{\sigma}_*^\dagger \pmb{\tau} = \vec{\sigma}_*^\dagger$, then the effective evaluation functional at timestep $t$ (which is $N-t$ steps away from the left boundary of the uMPS) will eventually converge to $\vec{\sigma}_*$ given a long enough sequence.\footnote{In fact, the effective evaluation functional will converge at an exponential rate towards the fixed point, so that $\| \vec{\sigma}_t - \vec{\sigma}_*\|^2 = \mathcal{O}(\exp(- \tfrac{N-t}{\xi}))$, with a ``correlation length'' $\xi \simeq (1 - |\lambda_2| / |\lambda_1|)^{-1}$ set by the ratio of the largest and second largest eigenvalues of the transfer operator~\citep{orus2014}. Transfer operators with non-degenerate spectra can always be rescaled to have a unique fixed point, while matrices with degenerate spectra form a measure zero subset.} So, as long as we remain sufficiently far from the end of a sequence, the particular choice of the the left boundary vector does not matter.
Given that the non-terminating limit effectively replaces the uMPS evaluation functional $\vec{\sigma}$ by the fixed point $\vec{\sigma_*}$, consider what happens if we require $\vec{\sigma} = \vec{\sigma_*}$ to begin with, as is the case for PSRs. In this case, our effective evaluation functional remains independent of $t$, permitting a simple recursive state update rule that does not require fixing a prior sequence length or marginalizing over future observations. In this sense, a non-terminating uMPS is strictly equivalent to a PSR, a relationship which we will see holds between several other model families.
\begin{theorem}
Non-terminating uniform matrix product states are equivalent to uncontrolled predictive state representations.
\end{theorem}
If we do not consider the non-terminating limit of a uMPS, the subsequence length and boundary choice will affect the probability computed. Then, we technically only have an equivalence with PSRs for a fixed sequence length (when the evaluation functional is a fixed point of the transfer operator) and no notion of recursive state update.
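For concreteness, the recursive PSR filter described above can be written in a few lines; a minimal illustrative sketch (the function name and interface are ours, assuming NumPy):
\begin{verbatim}
import numpy as np

def psr_filter(sigma, taus, x0, seq):
    """Recursive PSR filtering: per-step conditionals P(y_t | y_{1:t-1})
    and the final normalized state, for a valid PSR whose evaluation
    functional sigma is the fixed point of the transfer operator."""
    x, probs = x0, []
    for y in seq:
        v = taus[y] @ x
        p = (sigma.conj() @ v).real   # P(y_t | y_{1:t-1}); real for valid PSRs
        probs.append(p)
        x = v / p                     # normalized state update
    return np.array(probs), x
\end{verbatim}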
\paragraph{Stochastic Weighted Automata} A weighted automaton~(WA) is a tuple $(\mathds{C}^n, \vec{\sigma}, \{{\bf \pmb{\tau}}_y\}_{y\in\mathcal{O}}, \vec{x}_0)$ which computes a function $f(y_1, \ldots, y_T) = \vec{\sigma}^\dagger\pmb{\tau}_{y_T} \ldots \pmb{\tau}_{y_1}\vec{x}_0$. In contrast with PSRs, no constraints are enforced on the weights of a weighted automaton in general. A weighted automaton is \emph{stochastic} if it computes a probability distribution. These models constitute another class of models equivalent to PSRs and uMPS, and represent probability distributions over sequences of symbols from an alphabet $\mathcal{O}$. It is worth mentioning that the semantics of the probabilities computed by PSRs and stochastic WAs can differ: while PSR typically maintain a recursive state and are used to compute the probability of a given sequence conditioned on some past sequence, stochastic WA are often used to compute the joint distributions over the set of all possible finite length sequences (just as in uMPS). We refer the reader to \citet{Thon2015} for a unifying perspective.
\section{Non-Negative uMPS, Hidden Markov Models, and Probabilistic Automata}
\label{sec_mps_hmm}
We first point out a well-known connection between hidden Markov models (HMM) and matrix product states~\citep{kliesch2014matrix, Critch2014AlgebraicGO}. We refer to uMPS where all tensor cores and boundary vectors are non-negative as \emph{non-negative uMPS}. HMMs have been extensively studied in machine learning and are a common approach to modeling discrete-observation sequences where an unobserved hidden state undergoes Markovian evolution (where the future is independent of past given present) and emits observations at each time-step \citep{Rabiner1986AnIT}. HMMs can be thought of as a special case of PSRs where all parameters are constrained to be non-negative. Such models are usually characterized by an initial \emph{belief state} $\vec{x}_0$ and column-stochastic transition and emission matrices ${\bf A}$ and ${\bf C}$.
Formally, we give the following definition:
\begin{definition}[Hidden Markov Model]\label{def_hmm}
An $n-$dimensional Hidden Markov Model for a set of discrete observations $\mathcal{O}$ is a stochastic process described by the tuple $(\mathds{R}^n, {\bf A}, {\bf C}, \vec{x}_0)$. The transition matrix ${\bf A}\in \mathds{R}^{n\times n}_{\geq 0}$ and the emission matrix ${\bf C} \in \mathds{R}^{|\mathcal{O}| \times n}_{\geq 0}$ are non-negative and column stochastic i.e. $\vec{{\bf 1}}^T {\bf A} = \vec{{\bf 1}}^T {\bf C} = \vec{{\bf 1}}^T$. The initial state $\vec{x}_0 \in \mathds{R}^{n}_{\geq 0}$ is also non-negative and is normalized, $\|\vec{x}_0\|_1 = \vec{{\bf 1}}^T \vec{x}_0 = 1$.
\end{definition}
The state transitions through the simple linear update $\vec{x}_t' = \mathbf{A} \vec{x}_{t-1}$. To condition on observation $y$, we construct the diagonal matrix $\text{diag}(\mathbf{C}_{(y,:)})$ from the $y^{th}$ row of $\mathbf{C}$, and perform a normalized update $
\vec{x}_t | y_t = \frac{\text{diag}(\mathbf{C}_{(y,:)}) \vec{x}_t'}{\vec{\mathbf{1}}^T \text{diag}(\mathbf{C}_{(y,:)})\vec{x}_t'}$. This multi-step filtering process can be simplified using an alternative representation with \emph{observable operators} as ${\bf T}_y = \text{diag}({\bf C}_{(y,:)}){\bf A}$, where we rewrite the normalization constraints on operators as $\vec{\mathbf{1}}^T \sum_y {\bf T}_y = \vec{\mathbf{1}}^T$. Then, we recover a recursive state update $\vec{x}_t = \frac{{\bf T}_{y_{t}}\vec{x}_{t-1}}{\vec{\mathbf{1}}^T{\bf T}_{y_{t}}\vec{x}_{t-1}}$. Clearly, HMMs are a special case of PSRs where the parameters are restricted to be non-negative, and a special case of uMPS when the left boundary $\vec{\sigma}^\dagger = \vec{\mathbf{1}}^T$, the right boundary $\vec{\rho}_0 =\vec{x}_0$, and the tensor core slices $\mathbf{A}^y = {\bf T}_y$.
\paragraph{Probabilistic Automata} Lastly, non-negative uMPS are equivalent to probabilistic automata from formal language theory~\cite[Section 4.2]{denis2008rational}, which are in essence weighted automata where transition matrices need to satisfy stochasticity constraints. The strict equivalence between probabilistic automata and HMMs is proved in~\citet[Proposition 8]{dupont2005links} (see also Section 2.2 in~\citet{balle2014methods}). In addition, it is known that non-negative uMPS are strictly less expressive than general uMPS for representing probability distributions; a proof of this result in the context of formal languages can be found in~\citet{denis2008rational}. We give a brief discussion of this next.
\subsection{The Negative Probability Problem and the Expressiveness of Finite PSRs}\label{sec_npp}
As noted by several authors \citep{jaeger2000observable, adhikary2019expressiveness}, PSRs lack a constructive definition;
the definition of a PSR simply demands that the probabilities produced by the model be non-negative without describing constraints that can achieve this. Indeed, this is the cost of relaxing the non-negativity constraint on HMMs; it is undecidable whether a given set of PSR parameters will assign a negative probability to some arbitrary-length sequence \citep{denis2004learning, wiewiora2008modeling}, an issue known as the \emph{negative probability problem} (NPP). A similar issue arises in the many-body physics setting, where the analogous question of whether a general matrix product operator describes a non-negative quantum density operator is also undecidable~\citep{kliesch2014matrix}. In the special case where all PSR parameters are non-negative, we have a sufficient condition for generating valid probabilities, namely that the PSR is a hidden Markov model. Otherwise, the best available approach to characterize valid states for PSRs is whether they define a pointed convex cone (that includes the initial state) which is closed under its operators, and all points in it generate valid probabilities \citep{heller1965stochastic, jaeger2000observable, adhikary2019expressiveness}.
While this undecidability is an inconvenient feature of PSRs, it turns out that constraining PSRs to have only non-negative entries comes with a reduction in expressive power; there are finite (bond/state) dimensional uMPS/PSRs which have no equivalent finite-dimensional HMM representations for arbitrary length sequences (for example, the probability clock in \citet{jaeger2000observable}). The general question of which uncontrolled PSRs have equivalent finite-dimensional HMMs (though not always discussed in those terms) is referred to by some as the \emph{positive realization problem} \citep{benvenuti2004tutorial, vidyasagar2011complete}. A common approach is to use the result that a PSR has an equivalent finite-dimensional HMM if and only if the aforementioned convex cone of valid initial states $\{\vec{x}_0\}$ for a set of given operators $\vec{\sigma}^\dagger$, $\{\pmb{\tau}_y\}$ is $k$-polyhedral for some finite $k$ \citep{jaeger2000observable}.
There has been some work trying to investigate whether it is possible to maintain the superior expressiveness of uMPS/PSRs while avoiding the undecidability issue. \citet{Zhao2010b, bailly2011quadratic, adhikary2019expressiveness} explore this question in the machine learning context, while \citet{glasser2019expressive} consider this problem from the quantum tensor network perspective. We will explore these proposals shortly. When discussing the relative expressiveness of a model class compared to uMPS/PSRs,
if the bond dimension (i.e. state dimension) needed to represent a given uMPS/PSR distribution grows with sequence length, we say there is no equivalent parameterization of that distribution in the model class.
\section{Uniform Born Machines, Norm-Observable Operator Models, and Quadratic Weighted Automata}
\label{sec_noom_ubm}
Born machines (BMs)~\citep{han2018} are a popular class of quantum tensor networks that model probability densities as the absolute-square of the outputs of a tensor-train decomposition, and hence always output valid probabilities.
As with uMPS, we will work with uniform Born machines (uBMs)~\citep{miller2020tensor}, for which the joint probability of $N$ discrete random variables $\{Y_i\}_{i=1}^N$ is computed as follows (with boundary vectors $\vec{\alpha}$ and $\vec{\omega}_0$ sandwiching a sequence of identical cores ${\bf A}$):
\begin{align}
P(y_1, \ldots, y_N) = \text{uBM}_{y_1, \ldots, y_N} = \left|\vec{\alpha}^{~\dagger} \mathbf{A}^{y_N} \dots \mathbf{A}^{y_1} \vec{\omega}_0\right|^2
\label{eq_bms}
\end{align}
We can re-write this decomposition showing uBMs to be special kinds of uMPS/PSR:
\begin{small}
\begin{equation}
\begin{split}\label{eq_bmj}
\text{uBM}_{y_1,\dots,y_N}
~~&=~~
\left|\vec{\alpha}^{~\dagger} \mathbf{A}^{y_N} \dots \mathbf{A}^{y_{1}} \vec{\omega}_0 \right|^2 \\
~~&=~~
\vec{\alpha}^{~\dagger} \mathbf{A}^{y_N}\dots \mathbf{A}^{y_{1}} \vec{\omega}_0~\vec{\omega}_0^\dagger (\mathbf{A}^{y_{1}})^\dagger \dots (\mathbf{A}^{y_N})^\dagger \vec{\alpha}\\
~~&=~~
\vec{\sigma}^{~\dagger}~\pmb{\tau}_{y_N}~\dots~\pmb{\tau}_{y_1}~\vec{\rho}_0
\end{split}
\end{equation}
\end{small}%
where $\pmb{\tau}_{y} = \overline{\mathbf{A}^{y}} \otimes {\mathbf{A}^{y}}$, $\vec{\rho}_0 = \overline{\vec{\omega}_0} \otimes \vec{\omega}_0$, and $\vec{\sigma} = \overline{\vec{\alpha}} \otimes \vec{\alpha}$. This makes it clear that uBMs are a special class of MPS/PSRs, where the observable operators $\pmb{\tau}_y$ and boundary conditions must satisfy the additional requirement of having unit Kraus-rank (i.e., a symmetric unit Schmidt rank decomposition).\footnote{An operator ${\bf A}$ has unit Kraus-rank if it has a decomposition ${\bf A} = \overline{\bf X} \otimes {\bf X}$.}
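This unit-Kraus-rank correspondence is easy to verify numerically. An illustrative sketch (assuming NumPy; unnormalized amplitudes suffice for this check, and the seed is arbitrary) comparing the Born-rule evaluation with the equivalent Kronecker-product uMPS/PSR evaluation:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, d, T = 3, 2, 4
A = rng.standard_normal((d, n, n)) + 1j*rng.standard_normal((d, n, n))
alpha = rng.standard_normal(n) + 1j*rng.standard_normal(n)
omega = rng.standard_normal(n) + 1j*rng.standard_normal(n)
seq = rng.integers(0, d, T)

v = omega.copy()                        # Born-rule evaluation
for y in seq:
    v = A[y] @ v
p_born = abs(alpha.conj() @ v)**2

tau = [np.kron(A[y].conj(), A[y]) for y in range(d)]   # unit Kraus rank
rho = np.kron(omega.conj(), omega)
sig = np.kron(alpha.conj(), alpha)
w = rho.copy()                          # equivalent uMPS/PSR evaluation
for y in seq:
    w = tau[y] @ w
p_psr = (sig.conj() @ w).real
print(np.isclose(p_born, p_psr))        # True
\end{verbatim}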
\paragraph{Norm Observable Operator Models}
Motivated by the NPP for PSRs, \citet{Zhao2010b} introduce norm-observable operator models or NOOMs. Coming from the PSR literature, they were designed to model joint distributions of observations as well as recursive state-updates to obtain conditional probabilities (analogous to PSRs in Section \ref{sec_psr}). They bear a striking resemblance to uniform Born machines (uBMs) and the connection has not been previously explored. Both NOOMs and uBMs associate probabilities with quadratic functions of the state vector, with NOOMs directly using the squared 2-norm of the state to determine observation probabilities. While NOOMs were originally defined on the reals, we use a more general definition over complex numbers.
\begin{definition}[Norm-observable operator model]
An $n$-dimensional norm-observable operator model for a set of discrete observations $\mathcal{O}$ is a stochastic process described by the tuple $(\mathds{C}^n, \{\pmb{\phi}_y\}_{y \in \mathcal{O}}, \vec{\psi}_0)$. The initial state $\vec{\psi}_0 \in \mathds{C}^n$ is normalized by having unit $2$-norm i.e. $\|\vec{\psi}_0\|_2^2 = 1$. The operators $\pmb{\phi}_y \in \mathds{C}^{n \times n}$ satisfy $\sum_{y} \pmb{\phi}_y^\dagger \pmb{\phi}_y = \mathds{I}$.
\end{definition}
These models avoid the NPP by using the 2-norm of the state to recover probability which, unlike for HMMs, is insensitive to the use of negative parameters in the matrices $\pmb{\phi}_{y}$. We write the joint probability of a sequence as computed by a NOOM and manipulate it using using the relationship between 2-norm and trace to show:
\begin{equation}
\label{eq_noom}
\begin{split}
P(y_1, \dots, y_N)
&= \text{NOOM}_{y_1,\dots,y_N} \\
&= \left|\left|
\pmb{\phi}_{y_N} \dots \pmb{\phi}_{y_{1}} \vec{\psi}_0 \right|\right|_2^2 \\ &= \text{tr}(\pmb{\phi}_{y_N} \dots \pmb{\phi}_{y_{1}} \vec{\psi}_0 \vec{\psi}_0^\dagger (\pmb{\phi}_{y_{1}} )^\dagger \dots (\pmb{\phi}_{y_N})^\dagger) \\
&= \vec{\mathds{I}}^\dagger\, \pmb{\tau}_{y_N}~\dots~\pmb{\tau}_{y_{1}}~\vec{\rho}_0
\end{split}
\end{equation}
where $\pmb{\tau}_{y} = \overline{\pmb{\phi}_{y}}\otimes \pmb{\phi}_{y} \in \mathds{C}^{n^2 \times n^2}$ and $\vec{\rho}_0 = \overline{\vec{\psi}_0}\otimes \vec{\psi}_0$. Equation~\ref{eq_noom} shows that NOOMs are a special subset of PSRs/MPS, as every finite-dimensional NOOM has an equivalent finite-dimensional PSR $(\mathds{C}^{n^2}, \{\overline{\pmb{\phi}_y} \otimes \pmb{\phi}_y\}_{y \in \mathcal{O}}, \overline{\vec{\psi}_0} \otimes \vec{\psi}_0)$ \citep{Zhao2010b}. From a quantum mechanical perspective, the unit rank constraint on NOOM initial states can be framed as requiring the initial state to be a pure density matrix.
We can also recursively update the state conditioned on observation $y_t$ as $\vec{\psi}_t = \frac{\pmb{\phi}_{y_t}\vec{\psi}_{t-1}}{\|\pmb{\phi}_{y_t}\vec{\psi}_{t-1}\|_2}$, where $y_t$ is observed with probability $P(y_t|\vec{\psi}_t) = \|\pmb{\phi}_{y_t} \vec{\psi}_{t-1}\|_2^2$.
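Valid NOOM operators are easy to construct: stacking the $\pmb{\phi}_y$ into one tall matrix, the constraint $\sum_{y} \pmb{\phi}_y^\dagger \pmb{\phi}_y = \mathds{I}$ says exactly that this stack is an isometry. A minimal sketch (assuming NumPy; the seed is arbitrary):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n, d = 3, 4
# random isometry V (dn x n, V^dagger V = I) -> valid NOOM operators
X = rng.standard_normal((d*n, n)) + 1j*rng.standard_normal((d*n, n))
V, _ = np.linalg.qr(X)
phi = V.reshape(d, n, n)
print(np.allclose(sum(p.conj().T @ p for p in phi), np.eye(n)))

psi = rng.standard_normal(n) + 1j*rng.standard_normal(n)
psi /= np.linalg.norm(psi)              # unit 2-norm initial state
p = np.array([np.linalg.norm(phi[y] @ psi)**2 for y in range(d)])
print(p.sum())                          # observation probabilities sum to 1
\end{verbatim}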
\paragraph{Non-terminating uniform BMs are NOOMs} Note that the NOOM joint distribution in Equation~\ref{eq_noom} is almost identical to that of uBMs in Equation \ref{eq_bmj}, with $\pmb{\tau}_{y}$ and $\vec{\rho}_0$ having unit Kraus rank; but the left boundary / evaluation functional $\vec{\sigma} = \overline{\vec{\alpha}} \otimes \vec{\alpha}$ is replaced by $\vec{\sigma} = \vec{\mathds{I}}$, which necessarily has full Kraus rank. So how can we reconcile these nearly identical models? Similar to our approach in Section~\ref{sec_psr}, we can consider the uBM for an infinitely long sequence, where the exact specification of the left boundary / evaluation functional ceases to matter in the non-terminating limit; the effective evaluation functional converges to the fixed point of the transfer operator and we have a notion of recursive state update. Assuming that the uBM transfer operator is a similarity transform away from a trace-preserving quantum channel (i.e., one normalized to satisfy $\sum_y \pmb{\phi}_y^\dagger \pmb{\phi}_y = \mathds{I}$; see appendix for more details), an arbitrary evaluation functional (with unit Kraus-rank) of such a uBM will eventually converge to $\vec{\mathds{I}}$, the NOOM's evaluation functional:
\begin{theorem}
Non-terminating uniform Born machines are equivalent to norm observable operator models.
\end{theorem}
With the above equivalence, we now turn to the question of how the expressiveness of uBMs/NOOMs compares to that of non-negative uMPS/HMMs. As we have seen, they are all special classes of uMPS, but with different constructions. \citet{glasser2019expressive} studied the expressiveness of \emph{non-uniform} BMs, showing that there are finite-dimensional non-uniform BMs that cannot be modeled by finite dimensional non-uniform HMMs, and conjecture that the reverse direction is also true. \citet{Zhao2010b} showed by example the existence of a NOOM (and so a non-terminating uBM) that cannot be modeled by any finite-dimensional HMM. However, they left open the question of whether HMMs were a subclass of NOOMs. We answer this question in the following theorem, which also implies the latter corollary through its equivalence with non-terminating uBM.
\begin{theorem}[HMM $\nsubseteq$ NOOM]
\label{thm_hmm_noom}
There exist finite-dimensional hidden Markov models that have no equivalent finite-dimensional norm-observable operator model.
\end{theorem}
\begin{corollary}[uBM $\nsubseteq$ HMM and HMM $\nsubseteq$ uBM]
There exist finite-dimensional non-terminating uniform Born machines that have no equivalent finite-dimensional hidden Markov models, and vice-versa.
\end{corollary}
\paragraph{Quadratic Weighted Automata} Finally, we note that Quadratic Weighted Automata (QWA) \citep{bailly2011quadratic}, developed in the stochastic weighted automata literature, are equivalent to uBM. \citet{bailly2011quadratic} suggest that QWA $\nsubseteq$ HMM and that HMM $\nsubseteq$ QWA, but do not provide a proof. To the best of our knowledge, the proof we provide is the first to formally show the non-equivalence of QWA and HMM.
\section{Locally Purified States and Hidden Quantum Markov Models}
\label{sec_hqmms}
While uBMs/NOOMs are constructive models guaranteed to return valid probabilities, they are still not expressive enough to capture all HMMs, a fairly general class. Hence, it may be desirable to identify a construction that is more expressive than these models but still gives valid probabilities. Locally Purified States (LPS) were proposed as a tensor-network model of discrete multivariate probability distributions, inspired by techniques used in the simulation of quantum systems. \citet{glasser2019expressive} show that these models are not only strictly more expressive than non-uniform HMMs, but also correspond directly to local quantum circuits with ancillary qubits -- serving as a guide to design quantum circuits for probabilistic modeling. We arrive at the LPS model from the MPS model essentially by marginalizing over an additional mode -- called the ``purification dimension'' -- in each of the MPS tensors. The rank of an LPS, also called its puri-rank, is defined the same way as the bond dimension (or TT-rank) for the MPS. The corresponding uniform LPS defines the unnormalized probability mass function over $N$ discrete random variables $\{Y_i\}_{i=1}^N$ as follows:
\begin{equation}
\begin{split}
\label{eq_joint_lps}
&P(y_1, \ldots, y_N) = \text{uLPS}_{y_1, \ldots, y_N}\\
&= \left( \sum_{\beta_L} \overline{\mathbf{K}}_{\beta_L, L}^T \otimes \mathbf{K}_{\beta_L, L}^T \right) \left( \sum_{\beta} \overline{\mathbf{K}}_{\beta, y_N} \otimes \mathbf{K}_{\beta, y_N} \right) \cdots \\
&\cdots \left( \sum_{\beta} \overline{\mathbf{K}}_{\beta, y_1} \otimes \mathbf{K}_{\beta, y_1} \right) \left( \sum_{\beta_R} \overline{\mathbf{K}}_{\beta_R, R} \otimes \mathbf{K}_{\beta_R, R} \right)
\end{split}
\end{equation}
\paragraph{Hidden Quantum Markov Models}
Hidden quantum Markov models (HQMMs) were developed by \citet{monras2010hidden} as a quantum generalization of hidden Markov models that can model joint probabilities of sequences and also allow for the recursive state updates we have described previously. \citet{srinivasan2017learning, srinivasan2018learning} specifically develop HQMMs by constructing quantum analogues of classical operations on graphical models, and show that HQMMs are a more general model class than HMMs. \citet{adhikary2019expressiveness}, on the other hand, develop HQMMs by relaxing the unit Kraus-rank constraint on NOOM operators and the initial state. We give a formal definition of these models here (noting that the Choi matrix ${\bf C}_y$ is a particular reshuffling of the sum of superoperators ${\bf L}_y$ defined below; see \citet{adhikary2019expressiveness}):
\begin{definition}[Hidden Quantum Markov Models]
\label{def:hqmm}
An $n^2$-dimensional hidden quantum Markov model for a set of discrete observations $\mathcal{O}$ is a stochastic process described by the tuple $(\mathds{C}^{n^2}, \vec{\mathds{I}}, \{{\bf L}_y\}_{y\in \mathcal{O}}, \vec{\rho}_0)$. The initial state $\vec{\rho}_0 \in \mathds{C}^{n^2}$ is a vectorized unit-trace Hermitian PSD matrix of arbitrary rank, so $\vec{\mathds{I}}^{~T} \vec{\rho}_0=1$. The Liouville operators ${\bf L}_y \in \mathds{C}^{n^2\times n^2}$ (with corresponding Choi matrices ${\bf C}_y$) are trace-preserving (TP), i.e.\ $\vec{\mathds{I}}^T \left(\sum_y {\bf L}_y \right) = \vec{\mathds{I}}^T$, and completely positive (CP), i.e.\ ${\bf C}_y \geq 0$.
\end{definition}
The CP-TP condition on the operator ${\bf L}_y$ implies that we can equivalently write it via the Kraus decomposition as ${\bf L}_y = \left( \sum_{\beta} \mathbf{K}_{\beta, y} \otimes \overline{\mathbf{K}}_{\beta, y} \right)$, using Kraus operators $\mathbf{K}_{\beta, y}$ \citep{Kraus1971, adhikary2019expressiveness}. Intuitively, what makes HQMMs more general than NOOMs is that their state can be a vectorized density matrix of arbitrary rank and their superoperators can have arbitrary Kraus rank, while NOOMs require both of these ranks to be 1. With this in mind, we can write and manipulate the joint probability of a sequence of $N$ observations as:
\begin{equation}
\label{eq_hqmm_lps}
\begin{split}
&P(y_1, \ldots, y_N) = \text{HQMM}_{y_1, \ldots, y_N} = \vec{\mathds{I}}^T {\bf L}_{y_1} \cdots {\bf L}_{y_N} \vec{\rho}_0 \\
&= \vec{\mathds{I}}^T \left( \sum_{\beta} \mathbf{K}_{\beta, y_N} \otimes \overline{\mathbf{K}}_{\beta, y_N} \right) \cdots \left( \sum_{\beta} \mathbf{K}_{\beta, y_1} \otimes \overline{\mathbf{K}}_{\beta, y_1} \right) \vec{\rho}_0
\end{split}
\end{equation}
The joint probability computation makes it clear that HQMMs are a class of PSRs, and the manipulation shows how they are equivalent to a uLPS whose left boundary condition is $\vec{\mathds{I}}$. We also compute the recursive state update conditioned on observation $y$ as $\vec{\rho}_{t+1} = \frac{{\bf L}_y\vec{\rho}_t}{\vec{\mathds{I}}^T{\bf L}_y\vec{\rho}_t}$, and
the probability of an observation $y$ is $P(y|\vec{\rho}_t) = \vec{\mathds{I}}^T {\bf L}_y \vec{\rho}_t$.
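To make these quantities concrete, the following sketch (an illustration of our own, not an implementation from the literature; the dimensions, Kraus rank and observation sequence are arbitrary choices) generates a random CP-TP observation model via an isometry and then applies the recursive state update above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, n_obs, kraus_rank = 3, 2, 2  # hypothetical sizes

# A random isometry V (V^dag V = I) reshaped into Kraus operators K[y, b]
# satisfying sum_{y,b} K[y,b]^dag K[y,b] = I, i.e. a CP-TP map split by y.
A = (rng.normal(size=(n_obs * kraus_rank * n, n))
     + 1j * rng.normal(size=(n_obs * kraus_rank * n, n)))
V, _ = np.linalg.qr(A)
K = V.reshape(n_obs, kraus_rank, n, n)

def apply_Ly(rho, y):
    # Unnormalized update rho -> sum_b K[y,b] rho K[y,b]^dag
    return sum(K[y, b] @ rho @ K[y, b].conj().T for b in range(kraus_rank))

rho = np.eye(n) / n      # initial density matrix of arbitrary rank
joint = 1.0
for y in [0, 1, 1, 0]:   # an example observation sequence
    sigma = apply_Ly(rho, y)
    p = np.trace(sigma).real   # P(y | rho_t)
    joint *= p
    rho = sigma / p            # recursive state update
print(joint)                   # joint probability of the sequence
\end{verbatim}
With \texttt{kraus\_rank} set to 1 and a rank-one initial density matrix, the same loop implements the NOOM update.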
\paragraph{Non-terminating Uniform LPS are HQMMs}
Equation \ref{eq_hqmm_lps} shows that every HQMM is a uLPS, but we also consider in what sense every uLPS is an HQMM: the transfer operator of arbitrary CP maps with unit spectral radius\footnote{This condition is necessary for probability distributions such as Equations~\ref{eq_bmj} and \ref{eq_joint_lps} to be properly normalized.} is a similarity transform away from that of a CP-TP map \citep{perez2006matrix}, so $\vec{\mathds{I}}$ is related to such a fixed point by such a similarity transform. Thus, every non-terminating uLPS has an equivalent HQMM and allows for an HQMM-style recursive state update. This is the same reasoning behind the equivalence between non-terminating uBMs (with CP maps) and NOOMs (with CP-TP maps).
\begin{theorem}
Non-terminating uniform locally purified states are equivalent to hidden quantum Markov models.
\end{theorem}
While it is already known that HMMs are a strict subset of HQMMs (since HQMMs also contain NOOMs, which cannot always be modeled by an HMM), \citet{adhikary2019expressiveness} left open the possibility that every HQMM could have an equivalent NOOM with some higher-dimensional state. In light of Theorem \ref{thm_hmm_noom}, we can say this is not possible, as NOOMs do not capture all HMMs, while HQMMs do. We defer a longer discussion of how the expressiveness of HQMMs/uLPS compares to uMPS to the appendix (see Appendix~\ref{app_hqmm_express}), with the simple remark that we are not currently aware of an HQMM without an equivalent uMPS, although we may be able to adapt an example from the LPS $\subset$ MPS result in the non-uniform case \citep{glasser2019expressive}.
\begin{corollary}[NOOM $\subset$ HQMM]
\label{corollary_hqmm_noom}
Finite dimensional norm-observable operator models are a strict subset of finite dimensional hidden quantum Markov models.
\end{corollary}
We are not aware of any proposals from the weighted automata literature that are analogous to these uLPS/HQMMs.
\paragraph{Expressiveness of HQMMs (uLPS) and PSRs (uMPS)} We have determined that HQMMs are the most expressive known constructive subclass of PSRs (containing both NOOMs and HMMs), yet the question of whether there is a `gap' between HQMMs and PSRs, i.e., whether there is a PSR that has no finite-dimensional HQMM representation, remains open to the best of our knowledge. The results in \citet{glasser2019expressive} and \citet{de2013purifications} show that MPS are more expressive than LPS in the \emph{non-uniform} case, but their technique cannot be easily adapted to the uniform case. We are not aware of an example of a PSR with no equivalent finite-dimensional HQMM. A longer discussion of this problem is presented in Appendix \ref{app_hqmm_express}.
\section{Conclusion}
We presented uniform matrix product states and their various subclasses, and showed how they relate to previous work in the stochastic processes and weighted automata literature. In discussing the relative expressiveness of various models, we ask whether we can find an \emph{equivalent finite-dimensional parameterization} in another model class, but we do not discuss the relative compactness of various parameterizations. \citet{glasser2019expressive} do discuss this for the non-uniform case, and this could be an interesting direction to explore for the uniform case. We also speculate that the connections laid out here may make spectral learning algorithms commonly used for PSRs and weighted automata \citep{hsu2012spectral, balle2014spectral, hefny2015supervised} suitable for learning uMPS, and an algorithm for constrained optimization on the Stiefel manifold \citep{adhikary2019expressiveness} suitable for learning uLPS \emph{with appropriate constraints}. Future work will involve adapting these algorithms so they can be transferred between the two fields.
We can extend our analyses to \emph{controlled} stochastic processes. Controlled generalizations of uMPS may be developed through matrix product operators \citep{Murg2008MatrixPO, Chan2016MatrixPO} that append an additional open index to each core of a uMPS, which we can associate with actions. We can also develop input-output versions of uniform tensor networks and uncontrolled stochastic process models, similar to the input-output OOMs of \citet{Jaeger2003DiscretetimeDO}. We briefly describe such extensions for HQMMs and uLPS in Appendix~\ref{app_sec_controlled_stoc_proc}, showing that they generalize recently proposed quantum versions of partially observable Markov decision processes \citep{qomdp, YingYing, Cidre2016}. With this connection, we also find that the undecidability of perfect planning (determining whether there exists a policy that can deterministically reach a goal state from an arbitrary initial state in finitely many steps), established for quantum POMDPs by \citet{qomdp}, also extends to these generalizations. We leave a longer discussion for future work.
\newpage
\bibliographystyle{apalike}
\section{Introduction}
A reconfigurable intelligent surface (RIS) is a passive planar array that can control the reflection of incident waves and either perform beam-steering and beam-focusing or split
the incident beam into multiple reflected beams \cite{yurduseven2020intelligent}.
Such surfaces have a potential for capacity improvement and coverage extension \cite{liu2021reconfigurable} as a result of partial control of the propagation environment.
The RIS concept quickly gained popularity, and there are now plenty of papers on channel estimation, joint beamforming and interference mitigation in RIS-assisted systems \cite{zheng2022survey}.
However, the majority of researchers use the Rician model for the analysis, and this can greatly delay the standardization of
RIS-assisted systems for several reasons.
Firstly, the Non-Line-of-Sight (NLoS) component in the Rician model is modeled as random, which contradicts the deterministic nature of RIS deployment. The NLoS component in the Rician model also does not account for the scenario geometry, which may become crucial in distributed RIS-assisted systems.
Secondly, this model does not comply with the 3GPP standard \cite{3GPP2022},
and it is unclear how to calibrate it to match field tests.
One possible technique for geometry-consistent simulations is Ray Tracing (RT). Though the 3GPP standard \cite{3GPP2022} includes an RT-based model as an alternative, such simulations need detailed maps of the environment to yield accurate results. Macro-cell simulations would require a large number of digital maps, which is impractical.
Geometry-Based Stochastic Models (GBSMs) offer a tradeoff between Ray-Tracing and Rician models.
Instead of using a digital map, GBSMs generate clusters of scatterers according to calibrated distributions and only then use the Ray-Tracing approach. Large-Scale and Small-Scale fading parameters can also be included.
Several authors have suggested GBSMs for RIS-assisted systems \cite{Dang2021, Jiang2021, Sun2021, basar2021indoor}, but these models have serious limitations on frequency range, number of elements, mobility of terminals, scenarios, etc., as we demonstrate in section \ref{sec:Models}.
Moreover, most of them are not 3GPP-compatible, and only one of them is freely accessible online.
This motivated us to build a new GBSM simulation method for RIS-assisted MIMO systems using one of the state-of-the-art conventional MIMO models.
Based on the comparison that we present in section \ref{sec:Simulators}, we decided to use QuaDRiGa simulation platform \cite{quad2021manual, jaeckel2014quadriga}.
QuaDRiGa is free software that supports user mobility, features spatial consistency of fading parameters and offers a wide range of 3GPP-compliant scenarios.
The main contributions of our paper are as follows:
\begin{itemize}
\item in section \ref{sec:QuaRIS} we propose a new simulation method for RIS-assisted MIMO systems based on QuaDRiGa;
\item in section \ref{sec:Results} we compare the achievable rate of a system simulated with Rician and the proposed models and explain the huge difference.
\end{itemize}
Finally, in section \ref{sec:Conclusion} we summarize our research findings.
\section{Models for RIS-assisted systems}
Since the RIS concept is relatively young, there is still no standardized model for RIS-assisted systems. Articles on RIS use different channel models with different parameters, as we show in section \ref{sec:Models}. At the same time, the models for conventional MIMO systems are well-developed and can be adapted to RIS-assisted systems. In section \ref{sec:Simulators} we choose the best one for this task.
\subsection{RIS-assisted system models}
\label{sec:Models}
Let us consider a downlink RIS MIMO system with $N_{RX}$ antennas at the User Equipment (UE), $N_{TX}$ antennas at the Base Station (BS) and $N_{RIS}$ elements at the RIS. All RIS-assisted channel models express the total channel
$\mathbf{H} \in \mathbb{C}^{N_{RX} \times N_{TX}}$ using the BS-UE channel
$\mathbf{H}_0 \in \mathbb{C}^{N_{RX}\times N_{TX}}$,
BS-RIS channel $\mathbf{H}_A \in \mathbb{C}^{N_{RIS} \times N_{TX}}$ and RIS-UE channel
$\mathbf{H}_B\in \mathbb{C}^{N_{RX}\times N_{RIS}}$:
\begin{equation}
\mathbf{H} = \mathbf{H}_0 + \mathbf{H}_B \mathbf{Q} \mathbf{H}_A,
\label{eq:basic_model}
\end{equation}
where $\mathbf{Q} = \mathrm{diag}(\boldsymbol{\mu}) \in \mathbb{C}^{N_{RIS} \times N_{RIS}}$ is the RIS control matrix with diagonal elements
$|\mu_i| = 1 \ \forall i = 1, \dots, N_{RIS}$. Next, a model must be selected for the $\mathbf{H}_0$, $\mathbf{H}_A$ and $\mathbf{H}_B$ channels.
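Before discussing concrete models for the individual links, the composition in \eqref{eq:basic_model} itself is straightforward to evaluate. The following sketch (our own illustration with i.i.d.\ Gaussian placeholder channels and random phases, not a physically calibrated model) assembles the total channel:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_rx, n_ris = 16, 4, 64   # illustrative sizes

def cn(shape):
    # i.i.d. CN(0,1) placeholder channel, for illustration only
    return (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)

H0 = cn((n_rx, n_tx))       # direct BS-UE channel
HA = cn((n_ris, n_tx))      # BS-RIS channel
HB = cn((n_rx, n_ris))      # RIS-UE channel

mu = np.exp(1j * rng.uniform(0, 2 * np.pi, n_ris))  # |mu_i| = 1 (phase-only)
H = H0 + HB @ np.diag(mu) @ HA                      # total channel
\end{verbatim}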
A simple free-space model can be used for theoretical estimations, e.g.\ to
compare RIS against Decode-and-Forward relays
\cite{de2022intelligent}.
However, this model lacks NLoS components and is therefore inappropriate for realistic scenarios.
To account for the multipath components, the majority of researchers \cite{zheng2022survey} adopt Rician channel model, splitting every channel matrix $\mathbf{H}_i$ in \eqref{eq:basic_model} into LoS and NLoS components:
\begin{equation}
\mathbf{H}_i = \beta \left( \sqrt{\frac{K}{K+1}} \mathbf{H}_i^{LoS} + \sqrt{\frac{1}{K+1}}\mathbf{H}_i^{NLoS} \right),
\label{eq:Rician}
\end{equation}
where $\beta$ is the pathloss, $K$ is the Rician factor, $\mathbf{H}_i^{LoS}$ is calculated according to the freespace model and $\mathbf{H}_i^{NLoS}$ is random.
For example, in \cite{bjornson2020rayleigh} the authors took spatial correlation into account by drawing the columns of $\mathbf{H}_i^{NLoS}$ from $\mathcal{CN}(\mathbf{0}, \mathbf{R})$
with a correlation matrix $\mathbf{R}$ determined by a $\mathrm{sinc}$ function.
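As a concrete illustration (a sketch of our own; we assume half-wavelength element spacing and the correlation $R_{ij} = \mathrm{sinc}(2\|\mathbf{u}_i - \mathbf{u}_j\|/\lambda)$ of \cite{bjornson2020rayleigh}, with a small diagonal jitter added for numerical stability), one correlated NLoS column may be generated as:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n_h = n_v = 8                      # RIS grid
d = 0.5                            # element spacing in wavelengths
x, y = np.meshgrid(np.arange(n_h), np.arange(n_v))
pos = d * np.column_stack([x.ravel(), y.ravel()])

# R_ij = sinc(2 ||u_i - u_j|| / lambda); np.sinc(t) = sin(pi t)/(pi t)
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
R = np.sinc(2.0 * dist)

# One correlated NLoS column h ~ CN(0, R) via a Cholesky factor
L = np.linalg.cholesky(R + 1e-9 * np.eye(len(R)))
w = (rng.normal(size=len(R)) + 1j * rng.normal(size=len(R))) / np.sqrt(2)
h_nlos = L @ w
\end{verbatim}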
Nevertheless, such a model of the NLoS component is unlikely to give realistic results. Firstly, during the derivation of $\mathbf{R}$, the authors in \cite{bjornson2020rayleigh} assumed that the scatterers are distributed uniformly in front of the RIS, which hardly happens in real life.
Secondly, the RIS position is deterministic; therefore, modeling
the channels as random matrices yields an uncalibrated tool. It can be used to study properties of algorithms, but it is unclear how to tune it to match real-world measurements.
To bring RIS-assisted systems closer to standardization, researchers should use
geometry-consistent channel models.
Good results can be achieved with the Ray-Tracing method, especially for indoor scenarios that allow creating highly accurate digital maps \cite{ZhengqingYun2015}.
However, in outdoor scenarios the environment can vary a lot, especially between urban and rural environments. Generating a sufficiently detailed map for every
specific cell is impractical.
\begin{figure}[t]
\centerline{\includegraphics[width=\linewidth]{Figures/Channel_Clusters.png}}
\caption{General structure of GBSM model for RIS-assisted systems.}
\label{fig:chan_model}
\end{figure}
Geometry-Based Stochastic Models (GBSMs) offer a trade-off between the fully deterministic Ray-Tracing and almost fully stochastic Rician methods.
GBSMs do not need the exact map of the environment: the positions of scatterer clusters are generated randomly according to calibrated distributions with specific solid angles $\sigma$, as in Fig.~\ref{fig:chan_model}. After the cluster generation, Ray Tracing can be applied. Finally, Large- and Small-Scale fading parameters \cite{3GPP2022} can be introduced.
GBSM models offer calibration for every typical scenario and at the same time are more accurate than stochastic ones.
Several researchers have suggested GBSMs for RIS-assisted communication.
For example, the model introduced in \cite{Jiang2021} considers only the LoS part of the $\mathbf{H}_A$ and $\mathbf{H}_B$ channels and the NLoS part of the $\mathbf{H}_0$ channel, while the model in \cite{Dang2021} features cluster generation.
However, \cite{Dang2021} does not take into account user mobility, and \cite{Jiang2021} considers only one cluster in the NLoS part.
Moreover, both are limited to narrow bands and support only omnidirectional antennas.
The GBSM model from \cite{Sun2021} supports Tx and Rx mobility as well. In addition, it includes a Shadow Factor and cluster evolution with a death probability. Nevertheless, it calculates
the pathloss of the cascaded channel $\mathbf{H}_B \mathbf{Q} \mathbf{H}_A$ based on the optimal RIS coefficients. These coefficients are derived for the free-space model and may not be optimal in general. Thus, the pathloss expression may be incorrect if $\mathbf{H}_B$ or $\mathbf{H}_A$ are NLoS.
The GBSM model introduced in \cite{basar2021indoor} features pathloss for the Urban Microcellular and Indoor Hotspot scenarios from the 3GPP standard \cite{3GPP2022} and is available online as simulation software \cite{Basar2020SimRISCS}.
However, it supports neither Tx/Rx mobility nor wideband simulations and is implemented only for the single-antenna case.
Moreover, only the LoS component is supported in $\mathbf{H}_B$ in the indoor scenario, and the number of clusters is fixed to that of the mm-Wave band, limiting the simulation frequency.
Although there are several GBSMs for RIS-assisted systems, they all have rather strict constraints related either to the number of antenna elements, radiation patterns, pathloss expression, frequency range or channel conditions.
These constraints motivated us to use one of the available MIMO models and to adapt it to RIS-assisted MIMO systems.
\subsection{MIMO GBSM models}
\label{sec:Simulators}
Among all 5G GBSM simulation platforms and models, three have received significant popularity \cite{pang2022investigation}, namely,
the More General 5G model (MG5G) \cite{wu2017general},
the NYUSIM channel simulator \cite{sun2017novel} and
the Quasi Determenistic Radio Channel Generator (QuaDRiGa) \cite{jaeckel2014quadriga}.
\begin{table}[t!]
\centering
\caption{State of the art MIMO GBSM models}
\label{tab:comparisonSim}
\setlength\tabcolsep{5pt}
\begin{tabular}{| m{0.06\textwidth} | m{0.08\textwidth} | m{0.05\textwidth} | m{0.07\textwidth} | m{0.05\textwidth} | m{0.05\textwidth} |}
\hline
Simulator or Model & Calibration according to 3GPP model & Massive MIMO support & LOS/NLOS transition & Moving clusters & Custom antenna patterns \\ \hline
{QuaDRiGa} & Yes & Yes & Yes & No & Yes \\ \hline
{NYUSIM} & No & No & Yes & No & No \\ \hline
{MG5G} & No & Yes & No & Yes & Yes \\ \hline
\end{tabular}
\end{table}
The main advantage of the MG5G model is its detailed cluster evolution.
In addition to the 'newborn' and 'disappearance' cluster states, MG5G features a 'survival'
state. The parameters of survived clusters are regenerated based on the new geometry, and the channel coefficients are recalculated for every time interval. However, the MG5G model does not support transitions between
LoS and NLoS scenarios.
NYUSIM focuses on the mm-Wave range and features a channel model similar to the 3GPP \cite{3GPP2022} model.
However, the NYUSIM platform has restrictions on the number of antenna elements: no more than 128 Tx antennas and no more than 64 Rx antennas. Such restrictions do not allow modeling a RIS large enough to provide reasonable gain. Moreover, NYUSIM lacks custom antenna pattern support, which is crucial for physically consistent simulations.
Compared to these two simulators, the QuaDRiGa platform has no limit on the number of antenna elements and supports transitions between LoS and NLoS scenarios.
Apart from that, QuaDRiGa offers predefined channel models with parameters calibrated according to the 3GPP standard channel model \cite{3GPP2022}. Different simulation scenarios can be loaded for both the sub-6GHz and mm-Wave ranges.
We choose QuaDRiGa for our RIS-assisted MIMO simulations based on the comparison summarized in Table \ref{tab:comparisonSim}.
Compared to MG5G, it supports LoS/NLoS transitions, and in comparison to NYUSIM, it has no limit on the antenna array size and supports custom antenna patterns.
Finally, the main advantage of QuaDRiGa is that it comes with 3GPP-calibrated model configurations, unlike the two other competitors.
Until the goal of RIS standardization is accomplished, using QuaDRiGa for simulations is a natural step towards it.
\section{RIS-assisted MIMO with QuaDRiGa}
\label{sec:QuaRIS}
\begin{figure}[t]
\centerline{\includegraphics[width=\linewidth]{Figures/IRS_pattern.png}}
\caption{The geometry of the RIS-assisted system.}
\label{fig:scat_pat}
\end{figure}
In this section we propose a novel RIS-assisted MIMO simulation method based on the well-known QuaDRiGa platform \cite{jaeckel2014quadriga}.
The main advantage of the proposed method is that QuaDRiGa is compatible \cite{quad2021manual} with the 3GPP TR 38.901 v16.1.0 and 3GPP TR 36.873 v12.5.0 standards, as well as with the mmMAGIC channel model.
Moreover, QuaDRiGa features spatial consistency of Large-Scale fading parameters and consistency between Large-Scale and Small-Scale parameters. In addition, it supports mobility of the UE, which means that the proposed simulation method can be used for RIS-assisted MIMO systems with moving UEs or RIS.
The QuaDRiGa simulation platform was originally designed for MIMO systems, so off-the-shelf QuaDRiGa does not support simulations with RIS.
To extend it to RIS-assisted communication, we model every channel in \eqref{eq:basic_model} as a
conventional MIMO channel and use QuaDRiGa to obtain the corresponding channel matrix.
More specifically, we obtain the direct channel $\mathbf{H}_0$ by modeling the BS-UE MIMO system in the usual way.
To get the BS-RIS channel $\mathbf{H}_A$, we represent the RIS as a virtual receiver
and simulate the BS-RIS subsystem as a conventional MIMO system.
Next, we represent the RIS as a virtual transmitter and obtain the RIS-UE channel
$\mathbf{H}_B$.
Thus, in our QuaDRiGa-based simulation platform, the RIS is represented by a virtual receiver and a virtual transmitter that have the same coordinates and the same element array. At the same time, we omit the simulation between the virtual transmitter and receiver by setting the corresponding pairing parameter in the configuration \cite{quad2021manual}.
Apart from the 3GPP-compliant calibrated model, another detail that ensures physically consistent simulation is the scattering pattern of the RIS element. In this section we demonstrate how it can be included in the proposed simulation method.
First, let us explain the concept of the scattering pattern using a simple single-antenna LoS example with a single-element RIS.
In this case, it is possible to express the received power in the cascaded subchannel using the radar equation approach:
\begin{equation}
P_{RX} = \frac{P_{TX}G_{TX}(\varphi_d, \theta_d)G_{RX}(\varphi_a, \theta_a)\sigma_{RIS}\lambda^2}{(4\pi)^3R_{TI}^2R_{IR}^2},
\label{eq:Radar}
\end{equation}
where $P_{TX}$ is the transmitter power, $\sigma_{RIS}$ is the RIS element radar cross-section, $G_{TX}(\varphi_d, \theta_d)$ and $G_{RX}(\varphi_a, \theta_a)$ are the transmitter and receiver antenna patterns, respectively, $\varphi_d, \theta_d$ are the transmitter's Angles of Departure and
$\varphi_a, \theta_a$ are the receiver's Angles of Arrival.
We denote the carrier wavelength as $\lambda$, and the distances from the Tx to the RIS and from the RIS to the Rx as $R_{TI}$ and $R_{IR}$, respectively, as shown in Fig.~\ref{fig:scat_pat}.
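For orientation, equation \eqref{eq:Radar} is easy to evaluate numerically. In the sketch below, all parameter values (frequency, gains, distances, plate size) are hypothetical placeholders of our own choosing, and a broadside scattering pattern ($F = 1$) is assumed:
\begin{verbatim}
import numpy as np

c, f = 3e8, 3.5e9                # speed of light, assumed carrier frequency
lam = c / f
a = b = lam / 2                  # assumed RIS element (conductive plate) size
P_tx, G_tx, G_rx = 1.0, 10.0, 10.0   # watts and linear gains (assumed)
R_ti, R_ir = 50.0, 20.0          # Tx-RIS and RIS-Rx distances in metres

sigma_ris = 4 * np.pi * (a * b / lam) ** 2   # broadside RCS of an a x b plate
P_rx = (P_tx * G_tx * G_rx * sigma_ris * lam**2
        / ((4 * np.pi) ** 3 * R_ti**2 * R_ir**2))
print(10 * np.log10(P_rx / 1e-3), "dBm")     # received power
\end{verbatim}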
\begin{figure*}[h!]
\centering
\subfigure[CDF of $\mathbf{H}_A$ and $\mathbf{H}_B$ eigenvalues. Solid lines for $\mathbf{H}_A$, dashed lines for $\mathbf{H}_B$.]{\label{fig:a}\includegraphics[width=59mm]{Figures/Ha_Hb_eigenvalues.png}}
\subfigure[CDF of $\mathbf{H}_0$ channel. Solid lines for first eigenvalues, dashed for the second.]{\label{fig:b}\includegraphics[width=59mm]{Figures/H0_eigenvalues.png}}
\subfigure[Achievable Rate CDF with (dashed) and without (solid) RIS, RIS control from \cite{zhou2020joint}.]{\label{fig:c}\includegraphics[width=59mm]{Figures/Capacity.png}}
\caption{Comparison between the Rician and the proposed model.}
\end{figure*}
The antenna patterns in \eqref{eq:Radar} can be expressed in normalized form: $G_{TX} = G_{TX}^{max}F_{TX}(\varphi_d, \theta_d)$ and $G_{RX} = G_{RX}^{max} F_{RX}(\varphi_a, \theta_a)$, as in \cite{9206044}. Supposing one RIS element can be represented as a rectangular plate of conductive material of size $a\times b$, we can obtain a similar expression using eq. (11.44) from \cite{ConstantineBalanis2016}:
\begin{equation}\label{eq:RCS3DGen}
\sigma_{RIS}(\varphi_i, \varphi_r, \theta_i, \theta_r) = 4\pi\left(\frac{ab}{\lambda}\right)^2F(\varphi_i, \varphi_r, \theta_i, \theta_r),
\end{equation}
where $\varphi_i, \theta_i$ are the azimuth and elevation angles of incidence and $\varphi_r, \theta_r$ are the corresponding angles of reflection, as in Fig.~\ref{fig:scat_pat}.
$F(\varphi_i, \varphi_r, \theta_i, \theta_r)$ is the isolated element scattering pattern, a normalized function that takes values between 0 and 1 and describes the angular properties of the scattered field.
As follows from the macroscopic reradiation model introduced in \cite{degli2022reradiation}, the scattering pattern of a RIS element can be factorized as:
\begin{equation}
F(\varphi_i, \varphi_r, \theta_i, \theta_r) =
F(\varphi_i, \theta_i) F(\varphi_r, \theta_r),
\label{eq:F_decomp}
\end{equation}
where $F(\varphi, \theta)$ is the antenna pattern of RIS element.
Decomposition \eqref{eq:F_decomp} helps us model the scattering pattern quite easily using QuaDRiGa antenna pattern options.
The antenna patterns of the virtual transmitter's and receiver's elements can be set to $F(\varphi, \theta) = \left( \sin\theta \right)^{\alpha}$ as in \cite{degli2022reradiation}. Note that the definition of the angle $\theta$ in \cite{degli2022reradiation} is different, which is why in our article the antenna pattern includes $\sin\theta$ instead of $\cos\theta$.
In general, due to mutual coupling, the center element pattern is different from the edge element pattern \cite{fukao1986numerical}.
However, for large arrays this effect hardly impacts the array gain \cite{fukao1986numerical}, and we can assume that all the elements have the same pattern.
Furthermore, mutual coupling causes a difference between the isolated and the embedded element pattern. The embedded pattern can be obtained via EM simulation or measured in a setup like that in \cite{sayanskiy20222d} and uploaded to QuaDRiGa as a custom pattern. Thus, specifying the pattern of a single RIS element is enough; however, in this article we use an omnidirectional element model for the RIS, the BS and the UE so that it is easier to compare the proposed simulation method with the Rician model.
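As a minimal illustration of the factorization \eqref{eq:F_decomp} with this element model (the exponent value below is an assumption of ours):
\begin{verbatim}
import numpy as np

alpha = 2.0  # assumed element pattern exponent

def element_pattern(theta):
    # Normalized element pattern F(phi, theta) = sin(theta)^alpha,
    # which depends on elevation only
    return np.sin(theta) ** alpha

def scattering_pattern(theta_i, theta_r):
    # Factorized scattering pattern: F(incidence) * F(reflection)
    return element_pattern(theta_i) * element_pattern(theta_r)

print(scattering_pattern(np.pi / 2, np.pi / 3))  # broadside incidence
\end{verbatim}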
\section{Numerical results}
\label{sec:Results}
To compare our QuaDRiGa-based RIS channel model against the Rician model from \cite{zhou2020joint}, we perform a simulation for a simple single-user, single-RIS scenario.
The BS, the RIS and the UE have coordinates (0, 0, 25), (200, 50, 25) and (250, 0, 1.5) respectively.
The number of elements at the BS, the UE and the RIS is $N_{TX} = 4\times 4 = 16$, $N_{RX} = 4\times1 = 4$ and $N_{RIS} = 45 \times 45 = 2025$, respectively. The elements on the RIS, the BS and the UE are $d = \lambda/2$ apart, and for simplicity we model all the elements as isotropic.
In QuaDRiGa we perform the simulation in a $1.4$ MHz band with $15$ kHz subcarrier spacing and set the BS power to $30$ dBm, so that the power per subcarrier is $10.3$ dBm.
The noise power spectral density is $-174$ dBm/Hz and the receiver noise figure is $9$ dB. Thus the noise power per subband is $-123$ dBm.
To compare with the Rician model, we extract the channels at one subcarrier and use these values for the achievable rate calculation.
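These figures follow from a simple link-budget computation (a sanity check of our own):
\begin{verbatim}
import numpy as np

bw, scs = 1.4e6, 15e3            # bandwidth and subcarrier spacing, Hz
p_bs_dbm = 30.0                  # total BS power
p_sc_dbm = p_bs_dbm - 10 * np.log10(bw / scs)    # ~10.3 dBm per subcarrier
noise_dbm = -174 + 10 * np.log10(scs) + 9        # thermal + 9 dB noise figure
print(round(p_sc_dbm, 1), round(noise_dbm, 1))   # 10.3, -123.2
\end{verbatim}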
In a popular scenario, the direct BS-UE path is blocked and the RIS is in Line-of-Sight conditions with respect to the UE and the BS. That is why we choose the
3GPP 38.901 UMa NLoS model for $\mathbf{H}_0$ and the 3GPP 38.901 UMa LoS model
for $\mathbf{H}_A$ and $\mathbf{H}_B$.
To make the comparison fair, we set $K = 9$ dB in the Rician model \eqref{eq:Rician} for $\mathbf{H}_A$ and $\mathbf{H}_B$, as in the UMa LoS model. For
$\mathbf{H}_0$ we set $K = -100$ dB, as in the UMa NLoS QuaDRiGa configuration file. In \eqref{eq:Rician} we use the same pathloss expressions as in the UMa LoS and NLoS models.
Additionally, to determine the impact of the Large-Scale fading parameters, we perform simulations with two modifications of the 3GPP 38.901 UMa models. The first one removes the Shadow Fading by setting its standard deviation to $-100$ dB in both the LoS and NLoS models.
The second one additionally sets the $K$-factor standard deviation to $\sigma_K = -100$ dB in the UMa LoS model.
We obtain $500$ realizations for every channel and for every model to plot the Cumulative Density Functions (CDFs).
Fig.~\ref{fig:a} shows the CDFs of LoS channels $\mathbf{H}_A$ and $\mathbf{H}_B$.
Though the average eigenvalue is the same for the Rician (blue lines) and UMa LoS models (red lines), the ranges of possible eigenvalues are different.
The Rician LoS channel yields eigenvalues in a range of $0.2$ dB, while the eigenvalues of the 3GPP UMa LoS channel vary by approximately $20$ dB. Such a large range is caused mainly by the Shadow Factor, since setting it to $-100$ dB reduces the eigenvalue range to approximately $3$ dB. Additionally setting the $K$-factor standard deviation to $-100$ dB yields almost the same CDF as the Rician model \eqref{eq:Rician}.
Next, we compare the 3GPP UMa NLoS model against the Rician model from \cite{zhou2020joint}.
As Fig.~\ref{fig:b} demonstrates, the mean eigenvalues differ: with the UMa model, the first and second eigenvalues are approximately $3$ dB and $7$ dB lower, respectively.
Moreover, with the Rician model the two eigenvalues are only $1.5$ dB apart, while with the UMa model the difference between them is $6$ dB.
The reason is that the Rician model assumes that the scatterers are distributed uniformly in the hemisphere in front of the receiver, while the 3GPP model features specific AoA and AoD ranges with a non-uniform distribution that depends on the geometry of the scenario.
Finally, we compare the achievable rate of the RIS-assisted MIMO system for the two models. We assume perfect CSI and use the algorithm suggested by Zhou et al. \cite{zhou2020joint} for joint BS-RIS precoding. As Fig.~\ref{fig:c} shows, there is a huge performance difference between the two models. The mean achievable rate without the RIS is $2.2$ times larger with the Rician model. Moreover, with the Rician model the RIS provides a $31 \%$ gain, while with QuaDRiGa the RIS gives almost a $100 \%$ gain. That is because the $\mathbf{H}_0$ eigenvalues are smaller with QuaDRiGa.
To summarize, there is a significant difference between the results obtained with the Rician and 3GPP-standardized models, caused by the different modeling of the Non-Line-of-Sight part of the channel. Though it is possible to modify the Rician model so that the results match for fixed UE, BS and RIS positions, a universal calibration is not feasible.
\section{Conclusion}
\label{sec:Conclusion}
The Rician channel model, which is extremely popular in RIS analysis,
is an obstacle to the standardization of RIS-assisted MIMO since it models the NLoS component in a geometry-inconsistent way.
Moreover, the Rician model is not compatible with the channel model calibration procedure specified in the 3GPP standard.
To fill this gap, we introduce a new RIS-assisted MIMO channel model based on the
QuaDRiGa simulation platform with calibrated parameters.
The proposed method inherits a realistic 3GPP-compliant channel model from QuaDRiGa.
We demonstrate that, compared with the proposed method, the Rician model yields an overestimated achievable rate.
In addition, we explain how the scattering pattern of the RIS element can be included in the proposed method.
The channel model we analyzed is difficult to calibrate because it uses the isolated RIS element pattern, which is different from the embedded one.
In our future research, we are going to obtain the embedded RIS element pattern via measurements and fit the proposed model based on field tests.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:intro}
Nonlinear resonance is a mechanism by which energy is continuously transferred between a small number of linear wave modes. This phenomenon, first observed in Wilton's analysis of gravity-capillary wave trains \cite{Wilton1915}, has been the subject of frequent investigation over the past century \cite{McGoldrick1965, McGoldrick1970b, McGoldrick1970a, Simmons1969, SchwartzVandenBroeck1979, HammackHenderson1993, CraikBook}; indeed, nonlinear resonance has since been observed for wave trains in a growing number of dispersive wave systems, including
gravity waves \cite{Phillips1960, Hasselmann1961, LonguetHiggins1962, Benney1962},
acoustic-gravity waves \cite{Kadri2013, Kadri2016},
flexural-gravity waves \cite{Wang2013},
two-layer flows \cite{Ball1964, Joyce1974, Segur1980}, and
atmospheric flows \cite{Raupp2008, Raupp2009}. Whilst the aforementioned studies typically consider nonlinear resonance for laterally unbounded domains, the purpose of this study is to demonstrate that energy exchange between free-surface gravity waves may be induced and accentuated by horizontal confinement.
We focus our study on the collective resonance of three linear wave modes, henceforth referred to as a \emph{triad} \cite{Bretherton1964}. In laterally unbounded domains, the monotonic and concave form of the dispersion curve precludes the existence of resonant triads for gravity wave trains at finite depth \cite{Phillips1960, Hasselmann1961}, with resonant quartets instead being the smallest possible collective resonant interaction \cite{Benney1962, LonguetHiggins1962, BergerMilewski2003}. However, confinement of the fluid to a vertical cylinder results in linear wave modes that differ in form to sinusoidal plane waves (except for a rectangular cylinder), so the preclusion of resonant triads no longer applies.
Indeed, our study demonstrates that, under certain conditions, resonant triads may arise in cylinders of arbitrary cross-section for specific values of the fluid depth. As resonant triads evolve over a much faster time scale than resonant quartets, the exchange of energy in gravity waves is more efficient under the influence of lateral confinement \cite{Michel2019}, with potential implications for resonant sloshing in man-made and natural basins \cite{Bryant1989}.
Prior investigations of confined resonant free-surface gravity waves have predominantly focused on the so-called 1:2 resonance, which arises when two of the three linear wave modes comprising a triad coincide.
For axisymmetric standing waves in a circular cylinder,
Mack \cite{Mack1962} determined a condition for the existence of critical depth-to-radius ratios at which a 1:2 resonance may arise, a result later generalised to cylinders of arbitrary cross-section \cite{Miles1984b}.
Miles \cite{Miles1976, Miles1984a} then characterised the weakly nonlinear evolution of such internal resonances, demonstrating that a 1:2 resonance is impossible in a rectangular cylinder \cite{Miles1976}. Although Miles' seminal results provide an informative view of the weakly nonlinear dynamics, the influence of fully nonlinear effects was later assessed by Bryant \cite{Bryant1989} and Yang \emph{et al.}\ \cite{Yang2021}. For the case of a circular cylinder of finite depth, Bryant \cite{Bryant1989} and Yang \emph{et al.}\ \cite{Yang2021} both characterised new steadily propagating nonlinear waves arising in the vicinity of a 1:2 resonance, and Yang \emph{et al.}\ \cite{Yang2021} also computed nonlinear near-resonant axisymmetric standing waves.
Finally, broader mathematical properties of water waves exhibiting O(2) symmetry (of which a circular cylinder is one example) were analysed by Bridges \& Dias \cite{BridgesDias1990} and Chossat \& Dias \cite{ChossatDias1995}.
Given the restrictive set of critical depths at which a 1:2 resonance may arise \cite{Bryant1989,Yang2021}, it is natural to explore the possibility of nonlinear resonance in cylinders whose depth departs from the depths that trigger a 1:2 resonance.
To the best of our knowledge, the first and only such study was the seminal experimental investigation performed by Michel \cite{Michel2019}, who focused on resonant triads arising for free-surface gravity waves confined to a finite-depth circular cylinder. Notably, the cylinder depth in Michel's experiment was judiciously chosen so as to isolate a specific triad. Michel utilised bandlimited random horizontal vibration so as to excite two members of the triad, whose nonlinear interaction led to the growth of the third mode. Significantly, the energy of the third mode was, on average, the product of the energies of the remaining two modes, thereby satisfying the quadratic energy exchange typical of resonant triads.
In order to exemplify the mechanism of nonlinear resonance, Michel \cite{Michel2019} also calculated the response of a child mode due to the nonlinear interaction between two parent modes (where all three wave modes comprise the triad).
Notably, Michel's calculation is restricted to the early stages of growth and to particular relative phases of the wave modes. In addition, Michel considered a fluid of infinite depth for all but the resonance conditions, for which finite-depth corrections were included. In contrast, we consider general resonances in arbitrary cylinders of finite depth and derive equations for the triad evolution over long time scales. We also believe some nonlinear contributions to the interactions were omitted from Michel's calculation, resulting in quantitative differences (see \S \ref{sec:multiple_scales_summary}).
The goal of our study is to unify the existence and evolution of 1:2 and triadic resonances into a single mathematical framework, effectively characterising all triad interactions of this type.
Based on existing theory, it is unclear how the existence of resonant triads depends on the form of the cylinder cross-section, and which combinations of wave modes are permissible for judicious choice of the fluid depth. Furthermore, the range of depths that may excite a particular triad is uncertain, with 1:2 resonances only excited in a very narrow window about each critical depth \cite{Mack1962, Miles1984b}. Once a particular triad is excited, one anticipates that the triad evolution will be governed by the canonical triad equations \cite{Bretherton1964, CraikBook}; however, quantifying the triad evolution and relative energy exchange requires computation of the triad coupling coefficients. Finally, it is unclear how best to excite triads in arbitrary cylinders, both with and without external forcing.
We here present a relatively comprehensive characterisation of the existence, evolution and excitation of resonant triads for gravity waves confined to a cylinder of arbitrary cross-section and finite depth. In order to reduce the problem to its key components, we first truncate the Euler equations, recasting the fluid evolution in terms of a finite-depth Benney-Luke equation (\S \ref{sec:formulation}), incorporating only the nonlinear interactions necessary for resonant triads. In \S \ref{sec:triads_existence}, we prove necessary and sufficient conditions for there to exist a finite depth at which three linear wave modes may form a resonant triad. In particular, we prove that resonant triads are impossible for rectangular cylinders, yet there is an abundance of resonant triads for circular cylinders. We then use multiple-scales analysis to determine the long-time evolution of a triad in a cylinder of arbitrary cross-section (\S \ref{sec:triad_eqs}), from which we characterise the relative coupling of different triads. Finally, we explore the excitation of resonant triads (\S \ref{sec:excitation}), and discuss the potential extension of our theoretical developments to the cases of applied forcing and two-layer flows (\S \ref{sec:discussion}).
\section{Formulation}
\label{sec:formulation}
We consider the irrotational flow of an inviscid, incompressible liquid that is bounded above by a free surface, confined laterally by the vertical walls of a cylinder whose horizontal cross-section, $\mathcal{D}$, is enclosed by the curve $\partial\mathcal{D}$, and bounded below by a rigid horizontal plane lying a distance $H$ below the undisturbed free surface; see figure \ref{fig:Schematic_diagram}. We consider the fluid evolution in dimensionless variables, taking the cylinder's typical horizontal extent, $a$, as the unit of length, and $\sqrt{ag^{-1}}$ as the unit of time, where $g$ is the acceleration due to gravity. It follows that the dimensionless free-surface elevation, $\eta(\bm{x},t)$, and velocity potential, $\phi(\bm{x},z,t)$, evolve according to the equations
\begin{subequations}
\label{eq:Euler}
\begin{alignat}{2}
\Delta \phi + \phi_{zz} &= 0 \qquad && \mathrm{for}\,\,\,\bm{x} \in \mathcal{D}, \quad -h < z < \epsilon \eta, \label{eq:Euler_Laplace} \\
\phi_t + \eta + \frac{\epsilon}{2}\Big(|\nabla \phi|^2 + \phi_z^2\Big) &= 0 \quad &&\mathrm{for}\,\,\,\bm{x} \in \mathcal{D}, \quad z = \epsilon \eta, \label{eq:Euler_DBC} \\
\eta_t + \epsilon \nabla\phi \cdot \nabla\eta &= \phi_z \quad &&\mathrm{for}\,\,\,\bm{x} \in \mathcal{D}, \quad z = \epsilon \eta, \label{eq:Euler_KBC} \\
\bm{n}\cdot \nabla \phi &= 0 &&\mathrm{for}\,\,\,\bm{x} \in \partial\mathcal{D}, \quad -h < z < \epsilon \eta, \label{eq:Euler_no_flux_walls} \\
\phi_z &= 0 &&\mathrm{for}\,\,\,\bm{x} \in \mathcal{D}, \quad z = -h, \label{eq:Euler_no_flux_base}
\end{alignat}
\end{subequations}
corresponding to the continuity equation, dynamic and kinematic boundary conditions, and no-flux through the vertical walls and horizontal base, respectively.
In equation \eqref{eq:Euler}, the dimensionless parameter $\epsilon$ is proportional to the typical wave slope, $h = H/a$ is the ratio of the fluid depth to the typical horizontal extent, $\bm{n}$ is a unit vector normal to the boundary $\partial \mathcal{D}$, and the operators $\nabla$ and $\Delta$ denote the horizontal gradient and Laplacian, respectively. Moreover, conservation of mass implies that the free surface satisfies $\iint_{\mathcal{D}} \eta \,\mathrm{d}A = 0$ for all time. Finally, in dimensional variables, $a\bm{x}$ is the two-dimensional horizontal coordinate, $az$ is the upward-pointing vertical coordinate, $\sqrt{ag^{-1}} t$ denotes time, $\epsilon a \eta$ is the free-surface displacement, and $\epsilon a \sqrt{ag}\phi$ is the velocity potential.
\begin{figure}
\centering
\includegraphics[width=0.25\textwidth]{figures/Figure1.png}
\caption{\label{fig:Schematic_diagram} Schematic diagram of the cylindrical tank (with cross-section $\mathcal{D}$ and boundary $\partial \mathcal{D}$) partially filled with liquid. The undisturbed free surface (dashed lines) lies on $z = 0$, a distance $H$ above the rigid bottom plane (grey). The disturbed free surface is sketched in dash-dotted lines.}
\end{figure}
We aim to develop a broad framework for understanding resonant triads in a cylinder of finite depth; however, care must be taken when modelling fluid-boundary interactions and determining the class of permissible cylinder cross-sections.
From a modelling perspective, we employ an assumption generally implicit in the water-wave problem in bounded domains; specifically, we neglect the meniscus and dissipation arising near the vertical walls \cite{Miles1967}, thus determining that the free surface intersects the boundary normally, i.e.\ $\bm{n}\cdot \nabla \eta = 0$ for $\bm{x} \in \partial\mathcal{D}$ \cite{MilesHenderson1990}.
In order to maximise the generality of our investigation, we allow the cylinder cross-section, $\mathcal{D}$, to be fairly arbitrary; however, the mathematical developments presented herein require $\mathcal{D}$ to be bounded with a piecewise-smooth boundary, thereby allowing us to utilise the spectral theorem for compact self-adjoint operators \cite{KreyszigBook} and the divergence theorem. As most cylinders of practical interest consist of a piecewise-smooth boundary, this mathematical restriction fails to limit the breadth of our study.
\subsection{Derivation of the Benney-Luke equation}
\label{sec:BL_eq}
As our study is focused on the weakly nonlinear evolution of small-amplitude waves, we proceed to simplify \eqref{eq:Euler} in the case $0 < \epsilon \ll 1$ and $h = O(1)$. We begin by expanding the dynamic and kinematic boundary conditions (equations \eqref{eq:Euler_DBC}--\eqref{eq:Euler_KBC}) about $z = 0$ in powers of $\epsilon$, which, upon eliminating $\eta$, gives rise to the equation \cite{Benney1962, MilewskiKeller1996}
\begin{equation}
\label{eq:BL_intermediate}
\phi_{tt} + \phi_z = \epsilon\Big(\partial_t (\phi_{tz}\phi_t) + \phi_{zz}\phi_t - \partial_t(|\nabla \phi|^2) - \phi_z\phi_{zt}\Big) + O(\epsilon^2) \quad\mathrm{for}\quad \bm{x} \in \mathcal{D}, \quad z = 0.
\end{equation}
To reduce the fluid evolution to the dynamics arising on the linearised free surface, $z = 0$, we define the Dirichlet-to-Neumann operator, $\mathscr{L}$, so that $\mathscr{L} \phi|_{z = 0} = \phi_z|_{z = 0}$. Here $\phi$ satisfies Laplace's equation \eqref{eq:Euler_Laplace} over the linearised domain $-h < z < 0$, with $\partial_z \phi = 0$ on $z = -h$ (see equation \eqref{eq:Euler_no_flux_base}) and $\bm{n}\cdot \nabla \phi = 0$ for $\bm{x} \in \partial \mathcal{D}$ (see equation \eqref{eq:Euler_no_flux_walls}). Notably, the Dirichlet-to-Neumann operator may be defined in terms of its spectral representation, as detailed in \S \ref{sec:DtN_operator}. By denoting $u(\bm{x},t) = \phi(\bm{x},0,t)$, we finally obtain the finite-depth Benney-Luke equation \cite{Benney1962, BenneyLuke1964, MilewskiKeller1996}
\begin{equation}
\label{eq:BL_eq}
u_{tt} + \mathscr{L} u + \epsilon\bigg(
u_t\big(\mathscr{L}^2 + \Delta\big)u + \pd{}{t}\Big[(\mathscr{L} u)^2 + |\nabla u|^2\Big]\bigg) = O(\epsilon^2) \quad\mathrm{for}\quad \bm{x} \in \mathcal{D},
\end{equation}
where we have simplified the nonlinear terms in equation \eqref{eq:BL_intermediate} using $\phi_{zz} = -\Delta \phi$ and $u_{tt} = -\mathscr{L} u + O(\epsilon)$.
The remainder of our investigation will be focused on the evolution of resonant triads governed by the Benney-Luke equation \eqref{eq:BL_eq}.
As resonant triads arising in confined geometries are governed primarily by quadratic nonlinearities, it is sufficient to neglect terms of size $O(\epsilon^2)$ in equation \eqref{eq:BL_eq}; however, higher-order corrections to the Benney-Luke equation may be derived by following a similar expansion procedure \cite{Benney1962, MilewskiKeller1996, BergerMilewski2003}.
Although our investigation is mainly focused on the evolution of the velocity potential, $u$, one may recover the leading-order free-surface elevation from the dynamic boundary condition \eqref{eq:Euler_DBC}, namely $\eta = -u_t + O(\epsilon)$.
\subsection{Spectral representation of the Dirichlet-to-Neumann operator}
\label{sec:DtN_operator}
The Dirichlet-to-Neumann operator, $\mathscr{L}$, may be understood in terms of the discrete set of orthogonal eigenfunctions of the horizontal Laplacian operator \cite{KreyszigBook}. Specifically, we consider the set of real-valued eigenfunctions, $\Phi_n(\bm{x})$, satisfying
\[ -\Delta \Phi_n = k_n^2 \Phi_n \,\,\, \mathrm{for}\,\,\, n = 0, 1, \ldots,\]
where the corresponding eigenvalues, $k_n^2$, are ordered so that $0 = k_0 < k_1\leq k_2 \leq \ldots$. Moreover, each eigenfunction satisfies the boundary condition $\bm{n}\cdot\nabla \Phi_n = 0$ on $\partial\mathcal{D}$, as motivated by the no-flux condition \eqref{eq:Euler_no_flux_walls}. Finally, the orthogonal eigenfunctions are normalised so that $\langle \Phi_m, \Phi_n \rangle = \delta_{mn}$, where
\[\langle f, g\rangle = \frac{1}{S}\iint_\mathcal{D} f g \,\mathrm{d}A \]
defines an inner product for real functions $f$ and $g$, $S$ is the area of $\mathcal{D}$, and $\delta_{mn}$ is the Kronecker delta. Notably, $\Phi_0(\bm{x}) = 1$ is the constant eigenfunction, with corresponding eigenvalue $k_0 = 0$.
To determine the Dirichlet-to-Neumann operator for sufficiently smooth $\phi$, we first substitute the series expansion $\phi(\bm{x},z) = \sum_{n = 0}^\infty \phi_n(z)\Phi_n(\bm{x})$ into Laplace's equation \eqref{eq:Euler_Laplace}, where we have temporarily omitted the time dependence. We then solve the resulting equation for $\phi_n(z)$ over the linearised domain $-h < z < 0$, in conjunction with the no-flux condition on $z = -h$ (see equation \eqref{eq:Euler_no_flux_base}). It follows that
$\partial_z\phi_n(0) = \hat{\mathscr{L}}_n \phi_n(0)$, where
\begin{equation}
\label{eq:DtN_symbol}
\hat{\mathscr{L}}_n = k_n \tanh(k_n h)
\end{equation}
is the spectral multiplier of the Dirichlet-to-Neumann operator, $\mathscr{L}$. By expressing the time-dependent free-surface velocity potential, $u = \phi|_{z = 0}$, in terms of the basis expansion $u(\bm{x}, t ) = \sum_{n = 0}^\infty u_n(t) \Phi_n(\bm{x})$, it follows that the Dirichlet-to-Neumann map has the spectral representation $\mathscr{L} u = \sum_{n = 0}^\infty \hat{\mathscr{L}}_n u_n \Phi_n$.
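As a concrete illustration (a sketch of our own for the special case of a rectangular cross-section $[0,L_x]\times[0,L_y]$, where the Neumann eigenfunctions are products of cosines), the operator acts diagonally on the basis coefficients:
\begin{verbatim}
import numpy as np

Lx, Ly, h = 1.0, 0.7, 0.5        # illustrative cylinder dimensions
M = N = 8                        # basis truncation

m, n = np.meshgrid(np.arange(M), np.arange(N), indexing="ij")
k = np.sqrt((m * np.pi / Lx) ** 2 + (n * np.pi / Ly) ** 2)

def dtn(u_hat):
    # Apply L spectrally: multiply each coefficient by k tanh(k h)
    return k * np.tanh(k * h) * u_hat

u_hat = np.zeros((M, N))
u_hat[1, 2] = 1.0                # a single cosine mode
print(dtn(u_hat)[1, 2])          # equals k_{12} tanh(k_{12} h)
\end{verbatim}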
\section{The existence of resonant triads}
\label{sec:triads_existence}
Resonant triads arise due to the exchange of energy between linear wave modes, an effect induced by nonlinear wave interactions. In order to define resonant triads mathematically, it is necessary to first determine the angular frequency associated with each linear wave mode. In the limit $\epsilon \rightarrow 0$, the Benney-Luke equation \eqref{eq:BL_eq} reduces to the linear equation $u_{tt} + \mathscr{L} u = 0$. By seeking a solution to the linearised Benney-Luke equation of the form $u(\bm{x},t) = \Phi_n(\bm{x}) \mathrm{e}^{-\mathrm{i} \omega_n t}$, we conclude that the angular frequency, $\omega_n$, satisfies $\omega_n^2 = \hat{\mathscr{L}}_n$, or the more familiar \cite{LambBook}
\begin{equation}
\label{eq:dis_relation}
\omega_n^2 = k_n \tanh(k_n h).
\end{equation}
As we will see, a crucial aspect of the following analysis is that the angular frequency depends on the fluid depth, i.e.\ $\omega_n(h)$.
Finally, we note that the angular frequency is larger for more oscillatory eigenfunctions (i.e.\ for larger values of $k_n$); by analogy to the evolution of plane gravity waves, we refer to $k_n$ as a `wavenumber' henceforth.
We proceed by considering three linear wave modes, enumerated $n_1$, $n_2$ and $n_3$, where we denote
\[\Omega_j = \omega_{n_j}, \quad K_j = k_{n_j}, \quad \mathrm{and}\quad \Psi_j(\bm{x}) = \Phi_{n_j}(\bm{x}) \quad \mathrm{for}\,\,\, j = 1, 2, 3. \]
Notably, we exclude the wavenumber $k_0 = 0$ from consideration as the corresponding eigenmode, $\Phi_0$, simply reflects the invariance of the Benney-Luke equation \eqref{eq:BL_eq} under the mapping $u \mapsto u + \mathrm{constant}$; henceforth, we consider only wavenumbers $K_j > 0$. The three linear wave modes form a resonant triad if there is a critical fluid depth, $h_c$, satisfying
\begin{equation}
\label{eq:triad_sum_gen}
\Omega_1(h_c) \pm \Omega_2(h_c) \pm \Omega_3(h_c) = 0,
\end{equation}
where all four sign combinations are permissible (we consider $\Omega_j > 0$ without loss of generality). To simplify notation in the following arguments, we restrict our attention to the particular case
\begin{equation}
\label{eq:triad_sum1}
\Omega_1(h_c) + \Omega_2(h_c) = \Omega_3(h_c),
\end{equation}
where the other three sign combinations in equation \eqref{eq:triad_sum_gen} may be recovered by suitable re-indexing of the $\Omega_j$ terms.
However, as we will see in \S \ref{sec:triad_eqs}, an additional constraint necessary for triads to exist is the eigenmode correlation condition,
\begin{equation}
\label{eq:corr_cond}
\iint_{\mathcal{D}} \Psi_1\Psi_2\Psi_3 \,\mathrm{d}A \neq 0,
\end{equation}
which implies that the product of any two eigenmodes is non-orthogonal to the remaining eigenmode.
\subsection{The existence of a critical depth}
\label{sec:critical_depth_existence}
We proceed to determine necessary and sufficient conditions on the wavenumbers, $K_j$, for there to exist a depth, $h_c$, at which a resonant triad forms, where such a critical depth is unique. We summarise our results in terms of the following theorem.
\begin{theorem}
\label{thm:triads}
There exists a positive and finite value of $h$ such that $\Omega_1 + \Omega_2 = \Omega_3$ if and only if
\begin{equation}
\label{eq:kineq}
K_1 + K_2 < K_3 < \big(\sqrt{K_1} + \sqrt{K_2}\big)^2.
\end{equation}
When this pair of inequalities is satisfied, the corresponding value of $h$ is unique.
\end{theorem}
We briefly sketch the proof of Theorem \ref{thm:triads}, with full details presented in appendix \ref{app:thm_proof}.
We first demonstrate that no solutions to $\Omega_1 + \Omega_2 = \Omega_3$ are possible when the bounds in equation \eqref{eq:kineq} are violated, i.e.\ when $K_1 + K_2 \geq K_3$ or when $\sqrt{K_1} + \sqrt{K_2} \leq \sqrt{K_3}$. We then consider the case where the inequalities \eqref{eq:kineq} are satisfied and determine the existence of positive roots to the function $F(h) = (\Omega_1(h) +\Omega_2(h))/\Omega_3(h) - 1$. In this case, we demonstrate that $\lim_{h \rightarrow 0} F(h) < 0$ and $\lim_{h\rightarrow\infty} F(h) > 0$, from which we conclude that $F(h)$ has at least one root (by continuity of $F$). Finally, we deduce that this root is unique by proving that $F(h)$ is a strictly monotonically increasing function of $h$ when the inequalities \eqref{eq:kineq} are satisfied.
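In practice, the critical depth may be found by standard bracketing root-finding; the following sketch (our own, with arbitrary wavenumbers chosen to satisfy the inequalities \eqref{eq:kineq}) locates the unique root of $F$:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def omega(k, h):
    # dispersion relation omega^2 = k tanh(k h)
    return np.sqrt(k * np.tanh(k * h))

K1, K2, K3 = 1.0, 1.5, 3.0   # satisfies K1+K2 < K3 < (sqrt(K1)+sqrt(K2))^2
F = lambda h: (omega(K1, h) + omega(K2, h)) / omega(K3, h) - 1.0

h_c = brentq(F, 1e-6, 1e3)   # unique root guaranteed by the theorem above
print(h_c)
\end{verbatim}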
Two important conclusions may be deduced from Theorem \ref{thm:triads}.
First, it follows from equation \eqref{eq:kineq} that the wavenumber, $K_3$, corresponding to the largest angular frequency, $\Omega_3$, is larger than both the other two wavenumbers ($K_1$ and $K_2$), but it cannot be arbitrarily large (as supplied by the upper bound). For a given pair of eigenmodes (say $\Psi_1$ and $\Psi_2$), we conclude that there are likely to be only finitely many eigenmodes that can resonate with this pair (indeed, that number might be fairly small, or even zero).
Second, when modes 1 and 2 coincide (a 1:2 resonance), one deduces that $\Omega_1 = \Omega_2$ and $K_1 = K_2$; as such, the existence bounds \eqref{eq:kineq} simplify to $2K_1 < K_3 < 4K_1$, or $2 < K_3/K_1 < 4$ \cite{Mack1962, Miles1984b}.
\subsection{Determining the critical depth}
\label{sec:critical_depth_determine}
Although Theorem \ref{thm:triads} determines necessary and sufficient conditions on the wavenumbers, $K_j$, for there to be a critical depth, $h_c$, at which a resonant triad exists, the critical depth remains to be determined. In general, the critical depth must be computed numerically (being the unique root of the nonlinear function $F(h)$); however, we demonstrate that useful quantitative and qualitative information may be obtained via asymptotic analysis. For the remainder of this section, we consider the rescaled wavenumbers, $\xi_1 = K_1/K_3$ and $\xi_2 = K_2/K_3$, and the rescaled depth, $\zeta = K_3 h$; it remains to determine the root, $\zeta_c$, of
\begin{equation}
\label{eq:resc_freq_sum}
F(\zeta) = \sqrt{\frac{ \xi_1 \tanh(\xi_1 \zeta )} {\tanh(\zeta)} } + \sqrt{\frac{ \xi_2 \tanh(\xi_2 \zeta )} {\tanh(\zeta)} } - 1
\end{equation}
when $\xi_1, \xi_2 > 0$ satisfy
\begin{equation}
\label{eq:xi_ineq}
\xi_1 + \xi_2 < 1 < \sqrt{\xi_1} + \sqrt{\xi_2}.
\end{equation}
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{figures/Figure2.png}
\caption{\label{fig:Contours_of_h} Contours of the rescaled critical depth, $\zeta_c = h_cK_3$, as a function of the rescaled wavenumbers, $\xi_1 = K_1/K_3$ and $\xi_2 = K_2/K_3$.
$(a)$ The contours computed numerically from equation \eqref{eq:resc_freq_sum}. The black lines indicate the limiting cases of $\zeta_c \rightarrow 0$ (at $\xi_1 + \xi_2 = 1$) and $\zeta_c \rightarrow \infty$ (at $\sqrt{\xi_1} + \sqrt{\xi_2} = 1$).
$(b)$ The contours are overlaid by the leading-order approximation (equation \eqref{eq:zeta*_cont1}; circles) and the higher-order correction (equation \eqref{eq:zeta*_exp_higher_order}; diamonds) for $\zeta_c$ equal to 0.5, 1, 1.5 and 2.}
\end{figure}
In figure \ref{fig:Contours_of_h}$(a)$, we present contours of the critical rescaled depth, $\zeta_c$, in the $(\xi_1, \xi_2)$-plane, restricted to the region demarcated by equation \eqref{eq:xi_ineq}. Consistent with the limits $\lim_{\zeta\rightarrow 0}F(\zeta) = \xi_1 + \xi_2 - 1$ and $\lim_{\zeta\rightarrow \infty}F(\zeta) = \sqrt{\xi_1} + \sqrt{\xi_2} - 1$, we observe that the root, $\zeta_c$, tends to zero at the line $\xi_1 + \xi_2 = 1$, and approaches infinity at the curve $\sqrt{\xi_1} + \sqrt{\xi_2} = 1$. Furthermore, the uniqueness of the root of $F$ for given $(\xi_1, \xi_2)$ is reflected in the observation that the contours of $\zeta_c$ do not cross. Finally, we note that the contours are symmetric about the line $\xi_1 = \xi_2$, which is a direct consequence of the invariance of $F(\zeta)$ under the mapping $\xi_1 \leftrightarrow \xi_2$ (see equation \eqref{eq:resc_freq_sum}).
Although we are primarily interested in the physically relevant case for which the cylinder's depth-to-width ratio, $h$, is of size $O(1)$, an informative analytic result may be obtained by considering $F(\zeta)$ in the limit $\zeta \ll 1$ (or $K_3 h \ll 1$). By utilising the Taylor expansion
\[\sqrt{\tanh(x)} \sim \sqrt{x}\bigg(1 - \frac{x^2}{6} + \frac{19}{360}x^4 + O\big(x^6\big)\bigg),\]
we obtain
\begin{equation}
\label{eq:om_exp}
\sqrt{\xi_1\tanh(\xi_1\zeta)} + \sqrt{\xi_2\tanh(\xi_2\zeta)} - \sqrt{\tanh(\zeta)} \sim \sqrt{\zeta}\bigg[\Big(\xi_1 + \xi_2 - 1\Big) - \frac{\zeta^2}{6}\Big(\xi_1^3 + \xi_2^3 - 1\Big) + O(\zeta^4)\bigg]
\end{equation}
for $0 < \zeta \ll 1$. Whilst deriving equation \eqref{eq:om_exp}, we have utilised the bound $\xi_1, \xi_2 < 1$ (see equation \eqref{eq:xi_ineq}), which additionally ensures that $0 < \xi_j\zeta \ll 1$ for $j = 1,2$. We note that the left-hand side of equation \eqref{eq:om_exp} is equal to $F(\zeta)\tanh(\zeta)$, so $\zeta_c$ satisfies
\begin{equation}
\label{eq:zeta*_exp}
\xi_1 + \xi_2 - 1 - \frac{\zeta_c^2}{6}\Big(\xi_1^3 + \xi_2^3 - 1\Big) = O(\zeta_c^4),
\end{equation}
provided that $0 < \zeta_c \ll 1$. By neglecting terms of size $O(\zeta_c^4)$ in equation \eqref{eq:zeta*_exp}, one may then easily solve for $\zeta_c$ in terms of $\xi_1$ and $\xi_2$.
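Specifically, neglecting these terms yields
\[
\zeta_c \approx \sqrt{\frac{6(1 - \xi_1 - \xi_2)}{1 - \xi_1^3 - \xi_2^3}},
\]
where the numerator and denominator are both positive by virtue of the bounds \eqref{eq:xi_ineq} (note that $\xi_1^3 + \xi_2^3 \leq (\xi_1 + \xi_2)^3 < 1$).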
Alternatively, a more succinct expression for $\zeta_c$ may be found by first noting that
\begin{equation}
\label{eq:xi_cubed_exp}
\xi_1^3 + \xi_2^3 = (\xi_1 + \xi_2)^3 - 3\xi_1\xi_2(\xi_1 + \xi_2) = 1 - 3\xi_1\xi_2 + O(\zeta_c^2),
\end{equation}
where we have utilised the leading-order approximation $\xi_1 + \xi_2 = 1 + O(\zeta_c^2)$ (see equation \eqref{eq:zeta*_exp}) to determine the second equality.
Upon substituting equation \eqref{eq:xi_cubed_exp} into equation \eqref{eq:zeta*_exp}, we find that $\xi_1$, $\xi_2$ and $\zeta_c$ are now related by the notably simpler expression
\begin{equation}
\label{eq:zeta*_exp_simp}
\xi_1 + \xi_2 - 1 + \frac{\zeta_c^2}{2}\xi_1\xi_2 = O(\zeta_c^4).
\end{equation}
By neglecting terms of $O(\zeta_c^4)$, the leading-order approximation for the rescaled critical depth, $\zeta_c$, is given by
\begin{equation}
\label{eq:zeta*_quad}
\zeta_c \sim \sqrt{\frac{2(1 - \xi_1 - \xi_2)}{\xi_1\xi_2}},
\end{equation}
an expression valid when $0 < \zeta_c \ll 1$ and $\xi_1 + \xi_2 < 1$ (see equation \eqref{eq:xi_ineq}). Alternatively, one may deduce from equation \eqref{eq:zeta*_exp_simp} that the contours of $\zeta_c$ satisfy the approximate form
\begin{equation}
\label{eq:zeta*_cont1}
\xi_2 \sim \frac{1 - \xi_1}{1 + \frac{1}{2}\zeta_c^2\xi_1},
\end{equation}
where the term in the denominator is responsible for the increased `bending' of the contours as $\zeta_c$ becomes progressively larger (see figure \ref{fig:Contours_of_h}). We note that the additional simplification afforded by equation \eqref{eq:xi_cubed_exp} allows for a far more tractable representation of the contours relative to solving equation \eqref{eq:zeta*_exp} directly for $\xi_2$ given $\xi_1$ and $\zeta_c$.
Despite being derived under the assumption $0 < \zeta_c \ll 1$, we see in figure \ref{fig:Contours_of_h}$(b)$ that the contours given by equation \eqref{eq:zeta*_cont1} agree favourably with the numerical solution even up to $\zeta_c \approx 1$. However, it is readily verified from equation \eqref{eq:zeta*_quad} that the asymptotic approximation of each contour crosses the boundary curve $\sqrt{\xi_1} + \sqrt{\xi_2} = 1$ at $\zeta_c = 4$ (for which $\xi_1 = \xi_2 = \frac{1}{4}$), thereby demonstrating that the reduced asymptotic form has limited applicability (even in a qualitative sense) for slightly larger values of $\zeta_c$. One may further improve the quantitative (and, to an extent, qualitative) agreement between the asymptotic analysis and numerical computation by including terms of size $O(\zeta^4)$ in equation \eqref{eq:om_exp}; indeed, an analogous calculation gives rise to the following higher-order correction to equation \eqref{eq:zeta*_exp_simp}:
\begin{equation}
\label{eq:zeta*_exp_higher_order}
\xi_1 + \xi_2 - 1 + \frac{\zeta_c^2}{2}\xi_1\xi_2 + \frac{\zeta_c^4}{72} \xi_1\xi_2\big(\xi_1\xi_2 - 1\big) = O(\zeta_c^6).
\end{equation}
Although one may then solve for $\zeta_c$ given $\xi_1$ and $\xi_2$ (or, alternatively, determine the contours of $\zeta_c$) by truncating terms of $O(\zeta_c^6)$ in equation \eqref{eq:zeta*_exp_higher_order}, the resulting algebraic expressions yield little qualitative information. However, one may, in principle, use this reduced form as a reasonable initial guess for a numerical root-finding algorithm for determining the root of $F(\zeta)$, provided that $\zeta_c$ is not too large.
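To make these approximations concrete, the following minimal sketch (again assuming Python with NumPy and SciPy, with illustrative values of $\xi_1$ and $\xi_2$) compares the numerically computed root of $F$ with the leading-order approximation \eqref{eq:zeta*_quad} and the truncation of equation \eqref{eq:zeta*_exp_higher_order}:
\begin{verbatim}
# Sketch: numerical root of F versus the asymptotic approximations.
import numpy as np
from scipy.optimize import brentq

xi1, xi2 = 0.45, 0.45
F = lambda z: (np.sqrt(xi1 * np.tanh(xi1 * z) / np.tanh(z))
               + np.sqrt(xi2 * np.tanh(xi2 * z) / np.tanh(z)) - 1.0)
zeta_num = brentq(F, 1e-8, 100.0)

# Leading-order approximation (eq:zeta*_quad)
zeta_lo = np.sqrt(2.0 * (1.0 - xi1 - xi2) / (xi1 * xi2))

# Higher-order correction: truncate (eq:zeta*_exp_higher_order) and solve
# the resulting quadratic in zeta_c^2, keeping the smallest positive root.
p = xi1 * xi2
roots = np.roots([p * (p - 1.0) / 72.0, p / 2.0, xi1 + xi2 - 1.0])
zeta_ho = np.sqrt(min(r.real for r in roots
                      if abs(r.imag) < 1e-12 and r.real > 0))

print(zeta_num, zeta_lo, zeta_ho)
\end{verbatim}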
\subsection{Example cavities}
\label{sec:example_cavities}
Our investigation into the emergence of resonant triads has been focused, thus far, on finite-depth cylinders with arbitrary horizontal cross-section. However, it is convenient to understand how the results of Theorem \ref{thm:triads} influence the formation (or not) of resonant triads for some specific cross-sections, namely rectangular, circular, and annular cylinders.
\subsubsection{Rectangular cylinder}
\label{sec:rectangular_cylinder}
It is well known that resonant triads are impossible for plane gravity waves evolving across an unbounded horizontal domain of finite depth \cite{Phillips1960, Hasselmann1961}.
\footnote{Weak interactions are possible, however, in the shallow-water limit, $K_jh\rightarrow 0$, for which $\tanh(K_jh)$ in the dispersion relation \eqref{eq:dis_relation} is replaced by its leading-order approximation, $K_j h$ \cite{Phillips1960, Bryant1973, Miles1976}.}
We now utilise Theorem \ref{thm:triads} to demonstrate a similar result: resonant triads are impossible for gravity waves evolving within a rectangular cylinder of finite depth. Our result generalises the special case of a 1:2 resonance, for which the impossibility of internal resonance in a rectangular cylinder was demonstrated by Miles \cite{Miles1976}.
To proceed, we consider a rectangular cylinder with side lengths $L_x$ and $L_y$.
By orientating the Cartesian coordinate system, $\bm{x} = (x,y)$, so that the cylinder cross-section is defined by the region $0 < x < L_x$ and $0 < y < L_y$, the eigenmodes are of the form
\[ \Phi_{mn}(x,y) = \frac{1}{\mathscr{N}_{mn}}\cos(p_m x)\cos(q_n y), \]
where $\mathscr{N}_{mn} > 0$ is a normalisation constant.
Notably, the wavenumbers $p_m = m\pi/L_x$ and $q_n = n\pi/L_y$ are chosen so that the no-flux condition is satisfied (see equation \eqref{eq:Euler_no_flux_walls}).
For a triad determined by the non-negative integers $m_j$ and $n_j$ (for $j = 1, 2, 3$), the corresponding wavenumbers, $P_j = p_{m_j}$ and $Q_j = q_{n_j}$, must satisfy $P_1 + P_2 = P_3$ and $Q_1 + Q_2 = Q_3$ (under suitable reordering of the subscripts) in order for the eigenmode correlation condition \eqref{eq:corr_cond} to be satisfied. By defining the wave vector $\bm{k}_j = (P_j, Q_j)$, the conditions on $P_j$ and $Q_j$ simplify to the single requirement $\bm{k}_1 + \bm{k}_2 = \bm{k}_3$, where the triangle inequality supplies that $|\bm{k}_3| \leq |\bm{k}_1| + |\bm{k}_2|$. As the eigenvalues, $K_j^2$, of the negative Laplacian operator are related to the wave vectors via $K_j = |\bm{k}_j|$, we deduce that $K_3 \leq K_1 + K_2$. Owing to the violation of the left-hand bound in equation \eqref{eq:kineq}, we conclude that resonant triads \emph{cannot} exist in a rectangular cylinder of finite depth.
\subsubsection{Circular cylinder}
\label{sec:circular_cylinder}
We consider a circular cylinder of unit radius in dimensionless variables (i.e.\ the dimensional radius is equal to $a$; see \S \ref{sec:formulation}). For polar coordinates $\bm{x} = (r,\theta)$, it is well known that the corresponding (complex-valued) eigenmodes may be expressed in the form
\begin{equation}
\label{eq:Bessel_eig}
\Phi_{mn}(r,\theta) = \frac{1}{\mathscr{N}_{mn}}\mathrm{J}_m(k_{mn}r)\mathrm{e}^{\mathrm{i} m \theta},
\quad\mathrm{where}\quad
\mathscr{N}_{mn} = \big|\mathrm{J}_m(k_{mn})\big|\sqrt{1 - \frac{m^2}{k_{mn}^2}}
\end{equation}
is the normalisation factor and $m$ is the azimuthal wavenumber (an integer). Furthermore, the no-flux condition \eqref{eq:Euler_no_flux_walls} determines that the radial wavenumbers, denoted $k_{mn}$, satisfy $\mathrm{J}_m'(k_{mn}) = 0$, where $0 < k_{m1} < k_{m2} < \ldots$ (we exclude $k_{00} = 0$ from consideration; see \S \ref{sec:triads_existence}). Notably, the eigenvalues of the negative Laplacian operator are \emph{precisely} the squared wavenumbers, $k_{mn}^2$; consequently, the antinodes of each Bessel function play a pivotal role in determining the existence of resonant triads.
Akin to the rectangular cylinder, we find that the eigenmode correlation condition imparts an important restriction on the combination of eigenmodes that may resonate. For given $m_j$ and $n_j$ (for $j = 1,2,3$), we denote $K_j = k_{m_jn_j}$, $\Psi_j = \Phi_{m_j n_j}$ and $N_j = \mathscr{N}_{m_j n_j}$. Although the correlation condition given in equation \eqref{eq:corr_cond} is defined for real eigenmodes, a similar condition holds for complex-valued eigenmodes, namely $\iint_{\mathcal{D}} \Psi_1 \Psi_2 \Psi_3^*\,\mathrm{d}A \neq 0$.
By considering the quantity
\[\iint_{\mathcal{D}} \Psi_1 \Psi_2 \Psi_3^* \,\mathrm{d}A = \frac{1}{N_1 N_2 N_3}\bigg(\int_0^1 r\mathrm{J}_{m_1}(K_1 r) \mathrm{J}_{m_2}(K_2 r) \mathrm{J}_{m_3}(K_3 r)\,\mathrm{d}r\bigg)\bigg(\int_0^{2\pi} \mathrm{e}^{\mathrm{i}(m_1 + m_2 - m_3)\theta}\,\mathrm{d}\theta\bigg),\]
we deduce from the azimuthal integral that a necessary condition for the correlation integral to be nonzero is $m_1 + m_2 = m_3$ \cite{Michel2019}. This condition thus restricts the permissible combinations of angular wavenumbers in a manner similar to the restriction on the permissible planar wavenumbers for the case of a rectangular cylinder. Unlike rectangular cylinders, however, we demonstrate that resonant triads \emph{are} possible in a circular cylinder.
\begin{table}
\begin{center}
\begin{tabular}{ c | ccc | ccc | ccc | c | c}
No.\ & $m_1$ & $m_2$ & $m_3$ & $n_1$ & $n_2$ & $n_3$ & $K_1$ & $K_2$ & $K_3$ & $h_c$ & $\iint_{\mathcal{D}} \Psi_1\Psi_2\Psi_3^*\,\mathrm{d}A$ \\
\rowcolor{LightGrey}
1 & -3 & 3 & 0 & 1 & 1 & 3 & 4.201 & 4.201 & 10.173 & 0.14591 & 0.19061 \\
\rowcolor{LightGrey}
2 & -2 & 2 & 0 & 1 & 1 & 2 & 3.054 & 3.054 & 7.016 & 0.17030 & 0.46429 \\
\rowcolor{LightGrey}
3 & -2 & 2 & 0 & 1 & 1 & 3 & 3.054 & 3.054 & 10.173 & 0.39129 & -0.03050 \\
4 & -2 & 2 & 0 & 1 & 2 & 3 & 3.054 & 6.706 & 10.173 & 0.06331 & 0.68257 \\
\rowcolor{LightGrey}
5 & -1 & 1 & 0 & 1 & 1 & 1 & 1.841 & 1.841 & 3.832 & 0.15227 & 1.28795 \\
\rowcolor{LightGrey}
6 & -1 & 1 & 0 & 1 & 1 & 2 & 1.841 & 1.841 & 7.016 & 1.00970 & -0.02032 \\
\rowcolor{DarkGrey}
7 & -1 & 1 & 0 & 1 & 2 & 3 & 1.841 & 5.331 & 10.173 & 0.30197 & -0.00603 \\
\rowcolor{Grey}
8 &0 & 0 & 0 & 1 & 1 & 3 & 3.832 & 3.832 & 10.173 & 0.19814 & 0.03327 \\
9 & -2 & 3 & 1 & 1 & 1 & 3 & 3.054 & 4.201 & 8.536 & 0.15767 & 0.30704 \\
10 & -1 & 2 & 1 & 1 & 1 & 2 & 1.841 & 3.054 & 5.331 & 0.17266 & 0.85581 \\
11 & -1 & 2 & 1 & 1 & 1 & 3 & 1.841 & 3.054 & 8.536 & 0.60375 & -0.02595 \\
12 & -1 & 2 & 1 & 2 & 1 & 3 & 5.331 & 3.054 & 8.536 & 0.04664 & 0.99088 \\
13 & 0 & 1 & 1 & 1 & 1 & 3 & 3.832 & 1.841 & 8.536 & 0.38516 & 0.00542 \\
14 & -1 & 3 & 2 & 1 & 1 & 2 & 1.841 & 4.201 & 6.706 & 0.16313 & 0.64211 \\
15 & -1 & 3 & 2 & 1 & 1 & 3 & 1.841 & 4.201 & 9.969 & 0.48152 & -0.02717 \\
16 & -1 & 3 & 2 & 1 & 2 & 3 & 1.841 & 8.015 & 9.969 & 0.03928 & 1.08903 \\
17 & -1 & 3 & 2 & 2 & 1 & 3 & 5.331 & 4.201 & 9.969 & 0.06286 & 0.66930 \\
18 & 0 & 2 & 2 & 1 & 1 & 3 & 3.832 & 3.054 & 9.969 & 0.26387 & -0.00087\\
\rowcolor{Grey}
19 & 1 & 1 & 2 & 1 & 1 & 2 & 1.841 & 1.841 & 6.706 & 0.83138 & 0.02801 \\
20 & 1 & 1 & 2 & 1 & 2 & 3 & 1.841 & 5.331 & 9.969 & 0.28691 & 0.01818 \\
21 & 0 & 3 & 3 & 1 & 1 & 3 & 3.832 & 4.201 & 11.346 & 0.21395 & -0.00640 \\
22 & 0 & 3 & 3 & 2 & 1 & 3 & 7.016 & 4.201 & 11.346 & 0.02782 & 1.00669 \\
23 & 1 & 2 & 3 & 1 & 1 & 2 & 1.841 & 3.054 & 8.015 & 0.50595 & 0.03712 \\
24 & 1 & 2 & 3 & 1 & 2 & 3 & 1.841 & 6.706 & 11.346 & 0.23678 & 0.02590 \\
25 & 1 & 2 & 3 & 2 & 1 & 3 & 5.331 & 3.054 & 11.346 & 0.19839 & 0.01522
\end{tabular}
\caption{\label{Table_circle} Combinations of the angular wavenumbers, $m_j$, and radial mode indices, $n_j$, that form a resonant triad ($m_1 + m_2 = m_3$ and $\Omega_1 + \Omega_2 = \Omega_3$) at critical depth, $h_c$, in a circular cylinder of unit radius. For each triad, the corresponding wavenumbers, $K_j = k_{m_j n_j}$, satisfy \eqref{eq:kineq}, and the correlation condition, $\iint_{\mathcal{D}} \Psi_1\Psi_2\Psi_3^*\,\mathrm{d}A \neq 0$, is met. The list is restricted to resonant triads arising for $|m_j|, n_j \leq 3$, and we consider $m_1 \leq m_2$ and $m_3 \geq 0$ without loss of generality. We have omitted resonances that give rise to the same critical depth, but with the roles of modes 1 and 2 swapped. The triad numbers (left column) and shaded rows are referenced in the text.
}
\end{center}
\end{table}
Despite the apparent restrictions imposed by the Bessel antinodes, $K_j$, and the summation condition on the azimuthal wavenumbers, $m_j$, Theorem \ref{thm:triads} determines that a vast array of resonant triads may be excited for judicious choices of the fluid depth. In table \ref{Table_circle}, we list a small number of resonant triads and their corresponding critical depth, $h_c$, subject to the restrictions $|m_j| \leq 3$ and $n_j \leq 3$; for larger values of $|m_j|$ and $n_j$, the corresponding wave field becomes increasingly oscillatory, to the extent that the effects of surface tension and dissipation may become appreciable.
Moreover, even marginally relaxing the upper bounds on $|m_j|$ and $n_j$ vastly increases the number of resonant triads; indeed, the restriction $|m_j| \leq 4$ and $n_j \leq 4$ introduces 70 additional resonant triads relative to table \ref{Table_circle}. As the upper bounds for $|m_j|$ and $n_j$ are further increased, the typical difference between the various critical depths decreases. Finally, the correlation condition, $\iint_{\mathcal{D}} \Psi_1\Psi_2\Psi_3^*\,\mathrm{d}A \neq 0$, is satisfied for each triad; however, the integral is very close to zero in some cases (e.g.\ triad 18), corresponding to an elongation of the triad evolution time-scale (see \S \ref{sec:multiple_scales}).
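For reference, the critical depths reported in table \ref{Table_circle} may be reproduced with elementary numerical tools. A minimal sketch (assuming Python with SciPy; triad 5 is chosen purely for illustration) is:
\begin{verbatim}
# Sketch: reproduce the critical depth of triad 5 in the table, i.e.
# (m_1, m_2, m_3) = (-1, 1, 0) and (n_1, n_2, n_3) = (1, 1, 1).
import numpy as np
from scipy.optimize import brentq
from scipy.special import jnp_zeros

def radial_wavenumber(m, n):
    # n-th positive root of J_m'(k) = 0 (no-flux condition at the wall)
    return jnp_zeros(abs(m), n)[-1]

K1 = radial_wavenumber(-1, 1)  # 1.8412
K2 = radial_wavenumber(1, 1)   # 1.8412
K3 = radial_wavenumber(0, 1)   # 3.8317

# Omega_j is proportional to sqrt(K_j*tanh(K_j*h)); the common prefactor
# cancels in the resonance condition Omega_1 + Omega_2 = Omega_3.
omega = lambda K, h: np.sqrt(K * np.tanh(K * h))
h_c = brentq(lambda h: omega(K1, h) + omega(K2, h) - omega(K3, h),
             1e-6, 10.0)
print(h_c)  # approximately 0.15227
\end{verbatim}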
At this juncture, it is informative to assess how the triads listed in table \ref{Table_circle} relate to the resonances explored in prior investigations. First, triad 7 in table \ref{Table_circle} (dark grey row) was explored by Michel \cite{Michel2019} for a circular cylinder of radius 9.45 cm and an approximate fluid depth of 3 cm; it follows that the depth-to-radius ratio in Michel's experiment was approximately 0.317, close to the value of 0.30197 reported in table \ref{Table_circle}.
Furthermore, table \ref{Table_circle} (grey rows) incorporates two well-known examples of a 1:2 resonance, for which modes 1 and 2 coincide: (i) the critical depth $h_c = 0.83138$ (triad 19) corresponds to the second-harmonic resonance with the fundamental mode \cite{Miles1976, Miles1984a, Bryant1989, Yang2021}; (ii) the critical depth $h_c = 0.19814$ (triad 8) corresponds to a standing wave composed of two resonant axisymmetric modes \cite{Mack1962, Yang2021}.
Finally, triads 1, 2, 3, 5 and 6 (table \ref{Table_circle}, light grey rows) form an interesting class of resonant triad, for which an axisymmetric mode ($m_3 = 0$) interacts with two identical counter-propagating non-axisymmetric modes ($m_1= -m_2 \neq 0$ and $n_1 = n_2$). In fact, our investigation in \S \ref{sec:pump_modes} demonstrates that the axisymmetric mode is the so-called \emph{pump} mode, and may thus excite the non-axisymmetric modes, even when the initial energy in each non-axisymmetric mode is negligible. We draw an analogy between this novel class of resonant triad and the excitation of beach edge waves \cite{Guza1974} in \S \ref{sec:discussion}.
We conclude our exploration of resonant triads arising in a circular cylinder by remarking that the fluid depth may, in some cases, be judiciously chosen so as to excite multiple triads. In general, the condition on the angular frequencies, $\Omega_1 + \Omega_2 = \Omega_3$ (see equation \eqref{eq:triad_sum1}), cannot be satisfied for two distinct triads at the same fluid depth; however, nonlinear resonance may persist for both triads provided that each condition on the angular frequencies is \emph{approximately} satisfied \cite{Bretherton1964, McGoldrick1972, CraikBook}, at the cost of weak detuning (see \S \ref{sec:weak_detuning} for further details). Specifically, if triads 1 and 2 have critical depths $h_{c,1}$ and $h_{c,2}$, respectively, then both triads may potentially be excited when the fluid depth, $h$, satisfies $|h - h_{c,j}| = O(\epsilon)$ for $j = 1,2$ (where $0 < \epsilon \ll 1$ is the typical wave slope; see \S \ref{sec:formulation}), giving rise to the approximation $\Omega_1 + \Omega_2 - \Omega_3 = O(\epsilon)$ for each triad. For example, if $0 < h_{c,2} - h_{c,1} \ll 1$, then it may be sufficient to excite both triads at an intermediate depth, $h_{c,1} \leq h \leq h_{c,2}$. We note, however, that the excitation of multiple triads at a single fluid depth is not possible when the depth discrepancy, $|h - h_{c,j}|$, becomes too large (relative to the typical wave slope) for any of the triads under consideration.
To demonstrate the potential for the simultaneous excitation of two triads within a circular cylinder of finite depth, we consider two scenarios: (i) the excitation of two triads that share a common wave mode; and (ii) the excitation of two triads that do not share any common wave modes. Heuristically, case (ii) is more common than case (i) owing to the number of similar fluid depths in table \ref{Table_circle}; however, case (i) will likely generate a far richer set of dynamics owing to the nonlinear interaction between the two triads \cite{McEwan1972, CraikBook, Chow1996, Choi2021}. As an example of case (i), we consider triads 21 and 24 in table \ref{Table_circle}, with nearby critical depths $h_{c,1} = 0.21395$ and $h_{c,2} = 0.23678$, respectively. As mode $(m_3,n_3) = (3,3)$ is common to both triads, inter-triad resonance may arise at an intermediate depth, e.g.\ $h = 0.225$.
Finally, an example of case (ii) arises for triads 8 and 25 in table \ref{Table_circle}, with nearby critical depths $h_{c,1} = 0.19814$ and $h_{c,2} = 0.19839$. Neither of these triads shares a common wave mode, so one would \emph{not} expect the inter-triad energy exchange discussed in case (i). Nevertheless, one might anticipate a signature of these two triads to be visible in the surface evolution for an intermediate depth, e.g.\ $h = 0.19825$. The theoretical and numerical exploration of coupled triads in a circular cylinder will be the focus of future investigation.
\subsubsection{Annular cylinder}
\label{sec:annular_cylinder}
A natural variation upon a circular cylinder is an annulus of inner radius $r_0 \in (0,1)$ and outer radius 1. By varying $r_0$, the annulus approaches a circular cylinder as $r_0 \rightarrow 0^+$, and a quasi-one-dimensional periodic ring as $r_0 \rightarrow 1^-$. Notably, resonant triads are impossible for a one-dimensional periodic ring, as can be shown by modifying the arguments presented for the case of a rectangular cylinder (see \S \ref{sec:rectangular_cylinder}). Thus, one might anticipate that the existence of triads in an annular cylinder depends critically on the inner radius, $r_0$. Rather than enumerating some possible triads for given values of $r_0$, we instead track the corresponding critical depth, $h_c$, for the triads identified for a circular cylinder (see table \ref{Table_circle}) as $r_0$ is progressively increased from zero. Of particular interest is determining whether a given triad exists for all $r_0 < 1$, or whether there is some critical inner radius, $r_c$, beyond which the triad ceases to exist, with either $h_c \rightarrow 0$ or $h_c \rightarrow \infty$ as $r_0 \rightarrow r_c^-$.
The (complex-valued) eigenmodes in an annular domain are cylinder functions of the form
\begin{equation}
\label{eq:cylinder_functions}
\Phi_{mn}(r,\theta) = \frac{1}{\mathscr{N}_{mn}}\bigg[\mathrm{J}_m(k_{mn}r) \cos(\gamma_{mn}\pi) + \mathrm{Y}_m(k_{mn}r) \sin(\gamma_{mn}\pi) \bigg] \mathrm{e}^{\mathrm{i} m \theta},
\end{equation}
where $\mathscr{N}_{mn} > 0$ is a normalisation constant, $\mathrm{Y}_m$ is the Bessel function of the second kind with order $m$ (an integer), and $\gamma_{mn} \in [0,1]$ determines the weighting between the two Bessel functions. As shown in appendix \ref{app:annulus_wavenumbers}, the no-flux condition (see equation \eqref{eq:Euler_no_flux_walls}) on the inner and outer walls determines that the wavenumbers, $k_{mn}(r_0)$, satisfy the equation
\begin{equation}
\label{eq:annulus_no_flux}
\mathrm{J}_m'(k_{mn} r_0) \mathrm{Y}_m'(k_{mn}) - \mathrm{J}_m'(k_{mn}) \mathrm{Y}_m'(k_{mn} r_0) = 0.
\end{equation}
A formula for the corresponding value of $\gamma_{mn}$ is determined in appendix \ref{app:annulus_wavenumbers}. Once again, the wavenumbers, $k_{mn}$, are ordered so that $0 < k_{m1} < k_{m2} < \ldots$ (excluding $k_{00} = 0$) and satisfy $-\Delta \Phi_{mn} = k_{mn}^2\Phi_{mn}$. Three correlated wave modes may form a resonant triad (for a judicious choice of the fluid depth) provided that the corresponding wavenumbers, $K_j$, which depend on the channel width, $1 - r_0$, satisfy the bounds given in Theorem \ref{thm:triads}.
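The roots of equation \eqref{eq:annulus_no_flux} are likewise amenable to elementary numerical computation. A minimal sketch (assuming Python with SciPy; the values of $m$ and $r_0$ are illustrative) is:
\begin{verbatim}
# Sketch: radial wavenumbers k_mn(r0) in the annulus, computed as roots
# of the Bessel cross-product condition (eq:annulus_no_flux).
import numpy as np
from scipy.optimize import brentq
from scipy.special import jvp, yvp

def no_flux(k, m, r0):
    return jvp(m, k * r0) * yvp(m, k) - jvp(m, k) * yvp(m, k * r0)

def annulus_wavenumbers(m, r0, k_max=30.0, n_grid=3000):
    # Bracket sign changes on a fine grid, then refine each root.
    ks = np.linspace(1e-3, k_max, n_grid)
    vals = no_flux(ks, m, r0)
    return np.array([brentq(no_flux, a, b, args=(m, r0))
                     for a, b, fa, fb in
                     zip(ks[:-1], ks[1:], vals[:-1], vals[1:])
                     if fa * fb < 0])

print(annulus_wavenumbers(1, 0.3)[:3])  # first few k_1n at r0 = 0.3
\end{verbatim}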
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/Figure3.eps}
\caption{\label{fig:Annulus_example}
The existence and predominant characteristics of a triad in an annular cylinder with inner radius $r_0$ and outer radius 1. The triad bifurcates from the critical depth $h_c = 0.17266$ as $r_0 \rightarrow 0$ (the limiting case of a circular cylinder), with corresponding wavenumbers presented in table \ref{Table_circle} (see triad 10).
$(a)$ The critical depth, $h_c$ (blue curve), with $h_c \rightarrow \infty$ as $r_0 \rightarrow r_c$, where $r_c \approx 0.57$ (black line).
$(b)$ The corresponding wavenumbers, $K_j$, all of which remain finite for $r_0 < r_c$ (black line).
$(c)$ The normalised wavenumbers, $K_1/K_3$ and $K_2/K_3$, parametrised by increasing $r_0$ (blue arrow), with the limiting case $r_0 \rightarrow 0$ denoted by the white dot. The wavenumbers leave the triad existence region (see Theorem \ref{thm:triads}) via the left-hand boundary (black curve) as $r_0 \rightarrow r_c^-$.
}
\end{figure}
Bifurcating from the limiting case of a circular cylinder, we track the critical depth (when such a depth exists) of different triads as $r_0$ is progressively increased. The predominant behaviour is characterised by the example presented in figure \ref{fig:Annulus_example}, for which we consider the triad whose critical depth is $h_c = 0.17266$ as $r_0 \rightarrow 0^+$ (see triad 10 in table \ref{Table_circle}). Given that $h_c$ is fairly small in this limit, one might anticipate that the triad ceases to exist with $h_c \rightarrow 0$; somewhat surprisingly, however, the opposite scenario arises, with $h_c \rightarrow \infty$ as $r_0 \rightarrow r_c^- $ ($r_c \approx 0.57$ in this example). It follows, therefore, that the triad may persist for narrow channels only when the fluid is sufficiently deep. We note, however, that there exist (at least) two relatively rare transitions for increasing $r_0$, which we briefly describe as follows: (i) the triad ceases to exist when $h_c \rightarrow 0$ as $r_0 \rightarrow r_c^-$, which may arise when bifurcating from a sufficiently shallow circular cylinder (e.g.\ triad 22 in table \ref{Table_circle}); and (ii) the triad continues to exist for all $r_0 < 1$, with $h_c \rightarrow 0$ and $K_j \rightarrow \infty$ as $r_0 \rightarrow 1$, yet the normalised depth, $h_c K_3$, remains finite, and the normalised wavenumbers, $K_1/K_3$ and $K_2/K_3$, remain within the triad existence region (e.g.\ triad 8 in table \ref{Table_circle}). Owing to the appreciable influence of viscous effects for relatively shallow fluids, the physical relevance of these latter two scenarios is somewhat nebulous, however.
\section{The evolution of resonant triads}
\label{sec:triad_eqs}
Having established the existence of resonant triads, we now determine the long-time triad evolution, utilising the method of multiple scales. Ostensibly, the calculations necessary for determining the triad equations are a variation upon the pioneering work of McGoldrick \cite{McGoldrick1965, McGoldrick1970b, McGoldrick1970a} in the absence of surface tension. However, the confinement of the fluid to a cylinder imposes some additional considerations, the salient details of which we outline below. Finally, we note that an alternative approach to multiple scales is Whitham's technique of averaging the system's Lagrangian \cite{Whitham1965b, Whitham1965a, Whitham1967a, Whitham1967b}, which has the advantage of streamlining some algebraic calculations \cite{Simmons1969, Miles1976, Miles1984a}; nevertheless, multiple-scales analysis is sufficient for our purposes and allows for the possible inclusion of higher-order corrections in the asymptotic expansion \cite{McGoldrick1970a}.
In a manner similar to \S \ref{sec:triads_existence}, we consider three linear wave modes (with real-valued eigenfunctions), enumerated $n_1$, $n_2$ and $n_3$, where we denote
\[\Omega_j = \omega_{n_j}, \quad K_j = k_{n_j}, \quad L_j = \hat{\mathscr{L}}_{n_j}, \quad \mathrm{and}\quad \Psi_j(\bm{x}) = \Phi_{n_j}(\bm{x}) \quad \mathrm{for}\,\,\, j = 1,2,3. \]
In contrast to \S \ref{sec:triads_existence}, however, we now allow each (nonzero) angular frequency to be either negative or positive: the resonance condition on the angular frequencies is henceforth defined
\begin{equation}
\label{eq:omega_sum2}
\Omega_1 + \Omega_2 + \Omega_3 = 0.
\end{equation}
The modified requirement on the angular frequencies (equation \eqref{eq:omega_sum2}) is not restrictive on the possible triad combinations; one may recover equation \eqref{eq:triad_sum1} by mapping $\Omega_3 \mapsto -\Omega_3$, for example. The choice of summation condition on the angular frequencies is motivated by the cyclical symmetry of equation \eqref{eq:omega_sum2}, a property that will be inherited by the resultant amplitude equations \cite{Simmons1969}. As a consequence, one need only derive the amplitude equation for one of the wave modes; the amplitude equations for the remaining two wave modes follow by cyclic permutation of the subscripts $(1,2,3)$.
Before embarking on the multiple-scales analysis presented in \S \ref{sec:multiple_scales}, we remark upon two caveats.
First, we note that equation \eqref{eq:omega_sum2} corresponds to an exact resonance, for which the fluid depth, $h$, is chosen to be precisely equal to the critical depth, $h_c$. In practice, however, there may be a small discrepancy between $h$ and $h_c$, resulting in the sum of the angular frequencies being slightly offset from zero. When the frequency detuning is sufficiently weak, e.g.\ $\Omega_1 + \Omega_2 + \Omega_3 = O(\epsilon)$, one may modify the following asymptotic analysis to derive a similar set of amplitude equations (see \S \ref{sec:weak_detuning}).
Second, our analysis in \S \ref{sec:multiple_scales} is not valid when two of the wave modes coincide. This case corresponds to a 1:2 resonance, for which the corresponding evolution equations were derived by Miles \cite{Miles1976} using Whitham modulation theory (as summarised in \S \ref{sec:12_resonance}).
\subsection{Multiple-scales analysis}
\label{sec:multiple_scales}
In order to determine the evolution of each of the three dominant wave modes involved in an exact resonance, we utilise the method of multiple scales \cite{KevorkianBook, Strogatz}. Specifically, we seek a perturbation solution to the Benney-Luke equation \eqref{eq:BL_eq} of the form $u \sim u_0 + \epsilon u_1 + O(\epsilon^2)$. The leading-order terms in equation \eqref{eq:BL_eq} determine that $u_0$ satisfies $\partial_{tt}u_0 + \mathscr{L} u_0 = 0$; we \emph{choose} to consider a leading-order solution composed only of the three triad modes (all other modes are assumed to be smaller in magnitude and appear at higher order), giving rise to the leading-order form
\begin{equation}
\label{eq:triad_leading_order}
u_0(\bm{x} ,t, \tau) = \sum_{j = 1}^3 \Big[ A_j(\tau) \Psi_j(\bm{x}) \mathrm{e}^{-\mathrm{i} \Omega_j t} + \mathrm{c.c.}\Big].
\end{equation}
In equation \eqref{eq:triad_leading_order}, we have introduced the slow time-scale $\tau = \epsilon t$, which governs the evolution of each complex amplitude, $A_j$. Following the method of multiple scales, we treat $\tau$ and $t$ as independent time-scales, giving rise to the transformation of derivatives $\partial_t \mapsto \partial_t + \epsilon \partial_\tau$. Finally, $\mathrm{c.c.}$ denotes the complex conjugate of the preceding term, a contribution necessary for real $u_0$.
So as to determine coupled evolution equations for each complex amplitude, $A_j$, we consider terms of $O(\epsilon)$ in the Benney-Luke equation \eqref{eq:BL_eq}. By substituting the leading-order solution, $u_0$, into the nonlinear terms and applying the triad condition for the angular frequencies (equation \eqref{eq:omega_sum2}), we obtain the following problem for $u_1$:
\begin{equation}
\label{eq:triad_u1}
\partial_{tt}u_1 + \mathscr{L} u_1 = -\bigg[\sum_{j = 1}^3 f_j(\bm{x},\tau)\mathrm{e}^{-\mathrm{i} \Omega_j t} + \mathrm{c.c.}\bigg] + \mbox{nonresonant terms}.
\end{equation}
As we will see below, each of the functions $f_j(\bm{x},\tau)$ appearing on the right-hand side of equation \eqref{eq:triad_u1} will play a fundamental role when determining the amplitude equations; specifically,
\[
f_1 = -2\mathrm{i} \Omega_1 \sd{A_1}{\tau} \Psi_1 + \mathrm{i} A_2^* A_3^*\bigg( \Big[\Omega_2 \big(L_3^2 - K_3^2\big) + \Omega_3\big(L_2^2 - K_2^2\big) - 2\Omega_1 L_2L_3\Big] \Psi_2\Psi_3 - 2\Omega_1 \nabla\Psi_2 \cdot \nabla \Psi_3\bigg),
\]
where $f_2$ and $f_3$ follow upon cyclic permutation of the subscripts $(1,2,3)$. Finally, we note that the `nonresonant terms' in equation \eqref{eq:triad_u1} are of the general form $p(\bm{x},\tau)\mathrm{e}^{\mathrm{i} \varsigma t}$, where we assume that the angular frequency, $\varsigma$, is not equal (or close) to any of the angular frequencies, $\pm\omega_n$, associated with linear wave modes (see \S \ref{sec:triads_existence}).
We proceed by projecting equation \eqref{eq:triad_u1} onto each of the three wave modes, giving rise to differential equations of the form (for $j = 1,2,3$)
\begin{equation}
\label{eq:uhat}
\partial_{tt}\hat{u}_{1,j} + L_j \hat{u}_{1,j} = -\Big[\langle \Psi_j, f_j \rangle \mathrm{e}^{-\mathrm{i} \Omega_j t} + \mathrm{c.c.} \Big] + \mbox{nonresonant terms},
\end{equation}
where $\hat{u}_{1,j} = \langle \Psi_j, u_1 \rangle $ is the projection of $u_1$ onto the mode $\Psi_j$. By recalling that $L_j = \Omega_j^2$, we immediately see that the term in square brackets in equation \eqref{eq:uhat} is itself a solution to the linear operator $\partial_{tt} + \Omega_j^2$. It follows that the solution of equation \eqref{eq:uhat} comprises particular solutions that have temporal dependence $t\mathrm{e}^{\pm\mathrm{i} \Omega_j t}$, leading to a non-uniform asymptotic expansion when $\epsilon t = O(1)$. The resolution to this problem is achieved via the solubility condition
$\langle \Psi_j, f_j\rangle = 0, $
which suppresses the secular growth.
By applying the solubility condition $\langle \Psi_j, f_j\rangle = 0$ for $j = 1,2,3$, we conclude that the complex amplitude, $A_j(\tau)$, of each wave mode, $\Psi_j(\bm{x}) \mathrm{e}^{-\mathrm{i}\Omega_j t}$, evolves according to the triad system of canonical form \cite{Bretherton1964, CraikBook}
\begin{equation}
\label{eq:triad_equations}
\sd{A_1}{\tau} = \alpha_1 A_2^* A_3^*, \qquad
\sd{A_2}{\tau} = \alpha_2 A_1^* A_3^*, \qquad
\sd{A_3}{\tau} = \alpha_3 A_1^* A_2^*,
\end{equation}
where
\begin{equation}
\label{eq:alpha1_full}
\alpha_1 = \frac{1}{2\Omega_1}\bigg( \Big[\Omega_2 \big(L_3^2 - K_3^2\big) + \Omega_3\big(L_2^2 - K_2^2\big) - 2\Omega_1 L_2L_3\Big] \mathscr{C} - 2\Omega_1 \big\langle \Psi_1, \nabla\Psi_2 \cdot \nabla \Psi_3 \big\rangle\bigg),
\end{equation}
while $\alpha_2$ and $\alpha_3$ follow by cyclic permutation of the subscripts $(1,2,3)$.
Furthermore, the correlation integral, $\mathscr{C}$, is defined
\begin{equation}
\label{eq:corr_int_def}
\mathscr{C} = \frac{1}{S}\iint_\mathcal{D} \Psi_1\Psi_2\Psi_3\,\mathrm{d}A,
\end{equation}
where we recall that $S$ is the area of the cylinder cross-section (see \S \ref{sec:DtN_operator}).
As the triad equations \eqref{eq:triad_equations} are valid for $\tau = O(1)$ (or $t = O(1/\epsilon)$), their dynamics yield an informative view of the long-time evolution of the resonant triad.
In order to assess the influence of the triad coefficients on the triad evolution (see \S \ref{sec:simplify_coeff}), we first simplify the algebraic form given in equation \eqref{eq:alpha1_full}.
As shown by Miles \cite{Miles1976}, one may simplify the inner product $\langle \Psi_1, \nabla \Psi_2 \cdot \nabla \Psi_3 \rangle$ by repeated application of the divergence theorem and utilisation of the relationship $-\Delta \Psi_j = K_j^2\Psi_j$; it follows that
\begin{equation}
\label{eq:Miles_simp}
\big\langle \Psi_1, \nabla\Psi_2 \cdot \nabla \Psi_3 \big\rangle = \frac{1}{2}\Big(K_2^2 + K_3^2 - K_1^2\Big) \mathscr{C},
\end{equation}
where $\mathscr{C}$ is the correlation integral defined in equation \eqref{eq:corr_int_def}.
We then substitute equation \eqref{eq:Miles_simp} into equation \eqref{eq:alpha1_full} and simplify using the relation $\Omega_1 + \Omega_2 + \Omega_3 = 0$. After some algebra, we derive the reduced expression
\begin{equation}
\label{eq:alpha1_simp}
\alpha_1 = \frac{\mathscr{C}}{2\Omega_1}\bigg( \Omega_2 L_3^2 + \Omega_3L_2^2 - 2\Omega_1 L_2 L_3 + \sum_{l = 1}^3 \Omega_l K_l^2 \bigg),
\end{equation}
where $\alpha_2$ and $\alpha_3$ follow similarly.
Finally, we demonstrate in appendix \ref{app:reduce_triad_coeff} that the algebraic form of the triad coefficients may be further reduced to
\begin{equation}
\label{eq:triad_coeff_final}
\alpha_j = \frac{ \mathscr{C}\beta}{2\Omega_j} \quad\mathrm{for}\quad j = 1,2,3,
\end{equation}
where
\begin{equation}
\label{eq:beta_coeff}
\beta = \sum_{l= 1}^3 \Omega_lK_l^2 - \frac{1}{2}\Omega_1\Omega_2\Omega_3 \big(\Omega_1^2 + \Omega_2^2 + \Omega_3^2\big).
\end{equation}
Equations \eqref{eq:triad_equations}, \eqref{eq:corr_int_def}, \eqref{eq:triad_coeff_final} and \eqref{eq:beta_coeff} constitute the triad equations for resonant gravity waves confined to a cylinder of finite depth. Although the triad equations are of canonical form \cite{Bretherton1964}, the novelty of our investigation is the computation of the coefficients, $\alpha_j$, whose algebraic form is specific to our system.
\subsection{Properties of the triad coefficients}
\label{sec:simplify_coeff}
The simplified form of the coefficients, $\alpha_j$ (equation \eqref{eq:triad_coeff_final}), allows for some important theoretical observations that were obfuscated by the more complicated expressions for $\alpha_j$ given in equations \eqref{eq:alpha1_full} and \eqref{eq:alpha1_simp}.
In particular, as exactly two of the angular frequencies, $\Omega_j$, have the same sign, we deduce from equation \eqref{eq:triad_coeff_final} that the two corresponding coefficients, $\alpha_j$, also have the same sign, with the third coefficient having the opposite sign. By utilising the well-known results pertaining to the canonical triad equations, we conclude that all solutions to the triad equations \eqref{eq:triad_equations} are periodic in time, with solutions expressible in terms of elliptic functions \cite{Ball1964, Bretherton1964, Simmons1969, CraikBook}. Typically, these solutions result in an exchange of energy between the comprising modes, although there is a class of periodic solution that, perhaps counter-intuitively, results in zero energy exchange for all time \cite{CaseChiu1977, ChabaneChoi2019}.
Moreover, it is readily verified that the leading-order energy density, $\mathscr{E}_1 + \mathscr{E}_2 + \mathscr{E}_3$, is conserved, where $\mathscr{E}_j = \Omega_j^2 |A_j|^2$, consistent with the Hamiltonian structure of the Euler equations \cite{Bretherton1964, CraikBook}.
The reader is directed to the work of Craik \cite{CraikBook} for a more detailed account of the various properties of the canonical triad equations.
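These properties are readily corroborated numerically. A minimal sketch (assuming Python with SciPy; the values of $\Omega_j$ and of the product $\mathscr{C}\beta$ are illustrative, rather than those of any specific cavity) is:
\begin{verbatim}
# Sketch: integrate the triad equations (eq:triad_equations) with
# alpha_j = C*beta/(2*Omega_j), and verify that the leading-order energy
# density E_1 + E_2 + E_3 is conserved, where E_j = Omega_j^2 |A_j|^2.
import numpy as np
from scipy.integrate import solve_ivp

Omega = np.array([1.0, 1.5, -2.5])  # satisfies Omega_1+Omega_2+Omega_3 = 0
alpha = -0.5 / (2.0 * Omega)        # illustrative product C*beta = -0.5

def rhs(tau, A):
    A1, A2, A3 = A
    return [alpha[0] * np.conj(A2 * A3),
            alpha[1] * np.conj(A1 * A3),
            alpha[2] * np.conj(A1 * A2)]

A0 = np.array([0.1, 0.2j, 1.0], dtype=complex)
sol = solve_ivp(rhs, (0.0, 200.0), A0, rtol=1e-10, atol=1e-12)

E = (Omega[:, None] ** 2 * np.abs(sol.y) ** 2).sum(axis=0)
print(E.max() - E.min())  # conserved to within the integration tolerance
\end{verbatim}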
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{figures/Figure4.png}
\caption{\label{fig:beta_coefficient} Contours of $\beta/K_3^2$ (see equation \eqref{eq:beta_coeff}) for the case $\Omega_1, \Omega_2 > 0$ and $\Omega_3 < 0$ (with $\Omega_1 + \Omega_2 + \Omega_3 = 0$), for which $\beta < 0$ (see equation \eqref{eq:beta_bound}).
}
\end{figure}
Of particular relevance to the evolution of the triad is the quantity $\beta$ (see equation \eqref{eq:beta_coeff}), which, together with $\mathscr{C}$, determines the time scale over which energy exchange arises.
In particular, we present the form of $\beta$ in figure \ref{fig:beta_coefficient} for the case $\Omega_1,\Omega_2 > 0$ and $\Omega_3 < 0$. As we will demonstrate below, $\beta < 0$ in this case;
in general, the sign of $\beta$ is the same as the sign of the largest (in magnitude) angular frequency, $\Omega_j$.
Notably, $|\beta|$ decreases sharply towards zero as $K_1 + K_2 \rightarrow K_3$, corresponding to the limit $h_c \rightarrow 0$.
Similarly, $|\beta|$ approaches zero in the limiting cases $K_1 \ll K_3$ or $K_2 \ll K_3$, corresponding to one weakly oscillatory wave mode interacting with two highly oscillatory wave modes.
Away from these limiting cases, however, $|\beta|$ depends only weakly on the wavenumbers, $K_j$, suggesting that the correlation integral, $\mathscr{C}$, predominantly controls the time-scale of the triad evolution.
Finally, we observe that $\beta$ is symmetric about the line $K_1 = K_2$, consistent with the invariance of equation \eqref{eq:beta_coeff} under the mapping $K_1\leftrightarrow K_2$ (and hence, $\Omega_1 \leftrightarrow \Omega_2$).
We conclude this section by proving that $\beta < 0$ in the case $\Omega_1, \Omega_2 > 0$ and $\Omega_3 < 0$.
By comparing the forms of equations \eqref{eq:beta_coeff} and \eqref{eq:alpha1_simp}, and then permuting the subscripts $(1,2,3) \mapsto (3,1,2)$, we first note that $\beta$ may be equivalently expressed as
\[
\beta = \Omega_1 L_2^2 + \Omega_2L_1^2 - 2\Omega_3L_1L_2 + \sum_{l = 1}^3\Omega_l K_l^2,
\]
or
\[
\beta = \Omega_1 (L_2^2 + K_1^2) + \Omega_2(L_1^2 + K_2^2) + |\Omega_3| ( 2L_1 L_2 - K_3^2).
\]
By bounding $L_j = K_j \tanh(K_j h_c) < K_j$ for $0 < h_c < \infty$ and utilising the relation $\Omega_1 + \Omega_2 = |\Omega_3|$, we obtain
\begin{equation}
\label{eq:beta_bound}
\beta < |\Omega_3|\big(K_1^2 + K_2^2 + 2K_1 K_2 - K_3^2\big) = |\Omega_3|\big((K_1 + K_2)^2 - K_3^2\big).
\end{equation}
As resonant triads exist only when $K_1 + K_2 < K_3$ (see Theorem \ref{thm:triads}), we conclude that $\beta < 0$ in this case.
\subsection{Summary}
\label{sec:multiple_scales_summary}
To summarise our theoretical developments, the velocity potential, $u$, at the fluid rest level ($z = 0$) evolves according to
\begin{equation}
\label{eq:u_triads_exp}
u(\bm{x},t) \sim \sum_{j = 1}^3 \Big[ A_j(\tau) \Psi_j(\bm{x}) \mathrm{e}^{-\mathrm{i} \Omega_j t} + \mathrm{c.c.}\Big] + O(\epsilon),
\end{equation}
where the complex amplitudes, $A_j(\tau)$, evolve over the slow time-scale, $\tau = \epsilon t$, according to the triad equations \eqref{eq:triad_equations}. In particular, the triad coefficients, $\alpha_j$ (see equation \eqref{eq:triad_coeff_final}), are defined in terms of the correlation integral, $\mathscr{C}$ (equation \eqref{eq:corr_int_def}), and the coefficient $\beta$ (equation \eqref{eq:beta_coeff}). Notably, we assume that $\mathscr{C}$ is nonzero; if this condition were violated then all three of the triad coefficients, $\alpha_j$, would be equal to zero, giving rise to non-interacting wave modes at leading order (contradicting the notion of a triad). Indeed, the condition $\mathscr{C} \neq 0$ is identical to the correlation condition detailed in equation \eqref{eq:corr_cond}, the origins of which we have now justified. Finally, the evolution of the free surface, $\eta$, may be recovered by recalling that $\eta = -u_t + O(\epsilon)$: we conclude that $\eta(\bm{x},t)$ has a similar leading-order form to $u(\bm{x},t)$, but each complex amplitude, $A_j(\tau)$, in \eqref{eq:u_triads_exp} is replaced by $\mathrm{i} \Omega_j A_j(\tau)$ (see equation \eqref{eq:eta_exp} below).
We briefly contrast our investigation of triad interaction with the early-time calculation of Michel \cite{Michel2019}, who characterised the initial linear growth of a child mode induced by the nonlinear interaction of two parent modes (where all three modes comprise the triad).
If modes 1 and 2 are the parent modes and mode 3 is the child mode, then the initial linear growth may be deduced directly from triad equations \eqref{eq:triad_equations} in the limit $|A_3| \ll |A_1| \sim |A_2|$. Specifically, the initial variation of $A_1$ and $A_2$ is slow relative to that of $A_3$, which has the approximate early-time form $A_3(\tau) \approx \alpha_3 C_1^* C_2^* \tau + C_3$, where $C_j = A_j(0)$. Notably, the linear growth rate of the child mode depends on the corresponding triad coefficient, $\alpha_3$, and the product of the initial amplitudes of the two parent modes.
However, our result for circular cylinders differs from that of Michel; we believe that the author neglected some important nonlinear contributions (compare Michel's equation (A2) to equations (2.4) and (2.4a) of Longuet-Higgins \cite{LonguetHiggins1962}). As Michel's experiment verified the scaling of the interaction only up to a proportionality constant, this discrepancy was not captured.
\subsubsection{The influence of weak detuning}
\label{sec:weak_detuning}
As discussed earlier in \S \ref{sec:triad_eqs}, the analysis in \S\S \ref{sec:multiple_scales} and \ref{sec:simplify_coeff} does not account for weak detuning of the angular frequencies, as might arise when the fluid depth, $h$, differs slightly from the critical depth, $h_c$. We now briefly consider the case of weak detuning, for which equation \eqref{eq:omega_sum2} is replaced by the condition $\Omega_1 + \Omega_2 + \Omega_3 = \epsilon \sigma$ (see \S \ref{sec:circular_cylinder}); here $\epsilon$ is the small parameter representative of the typical wave slope (see \S \ref{sec:formulation}) and $\sigma = O(1)$ determines the extent of the detuning \cite{Bretherton1964, McGoldrick1972}. By following a very similar multiple-scales procedure to the case $\sigma = 0$, we obtain amplitude equations that are now augmented by a time-dependent modulation. Specifically, each complex amplitude now evolves according to
\[\sd{A_1}{\tau} = \alpha_1 A_2^* A_3^* \mathrm{e}^{\mathrm{i} \sigma \tau}, \quad
\sd{A_2}{\tau} = \alpha_2 A_1^* A_3^* \mathrm{e}^{\mathrm{i} \sigma \tau}, \quad
\sd{A_3}{\tau} = \alpha_3 A_1^* A_2^* \mathrm{e}^{\mathrm{i} \sigma \tau}, \]
where each coefficient, $\alpha_j$, is defined in equation \eqref{eq:triad_coeff_final}. Although detuning yields non-autonomous amplitude equations, autonomous equations may be derived by mapping $A_j(\tau) \mapsto A_j(\tau) \mathrm{e}^{\mathrm{i} \sigma\tau/3}$ for all $j = 1,2,3$ \cite{CraikBook}. Finally, we note that the energy, $\mathscr{E}_1 + \mathscr{E}_2 + \mathscr{E}_3$, is not exactly conserved when considering the effects of detuning; instead, the energy slowly oscillates about a constant value \cite{CraikBook}.
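Indeed, writing $A_j(\tau) = B_j(\tau)\mathrm{e}^{\mathrm{i}\sigma\tau/3}$ transforms the detuned system into the autonomous form
\[
\sd{B_1}{\tau} = \alpha_1 B_2^* B_3^* - \frac{\mathrm{i}\sigma}{3}B_1,
\]
with the equations for $B_2$ and $B_3$ following by cyclic permutation of the subscripts $(1,2,3)$.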
\subsubsection{The case of a 1:2 resonance}
\label{sec:12_resonance}
A 1:2 resonance is a resonant triad for which two modes comprising the triad coincide. For this case, we define two angular frequencies, $\Omega_1$ and $\Omega_2$, so that $\Omega_2 = 2\Omega_1$ \cite{Miles1976}, where the connection to resonant triads is clear when writing $\Omega_1 + \Omega_1 = \Omega_2$. By following a very similar multiple-scales procedure to that outlined in \S \ref{sec:multiple_scales}, we obtain
\[ u(\bm{x},t) \sim \sum_{j = 1}^2 \Big[A_j(\tau) \Psi_j(\bm{x}) \mathrm{e}^{-\mathrm{i} \Omega_j t} + \mathrm{c.c.}\Big] + O(\epsilon), \]
where
\begin{equation}
\label{eq:Wilton_ripples_amplitude_eqs}
\sd{A_1}{\tau} = -\gamma A_1^* A_2
\quad \mathrm{and}\quad
\sd{A_2}{\tau} = \frac{\gamma}{4} A_1^2.
\end{equation}
In particular, the evolution of the amplitude equations \eqref{eq:Wilton_ripples_amplitude_eqs} depends on the coefficient $\gamma = \mathscr{C}\big(K_2^2 - K_1^2 - 3\Omega_1^4\big)$, where $\mathscr{C} = \frac{1}{S}\iint_{\mathcal{D}} \Psi_1^2 \Psi_2 \,\mathrm{d}A$ is the correlation integral. Indeed, the amplitude equations \eqref{eq:Wilton_ripples_amplitude_eqs} and coefficient, $\gamma$, are consistent with the results of Miles \cite{Miles1976} when expressing the evolution of each complex amplitude, $A_j$, in polar form (with appropriate rescaling). Finally, we note that a weak detuning (see \S \ref{sec:weak_detuning}) may also be incorporated within the amplitude equations \eqref{eq:Wilton_ripples_amplitude_eqs}, thereby accounting for a slight mismatch between the fluid depth, $h$, and the corresponding critical depth, $h_c$ \cite{Miles1976}.
Of particular interest is the evolution of weakly nonlinear waves steadily propagating around a circular cylinder of unit radius, focusing on the case where the fluid depth is precisely equal to the critical depth of a 1:2 resonance \cite{Yang2021}.
For the complex-valued eigenmodes defined in equation \eqref{eq:Bessel_eig}, the correlation condition, $\iint_{\mathcal{D}} \Psi_1^2 \Psi_2^* \,\mathrm{d}A \neq 0$, determines that the angular wavenumbers satisfy $m_2 = 2m_1$ \cite{ChossatDias1995, Yang2021}. By expressing the complex wave amplitudes in polar form, $A_j(\tau) = a_j(\tau) \mathrm{e}^{\mathrm{i} \theta_j(\tau)}$ (for $j = 1,2$), equation \eqref{eq:Wilton_ripples_amplitude_eqs} may be recast as \cite{Miles1976}
\[ \sd{a_1}{\tau} = -\gamma a_1 a_2 \cos\Theta, \quad
\sd{a_2}{\tau} = \frac{\gamma}{4}a_1^2 \cos\Theta, \quad \sd{\Theta}{\tau} = 2\gamma a_2 \bigg[1 - \frac{a_1^2}{8a_2^2}\bigg] \sin\Theta, \]
where $\Theta(\tau) = \theta_2(\tau) - 2\theta_1(\tau)$ is the time-dependent phase shift. Steadily propagating waves correspond to time-independent solutions for $a_1$, $a_2$ (both nonzero) and $\Theta$, from which we deduce that $\cos\Theta = 0$ and $a_1/a_2 = 2\sqrt{2}$. Indeed, it is remarkable that the amplitude ratio of the two dominant (normalised) wave modes is independent of the angular wavenumbers, $m_j$, the radial wavenumbers, $K_j$, and the corresponding angular frequencies, $\Omega_j$ (see \S \ref{sec:circular_cylinder} for details). Furthermore, one may readily determine the relationship between the angular velocity of the steady wave rotation and the corresponding wave amplitude, which may then be compared to the numerical solution of the full Euler equations \cite{Yang2021}. This comparison, as well as a comparison to steadily propagating waves computed from various truncations of the Euler equations, will be the subject of future investigation.
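The steadily propagating state may also be verified directly from the amplitude equations \eqref{eq:Wilton_ripples_amplitude_eqs}. A minimal sketch (assuming Python with SciPy; the values of $\gamma$ and $a_2$ are illustrative) is:
\begin{verbatim}
# Sketch: verify that cos(Theta) = 0 with a1/a2 = 2*sqrt(2) is a steady
# state of the 1:2 resonance equations (eq:Wilton_ripples_amplitude_eqs);
# the amplitudes remain constant whilst the phases rotate steadily.
import numpy as np
from scipy.integrate import solve_ivp

gamma, a2 = 1.0, 0.1                       # illustrative values
A0 = np.array([2.0 * np.sqrt(2.0) * a2,    # a1 = 2*sqrt(2)*a2, theta_1 = 0
               a2 * np.exp(0.5j * np.pi)], # theta_2 = pi/2, so Theta = pi/2
              dtype=complex)

rhs = lambda tau, A: [-gamma * np.conj(A[0]) * A[1],
                      0.25 * gamma * A[0] ** 2]
sol = solve_ivp(rhs, (0.0, 100.0), A0, rtol=1e-10, atol=1e-12)
print(np.abs(sol.y[:, -1]) / np.abs(A0))   # both ratios remain equal to 1
\end{verbatim}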
\section{The excitation of resonant triads}
\label{sec:excitation}
Having established the existence and evolution of resonant triads, we now focus on the excitation of a particular triad via external forcing. So as to motivate the method of excitation, we first recall (\S \ref{sec:pump_modes}) the well-known result that one mode in the triad may, or may not, excite the other two modes \cite{Davis1967, Hasselmann1967, Simmons1969}; in the case of excitation, the initial mode is referred to as the \emph{pump} mode \cite{CraikBook}. We will then utilise the criterion of the pump mode to excite all three modes in the triad via a pulsating pressure source (\S \ref{sec:pressure_source}). Throughout this section, we continue with the convention that the triad angular frequencies satisfy $\Omega_1 + \Omega_2 + \Omega_3 = 0$, as set forth in \S \ref{sec:triad_eqs}.
\subsection{Excitation via the triad pump mode}
\label{sec:pump_modes}
To first identify the triad pump mode and then characterise the resultant excitation, we consider the case for which $A_3$, say, is much larger in magnitude than the other two mode amplitudes, so $|A_1|, |A_2| \ll |A_3|$ \cite{Davis1967, Hasselmann1967, Simmons1969}. By linearising the triad equations \eqref{eq:triad_equations}, we obtain
\begin{equation}
\label{eq:pump_approx}
\sd{A_1}{\tau} = \alpha_1 A_2^* A_3^*, \qquad
\sd{A_2}{\tau} = \alpha_2 A_1^* A_3^*, \qquad
\sd{A_3}{\tau} = 0,
\end{equation}
from which we immediately conclude that $A_3$ is constant (whilst the linearisation assumption holds); we denote $A_3(\tau) = C$ for some given complex number $C$. By considering second derivatives of $A_1$ and $A_2$, we deduce the linearised evolution equations \cite{CraikBook}
\[ \sd{{}^2A_1}{\tau^2} = \alpha_1\alpha_2 |C|^2 A_1
\quad\mathrm{and}\quad
\sd{{}^2A_2}{\tau^2} = \alpha_1\alpha_2 |C|^2 A_2,
\]
where $\alpha_1\alpha_2 = \mathscr{C}^2\beta^2/(4\Omega_1\Omega_2)$ (see equation \eqref{eq:triad_coeff_final}). We conclude that $A_1(\tau)$ and $A_2(\tau)$ grow exponentially in time (whilst the linearisation approximation holds) when $\Omega_1 \Omega_2 > 0$, and exhibit sinusoidal oscillations when $\Omega_1\Omega_2 < 0$ \cite{Davis1967, Hasselmann1967, CraikBook}. Thus, mode 3 may excite modes 1 and 2 when $\Omega_1$ and $\Omega_2$ have the same sign (and likewise for other mode permutations). As one angular frequency must have a different sign from the other two (so as to satisfy $\Omega_1 + \Omega_2 + \Omega_3 = 0$), we conclude that the mode whose angular frequency is largest in magnitude (i.e.\ differs in sign) is the triad pump mode \cite{CraikBook}. Equivalently, the pump mode is the mode with largest wavenumber, $K_j$.
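We further note that, in the case of excitation ($\Omega_1\Omega_2 > 0$), the linearised growth rate follows directly from the relation $\alpha_1\alpha_2 = \mathscr{C}^2\beta^2/(4\Omega_1\Omega_2)$: the two small-amplitude modes grow like $\mathrm{e}^{s\tau}$, where
\[
s = |C|\sqrt{\alpha_1\alpha_2} = \frac{|\mathscr{C}\beta||C|}{2\sqrt{\Omega_1\Omega_2}}.
\]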
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/Figure5.png}
\caption{\label{fig:Triad_pump_visualisation}
Excitation of a triad via its pump mode for the case of a circular cylinder. We consider triad 24 in table \ref{Table_circle}, but with $m_3 \mapsto -m_3$.
We choose $\Omega_1, \Omega_2 > 0$ and $\Omega_3 < 0$, so that mode 3 is the pump mode.
$(a)$ Evolution of the free-surface, $\eta \sim -u_t$, over the slow time-scale, $\tau = \epsilon t$, with $\epsilon = 10^{-3}$.
$(b)$ The evolution of the wave amplitudes, $|A_j|$, according to the triad equations (equation \eqref{eq:triad_equations}, solid curves) and the pump-mode approximation (equation \eqref{eq:pump_approx}, dashed-dotted curves). Insets: modes 1 (blue), 2 (red) and 3 (gold) at $\tau = 0$; all three modes rotate counter-clockwise.
The simulations were initialised from $A_1(0) = 0.01$ and $A_2(0) = 0.01\mathrm{i}$, where $A_3(0)$ was chosen to be the positive real number satisfying $\mathscr{E}_1 + \mathscr{E}_2 + \mathscr{E}_3 = 1$, with $\mathscr{E}_j = \Omega_j^2 |A_j|^2$ (see \S \ref{sec:simplify_coeff}).
}
\end{figure}
To visualise the influence of the pump mode on the resultant free-surface pattern, we present the solution of the triad equations \eqref{eq:triad_equations} and the corresponding pump-mode approximation (equation \eqref{eq:pump_approx}) in figure \ref{fig:Triad_pump_visualisation}. By recalling that the free surface satisfies $\eta = -u_t + O(\epsilon)$, we first deduce that
\begin{equation}
\label{eq:eta_exp}
\eta(\bm{x},t) \sim \sum_{j = 1}^3 \Big[\mathrm{i} \Omega_j A_j(\tau) \Psi_j(\bm{x}) \mathrm{e}^{-\mathrm{i} \Omega_j t} + \mathrm{c.c.}\Big] + O(\epsilon).
\end{equation}
For the case of a circular cylinder, we utilise the complex-valued eigenmodes defined in equation \eqref{eq:Bessel_eig}, corresponding to the superposition of steadily propagating waves for $m_j \neq 0$ (the rotation direction depends on the sign of $\Omega_j /m_j$). Upon initialising the system so that the energy is primarily within the pump mode (mode 3), modes 1 and 2 are gradually excited due to nonlinear interaction, with exponential growth evident for $\tau \lesssim 10$. As time further increases, the dynamics depart from the pump-mode approximation: the energy in the pump mode appreciably decreases, whilst the energy in modes 1 and 2 saturates. The free surface varies qualitatively during this evolution, with an appreciable change in pattern structure visible by $\tau = 24$ (primarily a superposition of modes 1 and 2). Notably, the system evolution is periodic, which becomes apparent over longer time scales.
\subsection{Excitation via an applied pressure source}
\label{sec:pressure_source}
Based on the ideas of the previous section, we consider a methodology for exciting the pump mode of a triad, which will subsequently excite the remaining two modes (provided that the initial disturbance of each of the remaining modes is nonzero).
Notably, several methods for exciting internal resonances have been considered in prior investigations, primarily focusing on imposed motion of the fluid vessel via horizontal \cite{Miles1976, Miles1984c} or vertical vibration \cite{Miles1976, Miles1984b, MilesHenderson1990, HendersonMiles1991}. Furthermore, one may, in principle, utilise sinusoidal paddles or plungers to excite a particular triad's pump mode for a given geometry (similar wave makers are used in rectangular wave tanks \cite{McGoldrick1970b, HendersonHammack1987}). However, for large-scale fluid tanks, imposed motion of the vessel may be impractical (if the tank were set in a concrete base, for example), and it may be challenging to determine the correct paddle motion necessary to excite a chosen pump mode for geometrically complex cylinders. We choose, therefore, to consider a slightly different approach: we instead excite the pump mode via a pulsating pressure source located just above the free surface (e.g.\ an air blower).
In order to incorporate a pressure source within our mathematical framework, we first reformulate the dimensionless dynamic boundary condition (equation \eqref{eq:Euler_DBC}) as
\[ \phi_t + \eta + \frac{\epsilon}{2}\Big(|\nabla \phi|^2 + \phi_z^2\Big) + \epsilon P(\bm{x}, t) = 0 \quad \mbox{for}\quad \bm{x} \in \mathcal{D}, \quad z = \epsilon \eta, \]
where the dimensional pressure is $\epsilon^2 a \rho g P$ for fluid density $\rho$ ($P = 0$ corresponds to atmospheric pressure). The pressure source is chosen to be small in magnitude so that the resultant wave excitation arises over the slow time-scale, $\tau = \epsilon t$, and may thus be saturated by weakly nonlinear effects. By modifying the developments outlined in \S \ref{sec:BL_eq}, we derive the forced Benney-Luke equation
\begin{equation}
\label{eq:BL_eq_pressure}
u_{tt} + \mathscr{L} u + \epsilon\bigg(
u_t\big(\mathscr{L}^2 + \Delta\big)u + \pd{}{t}\Big[(\mathscr{L} u)^2 + |\nabla u|^2\Big] + \partial_t P\bigg) = O(\epsilon^2) \quad\mathrm{for}\quad \bm{x} \in \mathcal{D},
\end{equation}
which will be the starting point for the asymptotic analysis.
Before proceeding further, we first describe two forms of the pressure source relevant to our investigation.
For a stationary pressure source oscillating periodically over the fast time-scale, $t$, we express $P(\bm{x},t) = f(\tau) s(\bm{x}) \mathrm{e}^{-\mathrm{i} \Omega_p t} + \mathrm{c.c.}$, where $s(\bm{x})$ is a fixed spatial profile (generally spanning the cavity), $f(\tau)$ accounts for a slow modulation in the magnitude of the pressure, and $\Omega_p$ is the pulsation angular frequency. We choose $\Omega_p$ to be close to the angular frequency of the pump mode, which, without loss of generality, we assume to be mode 3 (i.e.\ $\Omega_3$ has the opposite sign from $\Omega_1$ and $\Omega_2$). We denote, therefore, $\Omega_p = \Omega_3 + \epsilon \mu$, where $\mu = O(1)$ determines the extent of the frequency mismatch.
For a pressure source orbiting the centre of a circular cylinder at a constant angular velocity, we instead posit that $P$ has the form $P(r,\theta, t) = f(\tau) s(r, \theta - \Omega_p t)$, where $\Omega_p = (\Omega_3 + \epsilon \mu)/m_3$ is the angular velocity of the pressure source (assuming that the pump mode is non-axisymmetric, i.e.\ $m_3 \neq 0$).
For both stationary and orbiting pressure sources, we now follow a similar multiple-scales procedure to that outlined in \S \ref{sec:multiple_scales}, starting from the forced Benney-Luke equation \eqref{eq:BL_eq_pressure}.
So as to discount the possibility that the pressure source excites more than one mode in the triad, we assume that neither $|\Omega_1|$ nor $|\Omega_2|$ is close to $|\Omega_3|$. Furthermore, we incorporate a weak detuning in the triad angular frequencies, denoting $\Omega_1 + \Omega_2 + \Omega_3 = \epsilon \sigma$ (see \S \ref{sec:weak_detuning}).
It follows that each complex amplitude, $A_j(\tau)$, evolves according to
\begin{equation}
\label{eq:triad_forced}
\sd{A_1}{\tau} = \alpha_1 A_2^* A_3^* \mathrm{e}^{\mathrm{i} \sigma \tau}, \quad
\sd{A_2}{\tau} = \alpha_2 A_1^* A_3^* \mathrm{e}^{\mathrm{i} \sigma \tau}, \quad
\sd{A_3}{\tau} = \alpha_3 A_1^* A_2^* \mathrm{e}^{\mathrm{i} \sigma \tau} - \Omega_3s_3f(\tau)\mathrm{e}^{-\mathrm{i} \mu \tau},
\end{equation}
where the coefficients, $\alpha_j$, are defined in equation \eqref{eq:triad_coeff_final}. Notably, the pump mode may only be excited provided that the corresponding eigenmode is non-orthogonal to the pressure source, corresponding to a nonzero projection, i.e.\ $s_3\neq 0$, where $s_3 = \langle \Psi_3, s\rangle$. Similar equations describing the evolution of forced resonant triads have been explored by McEwan \emph{et al.}\ \cite{McEwan1972} (with the inclusion of linear damping) and Raupp \& Silva Dias \cite{Raupp2009}.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{figures/Figure6.eps}
\caption{\label{fig:Forced_triad}
Evolution of the forced triad equations \eqref{eq:triad_forced} for $\sigma = \mu = 0$ and constant $f$. We consider the same triad as figure \ref{fig:Triad_pump_visualisation}, with $s_3 f = 0.1$. In all three panels, $A_1(0) = 0.02\mathrm{i}$ and $A_2(0) = 0.01$.
For $A_3(0) = 0.01$, we observe $(a)$ the initial excitation of the triad and $(b)$ the resultant periodic dynamics (the initial growth is highlighted within the grey box).
$(c)$ For $A_3(0) = 0.01\mathrm{i}$, the triad evolution is chaotic.
}
\end{figure}
In the special case of time-independent forcing ($f$ constant) and no frequency detuning ($\sigma = \mu = 0$), the dynamics of the forced triad equations has been analysed by Harris \emph{et al.}\ \cite{Harris2012}, with both periodic and quasi-periodic dynamics reported. We also consider this case, leaving the effects of detuning and variable forcing for future investigation. In this setting, when $|A_1|$, $|A_2|$ and $|A_3|$ are initially small relative to the magnitude of the forcing, $|\Omega_3 s_3 f|$, the initial growth in $A_3$ is approximately linear (see figure \ref{fig:Forced_triad}$(a)$). As mode 3 is the pump mode, the growth in $A_3$ excites $A_1$ and $A_2$, thus activating the triad. The conservation laws of the forced triad equations \cite{Harris2012} result in a temporary diminution of mode 3, which is later augmented by the external forcing; whence the process repeats. In some parameter regimes, the resulting evolution of the forced triad is periodic in time (see figure \ref{fig:Forced_triad}$(b)$ and Raupp \& Silva Dias \cite{Raupp2009}); in contrast to the findings of Harris \emph{et al.}\ \cite{Harris2012}, however, we also identify initial conditions (with all other parameters unchanged) that result in hitherto unidentified chaotic dynamics (see figure \ref{fig:Forced_triad}$(c)$). The chaotic nature of this latter example may be verified via estimation of the maximal Lyapunov exponent \cite{Strogatz}, which is found to be positive (i.e.\ initially adjacent trajectories diverge exponentially in phase space); however, a more thorough investigation of the chaotic dynamics of the forced triad equations, and the subtle dependence on initial conditions, will be presented elsewhere.
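For readers who wish to reproduce the qualitative behaviour of figure \ref{fig:Forced_triad}, a minimal Python sketch of the integration of equations \eqref{eq:triad_forced} (with $\sigma=\mu=0$ and constant $f$) is given below, together with a crude two-trajectory estimate of the maximal Lyapunov exponent. The interaction coefficients \texttt{a1}, \texttt{a2}, \texttt{a3} and the forcing amplitude \texttt{F} $=\Omega_3 s_3 f$ are illustrative placeholders: the true values follow from equation \eqref{eq:triad_coeff_final} for the triad under consideration.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

a1, a2, a3 = 1.0, 1.0, -1.0   # placeholder interaction coefficients
F = 0.1                       # placeholder forcing amplitude Omega_3*s_3*f

def rhs(t, A):
    """Right-hand side of the forced triad equations (sigma = mu = 0)."""
    A1, A2, A3 = A
    return [a1 * np.conj(A2) * np.conj(A3),
            a2 * np.conj(A1) * np.conj(A3),
            a3 * np.conj(A1) * np.conj(A2) - F]

# Reference trajectory (complex initial data, as in the figure).
A0 = np.array([0.02j, 0.01, 0.01j], dtype=complex)
sol = solve_ivp(rhs, (0.0, 500.0), A0, rtol=1e-10, atol=1e-12)
# sol.y contains A_j(tau); plotting |sol.y[j]| gives curves analogous
# to the panels of the figure.

def max_lyapunov(A0, T=500.0, dt=1.0, eps=1e-8):
    """Average log-growth of a small separation, renormalised each step."""
    A = np.array(A0, dtype=complex)
    B = A + eps / np.sqrt(3)
    acc = 0.0
    for _ in range(int(T / dt)):
        A = solve_ivp(rhs, (0.0, dt), A, rtol=1e-10, atol=1e-12).y[:, -1]
        B = solve_ivp(rhs, (0.0, dt), B, rtol=1e-10, atol=1e-12).y[:, -1]
        d = np.linalg.norm(B - A)
        acc += np.log(d / eps)
        B = A + (B - A) * (eps / d)   # rescale the separation back to eps
    return acc / T

print(max_lyapunov(A0))   # > 0 signals exponential divergence
\end{verbatim}
A positive output is consistent with the chaotic behaviour of figure \ref{fig:Forced_triad}$(c)$; the precise value depends on the (placeholder) coefficients and the initial conditions.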
\section{Discussion}
\label{sec:discussion}
We have performed a systematic investigation into nonlinear resonant triads of free-surface gravity waves confined to a cylinder of finite depth; previously studied 1:2 resonances are obtained as special cases. A key result of our study is Theorem \ref{thm:triads}, which determines whether there exists a fluid depth at which three given wave modes resonate due to the nonlinear evolution of the fluid. Equipped with this result, we determined the long-time fluid evolution using multiple-scales analysis, from which we deduced that all solutions to the triad equations are periodic in time. Finally, we determined that a given triad may be excited via external forcing of the triad's pump mode, thereby providing a mechanism for exciting a given triad in a wave tank. All our results are derived for cylinders of arbitrary cross-section (barring some technical assumptions; see \S \ref{sec:formulation}), thus forming a broad framework for characterising nonlinear resonance of confined free-surface gravity waves. In particular, our theoretical developments buttress experimental observations \cite{Michel2019} and demonstrate the potential generality of confinement as a mechanism for promoting nonlinear resonance.
A second fundamental component of our study is the influence of the cylinder cross-section on the existence of resonant triads; for example, resonant triads are impossible in rectangular cylinders, yet abundant within circular and annular cylinders (for particular fluid depths). Of the vast array of resonances arising in a circular cylinder, triads consisting of an axisymmetric pump mode and two identical counter-propagating waves are of notable interest. This combination of axisymmetric and non-axisymmetric modes possesses an interesting analogy to the excitation of counter-propagating subharmonic beach edge waves due to a normally incident standing wave \cite{Guza1974}. Specifically, the wave crests of the standing axisymmetric mode are always parallel to the bounding wall of the circular cylinder, and may excite steadily propagating waves that are periodic in the azimuthal direction. For the special case for which the amplitudes of the two counter-propagating modes coincide, one observes the resonant interaction of standing axisymmetric and non-axisymmetric waves.
So as to gain a deeper insight into the influence of nonlinearity on resonant triads, a primary focus for future investigations will be the simulation of the Euler equations within a cylindrical domain, with consideration of various truncated systems \cite{CraigSulem1993, MilewskiKeller1996, BergerMilewski2003, WangMilewski2012}. From a computational perspective, the most natural geometry to consider is a circular cylinder \cite{QadeerWilkening2019}; this geometry has been previously explored in the context of steadily propagating nonlinear waves in the vicinity of a 1:2 resonance \cite{Bryant1989, Yang2021}, but it remains to assess the efficacy of the amplitude equations \eqref{eq:triad_equations} for predicting the evolution of nonlinear triads. Indeed, exploration of the nonlinear dynamics may reveal additional resonant triads arising beyond the small-wave-amplitude limit explored herein.
Of similar interest is the fluid evolution when multiple triads are excited at a single depth, with the potential for energy exchange via triad-triad interactions \cite{McEwan1972, CraikBook, Chow1996, Choi2021}.
The simulation of free-surface gravity waves in non-circular cylinders presents a more formidable challenge, however, except for cylinder cross-sections that possess a tractable eigenmode decomposition.
A second natural avenue for future investigation is to characterise the influence of applied forcing on resonant triads. For example, when the fluid bath is subjected to sufficiently vigorous vertical vibration, Faraday waves \cite{Faraday1831, Kumar1996} may appear on the free surface; although this scenario has been studied in the case of a 1:2 internal resonance \cite{Miles1984b, MilesHenderson1990, HendersonMiles1991}, resonant triads may give rise to the formation of more exotic free-surface patterns, particularly at fluid depths that differ from that of a 1:2 resonance. In a similar vein, horizontal vibration \cite{Miles1976, Miles1984c} or a pulsating pressure source at the frequency of the triad's pump mode may lead to a wealth of periodic and quasi-periodic dynamics, as predicted by the forced triad equations \cite{Harris2012}. Our study has indicated, however, that chaotic dynamics are also possible in some parameter regimes, and might thus be excited in numerical simulation or experiments. Lastly, our study has focused on flat-bottomed cylinders; it seems plausible, however, that submerged topography may enhance or mitigate certain resonances, which may be an important consideration in the design of industrial-scale fluid tanks.
Finally, our study has focused on the special case of a liquid-air interface, for which the dynamics of the air are neglected within the Euler equations. It is natural, however, to extend our formulation to the case of two-layer flows (in the absence of surface tension), with two immiscible fluids (e.g.\ air and water) confined within a cylinder whose lid and base are both rigid. In this setting, the density difference across the fluid-fluid interface has a strong influence on the system dynamics; it seems plausible, therefore, that additional resonances may be excited in this configuration, relative to the liquid-air interface considered herein.
Notably, the anticipated resonances would arise across a single interface, in contrast to the cross-interface resonances explored in previous investigations \cite{Ball1964, Simmons1969, Joyce1974, Segur1980, TakloChoi2019, Choi2021}. Finally, exploring the influence of parametric forcing \cite{KumarTuckerman1994} on resonant triads arising for two-layer flows opens up exciting new vistas in nonlinear resonance induced by confinement.
\section{Introduction}
In this paper we present a new mathematical formalism for describing a massive relativistic particle with spin one. In this formalism, we use a four-dimensional transition from the Heisenberg to the Schr\"odinger picture. In quantum mechanics, the transition from the Heisenberg to the Schr\"odinger picture is
carried out by the unitary transformation $S(t)=\exp(-itH)$, where ${H}$ is the Hamiltonian operator of the particle (we choose here a system of units such that ${\hbar}=1$, $c=1$).
The state of a particle in the Heisenberg picture and the particle operators in the
Schr\"odinger picture are defined as time-independent functions and operators, respectively. In
our earlier work \cite{Frick1}, we generalized this transition to the transformation
\begin{equation}
\label{1.1}
S(t,{\bf x})=\exp[-i(tH-{\bf x}\cdot{\bf P})],
\end{equation}
where ${\bf P}$ is the momentum operator of the particle. In this context, the functions in the
Heisenberg picture and the operators in the Schr\"odinger picture are independent of the time
and space coordinates t, ${\bf x}$. The Fourier transform of the state in the Heisenberg picture
must be independent of the space-time
coordinates. That is why the plane waves $\sim\exp(i{\bf x}\cdot{\bf p})$ cannot be
applied in this Fourier transformation. Accordingly, the momentum and the Hamiltonian
operator of the particle cannot be expressed in terms of the spatial derivative $-i\nabla_{\bf x}$. Under these premises,
there is no ${\bf x}$-representation. As a result, the plane
waves in the new Schr\"odinger picture and also the space-time coordinates in the operators of the new
Heisenberg picture appear in different representations. In the Heisenberg picture one can first use the momentum representation and subsequently the representation defined via a space-time-independent Fourier transformation.
Let the function $\Psi^{(s)}({\bf p})$ be a relativistic wave function of a particle in the momentum representation (${\bf p}$ = momentum, $m$ = mass, $p_0:=\sqrt{m^2+{\bf p}^2}$, $s$ = spin). In the context of the generalization $S(t){\Rightarrow}S(t,{\bf x})$, the function ${\Psi}^{(s)}_{\mu}({\bf p})$ is a wave function in the Heisenberg picture. Under the Lorentz transformation $g$ with boost and rotation generators \cite{Wigner1,Wigner2,Bargmann,Shirokov}
\begin{equation}
\label{1.2}
{\bf N}({\bf p},{\bf
s}):=ip_0{\nabla}_{\bf p}-\frac{{\bf s}\times{\bf p}}
{p_0+m},\quad{\bf J}({\bf p},{\bf s}):=-i{\bf p}\times{\bf \nabla}_{\bf p}
+{\bf s}:={\bf L}({\bf p})+{\bf s},
\end{equation}
and parameters ${\bf u},u_{0}$, with $u^2_{0}-{\bf u}^2=1$, the function ${\Psi}^{(s)}_{\mu}({\bf p})$ transforms by the unitary representation ($\mu$ = spin projection)
\begin{equation}
\label{1.3}
T_{g}{\Psi}^{(s)}_{\mu}({\bf p})=\sum_{\mu^{'}=-s}^{s}W_{\mu\mu^{'}}^{(s)}({\bf p},{\bf u})\,\Psi_{\mu^{'}}^{(s)}(g^{-1}{\bf p}),
\end{equation}
where $W_{\mu\mu^{'}}^{(s)}({\bf p},{\bf u})$ are the Wigner functions (${\bf {\sigma}}$ are the Pauli matrices)
\begin{equation}
\label{1.4}
W^{(1/2)}({\bf p},{\bf u})=\frac{(p_0+m)(u_{0}+1)-{\bf u}\cdot{\bf p}+i{\bf \sigma}({\bf p}\times{\bf u})}{\sqrt{2(u_{0}+1)(p_0+m)(p_{0}u_{0}-{\bf p}\cdot{\bf u}+m)}},\quad{W}^{(1/2)}({\bf p},{\bf u})\,
{W}^{\dagger(1/2)}({\bf p},{\bf u})=1.
\end{equation}
Such a wave function has positive definite norm
\begin{equation}
\label{1.5}
\int\frac{d{\bf p}}{p_0}\,\sum_{\mu=-s}^{s}|{\Psi}^{(s)}_{\mu}({\bf p})|^2=\int\frac{d{\bf p}}{p_0}\,\sum_{\mu=-s}^{s}|{\Psi}^{(s)}_{\mu}({\bf p},t,{\bf x})|^2<\infty,
\end{equation}
and can be expanded with respect to irreducible unitary representations of the Lorentz group. This function is covariant only with respect to the set of spin and momentum variables, and not with respect to each of them separately.
In (\ref{1.5}), the function $\Psi^{(s)}({\bf p},t,{\bf x}):=S(t,{\bf x})\,\Psi^{(s)}({\bf p})$ is the wave function in the new Schr\"odinger picture in momentum representation.
The relativistic spinors, which transform by non-unitary finite-dimensional representations of the Lorentz group, do not have definite norms; such spinors are therefore not useful here.
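As an elementary numerical consistency check of the Wigner matrix (\ref{1.4}) (and of the normalisation $u_{0}^{2}-{\bf u}^{2}=1$ adopted above), one may verify its unitarity for randomly drawn on-shell momenta and boost parameters; the following Python sketch is purely illustrative.
\begin{verbatim}
import numpy as np

# Pauli matrices.
sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)

rng = np.random.default_rng(0)
m = 1.0
p = rng.normal(size=3); p0 = np.sqrt(m**2 + p @ p)   # on-shell momentum
u = rng.normal(size=3); u0 = np.sqrt(1.0 + u @ u)    # unit timelike boost

num = ((p0 + m) * (u0 + 1) - u @ p) * np.eye(2) \
      + 1j * np.einsum('i,ijk->jk', np.cross(p, u), sigma)
den = np.sqrt(2 * (u0 + 1) * (p0 + m) * (p0 * u0 - p @ u + m))
W = num / den

print(np.allclose(W @ W.conj().T, np.eye(2)))   # True: W is unitary
\end{verbatim}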
The unitary representations correspond to the eigenvalues $1+\alpha^2-{\lambda}^2$ of the first Casimir operator $C_1({\bf p}):={\bf N}^2-{\bf J}^2$ and to the eigenvalues $\alpha\lambda$ of the second Casimir operator $C_2({\bf p})={\bf N}\cdot{\bf J}$, $(0\leq\alpha<\infty,\quad\lambda=-s,...,s)$. The range of $\alpha$ defines the fundamental series of the unitary representations. The formalism of harmonic analysis on the Lorentz group has been used by many authors (a detailed list of references can be found, e.g., in Refs.\ \cite{Joos,Kad,Ruhl,Barut}). The four-dimensional generalization of the Heisenberg/Schr\"odinger picture introduces new features into the description of particle states. It is necessary to develop a mathematical formalism in the framework of this approach for describing relativistic particles.
In \cite{Frick1,Frick2} the space-time independent expansion with respect to the unitary irreducible representations of the Lorentz group was applied as the Fourier transformation in the Heisenberg picture for the relativistic particle with spin 0 and spin 1/2. This procedure is applied here to the practically important case of the massive particle with spin 1. We shall show that the consistent determination of the wave functions of a particle in the Schr\"odinger picture requires the use of the wave functions in the momentum representation or of the matrix elements of the fundamental series of unitary representations of the Lorentz group.
Since the operators $-i\nabla_{\bf x}$ are not momentum operators, we want to find the Hamiltonian and the momentum operators for the particle with spin 1. In this paper, the operators we obtain are expressed through the group parameter ${\alpha}$. We first give a short review of the description of particles with spin 0 in the context of the application of (\ref{1.1}).
\section{Spin 0 - particle}
In this case, the operator $C_1({\bf p})$ has the eigenfunctions
\begin{equation}
\label{2.1}
\xi^{(0)}({\bf p},{\alpha},{\bf n}):=[(p_{0}n_{0}-{\bf p}\cdot{\bf n})/m]^{-1+i\alpha},
\end{equation}
where $(n_{0},\,{\bf n})$ is a null vector, $n_{0}^{2}-{\bf n}^{2}=0$.\\ The Fourier transforms for the states of the relativistic particle with spin 0 in terms of the basis functions (\ref{2.1}), with ${\bf n}=(\sin{\theta}\cos{\varphi},\sin{\theta}\sin{\varphi},\cos{\theta})$ and ${d\omega}_{\bf n}=\sin{\theta}\,d{\theta}\,d{\varphi}$, have the form \cite{Shapiro}
\begin{equation}
\label{2.2}
\Psi^{(0)}({\bf p})=\frac{1}{(2\pi)^{3/2}}\int{\alpha}^2d{\alpha}\,d{\omega}_{\bf n}\,\Psi^{(0)}({\alpha},{\bf n})\,\xi^{(0)}({\bf p},{\alpha},{\bf n}),
\end{equation}
\begin{equation}
\label{2.3}
\Psi^{(0)}({\alpha},{\bf n})=\frac{1}{(2\pi)^{3/2}}
\int\frac{d{\bf p}}{p_0}\,\Psi^{(0)}({\bf p})\,\xi^{\ast(0)}({\bf p},{\alpha},{\bf n}).
\end{equation}
The functions \(\Psi^{(0)}({\bf p})\) and \(\Psi^{(0)}({\alpha},{\bf n})\)
are the state functions of the particle in the \({\bf p}\)- and \(({\alpha},{\bf n})\)-representations. The completeness and orthogonality relations for the functions $\xi^{(0)}({\bf p},\alpha,{\bf n}),\quad\xi^{\ast(0)}({\bf p},\alpha,{\bf n})$ are given in the Appendix. In the \(({\alpha},{\bf n})\)-representation the free Hamiltonian and the momentum operators for a particle
with spin 0 are the differential-difference operators \cite{Kad} $({\bf L}:={\bf L}({\theta},{\varphi}))$
\begin{equation}
\label{2.4}
H^{(0)}({\alpha},{\bf
n})=m\left[\cosh(i\frac{\partial}{{\partial}{\alpha}})+\frac{i}{{\alpha}}\sinh(i\frac{\partial}{{\partial}{\alpha}}) +\frac{{\bf L}^2}{2{\alpha}^2}\exp(i\frac{\partial}{{\partial}{\alpha}})\right],
\end{equation}
\begin{equation}
\label{2.5}
{\bf P}^{(0)}({\alpha},{\bf
n})={\bf n}\left[H^{(0)}({\alpha},{\bf
n})-m\exp(i\frac{\partial}{{\partial}{\alpha}})\right]-m\frac{{\bf n}{\times}{\bf L}}{\alpha}\exp(i\frac{\partial}{{\partial}{\alpha}}).
\end{equation}
The common eigenfunction of these operators is $\xi^{(0)}_{\bf p}({\alpha},{\bf n}):=\xi^{\ast(0)}({\bf p},{\alpha},{\bf n})$:
\begin{equation}
\label{2.6}
H^{(0)}({\alpha},{\bf
n})\,\xi^{(0)}_{\bf p}({\alpha},{\bf n})=p_{0}\,\xi^{(0)}_{\bf p}({\alpha},{\bf n}),\quad{\bf P}^{(0)}({\alpha},{\bf
n)}\,{\xi}^{(0)}_{\bf p}({\alpha},{\bf n})={\bf p}\,{\xi}^{(0)}_{\bf p}({\alpha},{\bf n}).
\end{equation}
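The eigenvalue equations (\ref{2.6}) admit a quick numerical check in the $s$-wave sector, where ${\bf L}^{2}\to 0$ and, by (\ref{A.1}), (\ref{A.4}) of the Appendix, the radial part of $\xi^{(0)}_{\bf p}$ is proportional to $\sin(\alpha\chi)/(\alpha\sinh\chi)$ with $p_{0}=m\cosh\chi$. Since the shift operator $\exp(i\,\partial/\partial\alpha)$ acts on analytic functions as $f(\alpha)\mapsto f(\alpha+i)$, the check reduces to complex evaluation; a minimal Python sketch:
\begin{verbatim}
import numpy as np

m, chi, alpha = 1.0, 0.7, 2.3   # arbitrary test values

def f(a):
    """s-wave radial function, Eq. (A.4)."""
    return np.sin(a * chi) / (a * np.sinh(chi))

# exp(i d/da) f(a) = f(a + i) for analytic f.
cosh_f = 0.5 * (f(alpha + 1j) + f(alpha - 1j))   # cosh(i d/da) f
sinh_f = 0.5 * (f(alpha + 1j) - f(alpha - 1j))   # sinh(i d/da) f
Hf = m * (cosh_f + (1j / alpha) * sinh_f)        # H^(0) with L^2 -> 0

print(np.isclose(Hf, m * np.cosh(chi) * f(alpha)))   # True: H f = p_0 f
\end{verbatim}
The same device (evaluation at $\alpha\pm i$) can be used to test the spin-1 operators below component by component.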
The operators in (\ref{2.4})-(\ref{2.6}) are used for the relativistic description of the two-body problem \cite{Kad,Ska,Amir,Drenska,Kag}. In this case the vector ${\bf q}={\alpha}{\bf n}/m$ is used.
In the nonrelativistic limit
\begin{equation}
\label{2.7}
C_1^{(0)}({\bf p})\rightarrow{-m^2{\bf \nabla}^{2}_{\bf p}},\quad{\xi}^{(0)}({\bf p},{\alpha},{\bf n}){\;}\rightarrow{\;}\exp(-i\alpha{\bf n}\cdot{\bf p}/m),
\end{equation}
\begin{equation}
\label{2.8}
H^{(0)}(q,{\bf n})-m{\quad}\rightarrow{\quad}-\frac{1}{{2m}{q}^2}\frac{\partial}{{\partial}{q}}{q}^{2}\frac{\partial}{{\partial}{q}}+\frac{\bf L^{2}}{2mq^2},{\quad}{\bf P}^{(0)}(q,{\bf n})\rightarrow{\quad}-i\nabla_{\bf q}.
\end{equation}
The functions $\exp(i{\alpha}{\bf n}\cdot{\bf p}/m)$ realize the unitary irreducible representations of the Galileo group:
\begin{equation}
\label{2.9}
\Psi(\alpha{\bf n})={\frac{1}{{(2\pi)}^{3/2}}}\int{d{\bf p}}\Psi({\bf p})\exp(i{\alpha\bf n}\cdot{\bf p}/m).
\end{equation}
Since Wigner, particles are associated with unitary representations of the Poincar\'e group.
If one introduces the generators of the Lorentz algebra ${\bf N}(\alpha,{\bf n})$ for the particle with spin 0
\begin{equation}
\label{2.10}
{\bf N}^{(0)}(\alpha,{\bf n}):=\alpha{\bf n}+({\bf n}\times{\bf L}+{\bf L}\times{\bf n})/2,
\end{equation}
then \cite{Frick1}, instead of the vector $\alpha{\bf n}$, the $(\alpha,{\bf n})$-representation can be recognized as a representation of the
Poincar\'e group. The operators $H^{(0)}(\alpha,{\bf
n}),\,{\bf P}^{(0)}(\alpha,{\bf
n}),\,{\bf N}^{(0)}(\alpha,{\bf n}),\,{\bf L}$ satisfy the commutation relations of
the Poincar\'e algebra.
The Casimir operator $C_1({\bf p})$ and the functions $\xi^{(0)}({\bf p},\alpha,{\bf n})$ do not depend on the space-time coordinates ${\bf x}$, $t$. That is why the functions in the expansions (\ref{2.2}), (\ref{2.3}) and the operators in (\ref{2.4})-(\ref{2.6}) are likewise independent of the space-time coordinates. In
the framework of the four-dimensional generalization of the Heisenberg to the
Schr\"odinger picture $S(t){\Rightarrow}S(t,{\bf x})$, the functions in (\ref{2.2}), (\ref{2.3}) and the operators in the (\ref{2.4})-(\ref{2.6}) must be seen as
functions in the Heisenberg picture and, accordingly, as operators in the Schr\"odinger
picture for the particles with spin 0. If the transformation (\ref{1.1}) is not applied, then we have
no possibility to introduce the plane wave $\exp(i{\bf x}\cdot{\bf p})$ into the relativistic state functions (\ref{2.2}), (\ref{2.3}). In
the nonrelativistic limit there is such a possibility. The function $\exp(i{\alpha\bf n}\cdot{\bf p}/m)$ in (\ref{2.9}) has the
form of the plane wave $\exp(i{\bf x}\cdot{\bf p})$. Thus, if the generalization (\ref{1.1}) is not applied,
the function $\exp(i{\alpha\bf n}\cdot{\bf p}/m)$ can be replaced by $\exp(i{\bf x}\cdot{
\bf p})$. In such a form, the
plane waves can be introduced in the Schr\"odinger picture through the Fourier transformation, and then an
{\bf x}-representation can be introduced. In the relativistic expansion (\ref{2.3}) this method cannot be used. The
functions $\exp(i{\bf x}\cdot{\bf p}),\quad\xi^{\ast(0)}({\bf p},{\alpha},{\bf n})$ have different forms.
The application of the transformation (\ref{1.1}) gives the
state of the particle in the Schr\"odinger picture in ${bf p}$ and $(\alpha,{\bf n})$ -representation:\\
$\Psi^{(0)}({\bf p},t,{\bf x})=S(t,{\bf x})\Psi^{(0)}({\bf p}),\quad\Psi^{(0)}(\alpha,{\bf n},t,{\bf x})=S(t,{\bf x})\Psi^{(0)}(\alpha,{\bf n})$.
The Fourier expansion $(\exp[-i(p{\cdot}x)]:=\exp[-i(tp_{0}-{\bf x}\cdot{\bf p})])$
\begin{equation}
\label{2.11}
\Psi^{(0)}({\alpha},{\bf n},t,{\bf x})=\frac{1}{(2\pi)^{3/2}}
\int\frac{d{\bf p}}{p_0}\,\Psi^{(0)}({\bf p})\,{\xi}^{(0)}_{\bf p}({\alpha},{\bf n})\,\exp[-i(p\cdot{x})],
\end{equation}
in contrast to the usual expansion
\begin{equation}
\label{2.12}
\Psi^{(0)}(t,{\bf x})=\frac{1}{(2\pi)^{3/2}}
\int\frac{d{\bf p}}{p_0}\Psi^{(0)}({\bf p})\,\exp[-i(p\cdot{x})],
\end{equation}
contains the matrix elements ${\xi}^{(0)}_{\bf p}({\alpha},{\bf n})$ of the unitary representation of the Lorentz group.
In the expansion (\ref{2.12}), the plane waves of the form ${\sim}$ const$\cdot\exp[-i(p{\cdot}x)]$ appear as the wave functions of the particle with the definite momentum ${\bf p}$ and spin 0. Similar expansions for particles with spin 1/2 or spin 1 contain the Dirac bispinor (spin 1/2) or the unit polarization 4-vector (spin 1), respectively.
An important difference between (\ref{2.11}) and (\ref{2.12}) is that, in accordance with the transformation (\ref{1.1}), the plane waves ${\sim}$ const$\cdot\exp[-i(p{\cdot}x)]$ alone, without the wave functions in the Heisenberg picture, cannot express the wave functions of the particle.
The wave functions with definite momentum in the Schr\"odinger picture in $(\alpha,{\bf n})$-representation are the functions
\begin{equation}
\label{2.13}
\xi^{(0)}_{\bf p}(\alpha,{\bf n},t,{\bf x})=\,\xi^{(0)}_{\bf p}(\alpha,{\bf n})\exp[-i(p{\cdot}x)],
\end{equation}
in (\ref{2.11}).
The expression (\ref{2.12}) and the similar expansions for the particle with spin 1/2 or spin 1 containing the Dirac bispinor or the unit polarization 4-vector are not transformations from one representation to another.
In the nonrelativistic limit for the Fourier expansion in the Schr\"odinger picture we have
\begin{equation}
\label{2.15}
\Psi(\alpha{\bf n},t,{\bf x})=\frac{1}{(2\pi)^{3/2}}\int{d{\bf p}}\Psi({\bf p})\exp(i{\alpha\bf n}\cdot{\bf p}/m)\exp[-i(t{p^2/2m}-{\bf x}\cdot{\bf p})].
\end{equation}
\section{Spin 1 - particle}
The expansions (\ref{2.2}), (\ref{2.3}) are generalized in \cite{Chou,Popov} for the particle with spin. They can be expressed in the form
\begin{equation}
\label{3.1}
\Psi^{(s)}_{\mu}({\bf p})=\frac{1}{(2\pi)^{3/2}}\sum_{\mu^{'}=-s}^{s}\int({\mu^{'}}^2+{\alpha}^2)d{\alpha}\,d{\omega}_{\bf
n}\,D^{(s)}_{\mu{\mu}^{'}}(R_{w})\,\xi^{(0)}({\bf p},\alpha,{\bf n})\,
\Psi^{(s)}_{\mu^{'}}(\alpha,{\bf n}),
\end{equation}
\begin{equation}
\label{3.2}
\Psi^{(s)}_{\mu}(\alpha,{\bf n})=\frac{1}{(2\pi)^{3/2}}\sum_{\mu^{'}=-s}^{s}\int\frac{d{\bf p}}{p_0}\,D^{\dagger(s)}_{\mu{\mu}^{'}}(R_{w})\,\xi^{\ast(0)}({\bf p},\alpha,{\bf n})\,\Psi^{(s)}_{\mu^{'}}({\bf p}),
\end{equation}
where $\Psi^{(s)}_{\mu}(\alpha,{\bf n})$ is the wave function in the $({\alpha},{\bf n})$-representation and the matrix ${D}^{(s)}(R_w)$ must have the properties of the Wigner rotation in (\ref{1.3}), (\ref{1.4}).
In \cite{Frick2} this matrix (spin=1/2) has been found by means of the solutions of the eigenvalue equations of the operator $C_1({\bf p})$
\begin{equation}
\label{3.3}
{D}^{(1/2)}(R_w):={D}^{(1/2)}({\bf p},{\bf n})=\frac{p_0-{\bf p}\cdot{\bf n}+m-i{\sigma}\cdot({\bf p}{\times}{\bf n})}{\sqrt{2(p_0+m)(p_0-{\bf p}\cdot{\bf n})}},\quad{D}^{(1/2)}({\bf p},{\bf n})\,
{D}^{\dagger(1/2)}({\bf p},{\bf n})=1.
\end{equation}
In the $({\alpha},{\bf n})$-representation the functions:
\begin{equation}
\label{3.4}
\xi^{(1/2)}_{\bf p}(\alpha,{\bf n}):= D^{\dagger(1/2)}({\bf p},{\bf n})\,\xi^{(0)}_{\bf p}(\alpha,{\bf n}),
\end{equation}
were determined as the eigenfunctions of the Hamiltonian and the momentum operator for the particle with spin 1/2.
In this case
\begin{equation}
\label{3.5}
{\bf J}:={\bf L}+{\bf s},\quad{\bf N}:=\alpha{\bf n}+({\bf n}\times{\bf J}+{\bf J}\times{\bf n})/2.
\end{equation}
and $C_1({\alpha},{\bf n})=1+\alpha^2-({\bf s}\cdot{\bf n})^2,\quad{C}_2({\alpha},{\bf n})=\alpha{\bf s}\cdot{\bf n}.$
For the particle with spin one, we use the eigenfunctions ${\xi}^{(1)}({\bf p},{\alpha},{\bf n})$ of both Casimir operators $C_1({\bf p})$ and $C_2({\bf p})$:
\begin{equation}
\label{3.6}
{\xi}^{(1)}({\bf p},{\alpha},{\bf n})=D^{(1)}({\bf p},{\bf n})\,D({\bf n})\,{\xi}^{(0)}({\bf p},{\alpha},{\bf n}).
\end{equation}
The matrix $D^{(1)}({\bf p},{\bf n})$ can be obtained from the matrix (\ref{3.3}) and the Clebsch-Gordan coefficients. The matrix $D^{\dagger}({\bf n})$, \,($D({\bf n})D^{\dagger}({\bf n})=1$) contains the eigenfunctions of the operator ${\bf s}\cdot{\bf n}$, with the eigenvalues $\lambda$=-1, 0, 1.
In order to define the Hamiltonian and the momentum operators, we consider the functions
\begin{equation}
\label{3.7}
\xi^{(1)}_{\bf p}({\alpha},{\bf n}):=D^{\dagger}({\bf n})\,D^{\dagger(1)}({\bf p},{\bf n})\,\xi^{(0)}_{\bf p}({\alpha},{\bf n}),
\end{equation}
as states of the free particle with spin 1 with a definite momentum in the Heisenberg picture in $({\alpha},{\bf n})$-representation
\begin{equation}
\label{3.8}
H^{(1)}({\alpha},{\bf
n})\,{\xi}^{(1)}_{\bf p}({\alpha},{\bf n})=p_0\,{\xi}^{(1)}_{\bf p}({\alpha},{\bf n}).
\end{equation}
The operators in (\ref{3.5}) must be transformed according to the rule
$D^{\dagger}({\bf n})\,{\bf J}\,D({\bf n})=\widetilde{\bf J}$.
Applying (\ref{2.4}), (\ref{2.5}), we can express the functions ${\xi}^{(1)}_{\bf p}({\alpha},{\bf n})$ by means of the operators
\begin{eqnarray}
\label{3.9}
A({\alpha},{\bf n}):&=&\left[1-\frac{i}{\alpha}\tau-\frac{1+i\alpha+\tau}{\alpha(\alpha-i)}2{\bf s}\cdot{\bf L}+\frac{{i}{\tau}+\alpha}{\alpha^2({\alpha}-{i})}{\bf L}^2-\frac{{2}}{\alpha({\alpha}-{i})}({\bf s}\cdot{\bf L})^2\right]\exp(i\frac{\partial}{\partial\alpha})\nonumber\\&&+\left[1+\frac{i}{\alpha}{\tau}\right]\exp(-i\frac{\partial}{\partial\alpha})+2-\frac{2i}{\alpha-i}{\bf s}\cdot{\bf L},
\end{eqnarray}
\begin{equation}
\label{3.10}
\xi^{(1)}_{\bf p}(\alpha,{\bf n})=D^{\dagger}({\bf n})\,A({\alpha},{\bf n})\,\frac{m}{2}\frac{\xi^{(0)}_{\bf p}({\alpha},{\bf n})}{p_{0}+m},
\end{equation}
where $\tau:=1-({\bf s}\cdot{\bf n})^2$.
Using the equation
\begin{equation}
\label{3.11}
H^{(1)}({\alpha},{\bf
n})\,D^{\dagger}({\bf n})\,A({\alpha},{\bf n})=D^{\dagger}({\bf n})\,A({\alpha},{\bf n})\,H^{(0)}({\alpha},{\bf
n}),
\end{equation}
we have
\begin{eqnarray}
\label{3.12}
H^{(1)}(\alpha,{\bf n})&=&m\left[\cosh(i\frac{\partial}{\partial\alpha})+\frac{i\alpha+{\widetilde\tau}}{\alpha(\alpha-i)}\sinh(i\frac{\partial}{\partial\alpha})+\frac{\widetilde\tau}{\alpha^2}\exp(-i\frac{\partial}{\partial\alpha})+\right.\nonumber\\&&\left.\frac{(\alpha^{2}+\widetilde\tau)\widetilde{\bf J}^{2}}{2\alpha^2(\alpha^2+1)}\exp(i\frac{\partial}{\partial\alpha})-\frac{(\widetilde{\bf s}\cdot\widetilde{\bf L}+2)\widetilde\tau}{\alpha^2+1}-\frac{\widetilde\tau(\widetilde{\bf s}\cdot\widetilde{\bf L}+2)}{\alpha^2}\right].
\end{eqnarray}
In the nonrelativistic limit, with the notation $q={\alpha}/m$, we have
\begin{equation}
\label{3.13}
H^{(1)}(q,{\bf n})-m{\quad}\rightarrow{\quad}-\frac{1}{{2m}{q}^2}\frac{\partial}{{\partial}{q}}{q}^{2}\frac{\partial}{{\partial}{q}}+\frac{\widetilde{\bf J^{2}}+2[\widetilde{\tau}-(\widetilde{\bf s}\cdot\widetilde{\bf J})\widetilde\tau-\widetilde\tau(\widetilde{\bf s}\cdot\widetilde{\bf J})]}{{2mq^2}}.
\end{equation}
One can determine the momentum operator either by means of the commutation relations of the
Poincar\'e algebra $[\widetilde{\bf N}, H^{(1)}({\alpha},{\bf
n})]=-i{\bf P}^{(1)}({\alpha},{\bf
n})$,
or by the equations
\begin{equation}
\label{3.14}
{\bf P}^{(1)}({\alpha},{\bf
n})\,D^{\dagger}({\bf n})\,A({\alpha},{\bf n})=D^{\dagger}({\bf n})\,A({\alpha},{\bf n})\,{\bf P}^{(0)}({\alpha},{\bf
n}),
\end{equation}
\begin{eqnarray}
\label{3.15}
{P}^{(1)}_{3}({\alpha},{\bf
n})&=&n_3 H^{(1)}({\alpha},{\bf
n})-m\left[{\bigl(}\frac{\alpha(1-\widetilde{\tau})}{(\alpha^2+1)}+\frac{\widetilde{\tau}}{\alpha}{\bigr)}\exp(i\frac{\partial}{{\partial}{\alpha}})N_3+\right.\nonumber\\&&\left.\frac{1-\widetilde{\tau}+s_3L_3}{\alpha^2+1}\exp(i\frac{\partial}{{\partial}{\alpha}})+\frac{(\widetilde{\bf s}\times{\bf n})_3}{\alpha}(1-\frac{\widetilde{\tau}}{\alpha+i})\right].
\end{eqnarray}
The operators $H^{(1)}({\alpha},{\bf
n}),\,{\bf P}^{(1)}({\alpha},{\bf
n})$ can be
identified as operators of the massive relativistic spin 1 particle in the Schr\"odinger picture in $(\alpha,{\bf n})$-representation.
The state functions in the Schr\"odinger picture can be found by means of the Fourier expansion in the Heisenberg picture (\ref{3.2}) and the transformation (\ref{1.1}):
\begin{equation}
\label{3.16}
{\Psi}^{(1)}_{\mu}({\alpha},{\bf n},t,{\bf x})={\frac{1}{{(2\pi)}^{3/2}}}\sum_{\mu^{'}=-1}^{1}\int\frac{d{\bf p}}{p_0}\,{\xi}^{(1)}_{\bf p\mu\mu^{'}}({\alpha},{\bf n})\,\exp[-i(p\cdot{x})]{\Psi}^{(1)}_{\mu^{'}}({\bf p}).
\end{equation}
In this case the
Schr\"odinger equation is valid
\begin{equation}
\label{3.17}
i\frac{\partial}{\partial{t}}{\Psi}^{(1)}(\alpha,{\bf n},t,{\bf x})=H^{(1)}(\alpha,{\bf n}){\Psi}^{(1)}(\alpha,{\bf n},t,{\bf x}),
\end{equation}
as well as the equation in the spatial derivatives
\begin{equation}
\label{3.18}
-i{\bf \nabla}_{\bf x}{\Psi}^{(1)}(\alpha,{\bf n},t,{\bf x})={\bf P}^{(1)}({\alpha},{\bf
n}){\Psi}^{(1)}(\alpha,{\bf n},t,{\bf x}).
\end{equation}
\section{ Partial-wave equations}
To determine the partial-wave equations in the $(\alpha,{\bf n})$-representation in the Heisenberg picture, we first use the spherical spinors $\Omega_{{\jmath\ell}m}({\bf n}_{p})$ and $\Omega_{{\jmath\ell}m}({\bf n})$, which are the eigenfunctions of the operators ${\bf s}\cdot{\bf L}({\bf p})$ and ${\bf s}\cdot{\bf L}$, respectively. They have the same form as in the nonrelativistic formalism. The equation (\ref{3.8}) permits factorization by introducing the spinors $\widetilde\Omega_{{\lambda\jmath\ell}m}({\bf n}):=D^{\dagger}({\bf n})\cdot\Omega_{{\jmath\ell}m}({\bf n})$. If we introduce $D^{\dagger}_{11}({\bf n})=(1+n_3)/2,\quad{D}^{\dagger}_{10}({\bf n})=-n_{-}/\sqrt{2},\quad{D}^{\dagger}_{1-1}({\bf n})=n^{2}_{-}/[2(1+n_3)]$,{\quad}$(s_3)_{11}=1$,
then
\begin{equation}
\label{4.1}
\widetilde{J}^{(1)}_{3}=L_{3}+s_{3},\quad\widetilde{J}^{(1)}_{-}=L_{-}+s_{3}n_{-}/(1+n_3),\quad\widetilde{J}^{(1)}_{+}=L_{+}+s_{3}n_{+}/(1+n_3),
\end{equation}
with $\widetilde{{\bf s}\cdot{\bf L}}\,\widetilde{\Omega}_{{\lambda\jmath\ell}m}({\bf n})={\beta}\,\widetilde{\Omega}_{{\lambda\jmath\ell}m}({\bf n})$, where ${\beta}=\jmath(\jmath+1)-\ell(\ell+1)-2$.
Let us integrate the expression
\begin{equation}
\label{4.2}
\xi^{(1)}_{\bf p}(\alpha,{\bf n}){\Omega}_{{\jmath\ell}m}({\bf n}_{ p})=D^{\dagger}({\bf n})\,A({\alpha},{\bf n})\,\frac{m}{2}\frac{\xi^{(0)}_{\bf p}({\alpha},{\bf n})}{p_{0}+m}{\Omega}_{{\jmath\ell}m}({\bf n}_{ p})
\end{equation}
over the angular variables of the ${\bf n}_{ p}$ vectors. The matrix elements obtained in this way can be written in the form
\begin{equation}
\label{4.3}
\widetilde{A({\alpha},{\bf n})}\,\frac{m}{2}\frac{{\cal{P}}^{(0)}_{l}(\cosh\chi,\alpha)}{p_{0}+m}\cdot4{\pi}i^{l}\widetilde{\Omega}_{{\lambda\jmath\ell}m}({\bf n}).
\end{equation}
We define the partial functions for the particles with spin 1, ${\cal{P}}^{(1)}_{{\lambda\jmath\ell}}(\cosh\chi,\alpha)$, as the coefficients that stand in front of $4{\pi}i^{l}\widetilde\Omega_{{\lambda\jmath\ell}m}({\bf n})$ in the expression (\ref{4.3}). These can be expressed in terms of the functions ${\cal{P}}^{(0)}_{l}(\cosh\chi,\alpha)$ (see Appendix), where $b(\chi):=1/[2(\cosh\chi+1)]$:
1) for $\jmath=\ell+1$,
\begin{eqnarray}
\label{4.4}
{\cal{P}}^{(1)}_{\lambda\jmath\ell}(\cosh\chi,\alpha)&=&b(\chi)\left[\frac{(\alpha-il)(\alpha-il-i)}{\alpha(\alpha-i|\lambda|)}\exp(i\frac{\partial}{{\partial}{\alpha}})+\frac{2(\alpha-il-i)}{\alpha-i}+\right.\nonumber\\&&\left.\frac{\alpha+i-|\lambda|}{\alpha}\exp(-i\frac{\partial}{{\partial}{\alpha}})\right]{\cal{P}}^{(0)}_{l}(\cosh\chi,\alpha);
\end{eqnarray}
2) for $\jmath=\ell-1$,
\begin{eqnarray}
\label{4.5}
{\cal{P}}^{(1)}_{\lambda\jmath\ell}(\cosh\chi,\alpha)&=&b(\chi)\left[\frac{(\alpha+il)(\alpha+il+i)}{\alpha(\alpha-i|\lambda|)}\exp(i\frac{\partial}{{\partial}{\alpha}})+\frac{2(\alpha+il)}{\alpha-i}+\right.\nonumber\\&&\left.\frac{\alpha+i-|\lambda|}{\alpha}\exp(-i\frac{\partial}{{\partial}{\alpha}})\right]{\cal{P}}^{(0)}_{l}(\cosh\chi,\alpha);
\end{eqnarray}
3) for $\jmath=\ell$ and $|\lambda|=1$,
\begin{eqnarray}
\label{4.6}
{\cal{P}}^{(1)}_{\lambda\jmath\ell}(\cosh\chi,\alpha)&=&b(\chi)\left[\frac{\alpha(\alpha+i)-\jmath(\jmath+1)}{\alpha(\alpha-i)}\exp(i\frac{\partial}{{\partial}{\alpha}})+\frac{2\alpha}{\alpha-i}+\right.\nonumber\\&&\left.\exp(-i\frac{\partial}{{\partial}{\alpha}})\right]{\cal{P}}^{(0)}_{l}(\cosh\chi,\alpha);
\end{eqnarray}
4) for $\jmath=\ell$ and $\lambda=0$,\quad$\widetilde{\Omega}_{{\lambda\jmath\ell}m}({\bf n})$=0,\quad${\cal{P}}^{(1)}_{\lambda\jmath\ell}(\cosh\chi,\alpha)$=0.
For the functions ${\cal{P}}^{(1)}_{\lambda\jmath\ell}(\cosh\chi,\alpha)$ we have
\begin{equation}
\label{4.7}
H^{(1)}({\alpha},\jmath,\lambda)\,{\cal{P}}^{(1)}_{\lambda\jmath\ell}(\cosh\chi,\alpha)=p_0\,{\cal{P}}^{(1)}_{\lambda\jmath\ell}(\cosh\chi,\alpha),
\end{equation}
where
\begin{eqnarray}
\label{4.8}
H^{(1)}({\alpha},\jmath,\lambda):&=&m\left[\frac{\alpha}{2(\alpha-i)}+\frac{\widetilde\tau}{2(\alpha-i)}+\frac{\jmath(\jmath+1)}{2({\alpha}^{2}+1)}(1+\frac{\widetilde\tau}{\alpha^2})\right]\exp(i\frac{\partial}{{\partial}{\alpha}})\nonumber\\&&+m\left[\frac{\alpha-2i}{2(\alpha-i)}(1+\frac{\widetilde\tau}{\alpha^2})\right]\exp(-i\frac{\partial}{\partial\alpha})+m\left(
\begin{array}{ccc}
0&-\frac{\beta+1}{{\alpha}^{2}+1}&0\\-\frac{\beta+2}{{2\alpha}^{2}}&0&-\frac{\beta+2}{{2\alpha}^{2}}\\0&-\frac{\beta+1}{{\alpha}^{2}+1}&0
\end{array}
\right).
\end{eqnarray}
\section{CONCLUSION}
The four-dimensional generalization of the Heisenberg/Schr\"odinger picture introduces new features into the nature of the description of particle states. In the framework of this approach the plane wave ${\sim}$ const$\cdot\exp[-i(p\cdot{x})]$ in their original sense as the stationary states of a particle cannot appear in the mathematical formalism of the quantum theory. The consistent determination of the wave functions of a particle in the Schr\"odinger picture requires the use of the wave functions in the momentum representation or of the matrix elements of the fundamental series of unitary representations of the Lorentz group.
The found Hamiltonian for spin 1 - particle is a differential-difference operator. The system of eigenfunctions is expressed through the eigenfunction of the particle with spin zero.
We hope that the formalism that has been developed here will be employed for solving problems in relativistic quantum physics.
\subsubsection*{ACKNOWLEDGMENTS}
The author would like to thank Prof. F. W. Hehl for very helpful discussions and comments.
\begin{appendix}
\section*{orthogonality and completeness}
The partial expansion for the function $\xi^{(0)}({\bf p},\alpha,{\bf n})$ has the form ($p_0/m:=\cosh{\chi}$, $|{\bf p}|/m=\sinh{\chi}$, ${\bf n}_{p}:={\bf p}/|{\bf p}|=(\sin{\theta}_{p}\cos{\varphi}_{p},\sin{\theta}_{p}\sin{\varphi}_{p},\cos{\theta}_{p})$),
\begin{equation}
\label{A.1}
\xi^{(0)}({\bf p},\alpha,{\bf n})=\sum_{l=0}^{\infty}(2l+1)i^{l}{\cal{P}}^{(0)}_{l}(\cosh\chi,\alpha)P_{l}(\bf n_{\bf p}\cdot\bf n),
\end{equation}
with the functions
\begin{eqnarray}
\label{A.2}
{\cal{P}}^{(0)}_{l}(\cosh\chi,\alpha)&=&(-i)^{l}\sqrt\frac{\pi}{2\sinh\chi}\,\frac{\Gamma(i\alpha+l+1)}{\Gamma(i\alpha+1)}{\cal{P}}^{-1/2-l}_{-1/2+i\alpha}(\cosh\chi),
\end{eqnarray}
\begin{eqnarray}
\label{A.3}
{\cal{P}}^{(0)}_{l}(\cosh\chi,\alpha)&=&i^{l}\frac{\Gamma(i\alpha+1)}{\Gamma(-i\alpha+l+1)}({\sinh\chi})^l(\frac{d}{d\sinh\chi})^l{\cal{P}}^{(0)}_{(0)}(\cosh\chi,\alpha),
\end{eqnarray}
\begin{eqnarray}
\label{A.4}
{\cal{P}}^{(0)}_{(0)}(\cosh\chi,\alpha)&=&\frac{\sin(\alpha\chi)}{\alpha\sinh\chi}.
\end{eqnarray}
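As a quick numerical cross-check of (\ref{A.1}) and (\ref{A.4}): projecting $\xi^{(0)}({\bf p},\alpha,{\bf n})=(\cosh\chi-t\sinh\chi)^{-1+i\alpha}$ (here $n_{0}=1$ and $t={\bf n}_{p}\cdot{\bf n}$) onto the Legendre polynomial $P_{0}$ must reproduce ${\cal{P}}^{(0)}_{(0)}$. A short Python sketch:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

chi, alpha = 0.9, 1.7   # arbitrary test values

def xi(t):
    """xi^(0) as a function of t = n_p . n, Eq. (2.1) with n_0 = 1."""
    return (np.cosh(chi) - t * np.sinh(chi)) ** (-1 + 1j * alpha)

# Project onto P_0: P^(0)_0 = (1/2) * integral over t in [-1, 1].
re, _ = quad(lambda t: xi(t).real, -1.0, 1.0)
im, _ = quad(lambda t: xi(t).imag, -1.0, 1.0)
P00 = 0.5 * (re + 1j * im)

print(np.isclose(P00, np.sin(alpha * chi) / (alpha * np.sinh(chi))))  # True
\end{verbatim}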
The orthogonality and completeness conditions for the functions $\xi^{(s)}({\bf p},\alpha,{\bf n})$ have the form
\begin{eqnarray}
\label{A.5}
\frac{1}{(2\pi)^{3}}\sum_{\nu=-s}^{s}\int(\nu^2+\alpha^2)d{\alpha}\,d{\omega}_{\bf n}\,\xi^{\dagger(s)}_{\mu\nu}({\bf p},\alpha,{\bf n})\,
\xi^{(s)}_{\nu\sigma}({\bf p_1},\alpha,{\bf n})&=&\nonumber\\{\delta}_{\mu\sigma}\delta^{(3)}({\bf p}-{\bf p_1})\sqrt{1+{\bf p}^{2}/m^2},
\end{eqnarray}
\begin{eqnarray}
\label{A.6}
\frac{1}{(2\pi)^{3}}\sum_{\nu=-s}^{s}\int\frac{d{\bf p}}{p_0}\,\xi^{(s)}_{\mu\nu}({\bf p},\alpha,{\bf n})\,
\xi^{\dagger(s)}_{\nu\sigma}({\bf p_1},\alpha_1,{\bf n_1})&=&\nonumber\\{\delta}_{\mu\sigma}\frac{\alpha^2}{\mu^2+\alpha^2}\,\delta^{(3)}({\bf n}-{\bf n_1})\,\delta(\alpha-\alpha_1).
\end{eqnarray}
\end{appendix}
\section{Introduction}
The Kondo effect, found originally for systems with magnetic impurities in
metals is now present in a variety of nanoscopic systems, including
semiconducting quantum dots \cite{kou}, magnetic adatoms on surfaces \cite{kou,lobos,trimer}
and carbon nanotubes \cite{nyg}. In the latter, in
addition to the spin Kramers degeneracy, there is orbital
degeneracy due to the ``pseudospin'' degree of freedom related with
the particular band structure of graphene. This leads to an SU(4) Kondo
effect which has been observed experimentally \cite{jari,maka} and also
discussed theoretically \cite{lim,buss,and}. In particular, Lim \textit{et al.}
have studied the spectral density when the SU(4) symmetry is reduced to
SU(2), mainly by a change in the tunneling matrix elements \cite{lim,and}.
Our main motivation in the problem arises from interference phenomena in
systems of quantum dots \cite{scs,fea,soc} or molecules \cite{fea,bege,mole}. For
example depressions in the integrated conductance through a ring
described by the Hubbard or $t-J$ model,
pierced by an Aharonov-Bohm magnetic flux, related with spin-charge separation
\cite{scs,fea}, are due to a partial destructive interference when the energies of
two doublets cross \cite{fea}.
molecular transistors \cite{bege,mole}.
The effective Hamiltonian near the crossing
is discussed in the next section.
To our knowledge, the conductance in these systems has so far
been calculated using approximate expressions or a slave-boson formalism \cite{soc},
which is valid only for very low temperatures and applied bias
voltages. This work is a step towards a more quantitative theory to describe
the transport through similar systems, treating the effective Hamiltonian
within the non-crossing approximation (NCA) \cite{nca,nca2}. Work is in
progress to deal with the non-equilibrium situation, which is necessary
within our formalism to calculate the current.
In this paper, we report on our study of the spectral density of the model as a function of the
splitting $\delta $ between both doublets in the Kondo regime. We also calculate the
dependence of the Kondo temperature $T_{K}$ with $\delta $ and analyze the
validity of the Friedel sum rule \cite{fri}.
\section{Model}
We start from a model in which two doublets of an interacting system are
hybridized with a singlet by promotion of an electron to two conducting
leads. This is the low-energy effective Hamiltonian for several systems with
partial destructive interference, such as Aharonov-Bohm rings \cite{fea}
or aromatic molecules \cite{bege,mole} connected to
conducting leads. The Hamiltonian can be written as \cite{fea}
\begin{eqnarray}
H &=&E_{s}|0\rangle \langle 0|+\sum_{i\sigma }E_{i}|i\sigma \rangle \langle
i\sigma |+\sum_{\nu k\sigma }\epsilon _{\nu k}c_{\nu k\sigma }^{\dagger
}c_{\nu k\sigma } \nonumber \\
&&+\sum_{i\nu k\sigma }(V_{i\nu }|i\sigma \rangle \langle 0|c_{\nu k\sigma }+{\rm H.c}.),
\label{ham}
\end{eqnarray}
where the singlet $|0\rangle $ and the two doublets $|i\sigma \rangle $ ($i=1,2$; $\sigma =\uparrow $ or $\downarrow $) denote the localized states
(representing for example the low-energy states of an isolated molecule),
$c_{\nu k\sigma }^{\dagger }$ create conduction states in the left
($\nu =L$) or right ($\nu =R$) lead, and $V_{i\nu }$ describe the four hopping
elements between the two leads and both doublets, assumed independent of $k$.
Changing the phase of the conduction states and the relative phase between
both doublets, three among the four $V_{i\nu }$ can be made real and
positive. The phase $\phi $ of the remaining hopping $|V|e^{i\phi }$,
depends on the particular system and its symmetry. For example in molecules
with rotational symmetry $\phi =(K_{1}-K_{2})l$, where $l$ is the distance between the
sites connected to the left and right leads, and $K_{i}$ is the wave
vector of the state $|i\sigma \rangle$, which can be modified with an applied
magnetic flux \cite{fea}. In absence of magnetic flux and when the relevant
states have wave vector $K_{i}=\pm \pi /2$, as in rings with a number of
atoms multiple of four, $\phi =\pi $ and there is complete destructive
interference in transport \cite{fea,mole}.
In the following we will assume
this case, with states 1 and 2 related by symmetry implying $|V_{1\nu
}|=|V_{2\nu }|$. We further assume symmetric leads, $\epsilon
_{Lk}=\epsilon _{Rk}=\epsilon _{k}$, $|V_{iL}|=|V_{iR}|$. Then, without loss
of generality we can take $V_{1L}=V_{1R}=V_{2L}=V>0$, $V_{2R}=-V$, and $E_2 \geq E_1$.
For these parameters,
changing basis $c_{1k\sigma }^{\dagger }=(c_{Lk\sigma }^{\dagger
}+c_{Rk\sigma }^{\dagger })/\sqrt{2}$, $c_{2k\sigma }^{\dagger
}=(c_{Lk\sigma }^{\dagger }-c_{Rk\sigma }^{\dagger })/\sqrt{2}$, the
Hamiltonian takes the form of an SU(4) Anderson model with a ``field'' $\delta
=E_{2}-E_{1}$ and on-site hybridization $V^{\prime }=\sqrt{2}V$
\begin{eqnarray}
H &=&E_{s}|0\rangle \langle 0|+\sum_{i\sigma }E_{i}|i\sigma \rangle \langle
i\sigma |+\sum_{ik\sigma }\epsilon _{k}c_{ik\sigma }^{\dagger }c_{ik\sigma }
\nonumber \\
&&+V^{\prime }\sum_{ik\sigma }(|i\sigma \rangle \langle 0|c_{ik\sigma }+{\rm H.c}.).
\label{ham2}
\end{eqnarray}
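The decoupling produced by this change of basis can be displayed in a two-line numerical check (Python); the matrices below simply encode the hybridization pattern stated above.
\begin{verbatim}
import numpy as np

V = 1.0
Viv = np.array([[V, V],                # rows: doublets 1, 2
                [V, -V]])              # columns: leads L, R
U = np.array([[1, 1],                  # c_1 = (c_L + c_R)/sqrt(2)
              [1, -1]]) / np.sqrt(2)   # c_2 = (c_L - c_R)/sqrt(2)

# In the rotated lead basis each doublet couples to a single channel
# with strength sqrt(2)*V:
print(Viv @ U.T)   # -> [[sqrt(2)*V, 0], [0, sqrt(2)*V]]
\end{verbatim}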
Interchanging the doublet index (1 or 2) with the spin index $\sigma$, one realizes
that this model also
describes transport through a carbon nanotube with the electron density depleted
at two points (so as to create an SU(4) quantum dot in the middle) under a real applied
magnetic field.
\section{Spectral density}
\begin{figure}
\includegraphics[width=7.5cm]{dens1.eps}
\caption{Spectral density as a function of energy for different temperatures. The inset shows a detail
near the Fermi energy. Parameters are $\Delta =0.5$, $D=10$, $E_1=E_2=-4$. The lowest temperature is
$T= 10^{-3}=0.076T_K$. }
\label{dens1}
\end{figure}
In Fig. \ref{dens1} we present numerical results for the spectral density $\rho _{i\sigma }$ of
the SU(4) Anderson model ($E_1=E_2$) in the Kondo regime
$\epsilon_{F}-E_{i}\gg \Delta$, where the hybridization function
$\Delta=\pi \sum_{k}(V^{\prime })^{2}\delta (\omega -\epsilon _{k})$, assumed
independent of energy. We set $\Gamma=2\Delta =1$ as the unit of energy. We also assume a
conduction band symmetric around the Fermi level $\epsilon _{F}=0,$ of half
width $D=10$.
The spectral density shows a broad charge transfer peak near $E_1$.
For temperatures below a characteristic energy scale $T_K$ (defined below),
$\rho _{i\sigma }$ develops
a narrow peak around the Fermi level. In contrast to the better known one-level SU(2)
case, this peak is displaced towards positive energies and is much broader,
as discussed in Section 5.
\begin{figure}
\includegraphics[width=7.5cm]{dens.eps}
\caption{Spectral densities for levels 1 (full line) and 2 (dashed line)
as a function of energy for
different values of $E_2=E_1+\delta$ and $T= 10^{-3}$. Other parameters as in Fig. 1.
Dot-dashed line corresponds to $\rho _{2 \sigma }$ for $\delta=0.015$, 0.3 and 0.6.}
\label{dens}
\end{figure}
The evolution of the spectral densities as $E_2$ is displaced to larger energies,
breaking the SU(4) symmetry is shown in Fig. \ref{dens}.
The peak near the Fermi energy of $\rho _{2 \sigma} (\omega)$ is displaced towards positive energies
(near $\delta=E_2-E_1$). In contrast, the corresponding peak in $\rho _{1 \sigma } (\omega)$ narrows
significantly and displaces towards the Fermi energy. This implies that the Kondo temperature $T_K$,
defined as the half width at half maximum of this peak, also decreases strongly.
The evolution of $T_K$ with $\delta$ is discussed in the Section 5.
In addition $\rho _{1 \sigma }$ develops a broad peak near energy $-\delta$ which becomes visible
when $\delta$ becomes greater than $T_K$.
\section{Friedel sum rule}
The Anderson model studied has a Fermi liquid ground state which satisfies well known relationships at
zero temperature.
One of them is the Friedel sum rule which relates the spectral density at the Fermi level for each
``pseudospin'' channel with the occupation of that channel \cite{fri}. For the simplest case of a constant density
of conduction states, this rule reads
\begin{equation}
\rho _{i\sigma }(\epsilon_{F})=\frac{1}{\pi \Delta }\sin ^{2}(\pi n_{i\sigma }),
\label{fsr}
\end{equation}
where $n_{i\sigma }=\langle |i\sigma \rangle \langle i\sigma |\rangle $.
This is an exact relationship for a Fermi liquid, which is not necessarily satisfied by approximations.
In particular, it is known that at very low temperatures, the NCA has a tendency to develop
spurious spikes in $\rho _{i\sigma }(\omega)$ at the Fermi energy, while thermodynamic properties,
such as expectation values are accurately reproduced \cite{nca,nca2}.
In Fig. \ref{friedel}, we compare both members of Eq. (\ref{fsr}) for the lowest lying doublet,
at a temperature $T=0.1 T_K$ \cite{note} low enough so that no further increase in $\rho _{i\sigma }(\epsilon_{F})$
takes place as the temperature is lowered (according to physical expectations and Eq. (\ref{fsr})),
but high enough to prevent the presence of spurious spikes.
The disagreement lies below 20 \%.
The agreement improves as the parameters are moved deeper in the Kondo regime $\epsilon_{F}-E_{1}\gg \Delta$.
Thus, while the spectral density at zero temperature is not well represented by the NCA results at $T=0$,
one can take the values at $T=0.1 T_K$ as a reasonable description of the correct $T=0$ ones.
This statement is supported by the comparison of results obtained by NCA and numerical renormalization group
for the one-level SU(2) case \cite{compa}.
\begin{figure}
\includegraphics[width=7.5cm]{friedel.eps}
\caption{Squares: rescaled spectral density of the lowest lying level at the Fermi energy as a
function of $\delta$. Circles: corresponding (more accurate) result given by Eq. (\ref{fsr}).
Lines are guides to the eye.
Parameters are $T=0.1 T_K$ and the rest as in Fig. 1.}
\label{friedel}
\end{figure}
\section{Kondo temperature}
From the half width at half maximum of the peak nearest to the Fermi energy
of the spectral density of the lowest level ($\rho _{1 \sigma } (\omega)$),
we have calculated the Kondo temperature of the system $T_{K}$ for several
values of $\delta$. This requires solving the self-consistent NCA
equations up to low enough temperatures (about 0.1 $T_K$ as discussed above)
so that the height of the peak does not
increase significantly with further lowering of the temperature \cite{note}.
Fortunately, the result is not very sensitive to the ratio $T/T_K$.
The
results are shown in Fig. \ref{kondo} and compared with Eq. (\ref{tk}) obtained from
a variational calculation as explained below. We see that except for an
overall multiplicative factor, the agreement is quite good, in spite of the fact
that $T_{K}$ changes by nearly two orders of magnitude.
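The half-width extraction itself is elementary; a generic Python sketch of such a procedure (assuming the NCA output $\rho(\omega)$ has been sampled on a frequency grid restricted to a window around the Fermi level, so that the maximum picks out the Kondo peak rather than the charge-transfer peak) reads:
\begin{verbatim}
import numpy as np

def kondo_scale(omega, rho):
    """Half width at half maximum of the peak nearest the Fermi level.

    omega, rho: arrays sampling the spectral density on a grid that is
    restricted to the vicinity of omega = 0.
    """
    i0 = int(np.argmax(rho))                    # position of the Kondo peak
    half = 0.5 * rho[i0]
    left = omega[:i0][rho[:i0] <= half]         # last crossing below the peak
    right = omega[i0:][rho[i0:] <= half]        # first crossing above the peak
    lo = left[-1] if left.size else omega[0]
    hi = right[0] if right.size else omega[-1]
    return 0.5 * (hi - lo)
\end{verbatim}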
\begin{figure}
\includegraphics[width=7.5cm]{kondo.eps}
\caption{Squares: Kondo energy scale determined by the width of the peak
in the spectral density near the Fermi energy as a function of the splitting $\delta$.
The temperatures used were $T=10^{-3}$, $T=10^{-4}$, and $T=10^{-5}$, depending on $\delta$.
Dashed line: corresponding variational result Eq. (\ref{tk}) multiplied by a factor 0.606.}
\label{kondo}
\end{figure}
To provide an independent estimate of $T_{K}$, we have calculated the
stabilization energy of the following variational wave function
\begin{equation}
|\psi \rangle =\alpha |s\rangle +\sum_{ik\sigma }\beta _{ik}(|i\sigma
\rangle \langle 0|c_{ik\sigma })|s\rangle , \label{var}
\end{equation}
where $|s\rangle $ is the many-body singlet state with the filled Fermi sea
of conduction electrons and the state $|0\rangle $ at the localized site, while
$\alpha$ and $\beta _{ik}$ are variational parameters.
From the resulting optimized energy $E$, we can define
the stabilization energy as $T^*_{K}=E_1-E$.
The Kondo energy scale defined in this way becomes
\begin{equation}
T^*_{K}=\left\{ (D+\delta )D\exp \left[ \pi E_{1}/(2\Delta )\right] +\delta
^{2}/4\right\} ^{1/2}-\delta /2. \label{tk}
\end{equation}
This expression interpolates between the SU(4) result ($T^*_{K}=D\exp \left[
\pi E_{1}/(4\Delta )\right] $ for $\delta=0$) and the SU(2) one for one doublet only
($T^*_{K}=D\exp \left[ \pi E_{1}/(2\Delta )\right] $ for $\delta \rightarrow + \infty $).
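For the parameters of the figures ($\Delta=0.5$, $D=10$, $E_{1}=-4$), the scale (\ref{tk}) is easily evaluated; the short Python sketch below exhibits both limits.
\begin{verbatim}
import numpy as np

Delta, D, E1 = 0.5, 10.0, -4.0
g = np.exp(np.pi * E1 / (2 * Delta))   # SU(2) exponential factor

def TK(delta):
    """Variational Kondo scale T*_K(delta) from the expression above."""
    return np.sqrt((D + delta) * D * g + delta**2 / 4) - delta / 2

print(TK(0.0), D * np.exp(np.pi * E1 / (4 * Delta)))  # SU(4) limit: equal
print(TK(1e6), D * g)                                 # SU(2) limit: approx D*g
\end{verbatim}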
\bigskip
\section{Summary and discussion}
We have studied an impurity Anderson model containing two doublets, which
interpolates between the cases for one level with SU(4) and SU(2) symmetry,
and is of interest for several
nanoscopic systems, using the non-crossing approximation (NCA).
We have
shown that the NCA provides reasonable results for the equilibrium spectral
density. The values of the spectral density for both doublets agree within
20\% with the predictions of the Friedel sum rule, in spite of the fact that
it is not expected to satisfy Fermi liquid relationships at zero
temperature.
In addition, the Kondo temperature scale $T_K$ obtained from the width of
the peak in spectral density near the Fermi energy agrees very well with the
stabilization energy of a variational calculation, in spite of the change in
several orders of magnitude of $T_K$ when the splitting between
both doublets is changed.
The approach seems promising for studying transport properties within a
non-equilibrium formalism. Work in this direction is in progress.
\section{Acknowledgments}
One of us (A. A. A.) is partially supported by CONICET, Argentina. This work was partially
supported by PIP No 11220080101821 of CONICET, and PICT Nos 2006/483 and
R1776 of the ANPCyT.
\section{Introduction and Preliminaries}
Over a period of several hundred years theoretical physics has been developed on the basis of real and complex analysis.
However, for the past half a century the field of p-adic numbers $\Qp$ (as well as its algebraic extensions) has been intensively used in theoretical and mathematical physics (see \cite{VVZ,Khrennikov:1994,Khrennikov:1997,Ko:2001,Koz:2008} and
the references therein).
Since in $p$-adic analysis associated with mappings from $\Qp$ to $\CC$ the operation of differentiation is not defined, many $p$-adic models use, instead of differential equations, pseudo-differential equations generated by the so-called Vladimirov operator $D^\al$.
A wide class of $p$-adic pseudo-differential equations was intensively used in applications, for example to model basin-to-basin kinetics \cite{Avetisov:1999,Avetisov:2002,Koz:2004,Koz:2008} and to construct the simplest $p$-adic pseudo-differential heat-type equation \cite{Avetisov:2002}. In \cite{K:2008A} and \cite{FZ:2006} ultrametric and $p$-adic nonlinear equations were used to model turbulence.
Some types of p-adic pseudo-differential equations were stu\-di\-ed in detail in the books \cite{Ko:2001,AKK:book} and \cite{ZG:2016}; see also the recent survey \cite{BGPW}.
At the same time very little is known about nonlinear p-adic equations. We can mention only some semilinear evolution equations solved using p-adic wavelets \cite{AKK:book} and a kind of equations of reaction-diffusion type studied in \cite{ZG:2018}. A nonlinear evolution equation for complex-valued functions
of a real positive time variable and a $p$-adic spatial variable, which is a non-Archimedean counterpart of the fractional porous medium equation, that is the equation:
\begin{equation}\Label{1-1}\
\dfrac{\dd u}{\dd t}+D^\al \big(\vph(u)\big)=0,\ \ u=u(t,x),\ \ t>0,\ x\in\Qp
\end{equation}
was studied in the recent paper \cite{Ko:2018}. Here $\Qp$ is the field of $p$-adic numbers, $D^\al$, $\al >0$ is Vladimirov's fractional differentiation operator, $\vph$ is a strictly monotone increasing continuous function.
Developing an $L^1$-theory of Vladimirov's $p$-adic fractional differentiation operator, the authors of \cite{Ko:2018} proved the
$m$-accretivity of the corresponding nonlinear operator and obtained the existence and uniqueness of a mild solution.
In this paper we prove a similar result for a more complicated multi-dimensional nonlinear equation resembling \eqref{1-1} but with a more general nonlocal operator $W$ \eqref{W1} instead of the Vladimirov operator $D^\al$. We follow essentially the strategy of the paper \cite{Ko:2018} and use the abstract theory of nonlinear $m$-accretive operators in the Banach space $L^1$ developed in \cite{BrSt:1973} and further in \cite{CrP}.
In order to use this method we need to build the $L^1$-theory for the weighted Vladimirov operator $W$ and prove some special properties of its restriction onto the $p$-adic ball. This meets several difficulties, which we overcome by reconstructing the associated Markov process in the $p$-adic ball and by recovering the associated L\'evy measure. We prove several additional properties of $W$, which are necessary for our tasks and may be of independent interest.
Let us formulate the main definitions and auxiliary statements which will be used further.
\medskip
{\it 2.1. $p$-Adic numbers.} Let $p$ be a prime
number. The field of $p$-adic numbers is the completion $\mathbb Q_p$ of the field $\mathbb Q$
of rational numbers, with respect to the absolute value $|x|_p$
defined by setting $|0|_p=0$,
$$
|x|_p=p^{-\nu }\ \mbox{if }x=p^\nu \frac{m}n,
$$
where $\nu ,m,n\in \mathbb Z$, and $m,n$ are prime to $p$. $\Qp$ is a locally compact topological field.
Note that by Ostrowski's theorem every absolute value on $\mathbb Q$ is equivalent either to the ``Euclidean'' one or to one of the $|\cdot |_p$.
The absolute value $|x|_p$, $x\in \mathbb Q_p$, has the following properties:
\begin{gather*}
|x|_p=0\ \mbox{if and only if }x=0;\\
|xy|_p=|x|_p\cdot |y|_p;\\
|x+y|_p\le \max (|x|_p,|y|_p).
\end{gather*}
The latter property called the ultra-metric inequality (or the non-Archi\-me\-dean property) implies the total disconnectedness of $\Qp$ in the topology
determined by the metric $|x-y|_p$, as well as many unusual geometric properties. Note also the following consequence of the ultra-metric inequality:
\begin{equation*}
|x+y|_p=\max (|x|_p,|y|_p)\quad \mbox{if }|x|_p\ne |y|_p.
\end{equation*}
The absolute value $|x|_p$ takes the discrete set of non-zero
values $p^N$, $N\in \mathbb Z$.
If $|x|_p=p^N$, then $x$ admits a
(unique) canonical representation
\begin{equation}
\Label{2.1}\
x=p^{-N}\left( x_0+x_1p+x_2p^2+\cdots \right) ,
\end{equation}
where $x_0,x_1,x_2,\ldots \in \{ 0,1,\ldots ,p-1\}$, $x_0\ne 0$.
The series converges in the topology of $\mathbb Q_p$. For
example,
$$
-1=(p-1)+(p-1)p+(p-1)p^2+\cdots ,\quad |-1|_p=1.
$$
The {\it fractional part} of an element $x\in\Qp$ in the canonical representation \eqref{2.1} is given by:
\[
\{x\}_p=
\left\{\begin{array}{ll} 0,&\text{if}\quad N\leq 0 \ \ \ \text{or}\ \ \ x=0;\\
p^{-N}\big(x_0+x_1p+\ldots+x_{N-1}p^{N-1}\big),&\text{if} \quad N>0.
\end{array}
\right.
\]
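For the reader's convenience we illustrate these definitions by a small computational sketch (in Python; the function names are our own illustrative choices and are not used elsewhere in the text), which computes the $p$-adic valuation, the absolute value $|x|_p$ and the fractional part $\{x\}_p$ of a rational number:
\begin{verbatim}
from fractions import Fraction

def nu_p(x: Fraction, p: int) -> int:
    """p-adic valuation: x = p^nu * m/n with m, n prime to p."""
    if x == 0:
        raise ValueError("the valuation of 0 is +infinity")
    num, den, nu = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p; nu += 1
    while den % p == 0:
        den //= p; nu -= 1
    return nu

def abs_p(x: Fraction, p: int) -> Fraction:
    """p-adic absolute value |x|_p = p^(-nu), with |0|_p = 0."""
    return Fraction(0) if x == 0 else Fraction(p) ** (-nu_p(x, p))

def frac_p(x: Fraction, p: int) -> Fraction:
    """Fractional part {x}_p = p^(-N)(x_0 + x_1 p + ... + x_{N-1} p^{N-1})."""
    if x == 0 or nu_p(x, p) >= 0:
        return Fraction(0)
    N = -nu_p(x, p)                     # here |x|_p = p^N with N > 0
    y = x * Fraction(p) ** N            # y = m/n is a p-adic integer
    c = (y.numerator * pow(y.denominator, -1, p ** N)) % p ** N
    return Fraction(c, p ** N)

assert abs_p(Fraction(-1), 3) == 1      # |-1|_p = 1, as in the example above
assert frac_p(Fraction(7, 9), 3) == Fraction(7, 9)
\end{verbatim}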
\medskip
{\it 2.2. Integration in the space $\Qp^n$}. The space $\Qpn=\Qp\times\cdots\times\Qp$ consists of points $x=(x_1,\ldots, x_n)$, where $x_j\in\Qp$, $j=1,\ldots,n$, $n\geq 2$. The $p$-adic norm on $\Qpn$ is
$$
\Vert x\Vert_p=\max\limits_{j=1,\ldots,n}|x_j|_p,\ \ \ \ x\in\Qpn.
$$
This norm is also non-Archimedean, since for any $x, y\in \Qpn$:
\begin{align*}
\Vert x+y\Vert_p=&\max\limits_{j=1,\ldots,n}|x_j+y_j|_p\leq \max\limits_{j=1,\ldots,n}\max\big(
|x_j|_p,|y_j|_p\big)=\\
&=\max\big(\max\limits_{j=1,\ldots,n}|x_j|_p,\max\limits_{j=1,\ldots,n}|y_j|_p\big)=\max\big(\Vert x\Vert_p,\Vert y\Vert_p\big).
\end{align*}
The space $\Qpn$ is a complete metric locally compact and totally disconnected space. The scalar product of vectors $x, y\in\Qpn$ is defined by $
x\cdot y=\sum\limits_{j=1}^n x_jy_j.
$
In the space $\Qpn$ the following change of variables formula is valid (see \cite[Ch.~I, \S 4, Sect.~4, p.~68]{VVZ}). If $F: x=x(y)$ (i.e. $x_i =x_i(y_1,\ldots,y_n)$, $i=1,\ldots,n$) is a homeomorphic map of an open compact set $K$ onto the (open) compact set $F(K)$, the functions $x_i(y)$, $i=1,\ldots,n$, are analytic in $K$, and
$
\det F^\prime (y)=\det \Big[{\dd x_i}/{\dd y_j}\Big] (y)\neq 0,$ $y\in K$,
then for any $f\in L^1(F(K))$:
\begin{equation}\Label{change}\
\int_{F(K)} f(x)\, d^nx =\int_K f(F(y))\,\vert\det F^\prime (y)\vert_p\, d^ny.
\end{equation}
{\it 2.3. Fourier transformation and generalized functions on $\Qp^n$}. The function $\chi(x)=\exp\big(2\pi i\{x\}_p\big)$ is an additive character of the field $\Qp$, i.e. a character of its additive group: a continuous complex-valued function on $\Qp$ satisfying the conditions
\begin{equation}\Label{char}\
\vert \chi (x)\vert = 1, \quad\quad \chi(x+y)=\chi(x)\,\chi(y).
\end{equation}
Moreover $\chi(x)=1$ if and only if $| x|_p\leq 1.$ Denote by $dx$ the Haar measure on the additive group $\Qp$, normalized so that the measure of the unit ball equals $1$, and by $d^nx$ the corresponding product measure on $\Qpn$. Then the integral over the $p$-adic ball
$B_N=\big\{x\in\Qpn: \Vert x\Vert_p\leq p^N\big\},\quad N\in\ZZ,$
equals (\cite[(7.14), p.~25]{VTab}):
\begin{equation}\Label{formula11}\
\int_{B_N}\chi(\xi\cdot x)\,d^n x=\left\{
\begin{array}{lcl}
p^{nN},&\text{if} &\Vert \xi\Vert_p\leq p^{-N};\\
0,&\text{if} &\Vert \xi\Vert_p> p^{-N},
\end{array}
\right.\end{equation}
Lemma 4.1 in \cite[Ch.III, \S 4, p. 137]{Taib} implies that for $z_0$ such that $\Vert z_0\Vert_p=1$:
\begin{equation}\Label{chi-1}\
\int_{S_j}\chi(z_0\cdot x)\,d^nx=\left\{
\begin{array}{rcl}
p^{jn}(1-p^{-n}),&\ &{j\leq}\ 0;\\
-p^{\,{n}}p^{-n},&\ &{j=1};\\
0,&\ &{j>1},
\end{array}
\right.
\end{equation}
where $S_j=\big\{x\in\Qpn: \Vert x\Vert_p= p^{\,j}\big\},\quad j\in\ZZ$.
The {\it Fourier transform} of a test function $\vph\in\cD (\Qp^n)$ is defined by the formula
\[
\widehat{\vph}(\xi)\equiv\big(\Fx\,\vph\big)(\xi)=\int_{\Qp^n}\chi(\xi\cdot x)\vph(x)d^nx,\quad \xi \in \Qp^n,
\]
where $\chi (\xi\cdot x)= \chi(\xi_1 x_1)\cdots\chi(\xi_n x_n)=\exp(2\pi i\sum_{j=1}^n\{\xi_j x_j\}_p)$.
Remark that the additive group of $\Qp$ is self-dual, so that the Fourier transform of a complex-valued function $\vph$ on $\Qpn$ is again a function on $\Qpn$, and if $\Fx\vph\in L^1(\Qpn)$ then the inversion formula $\vph(x)=\int_{\Qpn}\chi(-x\cdot \xi)\widehat{\vph}(\xi)\,d^n\xi$ holds.
Let us also remark that, in contrast to the Archimedean situation, the Fourier transform
$\vph \to \Fx \vph$
is a linear and continuous automorphism of the space $\cD (\Qpn)$ (cf. \cite[Lemma 4.8.2]{AKK:book}; see also \cite[Ch. II, \S 2.4]{Gel}, \cite[III, (3.2)]{Taib}, \cite[VII.2]{VVZ}), i.e.
$
\vph(x)=\cF^{\,-1}_{\xi\to x}\Big(\cF_{x\to\xi} \vph \Big).
$
Here $\cD (\Qpn)$
denotes the vector space of {\it test functions}, i.e. of all locally constant functions with compact support.
Recall that a function $\vph: \Qpn\to \CC$ is {\it locally constant} if there exists an integer $\ell \geq 0$ such that for any $x\in\Qpn$
\[\vph (x+y)=\vph (x),\quad \text{if}\quad \Vert y\Vert_p\leq p^{-\ell}\quad \text{($\ell$ is independent of $x$).}\]
The smallest number $\ell$ with this property is called {\it the exponent of local constancy of the function $\vph$.} Note that $\cD(\Qpn)$ is dense in $L^q(\Qpn)$ for each $q\in [1,\infty)$.
Let us also introduce the subspace $D_N^\ell\subset \cD(\Qpn)$ consisting of functions with supports in a ball $B_N$ and with the exponents of local constancy less than $\ell$. Then the topology in $\cD(\Qpn)$ is defined as the double inductive limit topology, so that
\[\cD(\Qpn)=\lim\limits_{\longrightarrow\atop{N\to\infty}}
\lim\limits_{\longrightarrow\atop{\ell\to\infty}}D_N^\ell.\]
If $V\subset \Qpn$ is an open set, the space $\cD(V)$ of test functions on $V$ is defined as a subspace of $\cD(\Qpn)$ consisting of functions with supports in $V$. For a ball $V=B_N$, we can identify $\cD (B_N)$ with the set of all locally constant functions on $B_N$.
The space $\cD^\pr(\Qpn)$ of Bruhat-Schwartz distributions on $\Qpn$ is defined as the strong dual space of $\cD(\Qpn)$. By duality, the Fourier transform extends to a linear (and therefore continuous) automorphism of $\cD^\pr(\Qpn)$. For a detailed theory of convolutions and direct products of distributions on $\Qpn$, closely connected with the theory of their Fourier transforms, see \cite{AKK:book,Ko:2001,VVZ}.
\medskip
{\it 2.4. Radial operator on $\Qp^n$}. In this paper we consider the class of non-local operators introduced in \cite{ZG:2016}.
\begin{definition}\rm
Let us fix a function $w:\Qpn\to \RR_+,$
which satisfies the following properties:
\begin{itemize}
\item[(i)] $w$ is radial, i.e. it depends only on $\Vert y\Vert_p$, $w=w\big(\Vert y\Vert_p\big)$, and is a continuous increasing function of $\Vert y\Vert_p$;
\item[(ii)] $w(0)=0$;
\item[(iii)] there exist constants $C_1, C_2>0$ and $\al > n$ such that
\begin{equation}\Label{w2}\
C_1\Vert \xi\Vert_p^\al\leq w(\Vert \xi\Vert_p)\leq C_2\Vert \xi\Vert_p^\al, \ \text{for any}\ \ \xi\in \Qpn.
\end{equation}
\end{itemize}
\end{definition}
Remark that condition (iii) implies that for some $M\in \ZZ$
\[\int\limits_{\Vert y\Vert_p\geq p^M}\dfrac{d^ny}{w\big(\Vert y\Vert_p\big)}<\infty.\]
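For instance, for the model weight $w\big(\Vert y\Vert_p\big)=\Vert y\Vert_p^{\,\al}$ with $\al>n$, which clearly satisfies (i)--(iii), this integral can be computed explicitly, using the fact that the sphere $S_j=\{y\in\Qpn: \Vert y\Vert_p=p^{\,j}\}$ has measure $(1-p^{-n})p^{\,nj}$ (cf. \eqref{Sell} below):
\[
\int\limits_{\Vert y\Vert_p\geq p^M}\dfrac{d^ny}{w\big(\Vert y\Vert_p\big)}
=\sum\limits_{j=M}^{\infty}\dfrac{(1-p^{-n})\,p^{\,nj}}{p^{\,\al j}}
=\dfrac{(1-p^{-n})\,p^{(n-\al)M}}{1-p^{\,n-\al}}<\infty .
\]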
The nonlocal operator $W$ is defined by
\begin{equation}\Label{W1}\
(W\vph)(x)=\varkappa\int_{\Qpn}\dfrac{\vph(x-y)-\vph(x)}{w(\Vert y\Vert_p)}\,d^ny,\ \ \text{for}\ \ \vph\in\cD(\Qpn),
\end{equation}
where $\are$ is a positive constant.
From \cite[(2.5), p.~15]{ZG:2016} it follows that
for $\vph\in\cD(\Qpn)$ and some constant $M=M(\vph)$ the operator $W$ has the following representation:
\begin{equation}\Label{W2}\
(W\vph)(x)=\varkappa\,\Big(\dfrac{1_{\Qpn\backslash B_M}}{w(\Vert x\Vert_p)}\ast \vph(x)\, -\, \vph(x) \int\limits_{\Vert y\Vert_p\, >p^M}\dfrac{d^n y}{w(\Vert y\Vert_p)}\Big).
\end{equation}
Moreover, from Lemma 4 and Proposition 7, Ch.~2 in \cite{ZG:2016} it follows that this operator, acting from $\cD(\Qpn)$ to $L^q(\Qpn)$, is a bounded linear operator for each $q\in [1,\infty)$ and has the representation:
\begin{equation}\Label{FW}\
(W\vph)(x)=-\are \Fi\,\big(\Aw\,\Fx\,\vph\big),\ \text{for}\ \vph\in\cD(\Qpn),
\end{equation}
where
\begin{equation}\Label{Aw}\
\Aw:= \int_{\Qpn}\dfrac{1-\chi(y\cdot\xi)}{w(\Vert y\Vert_p)}\, d^ny.
\end{equation}
From \cite[(2.9), p.~16]{ZG:2016} it follows that for any $z\in\Qpn$ with $\Vert z\Vert_p=p^{-\ga}$
\begin{align}\Label{Awfin}\
A_w(z)=&(1-p^{-n})\sum\limits_{j=2}^{\infty}\dfrac{p^{\ga n+jn}}{w(p^{\ga+j})}\,
+ \dfrac{p^{\ga n+n}}{w(p^{\ga+1})}=\\
\Label{eq:Awrep-1}\
&=(1-p^{-n})\sum\limits_{j=\ga+2}^{\infty}\dfrac{p^{nj}}{w(p^{\,j})}\,
+ \dfrac{p^{n(\ga +1)}}{w(p^{\ga +1})}.
\end{align}
The condition \eqref{w2}
on the function $w$ implies that there exist positive constants $C_3$ and $C_4$ such that
\begin{equation}\Label{Aw1}\
C_3\Vert\xi\Vert_p^{\al-n}\leq \Aw \leq C_4\Vert \xi\Vert_p^{\al-n},
\end{equation}
(see Lemma 8, Ch.2 in \cite{ZG:2016}).
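For the same model weight $w\big(\Vert y\Vert_p\big)=\Vert y\Vert_p^{\,\al}$ the series \eqref{eq:Awrep-1} is geometric, and for $\Vert z\Vert_p=p^{-\ga}$ it can be summed in closed form:
\[
A_w(z)=(1-p^{-n})\sum\limits_{j=\ga+2}^{\infty}p^{(n-\al)j}+p^{(n-\al)(\ga+1)}
=\Big[\dfrac{(1-p^{-n})\,p^{\,2(n-\al)}}{1-p^{\,n-\al}}+p^{\,n-\al}\Big]\Vert z\Vert_p^{\,\al-n},
\]
so that in this case \eqref{Aw1} holds with $C_3=C_4$ equal to the constant in the square brackets.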
Remark also that $W\vph\in C(\Qpn)\cap L^q(\Qpn)$ for each $q\in [1,\infty)$ and $\vph\in \cD(\Qpn)$, and that the operator $W$ may be extended to a densely defined operator in $L^2(\Qpn)$ with the domain
\[Dom (W)=\big\{\vph\in L^2(\Qpn): \ \Aw\Fx\,\vph\in L^2(\Qpn)\big\}.\]
The operator $\big(-W, Dom (W)\big)$ is essentially self-adjoint and positive. In particular, it generates a $C_0$-semigroup of contractions $T(t)$ in the space $L^2(\Qpn)$ (Proposition 20 in \cite{ZG:2016}):
\begin{equation}\Label{Tt}\
T(t)u=Z_t\ast u=\int_{\Qpn}Z(t,x-y)u(y)\,d^ny,\ \ t >0; \ T(0)u=u.
\end{equation}
Here $Z_t(x)=Z(t,x)$, where
\begin{equation}\Label{Zt}\
Z(t,x)=\int_{\Qpn}e^{-\are t\Aw}\chi(-x\cdot \xi)\,d^n\xi,\ \ \text{for}\ \ t>0,
\end{equation}
is the heat kernel, or the fundamental solution of the corresponding Cauchy problem.
Later we will need the following properties of the fundamental solution $Z(t,x)$ (Lemmas 10 and 11 and Theorem 13, Ch.~2 in \cite{ZG:2016}):
\begin{align}\Label{Z0}\
1) &Z(t,x)\geq 0; \quad Z_t(x)\in L^1(\Qpn),\ \text{for}\ t>0\\
\Label{Zt5}\
2) &\int_{\Qpn}Z(t,x)\,d^nx =1;\\
\Label{Zt4}\
3) & Z(t+s,x)=\int_{\Qpn} Z(t,x-y)\,Z(s,y)\, d^ny,\ \ t, s >0, \ x\in\Qpn;\\
\Label{Zt1}\
4)&
Z(t,x)=\Fi\,\big[e^{-\are t\Aw}\big]\in C(\Qpn;\RR)\cap L^1(\Qpn)\cap L^2(\Qpn);\\
\Label{Zt3}\
5) & Z(t,x) \leq \max\{2^\al C_1, 2^\al C_2\}\, t
\Big(\Vert x\Vert_p+
t^{\frac{1}{\al - n}}\Big)^{-\al},\ \ \text{for}\ \ t>0\ \ \text{and}\ \ x\in\Qpn;\\
\Label{Zt6}\
6)& D_tZ(t,x)=-\varkappa \int_{\Qpn}A_w(\xi)\,e^{-\varkappa t A_w(\xi)}\chi(x\cdot \xi) \, d^n \xi,\ \ \text{for}\ \ t>0,\ x\in \Qpn;\\
\Label{Zt7}\
7)& Z(t,x) =\Vert x\Vert_p^{-n}\Bigg[(1-p^{-n})\sum\limits_{j=0}^\infty p^{-nj}e^{-\are t A_w(p^{-(\be+j)})}-e^{-\are tA_w(p^{-\be +1})}\Bigg], \ \text{if}\ \ \Vert x\Vert_p=p^\be.
\end{align}
Since $Z_t(x)\in L^1(\Qpn)$ for $t>0$ and $u\in\cD(\Qpn)\subset L^\infty(\Qpn)$, the convolution in \eqref{Tt} exists and is a well-defined function, continuous in $x$ (see Theorem 1.11).
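Properties 1), 2) and 7) can also be checked numerically. The following sketch (in Python; the model weight $w(\Vert y\Vert_p)=\Vert y\Vert_p^{\,\al}$, for which $\Aw$ sums in closed form, and all parameter values are our illustrative choices) evaluates $Z(t,x)$ sphere by sphere via the series in 7) and verifies the non-negativity and the normalization $\int_{\Qpn}Z(t,x)\,d^nx=1$:
\begin{verbatim}
import math

p, n, alpha, kappa, t = 3, 2, 3.5, 1.0, 0.7
q = p ** (n - alpha)                    # |q| < 1 since alpha > n

def A(gamma: int) -> float:
    """A_w(z) for ||z||_p = p^(-gamma), summed in closed form."""
    return (1 - p**(-n)) * q**(gamma + 2) / (1 - q) + q**(gamma + 1)

def Z(beta: int, J: int = 400) -> float:
    """Z(t,x) for ||x||_p = p^beta, via the series in property 7)."""
    s = sum(p**(-n*j) * math.exp(-kappa*t*A(beta + j)) for j in range(J))
    return p**(-n*beta) * ((1 - p**(-n)) * s - math.exp(-kappa*t*A(beta - 1)))

assert all(Z(b) >= -1e-12 for b in range(-30, 31))          # property 1)
mass = sum((1 - p**(-n)) * p**(n*b) * Z(b) for b in range(-60, 61))
print(f"total mass = {mass:.6f}")                           # = 1, property 2)
\end{verbatim}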
\medskip
{\it 2.5. Associated Markov processes.}\ \Label{Sec2-4}\ We use the analytic definition of a Markov process, that is, a definition via a transition probability (see, for example, \cite{Dyn}). Suppose that $(E,\mathcal E)$ is a measurable space. A family of real-valued non-negative functions $P(s,x;t,\Gamma)$, $s<t$, $x\in E$, $\Gamma \in \mathcal E$, measurable with respect to the variable $x$ and such that $P(s,x;t,\cdot )$ is a measure on $\mathcal E$ with $P(s,x;t,\Gamma )\le 1$, is called a {\it transition probability} if the Kolmogorov-Chapman equality
$$
\int\limits_EP(t,y;\tau ,\Gamma )\, P(s,x;t,dy)=P(s,x;\tau,\Gamma)
$$
holds whenever $s<t<\tau$, $x\in E$, $\Gamma \in \mathcal E$.
We consider $E=\Big(\Qpn,\Vert \cdot \Vert_p\Big)$ as a complete non-Archimedean metric space, and let $\cE$ denote its Borel $\si$-algebra. In Chapter 2.2.5 (Lemma 14, p.~22) of \cite{ZG:2016} it was shown that the transition probability
\[P(t,x,B)=\left\{\begin{array}{ll}
\int\limits_{B} p\,(t,x,y)\,d^n y,&\ \text{for} \ t >0;\ x,y\in \Qpn,\ B\in\cE\\
1_B(x),& \ \text{for} \ t=0,
\end{array}\right.
\]
with $p(t,x,y):=Z(t,x-y)$ and $Z(t,x)$ defined in \eqref{Zt},
is normal, i.e. $
\lim\limits_{t\downarrow s}P(s,x;t,\Qpn)=1
$, for any $s>0$, $x\in \Qpn$.
Due to Theorem 16 in \cite[\S 2.2.5, p.~23]{ZG:2016}, which is a consequence of Theorem 3.6 in \cite[p.~135]{Dyn}, $Z(t,x)$ is the transition density of a time and space homogeneous Markov process $\xi_t$ which is bounded, right-continuous and has no discontinuities other than jumps.
Moreover, the associated semigroup \eqref{Tt} is a Feller semigroup.
In addition, due to \cite[Vol.~II, Ch.~1, \S 1, p.~46]{GS}, this process, being a space and time homogeneous Markov process, has independent increments (see also \cite[Vol.~I, Ch.~III, \S 1, p.~188]{GS}).
\section{Properties of the operator W in Banach space $L^1(\Qpn)$}
Let us consider the operator $T(t)$ defined by \eqref{Tt}:
\[\big(T(t)u\big)(x)=\int_{\Qpn} Z(t,x-\xi)\,u (\xi)\, d^n\xi\]
in the Banach space $L^1(\Qpn)$. From \eqref{Z0}, \eqref{Zt5}, \eqref{Zt4} and the Young inequality it follows that $T(t)$ is a contraction semigroup in $L^1(\Qpn)$.
\begin{lemma} \Label{l3-1}\ $T(t)$ is a strongly continuous semigroup in $L^1(\Qpn)$.
\end{lemma}
\begin{proof} Since the space $\cD(\Qpn)$ of Bruhat-Schwartz functions is dense in $L^1(\Qpn)$ \cite{VVZ}, it is sufficient to prove that $I_t:=\Vert T(t)u -u\Vert_{L^1(\Qpn)}\to 0$ as $t\to 0$ only for $u\in\cD(\Qpn)$.
Due to \eqref{Zt5} we have for $u\in\cD(\Qpn)$:
\begin{align*}
&I_t:=\int_{\Qpn}\big\vert T(t)u(x)-u(x)\big\vert\,d^nx = \int_{\Qpn}
\Big\vert \int_{\Qpn} Z(t, x-\xi)u(\xi)\,d^n\xi -u(x)\Big\vert\,d^n x\leq\\
&\leq\int_{\Qpn}
\int_{\Qpn} Z(t,x-\xi)\,\big\vert u(\xi) -u(x)\big\vert\,d^n\xi\,d^n x.
\end{align*}
Here we also used \eqref{Z0} and \eqref{Zt5}. Since the function $u\in\cD(\Qpn)$ is locally constant, there exists $m\in\ZZ$ such that $u(\xi)-u(x)=0$ for $\Vert x-\xi\Vert_p\leq p^m$. Moreover, there exists $N>m$ such that $u(x)=0$ for $\Vert x\Vert_p>p^N$.
Noting this, let us represent the integral $I_t$ as a sum of two integrals, over the inside and over the outside of the $p$-adic ball $B_N$.
Since on the ball $\Vert x-\xi\Vert_p\leq p^m$ the values $u(x)$ and $u(\xi)$ coincide, using \eqref{Zt3} we have
\begin{align*}\nonumber
&I_{1,t}=\int\limits_{\Vert x\Vert_p\leq p^N}d^nx\int\limits_{\Vert x-\xi\Vert_p>p^m} Z(t,x-\xi)\,\vert u(\xi)-u(x)\vert\,d^n\xi{\leq}\\
\nonumber
&\leq C\,t \int\limits_{\Vert x\Vert_p\leq p^N}d^nx\int\limits_{\Vert x-\xi\Vert_p>p^m}
\Big(t^{\frac{1}{\al-n}}+\Vert x-\xi\Vert_p\Big)^{-\al}d^n\xi\leq\\
&\leq C\,t \int\limits_{\Vert x\Vert_p\leq p^N}d^nx\int\limits_{\Vert z\Vert_p>p^m}\Vert z\Vert^{-\al}_pd^nz\to 0,\ \ t\to 0, \ \ \text{for}\ \ \al > n.
\end{align*}
Since outside the ball, i.e. for $\Vert x\Vert_p >p^N$, the function $u$ vanishes, $u (x)=0$, we have
\begin{align*}
&I_{2,t}=\int\limits_{\Vert x\Vert_p> p^N}d^nx\int\limits_{\Qpn} Z(t,x-\xi)\,\vert u(\xi)-u(x)\vert\,d^n\xi=\int\limits_{\Vert x\Vert_p> p^N}d^nx\int\limits_{\Qpn} Z(t,x-\xi)\,\vert u(\xi)\vert\,d^n\xi=\\
&=\int\limits_{\Vert x\Vert_p> p^N}d^nx\int\limits_{\Vert \xi\Vert_p\leq p^N} Z(t,x-\xi)\,\vert u(\xi)\vert\,d^n\xi.
\end{align*}
On the last step we also used that the support of the function $u$ is contained in the ball $\Vert \xi\Vert_p\leq p^N$. Finally, using \eqref{Zt3} we continue:
\begin{align*}
&I_{2,t}\leq C\,t \int\limits_{\Vert x\Vert_p> p^N}d^nx\int\limits_{\Vert \xi\Vert_p\leq p^N}
\Big(t^{\frac{1}{\al-n}}+\Vert x- \xi\Vert_p\Big)^{-\al}d^n\xi=\\
&=C\,t \int\limits_{\Vert x\Vert_p> p^N}d^nx\int\limits_{\Vert \xi\Vert_p\leq p^N}
\Big(t^{\frac{1}{\al-n}}+\Vert x\Vert_p\Big)^{-\al}d^n\xi\to 0, \ \ \text{as}\ \ t\to 0,
\end{align*}
where we have used that $\Vert x-\xi\Vert_p=\Vert x\Vert_p$ for $\Vert x\Vert_p>p^N$.
\end{proof}
\begin{definition}\Label{def:3-2}\
Let us define the operator $\mathfrak A$ as the generator of the semigroup $T(t)$ in the space $L^1(\Qpn)$ and let $Dom\,(\fA)$ be its domain.
\end{definition}
\begin{lemma}\Label{lem3-2} Any test function $u\in \cD(\Qpn)$ belongs to the domain of the operator $\fA$ in $L^1(\Qpn)\colon \cD(\Qpn) \subset Dom(\fA)$. Moreover, on the test functions the operator $\fA$ coincides with the operator $W$ in the representation \eqref{FW}.
\end{lemma}
\begin{proof} Let us write for $u\in \cD(\Qpn)$:
\begin{align*}
&\dfrac{1}{t}\Big( T(t)u-u\Big)=\frac{1}{t}\big(Z_t\ast u-u\big)=\\
&=\dfrac{1}{t}\Big(\Fi\big[e^{-\are t \Aw}\big]\ast u - \Fi\big[ \Fx u\big] \Big)=\\
&=\dfrac{1}{t}\Big(\Fi \big[e^{-\are t \Aw}\big]\ast \Fi\big[ \Fx u\big] -
\Fi\big[ \Fx u\big] \Big)=\\
&=\dfrac{1}{t}\Big(\Fi \big[e^{-\are t \Aw}\cdot \Fx \,u\big] -
\Fi\big[ \Fx u\big] \Big)=\\
&=\Fi \Big[ \dfrac{1}{t}\big(e^{-\are t \Aw}-1\big) \Fx \,u\Big].
\end{align*}
Taking into account these calculations and the representation \eqref{FW}, to finish the proof we need to show that
\[F_t:=\int_{\Qpn} \Big\vert \Fi \Big[ \dfrac{1}{t}\big(e^{-\are t \Aw}-1\big)\Fx\, u +\are \Aw\Fx\,u\Big]\Big\vert\,d^nx \to 0,\quad t\to 0.\]
To see this it is sufficient to note that the support of $\Fx \,u$ is contained in some ball $B_N$ and that
\begin{align*}
&\Big\vert\dfrac{1}{t}\big(e^{-\are \,t\Aw}-1\big)+\are\Aw \Big\vert\cdot\vert\Fx\, u\vert =\\
&= \Big\vert\dfrac{1}{t}
\Big(
\sum\limits_{q=0}^\infty (-1)^q\dfrac{\big(\are\,t \Aw\big)^q}{q!} -1\Big)+\are \Aw \Big\vert\cdot \vert \Fx\, u\vert=\\
&= t\, A_w^2(\xi)\, \are^2\,\Big\vert \sum\limits_{m=0}^\infty (-1)^m\dfrac{\are^m t^m A_w^m(\xi)}{(m+2)!}\Big\vert\cdot\vert\Fx\, u\vert\leq\\
&\leq C_N \, t\, A_w^2(\xi)\, \vert\Fx\, u\vert,
\end{align*}
since the series converges, uniformly for $t\in(0,1]$, for any $\xi$ in the support of $\Fx u$.
Therefore
\begin{align*}F_t
& \leq \int\limits_{\Qpn}\Big\vert \Big( \dfrac{1}{t}\big(e^{-\are t \Aw}-1\big) +\are\Aw \Big)\Fx \,u\Big\vert\, d^n\xi\leq\\
&\leq C_N\,t\int\limits_{\Vert \xi\Vert_p\leq\, p^{N}}A^2_w(\xi)\,\vert\Fx u\vert \, d^n\xi \to 0, \ t\to 0.
\end{align*}
\end{proof}
\section{Construction of the Markov process in the ball}
Let $\xi_t$ be the Markov process on $\Qpn$ constructed in Section \ref{Sec2-4}. As in \cite{Ko:2001} (Section 4.6.1), we construct a Markov process on the ball $B_N=\{x\in\Qpn\col \Vert x \Vert_p \leq p^N\}$.
Suppose that $\xi_0\in B_N$. Denote by $\xi^{(N)}_t$ the sum of all jumps of the process
$\xi_\tau$, $\tau\in [0,t]$, whose norms exceed $p^N$. Since $\xi_t$ is a right-continuous process with left limits, $\xi_t^{(N)}$ is finite a.s. Moreover,
$\xi_0^{(N)}=0$. Let us consider the process
\begin{equation}\Label{etat}\
\eta_t=\xi_t-\xi_t^{(N)}.
\end{equation}
Since the jumps of $\eta_t$ never exceed $p^N$ in norm, this process remains a.s. in $B_N$ (due to the ultra-metric inequality).
Below we will identify a function on the ball $B_N$ with its extension by zero
onto the whole $\Qpn$. Let $\cD(B_N)$ consist of functions from
$\cD(\Qpn)$ supported in the ball $B_N$. Then $\cD(B_N)$ is dense in
$L^2(B_N)$, and the operator $W_N$ in $L^2(B_N)$ is defined by restricting
the operator $W$ \eqref{W1} to $\cD (B_N)$ and considering
$\big(Wu\big)(x)$ only for $x\in B_N$.
Let us now find the generator of the process $\eta_t$. To do this we need the following series of lemmas.
\begin{lemma}\Label{lem4-1}\ For any $z\in \Qpn$
\begin{equation}\Label{eq:4-1}\
\EEE \chi\big(z\cdot\xi_t\big)=\exp \big(-t\varkappa A_w(z)\big).
\end{equation}
\end{lemma}
\begin{proof} Indeed, let $\mu_t$ denote the distribution of the process $\xi_t$ with $\xi_0=0$. Then, due to \cite[Vol.~2, p.~24]{GS}, the Fourier transform of the measure $\mu_t$ equals
\begin{align*}
&\EEE\, \chi\big(z\cdot\xi_t\big)=\widehat{\mu}_t(z)=\int_{\Qpn} \chi(z\cdot y)\,\mu_t(dy)=\int_{\Qpn} \chi(z\cdot y)\,p(t,0,y)\,d^ny=\\
&=\int_{\Qpn} \chi(z\cdot y)\,Z(t,y)\,d^ny=\int_{\Qpn} \chi(z\cdot y)\,\Fiy\Big[e^{-\varkappa \,t A_w(\xi)}\Big] \,d^ny=e^{-\varkappa \,t A_w(z)}.
\end{align*}
Here we also used \eqref{Zt1} and that the Fourier transform $\cF$ is a linear continuous automorphism of the space $\cD (\Qpn)$.
\end{proof}
\begin{lemma}\Label{Iz}\
\begin{equation}\Label{eq:Iz}\
I_{B_N}(z)=\int_{B_N}\dfrac{1-\chi(z\cdot x)}{w(\Vert x\Vert_p)} \,d^nx=\left\{
\begin{array}{lcl}
0&,\text{if} &\Vert z\Vert_p \leq p^{-N};\\
A_w(z)-\la_N&,\text{if} &\Vert z\Vert_p > p^{-N},
\end{array}
\right.
\end{equation}
where
\begin{equation}\Label{la}\
\la_N =(1-p^{-n})\sum\limits_{j =\, N+1}^\infty
\dfrac{p^{\,nj}}{w(\,p^{\,j})}.
\end{equation}
\end{lemma}
\begin{rem} \rm Remark that
\begin{equation}\Label{la-1}\
\la_N= \int\limits_{\Vert y\Vert_p >p^N}\dfrac{d^ny}{w(\Vert y\Vert_p)}.
\end{equation}
{\it Indeed},
\begin{align*}
\la_N=\int\limits_{\Vert y\Vert_p \,>\,p^N}\dfrac{d^ny}{w(\Vert y\Vert_p)}= \sum\limits_{j=N+1}^\infty\,\int\limits_{S_j} \dfrac{d^ny}{w(\,p^{\,j})}=(1-p^{-n})\sum\limits_{j =\, N+1}^\infty
\dfrac{p^{\,nj}}{w(\,p^{\,j})}.
\end{align*}
\end{rem}
\begin{rem}\rm
$$
I_{\Qpn} (z) = A_w(z).
$$
\end{rem}
\begin{proof} First of all remark that the character $\chi$ equals $1$ on the ball $B_0$. For $\Vert z\Vert_p\leq p^{-N}$ and $x\in B_N$ we have $\vert z\cdot x\vert_p\leq \Vert z\Vert_p\,\Vert x\Vert_p\leq 1$, so that $\chi(z\cdot x)=1$ and hence $I_{B_N}(z)=0$.
Let us consider $z\in\Qpn$ such that $\Vert z\Vert_p =p^k$, $k\geq -N+1$.
Any such $z$ belongs to $S_k$, $k\geq -N+1$, and we may represent it as $z=p^{-k} z_0$, where $\Vert z_0\Vert_p=1$. Since
$$
B_N\backslash \{0\} = \bigsqcup_{j\leq N} p^j S_0=\bigsqcup_{j\leq N} S_j,
$$
we may represent $I_{B_N}(z)$ in the following form:
\begin{align*}
I_{B_N}(z)&=
\int\limits_{B_N}\dfrac{1-\chi(z\cdot x)}{w(\Vert x\Vert_p)} \,d^nx
=\sum\limits_{j=-\infty}^N\int\limits_{S_j}\dfrac{1-\chi(z\cdot x)}{w(\Vert x\Vert_p)} \,d^nx=\sum\limits_{j=-\infty}^N\int\limits_{S_j}\dfrac{1-\chi(p^{-k} z_0\cdot x)}{w(\Vert x\Vert_p)} \,d^nx.
\end{align*}
The change of variables formula \eqref{change} implies
\begin{align*}
I_{B_N}(z)=&\sum\limits_{j=-\infty}^Np^{-k n}\int\limits_{S_{j+k}}\dfrac{1-\chi( z_0\cdot y)}{w(\,p^{-k}\Vert y\Vert_p)} \,d^ny=\\
=& \sum\limits_{j=-\infty}^N\dfrac{p^{-k n}}{w(\,p^{\,j})}
\Big\{ p^{(j+k)n}(1-p^{-n})-\int\limits_{S_{j+k}}\chi( z_0\cdot y)\,d^ny\Big\}=\\
=&\sum\limits_{\ell=-\infty}^{k+N}\dfrac{p^{-k n}}{w(\,p^{\ell-k})}
\Big\{ p^{\ell n}(1-p^{-n})-\int\limits_{S_\ell}\chi( z_0\cdot y)\,d^ny\Big\}.
\end{align*}
Using \eqref{chi-1} we have for $z$ such that $\Vert z\Vert_p =p^k$, $k\geq -N+1$
\begin{align*}
I_{B_N}(z)=&\left\{
\begin{array}{lcl} \sum\limits_{\ell=-\infty}^{0}\dfrac{p^{ -kn}p^{\ell n}}{w(\,p^{\,\ell - k})}
\Big\{(1- p^{-n})-(1-p^{-n})\Big\}=0,&\ &\ell\ \leq\ 0;\\
\dfrac{p^{-kn}p^{\,{n}}}{w(\,p^{\, {1}-k})}
\Big\{(1- p^{-n})- (p^{-n})\Big\},&\ &\ell ={1};\\
\sum\limits_{\ell =2}^{{k+N}}\dfrac{p^{-kn}p^{\,\ell n}}{w(\,p^{\,\ell - k})}
(1-p^{-n}),&\ &\ell\ > 1,
\end{array}
\right.
\end{align*}
and finally
\[
I_{B_N}(z)=\sum\limits_{j=2}^{{k+N}}\dfrac{p^{-k n+jn}}{w(\,p^{\,j-k})}
(1-p^{-n})+\dfrac{p^{-k n{+}n}}{w(\,p^{\,{1}-k})}.
\]
Comparing this with \eqref{Awfin} we conclude that
\begin{align*}
I_{B_N}(z)=& A_w(z) - (1-p^{-n})\sum\limits_{j= k+N+1}^\infty \dfrac{p^{-k n+jn}}{w(\,p^{\,j-k})}=\\
=& A_w(z) - (1-p^{-n})\sum\limits_{\ell = N+1}^\infty
\dfrac{p^{\,\ell \,n}}{w(\,p^{\,\ell})}= A_w(z) -\la_N,
\end{align*}
where $\la_N$ is given by \eqref{la}.
\end{proof}
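The lemma can also be confirmed numerically. In the following sketch (Python; the model weight $w(\Vert y\Vert_p)=\Vert y\Vert_p^{\,\al}$ and the parameter values are our illustrative choices) $I_{B_N}(z)$ is computed directly, sphere by sphere, with the character integrals obtained from \eqref{formula11}, and compared with the closed-form right-hand side of \eqref{eq:Iz}:
\begin{verbatim}
p, n, alpha = 3, 2, 3.5
q = p ** (n - alpha)

def A(k: int) -> float:
    """A_w(z) for ||z||_p = p^k (i.e. gamma = -k in (Awfin))."""
    return (1 - p**(-n)) * q**(-k + 2) / (1 - q) + q**(-k + 1)

def lam(N: int) -> float:
    """lambda_N from (la)."""
    return (1 - p**(-n)) * q**(N + 1) / (1 - q)

def I_ball(N: int, k: int) -> float:
    """I_{B_N}(z) for ||z||_p = p^k, as a sum over the spheres S_j, j <= N.
    The integral of chi over S_j is the one over B_j minus the one over
    B_{j-1}; both are given by (formula11)."""
    total = 0.0
    for j in range(N, -abs(k) - abs(N) - 5, -1):
        mu = (1 - p**(-n)) * p**(n * j)                 # measure of S_j
        if k <= -j:
            chi = p**(n*j) - p**(n*(j-1))               # chi == 1 on B_j
        elif k == -j + 1:
            chi = -p**(n*(j-1))
        else:
            chi = 0.0
        total += (mu - chi) / p**(alpha * j)            # divide by w(p^j)
    return total

for N in (-1, 0, 2):
    for k in (-N - 2, -N, -N + 1, -N + 3):
        rhs = 0.0 if k <= -N else A(k) - lam(N)
        assert abs(I_ball(N, k) - rhs) < 1e-12, (N, k)
\end{verbatim}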
\begin{lemma}\Label{lem:4-6}\ For any $\psi\in \cD (\Qpn)$ such that its Fourier transform $u = \cF \psi$ has support in the ball $B_N$ we have:
\begin{equation}\Label{eq:4-14}\
\int\limits_{\Vert z\Vert_p\,\leq\, p^{-N}}A_w(z)\,\psi(z)\,d^nz=\la_N\int\limits_{\Vert z\Vert_p \,\leq \, p^{-N}} \psi(z)\,d^nz.
\end{equation}
\end{lemma}
\begin{proof} Since $\text{\rm supp} \,u\subset B_{N}$, we have $\psi(z)=\psi(0)$ for $\Vert z\Vert_p\leq p^{-N}$. Therefore
\begin{align}\nonumber
J_N&:=\int\limits_{\Vert z\Vert_p\,\leq\, p^{-N}}A_w(z)\psi(z)\,d^nz=
\psi(0)\int\limits_{\Vert z\Vert_p\,\leq\, p^{-N}}A_w(z)\,d^nz=\\
\nonumber
&=\psi(0)\int\limits_{\Vert z\Vert_p\,\leq\, p^{-N}}
\int\limits_{\Qpn}\dfrac{1-\chi(y\cdot z)}{w(\Vert y\Vert_p)}\, d^ny\,d^nz=\\
\Label{4-12}\
&=\psi(0)\int\limits_{\Qpn}\int\limits_{\Vert z\Vert_p\,\leq\, p^{-N}}
\dfrac{1-\chi(y\cdot z)}{w(\Vert y\Vert_p)}\, d^nz\,d^ny.
\end{align}
Since the character $\chi$ equals $1$ on the ball $B_0$, the integrand above vanishes whenever $\vert y\cdot z\vert_p\leq 1$. Since $\vert y\cdot z\vert_p\leq \Vert y\Vert_p\cdot \Vert z\Vert_p$, for $\Vert z\Vert_p\leq p^{-N}$ and $\Vert y\Vert_p\leq p^N$ we have
$$
\vert y\cdot z\vert_p\leq \Vert y\Vert_p\cdot \Vert z\Vert_p\leq 1,
$$
and therefore the integral in \eqref{4-12} may be represented as:
\begin{align*}
J_N&=\psi(0)\int\limits_{\Vert y\Vert_p >p^N}\int\limits_{\Vert z\Vert_p\,\leq\, p^{-N}}
\dfrac{1-\chi(y\cdot z)}{w(\Vert y\Vert_p)}\, d^nz\,d^ny=\\
&=\psi(0)\int\limits_{\Vert y\Vert_p >p^N} \dfrac{d^ny}{w(\Vert y\Vert_p)}
\sum\limits_{j=-\infty}^{-N}\,\int\limits_{S_j}\Big(1-\chi(y\cdot z)\Big)\,d^nz=\\
&=\psi(0)\sum\limits_{k=N+1}^\infty\,\int\limits_{S_k} \dfrac{d^ny}{w(\Vert y\Vert_p)}
\sum\limits_{j=-\infty}^{-N}\,\int\limits_{S_j}\Big(1-\chi(y\cdot z)\Big)\,d^nz.
\end{align*}
Let $\Vert y\Vert_p=p^{\,k}$, $k\geq N+1$; then $y=p^{-k}y_0$, where $\Vert y_0\Vert_p=1$, and the change of variables formula \eqref{change}
gives for the inner expression:
\begin{align}\Label{4-15}\
\nonumber
&S_N(y)=\sum\limits_{j=-\infty}^{-N}\,\int\limits_{S_j}\Big(1-\chi(y\cdot z)\Big)\,d^nz=\sum\limits_{j=-\infty}^{-N}\,\int\limits_{S_j}\Big(1- \chi(p^{-k}y_0\cdot z)\Big)\,d^nz=\\
&=\sum\limits_{j=-\infty}^{-N}\,p^{-kn}\int\limits_{S_{j+k}}\Big(1-\chi(y_0\cdot v)\Big)\,d^nv=\sum\limits_{\ell =-\infty}^{k-N}\,p^{-kn}\int\limits_{S_{\ell}}\Big(1-\chi(y_0\cdot v)\Big)\,d^nv.
\end{align}
Due to \eqref{chi-1} using that
\begin{equation}\Label{Sell}\
\int\limits_{S_\ell}d^nz=(1-p^{-n})p^{n\ell}
\end{equation}
we have
\begin{align*}
S_N(y)&=\left\{
\begin{array}{lcl}
\sum\limits_{\ell =-\infty}^{0}\,p^{-kn}\big[(1-p^{-n})p^{n\ell}-p^{n\ell} (1-p^{-n})\big]=0,&\ &{\ell \leq}\ 0;\\
p^{-kn}\big[(1-p^{-n})p^{n}+p^np^{-n}\big]=p^{-kn}p^n,&\ &{\ell =1};\\
\sum\limits_{\ell =2}^{k-N}\ p^{-kn}p^{n\ell}(1-p^{-n}),&\ &{\ell >1}.
\end{array}
\right.
\end{align*}
Finally, for $y$ such that $\Vert y\Vert_p=p^k$, $k\geq N+1$ we may continue \eqref{4-15}:
\begin{align*}
S_N(y)=&\sum\limits_{j=-\infty}^{-N}\,\int\limits_{S_j}\Big(1- \chi(y\cdot z)\Big)\,d^nz=\sum\limits_{\ell =2}^{k-N}\,p^{-kn}\,p^{\ell n}(1-p^{-n}) \,+\,p^{-kn}p^n=\\
&=
p^{-kn}\,\Big(\sum\limits_{\ell =2}^{k-N}p^{\ell n}-\sum\limits_{\ell =2}^{k-N}p^{\ell n}
p^{-n}+p^n\Big)
=p^{-kn}\,\Big(\sum\limits_{\ell =1}^{k-N}p^{\ell n}-\sum\limits_{\ell =2}^{k-N}p^{\ell n}
p^{-n}\Big)=\\
&=p^{-kn}\,\Big(\sum\limits_{\ell =1}^{k-N}p^{\ell n}-\sum\limits_{\ell =2}^{k-N}p^{n(\ell -1)}
\Big)=p^{-kn}\,\Big(\sum\limits_{\ell =1}^{k-N}p^{n\ell}-\sum\limits_{s =1}^{k-N-1}p^{ns}
\Big)=\\
&=p^{-kn}\,p^{n(k-N)}=p^{-nN}.
\end{align*}
Therefore, again using \eqref{Sell} we have
\begin{align*}
J_N&=\psi(0)\sum\limits_{k=N+1}^\infty\,\int\limits_{S_k} \dfrac{d^ny}{w(\Vert y\Vert_p)}
\sum\limits_{j=-\infty}^{-N}\,\int\limits_{S_j}\Big(1-\chi(y\cdot z)\Big)\,d^nz=\\
&=\psi(0)\,p^{-nN}\sum\limits_{k=N+1}^\infty\,\int\limits_{S_k} \dfrac{d^ny}{w(\,p^{\,k})}\,
=\psi(0)\,p^{-nN}(1-p^{-n})\sum\limits_{k=N+1}^\infty\,\dfrac{p^{nk}}{w(\,p^{\,k})}=\\
&=\psi(0)\,p^{-nN} \,\la_N=\la_N\int\limits_{\Vert z\Vert_p \,\leq \, p^{-N}} \psi(z)\,d^nz,
\end{align*}
where $\la_N$ is given by \eqref{la}.
\end{proof}
\begin{theorem}\Label{tm:4-5}\ If $\eta_t\vert_{t=0}=x$ and $u \in \cD (B_N)$, then
\begin{equation}\Label{gen}\
\dfrac{d}{dt}\EEE\, u(\eta_t)\big|_{t=0}=- \big(W_N u\big)(x)+\varkappa\,\la_N\,u(x),
\end{equation}
where the operator $W_N$ is defined by restricting $W$ to functions $u$ supported in the ball $B_N$, the resulting function $Wu$ being considered only on the ball $B_N$, i.e.
$$\big(W_Nu \big)(x) =\big(Wu\big)\!\!\upharpoonright_{B_N},\ \ \text{\rm for}\ \ u\in\cD(B_N).
$$
\end{theorem}
\noindent Recall that functions on $B_N$ are identified with their extensions by zero onto $\Qpn$.
\begin{rem}\rm
This theorem actually states that the operator $W_N -\varkappa\, \la_N$ is the generator of the stochastic process $\eta_t$ taking values in the ball $B_N$.
\end{rem}
\begin{proof} By a general result on processes with independent increments on locally compact groups \cite{Hey}, the processes $\xi_t^{(N)}$ and $\eta_t$ are independent. Thus
\begin{equation}\Label{eq:4-1-1}\
\EEE \chi\big(z\cdot \xi_t\big)=\EEE\,\chi\big(z\cdot\xi_t^{(N)}\big)\cdot\EEE\chi
\big(z\cdot\eta_t\big)
\end{equation}
for any $z\in\Qpn$.
Due to Lemma \ref{lem4-1} we have
\begin{equation}\Label{4-2-2}\
\EEE \chi\big(z\cdot\xi_t\big)=\exp \big(-t\varkappa A_w(z)\big).
\end{equation}
On the other hand, by Theorem 5.6.17 of \cite[p.~397]{Hey}, for a locally compact Abelian group $G$ with a countable basis of its topology (in our case $G=\Qpn$), for the distribution $\mu_t:=P_{X_t}$ of an additive process
$\{X_t\}_{t\in[0,1]}$ on $(\OO,\cF,P)$ with values in $G$, and for a chosen
fixed local inner product $g$ of $G$, there exist an element $x_t\in G$, a
positive quadratic form $\phi_t$ on the character group $ G^{\wedge}$ of
$G$ and a L\'evy measure $\pi(t,\cdot)$ such that
$$
\widehat{\mu}_t(\chi)=\chi(x_t)\exp\Bigg\{
-\phi_t+
\int_{G}\big[\chi( x)-1-ig(x,\chi)\big] \,\pi(t,dx) \Bigg\}
$$
holds for all $\chi\in G^{\wedge}$.
From Example 4 in 5.1.9 of \cite[Ch.~V, p.~342]{Hey} it follows that for a totally disconnected group $G$ the zero function on $G\times G^\wedge$ is a local inner product for $G$; moreover, on a totally disconnected group there is no nonzero Gaussian measure. Hence
\begin{equation}\Label{eq:4-4}\
\EEE\, \chi\big(z\cdot\xi_t\big)\equiv\widehat{\mu}_t(z)=
\chi({x_t}\cdot z)\exp\Big\{\int_{\Qpn}\big[\chi(z\cdot x)-1\big] \,\pi(t,dx) \Big\}.
\end{equation}
Comparing this with \eqref{4-2-2} and taking the representation \eqref{Aw} of $A_w$ into account,
we conclude that
the Levy measure of process $\xi_t$ is equal to
\begin{equation}\Label{pi}\
\pi(t,dx)=\dfrac{\varkappa\,t }{w(\Vert x\Vert_p )}d^nx
\end{equation}
and that $x_t$ in \eqref{eq:4-4} may be chosen to be zero, so that $\chi(x_t\cdot z)\equiv 1$.
From Proposition 1 of \cite{Ev:89} it follows that
\begin{equation}\Label{Eeta}\
\EEE \chi\big(z\cdot\eta_t\big)=\EEE \chi\big(z\cdot (\xi_t-\xi_t^{(N)})\big)=\exp\Big\{\int_{B_N}\big[\chi(z\cdot x)-1\big] \,\pi(t,dx) \Big\}.
\end{equation}
Noting \eqref{pi} and the notation \eqref{eq:Iz}, we may represent the integral in the exponent of \eqref{Eeta} in the following form:
\begin{align*}
\int_{B_N}\big[\chi(z\cdot x)-1\big] \,\pi(t,dx)=\varkappa\,t\int_{B_N}\dfrac{\chi(z\cdot x)-1}{w(\Vert x\Vert_p)} \,d^nx=-\varkappa\,t\, I_{B_N}(z).
\end{align*}
Due to Lemma \ref{Iz}
$$I_{B_N}(z)=\left\{
\begin{array}{lcl}
0&,\ \text{if} &\Vert z\Vert_p \leq p^{-N};\\
A_w(z)-\la_N&,\ \text{if} &\Vert z\Vert_p > p^{-N},
\end{array}
\right.
$$
with $\la_N$ given by \eqref{la} therefore from \eqref{Eeta} we have
\begin{equation}\Label{Ko:4-72}\
\EEE\,\chi \big(z\cdot \eta_t\big)=\left\{
\begin{array}{lcl}
1&,\ \text{if} &\Vert z\Vert_p \leq p^{-N};\\
\exp\big\{\varkappa t \,\big(\la_N - A_w(z)\big)\big\}&,\ \text{if} &\Vert z\Vert_p > p^{-N}.
\end{array}
\right.
\end{equation}
Given $u\in\cD(B_N)$, we may write $u = \cF \psi$ with $\psi\in\cD(\Qpn)$; then from \eqref{Ko:4-72} we have:
\[\EEE\,u \big(\eta_t\big)=\int\limits_{\Vert z\Vert_p\leq\, p^{-N}} \psi (z)\,d^nz+
e^{\are\la_N\, t} \int\limits_{\Vert z\Vert_p\,>p^{-N}} e^{-t\varkappa A_w(z)}\psi(z)\,d^nz,\]
therefore
\begin{equation}\Label{Ko:4-73}\
\dfrac{d}{dt}\EEE\, u\big(\eta_t\big)\Big|_{t=0}=\are\la_N\int\limits_{\Vert z\Vert_p\,>p^{-N}}\psi(z)\,d^nz -\varkappa\int\limits_{\Vert z\Vert_p\,>p^{-N}}A_w(z)\psi(z)\,d^nz.
\end{equation}
By the Fourier inversion formula
$$
\int_{\Qpn}\psi(z)\,d^nz=u(0).
$$
On the other hand, since $\text{supp}\, u \subset B_N$, we find that $\psi(z)=\psi(0)$ for $\Vert z\Vert_p\leq p^{-N}$.
Lemma \ref{lem:4-6} implies that \eqref{Ko:4-73} takes the form
\begin{align*}
\dfrac{d}{dt}\EEE\, u\big(\eta_t\big)\Big|_{t=0}&=\are\la_N\int\limits_{\Qpn}\psi(z)\,d^nz -\varkappa\int\limits_{\Qpn}A_w(z)\psi(z)\,d^nz=\\
&=\are \la_N\,u(0)-\big(W_Nu\big)(0)
\end{align*}
which finishes the proof of Theorem \ref{tm:4-5} at the point $x=0$; the case of a general $x\in B_N$ follows by the space homogeneity of the process.
\end{proof}
\begin{lemma}\Label{5*}\ Let the support of a function $u\in L^1(\Qpn)$ be contained in $\Qpn \backslash B_N$. Then the restriction to $B_N$ of the distribution $Wu\in \cD^\pr(\Qpn)$ coincides with the constant:
\begin{equation}\Label{Rr}\
R_N =R_N(u)=\are \int\limits_{\Qpn \backslash B_N} \dfrac{u(y)\,d^ny}{w(\Vert y\Vert_p)},
\end{equation}
i.e. for $u\in L^1(\Qpn)$, $\text{\rm supp}\, u\subset \Qpn \backslash B_N$:
$$
\big(Wu\big)\!\!\upharpoonright_{x\in B_N}= R_N(u).
$$
\end{lemma}
\begin{proof} Let $\psi \in \cD(B_N)$. Then $\langle Wu,\psi\rangle=\langle u, W\psi\rangle.$ Since $\psi (x)=0$ for $\Vert x\Vert_p>p^N$
\begin{align*}
&\langle u,W\psi\rangle =\varkappa \int\limits_{\Vert x\Vert_p>p^N} u(x)\,d^n x\int\limits_{\Qpn} \dfrac{\psi(x-y)-\psi(x)}{w(\Vert y\Vert_p)}\,d^ny\stackrel{\psi(x)=0}{=}\\
&=\varkappa \int\limits_{\Vert x\Vert_p>p^N} u(x)\,d^n x\int\limits_{\Qpn} \dfrac{\psi(x-y)}{w(\Vert y\Vert_p)}\,d^ny\stackrel{x-y=z}{=}\\
&=\varkappa \int\limits_{\Vert x\Vert_p>p^N} u(x)\,d^n x\int\limits_{\Qpn} \dfrac{\psi(z)}{w(\Vert x-z\Vert_p)}\,d^nz=\varkappa \int\limits_{\Vert x\Vert_p>p^N} u(x)\,d^n x\int\limits_{\Vert z\Vert_p\leq p^N} \dfrac{\psi(z)}{w(\Vert x-z\Vert_p)}\,d^nz=\\
&=\varkappa \int\limits_{\Vert x\Vert_p>p^N}\dfrac{ u(x)}{w(\Vert x\Vert_p)}\,d^n x\cdot \int\limits_{\Vert z\Vert_p\leq p^N} \psi(z)\,d^nz.
\end{align*}
On the last step we have used that for $\Vert x\Vert_p>p^N$ and $\Vert z\Vert_p\leq p^N$ it follows that $\Vert x-z\Vert_p=\Vert x\Vert_p$. Since $\psi\in \cD(B_N)$ is arbitrary, this implies the required property.
\end{proof}
\section{Semigroup on the $p$-adic ball}
Consider on the ball $B_N$, $N\in\ZZ$, the following Cauchy problem
\begin{align}\Label{Cauchy}\
\nonumber
&\dfrac{\dd u(t,x)}{\dd t}+\big(W_N -\varkappa \la_N\big) u(t,x) =0,\ \ \ x\in B_N, \ \ t>0;\\
&u(0,x)=\psi (x),\ \ x\in B_N,
\end{align}
where the operator $W_N$ is given by Theorem \ref{tm:4-5} and $\la_N$ is defined by \eqref{la}.
Recall that the operator $W_N$ is defined by restricting $W$ to functions $u_N$ supported in the ball $B_N$ and considering the resulting function $Wu_N$ only on the ball $B_N$; functions on $B_N$ are identified with their extensions by zero onto $\Qpn$:
$\big(W_N u_N \big)(x) =\big(W u_N\big)\!\!\upharpoonright_{B_N},\ \ \text{\rm for}\ \ u_N\in\cD(B_N).$
Remark that $\varkappa \la_N$ is an eigenvalue of the operator $W_N$ considered on $L^2(B_N)$; this follows from \eqref{W2} and the expression \eqref{la-1} for $\la_N$.
A maximum principle argument, as in the proof of Theorem 4.5 in \cite[p.~82]{Ko:2001}, proves the uniqueness of the solution of the Cauchy problem \eqref{Cauchy}. The fundamental solution $Z_N(t, x-y)$ of the problem \eqref{Cauchy} is the transition density of the process $\eta_t$ \eqref{etat}.
The next result gives a formula for this transition density.
\begin{theorem}\Label{ZN}\ The solution of the problem \eqref{Cauchy} is given by the formula
\begin{equation}\Label{ZN1}\
u_N(t,x) =\int_{B_N}Z_N(t,x-y)\,\psi(y)\, d^ny,\ \ t>0,\ \ x\in B_N,
\end{equation}
where
\begin{align}\Label{Zr}\
&Z_N(t,x) = e^{\varkappa\la_N t}Z(t,x) +c(t),\ \ x\in B_N,\\
\Label{ct}\
&c(t)=\dfrac{1}{\mm(B_N)}-\dfrac{e^{\varkappa\la_N t}}{\mm(B_N)}\int_{B_N}Z(t,x)\,d^nx
\end{align}
and $Z(t,x)$ is from \eqref{Zt}. Here $\mm (B_N)=\int\limits_{B_N}d^nx =p^{nN}$.
Moreover
\begin{equation}\Label{cpr}\
c^\pr(t)=-e^{\varkappa \la_N t}\varkappa\int\limits_{\Qpn\backslash B_N}\dfrac{Z(t,\xi)}{w(\Vert \xi \Vert_p)}\,d^n\xi.
\end{equation}
\end{theorem}
\begin{proof}
For any $\psi\in \cD(\Qpn)$ such that $\text{supp}\,\psi\subset B_N$ the solution to the Cauchy problem \eqref{Cauchy} for $t>0$ is given by
\begin{align*}
&u_N(t,x)=\theta_N(x)\int_{B_N}Z_N(t,x-y)\psi(y)\,d^ny=\\
& =\theta_N(x)\,e^{\varkappa \la_N t}\int_{B_N}Z(t,x-y)\psi(y)\,d^ny +\theta_N(x)\,c(t)\int_{B_N}\psi(y)\,d^ny=u_1(t,x)+u_2(t,x),
\end{align*}
where $\theta_N(x)$, $x\in\Qpn$, is the indicator function of the set $B_N$, and
\begin{align}\Label{u2}\ \nonumber
u_1(t,x)&=\theta_N(x)\,e^{\varkappa \la_N t}\int_{B_N}Z(t,x-y)\psi(y)\,d^ny;\\
u_2(t,x)&=\theta_N(x)\,c(t)\int_{B_N}\psi(y)\,d^ny.
\end{align}
Let us check that
\begin{equation}\Label{eq}\
\big(D_t+W_N -\varkappa \la_N\big) u_N(t,x)=0
\end{equation}
for $\psi\in\cD (B_N)$ and $x\in B_N$.
We may write for $\psi\in \cD(B_N)$ and $x\in B_N$:
\begin{align}\Label{eq:5-10}\
\nonumber
&\big(D_t+W_N\big)u_N(t,x)-\varkappa \la_N\, u_N(t,x)=\\
\nonumber
&=\big(D_t+W_N\big)\Big[\theta_N(x) \,e^{\varkappa \la_N t}\int\limits_{B_N}Z(t,x-y)\psi(y)\,d^ny +\theta_N(x) \,c(t)\int\limits_{B_N}\psi(y)\,d^ny\Big]=\\
&=\big(D_t+W_N\big)\big[u_1(t,x)+u_2(t,x)\big]-\varkappa \la_N \big[u_1(t,x)+u_2(t,x)\big].
\end{align}
Let us introduce functions
\begin{align}\Label{h2*}\ \nonumber
h_1(t,x)&=\theta_N(x) \int_{B_N}Z(t,x-y)\psi(y)\,d^ny=e^{-\varkappa \la_N t}u_1(t,x);\\
h_2(t,x)&=\big( 1- \theta_N(x)\big) \int_{B_N}Z(t,x-y)\psi(y)\,d^ny
\end{align}
and remark that, since $h_1+h_2=\int_{B_N}Z(t,x-y)\psi(y)\,d^ny$ solves the Cauchy problem on the whole $\Qpn$,
$$
\big(D_t+W\big)h_1=-\big(D_t+W\big) h_2,
$$
and, since $h_2$ vanishes on $B_N$ (so that $D_t h_2=0$ there), for $x\in B_N$
$$
\big(D_t+W\big)h_1=-W h_2.
$$
Since
$$
\big(D_t+W\big)h_1=\big(D_t+W\big)\big(e^{-\varkappa \la_N t}u_1\big)=e^{-\varkappa \la_N t} \big(D_t+W\big)u_1-\varkappa \la_N e^{-\varkappa \la_N t}u_1
$$
we have for $x\in B_N$ that
$$
\big(D_t+W\big)u_1-\varkappa \la_N u_1(t,x)=-e^{\varkappa \la_N t}W h_2(t,x).
$$
Therefore we may continue:
\begin{equation}\Label{eq:5-11}\
\eqref{eq:5-10}= \big(D_t+W_N\big)u_2(t,x)-\varkappa \la_N u_2(t,x)-e^{\varkappa \la_N t}W h_2(t,x),
\end{equation}
where $c(t)$ is given by \eqref{ct}.
From \eqref{W2} it follows that the function $\theta_N(x)$ is an eigenfunction of the operator $W_N$ corresponding to the eigenvalue $\varkappa\,\la_N$, with $\la_N$ defined in \eqref{la-1}.
Therefore, taking into account the representation \eqref{u2} for $u_2(t,x)$, we have
\begin{equation}\Label{Wru2}\
\big(D_t+W_N\big)u_2(t,x)-\varkappa \la_N u_2(t,x)=c^\pr(t)\theta_N(x)\int_{B_N}\psi(y)\,d^ny,
\end{equation}
and we continue
\begin{equation}\Label{fin1}\
\eqref{eq:5-11}=c^\pr(t)\theta_N(x)\int_{B_N}\psi(y)\,d^ny-e^{\varkappa \la_N t}W h_2(t,x).
\end{equation}
To finish the proof it remains to show that the r.h.s. of \eqref{fin1} equals zero for $x\in B_N$.
Substituting the definition \eqref{h2*} of $h_2$ into the expression for $W$ and carrying out the computation for $x\in B_N$ (where $h_2$ vanishes), we have:
\begin{align*}
&\big(W h_2\big)(t,x) ={\varkappa} \int_{\Qpn}\dfrac{h_2(t,x-y)-h_2(t,x)}{w(\Vert y\Vert_p)}\, d^ny={\varkappa}\int_{\Qpn}\dfrac{h_2(t,x-y)}{w(\Vert y\Vert_p)}\, d^ny=\\
&={\varkappa}\int\limits_{\Qpn\backslash B_N}\dfrac{h_2(t,x-y)}{w(\Vert y\Vert_p)}\, d^ny={\varkappa}\int\limits_{\Qpn\backslash B_N}\dfrac{h_2(t,z)}{w(\Vert x-z\Vert_p)}\, d^nz=\\
&={\varkappa}\int\limits_{\Qpn\backslash B_N}\dfrac{h_2(t,z)}{w(\Vert z\Vert_p)}\, d^nz.
\end{align*}
Substituting the expression \eqref{h2*} for $h_2$, changing the order of integration and changing the variable on the last step, we have:
\begin{align*}
\big(W h_2\big)(t,x)&=\varkappa\int_{\Qpn \backslash B_N}\dfrac{d^ny}{w(\Vert y\Vert_p)}\int_{B_N}Z(t,y-\eta)\,\psi(\eta)\,d^n\eta=\\
\nonumber
&={\varkappa}\int_{B_N}\psi(\eta)\,d^n\eta \int_{\Qpn\backslash B_N} Z(t,y-\eta) \dfrac{d^ny}{w(\Vert y\Vert_p)}=\\
&=\varkappa\int_{B_N}\psi(\eta)\,d^n\eta\cdot \int_{\Qpn\backslash B_N} Z(t,\zeta) \dfrac{d^n\zeta}{w(\Vert \zeta\Vert_p)}.
\end{align*}
Thus for $x\in B_N$ \eqref{fin1} looks as follows:
$$
\eqref{fin1}=c^\pr(t)\int_{B_N}\psi(y)\,d^ny+e^{\varkappa \la_N t}\varkappa\int_{B_N}\psi(\eta)\,d^n\eta\cdot \int_{\Qpn\backslash B_N} Z(t,\zeta) \dfrac{d^n\zeta}{w(\Vert \zeta\Vert_p)},
$$
therefore it remains to show that
\begin{equation}\Label{cpr1}\
c^\pr(t)=-e^{\varkappa \la_N t} \varkappa \int_{\Qpn\backslash B_N}
\dfrac{Z(t,\zeta)}{w(\Vert \zeta\Vert_p)}\,{d^n\zeta}.
\end{equation}
Let us remark that due to \eqref{Zt1}
\begin{align}\Label{fin2}\ \nonumber
&\int\limits_{\Qpn\backslash B_N} Z(t,\zeta)\dfrac{d^n\zeta}{w(\Vert \zeta\Vert_p)}=\int\limits_{\Qpn\backslash B_N} \int\limits_{\Qpn}e^{-\varkappa t A_w(\xi)}\chi(-\zeta\cdot \xi)\,d^n\xi\,\dfrac{d^n\zeta}{w(\Vert \zeta\Vert_p)}
=\\
\nonumber
&= \int\limits_{\Qpn}e^{-\varkappa t A_w(\xi)}\int\limits_{\Qpn\backslash B_N}\chi(-\zeta\cdot \xi)\,\dfrac{d^n\zeta}{w(\Vert \zeta\Vert_p)}\,d^n\xi=\\
&= \int\limits_{\Vert \xi\Vert \leq p^{-N}}e^{-\varkappa t A_w(\xi)}\Big[
-A_w(\xi)+\la_N\Big]\,d^n\xi.
\end{align}
On the last step we used that, due to the expression \eqref{la-1} for $\la_N$, the representation \eqref{Aw} of $A_w(\xi)$ and Lemma \ref{Iz}, we have
\begin{align*}
\int\limits_{\Qpn\backslash B_N}\chi(-\zeta\cdot \xi)\,\dfrac{d^n\zeta}{w(\Vert \zeta\Vert_p)}&= \int\limits_{\Qpn\backslash B_N}\big[\chi(-\zeta\cdot \xi)-1\big]\,\dfrac{d^n\zeta}{w(\Vert \zeta\Vert_p)}+\la_N=\\
&=-A_w(\xi)+I_{B_N}(\xi)+\la_N=\left\{
\begin{array}{lcl}
-A_w(\xi)+\la_N&,\ \text{if} &\Vert \xi\Vert_p \leq p^{-N};\\
0&,\ \text{if} &\Vert \xi\Vert_p > p^{-N}.
\end{array}
\right.
\end{align*}
On the other hand, due to the representation \eqref{Zt1} for $Z(t,x)$, we have the following representation for $c(t)$:
\begin{align}\Label{fin3}\ \nonumber
c(t)&=p^{-nN}-e^{\varkappa\la_N t}p^{-nN}\int_{B_N}Z(t,x)\,d^nx=\\
\nonumber
&=p^{-nN}-e^{\varkappa\la_N t}p^{-nN}\int_{B_N}\int_{\Qpn}e^{-\varkappa t A_w(\xi)}\chi(-x\cdot \xi)\,d^n\xi\,d^nx=\\
&=p^{-nN}-e^{\varkappa\la_N t}\int\limits_{\Vert \xi\Vert_p\leq p^{-N}}e^{-\varkappa t A_w(\xi)}\,d^n\xi.
\end{align}
On the last step we used \eqref{formula11} (see also \cite[(7.14), p.~25]{VTab}), i.e.
$$\int_{B_N}\chi(\xi\cdot x)d^n x= p^{N n}\left\{
\begin{array}{lcl}
1,&\text{if} &\Vert \xi\Vert_p\leq p^{-N};\\
0,& &\text{otherwise}.
\end{array}
\right.$$
From \eqref{fin3} it follows that
\begin{equation}\Label{fin4}\
c^\pr(t)=-\varkappa \la_N e^{\varkappa \la_N t}
\int_{\Vert \xi\Vert_p\leq p^{-N}}e^{-\varkappa t A_w(\xi)}\,d^n\xi+e^{\varkappa \la_N t}\varkappa \int_{\Vert \xi\Vert_p\leq p^{-N}}A_w(\xi) e^{-\varkappa t A_w(\xi)} d^n\xi.
\end{equation}
Combining \eqref{fin4} with \eqref{fin2} we obtain the identity \eqref{cpr1}, which proves the statement of the theorem.
\end{proof}
\begin{rem}\rm Let us note that \eqref{ct} yields the following representation for $c^\pr(t)$:
\begin{align}\Label{ctpr}\
c^\pr(t)&=-\dfrac{\varkappa \la_N}{\mm(B_N)}\,e^{\varkappa\la_N t}\int\limits_{B_N}Z(t,x)\,d^nx-\dfrac{e^{\varkappa\la_N t}}{\mm(B_N)}\int\limits_{B_N}D_t Z(t,x)\,d^nx.
\end{align}
\end{rem}
\begin{lemma}\Label{lem:cpr}
\begin{equation}\Label{ctpr1}\
c^\pr(t)=\varkappa\, e^{\varkappa \la_N t}
\int\limits_{\Vert \xi\Vert_p\leq p^{-N}}e^{-\varkappa t A_w(\xi)}\big[A_w(\xi)- \la_N\big] \,d^n\xi.
\end{equation}
Moreover $c^\pr(0)=0.$
\end{lemma}
\begin{proof} Representation \eqref{ctpr1} follows from the formula \eqref{fin4}. To prove the second statement of the lemma, it is sufficient to show that $$\int\limits_{\Vert \xi\Vert_p\leq p^{-N}}A_w(\xi)\,d^n\xi=\la_N\int\limits_{\Vert \xi\Vert_p\leq p^{-N}} \,d^n\xi = \,p^{-nN} \,\la_N.$$
Similarly to Lemma \ref{lem:4-6} we have
\begin{align*}
&\int\limits_{\Vert z\Vert_p\,\leq\, p^{-N}}A_w(z)\,d^nz
=\int\limits_{\Vert y\Vert_p >p^N} \dfrac{d^ny}{w(\Vert y\Vert_p)}
\int\limits_{\Vert z\Vert_p\,\leq\, p^{-N}}\Big(1-\chi(y\cdot z)\Big)\,d^nz=\\
&=\int\limits_{\Vert y\Vert_p >p^N} \dfrac{d^ny}{w(\Vert y\Vert_p)}
\sum\limits_{j=-\infty}^{-N}\,\int\limits_{S_j}\Big(1-\chi(y\cdot z)\Big)\,d^nz=\\
&=\sum\limits_{k=N+1}^\infty\,\int\limits_{S_k} \dfrac{d^ny}{w(\Vert y\Vert_p)}
\sum\limits_{j=-\infty}^{-N}\,\int\limits_{S_j}\Big(1-\chi(y\cdot z)\Big)\,d^nz.
\end{align*}
Let $\Vert y\Vert_p=p^{\,k}$, $k\geq N+1$, then $y=p^{-k}y_0$, where $\Vert y_0\Vert_p=1$. Using change of variables formula \eqref{change} and \eqref{chi-1} we have
\begin{align}\Label{4-15-1}\
\nonumber
&\sum\limits_{j=-\infty}^{-N}\,\int_{S_j}\Big(1-\chi(y\cdot z)\Big)\,d^nz=\sum\limits_{j=-\infty}^{-N}\,\int_{S_j}\Big(1- \chi(p^{-k}y_0\cdot z)\Big)\,d^nz=\\
\nonumber
&=\sum\limits_{j=-\infty}^{-N}\,p^{-kn}\int_{S_{j+k}}\Big(1-\chi(y_0\cdot v)\Big)\,d^nv=\\
\nonumber
&=\sum\limits_{\ell =-\infty}^{k-N}\,p^{-kn}\int_{S_{\ell}}\Big(1-\chi(y_0\cdot v)\Big)\,d^nv=\\
&=\left\{
\begin{array}{lcl}
\sum\limits_{\ell =-\infty}^{0}\,p^{-kn}\big[(1-p^{-n})p^{n\ell}-p^{n\ell} (1-p^{-n})\big]=0,&\ &\ell \leq\ 0;\\
p^{-kn}\big[(1-p^{-n})p^{n}+p^np^{-n}\big]=p^{-kn}p^n,&\ &\ell =1;\\
\sum\limits_{\ell =2}^{k-N}\ p^{-kn}p^{n\ell}(1-p^{-n}),&\ &\ell >1.
\end{array}
\right.
\end{align}
Finally, for $y$ such that $\Vert y\Vert_p=p^k$, $k\geq N+1$, from \eqref{4-15-1} we have
\begin{align*}
&\sum\limits_{j=-\infty}^{-N}\,\int_{S_j}\Big(1- \chi(y\cdot z)\Big)\,d^nz=\sum\limits_{\ell =2}^{k-N}\,p^{-kn}\,p^{\ell n}(1-p^{-n}) \,+\,p^{-kn}p^n=\\
&=
p^{-kn}\,\Big(\sum\limits_{\ell =1}^{k-N}p^{\ell n}-\sum\limits_{\ell =2}^{k-N}p^{\ell n}
p^{-n}\Big)=p^{-kn}\,\Big(\sum\limits_{\ell =1}^{k-N}p^{n\ell}-\sum\limits_{s =1}^{k-N-1}p^{ns}
\Big)=\\
&=p^{-kn}\,p^{n(k-N)}=p^{-nN}.
\end{align*}
Thus
\begin{align*}
&\int\limits_{\Vert z\Vert_p\,\leq\, p^{-N}}A_w(z)\,d^nz=\sum\limits_{k=N+1}^\infty\,\int\limits_{S_k} \dfrac{d^ny}{w(\Vert y\Vert_p)}
\sum\limits_{j=-\infty}^{-N}\,\int\limits_{S_j}\Big(1-\chi(y\cdot z)\Big)\,d^nz=\\
&=\,p^{-nN}\sum\limits_{k=N+1}^\infty\,\int\limits_{S_k} \dfrac{d^ny}{w(\,p^{\,k})}\,
=\,p^{-nN}(1-p^{-n})\sum\limits_{k=N+1}^\infty\,\dfrac{p^{nk}}{w(\,p^{\,k})}=\,p^{-nN} \,\la_N,
\end{align*}
where $\la_N$ is given by \eqref{la}.
\end{proof}
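The identity $\int_{\Vert z\Vert_p\leq p^{-N}}A_w(z)\,d^nz=p^{-nN}\la_N$ established above can be confirmed numerically as well; in the following sketch (Python; model weight and parameters as in our previous illustrations) the integral is computed as a sum over spheres:
\begin{verbatim}
p, n, alpha, N = 3, 2, 3.5, 1
q = p ** (n - alpha)

def A(gamma: int) -> float:
    """A_w(z) for ||z||_p = p^(-gamma), in closed form for w(r) = r^alpha."""
    return (1 - p**(-n)) * q**(gamma + 2) / (1 - q) + q**(gamma + 1)

lam_N = (1 - p**(-n)) * q**(N + 1) / (1 - q)
# integral over {||z||_p <= p^{-N}} as the sum of mu(S_j) * A_w over j <= -N:
lhs = sum((1 - p**(-n)) * p**(n*j) * A(-j) for j in range(-N, -N - 300, -1))
assert abs(lhs - p**(-n*N) * lam_N) < 1e-12
\end{verbatim}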
\begin{lemma}The function $Z_N(t,x)$ is non-negative, and
\begin{equation}\Label{Z1}\
\int_{B_N}Z_N(t,x)\, d^nx=1.
\end{equation}
\end{lemma}
\begin{proof} From \eqref{Zr} and \eqref{ct} we have
\begin{align*}
\int_{B_N}Z_N(t,x)\,d^nx &= e^{\varkappa\la_N t}\int_{B_N}Z(t,x)\,d^nx +\mm(B_N)\,c(t)=\\
&=e^{\varkappa\la_N t}\int_{B_N}Z(t,x)\,d^nx +1-e^{\varkappa\la_N t}\int_{B_N}Z(t,x)\,d^nx=1.
\end{align*}
The non-negativity of the function $Z_N(t,x)$ follows from its probabilistic meaning as the transition density of the process $\eta_t$ \eqref{etat}.
\end{proof}
On the ball $B_N$, $N\in \ZZ$, let us consider the Cauchy problem \eqref{Cauchy}. Its fundamental solution $Z_N(t,x)$ \eqref{Zr} defines a contraction semigroup
\[(T_N(t)u)(x)=\int_{B_N} Z_N(t, x-\xi)\, u(\xi)\, d^n\xi\]
on $L^1(B_N)$.
\begin{lemma}\Label{l5-4}\
The semigroup $T_N(t)$ is strongly continuous in $L^1(B_N)$.
\end{lemma}
\begin{proof} For $u\in L^1(B_N)$ we may write $\Vert T_N(t)u-u\Vert_{L^1(B_N)}\leq I_1(t)+I_2(t),$
where
\begin{align*}
I_1(t)&=\int_{B_N} \Bigg\vert \int_{B_N}e^{\varkappa \la_N t}Z(t,x-\xi)\,u(\xi)\,d^n\xi - u(x)\Bigg\vert\, d^nx;\\
I_2(t)&= p^{nN}\,\vert c(t)\vert \int_{B_N} \vert u(\xi)\vert \,d^n\xi.
\end{align*}
Using the representation \eqref{Zt1} and formula \eqref{formula11}, from \eqref{ct} we have
\begin{align}\ \Label{gtt0}\ \nonumber
c(t)&=p^{-nN}-e^{\varkappa\,\la_N t}p^{-nN}\int_{\Qpn}e^{-\varkappa t A_w(\xi)}\int_{B_N}\chi(-x\cdot \xi)\,d^nx\,d^n\xi=\\
&=p^{-nN}-e^{\varkappa\la_N t}\int\limits_{\Vert \xi\Vert_p\leq p^{-N}}e^{-\varkappa t A_w(\xi)}\,d^n\xi \to 0,\ \ \text{\rm as}\ \ t\to 0.
\end{align}
Therefore $I_2(t)\to 0$ as $t\to 0$.
For small values of $t$ we write
\begin{align*}
I_1(t)&=\int\limits_{B_N}\Bigg\vert\int\limits_{B_N}Z(t,x-\xi)u(\xi)\, d^n\xi - u(x)+\int\limits_{B_N}\big(e^{\varkappa \la_N t}-1\big)Z(t,x-\xi)u(\xi)\,d^n\xi \Bigg\vert\, d^nx\leq\\
&\leq \int\limits_{B_N}\Bigg\vert\int\limits_{B_N}Z(t,x-\xi)\,u(\xi)\, d^n\xi - u(x)\Bigg\vert\,d^nx+Ct\int\limits_{B_N}\int\limits_{B_N}Z(t,x-\xi)\vert u(\xi) \vert\, d^n\xi\,d^nx=\\
&=J_1(t)+J_2(t).
\end{align*}
By the Young inequality using the identity \eqref{Zt5}, extending $u$ by zero to a function $\w u$ on $\Qpn$, we obtain
\[
J_2(t)\leq Ct \int_{\Qpn}\int_{\Qpn}Z(t,x-\xi)\,\vert \w u(\xi) \vert\, d^n\xi\,d^nx\leq Ct\, \Vert \w u\Vert_{L^1(\Qpn)}=Ct\,\Vert u\Vert_{L^1(B_N)}\to 0,\ \ \text{\rm as}\ \ t\to 0.
\]
Moreover by the $C_0$-property of $T(t)$ stated in Lemma \ref{l3-1} we have
\begin{align*}
J_1(t)&= \int_{B_N}\Bigg\vert\,\int_{\Qpn}Z(t,x-\xi)\,\w u(\xi)\, d^n\xi - \w u(x)\Bigg\vert\,d^nx\leq\\
&\leq \int\limits_{\Qpn}\Bigg\vert\,\int\limits_{\Qpn}Z(t,x-\xi)\,\w u(\xi)\, d^n\xi - \w u(x)\Bigg\vert\,d^nx=\Vert T(t)\w u- \w u\Vert_{L^1(\Qpn)}\to 0,
\end{align*}
as $t\to 0$.
\end{proof}
Let $\fA_N$ denote the generator of the contraction semigroup $T_N(t)$ in $L^1(B_N)$. Let us also introduce the operator $W_N$ understood in the sense of $\cD^\pr(B_N)$: a function on $B_N$ is extended by zero to a function on $\Qpn$, $W$ is applied to it in the distribution sense, and the resulting distribution is restricted to $B_N$.
\begin{prop}\Label{prop5-25-1}\ Let the operator $\fA$ be the generator of the semigroup $T(t)$ in the space $L^1(\Qpn)$ with the domain $Dom (\fA)$. Then for the restriction $\psi_N$ of the function $\psi\in Dom (\fA)$ to the ball $B_N$ we have on $B_N$:
\begin{equation}\Label{5-25-1}\
\big(\fA\psi\big)\!\!\upharpoonright_{B_N} = W_N\psi_N + R_N,
\end{equation}
where $R_N = R_N (\psi - \psi_N)$ is the constant from Lemma \ref{5*}.
\end{prop}
\begin{proof} To prove this statement we remark that a function $\psi \in Dom\, (\fA)$ may be represented as $\psi = \psi_N+(\psi-\psi_N)$, and by Lemma \ref{5*}, on $B_N$, we may write $
\big(\fA\psi\big)\!\!\upharpoonright_{B_N} = W_N\psi_N + R_N, \ \text{where}\ R_N = R_N (\psi - \psi_N).$
\end{proof}
\begin{theorem} If $\psi \in Dom \,(\fA)$ in $L^1(\Qpn)$ (Definition \ref{def:3-2}), then the restriction $\psi_N$ of the function $\psi$ to $B_N$ belongs to $Dom \,(\fA_N)$ and
\begin{equation}\Label{Th5-5}\
\fA_N\psi_N=\big( W_N-\varkappa \la_N\big) \psi_N,
\end{equation}
where $W_N \psi_N$ is understood in the sense of $\cD^\pr(B_N)$, that is, $\psi_N$ is extended by zero to a function on $\Qpn$, $W$ is applied to it in the distribution sense, and the resulting distribution is restricted to $B_N$.
\end{theorem}
\begin{proof}
For $\psi \in Dom \,(\fA)$ we have to check that:
\begin{itemize}
\item[1)] $W_N\psi_N\in L^1(B_N)$;
\item[2)] $\Big\Vert -\dfrac{1}{t}\big[T_N(t)\psi_N -\psi_N\big]-\big(W_N-\varkappa \la_N\big)\psi_N\Big\Vert_{L^1(B_N)}\to 0$, as $t\to 0+$.
\end{itemize}
To prove the first statement we remark that for a function $\psi \in Dom\, (\fA)$ Proposition \ref{prop5-25-1} implies $\big(\fA\psi\big)\!\!\upharpoonright_{B_N} = W_N\psi_N + R_N$, and thus $W_N\psi_N \in L^1(B_N)$.
Further, from \eqref{Zr}, expanding exponent in the Taylor series, we have
\begin{align}\Label{5-25}\
\big(T_N(t) \psi_N\big)(x)&=\int_{B_N}Z(t,x-y)\psi(y)\, d^n y+c(t)\int_{B_N}\psi(y)\, d^ny +\\
\nonumber
&+\varkappa \,\la_N t \int_{B_N} Z(t,x-y)\psi(y)\, d^n y+d(t)\int_{B_N} Z(t,x-y)\psi(y)\,d^n y,
\end{align}
where $d(t)=O(t^2)$, $t\to 0$. By strong continuity of $T_N(t)$ in $L^1(B_N)$ (see Lemma \ref{l5-4}) we have
\[\Bigg\Vert -\varkappa \la_N \int\limits_{B_N}Z(t,x-y)\psi(y)\, d^n y +\varkappa\la_N \,\psi_N(x)\Bigg\Vert_{L^1(B_N)}\to 0,\ \ \text{\rm as}\ \ t\to 0.\]
Moreover from Young inequality it follows that
\[\dfrac{1}{t}\Bigg\Vert d(t)\int\limits_{B_N}Z(t,x-y)\psi(y)\,d^ny\,\Bigg\Vert_{L^1(B_N)}\to 0,\ \ \text{\rm as}\ \ t\to 0.\]
From \eqref{gtt0} it follows that $c(0)=0$. Moreover, Lemma \ref{lem:cpr} implies that $c^\pr(0)=0$; thus $c(t)=o(t)$ as $t\to 0$, and the second term in \eqref{5-25} is negligible after division by $t$. Therefore it remains to consider the first term in \eqref{5-25}, that is
\[V(t,x)=\int_{B_N}Z(t, x-y)\,\psi(y)\, d^n y = v_1(t,x)-v_2(t,x),\ \ x\in B_N,\]
where
\begin{align*}
&v_1(t,x)=\int_{\Qpn}Z(t,x-y)\,\psi(y)\,d^ny,\\
&v_2(t,x)=\int\limits_{\Vert y\Vert_p >p^N}Z(t,x-y)\,\psi(y)\,d^ny.
\end{align*}
If we show that
\begin{equation}\Label{v2}\
\dfrac{1}{t} v_2(t,x) - R_N \to 0,\ \ \text{as}\ \ t\to 0,
\end{equation}
then this ends the proof. Indeed, taking the above argument and \eqref{5-25-1} into account, we may write:
\begin{align*}
&\Big\Vert -\dfrac{1}{t}\big[T_N(t)\psi_N -\psi_N\big]-\big(W_N-\varkappa \la_N\big)\psi_N\Big\Vert_{L^1(B_N)}\leq\\
&\leq \Big \Vert -\dfrac{1}{t} \Big( \int_{\Qpn}Z(t,x-y)\, \psi (y)\, d^ny - v_2(t,x)-\psi\Big)-W_N\psi_N \Big\Vert_{L^1(B_N)} + o(1) =\\
&= \Big \Vert -\dfrac{1}{t} \Big( \int_{\Qpn}Z(t,x-y)\, \psi(y)\, d^ny -\psi\Big)-(W_N\psi_N +R_N) \Big\Vert_{L^1(B_N)} + o(1) =\\
&= \Big \Vert -\dfrac{1}{t} \big( T(t)\psi -\psi\big)-W\psi \Big\Vert_{L^1(B_N)} + o(1) \to 0,\ \ \text{as}\ \ t\to 0,
\end{align*}
which completes the proof.
Let us show \eqref{v2}. To do this we write
\begin{equation}\Label{5-28}\
\dfrac{1}{t}v_2(t,x)-R_N =\int_{\Vert y\Vert_p >p^N}\Big(\dfrac{1}{t}Z(t,y)-\dfrac{\are}{w(\Vert y\Vert_p)}\Big)\psi(y) \,d^ny,
\end{equation}
where $\psi\in Dom\, (\fA)$ in $L^1(\Qpn)$.
If we show that
\begin{equation}\Label{Ztare}\
Z(t,y) = \dfrac{\are t}{w(\Vert y\Vert_p)} +O(t^2),
\end{equation}
then this will prove \eqref{v2}.
Due to \eqref{Zt7} for $\Vert y\Vert_p=p^\be$ we have
\[Z(t,y) =\Vert y\Vert_p^{-n}\Bigg[(1-p^{-n})\sum\limits_{j=0}^\infty p^{-nj}e^{-\are t A_w(p^{-(\be+j)})}-e^{-\are tA_w(p^{-(\be -1)})}\Bigg].\]
Then expanding the exponents in the Taylor series we have
\begin{align*}
&e^{-\are t A_w(p^{-(\be+j)})}=1-\are t A_w(p^{-(\be+j)})+r_1(t);\\
&e^{-\are t A_w(p^{-(\be-1)})}=1-\are t A_w(p^{-(\be-1)})+r_2(t),
\end{align*}
where, for example, $r_2(t)$ is the remainder in the Taylor expansion of the function $f(t)=e^{-\are tA_w(\Vert y\Vert_p^{-1} p)}$:
\begin{align*}
&r_2 (t)= \dfrac{t^2f^{\pr\pr}(\theta t)}{2}= \dfrac{\are^2 t^2}{2}\,\big[A_w(\Vert y\Vert_p^{-1}p)\big]^2e^{-\are \theta t A_w(\Vert y\Vert_p^{-1} p)}, \ \theta \in (0,1).
\end{align*}
Thus, due to \eqref{Aw1},
\[\vert r_2(t)\vert\leq C\,t^2\big[\Vert y\Vert^{-1}_p\,p\big]^{2(\al - n)},\ \ \text{for}\ \Vert y\Vert_p=p^\be.\]
Therefore $r_2(t)=O(t^2)$ as $t\to 0$, and similarly for $r_1(t)$; thus
\begin{align*}
Z(t,y)&=\Vert y\Vert_p^{-n}\Bigg[(1-p^{-n})\sum\limits_{j=0}^\infty p^{-nj}\big(1-\are t A_w(p^{-(\be+j)})\big)-1+\are t A_w(p^{-(\be-1)})\Bigg]+ O(t^2)=\\
&=\Vert y\Vert_p^{-n}\are\,t\Big[A_w(p^{-(\be-1)})-(1-p^{-n})\sum\limits_{j=0}^\infty p^{-nj} A_w(p^{-(\be+j)})\Big]+ O(t^2).
\end{align*}
Using representation \eqref{eq:Awrep-1} we may write
\begin{align}\Label{5-30}\
Z(t,y)&=\Vert y\Vert_p^{-n}\are\,t\Big[A+B-C-D\Big]+ O(t^2),\\
\intertext{where}
\nonumber
A&=(1-p^{-n})\sum\limits_{k=\be+1}^\infty\dfrac{p^{nk}}{w(p^{\,k})};\quad B=\dfrac{p^{n\be}}{w(p^\be)};\\
\nonumber
C&= (1-p^{-n})\sum\limits_{j=0}^\infty p^{-nj}(1-p^{-n}) \sum\limits_{k=\be+j+2}^\infty\dfrac{p^{nk}}{w(p^{\,k})};\\
\nonumber
D&= (1-p^{-n})\sum\limits_{j=0}^\infty p^{-nj}\dfrac{p^{n(\be+j+1)}}{w(p^{\be+j+1})}.
\end{align}
Let us change the order of summation in the term $C$ and evaluate the finite geometric sums; then we have
\begin{align*}
C&=(1-p^{-n})^2\sum\limits_{k=\be+2}^\infty\sum\limits_{j=0}^{k-\be-2}p^{-nj}\dfrac{p^{nk}}{w(p^{\,k})}=(1-p^{-n})\sum\limits_{k=\be+2}^\infty\dfrac{p^{nk}}{w(p^{\,k})}\big(1-p^{-n(k-\be-1)}\big)=\\
&=(1-p^{-n})\sum\limits_{k=\be+2}^\infty\dfrac{p^{nk}}{w(p^{\,k})}-(1-p^{-n})p^{n(\be+1)}\sum\limits_{k=\be+2}^\infty\dfrac{1}{w(p^{\, k})}=C_1-C_2;\\
D&=(1-p^{-n})p^{n(\be+1)}\sum\limits_{j=0}^\infty\dfrac{1}{w(p^{\be+j+1})}=(1-p^{-n})p^{n(\be+1)}\sum\limits_{k=\be+1}^\infty\dfrac{1}{w(p^{\, k})}.
\end{align*}
Looking at \eqref{5-30} we see that
\begin{align*}
&A-C_1 = (1-p^{-n})\dfrac{p^{n(\be+1)}}{w(p^{\,\be+1})};\\
&C_2-D = -(1-p^{-n})\dfrac{p^{n(\be+1)}}{w(p^{\,\be+1})},
\end{align*}
which cancel each other, so that only the term $B$ remains, i.e.
\[Z(t,y)=\Vert y\Vert_p^{-n}\are\,t\,\dfrac{p^{n\be}}{w(p^{\be})}+ O(t^2)=\dfrac{\are t}{w(\Vert y\Vert_p)}+ O(t^2),\ \text{for}\ \Vert y\Vert_p=p^\be,\]
which proves \eqref{Ztare} and therefore \eqref{v2}.
\end{proof}
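The asymptotics \eqref{Ztare} obtained in the proof is easy to observe numerically; the following sketch (Python; the model weight $w(\Vert y\Vert_p)=\Vert y\Vert_p^{\,\al}$ and the parameter values are our illustrative choices) compares the ratio $Z(t,y)/t$ with $\are/w(\Vert y\Vert_p)$ for decreasing $t$:
\begin{verbatim}
import math

p, n, alpha, kappa, beta = 3, 2, 3.5, 1.0, 2    # ||y||_p = p^beta
q = p ** (n - alpha)

def A(gamma: int) -> float:
    return (1 - p**(-n)) * q**(gamma + 2) / (1 - q) + q**(gamma + 1)

def Z(t: float, J: int = 300) -> float:
    """Z(t,y) for ||y||_p = p^beta via the series (Zt7)."""
    s = sum(p**(-n*j) * math.exp(-kappa*t*A(beta + j)) for j in range(J))
    return p**(-n*beta) * ((1 - p**(-n)) * s - math.exp(-kappa*t*A(beta - 1)))

limit = kappa / p**(alpha*beta)                 # kappa / w(||y||_p)
for t in (1e-1, 1e-2, 1e-3, 1e-4):
    print(t, Z(t) / t, limit)                   # Z(t,y)/t -> kappa/w
\end{verbatim}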
\section{Main result}
To formulate the main result let us first recall the notion of a mild solution of a nonlinear equation in a real Banach space $X$.
\subsection{Mild solution}
Consider the Cauchy problem
\begin{equation}\Label{4-1Barbu}
\left\{
\begin{array}{lc}
D_tu+Au = f(t),&t\in[0,T];\\
u(0)=u_0,&
\end{array}
\right.
\end{equation}
where $u_0\in X$ and $f\in L^1([0,T]; X)$, and $A$ is an $m$-accretive nonlinear operator.
An operator $A\colon X\to X$ is called \bfi{accretive} if for every pair $x,y\in Dom\, (A)$
\[\langle Ax-Ay, w\rangle \geq 0\]
for some $w\in J(x-y)$, where $J\colon X\to X^*$ is the duality mapping of the space $X$. Correspondingly, an accretive operator $A$ is called \bfi{$m$-accretive} if the range $Ran\,(I+A)=X$.
\begin{definition} Let $f\in L^1([0,T];X)$ and $\vep > 0$ be given. An \bfi{$\vep$-discretization} on $[0,T]$ of the equation $D_t y +Ay = f$ consists of a partition $0 = t_0\leq t_1 \leq t_2 \leq \ldots\leq t_N$ of the
interval $[0, t_N]$ and a finite sequence $\{f_i\}_{i=1}^N\subset X$
such that $
t_i-t_{i-1}<\vep\ \ \text{for}\ \ i=1,\ldots, N,\ \ T-\vep < t_N \leq T
$ and
\[\sum\limits_{i=1}^N \,\int\limits_{t_{i-1}}^{t_i}\Vert f(s)-f_i\Vert \, ds <\vep.\]
\end{definition}
\begin{definition} A piecewise constant function $z\col [0,t_N]\to X$ whose values $z_i$ on $(t_{i-1}, t_i]$ satisfy the finite difference equation
$$
\dfrac{z_i-z_{i-1}}{t_i-t_{i-1}}+Az_i=f_i,\ \ \ i=1,\ldots, N
$$
is called an \bfi{$\vep-$approximate solution} to the Cauchy problem \eqref{4-1Barbu} if it satisfies
$$
\Vert z(0)-u_0\Vert \leq \vep.
$$
\end{definition}
\begin{definition}
\bfi{A mild solution} of the Cauchy problem \eqref{4-1Barbu} is a function $u\in C([0,T]; X)$ with the property that for each $\vep >0$ there is an \bfi{$\vep$-approximate solution} $z$ of $D_tu +Au = f$ on $[0,T]$ such that $\Vert u(t)- z(t)\Vert \leq \vep $ for all $t\in [0,T]$ and $u(0)=u_0$.
\end{definition}
See \cite[Ch. 4]{Barbu:book} for the details.
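To make the scheme behind these definitions concrete, here is a minimal numerical sketch (in Python) of an $\vep$-approximate solution on a uniform partition for the toy choice $X=\RR$, $A(z)=z^3$ (a monotone, hence accretive, operator on the real line) and $f\equiv 0$; the implicit step is solved by bisection, and all names and parameter values are our illustrative choices:
\begin{verbatim}
def A(z: float) -> float:
    return z ** 3                    # monotone on R, hence accretive

def implicit_step(z_prev: float, h: float, f_i: float) -> float:
    """Solve z + h*A(z) = z_prev + h*f_i (the left side is increasing)."""
    target = z_prev + h * f_i
    lo, hi = -abs(target) - 1.0, abs(target) + 1.0
    for _ in range(200):             # bisection
        mid = 0.5 * (lo + hi)
        if mid + h * A(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

T, M, u0 = 1.0, 1000, 1.0            # t_i = i*T/M, so eps is of order T/M
h, z = T / M, u0
for i in range(1, M + 1):            # (z_i - z_{i-1})/h + A(z_i) = f_i
    z = implicit_step(z, h, 0.0)
print(z, (1 + 2 * T) ** -0.5)        # compare with the exact solution of
                                     # u' + u^3 = 0, u(0) = 1, at t = T
\end{verbatim}
As $\vep\to 0$ such piecewise constant approximations converge uniformly to the mild solution; this is the content of the Crandall--Liggett theory invoked below.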
\subsection{Solvability of the nonlinear equation}
Let us consider in $L^1(\Qpn)$ the equation
\begin{equation}\Label{6-1}\
D_t u+\fA \big(\vph(u)\big)=0,\ \ u=u(t,x),\ t>0,\ x\in\Qpn,
\end{equation}
where $\fA$ is the generator of the semigroup $T(t)$ in $L^1(\Qpn)$ and $\vph\col \RR\to \RR$ is a continuous strictly increasing function, $\vph(0)=0$, such that:
$$
\vert \vph (s)\vert \leq C\, \vert s\vert^m,\ \ m\geq 1.
$$
Consider the nonlinear operator $\fA\vph$ with the domain
$$
Dom\, (\fA\vph) =\{u\in L^1(\Qpn)\col \vph(u)\in Dom\,(\fA)\}.
$$
From Lemma \ref{lem3-2} it follows that the operator $\fA \vph$ is densely defined, and therefore its closure $\overline{\fA \vph}$ has the same property.
\begin{theorem} The operator $\overline{\fA \vph}$ is $m$-accretive; in particular, for any initial function $u_0\in L^1(\Qpn)$ the Cauchy problem for equation \eqref{6-1} has a unique mild solution.
\end{theorem}
\begin{proof} The statement of the theorem is a consequence of the Crandall-Liggett theorem \cite[Theorem 4.3]{Barbu:book}. Indeed,
from Proposition 1 in \cite{CrP} it follows that $(\fA\vph)(u)=\fA(\vph(u))$ is an accretive nonlinear operator in $L^1(\Qpn)$ and for any $\vep >0$ the operator $(\vep I+\fA)\vph$ is $m$-accretive in $L^1(\Qpn)$.
Therefore, in order to prove the $m$-accretivity of $\overline{\fA \vph}$, it is sufficient to prove that the operator $I+\fA\vph$ has a dense range in $L^1(\Qpn)$. In other words, it suffices to prove that the equation
\[u+\fA\vph(u)=f\]
is solvable for a dense subset of functions $f\in L^1(\Qpn)$. Equivalently, setting $\be = \vph^{-1}$ (the function inverse to $\vph$), we have to study the equation
\begin{equation}\Label{v}\
\fA v+\be(v)=f.
\end{equation}
Since the space of test functions $\cD(\Qpn)$ is dense in $L^1(\Qpn)$, it is enough to prove the solvability of equation \eqref{v} for any $f\in L^1(\Qpn)\cap L^\infty(\Qpn)$.
For such a function $f$ we consider the regularized equation to \eqref{v}:
\begin{equation}\Label{ve}\
\vep v_\vep +\fA v_\vep+\be(v_\vep)=f, \quad \vep >0,
\end{equation}
possessing, due to Proposition 4 in \cite[p.\,571]{BrSt:1973} a unique solution $v_\vep$, such that $w_\vep = f-\fA v_\vep$ satisfies the inequality:
\begin{equation}\Label{2-17K}\
\Vert w_\vep\Vert_{L^1(\Qpn)}\leq \Vert f\Vert_{L^1(\Qpn)}.
\end{equation}
Moreover, if $\w{v}_\vep$ and $\w{w}_\vep$ correspond to equation \eqref{ve} with the right-hand side $\w{f}$ in place of $f$, then
\begin{equation}\Label{2-18K}\
\Vert w_{\vep}-\w{w}_\vep\Vert_{L^1(\Qpn)}\leq\Vert f - \w f\,\Vert_{L^1(\Qpn)}.
\end{equation}
In addition, if $f\in L^1(\Qpn)\cap L^\infty(\Qpn)$, then, due to Proposition 4 in \cite{BrSt:1973} applied in the space $L^q(\Qpn)=L^\infty(\Qpn)$, we have
\begin{equation}\Label{2-19K}\
\Vert f - (\fA+\vep)v_\vep\Vert_{L^\infty(\Qpn)}=\Vert \be(v_\vep)\Vert_{L^\infty(\Qpn)}\leq \Vert f\Vert_{L^\infty(\Qpn)}.
\end{equation}
Using inequality \eqref{2-19K} we find that
\begin{equation}\Label{5-2}\
\vert v_\vep(x)\vert \leq \be^{-1}(\Vert f\Vert_{L^\infty(\Qpn)})
\end{equation}
for almost all $x\in\Qpn$. This means that for any fixed $N$ the constant $R_N(v_\vep)$ from \eqref{Rr} satisfies the inequality:
\[R_N(v_\vep)\leq C,\]
where $C$ does not depend on $\vep$, so that the set of constant functions $\{R_N(v_\vep), 0 < \vep < 1\}$ is relatively compact in $L^1(B_N)$.
On the other hand, it follows from \eqref{2-17K}, \eqref{2-18K} and the translation invariance of $\fA$ that the family of functions $w_\vep = f-\fA v_\vep$ satisfies the inequalities:
\begin{align}\Label{5-3K}\
&\Vert w_\vep\Vert_{L^1(\Qpn)}\leq \Vert f\Vert_{L^1(\Qpn)};\\
\Label{5-4K}\
&\int\limits_{\Qpn}\vert w_\vep(x+h)-w_\vep(x)\vert\, d^nx\leq\int\limits_{\Qpn}\vert f(x+h) -f(x)\vert\, d^nx
\end{align}
for any $h\in\Qpn$.
The conditions \eqref{5-3K} and \eqref{5-4K} imply the relative compactness of the family $\{w_\vep\}$, and therefore of $\{\fA v_\vep\}$, in $L^1_{\text{loc}}(\Qpn)$, that is, the compactness of the closure of the restriction $(\fA v_\vep)\big\vert_X$ for any bounded measurable subset $X\subset \Qpn$. This is a consequence of the criterion for relative compactness in $L^1(G)$, where $G$ is a compact group (see Theorem 4.20.1 in \cite{Edwards}), applied to the case $G=B_N$ (the additive group of the $p$-adic ball).
Denote by $v_{\vep,N}$ the restriction of $v_\vep$ to $B_N$. From Proposition \ref{prop5-25-1} it follows that
\[W_N\psi_N = \fA\psi - R_N\]
therefore the set $\{W_N v_{\vep,N}\}$ is relatively compact in $L^1(B_N)$. Since $W_N =\fA_N +\are \la_N$, defined as in \eqref{Th5-5}, has a bounded inverse on $L^1_{loc}(\Qpn)$ due to the Hille-Yosida theorem, this implies the relative compactness of $\{v_{\vep,N}\}$ in $L^1(B_N)$ for each $N$. The same is true for $\{v_\vep\}$ in $L^1_{loc}(\Qpn)$. Let $v$ be its limit point. Together with the relative compactness of $\{\fA v_\vep\}$, the above reasoning proves the solvability of \eqref{v}, because by Fatou's lemma and \eqref{5-3K} a limit point of $\{\fA v_\vep\}$ belongs to $L^1(\Qpn)$. Therefore $\be(v)\in L^1(\Qpn)$. By \eqref{5-2}, $v\in L^\infty (\Qpn)$, so that $\be(v)\in L^\infty(\Qpn)$, $v=\vph(\be(v))$, $\vert v(x)\vert \leq C\, \vert \be (v)\vert$, and $v$ belongs to $L^1(\Qpn).$
\end{proof}
\section*{Acknowledgments}
The work by the first- and third-named authors was funded in part under the budget program of Ukraine No. 6541230 ``Support to the development of priority research trends''. The third-named author was also supported in part in the framework of the research work ``Markov evolutions in real and $p$-adic spaces'' of the Dragomanov National Pedagogical University of Ukraine.
Consider the two-dimensional Euler equation for the vorticity
\begin{equation}
\dot\theta=\nabla\theta \cdot \psi, \quad \psi=\nabla^\perp
\Delta^{-1}\theta, \quad \theta(x,y,0)=\theta_0(x,y) \label{euler}
\end{equation}
and $\theta$ is $2\pi$-periodic in both $x$ and $y$ (that is, the
equation is considered on the torus $\mathbb{T}^2$). We assume that
$\theta_0$ has zero average over $\mathbb{T}^2$ and then
$\Delta^{-1}$ is well-defined since the Euler flow is
area-preserving and the average of $\theta(\cdot,t)$ is zero as
well. Denote the operator of Euler evolution by $\mathcal{E}_t$, i.e.
\[
\theta(t)=\mathcal{E}_t\theta_0
\]
The global existence of the smooth solution for smooth
initial data is well-known and is due to Wolibner \cite{wol} (see
also \cite{March}). The estimate on the possible growth of the
Sobolev norms, however, is double exponential. We sketch the proof
of this bound for the $H^2$-norm. The estimates for $H^s$, $s>2$, can be
obtained similarly. More general results on regularity can be found
in \cite{const}. Let
\[
j_k(t)=\|\theta(t)\|_{H^k}
\]
\begin{lemma}If $\theta$ is the smooth solution of (\ref{euler}),
then
\begin{equation}
j_2(t)\leq \exp\Bigl(\frac{(1+2\log^+
j_2(0))\exp(C\|\theta_0\|_\infty t)-1}{2}\Bigr)\label{upper1}
\end{equation}
\end{lemma}
\begin{proof}
Acting on (\ref{euler}) with Laplacian we get
\[
\Delta \dot\theta=\Delta\theta_x \psi_y+2\nabla\theta_x\cdot \nabla
\psi_y-\Delta\theta_y\psi_x-2\nabla\theta_y\cdot \nabla \psi_x
\]
Multiply by $\Delta \theta$ and integrate over $\mathbb{T}^2$ to get
\begin{equation}
\partial_t\|\theta\|_{H^2}^2\lesssim \|H(\psi)\|_\infty
\|\theta\|_{H^2}^2 \label{est1}
\end{equation}
where $H(\psi)$ denotes the Hessian of $\psi$. The next inequality
follows from the Littlewood-Paley decomposition (see \cite{const},
Proposition~1.4, for a more general result)
\begin{equation}
\|H(\psi)\|_\infty<C(\sigma) \|\theta\|_\infty(1+\log^+
\|\theta\|_{H^\sigma})\label{est2}
\end{equation}
for any $\sigma>1$. Notice that $\|\theta\|_\infty$ is invariant
under the flow, so combining (\ref{est1}) and (\ref{est2}) we get
(\ref{upper1}).
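In more detail, the last step is an elementary Gronwall argument (a sketch, ignoring the region $j_2<1$ and not tracking the constants precisely): setting $a(t)=1+\log^+ j_2(t)$, the inequalities (\ref{est1}) and (\ref{est2}) with $\sigma=2$ give
\[
\dot a\lesssim \|\theta_0\|_\infty\, a, \quad\text{hence}\quad a(t)\leq a(0)\, e^{C\|\theta_0\|_\infty t},
\]
and exponentiation yields (\ref{upper1}) after adjusting the constant $C$.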
\end{proof}
{\bf Remark 1.} In the same way one can prove bounds for higher
Sobolev norms, e.g.,
\begin{equation} \log j_4(t)\lesssim (1+\log^+
j_4(0))\exp(C\|\theta_0\|_\infty t)-1\label{upper2}
\end{equation}
The natural questions one can ask then are the following: first, how
fast can the Sobolev norms grow in time and what is the mechanism
that leads to their growth? Secondly, for fixed $t$, how does $\|
\mathcal E (t) \theta_0\|_{H^s}$ depend on $\|\theta_0\|_{H^s}$ when the
last expression grows to infinity? For example, given
$\|\theta_0\|_\infty \sim 1$, the right-hand side in (\ref{upper1})
grows as a power function of $j_2(0)$, whose degree grows
exponentially in $t$ and exceeds one for any $t>0$.
Instead of working with Sobolev norms, we will study the uniform
norm of the vorticity gradient (the Lipschitz norm), as this norm is
more natural for the method used in the proof. It admits a similar
upper bound. We again sketch the proof for completeness.
\begin{lemma}
If $\theta_0$ is smooth and $\|\theta_0\|_\infty\sim 1$, then
\begin{equation}\label{upper31}
\|\nabla\mathcal E_t\theta_0\|_\infty \lesssim \exp \left( C(1+\log^+
\|\nabla\theta_0\|_\infty)e^{Ct}\right)
\end{equation}
\end{lemma}
\begin{proof}
If $\Psi(z,t)$ is the area-preserving Euler diffeomorphism, then
\[
(\mathcal E_t\theta_0) (z)=\theta_0(\Psi^{-1}(z,t))
\]
On the other hand, $\Psi(z,t)$ solves
\[
\dot \Psi=-u(\Psi,t),\quad \Psi(z,0)=z
\]
where $u(z,t)=\nabla^\perp \Delta^{-1}\theta(\cdot, t)$. For the
Riesz transform we have a trivial estimate
\begin{equation}
\|H(\Delta^{-1}\theta)\|_\infty\lesssim 1+\|\theta\|_\infty
(1+\log^+ \|\nabla \theta\|_\infty)\label{ha}
\end{equation}
$\Bigl($ Indeed, without loss of generality we can evaluate the
integral at zero and assume $\theta(0)=0$. Then, e.g.,
\[
\left|\int_{B_1(0)} \frac{\xi_1 \xi_2}{|\xi|^4}
\theta(\xi)d\xi\right|\leq \int_{B_\delta(0)}\frac{1}{|\xi|^2}
|\theta(\xi)|d\xi+\int_{\delta<|\xi|<1}\frac{1}{|\xi|^2}
|\theta(\xi)|d\xi
\]
where $\delta^{-1}=\max\{ \|\nabla \theta\|_\infty, 2\}$. Apply now
the Lagrange formula to the first term to get (\ref{ha}). $\Bigr)$
so
\[
|u(w_1,t)-u(w_2,t)|\lesssim |w_1-w_2|b, \quad b=1+\log^+ \|\nabla
\theta(t)\|_\infty
\]
Therefore, we have
\[
|\dot f|\lesssim f b,\quad f(t)=|\Psi(z_2,t)-\Psi(z_1,t)|^2
\]
After integration
\[
|z_2-z_1|\exp\left(-C\int_0^t b(\tau)d\tau\right)\leq
|\Psi(z_2,t)-\Psi(z_1,t)|\leq |z_2-z_1|\exp\left(C\int_0^t
b(\tau)d\tau\right)
\]
Since
\[
\|\nabla
\theta(z,t)\|_\infty=\sup_{z_1,z_2}\frac{|\theta_0(\Psi^{-1}(z_2,t))-
\theta_0(\Psi^{-1}(z_1,t))|} {|z_2-z_1|}
\]
we get the inequality
\[
\|\nabla \theta(z,t)\|_\infty\lesssim \|\nabla \theta_0\|_\infty\exp
\left(C\int_0^t b(\tau)d\tau\right)
\]
Taking the logarithm of both sides and applying the
Gronwall-Bellman inequality, we get (\ref{upper31}).
\end{proof}
In this paper, we will work only with large $\|\nabla
\theta_0\|_\infty$. For that case, we will show that, given
arbitrarily large $\lambda$, the estimate $\max_{t\in
[0,T]}\|\nabla\theta(\cdot,t)\|_\infty>\lambda^{e^{T}-1}\|\nabla\theta_0\|_\infty$
can hold for some infinitely smooth initial data. This is far from
showing that (\ref{upper1}) or (\ref{upper31}) are sharp; however, it
is already equivalent to the statement that $\mathcal E_t$ is linearly
unbounded. The question of whether $\|\nabla\theta_0\|_\infty$ can
be taken $\sim 1$ is left wide open, see discussion in the last
section.
Our results rigorously confirm the following observation: if the 2D
incompressible inviscid fluid dynamics gets into a certain
``instability mode'' then the Sobolev norms can grow very fast in
local time (i.e. counting from the time the ``instability regime''
was reached). Can the Sobolev norms grow unboundedly in time
assuming that initially they are small? The answer to this question
is yes, see \cite{den} and \cite{KN,Nad,Yud,Yud1, shnir}. The
important questions of linear and nonlinear instabilities were
addressed before (see, e.g., \cite{fv} and references there). In the
recent paper \cite{hm}, it was proved that $\mathcal{E}_t$ is not
uniformly continuous on the unit ball in Sobolev spaces.
{\bf Remark 2. }It must be mentioned here that 2D Euler allows
rescaling which provides the tradeoff between the size of $\theta_0$
and the speed of the process, i.e. if $\theta(x,y,t)$ is a solution
then $\mu\theta(x,y,\mu t)$ is also a solution for any $\mu>0$.
However, in our construction we will always have $\|\theta\|_p\sim
1, \quad \forall p\in [1,\infty]$.
{\bf Remark 3.} If one replaces $\Delta^{-1}$ in (\ref{euler}) by
$\Delta^{-\alpha}$ with $\alpha>1$, then the growth of the vorticity
gradient is at most exponential, e.g.
\[
\|\nabla \mathcal E^{(\alpha)}_t \theta_0\|_\infty\lesssim \|\nabla
\theta_0\|_\infty \exp(C\|\theta_0\|_\infty t)
\]
Moreover, the lower exponential bound can hold for all times as long
as $\theta_0$ is properly chosen (see \cite{den}).
\section{The singular stationary solution and dynamics on the
torus}
The following singular stationary solution was studied before (see,
e.g., \cite{chemin1, chemin2} in the context of $\mathbb{R}^2$). We
consider the following function
\[
\theta^s_0(x,y)=sgn(x)\cdot sgn(y), \quad |x|\leq \pi, |y|\leq \pi
\]
This is a steady state. Indeed, the function
$\psi_0=\Delta^{-1}\theta_0^s$ is odd with respect to each variable
as can be verified on the Fourier side. That, in particular, implies
that $\psi_0$ is zero on the coordinate axes so its gradient is
orthogonal to them. This steady state, of course, is a weak
solution, a vortex-patch steady state. Another consequence of
$\psi_0$ being odd is that the origin is a stationary point of the
dynamics.
By the Poisson summation formula, we have
\[
\sum_{n\in \mathbb{Z}^2,n\neq (0,0)} |n|^{-2} e^{in\cdot z} =C\ln
|z|+\phi(z), \quad z\sim 0
\]
where $\phi(z)$ is smooth and even.
Therefore,
around the origin we have
\[
\nabla \psi_0(x,y)\sim \int \int_{B_{0.5}(0)}
\frac{(x-\xi_1,y-\xi_2)}{(x-\xi_1)^2+(y-\xi_2)^2}sgn(\xi_1)
sgn(\xi_2)d\xi_1d\xi_2+(O(y),O(x))
\]
Due to symmetry, it is sufficient to consider the domain
$D=\{0<x<y<0.001\}$. Then, taking the integrals, we see that
\begin{equation}
\mu(x,y)=(\mu_1,\mu_2)= \left(\nabla^\perp \psi_0\right)(x,y)=
\label{potok}
\end{equation}
\begin{eqnarray*}
=c_1\left(-\int_0^x \ln(y^2+\xi^2)d\xi+xr_1(x,y),\int_0^y \ln(x^2+\xi^2)d\xi +yr_2(x,y)\right) \\
=c_2(-x\log y +xO(1), y\log y+yO(1)) \quad {\rm if} \,
(x,y)\in D
\end{eqnarray*}
The correction terms $r_{1(2)}$ are smooth.
Without loss
of generality we will later assume that $c_2=1$ in the last formula
(so $c_1=0.5$). That can always be achieved by time-rescaling.
Notice also that the flow given by the vector-field $\mu$ is
area-preserving.
Thus, the dynamics of the point $(\alpha,\beta)\in D, \alpha\ll
\beta$ is
\begin{equation}
(C_1\beta)^{e^t}\lesssim y(t)\lesssim (C_2\beta)^{e^t}, \quad
\alpha(C_1\beta)^{-e^t+1}\lesssim x(t)\lesssim
\alpha(C_2\beta)^{-e^t+1},\, t\in [0,t_0],\,
\label{dex}
\end{equation}
where $t_0$ is the time the trajectory leaves the domain $D$. These estimates therefore give a bound on $t_0$.
The attraction to the origin, the stationary point, is
double exponential along the vertical axis and the repulsion along
the horizontal axis is also double exponential.
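These rates are easy to observe numerically. The following sketch (an
illustrative simplification: it integrates only the leading-order field
from (\ref{potok}), with $c_2=1$ and the $O(1)$ correction terms dropped)
reproduces the double exponential behavior in (\ref{dex}):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def mu(t, z):                       # leading-order field near the origin
    x, y = z
    return [-x*np.log(y), y*np.log(y)]

alpha, beta, T = 1e-9, 1e-2, 1.5
sol = solve_ivp(mu, [0.0, T], [alpha, beta], rtol=1e-10, atol=1e-30)
x_T, y_T = sol.y[:, -1]
print(y_T, beta**np.exp(T))                  # y(T) = beta**exp(T)
print(x_T, alpha*beta**(1.0 - np.exp(T)))    # x(T) = alpha*beta**(1-exp(T))
\end{verbatim}
For this truncated field one has exactly $y(t)=\beta^{\,e^t}$ and
$x(t)=\alpha\,\beta^{\,1-e^t}$, and the printed pairs agree to the
integrator tolerance.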
\section{The idea}
The idea of constructing the smooth initial data for a double
exponential scenario is quite simple and roughly can be summarized
as follows: given any $T>0$, we will smooth out the singular steady state such that
the dynamics is double exponential over $[0,T]$ in a certain domain away from the
coordinate axes. Then we will place a small but steep bump in the
area of double exponential behavior and will let it evolve hoping
that the vector field generated by this bump itself is not going to
ruin the double exponential contraction in $OY$ direction. The rest
of the paper verifies that this indeed is the case.
\section{The Model Equation}
Consider the following system of ODE's
\begin{equation}
\left\{\begin{array}{cc}
\dot{x}=\mu_1(x,y)+\nu_1(x,y,t),\quad x(\alpha,\beta,0)=\alpha\\
\dot{y}=\mu_2(x,y)+\nu_2(x,y,t), \quad y(\alpha,\beta,0)=\beta
\end{array}\right.\label{model}
\end{equation}
Here we assume the following
\begin{equation}
|\nu_{1(2)}|<0.0001\upsilon r,\quad r=\sqrt{x^2+y^2} \label{usl1}
\end{equation}
and
\begin{equation}
\, |\nabla\nu_{1(2)}|<0.0001\upsilon \label{usl2}
\end{equation}
with small $\upsilon$ (to be specified later) and these estimates
are valid in the area of interest
\[
\aleph=\{y>\sqrt x\}\cap \{y<\epsilon_2\}\cap \{x>\epsilon_1\}
\]
where
\[
\upsilon\ll \epsilon_1\ll \epsilon_2
\]
The functions $\nu_{1(2)}$ are infinitely smooth in all variables in
$\aleph$ but we have no control over higher derivatives. We also
assume that the flow given by (\ref{model}) is area preserving. Our
goal is to study the behavior of trajectories within the time
interval $[0,T]$. In this section, the parameters will eventually be
chosen in the following order
\[
T\longrightarrow \epsilon_2\longrightarrow \epsilon_1\longrightarrow
\upsilon
\]
Here are some obvious observations:
1. If $\epsilon_{1(2)}$ are small and
\begin{equation}
\alpha \gtrsim \upsilon \left|\frac{\beta}{\log \beta}\right| \label{odno}
\end{equation}
then $x(t)$ increases and $y(t)$ decreases. This monotonicity
persists as long as the trajectory stays within $\aleph$. Assuming
that $\epsilon_{1(2)}$ are fixed, (\ref{odno}) can always be
satisfied by taking $\upsilon$ small enough, i.e.,
\begin{equation}
\upsilon\lesssim \frac{\epsilon_1|\log\epsilon_2|}{\epsilon_2}\label{uslovie1}
\end{equation}
2. We have estimates
\begin{equation}
x(-\log y+C)+\upsilon y>\dot{x}>x(-\log y-C)-\upsilon y, \quad -y(|\log
y|+C)<\dot{y}<-y(|\log y|-C)\label{rat1}
\end{equation}
The second estimate yields
\begin{equation}
e^{e^t(\log\beta+C)}> y(t)>
e^{
e^t(\log\beta-C)} \label{rat2}
\end{equation}
Let us introduce
\[
\kappa(T,\beta)=e^{ e^T(\log\beta-C)}
\]
For $x(t)$, we have
\[
x(t)\leq\alpha\exp\left(Ct-\int_0^t \log
y(\tau)d\tau\right)+\upsilon\int_0^ty(\tau)\exp\left(C(t-\tau)-\int_\tau^t
\log y(s)ds\right)d\tau
\]
\[
x(T)<(\alpha+\upsilon\beta T) \exp\Bigl(T(C+|\log
\kappa(T,\beta)|)\Bigr)
\]
Thus, the trajectory will stay inside $\aleph$ for any $t\in [0,T]$ as
long as
\[
\alpha<\kappa^{3+T}-\upsilon\epsilon_2T
\]
and if we have
\begin{equation}
\upsilon<\kappa^{4+T}(T,\beta) \label{uusl}
\end{equation}
then the condition
\begin{equation}
\alpha<\beta^{8e^{2T}}\label{dva}
\end{equation}
is sufficient for the trajectory to stay inside $\aleph$ for $t\in [0,T]$. Thus, we are
taking
\[
\epsilon_1<\epsilon_2^{8e^{2T}}
\]
and focus on the nonempty domain
\[
\Omega_0=\{(\alpha,\beta): \epsilon_1<\alpha<\beta^{8e^{2T}},
\beta<\epsilon_2\}
\]
The condition on $\upsilon$ is (\ref{uusl}), so taking the smallest
possible $\kappa(T,\beta)$ within $\Omega_0$ we get, e.g.,
\begin{equation}
\upsilon<\epsilon_1^{10} \label{uslovie2}
\end{equation}
Then, any point from $\Omega_0$ stays inside $\aleph$ over $[0,T]$,
$x(t)$ grows monotonically and $y(t)$ monotonically decays with the
double-exponential rate given in (\ref{rat2}).
Now, we will prove that the derivative in $\alpha$ of
$x(\alpha,\beta,t)$ grows with the double-exponential rate and this
will be the key calculation. For any $t\in [0,T]$, (\ref{potok})
yields
\begin{equation}
\left\{
\begin{array}{l}
\displaystyle \dot{x}_\alpha=-0.5 x_\alpha\log (x^2+y^2)+x_\alpha r_1+\\
\hspace{2cm}+xx_\alpha r_{1x}+xy_\alpha r_{1y}+\nu_{1x}
x_\alpha+\nu_{1y}y_\alpha-y_\alpha
\arctan(xy^{-1}) \\
\dot{y}_\alpha=0.5y_\alpha\log (x^2+y^2)+y_\alpha r_2+yx_\alpha
r_{2x}+\\
\hspace{2cm}+yy_\alpha
r_{2y}+\nu_{2x}x_\alpha+\nu_{2y}y_\alpha+x_\alpha \arctan(yx^{-1})
\end{array}
\right.
\end{equation}
and $x_\alpha(\alpha,\beta,0)=1$, $y_\alpha(\alpha,\beta,0)=0$. Let
\[
f_{11}(t)=\nu_{1x}-0.5 \log (x^2+y^2)+r_1+xr_{1x}
\]
\[
f_{12}(t)=xr_{1y} +\nu_{1y}-\arctan(xy^{-1})
\]
\[
f_{21}(t)=yr_{2x}+\nu_{2x}+\arctan(yx^{-1})
\]
\[
\,f_{22}(t)=0.5\log (x^2+y^2)+r_2+yr_{2y}+\nu_{2y}
\]
\[
x_\alpha=\exp\left(\int_0^t f_{11}(\tau)d\tau\right)\hat x, \quad
y_\alpha=\exp\left( \int_0^t f_{22}(\tau)d\tau\right)\hat y
\]
If
\[
g=f_{11}-f_{22}
\]
then
\[
\hat x(t)=1+\int_0^t \hat x(s) f_{21}(s) \int_s^t
f_{12}(\tau)\exp\left(-\int_s^\tau g(\xi)d\xi\right)d\tau ds
\]
Since the trajectory is inside $\aleph$, we have $y>\sqrt x$ and so
\[
|f_{12}|\lesssim y+\upsilon, \quad |f_{21}|\lesssim 1 ,\quad
f_{11}>e^t(-\log \beta+C), \quad g(t)>1
\]
From (\ref{rat2}), we get
\[
|\hat x(t)-1|\lesssim\upsilon\int_0^t |\hat x (\tau)|d\tau+\int_0^t
|\hat x(s)| \left(\int_s^t e^{e^\tau (\log\beta+C)} e^{-(\tau-s)}
d\tau\right) ds
\]
The following estimate is obvious
\[
\int_s^t e^{e^\tau (\log\beta+C)} e^{-(\tau-s)} d\tau \ll e^{-s}
\]
as $\beta$ is small.
Assuming that
\begin{equation}
\upsilon\ll (T+1)^{-1}\label{uslovie3}
\end{equation}
and $\epsilon_2$ is small,
we have
\[
\hat x(t)\sim 1
\]
and
\begin{equation}
x_\alpha(\alpha,\beta,T)>\left(\frac 1\beta\right)^{(e^{T}-1)/2}
\label{klyuch}
\end{equation}
The estimate (\ref{klyuch}) is the key estimate that will guarantee the necessary growth.
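The estimate can also be observed numerically in the unperturbed model
($\nu=0$, with the same leading-order truncation of (\ref{potok}) as
before, again an illustrative simplification), estimating $x_\alpha$ by
a finite difference of two nearby trajectories:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def mu(t, z):
    x, y = z
    return [-x*np.log(y), y*np.log(y)]

alpha, beta, T, delta = 1e-9, 1e-2, 1.5, 1e-12
x_end = lambda a: solve_ivp(mu, [0.0, T], [a, beta],
                            rtol=1e-10, atol=1e-30).y[0, -1]
x_alpha = (x_end(alpha + delta) - x_end(alpha))/delta
print(x_alpha)                              # = beta**(1 - exp(T)) ~ 9.2e6
print((1.0/beta)**((np.exp(T) - 1.0)/2.0))  # right-hand side: ~ 3.0e3
\end{verbatim}
so $x_\alpha$ comfortably exceeds the lower bound claimed in
(\ref{klyuch}).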
Now, let us place a circle $S_\gamma(\tilde{x},\tilde{y})$ with
radius $\gamma$ and center at $(\tilde{x},\tilde{y})$ into the zone
$\Omega_0$. Consider also the line segment $l=[A_1,A_2]$,
$A_1=(\tilde{x}-\gamma/2,\tilde{y}),
A_2=(\tilde{x}+\gamma/2,\tilde{y})$ in the center, parallel to $OX$.
We will track the evolution of this disc and this line segment under
the flow. We have by the Lagrange formula
\[
x(A_2,T)-x(A_1,T)>\beta^{-(e^{T}-1)/2}|A_2-A_1|
\]
From the positivity of $x_\alpha(\alpha,\beta,T)$ it follows that
the image of $l$ under the flow is a curve given by the graph of a
smooth function $\Gamma(x)$. Thus, the image of $l$ (call it $l'$)
has length at least $\beta^{-(e^{T}-1)/2}|A_2-A_1|$. Denote the
distance from $l'$ to $S'_\gamma(\tilde{x},\tilde{y})$, the image of
the circle, by $d$. Then, the domain
$\{\Gamma(x)-d<y(x)<\Gamma(x)+d, x\in (x(A_1,T),x(A_2,T))\}$ is
inside $B'_\gamma(\tilde{x},\tilde{y})$. The area of this domain is
at least
\[
d\cdot \beta^{-(e^{T}-1)/2}|A_2-A_1|
\]
Thus, assuming that the flow preserves the area, we have
\[
d\lesssim \beta^{(e^{T}-1)/2}\gamma
\] Consequently, if we place a bump in $\Omega_0$ such that $l$ and
$S_\gamma(\tilde{x},\tilde{y})$ correspond to level sets, say,
$h_2$ and $h_1$ ({\bf and, what is crucial, $h_{1(2)}$ are
essentially arbitrary, $0<h_1<h_2<0.0001$}), then the original slope
of at least $\sim|h_2-h_1|/\gamma$ will become not less than
\begin{equation}
\beta^{-(e^{T}-1)/2}\cdot\left(|h_2-h_1|/\gamma\right) \label{eex}
\end{equation}
thus leading to double-exponential growth of arbitrarily large
gradients.
{\bf Remark 1.} If $\beta$ is a fixed small number, we have growth
in $T$. If $T$ is any positive fixed moment of time, we have the
growth as $\beta\to 0$.
{\bf Remark 2.} Let us reiterate the order in which the parameters
are chosen: we first fix any $T$, then small $\epsilon_2$, then
$\epsilon_1<\epsilon_2^{8e^{2T}}$. How small $\epsilon_2$ must be
taken will be determined by how large the parameter $\lambda$ is
chosen in Theorem \ref{main} below. This defines the set
$\Omega_0$. For the whole argument to work we need to collect all
conditions on $\upsilon$: (\ref{uslovie1}), (\ref{uslovie2}),
(\ref{uslovie3}) which leads to
\begin{equation}
\upsilon<\epsilon_1^{10} \label{choiceups}
\end{equation}
\section{Small perturbations of a singular cross can also generate double
exponential contraction in $\aleph$}
Assume that the function $\theta_1$ at any given time $t\in [0,T]$
is such that
\[
\theta_1(x,y,t)=\theta_0^s(x,y)
\]
outside the ``cross''-domain $A=\{|x-\pi k|<\tau\}\cup \{|y-\pi
l|<\tau\}$ where $\tau$ is small and $k,l\in \mathbb{Z}$. Inside
the domain $A$ we only assume that $\theta_1$ is bounded by one in
absolute value, is even, and has zero average. Notice here that the
Euler flow preserves the evenness of the function. Given
fixed $\epsilon_{1(2)}$ and the domain $\aleph$ defined by these
constants, we are going to show that the flow generated by
$\theta_1$ can be represented in $\aleph$ in the form (\ref{model})
with $\upsilon(\tau)\to 0$ as $\tau\to 0$. We assume of course that
$\tau\ll \epsilon_1$.
For that, we only need to study
\[
F_1=\nabla\Delta^{-1}p, \quad p=\theta_1-\theta_0^s
\]
Here are some obvious properties of $F_1$
1. $F_1(0)=0$ as $\theta_{1}$ and $\theta_0^s$ are both even.
2. We have
\[
F_1(z)\sim \int_A \left(
\frac{\xi-z}{|\xi-z|^2}-\frac{\xi}{|\xi|^2}\right)p(\xi)d\xi
\]
Using the formula
\[
\left| \frac{x}{|x|^2}-\frac{y}{|y|^2}\right|=\frac{|x-y|}{|x|\cdot
|y|}
\]
we get
\[
|F_1(z)|\lesssim |z|\frac{\tau|\log\tau|}{\epsilon_1}
\]
Thus, by taking $\tau$ small, we can satisfy $(\ref{usl1})$. How
about (\ref{usl2})? For the Hessian, we have
\[
|H\Delta^{-1}p|\lesssim \epsilon_1^{-2}\tau
\]
and after combining we must have
\begin{equation}
\epsilon_1^{-2}\tau|\log\tau|\lesssim \epsilon_1^{10} \label{sizecross}
\end{equation}
by (\ref{choiceups}). Thus, this condition on the size of the cross
guarantees that the arguments in the previous section work.
\section{The flow generated by a small steep bump in $\aleph$}
In this section, we assume that at a given moment $t\in [0,T]$, we
have a smooth even function $b(x,y,t)$ with support in $\aleph\cup
-\aleph$, with zero average, and
\[
\|b\|_2<\omega, \quad \|\nabla b\|_\infty <M
\]
(here one should think about small $\omega$ and large $M$).
We will study the flow generated by this function. Let
\[
F_2=\nabla \Delta^{-1} b
\]
Here are some properties of $F_2$
1. $F_2(0)=0$.
2. To estimate the Hessian of $\Delta^{-1}b$, consider the second
order derivatives. For example,
\[
(\Delta^{-1}b)_{\alpha\beta}(\alpha,\beta)\sim
\int\limits_{(\alpha-\xi)^2+(\beta-\eta)^2<1}
\frac{(\alpha-\xi)(\beta-\eta)}{((\alpha-\xi)^2+(\beta-\eta)^2)^2}b(\xi,\eta,t)d\xi
d\eta=
\]
\[
=\int_{1>(\alpha-\xi)^2+(\beta-\eta)^2>\rho^2}
\frac{(\alpha-\xi)(\beta-\eta)}{((\alpha-\xi)^2+(\beta-\eta)^2)^2}b(\xi,\eta,t)d\xi
d\eta\quad+
\]
\[
\int\limits_{(\alpha-\xi)^2+(\beta-\eta)^2<\rho^2}
\frac{(\alpha-\xi)(\beta-\eta)}{((\alpha-\xi)^2+(\beta-\eta)^2)^2}
\left[b(\alpha,\beta,t)+\nabla b(\xi',\eta',t)\cdot
(\xi-\alpha,\eta-\beta)\right] d\xi d\eta
\]
The first term is controlled by $\omega \rho^{-1}$.
By our assumption, the second term is dominated
by $M\rho$. Optimizing in $\rho$ we have
\[
\|H\Delta^{-1}b\|_\infty\lesssim \sqrt{M\omega}
\]
To guarantee the conditions that lead to double exponential growth
with an arbitrary a priori given $M$, we want to make $\omega$ so small
that conditions (\ref{usl1}) and (\ref{usl2}) are satisfied with
$\upsilon$ as small as we need (i.e., (\ref{choiceups})). The
condition $(\ref{usl2})$ is immediate and $(\ref{usl1})$ follows
from $F_2(0)=0$, the Lagrange formula, and the estimate on the Hessian.
\section{One stability result and the proof of the main theorem}
It is well known that given $\theta_0\in L^\infty(\mathbb{T}^2)$,
the weak solution exists and the flow can be defined by the
homeomorphic maps $ \Psi_{\theta_0}(x,y,t) $ for all $t$ so that $
\theta(x,y,t)=\theta_0(\Psi^{-1}_{\theta_0}(x,y,t)) $ where
$\Psi_{\theta_0}$ itself depends on $\theta_0$. The continuity of
this map though is rather poor (\cite{chemin2}, theorem 2.3,
p.99). In this section, we will need to take smooth $\theta_0$ such
that \[\max_{t\in [0,T]}\max_{z\in \mathbb{T}^2}
|\Psi_{\theta_0}(z,t)-\Psi_{\theta_0^s}(z,t)|\to 0.\] To this end, we
will consider $\theta_0=\theta_0^s$ outside the domain $\mathcal{D}$
of small area. Inside this domain we assume $\theta_0$ to be bounded
by some universal constant. The proof of the Yudovich theorem (see,
e.g., the argument on pp. 313--318, proof of Proposition 8.2,
\cite{mb}) implies
\begin{equation}\label{stability}
\max_{t\in [0,T]}\max_{z\in \mathbb{T}^2}
|\Psi_{\theta_0^s}(z,t)-\Psi_{\theta_0}(z,t)|\to 0
\end{equation}
as $|\mathcal{D}|\to 0$.
This is the only stability result with respect to initial data that we are going to need in the argument below.
\begin{theorem}\label{main}
For any large $\lambda$ and any $T>0$, we can find smooth initial
data $\theta_0$ so that $\|\theta_0\|_\infty<2$ and
\[
\max_{t\in
[0,T]}\|\nabla\theta(\cdot,t)\|_\infty>\lambda^{e^{T}-1}\|\nabla\theta_0\|_\infty
\]
\end{theorem}
\begin{proof}
Fix any $T>0$ and find $\epsilon_{1(2)}$. For larger $\lambda$,
we have to take smaller $\epsilon_2$ (see Remark 1 in the fourth section). Identify the domain $\Omega_0$
and place a bump (call it $b(z)$) in $\Omega_0\cup
-\Omega_0$ so that the resulting function is even. Make sure that
this bump has zero average, height $h_2$ and diameter of support
$h_1$ so that the gradient initially is of the size $\sim h_2/h_1$.
Here $h_1\ll h_2\ll 1$ will be adjusted later.
Take a smooth even function $\omega(x,y)$ supported on $B_1(0)$
such that
\[
\int_{\mathbb{T}^2} \omega(x,y)dxdy=1
\]
For positive small $\sigma$, consider
\[
\tilde\theta_\sigma(x,y)=\theta_0^s\ast \omega_\sigma \in C^\infty,
\quad \omega_\sigma=\sigma^{-2}\omega(x/\sigma,y/\sigma)
\]
We take $\sigma\ll \epsilon_1$ so $\tilde\theta_\sigma(x,y)$ and
$\theta_0^s(x,y)$ coincide in $\aleph$.
As the initial data for Euler dynamics we take a sum
\[
\tilde\theta_\sigma(z)+b(z)
\]
Then, since $\theta_0^s$ is stationary under the flow, the stability
result (\ref{stability}) guarantees that given any $\tau$ and
keeping the same value of $h_2/h_1$, we can find $\sigma$ and $h_1$
so small that over the time interval $[0,T]$ we satisfy
1. The ``evolved bump'' $b(z,t)$ stays in the domain $\aleph$ (e.g.,
$\Psi_{\theta_0}(t)\Bigl( {\rm supp }\,b(z) \Bigr) \subset \aleph$).
2. Outside the cross of size $\tau$ (the one considered in section
five) and the support of the evolved bump $b$, the solution is
identical to $\theta_0^s$.
Fix $\sigma$ and $h_1'$ so small that for any $h_1<h_1'$ we have
the size of $A$ small, i.e. $\tau$ as small as we wish. The value of
$\tau$ must be small enough to ensure the double exponential
scenario, the conditions (\ref{usl1}) and (\ref{usl2}). For that, we
need (\ref{sizecross}).
Next, we proceed by contradiction. Assume that for all $t\in [0,T]$
we have $\|\nabla\theta(z,t)\|_\infty<M=(h_2/h_1) \lambda^{e^T-1}$.
Then, because $\|b(z,t)\|_2$ is constant in time as the flow is
area-preserving and $\|b(z,t)\|_2\lesssim h_1h_2$, we only need to
take $h_2$ so small that $ \sqrt{Mh_1h_2} $ is small enough to
guarantee the double exponential scenario and the estimate
(\ref{eex}). This gives us a contradiction as the double exponential
scenario makes the gradient's norm more than $M$ (provided that
$\epsilon_2\ll \lambda^{-2}$). For the initial value,
\[
\|\nabla \theta_0\|_\infty\sim \sigma^{-1}+h_2/h_1\sim h_2/h_1
\]
by arranging $h_{1(2)}$ (and keeping $h_1<h_1'$).
Here is an order in which parameters are chosen in this
construction:
\[
\{T,\lambda\}\longrightarrow\epsilon_2\longrightarrow\epsilon_1\longrightarrow
\{\sigma,h_{1(2)}\}
\]
\end{proof}
\section{The operator $\mathcal{E}_t$ is linearly unbounded.}
Theorem \ref{main} is equivalent to the following
\begin{proposition}
The operator $\mathcal E_t$ is linearly unbounded for any $t>0$, i.e.
\[
\sup_{\theta_0\in C^\infty(\mathbb{T}^2), 0<\|\theta_0\|_\infty \leq
1, \theta_0\perp 1}\frac{\|\nabla \mathcal E_t
\theta_0\|_\infty}{\|\nabla \theta_0\|_\infty}=+\infty
\]
\end{proposition}
\begin{proof}
The proof is immediate. Indeed, given any fixed $t$, we have
\[
\sup_{\tau\in [0,t]}\left(\sup_{\theta_0\in C^\infty(\mathbb{T}^2),
\|\theta_0\|_\infty = 1, \theta_0\perp 1}\frac{\|\nabla \mathcal
E_{\tau} \theta_0\|_\infty}{\|\nabla
\theta_0\|_\infty}\right)=+\infty
\]
by taking $\lambda\to\infty$ in Theorem \ref{main}. Then, to
have the statement at time $t$, we only need to multiply $\theta_0$
by a suitable number and use Remark 2 from the first section.
Conversely, in Theorem \ref{main} the combination
$\lambda^{e^{T}-1}$ can be replaced by an arbitrarily large number. In
this formulation, the statement follows from the proposition.
\end{proof}
As the statement of Theorem \ref{main} holds with any $\lambda$,
the double exponential function is not relevant at all in the
formulation itself. However, it is the very special hyperbolic scenario
with double exponential rate of contraction that ultimately provides
the superlinear dependence on the initial data.
The interesting and important question is whether the vorticity
gradient can grow at the same double exponential rate starting with
an initial value $\sim 1$. We do not know the answer to this question
yet, and the best known bound is (see, e.g., \cite{den})
\[
\max_{t\in [0,T]} \|\nabla\theta(\cdot, t)\|_\infty >e^{0.001 T}
\]
for arbitrary $T$ and for $T$--dependent $\theta_0$ with
$\|\theta_0\|_\infty\sim \|\nabla\theta_0\|_\infty\sim 1$.
\section{Acknowledgment}
This research was supported by NSF grants DMS-1067413 and
DMS-0758239. The hospitality of the Institute for Advanced Study,
Princeton, NJ is gratefully acknowledged. We are grateful to T. Tao
for pointing out that Theorem \ref{main} is equivalent to
$\mathcal{E}_t$ being linearly unbounded and to R. Killip and A. Kiselev
for interesting comments.
\section{Introduction}\label{sec: Introduction}
In recent years the puzzling existence of dark matter and dark energy, which can be explained neither by Einstein's General theory of relativity (GR) nor by the Standard model of elementary particles, suggests that alternative models have to be kept in mind.
\\
\indent From a gravitational perspective one can consider modified theories of gravity and in particular higher derivative theories (HDTs), which include contributions from polynomial or non-polynomial functions of the scalar curvature. The most prominent of them are the so-called $f(R)$ theories \cite{STAROBINSKY198099, DeFelice:2010aj, Cembranos:2008gj}. The $f(R)$ gravity is a whole family of models with a number of predictions, which differ from those of GR. Therefore, there is a great deal of interest in understanding the possible phases and stability of such higher derivative theories and, accordingly, their admissible
black hole solutions \cite{Cognola:2005de, Cognola:2006eg, Oliva:2010eb, Oliva:2010zd, Cai:2010zh, Berezhiani:2008nr}.
\\
\indent However, a consistent description of black holes necessarily invokes the full theory of quantum gravity. Unfortunately, at present, our understanding of such a theory is incomplete at best. This prompts one to resort to alternative approaches, which promise to uncover many important aspects of quantum gravity and black holes. One such example is called information geometry \cite{amari2007methods, Amari:2016:IGA:3019383, amari2012differential, ay2017information}.
\\
\indent The framework of information geometry is an essential tool for understanding how classical and quantum information can be encoded onto the degrees of freedom of any physical system. Since geometry studies mutual relations between elements, such as distance and curvature, it provides us with a set of powerful analytic tools to study previously inaccessible features of the systems under consideration. It has emerged from studies of invariant geometric structures arising in statistical inference, where one defines a Riemannian metric, known as Fisher information metric \cite{amari2007methods}, together with dually coupled affine connections on the manifold of probability distributions. Information geometry already has important applications in many branches of modern physics, showing intriguing results. Some of them, relevant to our study, include condensed matter systems \cite{Wh0, Wh1, PhysRevA.20.1608, Janyszek:1989zz, Johnston:2003ed, Dolan2655, Dolan:2002wm, Janke02, Janke:2002ne, Quevedo:2015xdx, Quevedo:2008hh, Quevedo:2013bze, Quevedo:2010tz, vetsov2018}, black holes \cite{Aman:2003ug, Shen:2005nu, Janke10, Ferrara:1997tw, Cai:1998ep, Aman:2005xk, Aman:2007pp, Aman:2007ae, Suresh:2016cqp, Suresh:2016onn, Quevedo:2016swn, Channuie:2018mkt, Quevedo:2016cge, Larranaga:2010kj, Sarkar:2006tg, Astefanesei:2010bm, Mansoori:2016jer, Mansoori:2014oia, Mansoori:2013pna, e20030192, Miao:2018fke, Ruppeiner:2018pgn} and string theory \cite{Ferrara:1997tw, Larranaga:2010kj, Dimov:2017ryz, Dimov:2016vvl}. Further applications can also be found in \cite{Amari:2016:IGA:3019383, ay2017information}.
\\
\indent When dealing with systems such as black holes, which seem to possess an enormous amount of entropy \cite{PhysRevD.7.2333, bardeen1973, PhysRevD.9.3292, hawking1972}, one can consider their space of equilibrium states, equipped with a suitable Riemannian metric such as the Ruppeiner information metric \cite{RevModPhys.67.605}. The latter is a thermodynamic limit of the above-mentioned Fisher information metric. Although G. Ruppeiner developed his geometric approach within fluctuation theory, when utilized for black holes, it seems to capture many features of their phase structure, resulting from the dynamics of the underlying microstates. In this case one implements the entropy as a thermodynamic potential to define a Hessian metric structure on the state space statistical manifold with respect to the extensive parameters of the system.
\\
\indent Moreover, one can utilize the internal energy (the ADM mass in the case of black holes) as an alternative thermodynamic potential, which lies at the heart of Weinhold's metric approach \cite{Wh0} to equilibrium thermodynamic states. The resulting Weinhold information metric is conformally related to the Ruppeiner metric, with the temperature $T$ as the conformal factor. Unfortunately, the resulting statistical geometries coming from the two approaches often do not agree with each other. The reasons for this behavior are still unclear, although several attempts to resolve this issue have already been suggested \cite{Sarkar:2006tg, Mirza:2007ev, Quevedo:2017tgyy, Quevedo:2017tgz, Mansoori:2016jer}.
\\
\indent In the current paper we are going to study the equilibrium thermodynamic state space of the Deser-Sarioglu-Tekin (DST) black hole \cite{Deser:2007za} within the framework of thermodynamic information geometry. The DST black hole is a static, spherically symmetric solution in a higher derivative theory of gravity, in which a non-polynomial term in the Weyl tensor is added to the Einstein-Hilbert Lagrangian.
\\
\indent The text is organized as follows. In Section \ref{sec:2} we shortly discuss the basic concepts of geometrothermodynamics and related approaches. In Section \ref{sec:3} we calculate the mass of the DST solution via the quasilocal formalism developed by Brown and York in \cite{Brown:1992br}. In Section \ref{sec:4} we calculate the standard thermodynamic quantities such as the entropy and the Hawking temperature of the DST black hole solution and we show that the first law of thermodynamics is satisfied. In Sections \ref{sec:5} and \ref{sec:6} we study the Hessian information metrics and several Legendre invariant approaches, respectively. We show that the Hessian approaches of Ruppeiner and Weinhold fail to produce viable state space metrics, while the Legendre invariant metrics successfully manage to incorporate the Davies phase transition points. Finally, in Section \ref{sec:conclusion}, we make a short summary of our results.
\section{Information geometry on the space of equilibrium thermodynamic states}\label{sec:2}
Due to the pioneering work of Bekenstein \cite{PhysRevD.7.2333} and Hawking \cite{hawking1972} we know that any black hole represents a thermal system with well-defined temperature and entropy. Taking into account that black holes may also possess charge $Q$ and angular momentum $J$, one can formulate the analogue to the first law of thermodynamics for black holes as
\begin{equation}\label{eqBHFirstLaw}
dM = T\,dS + {\Phi _Q}\,dQ + \Omega \,dJ\,.
\end{equation}
Here $\Phi _Q$ is the electric potential and $\Omega$ is the angular velocity of the event horizon. Equation (\ref{eqBHFirstLaw}) expresses the conserved ADM mass $M$ as a function of entropy and other extensive parameters, describing the macrostates of the black hole. One can equivalently solve Eq. (\ref{eqBHFirstLaw}) with respect to the entropy $S$.
\\
\indent In the framework of geometric thermodynamics all extensive parameters of the given black hole background can be used in the construction of its equilibrium thermodynamic parameter space. The latter can be equipped with a Riemannian metric in several ways. In particular, one can introduce Hessian metrics, whose components are calculated as the Hessian of a given thermodynamic potential. For example, depending on which potential we have chosen for the description of the thermodynamic states in equilibrium, we can write the two most popular thermodynamic metrics, namely the Weinhold information metric \cite{Wh0},
\begin{equation}
ds_W^2 = {\partial _a}{\partial _b}M\,d{X^a}\,d{X^b}\,,
\end{equation}
defined as the Hessian of the ADM mass $M$, or the Ruppeiner information metric \cite{PhysRevA.20.1608},
\begin{equation}\label{eqRinfometric}
ds_R^2 = - {\partial _a}{\partial _b}S\,d{Y^a}\,d{Y^b}\,,
\end{equation}
defined as the Hessian of the entropy $S$. Here $X_a [Y_b], a,\,b=1,\dots,\,n,$ collectively denote all of the system's extensive variables except
for $M [S]$.
One can show that both metrics are conformally related to each other via the temperature:
\begin{equation}
ds_W^2 = T\,ds_R^2\,.
\end{equation}
The importance of using Hessian metrics on the equilibrium manifold is best understood when one considers small fluctuations of the thermodynamic
potential. The latter is extremal at each equilibrium point, but the second moment
of the fluctuation turns out to be directly related to the components of the corresponding Hessian metric. From a statistical point of view one can define Hessian metrics on a statistical manifold spanned by any type or number of extensive (or intensive) parameters. In this case the first law of thermodynamics has to be properly generalized in order to include the chemical potentials of all relevant fluctuating parameters. This is due to the fact that the Hessian metrics are not Legendre invariant, thus they do not necessarily preserve the geometric properties of the system when a different thermodynamic potential is chosen. However, for Legendre invariant metrics, the first law of thermodynamics follows naturally.
\\
\indent In order to make things Legendre invariant, one can start from the $(2n + 1)$-dimensional thermodynamic phase space $\mathcal{F}$, spanned by the thermodynamic potential $\Phi$, the set of extensive variables $E^a$, and the set of intensive variables $I^a$, $a = 1, \dots, n$. Now, consider a symmetric bilinear form $\mathcal{G}=\mathcal{G}(Z^A)$ defining a non-degenerate metric on $\mathcal{F}$ with $Z^A=(\Phi,\,E^a,\,I^a)$, and the Gibbs 1-form $\Theta = d\Phi - {\delta _{ab}}\,{I^a}\,d{E^b}$, where $\delta_{ab}$ is the identity matrix. If the condition $\Theta\wedge(d\Theta)^n\neq 0$ is satisfied, then the triple ($\mathcal{F},\,\mathcal{G},\,\Theta$) defines a contact Riemannian manifold. The Gibbs 1-form is invariant with respect to Legendre transformations by construction, while the
metric $\mathcal{G}$ is Legendre invariant only if its functional dependence on $Z^A$ does not change under a Legendre transformation. Legendre invariance guarantees that the geometric properties of $\mathcal{G}$ do not depend on the choice of thermodynamic potential.
\\
\indent On the other hand, one is interested in constructing a viable Riemannian metric $g$ on the $n$-dimensional subspace of equilibrium thermodynamic states $\mathcal{E}\subset \mathcal{F}$. The space $\mathcal{E}$ is defined by the smooth mapping $\phi:\mathcal{E}\to\mathcal{F}$ or $E^a\to(\Phi(E^a),\,E^a,\,I^a)$, and the condition $\phi^*(\Theta)=0$. The last restriction leads explicitly to the generalization of the first law of thermodynamics (\ref{eqBHFirstLaw})
\begin{equation}\label{eqGeneralizedFirstLawofTD}
d\Phi = {\delta _{ab}}\,{I^a}\,d{E^b}\,,
\end{equation}
and the condition for thermodynamic equilibrium,
\begin{equation}
\frac{{\partial \Phi }}{{\partial {E^a}}} = {\delta _{ab}}\,{I^b}\,.
\end{equation}
The natural choice for $g$ is the pull-back of the phase space metric $\mathcal{G}$ onto $\mathcal{E}$, $g=\phi^*(\mathcal{G})$. Here, the pull-back also imposes the Legendre invariance of $\mathcal{G}$ onto $g$. However, there are plenty of Legendre invariant metrics on $\mathcal{F}$ to choose from. In Ref. \cite{Quevedo:2017tgz} it was found that the general metric for
the equilibrium state space can be written in the form
\begin{equation}
{g^{I,\,II}} = {\beta _\Phi }\,\Phi ({E^c})\,\chi _a^{\,\,b}\frac{{{\partial ^2}\Phi }}{{\partial {E^b}\,\partial {E^c}}}\,d{E^a}\,d{E^c}\,,
\end{equation}
where $\chi^{\,\,b}_a=\chi_{af}\,\delta^{fb}$ is a constant diagonal matrix and $\beta_\Phi\in\mathbb{R}$ is the degree of generalized homogeneity, $\Phi ({\lambda ^{{\beta _1}}}\,{E^1}, \ldots ,{\lambda ^{{\beta _N}}}\,{E^N})=$ ${\lambda ^{{\beta _\Phi }}}\,\Phi ({E^1}, \ldots ,{E^N}),\,{\beta _a} \in \mathbb{R}$. In this case the Euler's identity for homogeneous functions can be generalized to the form
\begin{equation}\label{eqEulerIdentity}
{\beta _{ab}}\,{E^a}\,\frac{{\partial \Phi }}{{\partial {E^b}}} = {\beta _\Phi }\,\Phi \,,
\end{equation}
where ${\beta _{ab}} = {\mathop{\rm diag}\nolimits} ({\beta _1},\,{\beta _2}, \ldots ,{\beta _N})$. In the case $\beta_{ab}=\delta_{ab}$ one returns to the standard Euler's identity. If we choose to work with $\beta_{ab}=\delta_{ab}$, for complicated systems this may lead to a non-trivial conformal factor, which is no longer proportional to the potential $\Phi$. On the other hand, if we set $\chi_{ab}=\delta_{ab}$, the resulting
metric $g^I$ can be used to investigate systems with at least one first-order phase transition. Alternatively, the choice $\chi_{ab}=\eta_{ab}={\rm{diag}}(-1,1,\dots,1)$ leads to a metric $g^{II}$, which applies to systems with second-order phase transitions.
\\
\indent Once the information metric for a given statistical system is constructed, one can proceed with calculating its algebraic invariants, i.e. the information curvatures such as the Ricci scalar, the Kretschmann invariant, etc. All curvature related quantities are relevant for extracting information about the phase structure of the system. As suggested by G. Ruppeiner in Ref. \cite{RevModPhys.67.605}, the Ricci information curvature $R_I$ is related to the correlation volume of the system. This association follows from the idea that it will be less probable to fluctuate from one equilibrium thermodynamic state to the other, if the distance between the points on the statistical manifold, which correspond to these states, increases. Furthermore, the sign of $R_I$ can be linked to the nature of the inter-particle interactions in composite thermodynamic systems \cite{2010AmJPh..78.1170R}. Specifically, if $R_I=0$, the interactions are
absent, and we end up with a free theory (uncorrelated bits of information). The latter situation corresponds to flat information geometry. For positive curvature, $R_I>0$, the interactions are repulsive, therefore we have an elliptic information geometry, while for negative curvature, $R_I<0$, the interactions are of attractive nature and an information geometry of hyperbolic type is realized.
\\
\indent Finally, the scalar curvature of the parameter manifold can also be used to measure the stability of the physical system under consideration. In particular, the information curvature approaches infinity in the vicinity of critical points, where phase transition occurs \cite{Janyszek:1989zz}. Moreover, the curvature of the information metric tends to diverge not only at the critical points of phase transitions, but on whole regions of points on the statistical space, called spinodal curves. The latter can be used to discern physical from non-physical situations.
\\
\indent Furthermore, notice that in the case of Hessian metrics, in order to ensure global thermodynamic stability of a given
macro configuration of the black hole, one requires that all principal minors of the metric tensor be strictly positive, due to the probabilistic interpretation involved \cite{PhysRevA.20.1608}. In the other cases (Quevedo, HPEM, etc.) the physical interpretation of the metric components is unclear and one can only impose the convexity condition on the thermodynamic potential, $\partial_a\partial_b\Phi\ge 0$, which is the second law of thermodynamics. Nevertheless, imposing positiveness of the black hole's heat capacity is mandatory in any case in order to ensure local thermodynamic stability.
\section{The DST black hole}\label{sec:3}
One starts with the following action (in units $\kappa = 1$)
\begin{equation}\label{eqTheDSTaction}
A = \frac{1}{{2}}\,\int_{\cal M} {{d^4}x\,\sqrt { - g} \,\left( {R + {\beta _n}\,|{\mathop{\rm Tr}\nolimits} {(C^n)}{|^{1/n}}} \right)}\,,
\end{equation}
where $C$ is the Weyl tensor and $\beta_n$ are some real constant coefficients. The spherically symmetric Deser-Sarioglu-Tekin solution \cite{Deser:2007za, Bellini:2010ar},
\begin{equation}\label{eqDSTmetric}
d{s^2} = - {k^2}\,{r^{\frac{{2\,(1 - p(\sigma ))}}{{p(\sigma )}}}}\,\left( {p(\sigma ) - \frac{c}{{{r^{1/p(\sigma )}}}}} \right)\,d{t^2} + \frac{{d{r^2}}}{{p(\sigma ) - \frac{c}{{{r^{1/p(\sigma )}}}}}} + {r^2}\,(d{\theta ^2} + {\sin ^2}\theta \,d{\phi ^2})\,,
\end{equation}
follows from (\ref{eqTheDSTaction}) by setting $n=2$ and $\sigma=\beta_2/\sqrt{3}$. Here, the integration constant $k$ can be eliminated by a proper rescaling of the time coordinate $t$. For convenience we have defined the function $p(\sigma)$ as
\begin{equation}
p(\sigma ) = \frac{{1 - \sigma }}{{1 - 4\,\sigma }}\,.
\end{equation}
To preserve the signature of the metric, we have to exclude the interval $1/4<\sigma<1$. There is only one horizon of the black hole, which is at the positive root of $g^{rr}=0$:
\begin{equation}
{r_h} = {\left( {c\,\left( {\frac{3}{{\sigma - 1}} + 4} \right)} \right)^{\frac{{\sigma - 1}}{{4\,\sigma - 1}}}}= {\left( {\frac{c}{p}} \right)^p}\,.
\end{equation}
We also note that in general the metric (\ref{eqDSTmetric}) is not asymptotically flat, unless we consider the case $\sigma=0$, for which the charge $c>0$ can be interpreted as the ADM mass of a Schwarzschild black hole ($c=2\,M$). Using the quasilocal formalism \cite{Brown:1992br, Yazadjiev:2005du}, we can support the claim that $M=c/2$ is the mass of the DST black hole for any $\sigma<1/4$ and $\sigma>1$. To show this, one has to bring the DST metric (\ref{eqDSTmetric}) into the form
\begin{equation}\label{eqDSTMetricInYCoord}
d{s^2} = - \lambda (y)\,d{t^2} + \frac{{d{y^2}}}{{\lambda (y)}} + {R^2}(y)\,d\Omega _2^2\,.
\end{equation}
The following change of variables
\begin{equation}
r = {\left( {\frac{{4\,\sigma - 1}}{{\sigma - 1}}\,y} \right)^{\frac{{\sigma - 1}}{{4\,\sigma - 1}}}}\,
\end{equation}
is suitable for this task, thus
\begin{equation}\label{eqTheLambdaFunction}
\lambda (y) = (y - c)\,{\left[ {\left( {4 + \frac{3}{{\sigma - 1}}} \right)y} \right]^{\frac{{2\,\sigma + 1}}{{4\,\sigma - 1}}}}\,
\end{equation}
and
\begin{equation}
{R^2}(y) = {\left( {\frac{{4\,\sigma - 1}}{{\sigma - 1}}\,y} \right)^{2\,\frac{{\sigma - 1}}{{4\sigma - 1}}}}\,.
\end{equation}
We can now calculate the quasilocal mass,
\begin{equation}\label{eqQuasiLocalMassGeneral}
{\cal M}(y) = \frac{1}{2}\,\frac{{d{R^2}(y)}}{{dy}}\,{\lambda ^{1/2}}(y)\,\left( {\lambda _0^{1/2}(y) - {\lambda ^{1/2}}(y)} \right)\,,
\end{equation}
derived in \cite{Yazadjiev:2005du}, where $\lambda_0(y)$ is an arbitrary non-negative function which determines the zero of the energy for a
background spacetime. Because there is no cosmological horizon present, the large $y$ limit of (\ref{eqQuasiLocalMassGeneral}) determines the
mass of the black hole. The explicit result for the DST solution is given by
\begin{equation}
{\cal M}(y) = c - y + \sqrt {{\lambda _0}(y)} \,\sqrt {y - c} \,{\left( {\left( {\frac{3}{{\sigma - 1}} + 4} \right)\,y} \right)^{\frac{{2\,\sigma + 1}}{{2\,(1 - 4\,\sigma )}}}}\,.
\end{equation}
The arbitrary function $\lambda_0(y)$ can be fixed as the first term in the large $y$ asymptotic expansion of $\lambda(y)$. The expression is given by
\begin{equation}\label{eqTheLambdaFunction}
\lambda_0 (y) = y\,{\left[ {\left( {4 + \frac{3}{{\sigma - 1}}} \right)y} \right]^{\frac{{2\,\sigma + 1}}{{4\,\sigma - 1}}}}\,
\end{equation}
Now, in the limit $y\to \infty$, one finds the quasilocal mass of the DST black hole,
\begin{equation}\label{eqDSTQLMass}
M = \mathop {\lim }\limits_{y \to \infty } {\cal M}(y) = \frac{c}{2}\,.
\end{equation}
The latter expression relates the unknown integration constant $c$ from Eq. (\ref{eqDSTmetric}) to the quasilocal mass $M$ of the DST black hole. Equation (\ref{eqDSTQLMass}) is valid only when $\frac{{2\,\sigma + 1}}{{4\,\sigma - 1}}\ge 0 $, i.e. $\sigma\leq -1/2$ or $\sigma>1/4$.
\\
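\indent The limit (\ref{eqDSTQLMass}) can also be verified symbolically. The following minimal sketch (using SymPy; the sample value $\sigma=2$ and the hand-made splitting of $\sqrt{\lambda_0}$, valid for $y>0$, are illustrative assumptions) reproduces it:
\begin{verbatim}
import sympy as sp

y, c = sp.symbols('y c', positive=True)
s = sp.Integer(2)                    # a sample admissible value of sigma
k = 4 + 3/(s - 1)
# sqrt(lambda_0) = sqrt(y)*(k*y)**((2s+1)/(2(4s-1))), valid for y > 0
sqrt_lam0 = sp.sqrt(y)*(k*y)**((2*s + 1)/(2*(4*s - 1)))
Mq = c - y + sqrt_lam0*sp.sqrt(y - c)*(k*y)**((2*s + 1)/(2*(1 - 4*s)))
print(sp.limit(Mq, y, sp.oo))        # -> c/2, i.e. M = c/2
\end{verbatim}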
\indent One can impose further restrictions on the parameter $\sigma$ by calculating the independent curvature invariants, i.e. the Ricci scalar,
\begin{equation}\label{eqDSTRicciPhysical}
R = - \frac{{6\,\sigma \,\left( {2\,M\,(\sigma - 1) + 3\,\sigma \,y} \right)}}{{{{(\sigma - 1)}^{\frac{{2\,\sigma + 1}}{{4\sigma - 1}}}}\,{{\left( {(4\,\sigma - 1)\,y} \right)}^{\frac{{2\,(3\,\sigma - 1)}}{{4\,\sigma - 1}}}}}}\,,
\end{equation}
and the Kretschmann invariant,
\begin{align}
K = \frac{{12\,\left( {4\,{M^2}\,\left( {5\,{\sigma ^2} + 1} \right)\,{{(\sigma - 1)}^2} + 12\,M\,\sigma \,\left( {{\sigma ^3} - 1} \right)\,y + 3\,{\sigma ^2}\,\left( {\sigma \,(7\,\sigma - 2) + 4} \right)\,{y^2}} \right)}}{{{{(\sigma - 1)}^{\frac{{2\,(2\,\sigma + 1)}}{{4\,\sigma - 1}}}}\,{{\left( {(4\,\sigma - 1)y} \right)}^{\frac{{6\,(2\,\sigma - 1)}}{{4\,\sigma - 1}}}}}}\,.
\end{align}
Both quantities are singular at $\sigma=1$ and $\sigma=1/4$. At $\sigma\to 0$ one recovers the Schwarzschild case.
\section{Thermodynamics of the DST black hole}\label{sec:4}
We proceed with Wald's proposal \cite{Wald:1993nt} to calculate the entropy,
\begin{equation}\label{eqWaldEntropyFormula}
S = - 8\,\pi \,{\oint_{y = {y_h}\hfill\atop
t = const\hfill} {\left( {\frac{{\delta {\cal L}}}{{\delta {R_{ytyt}}}}} \right)} ^{(0)}}\,R(y)\,d{\Omega ^2}\,,
\end{equation}
of the DST black hole with metric in the form (\ref{eqDSTMetricInYCoord}). The variational derivative of the Lagrangian is given by \cite{Bellini:2010ar}:
\begin{align}
\nonumber
\frac{{\delta {\cal L}}}{{\delta {R_{\alpha \beta \gamma \delta }}}}&= \frac{{({g^{\alpha \gamma }}\,{g^{\beta \delta }} - {g^{\alpha \delta }}\,{g^{\beta \gamma }})}}{{32\,\pi }}\,\left( {1 + \frac{{\sqrt 3 \,\sigma \,R}}{{3\,\sqrt {{C^2}} }}} \right)\\
&+ \frac{{\sqrt 3 \,\sigma }}{{32\,\pi \,\sqrt {{C^2}} }}\left[ {2\,{R^{\alpha \beta \gamma \delta }} - ({g^{\alpha \gamma }}\,{R^{\beta \delta }} + {g^{\beta \delta }}\,{R^{\alpha \gamma }} - {g^{\alpha \delta }}\,{R^{\beta \gamma }} - {g^{\beta \gamma }}\,{R^{\alpha \delta }})} \right]\,.
\end{align}
In Eq. (\ref{eqWaldEntropyFormula}) the superscript (0) indicates that the variational derivative is calculated on the solution. After some lengthy calculations one arrives at the explicit formula for Wald's entropy of the DST black hole:
\begin{equation}\label{eqDSTEntropy}
S = \pi \,{\left[ {2\,M\,\left( {4 + \frac{3}{{\sigma - 1}}} \right)} \right]^{\frac{{2\,\left( {\sigma - 1} \right)}}{{4\sigma - 1}}}}\,,
\end{equation}
which is positive for $\sigma<1/4$ and $\sigma>1$. The Hawking temperature reads
\begin{equation}
T = \frac{1}{{4\,\pi }}\,\frac{{d\lambda }}{{dy}}({y_h}) = \frac{{{8^{\frac{{1 - 2\,\sigma }}{{4\,\sigma - 1}}}}}}{\pi }\,{\left[ {M\,\left( {4 + \frac{3}{{\sigma - 1}}} \right)} \right]^{\frac{{1 + 2\,\sigma }}{{4\,\sigma - 1}}}}\,,
\end{equation}
where $y_h=2\, M$ is the location of the event horizon, given by the zero of $\lambda(y)$. The singularities of the temperature occur at $\sigma=1/4$ and $\sigma=1$. The temperature has one local extremal point at $(M=1/4,\,\sigma=-1/2)$, which is a saddle point. At $\sigma\to-1/2$ one has $T\to 1/(4\pi)$ and the DST temperature does not depend on the mass $M$. Moreover, the heat capacity,
\begin{equation}\label{eqDSTHeatCapacity}
C = T\,\frac{{\partial S}}{{\partial T}} = \frac{{\pi \,{8^{\frac{{1 - 2\,\sigma }}{{1 - 4\,\sigma }}}}\,{{(\sigma - 1)}^{\frac{{2\,\sigma + 1}}{{4\,\sigma - 1}}}}{{\left( {M\,(4\,\sigma - 1)} \right)}^{\frac{{2\,(\sigma - 1)}}{{4\,\sigma - 1}}}}}}{{2\,\sigma + 1}}\,,
\end{equation}
of the DST black hole diverges at $\sigma=1/4$ and $\sigma=-1/2$ and tends to zero at $\sigma=1$. However, the points $\sigma=1/4$ and $\sigma=1$ are not physical due to the divergences of the physical curvature (\ref{eqDSTRicciPhysical}), while $\sigma=-1/2$ corresponds to a Davies type phase transition. In the limit $\sigma\to 0$ one recovers General relativity, where the heat capacity reduces to the Schwarzschild case, $C=-8 \,\pi\,M^2<0$, which is known to be thermodynamically unstable. Our subsequent considerations will also discard this limit.
\\
\indent One can also check that the first law of thermodynamics, $dM=T\,dS$, is satisfied.
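\\
\indent Both statements admit a quick symbolic check. The sketch below (using SymPy; the sample numerical values are illustrative assumptions) verifies $T\,\partial S/\partial M=1$ and exhibits the divergence of the heat capacity at the Davies point $\sigma=-1/2$:
\begin{verbatim}
import sympy as sp

M, s = sp.symbols('M sigma')
k = 4 + 3/(s - 1)           # = (4 sigma - 1)/(sigma - 1), a positive base
S = sp.pi*(2*M*k)**(2*(s - 1)/(4*s - 1))        # Wald entropy
T = 8**((1 - 2*s)/(4*s - 1))/sp.pi*(M*k)**((1 + 2*s)/(4*s - 1))

print(float((T*sp.diff(S, M)).subs({M: 2, s: 3})))   # -> 1.0: dM = T dS

C = T*sp.diff(S, M)/sp.diff(T, M)                    # C = T dS/dT
for eps in (1e-2, 1e-3, 1e-4):
    print(float(C.subs({M: 1, s: sp.Rational(-1, 2) + eps})))
# |C| grows like 1/eps as sigma -> -1/2, signaling the Davies point
\end{verbatim}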
\section{Hessian thermodynamic geometries on the equilibrium state space of the DST black hole solution}\label{sec:5}
\subsection{Extended equilibrium state space}
If we consider thermal fluctuations of the parameter $\sigma$, we have to take into account its contribution to the first law of thermodynamics. In Ruppeiner's approach one takes the entropy as a thermodynamic potential, thus
\begin{equation}
dS = \frac{1}{T}\,dM + \Xi \,d\sigma\,,
\end{equation}
which is the generalized first law from Eq. (\ref{eqGeneralizedFirstLawofTD}). Here $\Xi$ plays the role of the chemical potential for $\sigma$ (considered as a new extensive parameter). The explicit form of $\Xi$ is given by
\begin{align}
\Xi = 3\,\pi \,{8^{\frac{{1 - 2\,\sigma }}{{1 - 4\,\sigma }}}}{(4\,\sigma - 1)^{\frac{{6\,\sigma }}{{1 - 4\,\sigma }}}}\,{\left( {\frac{M}{{\sigma - 1}}} \right)^{\frac{{2(\sigma - 1)}}{{4\sigma - 1}}}}\,\left[ {\ln \left( {2\,M\,\left( {\frac{3}{{\sigma - 1}} + 4} \right)} \right) - 1} \right]\,.
\end{align}
The equilibrium thermodynamic state space of the DST solution (\ref{eqDSTmetric}) is now considered as a two-dimensional manifold equipped with a suitable Riemannian metric
\begin{equation}\label{eqInfoMetricGeneral}
ds_I^2 = {g^{(I)}_{ab}}\,d{E^a}\,d{E^b}\,,
\end{equation}
where $E^a$, $a=1,2$, are the extensive parameters, such as the mass, the entropy or the parameter $\sigma$. Depending on the chosen thermodynamic potential, one discerns several possibilities for the information thermodynamic metric, as discussed in the Introduction.
\\
\indent It is also well known that in two dimensions all the relevant information about the phase structure is encoded in the information metric and its scalar curvature. The latter is proportional to the only independent component of the Riemann curvature tensor,
\begin{equation}\label{eqInfoCurvature2d}
{R_I} = \frac{{2\,{R_{I,1212}}}}{{\det {(g^{(I)}_{ab})}}}\,,
\end{equation}
where $\det ({g^{(I)}_{ab}}) $ is the determinant of the information metric (\ref{eqInfoMetricGeneral}). Once the scalar information curvature is obtained, we can identify its singularities as phase transition points, which should be compared to the resulting divergences of the heat capacity. If a complete match is found one can rely on the considered information metric as suitable for describing the space of equilibrium states for the given black hole solution.
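\indent Since this two-dimensional curvature computation recurs for every metric considered below, it is convenient to automate it. The following sympy sketch is our own utility (the function name and sign conventions are ours, normalized so that the unit 2-sphere has $R=2$); it evaluates (\ref{eqInfoCurvature2d}) for an arbitrary two-dimensional metric via the Christoffel symbols.
\begin{verbatim}
import sympy as sp

def thermo_curvature(g, coords):
    """Scalar curvature R_I = 2 R_{1212}/det(g) of a 2D metric g(coords)."""
    n = 2
    ginv = g.inv()
    # Christoffel symbols Gamma[a][b][c] = Gamma^a_{bc}
    Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, c], coords[b])
                               + sp.diff(g[d, b], coords[c])
                               - sp.diff(g[b, c], coords[d]))
                   for d in range(n))/2
               for c in range(n)] for b in range(n)] for a in range(n)]

    # Riemann tensor R^a_{bcd}
    def riem(a, b, c, d):
        return (sp.diff(Gamma[a][d][b], coords[c])
                - sp.diff(Gamma[a][c][b], coords[d])
                + sum(Gamma[a][c][e]*Gamma[e][d][b]
                      - Gamma[a][d][e]*Gamma[e][c][b] for e in range(n)))

    R_1212 = sum(g[0, a]*riem(a, 1, 0, 1) for a in range(n))  # lower index
    return sp.simplify(2*R_1212/g.det())

# sanity check on the unit 2-sphere: expect R = 2
th, ph = sp.symbols('theta phi')
print(thermo_curvature(sp.Matrix([[1, 0], [0, sp.sin(th)**2]]), (th, ph)))
\end{verbatim}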
\subsection{Ruppeiner information metric}
We begin by calculating the Ruppeiner information metric,
\begin{equation}
{g^{(R)}_{ab}} = - {\partial _a}{\partial _b}{S}(M,\,\sigma),\;\;\;a,b = 1,2\,,
\end{equation}
with components
\begin{align}
g_{MM}^{(R)} &= \frac{{\pi \,{8^{\frac{{2\,\sigma - 1}}{{4\,\sigma - 1}}}}\,{{(\sigma - 1)}^{\frac{{2\,\sigma + 1}}{{4\,\sigma - 1}}}}\,(2\,\sigma + 1)}}{{{{\left( {M\,(4\,\sigma - 1)} \right)}^{\frac{{6\,\sigma }}{{4\,\sigma - 1}}}}}}\,,\\\nonumber
\\
g_{M\sigma }^{(R)} &= g_{\sigma M}^{(R)} = - \frac{{3\,\pi \,{8^{\frac{{2\,\sigma - 1}}{{4\,\sigma - 1}}}}\,\left( {2\,(\sigma - 1)\,\ln \left[ {2\,M\,\frac{{4\,\sigma - 1}}{{\sigma - 1}}} \right] + 2\,\sigma + 1} \right)}}{{{M^{\frac{{2\,\sigma + 1}}{{4\sigma - 1}}}}\,{{(\sigma - 1)}^{\frac{{2\,(\sigma - 1)}}{{4\,\sigma - 1}}}}\,{{(4\,\sigma - 1)}^{\frac{{10\,\sigma - 1}}{{4\,\sigma - 1}}}}}}\,,\\\nonumber
\\\nonumber\small
g_{\sigma \sigma }^{(R)} &= \frac{{3\,\pi \,{8^{\frac{{2\,\sigma - 1}}{{4\,\sigma - 1}}}}{M^{\frac{{2\,(\sigma - 1)}}{{4\,\sigma - 1}}}}}}{{{{(\sigma - 1)}^{\frac{{2\,(2\,\sigma - 1)}}{{4\,\sigma - 1}}}}\,{{(4\,\sigma - 1)}^{\frac{{2(7\,\sigma - 1)}}{{4\,\sigma - 1}}}}}}\,\left\{ {46\,\sigma - 32\,{\sigma ^2} - 5 + \ln {2^{4\,(8\,{\sigma ^2} - 7\,\sigma + 1)}} - \ln {8^{\sigma + 1}}\,\ln 4} \right.\\
&+ \left. {2\,(\sigma - 1)\,\left( {2 + 16\,\sigma - 3\,\ln \left[ {2\,M\,\frac{{4\,\sigma - 1}}{{\sigma - 1}}} \right]} \right)\,\ln \left[ {M\,\frac{{4\,\sigma - 1}}{{\sigma - 1}}} \right]} \right\}\,.
\end{align}
\normalsize
\indent Critical points of phase transitions can be identified by the singularities of the Ruppeiner information curvature (\ref{eqInfoCurvature2d}). The resulting expression is lengthy, but one can check that at $\sigma=-1/2$, which is the relevant divergence for the heat capacity (\ref{eqDSTHeatCapacity}), the Ruppeiner curvature is finite,
\begin{equation}\label{eqDSTRuppeinerInfoCurvature}
\mathop {\lim }\limits_{\sigma \to - \frac{1}{2}} R_I^{(R)}(M,\sigma ) = \frac{{\left( {\ln \left( {{2^{9 + 7\,\ln 2}}\,{M^{3 + \ln (32\,M)}}} \right) - 4} \right)\,\ln (2\,M) + \ln {2^{{{\ln }^2}2 + 3\,\ln 2 - 4}} + 4}}{{8\,\pi \,M\,{{\ln }^2}\left[ {{2^{\ln 2}}{{(2\,M)}^{\ln (8\,M)}}} \right]}}\,,
\end{equation}
thus the Davies critical point cannot be covered by this particular thermodynamic geometry. This mismatch shows that the Ruppeiner information approach is not an appropriate choice for the description of the equilibrium state space of the DST black hole solution.
\\
\indent Although the Ruppeiner metric fails to produce a viable thermodynamic description, one can always impose only local thermodynamic stability, defined by positive values of the heat capacity, $C>0$. The latter condition leads to the parameter region $-1<\sigma<-1/2$, together with $\sigma>1$, for arbitrarily large mass $M>0$.
\subsection{Weinhold information metric}
The Weinhold metric is defined as the Hessian of the mass of the black hole with respect to the entropy and the other extensive parameters. In the DST case one has
\begin{equation}
g_{ab}^{(W)} = {\partial _a}{\partial _b}M(S,\sigma )\,,\quad a,\,b = 1,2\,,
\end{equation}
where the mass $M$ of the DST black hole is expressed in terms of the entropy $S$ and the parameter $\sigma$ as
\begin{equation}
M(S,\,\sigma ) = \frac{{\sigma - 1}}{{2\,(4\,\sigma - 1)}}\,{\pi ^{\frac{3}{{2\,(1 - \sigma )}} - 2}}\,{S^{\frac{3}{{2(\sigma - 1)}} + 2}}\,.
\end{equation}
The heat capacity now takes a particularly simple form,
\begin{equation}
C = \frac{{{\partial _S}M}}{{\partial _S^2M}}=\left( {1 - \frac{3}{{2\,\sigma + 1}}} \right)\,S\,,
\end{equation}
and the Davies transition point at $\sigma=-1/2$ is present. The metric components are given explicitly by
\begin{align}
g_{SS}^{(W)} &= \frac{{{\pi ^{\frac{3}{{2 - 2\,\sigma }} - 2}}(2\,\sigma + 1)\,{S^{\frac{3}{{2\,(\sigma - 1)}}}}}}{{8\,(\sigma - 1)}}\,,\\\nonumber
\\
g_{S\sigma }^{(W)} &= g_{\sigma S}^{(W)} = - \frac{{3\,{\pi ^{\frac{3}{{2 - 2\,\sigma }} - 2}}\,{S^{\frac{3}{{2\,(\sigma - 1)}} + 1}}\,\ln \left( {{\pi ^{1 - 4\sigma }}{S^{4\sigma - 1}}} \right)}}{{8\,{{(\sigma - 1)}^2}\,(4\,\sigma - 1)}}\,,\\\nonumber
\\\nonumber
g_{\sigma \sigma }^{(W)} &= \frac{{3{\pi ^{\frac{3}{{2 - 2\sigma }} - 2}}{S^{\frac{3}{{2(\sigma - 1)}} + 2}}}}{{8{{(\sigma - 1)}^3}{{(4\sigma - 1)}^3}}}\,\left[ {32 + 4\,(\sigma - 1)\,\left( {8\,{{(\sigma - 1)}^2} + 3\,(1 - 4\,\sigma )\,\ln \pi } \right)\,\ln 2} \right.\\\nonumber
&+ 96\,{\sigma ^2} - 32\,{\sigma ^3} - 96\,\sigma + 12\,{(\sigma - 1)^2}\,{\ln ^2}2 + 3\,{(1 - 4\,\sigma )^2}\,{\ln ^2}\pi - 16\,{(\sigma - 1)^2}\,(4\,\sigma - 1)\,\ln \pi \\\nonumber
& - 4\,(\sigma - 1)\,\ln (\sigma - 1)\,\left( {2\,(\sigma - 1)\,(4\,\sigma - 4 + \ln 8) + \,\ln \left( {{\pi ^{3\,(1 - 4\,\sigma )}}\,{{(\sigma - 1)}^{3\,(1 - \sigma )}}} \right)} \right)\\\nonumber
& + 2\,(4\,\sigma - 1)\,\left( {2\,\sigma \,\left( {4\,\sigma - 8 + \ln \left( {\frac{8}{{\,{\pi ^6}}}} \right)} \right) - \ln \left[ {\frac{{{\pi ^3}}}{{64}}\,{{(\sigma - 1)}^{6\,(\sigma - 1)}}} \right] + 8} \right)\\
&\left. {\times \ln \left( {{{\left( {\frac{{\sigma - 1}}{2}} \right)}^{\frac{{2\,(\sigma - 1)}}{{4\,\sigma - 1}}}}\,S} \right) + 3\,{{(1 - 4\,\sigma )}^2}\,{{\ln }^2}\left( {{{\left( {\frac{{\sigma - 1}}{2}} \right)}^{\frac{{2\,(\sigma - 1)}}{{4\,\sigma - 1}}}}\,S} \right)} \right]\,.
\end{align}
The Weinhold approach also fails to reproduce the Davies transition point, because the Weinhold curvature remains finite at $\sigma=-1/2$,
\begin{equation}
\mathop {\lim }\limits_{\sigma \to - 1/2} {R^{(W)}_I}(S,\,\sigma ) = - \frac{2\,\pi \left( \ln \frac{S}{\pi} - 2 \right)\left( \ln^2 \frac{S}{\pi} - \ln \frac{S}{\pi} + 2 \right)}{S\,\ln^4 \frac{S}{\pi}}\,,
\end{equation}
since $S>0$ is assumed throughout.
\section{Legendre invariant thermodynamic geometries on the equilibrium state space of the DST black hole solution }\label{sec:6}
\subsection{Quevedo information metric}
The Quevedo information metric on the equilibrium state space of the DST solution is given by
\begin{equation}
ds_Q^2 = {\beta _S}\,S\,(\partial _\sigma ^2S\,d{\sigma ^2} - \partial _M^2S\,d{M^2}) = {{g}^{(Q)}_{MM}}\,d{M^2} + {{g}^{(Q)}_{\sigma \sigma }}\,d{\sigma ^2}\,.
\end{equation}
One can find the degree of generalized homogeneity, $\beta_S$, directly from Euler's theorem for homogeneous functions (\ref{eqEulerIdentity}):
\begin{equation}
M\,\frac{{\partial S}}{{\partial M}} + \sigma \,\frac{{\partial S}}{{\partial \sigma }} = {\beta _S}\,S\,.
\end{equation}
The latter equation for $\beta_S$ leads to the following components of the information metric:
\begin{equation}
{{g}^{(Q)}_{MM}} = - (M\,{\partial _M}S + \sigma \,{\partial _\sigma }S)\,\partial _M^2S\,,\qquad {{g}^{(Q)}_{\sigma \sigma }} = (M\,{\partial _M}S + \sigma \,{\partial _\sigma }S)\,\partial _\sigma ^2S\,.
\end{equation}
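\indent These components follow mechanically from the explicit entropy (\ref{eqDSTEntropy}); the brief sympy sketch below (our own illustration) constructs them symbolically.
\begin{verbatim}
import sympy as sp

M = sp.symbols('M', positive=True)
sigma = sp.symbols('sigma', real=True)

# Entropy S(M, sigma) of the DST black hole (explicit expression above)
S = sp.pi*(2*M*(4 + 3/(sigma - 1)))**(2*(sigma - 1)/(4*sigma - 1))

# Euler identity gives beta_S*S; the Quevedo components then follow
betaS_S = M*sp.diff(S, M) + sigma*sp.diff(S, sigma)
g_MM = -betaS_S*sp.diff(S, M, 2)
g_ss = betaS_S*sp.diff(S, sigma, 2)
\end{verbatim}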
The regions of positive definite information metric, together with local thermodynamic stability $C>0$, in Quevedo's case are shown in Figure \ref{figDSTmetricQ}. The upper region is constrained within $\sigma>3/2$ and $0<M<1/3$, while the lower region lies within $\sigma<-1/2$ and $M>1/3$. One should keep in mind that, contrary to Ruppeiner's case, in Quevedo's case we do not have a clear physical interpretation of the components of the information metric, thus one is not compelled to impose the Sylvester criterion. The latter will not necessarily give the regions of global thermodynamic stability. On the other hand, one can check that the convexity condition, $\partial_a\partial_b S \ge 0$, cannot be satisfied here.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{DSTmetricQ.pdf}
\end{subfigure}
\caption{The regions of positive definite information metric together with $C>0$ (the shaded regions) for the DST black hole with respect to the Quevedo information approach. The upper region lies within $1< \sigma< \infty$ and $0<M<1/3$, while the lower region is defined within $-\infty<\sigma<-1/2$ and $1/3<M<\infty$.}
\label{figDSTmetricQ}
\end{figure}
\indent The Quevedo thermodynamic curvature on the two-dimensional manifold ($M,\,\sigma$)\footnote{For clarity we have omitted the superscript (Q) from the metric components.},
\begin{align}
R_I^{(Q)} = \frac{{g_{MM}\left( {g_{MM,\sigma }^{}g_{\sigma \sigma ,\sigma }^{} + g_{\sigma \sigma ,M}^2} \right) + g_{\sigma \sigma }^{}\left( {g_{MM,\sigma }^2 + g_{MM,M}^{}g_{\sigma \sigma ,M}^{} - 2g_{MM}^{}\left( {g_{MM,\sigma ,\sigma }^{} + g_{\sigma \sigma ,M,M}^{}} \right)} \right)}}{{2\,g_{MM}^2\,g_{\sigma \sigma }^2}}\,,
\end{align}
is singular at $M= 1/3$, and it is also singular at the Davies transition point $\sigma\to -1/2$, suggesting that the Quevedo information metric is an appropriate metric for the description of the equilibrium state space of the DST solution. However, one can check that there are also additional spinodal curves.
\subsection{HPEM information metric}
In order to avoid extra singular points in the Quevedo thermodynamic curvature, which do not coincide with phase transitions of any type, the authors of \cite{Hendi:2015rja} proposed an alternative information metric with a different conformal factor,
\begin{equation}
ds_{HPEM}^2 = S\,\frac{{{\partial _S}M}}{{{{(\partial _\sigma ^2M)}^3}}}\,( - \partial _S^2M\,d{S^2} + \partial _\sigma ^2M\,d{\sigma ^2})\,,
\end{equation}
with components
\begin{equation}
g_{SS}^{(HPEM)} = - S\,\partial _S^2M\,\frac{{{\partial _S}M}}{{{{(\partial _\sigma ^2M)}^3}}}\,,\qquad g_{\sigma \sigma }^{(HPEM)} = S\,\partial _\sigma ^2M\,\frac{{{\partial _S}M}}{{{{(\partial _\sigma ^2M)}^3}}}\,.
\end{equation}
One can find the regions where Sylvester's criterion holds together with $C>0$, as shown in Fig. \ref{figDSTmetricHPEM}.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{DSTmetricHPEM.pdf}
\end{subfigure}
\caption{The regions of positive definite information metric together with $C>0$ (the shaded regions) for the DST black hole with respect to the HPEM information metric. The upper region lies within $1< \sigma< \infty$, while the lower region is defined within $-\infty<\sigma<-1/2$.}
\label{figDSTmetricHPEM}
\end{figure}
\indent The HPEM information curvature\footnote{For clarity we have omitted the superscript (HPEM) from the metric components.},
\begin{align}
R_I^{(HPEM)}(S,\,\sigma ) = \frac{{g_{SS}^{}\left( {g_{SS,\sigma }^{}g_{\sigma \sigma ,\sigma }^{} + g_{\sigma \sigma ,S}^2} \right) + g_{\sigma \sigma }^{}\left( {g_{SS,\sigma }^2 + g_{SS,S}^{}g_{\sigma \sigma ,S}^{} - 2g_{SS}^{}\left( {g_{SS,\sigma ,\sigma }^{} + g_{\sigma \sigma ,S,S}^{}} \right)} \right)}}{{2\,g_{SS}^2\,g_{\sigma \sigma }^2}}\,,
\end{align}
is singular only at the Davies point $\sigma=-1/2$, thus the HPEM metric is also an appropriate Riemannian metric on the equilibrium state space of the DST black hole.
\subsection{MM information metric}
The final geometric approach we consider was proposed by A. H. Mansoori and B. Mirza in \cite{Mansoori:2013pna}. The authors define a conjugate thermodynamic potential via an appropriate Legendre transformation. In the MM information approach the divergent points of the specific heat turn out to correspond exactly to the singularities of the thermodynamic curvature.
\\
\indent The conjugate potential we choose to work with is the Helmholtz free energy $F$, which is related to the mass $M$ by the following Legendre transformation
\begin{equation}
F(T,\,\sigma ) = M(T,\,\sigma ) - T\,S(T,\,\sigma )\,.
\end{equation}
The latter yields
\begin{equation}
F(T,\,\sigma ) = - \frac{{2\,\sigma + 1}}{{4\,\sigma - 1}}\,{2^{2 - \frac{6}{{2\,\sigma + 1}}}}\,{\pi ^{2 - \frac{3}{{2\,\sigma + 1}}}}\,{T^{2 - \frac{3}{{2\,\sigma + 1}}}}\,.
\end{equation}
The components of the MM thermodynamic metric are now given by
\begin{equation}
g_{TT}^{(MM)} = \frac{1}{T}\,\frac{{{\partial ^2}F}}{{\partial \,{T^2}}}\,,\qquad g_{T\sigma }^{(MM)} = g_{\sigma T}^{(MM)} = \frac{1}{T}\,\frac{{{\partial ^2}F}}{{\partial \,T\partial \sigma }}\,,\qquad g_{\sigma \sigma }^{(MM)} = \frac{1}{T}\,\frac{{{\partial ^2}F}}{{\partial \,{\sigma ^2}}}\,.
\end{equation}
One can show that there are no regions in the ($T,\,\sigma$) parameter space where Sylvester's criterion holds together with $C>0$. More importantly, the MM information curvature,
\begin{align}
\nonumber
&R_I^{(MM)}(T,\,\sigma ) = \frac{1}{{2\,{{\det }^2}( - \hat g)}}\\\nonumber
&\times \left\{ {g_{TT}^{}\,\left[ {g_{TT,T}^{}\,g_{\sigma \sigma ,\sigma }^{} - 2\,g_{T\sigma ,\sigma }^{}\,\left( {g_{TT,\sigma }^{} - 2\,g_{T\sigma ,T}^{}} \right) - g_{\sigma \sigma ,T}^{}\,\left( {g_{TT,\sigma }^{} + 2\,g_{T\sigma ,T}^{}} \right)} \right]} \right.\\\nonumber
&+ g_{TT}^{}\,\left( {g_{\sigma \sigma ,\sigma }^{}\,\left( {g_{TT,\sigma }^{} - 2\,g_{T\sigma ,T}^{}} \right) + g_{\sigma \sigma ,T}^2} \right) + 2\,g_{T\sigma }^2\,\left( {g_{TT,\sigma ,\sigma }^{} - 2\,g_{T\sigma ,T,\sigma }^{} + g_{\sigma \sigma ,T,T}^{}} \right)\\\nonumber
&\left. { + g_{\sigma \sigma }^{}\,\left[ {g_{TT,\sigma }^2 + g_{TT,T}^{}\,\left( {g_{\sigma \sigma ,T}^{} - 2\,g_{T\sigma ,\sigma }^{}} \right) - 2\,g_{TT}^{}\,\left( {g_{TT,\sigma ,\sigma }^{} - 2\,g_{T\sigma ,T,\sigma }^{} + g_{\sigma \sigma ,T,T}^{}} \right)} \right]} \right\}\,,
\end{align}
is singular exactly at the Davies transition point $\sigma\to -1/2$ without any extra critical points.
\section{Conclusion}\label{sec:conclusion}
Our current investigation is prompted by the intriguing existence of dark matter and dark energy in the Universe, which cannot be explained by traditional approaches. This motivates us to consider alternative models, which can include effects related to these dark phenomena. Highly promising alternatives are the so-called higher derivative theories of gravity, which include contributions from higher powers of the Ricci scalar or other geometric invariants. In particular, our focus is on the thermodynamic properties of their admissible black hole solutions, which will allow us to constrain the possible dark matter/energy contributions, at least as far as thermodynamics is concerned.
\\
\indent In this paper we consider a known four-dimensional higher derivative black hole solution, namely the Deser-Sarioglu-Tekin black hole. The latter is a static, spherically symmetric gravitational solution of a theory that adds a non-polynomial term in the Weyl tensor to the Einstein--Hilbert Lagrangian. In order to study any implications for the black hole thermodynamics, we take advantage of two different geometric formulations, namely those of the Hessian information metrics (geometric thermodynamics) and the formalism of Legendre invariant thermodynamic metrics (geometrothermodynamics) on the space of equilibrium states of the DST black hole.
\\
\indent In general, the formalism of thermodynamic information geometry identifies the phase transition points of the system with
the singularities of the corresponding thermodynamic information curvature $R_I$. Near the critical points the underlying inter-particle interactions become strongly correlated and the equilibrium thermodynamic considerations are no longer applicable. In this case
one expects that a more general approach should hold.
\\
\indent In the Hessian formulation we analyzed the Ruppeiner and the Weinhold thermodynamic metrics and showed that they are inadequate for the description of the DST black hole equilibrium state space. This is due to the mismatch between the singularities of the heat capacity and the singularities of the corresponding thermodynamic curvatures. Therefore the Hessian thermodynamic geometries are unable to reproduce the Davies type transition points of the DST black hole heat capacity.
\\
\indent On the other hand, in the Legendre invariant case, all considered thermodynamic metrics successfully incorporate the relevant phase transition points. Consequently they can be taken as viable metrics on the equilibrium state space of the DST black hole. However, some of them, such as the Quevedo metric, encounter additional singularities in their thermodynamic curvatures, the latter having obscure physical meaning at best. In contrast, the HPEM and the MM metrics deal well with this problem and eliminate the redundant spinodal curves in the case of the DST black hole.
\\
\indent Finally, let us address the problem of thermodynamic stability of the DST solution. For global stability one refers to the Sylvester criterion for a positive definite information metric, together with the positivity of the black hole heat capacity (local thermodynamic stability). Unfortunately, in the framework of geometric thermodynamics, both conditions can be interpreted as global thermodynamic stability only within the context of the Hessian metrics, due to their probabilistic interpretation. For the Legendre invariant metrics, imposing the Sylvester criterion together with positive heat capacity does not necessarily guarantee global thermodynamic stability. The latter is caused by the current lack of physical interpretation of the components of the corresponding information metrics. Therefore, due to the failure of the Hessian geometries, the DST black hole is only stable locally from a thermodynamic standpoint. The condition for local thermodynamic stability, together with the divergences of the physical DST metric curvature, constrains the values of the unknown parameter $\sigma$ to the regions $\sigma<-1/2$ and $\sigma>1$. The latter is also confirmed by imposing the Sylvester criterion for the Quevedo and HPEM thermodynamic metrics.
\section*{Acknowledgements}
The author would like to thank R. C. Rashkov, H. Dimov, S. Yazadjiev and D. Arnaudov for insightful discussions and for careful reading of the draft. This work was supported by the Bulgarian NSF grant \textnumero~DM18/1 and Sofia University Research Fund under Grant \textnumero~80-10-104.
\bibliographystyle{utphys}
\section{Introduction}
\label{sec:intro}
Deep learning ranking models increasingly represent the state of the art in ranking documents with respect to a user query. Different neural architectures \cite{drmm,duet,knrm} are used to model the interaction between query and document text to compute relevance. These models rely heavily on tokens and their embeddings, applying non-linear transformations to compute similarity.
Thus, they capture semantic similarity between query and document tokens, circumventing the exact-match limitation of previous models such as BM25~\cite{bm25}.
The complexity of deep learning models arises from their high dimensional decision boundaries. The hyperplanes are known to be very sensitive to changes in feature values. In the computer vision literature \cite{advRank2020} and in text based classification tasks \cite{wang2019survey}, researchers have shown that deep learning models can change their predictions with slight variations in inputs. There is a large body of work that investigates adversarial attacks \cite{wang2019survey} on deep networks to quantify their robustness against noise. There is, however, a lack of such experiments and evaluation in the information retrieval literature. Mitra \emph{et~al.}~\cite{mitra2016dualAdv} demonstrated how word replacement in documents can lead to differences in word representations, which leaves room for further investigation of the robustness of deep ranking models to adversarial attacks.
Usually, adversarial attackers perturb images to reverse a model's decision. Noise addition to images is performed at the pixel level such that a human eye cannot distinguish the true image from the noisy one. In text classification tasks, characters or words \cite{textFool2017,papernot2016crafting} are modified to change the classifier output.
The objective of the adversary is to reverse the model's decision with a minimum amount of injected noise. In information retrieval, however, it remains an open problem how to design an adversary for a ranking model. We posit that the objective of an adversary in information retrieval would be to change the position of a document with respect to the query. We follow an approach similar to those used in text classification tasks \cite{textFool2017,papernot2016crafting} and perturb tokens in documents, replacing them with semantically similar tokens such that the rank of the document changes.
In this work, we demonstrate by means of false negative adversarial attacks that three state-of-the-art retrieval models can change a document's position given slight changes in its text. We evaluate the robustness of these ranking models on publicly available datasets by injecting noisy text of varied length into documents. We propose a black-box adversarial attack model that takes a (query, document) pair as input and generates a noisy document such that it is pushed lower in the ranked list. Our system does not need access to the ranker's internal architecture, only its output given the (query, document) tuple as input.
Our findings suggest that ranking models can be sensitive to even single token perturbations. Table \ref{tab:examples} shows some examples generated by our model of one word perturbations, along with changes in document position when scored using a state-of-the-art ranking model.
Overall, we found that simple attackers could perturb very few words and still generate semantically similar text that can fool a model into ranking a relevant document lower than non-relevant documents.
\begin{table}[]
\centering
\tiny
\begin{tabular}{|p{1.5cm}|p{4.2cm}|l|c|}
\hline
\textbf{Query} & \textbf{Relevant Document} & \textbf{Replaced by} & \textbf{$\downarrow$Rank} \\ \hline
what can be powered by wind
& Wind power is the conversion of wind energy into a useful form of energy, such as using wind turbines to make electrical power , windmills for mechanical power, wind pumps for \hone{water} pumping or drainage , or sails to propel ships.
& \htwo{wind} & 12\\ \hline
what causes heart disease
& The causes of cardiovascular disease are \hone{diverse} but atherosclerosis and/or hypertension are the most common.
& \htwo{disorder} & 8 \\ \hline
how many books are included in the protestant Bible?
& Christian Bibles range from the sixty-six books of the Protestant \hone{canon} to the eighty-one books of the Ethiopian Orthodox Tewahedo Church canon.
& \htwo{christian} & 5 \\ \hline
how many numbers on a credit card
& An ISO/IEC 7812 card number is typically 16 digits in \hone{length}, and consists of:
& \htwo{identification} & 4 \\ \hline
how many amendments in us
& The Constitution has been amended seventeen additional times (for a total of \hone{twenty-seven} amendments).
& \htwo{allowing} & 4 \\ \hline
what is a monarch to a monarchy
& A monarchy is a form of government in which sovereignty is actually or nominally \hone{embodied} in a single individual (the monarch ).
& \htwo{throne} & 4 \\ \hline
\end{tabular}
\caption{Examples of our one word adversarial perturbation and change in document position when ranked by DRMM.}
\label{tab:examples}
\end{table}
\section{Related Work}
Adversarial attacks on deep neural networks have been extensively explored in vision \cite{onepixel} and text classification \cite{wang2019survey,papernot2016crafting}. Existing work has proposed adversarial attacks with either white-box or black-box access to the model. In this work, our focus is on black-box attacks, since access to ranking models is not always available. Existing work is also limited in that the focus is on changing classifier decisions and not document positions in a ranked list with respect to a query. In this work we explore the utility of the (query, doc) pair in adversarial attacks on ranking models.
\begin{comment}
Main idea: Explain ranker through adversarial attacks
\begin{itemize}
\item Small perturbation in the document significantly change its rank
\item Can we say something about the ranker's robustness?
\item Analyze the perturbed noise (words that need to be changed) to see if they can be used to explain the ranking.
\item Can we take into account other documents? This way we can be different from attention based approaches that only focus on a single document.
\item How about query perturbation?
\end{itemize}
\end{comment}
\section{Our Approach}
\label{sec:approach}
Our goal is to minimally perturb a document such that the rank of the document changes.
In particular, we target top-ranked documents and attempt to lower their rank through noise
injection. Assuming relevant documents are ranked higher, the objective is to lower the position of a document with minimal change in its text.
\subsection{Problem Statement}
Let $\Vec{q}$, $\Vec{d}$ and $\mathcal{F}$ represent a query, a document and a ranking model, respectively.
Given a query-document pair $(\Vec{q},\Vec{d})$, the ranker outputs a score $s=\mathcal{F}(\Vec{q},\Vec{d})$ indicating
the relevance of the document $\Vec{d}$ to the query $\Vec{q}$, where higher score means higher relevance.
Given a query and a list of documents, the ranker computes scores for every document w.r.t.\ the
given query and ranks the documents in descending order of the scores.
Given $\Vec{q}$ and $\Vec{d}$, our goal is to find a perturbed document $\Vec{d'}$ such that
$\mathcal{F}(\Vec{q},\Vec{d'}) << \mathcal{F}(\Vec{q},\Vec{d})$ so that it ranks lower than
its original rank. At the same time we want to minimize the perturbation in the document.
We assume that both query and document are represented as vectors, which could be either sequences of
terms or other suitable representations such as word embeddings. Let $\Vec{q} = (q_1, q_2, ..., q_p)$
be a p-dimensional query vector and $\Vec{d} = (d_1, d_2, ..., d_q)$ be a q-dimensional document
vector. We construct a perturbed document $\Vec{d'}=(\Vec{d} + \Vec{n})$ by adding a q-dimensional noise vector
$\Vec{n} = (n_1, n_2, ..., n_q)$.
Our goal is to find a noise vector that reduces a document's score
without changing many terms in the document.
We formulate this problem as an optimization problem as follows:
\begin{equation}
\label{eq:v1}
\begin{aligned}
& \underset{\Vec{n}}{\text{minimize}}
& & \mathcal{F}(\Vec{q}, \Vec{d+n}) \\
& \text{subject to}
& & ||\Vec{n}||_0 \leq c
\end{aligned}
\end{equation}
Here, $c$ is a sparsity parameter that controls the number of perturbed terms.
Although the above formulation provides a sparse solution, we do not have
any control over the magnitude of the noise vector. In other words,
even though we change only a few terms in a document, we may end up
changing them significantly. To address this problem, we modify the
objective function as follows:
\begin{equation}
\label{eq:v2}
\begin{aligned}
& \underset{\Vec{n}}{\text{minimize}}
& & \mathcal{F}(\Vec{q}, \Vec{d+n}) + \sum_{i=1}^{q}{\mathcal{D}}(d_i, (d_i+n_i)) \\
& \text{subject to}
& & ||\Vec{n}||_0 \leq c
\end{aligned}
\end{equation}
Here, $\mathcal{D}$ can be any suitable distance function (e.g., cosine distance)
that computes distance between two terms.
It ensures that the modified terms are close to the original terms.
We found this formulation too strict.
In particular, it penalizes perturbations of query terms as well as non-query
terms in a document. Hence, we modify the objective function to penalize
only query terms, since we want to ensure that the perturbed document
stays relevant to the query. The modified objective function is as follows:
\begin{equation}
\label{eq:v3}
\begin{aligned}
& \underset{\Vec{n}}{\text{minimize}}
& & \mathcal{F}(\Vec{q}, \Vec{d+n}) + \sum_{i=1}^{q} \mathbbm{1}_{d_i \in \Vec{q}} [{\mathcal{D}}(d_i, (d_i+n_i))] \\
& \text{subject to}
& & ||\Vec{n}||_0 \leq c
\end{aligned}
\end{equation}
Here, $\mathbbm{1}_{x}$ is an indicator function which is $1$ when $x$ is true and $0$ otherwise.
The first term in the objective function ensures that the ranker gives a low score to the perturbed document,
while the second term incurs a penalty for changing query terms in the document.
The constraint limits the number of changed terms in a document to $c$.
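For concreteness, the objective in Equation (\ref{eq:v3}) can be written directly as a fitness function over candidate noise vectors. The following is a minimal sketch, assuming the ranker is exposed as a black-box scoring function and the document is represented by its embedding matrix; the names are illustrative, not actual implementation code.
\begin{verbatim}
import numpy as np

def fitness_a3(noise, query, doc_emb, is_query_term, ranker_score):
    """Objective of Eq. (3): ranker score on the perturbed document plus a
    cosine-distance penalty on perturbed *query* terms. `noise` has the
    same shape as `doc_emb`; sparsity (||n||_0 <= c) is enforced by the
    solution encoding, not here."""
    perturbed = doc_emb + noise
    score = ranker_score(query, perturbed)      # black-box ranker call
    penalty = 0.0
    for i in np.flatnonzero(np.any(noise != 0, axis=1)):
        if is_query_term[i]:
            a, b = doc_emb[i], perturbed[i]
            penalty += 1.0 - a @ b / (np.linalg.norm(a)*np.linalg.norm(b))
    return score + penalty                      # DE minimizes this value
\end{verbatim}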
\subsection{Method}
A popular approach to solving the above problem relies on computing the gradient of the model's output (score) with respect to the input and using this gradient to find a noise vector that reduces the model's score on the noisy input~\cite{fgsm}.
Such gradient based methods do not work well with textual data due to non-differentiable components
such as an embedding layer.
A common workaround is to find a perturbation in the embedding space and project the perturbation vector into the input space
through techniques like nearest neighbour search. In general, gradient based methods are restricted to differentiable models and require information about the model's architecture.
To address the shortcomings of gradient based methods, we propose a different approach that uses Differential Evolution (DE)~\cite{de}, a stochastic evolutionary algorithm. It can be applied to a variety of optimization problems including non-differentiable and multimodal objective functions. DE is a population-based optimization method which works as follows:
Let's say we want to optimize over $l$ parameters. Given the population size $m$, it randomly initializes $m$ candidate solutions $\Vec{X_t} = (\Vec{x}_1, \Vec{x}_2, ..., \Vec{x}_m)$, each of length $l$. From these parent solutions it generates candidate children solutions using the following mutation criterion:
\begin{equation}
\Vec{x}_{a,t+1} = \Vec{x}_{b,t} + F(\Vec{x}_{c,t} - \Vec{x}_{d,t})
\end{equation}
Here, $a, b, c$ and $d$ are randomly chosen distinct indices in the population, and $F$ is a mutation factor in $[0,2]$. Next, these children are compared against their parents using a fitness function, and the $m$ candidates with the best fitness (i.e., those that minimize the fitness function) are selected. This process is repeated until it either converges or reaches the maximum number of iterations.
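A minimal numpy sketch of this loop (our own simplification, which omits crossover and uses greedy one-to-one parent/child selection; the defaults match the population size and iteration budget used in our experiments) looks as follows.
\begin{verbatim}
import numpy as np

def differential_evolution(fitness, bounds, m=500, F=0.5, iters=100, seed=0):
    """Minimal DE: mutate, then keep child or parent, whichever is fitter.
    `bounds` is an array of shape (l, 2) with per-parameter search ranges."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = rng.uniform(lo, hi, size=(m, len(lo)))   # m candidate solutions
    fit = np.array([fitness(x) for x in pop])
    for _ in range(iters):
        for a in range(m):
            b, c, d = rng.choice([j for j in range(m) if j != a],
                                 size=3, replace=False)
            child = np.clip(pop[b] + F*(pop[c] - pop[d]), lo, hi)
            f = fitness(child)
            if f < fit[a]:                         # greedy selection
                pop[a], fit[a] = child, f
    return pop[np.argmin(fit)]
\end{verbatim}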
Given a query-document pair $(\Vec{q},\Vec{d})$ and a trained ranker $\mathcal{F}$ we use DE to find a perturbation vector $\Vec{n}$
that changes the document's rank without significantly modifying it.
In particular, we need to find what terms to perturb in the document and the magnitude of each perturbation. Hence, we represent the solution (perturbation vector) as a sequence of tuples $(i, v)$ where $i \in [0,|\Vec{d}|]$ represents an index of the term to perturb and $v$ represents the perturbation value. The length of the solution vector is set to $c$ (the sparsity parameter).
We use the objective function proposed earlier as a fitness function for DE and run the algorithm for a fixed number of iterations.
Based on the choice of the fitness function, we propose three variants of the attack, \emph{A1}\xspace (Equation~\ref{eq:v1}), \emph{A2}\xspace (Equation~\ref{eq:v2}) and \emph{A3}\xspace (Equation~\ref{eq:v3}).
The final solution vector is used to construct the perturbed document $d'$. A similar approach has been used to perturb images in order to fool a classifier~\cite{onepixel}.
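One detail left implicit above is how a real-valued solution maps back to tokens. The sketch below reflects our assumption that each perturbation $v$ acts on the embedding of term $i$ and is projected back to the vocabulary by nearest-neighbour search; the helper names are illustrative only.
\begin{verbatim}
import numpy as np

def decode_solution(solution, doc_tokens, emb, stoi, itos):
    """Map a DE solution [(i, v), ...] back to tokens: shift the embedding
    of term i by the perturbation v, then snap to the nearest vocabulary
    token. `emb` is a (|vocab|, dim) matrix of unit-normalised embeddings;
    `stoi`/`itos` convert between tokens and row indices."""
    tokens = list(doc_tokens)
    for i, v in solution:
        i = int(round(i)) % len(tokens)
        target = emb[stoi[tokens[i]]] + np.asarray(v)  # perturbed embedding
        target = target / np.linalg.norm(target)
        tokens[i] = itos[int(np.argmax(emb @ target))] # max cosine sim.
    return tokens
\end{verbatim}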
\section{Conclusion}
\label{sec:conclusion}
Adversarial attacks on classification models, both in text and vision, have helped reduce the generalization error of such models. However, there is limited literature on adversarial attacks on information retrieval models. In this work, we explored the effectiveness and quality of three simple methods of attacking black-box deep learning ranking models. The attackers were designed to change document text such that an information retrieval model is fooled into lowering the document position. We found that the perturbed text generated by these attackers by changing a few tokens is semantically similar to the original text and \emph{can fool} the ranker into pushing a relevant document below $\sim$2-3 non-relevant documents. Our findings can be further used to train rankers with adversarial examples to reduce their generalization error.
\section{Results and Discussion}
We focus on three research questions to investigate the impact of adversarial attacks on ranking models.\newline
\textbf{RQ1:} \emph{What is the attacker's success in changing a document's position without significantly changing the document?}
To answer this question, we measure an attacker's success (\sk)
when we restrict the number of perturbed words to \textit{one} in each document. The results for $k=1$ and $k=5$ are given in Table~\ref{table:success}. Notice that \emph{A0}\xspace can change the rank of almost half of the relevant documents but not beyond five positions.
Both \emph{A1}\xspace and \emph{A3}\xspace can change the rank of all the relevant documents by just changing \emph{one term} in the document.
In the case of MSMarco, they can lower the rank of more than 81\% of the relevant documents by more than four positions, except for the DRMM ranker.
Overall, we found that \emph{A1}\xspace has the highest success rate among all the attackers, as it has the greatest flexibility to change the terms.
We report the mean and variance of \nrc on the test sets in Table~\ref{table:msmarco_nrc} and~\ref{table:wikiqa_nrc} for the MSMarco and WikiQA datasets respectively.
Increasing the number of perturbed words increases \nrc across all the attackers. On the MSMarco dataset, KNRM is the most vulnerable ranker across all the attackers. On average, \emph{A1}\xspace can push a relevant document beyond 11 non-relevant documents by changing only one token when evaluated against the KNRM ranker. On the other hand, the performance of the attackers across all the models is similar on the WikiQA dataset.
Overall, all the attackers perform better on the MSMarco dataset compared to WikiQA. We argue that the longer passages in MSMarco provide more room for perturbations, as opposed to the shorter answers in WikiQA. \newline
\textbf{RQ2:} \emph{What is the similarity between perturbed and original text when we restrict the number of perturbed words to $p_l=\{1,3,5\}$ in each document?}
Adversarial perturbations may cause the meaning of document text to change. Thus, it is important to control this change such that perturbed words are semantically similar to the original text. We measure the semantic similarity between embeddings of original text and perturbed text using cosine distance, as done in previous work \cite{wang2019survey}. Figure \ref{fig:dc} shows the similarity between perturbed and original text when 1, 3 or 5 tokens are changed in MSMarco passage text for three different models.
We only focus on similarity for the \emph{A1}\xspace and \emph{A3}\xspace attackers due to space limitations.
Overall, the cosine similarity between perturbed and original text across attackers is relatively high ($\sim$0.97), even though the document may be pushed below $\sim$20 non-relevant documents with only \emph{single} token perturbations. As expected, perturbing more tokens leads to lower similarity. Cosine similarity drops to $\sim$0.90 across models for 5 word perturbations. Both \emph{A1}\xspace and \emph{A3}\xspace attackers can push documents to relatively lower positions with 1-3 token perturbations; however, we found that \emph{A1}\xspace tends to perturb query tokens in passages to change the ranker output. We found that \emph{A1}\xspace changed query tokens in 65\% (DRMM), 11\% (Duet) and 2\% (KNRM) of documents in MSMarco and 50\%, 17\% and 3\% of documents in WikiQA respectively. However, \emph{A3}\xspace achieves similar performance in terms of similarity and rank change \emph{without} changing any query tokens. \newline
\captionsetup[figure*]{font=tiny}
\begin{figure}
\centering
\begin{subfigure}[b]{0.15\textwidth}
\includegraphics[width=\textwidth]{fig/msmarco_drmm_v1.png}
\caption{DRMM (\emph{A1}\xspace)}
\label{fig:drmm_v1}
\end{subfigure}
\begin{subfigure}[b]{0.15\textwidth}
\includegraphics[width=\textwidth]{fig/msmarco_duet_v1.png}
\caption{DUET (\emph{A1}\xspace)}
\label{fig:duet_v1}
\end{subfigure}
\begin{subfigure}[b]{0.15\textwidth}
\includegraphics[width=\textwidth]{fig/msmarco_knrm_v1.png}
\caption{KNRM (\emph{A1}\xspace)}
\label{fig:knrm_v1}
\end{subfigure}
\begin{subfigure}[b]{0.15\textwidth}
\includegraphics[width=\textwidth]{fig/msmarco_drmm_v3.png}
\caption{DRMM (\emph{A3}\xspace)}
\label{fig:drmm_v3}
\end{subfigure}
\begin{subfigure}[b]{0.15\textwidth}
\includegraphics[width=\textwidth]{fig/msmarco_duet_v3.png}
\caption{DUET (\emph{A3}\xspace)}
\label{fig:duet_v3}
\end{subfigure}
\begin{subfigure}[b]{0.15\textwidth}
\includegraphics[width=\textwidth]{fig/msmarco_knrm_v3.png}
\caption{KNRM (\emph{A3}\xspace)}
\label{fig:knrm_v3}
\end{subfigure}
\caption{\small{\nrc vs. cosine sim on MSMarco where {\color{CornflowerBlue} blue=1 word}, {\color{Peach} orange=3 words} and {\color{ForestGreen} green=5 words} perturbations respectively.}}
\label{fig:dc}
\end{figure}
\begin{table}[t]
\centering
\begin{tabular}{|c|c |c|c |c|c | }\hline
\multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{Atk}} & \multicolumn{2}{|c|}{WikiQA} & \multicolumn{2}{|c|}{MSMarco} \\ \cline{3-6}
& & 1 token & 3 tokens & 1 token & 3 tokens \\ \hline
\multirow{3}{*}{DRMM} &\emph{A1}\xspace &0.31$\pm$0.45 & 0.43$\pm$0.47& 0.05$\pm$0.14 & 0.11$\pm$0.17 \\
&\emph{A2}\xspace&0.005$\pm$0.06 & 0.005$\pm$0.37& 0.001$\pm$0.05 & 0.01$\pm$0.11\\
&\emph{A3}\xspace&0.216$\pm$0.40 & 0.30$\pm$0.44 & 0.02$\pm$0.11 & 0.06$\pm$0.15\\ \hline
\multirow{3}{*}{Duet} &\emph{A1}\xspace& 0.24$\pm$0.41 & 0.37$\pm$0.46 & 0.59$\pm$0.40 & 0.68$\pm$0.35\\
&\emph{A2}\xspace& 0.003$\pm$0.07 & -0.04$\pm$0.23 & 0.02$\pm$0.30 & 0.03$\pm$0.46 \\
&\emph{A3}\xspace& 0.226$\pm$0.40 & 0.35$\pm$0.45& 0.56$\pm$0.40 & 0.61$\pm$0.39\\ \hline
\multirow{3}{*}{KNRM} &\emph{A1}\xspace&0.22$\pm$0.41 & 0.30$\pm$0.44& 0.53$\pm$0.37 & 0.59$\pm$0.34\\
&\emph{A2}\xspace&0.03$\pm$0.16 &0.09$\pm$0.34&0.30$\pm$0.42&0.38$\pm$0.43 \\
&\emph{A3}\xspace&0.218$\pm$0.40 & 0.31$\pm$0.44 & 0.52$\pm$0.38 & 0.59$\pm$0.34\\ \hline
\end{tabular}
\caption{\% drop in $P@5$ due to perturbed text}
\label{table:prec_drop}
\end{table}
\textbf{RQ3:} \emph{What is the ranker performance after adversarial attacks?} We evaluate model robustness against an adversarial attacker with P@5.
We compute the perturbed $P@5$ by replacing the original document's ($d_i$) ranker score $s_i$ with the new score $s'_i$ given by the ranker on the perturbed input $d'_i$.\footnote{Note that the ranker output for all other documents in the list $d_{j\neq i}$ remains the same.} So, for every document the attacker perturbs, its new score replaces the old ranker score and $P@5$ is recomputed. We report the \% drop in $P@5$ for both datasets in Table \ref{table:prec_drop}.
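Concretely, the recomputation for a single perturbed document can be sketched as follows (a minimal illustration with names of our choosing).
\begin{verbatim}
import numpy as np

def perturbed_precision_at_5(scores, labels, i, new_score):
    """P@5 after replacing document i's ranker score with its perturbed
    score; all other documents keep their original scores."""
    s = np.asarray(scores, dtype=float).copy()
    s[i] = new_score
    top5 = np.argsort(-s)[:5]
    return float(np.mean(np.asarray(labels)[top5] > 0))
\end{verbatim}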
We find that the \emph{A1}\xspace and \emph{A3}\xspace attackers are able to reduce $P@5$ significantly, in some cases by as much as $\sim$20\% in WikiQA and $\sim$50\% in MSMarco, by changing one token in the text. \emph{A3}\xspace's drop in $P@5$ is lower than \emph{A1}\xspace's across all datasets, since it is penalized for changing query terms.
We also found the \% drop in $P@5$ to be a function of ranker performance. Models with higher precision were harder to beat by the attacker, i.e., were more robust to token changes in document text. For example, in WikiQA the original $P@5$ (mean, std) for DRMM, Duet and KNRM was 0.204$\pm$0.17, 0.207$\pm$0.18 and 0.22$\pm$0.20 respectively. It is interesting to note that all attackers are able to reduce DRMM $P@5$ by the highest margin with single token perturbations. However, in MSMarco, DRMM $P@5$ is the highest among all models, and thus the hardest to fool for all attackers. We observe very little drop in precision for the \emph{A2}\xspace attacker across models in both datasets, which indicates that very strict attackers may not be able to find suitable candidates to perturb documents. One interesting finding was that in some cases the attacker replaced document text with \emph{query tokens} to lower the document score, as shown in Table \ref{tab:examples}.
\section{Experimental Setup}
\label{sec:experiments}
In this section, we provide details about the datasets, model training, perturbation and evaluation metrics. \newline
\textbf{Data:} We use the WikiQA~\cite{wikiqa} and MSMarco passage ranking~\cite{msmarco} datasets for training the ranking models and evaluating the attacks on them.
For WikiQA, we use 2K queries for training and 240 queries for evaluation.
For MSMarco, we use 20K randomly sampled queries for training and 220 queries for evaluation.
For each test query, we randomly sample 5 positive documents and 45 negative documents for evaluation. \newline
\textbf{Model training:} We use the following ranking models:
DRMM~\cite{drmm}, DUET~\cite{duet} and KNRM~\cite{knrm}.
We use 300 dimensional Glove embeddings~\cite{glove} to represent each token. \newline
\textbf{Perturbation:}
In our experiments, we perturb \emph{only} relevant documents\footnote{We found that non-relevant documents can be perturbed easily by adding noisy text.} in the top 5 results for each query in the test set.
We perturb documents using all three attackers (\emph{A1}\xspace, \emph{A2}\xspace, and \emph{A3}\xspace) and evaluate ranker performance on the perturbed documents.
The number of iterations and population size are fixed to 100 and 500 respectively. \newline
\textbf{Evaluation:}
We explore several metrics for understanding the effectiveness of perturbations and their impact on ranker performance. The goal of an attacker is to fool the ranker into lowering the position of the document in the list. The success of the attacker is measured as the percentage of \emph{relevant} documents whose position drops by at least $k$ when perturbed by the attacker. Let $\mathcal{R}(d)$ denote the rank of a document $d$ and $l_d$ denote its relevance. Then, the attacker's success at $k$ (\sk) is defined as follows:
\begin{equation*}
\sk = \frac{\sum_{d; l_d>0} \mathbbm{1}_{(\mathcal{R}(d') - \mathcal{R}(d)) \geq k}}{\sum_{d; l_d>0} \mathbbm{1}}
\end{equation*}
Here, $d'$ is a perturbation of $d$ generated by the attacker.
We also measure the number of non-relevant documents crossed (\nrc) by perturbing $d$ to $d'$ as defined below:
\begin{equation*}
\nrc = \sum_{\hat{d}; l_{\hat{d}}=0} \mathbbm{1}_{\mathcal{R}(d) < \mathcal{R}(\hat{d}) < \mathcal{R}(d')}
\end{equation*}
We compare the performance of the proposed attacks against a random baseline \emph{A0}\xspace where the adversary perturbs a word at random from the text and replaces it with the most similar word (minimum cosine distance in the embedding space) from the corpus.
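Both metrics are straightforward to compute from the original and perturbed rank lists; a short sketch (ours, with ranks taken as 1-based positions in the list) follows.
\begin{verbatim}
import numpy as np

def success_at_k(orig_ranks, pert_ranks, labels, k):
    """S@k: fraction of relevant documents whose rank drops by >= k."""
    rel = np.asarray(labels) > 0
    drop = np.asarray(pert_ranks) - np.asarray(orig_ranks)
    return float(np.mean(drop[rel] >= k))

def nrc(orig_rank, pert_rank, all_ranks, labels):
    """NRC: non-relevant documents lying strictly between the original
    and the perturbed rank of the attacked document."""
    r = np.asarray(all_ranks)
    nonrel = np.asarray(labels) == 0
    return int(np.sum(nonrel & (r > orig_rank) & (r < pert_rank)))
\end{verbatim}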
\begin{table}[t]
\centering
\begin{tabular}{|c|c |c|c |c| }\hline
\textbf{Model} & \textbf{Atk} & 1 token & 3 tokens & 5 tokens \\ \hline
\multirowcell{3}{DRMM\\} &\emph{A1}\xspace& 0.944$\pm$2.776 & 3.867$\pm$6.334 & 6.814$\pm$7.858\\
&\emph{A2}\xspace& 0.037$\pm$0.623 & 0.304$\pm$1.752 & 0.647$\pm$3.129\\
&\emph{A3}\xspace&0.404$\pm$1.730 & 1.280$\pm$3.388 & 2.084$\pm$4.496\\ \hline
\multirowcell{3}{Duet\\ } &\emph{A1}\xspace&8.110$\pm$4.252 & 13.000$\pm$4.292 & 14.637$\pm$4.043 \\
&\emph{A2}\xspace& 0.429$\pm$0.896 & 0.890$\pm$1.479 & 1.670$\pm$3.266 \\
&\emph{A3}\xspace& 7.187$\pm$4.284 & 10.440$\pm$4.806 & 12.055$\pm$4.771 \\ \hline
\multirowcell{3}{KNRM\\ } &\emph{A1}\xspace& 11.543$\pm$6.737 & 18.560$\pm$5.659 & 21.360$\pm$6.459 \\
&\emph{A2}\xspace &6.234$\pm$8.125 & 9.463$\pm$9.408 & 11.211$\pm$10.086\\
&\emph{A3}\xspace &11.114$\pm$6.611 & 17.783$\pm$5.914 & 20.291$\pm$6.500\\ \hline
\end{tabular}
\caption{\nrc for the MSMarco dataset}
\label{table:msmarco_nrc}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{|c|c |c|c |c| }\hline
\textbf{Model} & \textbf{Atk} & 1 token & 3 tokens & 5 tokens \\ \hline
\multirowcell{3}{DRMM\\} &\emph{A1}\xspace& 2.380$\pm$3.252 & 4.009$\pm$4.499& 4.403$\pm$4.753 \\
&\emph{A2}\xspace& 0.014$\pm$0.118 &0.464$\pm$1.629 & 1.098$\pm$2.943 \\
&\emph{A3}\xspace& 1.403$\pm$2.127 & 2.286$\pm$3.160 & 2.455$\pm$3.300 \\ \hline
\multirowcell{3}{Duet\\ } &\emph{A1}\xspace& 2.175$\pm$2.454 & 3.527$\pm$3.585 &3.773$\pm$3.857 \\
&\emph{A2}\xspace& 0.040$\pm$0.221 &0.175$\pm$0.506 & 0.366$\pm$0.899 \\
&\emph{A3}\xspace& 2.005$\pm$2.270 & 3.175$\pm$3.277 & 3.221$\pm$3.280 \\ \hline
\multirowcell{3}{KNRM\\ } &\emph{A1}\xspace& 1.632$\pm$2.064 &2.580$\pm$2.873 & 2.849$\pm$3.155 \\
&\emph{A2}\xspace &0.188$\pm$0.577 & 0.853$\pm$1.647 & 1.160$\pm$2.014\\
&\emph{A3}\xspace &1.575$\pm$1.990 & 2.556$\pm$2.868 & 2.830$\pm$3.084\\ \hline
\end{tabular}
\caption{\nrc for the WikiQA dataset}
\label{table:wikiqa_nrc}
\end{table}
\section{Introduction}
\label{sec:intro}
Deep learning ranking models increasingly represent the state-of-the art in ranking documents with respect to a user query. Different neural architectures \cite{drmm,duet,knrm} are used to model interaction between query and document text to compute relevance. These models rely heavily on the tokens and their embeddings for non-linear transformations for similarity computation.
Thus, they capture semantic similarity between query and document tokens circumventing the exact-match limitation of previous models such as BM25~\cite{bm25}.
The complexity of deep learning models arises from their high dimensional decision boundaries. The hyperplanes are known to be very sensitive to changes in feature values. In computer vision literature \cite{advRank2020} and text based classification tasks \cite{wang2019survey}, researchers have shown that deep learning models can change their predictions with slight variations in inputs. There is a large body of work that investigates adversarial attacks \cite{wang2019survey} on deep networks to quantify their robustness against noise. There is, however, lack of such experiments and evaluation in information retrieval literature. Mitra \emph{et.al.}~\cite{mitra2016dualAdv} demonstrated how word replacement in documents can lead to difference in word representations which leaves room for more investigation of the robustness of deep ranking models to adversarial attacks.
Usually, adversarial attackers perturb images to reverse a model's decision. Noise addition to images is performed at pixel level such that a human eye cannot distinguish true image from noisy image. In text classification tasks, characters or words \cite{textFool2017,papernot2016crafting} are modified to change the classifier output.
The objective of the adversary is to reverse model's decision with minimum amount of noise injection. In information retrieval, however, it remains an open problem as to how does one design an adversary for a ranking model. We posit that the objective of an adversary in information retrieval would be to change the position of a document with respect to the query. We follow a similar approach used in text classification tasks \cite{textFool2017,papernot2016crafting} and perturb tokens in documents, replacing them with semantically similar tokens such that the rank of the document changes.
In this work, we demonstrate by means of false negative adversarial attacks that three state-of-the-art retrieval models can change a document's position with slight changes in its text. We evaluate the robustness of these ranking models on publicly available datasets by injecting varied length of noisy text in documents. We propose a black-box adversarial attack model that takes a (query, document) pair as input and generates a noisy document such that its pushed lower in the ranked list. Our system does not need access to the ranker's internal architecture, only its output given the (query, document) tuple as input.
Our findings suggest that ranking models can be sensitive to even single token perturbations. Table \ref{tab:examples} shows some examples generated by our model of one word perturbations along with changes in document position when scored using state-of-the-art ranking model.
Overall, we found that simple attackers could perturb very few words and still generate semantically similar text that can fool a model into ranking a relevant document lower than non-relevant documents.
\begin{table}[]
\centering
\tiny
\begin{tabular}{|p{1.5cm}|p{4.2cm}|l|c|}
\hline
\textbf{Query} & \textbf{Relevant Document} & \textbf{Replaced by} & \textbf{$\downarrow$Rank} \\ \hline
what can be powered by wind
& Wind power is the conversion of wind energy into a useful form of energy, such as using wind turbines to make electrical power , windmills for mechanical power, wind pumps for \hone{water} pumping or drainage , or sails to propel ships.
& \htwo{wind} & 12\\ \hline
what causes heart disease
& The causes of cardiovascular disease are \hone{diverse} but atherosclerosis and/or hypertension are the most common.
& \htwo{disorder} & 8 \\ \hline
how many books are included in the protestant Bible?
& Christian Bibles range from the sixty-six books of the Protestant \hone{canon} to the eighty-one books of the Ethiopian Orthodox Tewahedo Church canon.
& \htwo{christian} & 5 \\ \hline
how many numbers on a credit card
& An ISO/IEC 7812 card number is typically 16 digits in \hone{length}, and consists of:
& \htwo{identification} & 4 \\ \hline
how many amendments in us
& The Constitution has been amended seventeen additional times (for a total of \hone{twenty-seven} amendments).
& \htwo{allowing} & 4 \\ \hline
what is a monarch to a monarchy
& A monarchy is a form of government in which sovereignty is actually or nominally \hone{embodied} in a single individual (the monarch ).
& \htwo{throne} & 4 \\ \hline
\end{tabular}
\caption{Examples of our one word adversarial perturbation and change in document position when ranked by DRMM.}
\label{tab:examples}
\end{table}
\section{Related Work}
Adversarial attack on deep neural networks has been extensively explored in vision \cite{onepixel} and text classification \cite{wang2019survey,papernot2016crafting}. Existing work has proposed adversarial attacks with either white-box or black-box access to the model. In this work, our focus is on black-box attacks since access to ranking models is not always available. Existing work is also limited in that the focus is on changing classifier decisions and not document positions in ranked list with respect to a query. In this work we explore utility of (query,doc) pair in adversarial attacks on ranking models.
\begin{comment}
Main idea: Explain ranker through adversarial attacks
\begin{itemize}
\item Small perturbation in the document significantly change its rank
\item Can we say something about the ranker's robustness?
\item Analyze the perturbed noise (words that need to be changed) to see if they can be used to explain the ranking.
\item Can we take into account other documents? This way we can be different from attention based approaches that only focus on a single document.
\item How about query perturbation?
\end{itemize}
\end{comment}
\section{Our Approach}
\label{sec:approach}
Our goal is to minimally perturb a document such that the rank of the document changes.
In particular, we target top-ranked documents and attempt to lower their rank through noise
injection. Assuming relevant documents are ranked higher, the objective is to lower the position of a document with minimal change in its text.
\subsection{Problem Statement}
Let $\Vec{q}$, $\Vec{d}$ and $\mathcal{F}$ represent a query, a document and a ranking model, respectively.
Given a query-document pair $(\Vec{q},\Vec{d})$, the ranker outputs a score $s=\mathcal{F}(\Vec{q},\Vec{d})$ indicating
the relevance of the document $\Vec{d}$ to the query $\Vec{q}$, where higher score means higher relevance.
Given a query and a list of documents, the ranker computes scores for every document w.r.t the
given query and rank documents based on a descending order of the scores.
Given $\Vec{q}$ and $\Vec{d}$, our goal is to find a perturbed document $\Vec{d'}$ such that
$\mathcal{F}(\Vec{q},\Vec{d'}) << \mathcal{F}(\Vec{q},\Vec{d})$ so that it ranks lower than
its original rank. At the same time we want to minimize the perturbation in the document.
We assume that both query and document are represented as vectors which could be either sequence of
terms or other suitable representations such as word embedding. Let $\Vec{q} = (q_1, q_2, ..., q_p)$
be a p-dimensional query vector and $\Vec{d} = (d_1, d_2, ..., d_q)$ be a q-dimensional document
vector. We construct a perturbed document $\Vec{d'}=(\Vec{d} + \Vec{n})$ by adding a q-dimensional noise vector
$\Vec{n} = (n_1, n_2, ..., n_q)$.
Our goal is to find a noise vector that reduces a document's score
without changing many terms in the document.
We formulate this problem as an optimization problem as follows:
\begin{equation}
\label{eq:v1}
\begin{aligned}
& \underset{\Vec{n}}{\text{minimize}}
& & \mathcal{F}(\Vec{q}, \Vec{d+n}) \\
& \text{subject to}
& & ||\Vec{n}||_0 \leq c
\end{aligned}
\end{equation}
Here, $c$ is a sparsity parameter that controls the number of perturbed terms.
Although, the above formulation provides a sparse solution, we do not have
any control over the magnitude of the noise vector. In other words,
even though we change only few terms in a document we may end up
changing them significantly. To address this problem, we modify the
objective function as follows:
\begin{equation}
\label{eq:v2}
\begin{aligned}
& \underset{\Vec{n}}{\text{minimize}}
& & \mathcal{F}(\Vec{q}, \Vec{d+n}) + \sum_{i=1}^{q}{\mathcal{D}}(d_i, (d_i+n_i)) \\
& \text{subject to}
& & ||\Vec{n}||_0 \leq c
\end{aligned}
\end{equation}
Here, $\mathcal{D}$ can be any suitable distance function (e.g., cosine distance)
that computes distance between two terms.
It ensures that the modified terms are close to the original terms.
We found this formulation too strict.
In particular, it penalizes perturbations of query terms as well non-query
terms in a document. Hence, we modify the objective function to penalize
only query terms since we want to ensure that the perturbed document
stays relevant to the query. The modified objective function is as follows:
\begin{equation}
\label{eq:v3}
\begin{aligned}
& \underset{\Vec{n}}{\text{minimize}}
& & \mathcal{F}(\Vec{q}, \Vec{d+n}) + \sum_{i=1}^{q} \mathbbm{1}_{d_i \in \Vec{q}} [{\mathcal{D}}(d_i, (d_i+n_i))] \\
& \text{subject to}
& & ||\Vec{n}||_0 \leq c
\end{aligned}
\end{equation}
Here, $\mathbbm{1}_{x}$ is an indicator function with is $1$ when $x$ is true and $0$ otherwise.
The first term in the objective function ensures that the ranker gives low score to the perturbed document
while the second term incurs penalty of changing query terms in the document.
The constrains limits the number of changed terms in a document to $c$.
\subsection{Method}
A popular approach to solving the above problem relies on computing the gradient of the model's output (score) with respect to the input and using this gradient to find a noise vector that reduces the model's score on the noisy input~\cite{fgsm}.
Such gradient-based methods do not work well with textual data due to non-differentiable components
such as an embedding layer.
A common workaround is to find the perturbation in the embedding space and project the perturbation vector back into the input space
through techniques like nearest neighbour search. In general, gradient-based methods are restricted to differentiable models and require information about the model's architecture.
To address the shortcomings of gradient-based methods, we propose a different approach that uses Differential Evolution (DE)~\cite{de}, a stochastic evolutionary algorithm. It can be applied to a variety of optimization problems, including non-differentiable and multimodal objective functions. DE is a population-based optimization method which works as follows:
Let's say we want to optimize over $l$ parameters. Given the population size $m$, it randomly initializes $m$ candidate solutions $\Vec{X_t} = (\Vec{x}_1, \Vec{x}_2, ..., \Vec{x}_m)$, each of length $l$. From these parent solutions it generates candidate child solutions using the following mutation criterion:
\begin{equation}
\Vec{x}_{a,t+1} = \Vec{x}_{b,t} + F(\Vec{x}_{c,t} - \Vec{x}_{d,t})
\end{equation}
Here, $a, b, c$ and $d$ are randomly chosen distinct indices in the population, and $F$ is a mutation factor in $[0,2]$. Next, these children are compared against their parents using a fitness function, and the $m$ candidates with the best fitness (i.e., the lowest objective value, since we are minimizing) are retained. This process is repeated until it either converges or reaches the maximum number of iterations.
Given a query-document pair $(\Vec{q},\Vec{d})$ and a trained ranker $\mathcal{F}$ we use DE to find a perturbation vector $\Vec{n}$
that changes the document's rank without significantly modifying it.
In particular, we need to find what terms to perturb in the document and the magnitude of each perturbation. Hence, we represent the solution (perturbation vector) as a sequence of tuples $(i, v)$ where $i \in [0,|\Vec{d}|]$ represents an index of the term to perturb and $v$ represents the perturbation value. The length of the solution vector is set to $c$ (the sparsity parameter).
We use the objective function proposed earlier as a fitness function for DE and run the algorithm for a fixed number of iterations.
Based on the choice of the fitness function, we propose three variants of the attack, \emph{A1}\xspace (Equation~\ref{eq:v1}), \emph{A2}\xspace (Equation~\ref{eq:v2}) and \emph{A3}\xspace (Equation~\ref{eq:v3}).
The final solution vector is used to construct the perturbed document $d'$. A similar approach has been used to perturb images in order to fool a classifier~\cite{onepixel}.
\section{Conclusion}
\label{sec:conclusion}
Adversarial attacks on classification models, both in text and vision, have helped reduce the generalization error of such models. However, there is limited literature on adversarial attacks on information retrieval models. In this work, we explored the effectiveness and quality of three simple methods of attacking black-box deep learning models. The attackers were designed to change document text such that an information retrieval model is fooled into lowering the document position. We found that the perturbed text generated by these attackers by changing a few tokens is semantically similar to the original text and \emph{can fool} the ranker into pushing a relevant document below $\sim$2-3 non-relevant documents. Our findings can further be used to train rankers with adversarial examples to reduce their generalization error.
\section{Results and Discussion}
We focus on three research questions to investigate the impact of adversarial attacks on ranking models.\newline
\textbf{RQ1:} \emph{What is the attacker's success in changing a document's position without significantly changing the document?}
To answer this question, we measure an attacker's success (\sk)
when we restrict the number of perturbed words to \textit{one} in each document. The results for $k=1$ and $k=5$ are given in Table~\ref{table:success}. Notice that \emph{A0}\xspace can change the rank of almost half of the relevant documents but not beyond five positions.
Both \emph{A1}\xspace and \emph{A3}\xspace can change the rank of all the relevant documents by just changing \emph{one term} in the document.
In the case of MSMarco, they can lower the rank of $>$81\% of relevant documents by more than four positions, except for the DRMM ranker.
Overall, we found that \emph{A1}\xspace has the highest success rate among all the attackers as it has greater flexibility to change the terms.
We report mean and variance of \nrc on the test sets in Table~\ref{table:msmarco_nrc} and~\ref{table:wikiqa_nrc} for MSMarco and WikiQA datasets respectively.
Increasing the number of perturbed words increases \nrc across all the attackers. On MSMarco dataset, KNRM is the most vulnerable ranker across all the attackers. On average \emph{A1}\xspace can push a relevant document beyond 11 non-relevant documents by changing only one token when evaluated against KNRM ranker. On the other hand, the performance of attackers across all the models is similar on the WikiQA dataset.
Overall, all the attackers perform better on the MSMarco dataset compared to WikiQA. We argue that the larger length of passages in MSMarco provides more room for perturbations as opposed to the shorter length of answers in WikiQA. \newline
\textbf{RQ2:} \emph{What is the similarity between perturbed and original text when we restrict the number of perturbed words to $p_l=\{1,3,5\}$ in each document?}
Adversarial perturbations may cause the meaning of document text to change. Thus, it is important to control this change such that perturbed words are semantically similar to the original text. We measure the semantic similarity between embeddings of original text and perturbed text using cosine distance, as done in previous work \cite{wang2019survey}. Figure \ref{fig:dc} shows the similarity between perturbed and original text when 1, 3 or 5 tokens are changed in MSMarco passage text for three different models.
We only focus on similarity for \emph{A1}\xspace and \emph{A3}\xspace attackers due to space limitations.
Overall, the cosine similarity between perturbed and original text across attackers is relatively high ($\sim$0.97), even though the document may be pushed below $\sim$20 non-relevant documents with only \emph{single} token perturbations. As expected, perturbing more tokens leads to lower similarity. Cosine similarity drops to $\sim$0.90 across models for 5-word perturbations. Both \emph{A1}\xspace and \emph{A3}\xspace attackers can push documents to relatively lower positions with 1-3 token perturbations; however, we found that \emph{A1}\xspace tends to perturb query tokens in passages to change the ranker output. We found that \emph{A1}\xspace changed query tokens in 65\% (DRMM), 11\% (Duet) and 2\% (KNRM) of documents in MSMarco and 50\%, 17\% and 3\% of documents in WikiQA respectively. However, \emph{A3}\xspace achieves similar performance in terms of similarity and rank change \emph{without} changing any query tokens. \newline
\captionsetup[figure*]{font=tiny}
\begin{figure}
\centering
\begin{subfigure}[b]{0.15\textwidth}
\includegraphics[width=\textwidth]{fig/msmarco_drmm_v1.png}
\caption{DRMM (\emph{A1}\xspace)}
\label{fig:drmm_v1}
\end{subfigure}
\begin{subfigure}[b]{0.15\textwidth}
\includegraphics[width=\textwidth]{fig/msmarco_duet_v1.png}
\caption{DUET (\emph{A1}\xspace)}
\label{fig:duet_v1}
\end{subfigure}
\begin{subfigure}[b]{0.15\textwidth}
\includegraphics[width=\textwidth]{fig/msmarco_knrm_v1.png}
\caption{KNRM (\emph{A1}\xspace)}
\label{fig:knrm_v1}
\end{subfigure}
\begin{subfigure}[b]{0.15\textwidth}
\includegraphics[width=\textwidth]{fig/msmarco_drmm_v3.png}
\caption{DRMM (\emph{A3}\xspace)}
\label{fig:drmm_v3}
\end{subfigure}
\begin{subfigure}[b]{0.15\textwidth}
\includegraphics[width=\textwidth]{fig/msmarco_duet_v3.png}
\caption{DUET (\emph{A3}\xspace)}
\label{fig:duet_v3}
\end{subfigure}
\begin{subfigure}[b]{0.15\textwidth}
\includegraphics[width=\textwidth]{fig/msmarco_knrm_v3.png}
\caption{KNRM (\emph{A3}\xspace)}
\label{fig:knrm_v3}
\end{subfigure}
\caption{\small{\nrc vs. cosine sim on MSMarco where {\color{CornflowerBlue} blue=1 word}, {\color{Peach} orange=3 words} and {\color{ForestGreen} green=5 words} perturbations respectively.}}
\label{fig:dc}
\end{figure}
\begin{table}[t]
\centering
\begin{tabular}{|c|c |c|c |c|c | }\hline
\multirow{2}{*}{\textbf{Model}} & \multirow{2}{*}{\textbf{Atk}} & \multicolumn{2}{|c|}{WikiQA} & \multicolumn{2}{|c|}{MSMarco} \\ \cline{3-6}
& & 1 token & 3 tokens & 1 token & 3 tokens \\ \hline
\multirow{3}{*}{DRMM} &\emph{A1}\xspace &0.31$\pm$0.45 & 0.43$\pm$0.47& 0.05$\pm$0.14 & 0.11$\pm$0.17 \\
&\emph{A2}\xspace&0.005$\pm$0.06 & 0.005$\pm$0.37& 0.001$\pm$0.05 & 0.01$\pm$0.11\\
&\emph{A3}\xspace&0.216$\pm$0.40 & 0.30$\pm$0.44 & 0.02$\pm$0.11 & 0.06$\pm$0.15\\ \hline
\multirow{3}{*}{Duet} &\emph{A1}\xspace& 0.24$\pm$0.41 & 0.37$\pm$0.46 & 0.59$\pm$0.40 & 0.68$\pm$0.35\\
&\emph{A2}\xspace& 0.003$\pm$0.07 & -0.04$\pm$0.23 & 0.02$\pm$0.30 & 0.03$\pm$0.46 \\
&\emph{A3}\xspace& 0.226$\pm$0.40 & 0.35$\pm$0.45& 0.56$\pm$0.40 & 0.61$\pm$0.39\\ \hline
\multirow{3}{*}{KNRM} &\emph{A1}\xspace&0.22$\pm$0.41 & 0.30$\pm$0.44& 0.53$\pm$0.37 & 0.59$\pm$0.34\\
&\emph{A2}\xspace&0.03$\pm$0.16 &0.09$\pm$0.34&0.30$\pm$0.42&0.38$\pm$0.43 \\
&\emph{A3}\xspace&0.218$\pm$0.40 & 0.31$\pm$0.44 & 0.52$\pm$0.38 & 0.59$\pm$0.34\\ \hline
\end{tabular}
\caption{\% drop in $P@5$ due to perturbed text}
\label{table:prec_drop}
\end{table}
\textbf{RQ3:} \emph{What is the ranker performance after adversarial attacks?} We evaluate model robustness against an adversarial attacker with P@5.
We compute the perturbed $P@5$ by replacing the original document's ($d_i$) ranker score $s_i$ with the new score $s'_i$ given by the ranker on the perturbed input $d'_i$.\footnote{Note that the ranker output for all other documents in the list $d_{j\neq i}$ remains the same.} So, for every document the attacker perturbs, its new score replaces the old ranker score and $P@5$ is recomputed. We report the \% drop in $P@5$ for both datasets in Table \ref{table:prec_drop}.
We find that the \emph{A1}\xspace and \emph{A3}\xspace attackers are able to reduce $P@5$ significantly, in some cases by as much as $\sim$20\% in WikiQA and $\sim$50\% in MSMarco, by changing one token in the text. However, \emph{A3}\xspace's drop in $P@5$ is lower than \emph{A1}\xspace's across all datasets since it is penalized for changing query terms.
We also found the \% drop in $P@5$ to be a function of ranker performance. Models with higher precision were harder to beat by the attacker, i.e., were more robust to token changes in document text. For example, in WikiQA the original $P@5$ (mean, std) for DRMM, Duet and KNRM was 0.204$\pm$0.17, 0.207$\pm$0.18 and 0.22$\pm$0.20 respectively. It is interesting to note that all attackers are able to reduce DRMM $P@5$ by the highest margin with single token perturbations. However, in MSMarco, DRMM $P@5$ is the highest amongst all models and thus the hardest to fool for all attackers. We observe very little drop in precision for the \emph{A2}\xspace attacker across models in both datasets, which indicates that very strict attackers may not be able to find suitable candidates to perturb documents. One interesting finding was that in some cases the attacker replaced document text with \emph{query tokens} to lower the document score, as shown in Table \ref{tab:examples}.
\section{Experimental Setup}
\label{sec:experiments}
In this section, we provide details about the datasets, model training, perturbation and evaluation metrics. \newline
\textbf{Data:} We use WikiQA~\cite{wikiqa} and MSMarco passage ranking dataset~\cite{msmarco} for training ranking models and evaluating attacks on three ranking models.
For WikiQA, we use 2K queries for training and 240 queries for evaluation.
For MSMarco, we use 20K randomly sampled queries for training and 220 queries for evaluation.
For each test query, we randomly sample 5 positive documents and 45 negative documents for evaluation. \newline
\textbf{Model training:} We use the following ranking models:
DRMM~\cite{drmm}, DUET~\cite{duet} and KNRM~\cite{knrm}.
We use 300 dimensional Glove embeddings~\cite{glove} to represent each token. \newline
\textbf{Perturbation:}
In our experiments, we perturb \emph{only} relevant documents\footnote{We found that non-relevant documents can be perturbed easily by adding noisy text.} in top 5 results for each query in the test set.
We perturb documents using all three attackers (\emph{A1}\xspace, \emph{A2}\xspace, and \emph{A3}\xspace) and evaluate ranker performance on the perturbed documents.
The number of iterations and population size are fixed to 100 and 500 respectively. \newline
\textbf{Evaluation:}
We explore several metrics for understanding the effectiveness of perturbations and their impact on ranker performance. The goal of an attacker is to fool the ranker into lowering the position of the document in the list. The success of the attacker is measured as the percentage of \emph{relevant} documents whose position changed by at least $k$ when perturbed by the attacker. Let $\mathcal{R}(d)$ denote the rank of a document $d$ and $l_d$ denote its relevance. Then, the attacker's success at $k$ (\sk) is defined as follows:
\begin{equation*}
\sk = \frac{\sum_{d;\, l_d>0} \mathbbm{1}_{(\mathcal{R}(d') - \mathcal{R}(d)) \geq k}}{\sum_{d;\, l_d>0} 1}
\end{equation*}
Here, $d'$ is a perturbation of $d$ generated by the attacker.
We also measure the number of non-relevant documents crossed (\nrc) by perturbing $d$ to $d'$ as defined below:
\begin{equation*}
\nrc = \sum_{\hat{d}; l_{\hat{d}}=0} \mathbbm{1}_{\mathcal{R}(d) < \mathcal{R}(\hat{d}) < \mathcal{R}(d')}
\end{equation*}
We compare the performance of the proposed attacks against a random baseline \emph{A0}\xspace where the adversary perturbs a word at random from the text and replaces it with the most similar word (minimum cosine distance in the embedding space) from the corpus.
\begin{table}[t]
\centering
\begin{tabular}{|c|c |c|c |c| }\hline
\textbf{Model} & \textbf{Atk} & 1 token & 3 tokens & 5 tokens \\ \hline
\multirowcell{3}{DRMM\\} &\emph{A1}\xspace& 0.944$\pm$2.776 & 3.867$\pm$6.334 & 6.814$\pm$7.858\\
&\emph{A2}\xspace& 0.037$\pm$0.623 & 0.304$\pm$1.752 & 0.647$\pm$3.129\\
&\emph{A3}\xspace&0.404$\pm$1.730 & 1.280$\pm$3.388 & 2.084$\pm$4.496\\ \hline
\multirowcell{3}{Duet\\ } &\emph{A1}\xspace&8.110$\pm$4.252 & 13.000$\pm$4.292 & 14.637$\pm$4.043 \\
&\emph{A2}\xspace& 0.429$\pm$0.896 & 0.890$\pm$1.479 & 1.670$\pm$3.266 \\
&\emph{A3}\xspace& 7.187$\pm$4.284 & 10.440$\pm$4.806 & 12.055$\pm$4.771 \\ \hline
\multirowcell{3}{KNRM\\ } &\emph{A1}\xspace& 11.543$\pm$6.737 & 18.560$\pm$5.659 & 21.360$\pm$6.459 \\
&\emph{A2}\xspace &6.234$\pm$8.125 & 9.463$\pm$9.408 & 11.211$\pm$10.086\\
&\emph{A3}\xspace &11.114$\pm$6.611 & 17.783$\pm$5.914 & 20.291$\pm$6.500\\ \hline
\end{tabular}
\caption{\nrc for MSMarco dataset }
\label{table:msmarco_nrc}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{|c|c |c|c |c| }\hline
\textbf{Model} & \textbf{Atk} & 1 token & 3 tokens & 5 tokens \\ \hline
\multirowcell{3}{DRMM\\} &\emph{A1}\xspace& 2.380$\pm$3.252 & 4.009$\pm$4.499& 4.403$\pm$4.753 \\
&\emph{A2}\xspace& 0.014$\pm$0.118 &0.464$\pm$1.629 & 1.098$\pm$2.943 \\
&\emph{A3}\xspace& 1.403$\pm$2.127 & 2.286$\pm$3.160 & 2.455$\pm$3.300 \\ \hline
\multirowcell{3}{Duet\\ } &\emph{A1}\xspace& 2.175$\pm$2.454 & 3.527$\pm$3.585 &3.773$\pm$3.857 \\
&\emph{A2}\xspace& 0.040$\pm$0.221 &0.175$\pm$0.506 & 0.366$\pm$0.899 \\
&\emph{A3}\xspace& 2.005$\pm$2.270 & 3.175$\pm$3.277 & 3.221$\pm$3.280 \\ \hline
\multirowcell{3}{KNRM\\ } &\emph{A1}\xspace& 1.632$\pm$2.064 &2.580$\pm$2.873 & 2.849$\pm$3.155 \\
&\emph{A2}\xspace &0.188$\pm$0.577 & 0.853$\pm$1.647 & 1.160$\pm$2.014\\
&\emph{A3}\xspace &1.575$\pm$1.990 & 2.556$\pm$2.868 & 2.830$\pm$3.084\\ \hline
\end{tabular}
\caption{\nrc for WikiQA dataset }
\label{table:wikiqa_nrc}
\end{table}
\section*{Acknowledgment}
The research reported herein was partly funded by Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (F.R.I.A.). The authors also want to thank Nicolas Boumal, the founder of the Manopt toolbox, for his precious advice.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{General architecture}
The WaveComBox toolbox aims at implementing a complete communication chain relying on a specific waveform. The general architecture of the toolbox is divided into three main parts: transmitter, channel and receiver. These three parts are depicted in the three block diagrams of Fig.~\ref{fig:transmitter}, \ref{fig:channel} and \ref{fig:receiver}. Each box consists of a basic signal processing block and corresponds to a function implemented in the WaveComBox toolbox. Boxes with solid lines are mandatory, \textit{i.e.}, they constitute the building blocks of the modulation. On the other hand, boxes surrounded by dashed lines are optional. Some conventions regarding notation are introduced in the figures, including the number of information streams $S$ and the numbers of transmit and receive signals, $N_T$ and $N_R$.
\begin{figure*}[t!]
\centering
\resizebox{0.95\textwidth}{!}{%
{\includegraphics[clip, trim=0cm 12cm 5cm 0cm, scale=1]{Fig/Transmitter.pdf}}
}
\caption{Transmitter abstract block diagram.}
\label{fig:transmitter}
\end{figure*}
The \textbf{transmitter} consists of two key operations: generation of the data symbols $\vect{d}$ and modulation of the transmitted signal $\vect{s}$. Additional operations can be included, such as pre-equalization of the channel and/or the insertion of a preamble and pilots in the transmission frame.
\begin{figure*}[t!]
\centering
\resizebox{0.4\textwidth}{!}{%
{\includegraphics[clip, trim=0cm 16cm 23cm 0cm, scale=1]{Fig/Channel.pdf}}
}
\caption{Channel abstract block diagram. Dashed boxes are optional.}
\label{fig:channel}
\end{figure*}
The \textbf{channel} takes as input the transmitted signal $\vect{s}$ and outputs the received signal $\vect{r}$. The channel can be viewed in a general sense as the transfer function between the discrete baseband samples at
the transmitter and the received baseband discrete samples at the receiver. In the ideal case, we have $\vect{r}=\vect{s}$. Otherwise, many impairments may be considered, including additive noise and synchronization errors. Typical wireless effects such as multipath fading or mobility are included. The toolbox should be able to address optical effects as well, such as chromatic dispersion and phase noise.
\begin{figure*}[t!]
\centering
\resizebox{0.95\textwidth}{!}{%
{\includegraphics[clip, trim=0cm 12cm 7cm 0cm, scale=1]{Fig/Receiver.pdf}}
}
\caption{Receiver abstract block diagram. Dashed boxes are optional.}
\label{fig:receiver}
\end{figure*}
The \textbf{receiver} takes as input the received signal $\vect{r}$ and aims at estimating the transmitted symbols $\hat{\vect{d}}$. A central block of the receiver is the demodulator. Other possible blocks implement synchronization, channel estimation and equalization and phase tracking.
In the WaveComBox toolbox, the waveform parameters are summarized in a structure that should be initialized at the beginning of each script. Examples of such parameters are the number of subcarriers, the number of data symbols, the constellation size, the number of transmit and receive antennas... This structure also contains some general parameters of the communication chain such as the signal-to-noise ratio or the velocity of the terminal. Not all parameters must always be assigned specific values; this depends on the scenario. For instance, the velocity is only required if mobility is considered, inducing a time-varying channel.
\section{Conclusion}
\label{section_conclusion}
This paper has introduced WaveComBox, an open-source and user-friendly toolbox implementing the physical layer of communication chains based on advanced waveforms such as FBMC-OQAM. A basic simulation example has been detailed, showing how a complete transmission over a multipath fading channel can be set up and analyzed in a few lines of code.
\section{Basic example: FBMC-OQAM chain under multipath fading}
The transmit signal $s[n]$ is obtained after FBMC-OQAM modulation of the purely real data symbols $d_{m,l}$, \textit{i.e.},
\begin{align*}
s[n]&=\sum_{m=0}^{2M-1}\sum_{l=0}^{2N_s -1} d_{m,l} g_{m,l}[n],
\end{align*}
where $g_{m,l}[n]=\jmath^{m+l}g[n-lM]e^{\jmath\frac{2\pi}{2M}m(n-lM-\frac{L_g-1}{2}) }$. Parameters $2M$, $2N_s$ and $L_g$ refer to the number of subcarriers, of real multicarrier symbols and to the length of the prototype filter $g[n]$. The received signal $r[n]$, after multipath fading and additive noise, is given by
\begin{align*}
r[n]&=(s\otimes h)[n] + w[n],
\end{align*}
where $\otimes$ stands for the convolution operator, $h$ for the channel impulse response and $w[n]$ for the additive noise samples. The samples obtained at the receiver, after FBMC-OQAM demodulation at subcarrier $m_0$ and multicarrier symbol $l_0$, are given by
\begin{align*}
z_{m_0,l_0}&=\sum_n r[n] g^*_{m_0,l_0}[n].
\end{align*}
Finally, the estimated symbols are obtained after single-tap equalization and real conversion as
\begin{align*}
\hat{d}_{m_0,l_0}&=\Re \left(\frac{z_{m_0,l_0}}{H_{m_0}}\right),
\end{align*}
where $H_{m_0}$ is the channel frequency response evaluated at subcarrier $m_0$. In WaveComBox, this example can be simulated with the following simple code:
{\small
\begin{verbatim}
Para = InitializeChainParameters( 'FBMC-OQAM' ); % waveform parameters
d = GenerateData( Para );           % purely real OQAM data symbols
s = Modulator( d, Para );           % FBMC-OQAM modulation
c = GenerateRayleighChannelReal('ITU_VehA', Para); % channel taps
r = Channel_Multipath( s, c );      % multipath fading
r = Channel_AWGN( r, Para );        % additive noise
z = Demodulator( r, Para );         % FBMC-OQAM demodulation
x = Equalizer( z, c, Para );        % single-tap equalization
d_hat = real( x );                  % real conversion
\end{verbatim}}
A common figure of merit is the per-subcarrier mean squared error (MSE), defined as
\begin{align*}
\text{MSE}({m_0})&=\mathbb{E}\left( \left| {d}_{m_0,l_0}-\hat{d}_{m_0,l_0}\right| ^2 \right),
\end{align*}
where the expectation is taken over transmitted symbols and noise samples. The MSE can be plotted by using the following lines of code, leading to the result of Fig.~\ref{fig:MSE}.
{\small
\begin{verbatim}
MSE = MSEComputes( d, d_hat, Para );
figure
plot(10*log10(MSE),'-xb')
xlabel('Subcarrier index')
ylabel('MSE [dB]')
\end{verbatim}}
\begin{figure}[!t]
\centering
\resizebox{0.5\textwidth}{!}{%
\Large
\input{Fig/MSE}
}
\caption{MSE across the subcarriers.}
\label{fig:MSE}
\end{figure}
This example is available in the WaveComBox toolbox under the name \texttt{BasicSISO.m} together with many other examples.
\section{Introduction}
The standards for the future generations of communication systems let us expect revolutionary changes in terms of data rate, latency, energy efficiency, massive connectivity and network reliability \cite{shafi20175g,wu2017overview}. The network should not only provide very high data rates but also be highly flexible to accommodate a considerable amount of devices with very different specifications and corresponding to different applications, such as the Internet of Things, the Tactile Internet or vehicle-to-vehicle communications. These high requirements will only be met by introducing innovative technologies radically different from existing ones.
The OFDM modulation is the most popular multicarrier modulation scheme nowadays. The main advantage of OFDM is its simplicity. Thanks to the combination of the Fast Fourier transform (FFT) and the introduction at the transmitter of redundant symbols known as the cyclic prefix (CP), the OFDM modulation allows for a very simple compensation of the channel impairments at the receiver \cite{li2006orthogonal}. However, the rectangular pulse shaping of the FFT filters induces significant spectral leakage, which results in the need for large guard bands at the edges of the spectrum in order to prevent out-of-band emissions. This bad frequency localization decreases the system flexibility regarding spectrum allocation and makes it less suited for applications such as cognitive radios and the Internet of Things, which may require asynchronous transmission for multiple users.
These limitations may be very detrimental for future generations of communications systems where the modulation format should at the same time be highly flexible and achieve high spectral efficiency. In this sense, a good time-frequency localization is very desirable. This has motivated research for new waveforms that would better fit these requirements. This research has been conducted in parallel in many fields of communications including wireless communications \cite{6923528}, optical fiber communications \cite{horlin2013dual,7932847}, fiber-wireless communications \cite{Rottenberg18} or visible light communications \cite{lin2016experimental}. Actually, the research regarding waveform design has a long history and dates back to the sixties. This area of research has regained a lot of attention recently and a very large number of new waveforms have flourished, each one having its own specificity. A comprehensive survey on multicarrier modulations is proposed in \cite{Sahin2014}. In the following, we briefly describe some of the main ones.
The FBMC-OQAM modulation uses purely real symbols (instead of complex symbols) at twice the symbol rate, resulting in a maximal spectral efficiency and a very good time-frequency localization. Demodulation is made easier by ensuring that the prototype filter satisfies a real orthogonality condition \cite{farhang2011ofdm}. The GFDM modulation \cite{Fettweis2009,Michailow2014} is a non-orthogonal scheme where the transmit signal is divided into multiple blocks. Each block is obtained by cyclic convolution of the complex symbols with a well localized filter. Instead of using a more refined filtering process at the subcarrier level, many schemes have been proposed recently, which perform an improved filtering at the resource block level, \textit{i.e.}, on a group of subcarriers: namely, UFMC \cite{Vakilian2013}, RB-F-OFDM \cite{li2014resource} and filtered-OFDM (F-OFDM) \cite{Abdoli2015}. This type of system has the advantage of keeping a relatively high compatibility with current OFDM systems.
The advantages of these new waveforms generally come at the price of an increased complexity which, we believe, has slowed down their adoption by the community. This increased complexity does not only come from the more complex hardware architecture of the modulator and demodulator. More importantly, the new waveforms are conceptually more complex to apprehend and to implement. They require a deep re-thinking of the whole communication chain, implying the adaptation of general algorithms used for conventional signal processing operations such as channel estimation or equalization. The aim of the WaveComBox toolbox is to lower the entrance barrier of the new waveforms by allowing simple implementation of their physical layer functionalities.
By using an abstract architecture, the toolbox is made user-friendly, easy to apprehend and flexible. It addresses both SISO and MIMO configurations and implements conventional physical layer signal processing operations such as modulation and demodulation, channel estimation, channel equalization, synchronization... The channel models included in the toolbox may represent impairments typical of wireless and optical fiber mediums. The toolbox is open-source, allowing for easily checking and modifying the source code. It is documented with help files and examples. Finally, a forum is available where users can discuss their problems and propose new contributions to the toolbox.
\section{Introduction}
Deep-inelastic scattering (DIS) remains the most important probe of
quantum chromodynamics (QCD) since it was pioneered almost fifty years ago.
Various dedicated experiments have been carried out since then aiming to
study the internal structure of nucleons, e.g., the HERA experiments
colliding electrons or positrons with protons.
The HERA measurements on neutral-current (NC) and charged-current (CC) DIS
provide the backbone constraint in modern determination of the parton distribution
functions (PDFs)~\cite{1506.06042,Gao:2017yyd}.
The heavy quarks, especially the charm quark, play an important role in
describing the structure functions of the proton measured in DIS within
the framework of QCD factorization.
In perturbative QCD calculations the heavy-quark mass dependence of the
DIS coefficient functions is crucial for analyses of the DIS data,
in the inclusive structure function measurements and even more in
the open production of heavy quarks, and ensures a precise determination
of PDFs that is vital for the ongoing programs at the Large
Hadron Collider (LHC).
In the neutral-current case the DIS coefficient functions are
known up to ${\mathcal O}(\alpha_{\scriptscriptstyle S}^2)$ with exact heavy-quark mass
effects~\cite{Laenen:1992zk,Laenen:1992xs}.
For the charged-current case the heavy-quark mass effects have
been calculated to ${\mathcal O}(\alpha_{\scriptscriptstyle S})$ in
Refs.~\cite{Gottschalk:1980rv,Gluck:1997sj,Blumlein:2011zu},
to approximate ${\mathcal O}(\alpha_{\scriptscriptstyle S}^2)$ in Ref.~\cite{Alekhin:2014sya}.
Recently the exact ${\mathcal O}(\alpha_{\scriptscriptstyle S}^2)$ results with full mass
dependence have been completed~\cite{1601.05430}.
The ${\mathcal O}(\alpha_{\scriptscriptstyle S}^3)$
results are also available for structure function
$xF_3$ at large momentum transfer~\cite{Behring:2015roa}.
Among all DIS experiments there exists one specific measurement,
charm-quark production in DIS of a neutrino off a heavy nucleus.
It provides direct access to the strange quark content of the nucleon
which is poorly constrained by the inclusive structure function data.
At lowest order, the relevant partonic process is neutrino interaction with
a strange quark, $\nu s \rightarrow c X$, mediated by the weak charged current.
Experimentally one can require a semi-leptonic decay of the charm quark to muon
that gives the so-called {\it dimuon} final state as measured by CCFR~\cite{Goncharov:2001qe},
NuTeV~\cite{Mason:2006qa}, CHORUS~\cite{KayisTopaksu:2008aa}, and NOMAD~\cite{Samoylov:2013xoa}
collaborations.
In the global determination of PDFs it is these dimuon data that prefer
a strange-quark distribution suppressed relative to the $u$ and $d$
sea quarks~\cite{Dulat:2015mca,Harland-Lang:2014zoa,Ball:2014uwa}.
That agrees with predictions from various models suggesting that the
strange PDFs are suppressed compared to those of the light sea quarks due
to the larger strange-quark mass~\cite{Carvalho:1999he,Vogt:2000sk,Chen:2009xy}.
The strange quark PDFs can play an important role in LHC phenomenology,
contributing, for example, to the total PDF uncertainty
in $W$ or $Z$ boson production~\cite{Nadolsky:2008zw,1203.1290},
and to systematic uncertainties in
precise measurements of the $W$ boson mass and weak-mixing
angle~\cite{Krasny:2010vd,Bozzi:2011ww,Baak:2013fwa}.
On the other hand, thanks to the LHC we can also extract the strange-quark
PDFs independently from collider data only, e.g., via a combined
analysis of HERA DIS data and the $W$, $Z/\gamma^*$ boson production data
from the LHC.
The latter can provide constraints on the strange quark PDFs due to
its high precision and the fact that differential distributions can
separate different sea flavors.
The ATLAS collaboration have reported such a study, ATLAS-epWZ16~\cite{1612.03016},
using the HERA I and II combined data and the updated 7 TeV measurements on
$W$ and $Z/\gamma^*$ differential cross sections.
Interestingly, an unsuppressed strange quark PDF is preferred
with $R_s\equiv (s+\bar s)/(\bar u+\bar d)$ measured to
be $1.13^{+0.08}_{-0.13}$ at $x=0.023$ and $Q^2=1.9\,{\rm GeV}^2$.
A similar conclusion has been reached in an earlier ATLAS study
based on a smaller sample of $W$ and $Z$ data~\cite{1203.4051}.
In comparison the values of $R_s$ from global PDF determination
are $0.55\pm 0.21$, $0.57\pm 0.17$, $0.60\pm 0.13$, and $0.63\pm 0.03$
for CT14~\cite{Dulat:2015mca}, MMHT2014~\cite{Harland-Lang:2014zoa},
NNPDF3.1~\cite{Ball:2017nwa}, and ABMP16~\cite{1701.05838}.
In NNPDF3.1 they also performed an alternative fit using exactly
the same data sets as in the ATLAS-epWZ16 analysis.
They confirmed the pull on the central values of the strange-quark PDFs
by the ATLAS data but arrived at much larger uncertainties.
The discrepancies seen between the determinations of strangeness
from fixed-target dimuon data and the ATLAS data have attracted a
lot of attention recently.
It was suggested in Ref.~\cite{1708.01067} that the ATLAS determination
may be biased by the special parametrization form of PDFs adopted.
While future LHC data might be helpful to clarify whether there are
indeed tensions between those two determinations, it is also
important to investigate various theoretical uncertainties
especially in the case of charm quark production in DIS.
In Ref.~\cite{1601.05430} we have reported a first application of our
${\mathcal O}(\alpha_{\scriptscriptstyle S}^2)$ results on the massive coefficient
functions of charged-current DIS.
We calculated the next-to-next-to-leading-order (NNLO)
QCD corrections to charm-quark production in DIS of a
neutrino from a nucleon.
The calculation is based on a phase-space slicing method and
fully-differential Monte Carlo integration.
We found the NNLO corrections can change the cross sections
by up to 10\% depending on the kinematic region considered.
In this paper we provide further elaboration of the methods
and numerical results of our NNLO calculation.
Moreover, we implement our calculation into a fast interface
based on grid interpolation that allows the calculation to be
repeated within milliseconds for arbitrary PDFs.
It allows a first study of the effects of the NNLO massive
coefficient functions on the determination of the strange-quark
PDFs in the context of Hessian profiling, and can be
used in future global analyses of PDFs.
In the remaining paragraphs we outline the method used in the
calculation, present detailed numerical results on the QCD corrections
to cross sections of charm-quark production in charged-current DIS,
demonstrate the accuracy of the grid interpolation,
study the agreements of data and theory with various PDFs, and
finally study effects of the NNLO corrections on extraction of the
strange-quark PDFs.
\section{NNLO calculation}\label{sec:cal}
We have presented briefly the framework of our NNLO calculation
together with selected numerical results in Ref.~\cite{1601.05430}.
Here we give more details on the theoretical ingredients of the
calculation as well as more numerical results focusing on kinematic region
of the fixed-target dimuon measurements.
We also discuss the applicable kinematic range of our calculation
utilizing fixed-flavor number scheme for heavy quarks and the
possible improvement by extending to a variable-flavor number scheme.
\subsection{Theoretical framework}
\label{sec:qcd-corr-single}
The perturbative calculation utilizes a generalization of the phase-space
slicing method to NNLO as motivated by the $q_T$ subtraction method proposed
in~\cite{Catani:2007vq}.
There have been quite a few recent applications of similar methods to
decay processes~\cite{1210.2808},
scattering at lepton colliders~\cite{1408.5150,1410.3165},
and hadron colliders~\cite{1504.02131,1505.04794,1606.08463,1607.06382,1708.09405,Li:2017lbf}.
The key ingredient of above methods is the use of soft-collinear effective
theory~(SCET) and heavy-quark effective theory~(HQET)~\cite{hep-ph/0005275,hep-ph/0011336,hep-ph/0109045,hep-ph/0202088}
to systematically factorize the cross section and derive its perturbative expansion in
fully unresolved region of QCD radiations as was proposed in Ref.~\cite{1210.2808}.
Note that here we adopt HQET for the purpose of extracting the soft
singularities in the perturbative expansion, a procedure that does not
depend on the actual mass of the charm quark.
Other approaches on handling singularities in the fully unresolved region
include sector-improved FKS subtraction method~\cite{1005.0274,1111.7041},
sector decomposition method~\cite{hep-ph/0402265,hep-ph/0311311}, antenna subtraction
method~\cite{hep-ph/0505111,1301.4693}, colorful subtraction method~\cite{1501.07226}, and Projection-to-Born
method~\cite{1506.02660}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.3\textwidth]{plot/FeynDiag1.pdf}
\caption{LO diagram for charm quark production through weak charged current
in DIS. The thick solid line denotes the charm quark.
\label{fig:5}}
\end{figure}
Charm quark production at leading order (LO) through weak charged current in DIS can be
represented by the diagram in Fig.~\ref{fig:5}.
There are also Cabibbo suppressed contributions from $d$ quark initial
state.
The production of charm anti-quark is similar.
In our phase-space slicing method we first define a resolution variable
which can isolate the unresolved phase space.
As was discussed in Ref.~\cite{1601.05430}, the appropriate resolution
variable in this case is a fully inclusive version of beam
thrust~\cite{Stewart:2009yx} or N-jettiness~\cite{1004.2489},
\begin{align}
\label{eq:16}
\tau = \frac{2 \, p_X \!\cdot\! p_n }{ m^2_c - q^2} ,\qquad \text { with }
p_n =
\Big(\bar{n} \!\cdot\! (p_c - q)\Big) \frac{ n^\mu }{2} \,,
\end{align}
which differs from the standard beam thrust or N-jettiness in that no
partition of the phase space of final-state radiation is imposed, as there is
only one collinear direction in the problem.
In Eq.~\eqref{eq:16}, $p_X$ is the momentum of the total QCD radiation in the
final state, $p_c$ is the momentum of the charm quark, and $q$ is the momentum
transfer carried by the virtual $W$ boson. At Born level there is no radiation, so $p_X=0$ and hence $\tau=0$, while any resolved real emission yields $\tau>0$.
$p_n$ is a momentum aligned with the incoming beam, whose large
lightcone component equals the large lightcone component of the incoming momentum
entering the $Wsc$ vertex.
Here the lightcone direction $n$ is chosen as the direction of the
incoming beam, and $\bar{n} = (1, -\vec{n})$.
With the definition for $\tau$, the differential cross section for
any infrared-safe observable $O$ can be separated into resolved and
unresolved parts,
\begin{align}
\label{eq:17}
\frac{\mathrm{d}\sigma}{\mathrm{d} O} = & \int^{\tau_{\rm cut}}_0 \mathrm{d} \tau \, \frac{\mathrm{d}^2 \sigma}{\mathrm{d} O
\, \mathrm{d} \tau} + \int^{\tau_{\rm max}}_{\tau_{\rm cut}} \mathrm{d} \tau \, \frac{\mathrm{d}^2 \sigma}{\mathrm{d} O
\, \mathrm{d} \tau}
{\nonumber}\\
= & \left. \frac{\mathrm{d}\sigma}{\mathrm{d} O} \right|_{\rm unres.} +
\left. \frac{\mathrm{d}\sigma}{\mathrm{d} O} \right|_{\rm res.} \,.
\end{align}
Further we can write down a factorization formula for the unresolved
contribution, up to power corrections of
the form $\tau_{\rm cut} \ln^k \tau_{\rm cut}$,
\begin{align}
\label{eq:18}
\left. \frac{\mathrm{d}\sigma}{\mathrm{d} O} \right|_{\rm unres.} = & \int \mathrm{d} z
\, \frac{\mathrm{d}\sigma^{(0)} (z)}{\mathrm{d} O} H(y,\mu) \int^{\tau_{\rm
cut}}_0 \mathrm{d} \tau \, \mathrm{d} t \, \mathrm{d} k_s \, B_q( t, z,
\mu) S(k_s, \mu)
{\nonumber}\\
& \!\cdot\! \delta \left(\tau - \frac{t +2 k_s E_d}{m_c^2 - q^2} \right) + \mathcal{O}( \tau_{\rm cut} \ln^k \tau_{\rm cut} ) \,,
\end{align}
where $E_d$ is the energy of the $s$ quark entering the $Wcs$ vertex.
The derivation of this factorization formula is very similar to the
derivation of beam thrust of N-jettiness factorization~\cite{1004.2489}.
In Eq.~\eqref{eq:18}, $\mathrm{d}\sigma^{(0)}(z)/\mathrm{d} O$ is the Born level partonic
differential cross section for the process
\begin{align}
\label{eq:19}
s( zP_N) +W^*( q )\to c(p_c) \,,
\end{align}
where $P_N$ is the momentum of the incoming hadron.
The variable $y$
is defined as $y = q^2/m_c^2 < 0$.
The hard function $H(y,\mu)$ for charm quark
production can be straightforwardly related to the hard function for
bottom quark decay through analytic continuation.
We refer readers to
Refs.~\cite{Bonciani:2008wf,Asatrian:2008uk,Beneke:2008ei,Bell:2008ws}
for the full two-loop results
\footnote{In our calculation, we use the
result of Ref.~\cite{Asatrian:2008uk}, kindly provided to us by Ben
Pecjak in a convenient computer readable form.}.
The soft function $S$ is defined as a vacuum matrix element of Wilson
loops.
In a practical calculation, it can be obtained by taking the eikonal limit of the
real corrections, with the insertion of a
measurement function $\delta( k_s - k \!\cdot\! n)$, where $k_s$ is the
total momentum of the soft radiation in the final state.
For instance at one-loop the soft function can be calculated from the diagrams
\begin{align}
\label{eq:21}
S^{(1)}(k_s,\mu) =\mu^{2 \epsilon} \int \frac{\mathrm{d}^{4 - 2 \epsilon} k}{(2\pi)^{4 - 2
\epsilon}} (2\pi) \Theta(k^0) \delta(k^2) \delta( k_s - k \!\cdot\! n) \left| \parbox[h]{0.18\textwidth}{
\includegraphics[width=0.15\textwidth]{plot/heavysoft1.pdf}
}
+
\parbox[h]{0.18\textwidth}{
\includegraphics[width=0.15\textwidth]{plot/heavysoft2.pdf} }
\right|^2 \,,
\end{align}
where the lightlike direction $n$ is pointing in the incoming beam
direction.
We use a double line to denote a timelike Wilson line, and a solid
real line to denote a lightlike Wilson line.
Note that the definition
for the soft function is not Lorentz invariant.
The violation of
Lorentz invariance comes only from the measurement function
$\delta(k_s - k \!\cdot\! n)$.
However, the full result when combining with the $\delta$ function
in Eq.~(\ref{eq:18}) is Lorentz invariant.
We quote the result for the soft function through one loop in the charm-quark
rest frame below,
\begin{align}
\label{eq:11}
S(k_s,\mu) = \delta(k_s) + \frac{\alpha_{\scriptscriptstyle S}}{4\pi} C_F \left( - 8 \left[
\frac{\ln(k_s/\mu)}{k_s}\right]_\star^{[k_s,\mu]} - 4 \left[
\frac{1}{k_s}\right]_\star^{[k_s,\mu]} - \frac{\pi^2}{6} \delta(k_s) \right) + \mathcal{O}(\alpha_{\scriptscriptstyle S}^2) \,,
\end{align}
where the star distribution is defined as
\begin{align}
\label{eq:12}
\int^\mu_0 \mathrm{d} k_s \, [f(k_s)]_\star^{[k_s,\mu]} g(k_s) = \int^\mu_0 \mathrm{d}
k_s \, f(k_s)( g(k_s) - g(0) ) \,.
\end{align}
We refer to Ref.~\cite{Becher:2005pd} for the full two-loop soft
function.
The beam function $B$ is defined as the matrix element of collinear field
in a hadron state~(proton in our case), with the virtuality $t = 2 p_n
\!\cdot\! l$ of the beam jet measured~\cite{Stewart:2009yx}, where $l$ is the momentum of
final state collinear radiation, and $p_n$ is defined in
Eq.~\eqref{eq:16}.
The beam function can be written as convolution of
perturbative coefficient functions and the usual PDFs,
\begin{align}
\label{eq:27}
B_i(t, x, \mu) = \sum_j \int \frac{\mathrm{d}\xi}{\xi} \, \mathcal{I}_{ij} \left(t,
\frac{x}{\xi}, \mu\right) f_j ( \xi, \mu) + \mathcal{O} \left(
\frac{\Lambda_{\rm QCD}^2}{t} \right) \,.
\end{align}
For example, the one-loop quark-to-quark coefficient function can be calculated
through the diagrams
\begin{align}
\label{eq:28}
\mathcal{I}_{qq}^{(1)}\left(t, z, \mu\right) = & \int \frac{\mathrm{d}^{4 - 2 \epsilon} l}{(2\pi)^{4 - 2
\epsilon}} (2\pi) \Theta(l^0) \delta(l^2) \delta(t - 2 p_n \!\cdot\! l )
\delta \big( l \!\cdot\! \bar{n} - (1-z) p_n \!\cdot\! \bar{n} \big)
{\nonumber}\\
&
\times
\left| \parbox[h]{0.28\textwidth}{
\includegraphics[width=0.25\textwidth]{plot/beam1.pdf}
}
+
\parbox[h]{0.28\textwidth}{
\includegraphics[width=0.25\textwidth]{plot/beam2.pdf} }
\right|^2 \,.
\end{align}
We also need the gluon-to-quark coefficient function at this order.
The quark beam function has been calculated through to two loops~\cite{1401.5478}.
We quote the result up to one-loop here
\begin{align}
\label{eq:29}
\mathcal{I}_{qq}(t, z, \mu) = &\delta(t) \delta(1-z) + \frac{\alpha_{\scriptscriptstyle S}}{2 \pi} C_F
\left\{ 2\left[\frac{\ln (t/\mu^2)}{t} \right]_\star^{[t,\mu^2]} \delta(1-z) +
\left[\frac{1}{t} \right]_\star^{[t,\mu^2]} \frac{(1+z^2)}{[1-z]_+}
\right.
{\nonumber}\\
&\left.
+ \delta(t) \left[\frac{(1+z^2)}{[1-z]_+} - \frac{\pi^2}{6} \delta(1-z)
+ \left(1-z- \frac{1+z^2}{1-z} \ln z \right) \right]
\right\} \,,
{\nonumber}\\
\mathcal{I}_{qg}(t,z,\mu) = & \frac{\alpha_{\scriptscriptstyle S}}{2 \pi} T_F \left\{ \left[\frac{1}{t}
\right]_\star^{[t,\mu^2]} ( 1 - 2 z + 2 z^2) + \delta(t) \left[
(1 -2 z + 2 z^2) \left( \ln\frac{1-z}{z} - 1
\right) + 1 \right] \right\} \,.
\end{align}
Substituting the expansions of the hard, soft and beam functions into the
factorization formula in Eq.~\eqref{eq:18} gives the leading-power-in-$\tau$
prediction for the unresolved distribution.
The dependence on $\tau$ is very simple and can be integrated out analytically.
Note that the power-suppressed terms neglected in Eq.~(\ref{eq:18})
can also be calculated analytically and used to improve the convergence of the
phase-space slicing method, as demonstrated in Refs.~\cite{1612.00450,1612.02911}.
For a small cut-off $\tau_{\rm cut}$, integration of the unresolved
distribution obtained from the factorization formula results in large
logarithmic dependence on the cut-off.
For sufficiently small cut-off, the large cut-off
dependence is to be canceled by the resolved contribution, up to Monte-Carlo
integration uncertainty.
The resolved contribution, as its name
suggests, is free of infrared singularities at NLO.
At NNLO, the
resolved contribution contains sub-divergences.
These sub-divergences
cannot be resolved by our resolution variable $\tau$.
They must be canceled using other methods.
Fortunately, the infrared structure of
sub-divergences is lower by one order in $\alpha_{\scriptscriptstyle S}$ than the unresolved
part.
For a NNLO calculation, we can use any existing subtraction
method to cancel the sub-divergences.
In our calculation, we employ
the dipole subtraction formalism~\cite{hep-ph/9605323,hep-ph/0201036} to remove the
sub-divergences.
We also need the one-loop amplitudes for charm quark production
with an additional parton, and tree-level amplitudes for charm quark production
with two partons.
We extract the former from Ref.~\cite{Campbell:2005bb}; for the latter we use
\texttt{HELAS}~\cite{Murayama:1992gi}.
The calculations in the resolved region have been cross-checked with Gosam~\cite{1404.7096}
and Sherpa~\cite{0811.4622}, and
full agreement is found.
\subsection{Numerical results}
We now move to numerical results for the reduced cross sections of charm-quark production in DIS of neutrinos on iron.
We use CT14 NNLO PDFs~\cite{Dulat:2015mca} with $n_f=3$ active quark
flavors and the associated strong coupling constant by default.
We use a pole mass $m_c=1.4$ GeV for the charm quark,
and CKM matrix elements $|V_{cs}|=0.975$ and $|V_{cd}|=0.222$~\cite{Beringer:1900zz}.
The renormalization and factorization scales are set to $\mu_0=\sqrt{Q^2+m_c^2}$ unless
otherwise specified.
We choose a phase-space slicing parameter of $\tau_{\rm cut}=10^{-3}$ which
is found to be small enough to neglect the power corrections~\cite{1601.05430}.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.4\textwidth]{plot/v1310_nnlo_d3_disbx}
\hspace{0.3in}
\includegraphics[width=0.4\textwidth]{plot/v1313_nnlo_d3_disbx}
\end{center}
\vspace{-2ex}
\caption{\label{fig:scale}
%
QCD predictions including scale variations at different orders
for a differential reduced cross section
in Bjorken $x$ for charm (anti-)quark production from (anti-)neutrino scattering
on iron target.
}
\end{figure}
In Fig.~\ref{fig:scale} we show the QCD corrections to a differential reduced
cross section in Bjorken $x$ for which the electroweak couplings have been
taken out.
We plot the NLO and NNLO predictions normalized to the LO ones for charm
(anti-)quark production from (anti-)neutrino scattering with an energy
of 88.29 (77.88) GeV on iron target.
The cross sections are integrated over the full range of inelasticity $y$.
The hatched bands represent the scale variations, calculated by varying the
renormalization and factorization scales from $\mu_F=\mu_R=\mu_0/2$ to $2\mu_0$
while avoiding going below the charm-quark mass.
The QCD corrections are large and negative in small and moderate $x$ regions
for charm quark production with the nominal scale choice.
The NNLO corrections can reach about $-10$\% for $x$ up to 0.1 and turn
positive for $x>0.4$.
The scale variations at LO are large in general but vanish at $x\sim 0.1$,
indicating their limitation as an estimate of perturbative uncertainties.
It was found that even the scale variations at NLO underestimate the
perturbative uncertainties in the small and moderate $x$ regions due to
accidental cancellations, as will be explained later.
The NNLO scale variations give a more reliable estimate of the
perturbative uncertainties and also show improvement at high-$x$ compared
with the NLO case.
Results are similar for charm anti-quark production, which can be related
via a charge-conjugation parity transformation, except for the differences
of the initial-state PDFs.
In particular, charm quark production involves Cabibbo-suppressed contributions
at tree level from the $d$-valence quark, which dominate at high-$x$, while only sea-quark
contributions exist for charm anti-quark production.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.4\textwidth]{plot/v1310_nnlo_d3_disax}
\hspace{0.3in}
\includegraphics[width=0.4\textwidth]{plot/v1313_nnlo_d3_disax}
\end{center}
\vspace{-2ex}
\caption{\label{fig:subc}
%
QCD corrections at different orders separated into partonic channels
for a differential reduced cross section
in Bjorken $x$ for charm (anti-)quark production from (anti-)neutrino scattering
on iron target.
}
\end{figure}
In the small and moderate $x$ region the NNLO corrections are almost as
large as the NLO corrections.
That motivates a careful examination of the convergence of the
perturbative expansion.
In Fig.~\ref{fig:subc} we plot the QCD corrections from two main
partonic channels, i.e.,
with the strange (anti-)quark initial state, including Cabibbo
suppressed $d$($\bar d$) quark contributions, and with the gluon
initial state, for the same distribution as shown in Fig.~\ref{fig:scale}.
The right plot of Fig.~\ref{fig:subc} shows the corrections for charm
anti-quark production.
We observe a strong cancellation among the NLO corrections from
the strange anti-quark and the gluon channels starting from small-$x$
and persisting to the high-$x$ region.
We regard this cancellation as {\it accidental} in that it does not
arise from basic principles but is a result of several factors.
The cancellation of the NLO corrections remains if the NLO PDFs or
alternative NNLO PDFs, e.g., MMHT2014~\cite{Harland-Lang:2014zoa} and
NNPDF3.0~\cite{Ball:2014uwa}, are used instead.
A similar cancellation has also been observed in the calculation
for $t$-channel single top quark production~\cite{1404.7116}.
The size of the NNLO corrections is smaller than that of the NLO ones for the
individual partonic channels, indicating good convergence of the
perturbative expansion.
However, the cancellation between the two channels is much milder
at NNLO, which results in a net correction as large as the NLO one.
For this reason we expect the corrections from even higher orders
to be smaller than the NNLO corrections.
The left plot in Fig.~\ref{fig:subc} shows results for
charm quark production, for which the situation is similar in the
low-$x$ region.
At high-$x$ the correction from the gluon channel flattens out
due to the smaller size of the gluon PDF compared to the $d$-valence
PDF, and the net correction is small and positive at
NNLO.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.9\textwidth]{plot/CCnupubCT14totscale.pdf}
\end{center}
\vspace{-2ex}
\caption{\label{fig:kfac1}
%
QCD predictions at different orders with scale choices of $\mu_0$
and $2\mu_0$ for a double differential reduced cross section
in Bjorken $x$ and inelasticity $y$ for charm quark production
from neutrino scattering on iron target.
}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.9\textwidth]{plot/CCnbpubCT14totscale.pdf}
\end{center}
\vspace{-2ex}
\caption{\label{fig:kfac2}
%
Similar as Fig.~\ref{fig:kfac1} for charm anti-quark production from
anti-neutrino scattering.
}
\end{figure}
We further calculate the double differential reduced cross sections
in $x$ and $y$ as measured by various experimental groups.
We choose the kinematics and neutrino energies as those in the
CCFR~\cite{Goncharov:2001qe} measurement.
In Fig.~\ref{fig:kfac1} we plot ratios of various predictions to
the LO differential cross sections for charm quark production with
three different energies and each with three choices of $y$.
Here we use a charm-quark mass of 1.3 GeV.
The solid and dotted curves correspond to using scales of $\mu_0$
and $2\mu_0$.
Note that the LO cross sections in the denominator are always evaluated
with the scale $\mu_0$.
For the nominal scale choice ($\mu_0$) the NNLO corrections are
about $-10$\% at $x\sim 0.02$ and a couple of percents at $x\sim 0.3$.
The size of QCD corrections increases with $y$ in low-$x$ regions.
Dependence on the beam energy is in the opposite direction and
is weaker in general.
The scale dependence of the NNLO predictions is slightly weaker than
that of the NLO predictions at small-$x$.
In moderate and large-$x$ regions the NLO predictions show a scale
dependence that is too small due to the strong cancellations mentioned
earlier.
Fig.~\ref{fig:kfac2} shows similar results for charm anti-quark
production.
The QCD corrections are even more pronounced in this case due to
the relatively larger gluon contributions.
For $y=0.802$ the NNLO corrections can reach $-15$\% for $x\sim 0.02$
and remain $-10$\% for $x\sim 0.2$.
The same conclusion holds for the scale dependence as in the
case of charm quark production.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.9\textwidth]{plot/CCnuCT14totmass.pdf}
\end{center}
\vspace{-2ex}
\caption{\label{fig:mass1}
%
Dependence of a double differential reduced cross section
in Bjorken $x$ and inelasticity $y$ on the charm quark mass,
shown in ratios of predictions with $m_c=1.5$ GeV to 1.3 GeV,
for charm quark production from neutrino scattering on iron target.
}
\end{figure}
Since the charm quark production is usually measured at low to moderate
momentum transfers, the theoretical predictions can depend significantly
on the choice of charm-quark mass, in our case the pole mass.
Note that the determination of the charm-quark pole mass has an intrinsic uncertainty
of $0.1\sim 0.2$ GeV due to the renormalon ambiguity.
In Fig.~\ref{fig:mass1} we show the ratio of double
differential cross sections calculated when using a charm-quark
pole mass of 1.5 GeV to 1.3 GeV, at LO, NLO, and NNLO, for charm anti-quark production.
The results for charm quark production are similar and not shown for simplicity.
At LO the charm-quark mass dependence can be calculated easily.
The dominant part of that is known as slow rescaling~\cite{Georgi:1976vf} due
to the kinematic suppression, i.e., by replacing the momentum fraction
in evaluation of PDFs with $\xi=x(1+m_c^2/Q^2)$.
That explains the trends we show in Fig.~\ref{fig:mass1}.
The cross sections with a larger charm-quark mass are especially
suppressed in the small-$x$ region and for smaller neutrino energies
where the $Q^2$ is low.
Shapes of the suppression factor with respect to $x$ are different
for different values of $y$ due to the sub-dominant dependence on
the mass from the hard matrix elements.
The mass dependence is insensitive to higher order corrections.
The NLO predictions show a slightly weaker suppression compared
to the LO ones in general, especially at large-$x$ and smaller neutrino
energies.
Effects of NNLO corrections on the mass dependence are almost
negligible for the full kinematic range considered.
\subsection{Heavy quark scheme}
The NNLO calculations are carried out in a fixed flavor number
scheme with $n_f=3$.
This should be the appropriate scheme for $Q\gtrsim m_c$.
For the semi-inclusive charged-current (CC) DIS process we studied, at $Q \gg m_c$,
there exist logarithmic contributions of $\sim \alpha_{\scriptscriptstyle S}^n\ln^n(Q^2/m_c^2)$
due to initial-state gluon splitting into a $c\bar c$ pair in the
quasi-collinear limit.\footnote{In this case the muon from charm decay tends to be
close to the beam, and the experimental acceptance may be different
compared to other regions of the phase space.}
In principle that needs to be resummed by using the
heavy-quark PDFs together with an appropriate general mass
variable flavor number (GM-VFN) scheme, for example, the ACOT~\cite{hep-ph/9312319},
FONLL~\cite{1001.2312}, or RT~\cite{1201.6180} schemes.
The exact ${\mathcal O}(\alpha_{\scriptscriptstyle S}^2)$ massive coefficient functions~\cite{1601.05430}
complete all ingredients needed for constructing
such a scheme like S-ACOT-$\chi$~\cite{Guzzi:2011ew} at NNLO for the charged-current
scattering.
For the kinematics where the dimuon measurements were carried out,
the $Q^2$ is not too high compared to the charm-quark mass in the
bulk of the data.
Besides, the experimental uncertainties are at least at the level of
$5\sim 10$\% for the NuTeV and CCFR measurements.
Thus a VFN scheme is not of immediate relevance for the phenomenological study
of the dimuon measurements.
We leave a formal study of the GM-VFN scheme to future publications,
while providing an estimate of the logarithmic contributions beyond
${\mathcal O}(\alpha_{\scriptscriptstyle S}^2)$ below.
As mentioned earlier, the logarithmic contributions can be
resummed effectively with a perturbative charm (anti-)quark
PDF in the $n_f=4$ scheme that follows the DGLAP evolution,
\begin{equation}
\frac{df^{(n_f=4)}_c(x, \mu^2)}{d\ln\mu^2}=\sum_{i=q,\bar q,c, \bar c, g}P_{ci}(x,\alpha_{\scriptscriptstyle S}(\mu^2))
\otimes f^{(n_f=4)}_i(x,\mu^2),
\end{equation}
where $P_{ij}$ are the DGLAP splitting functions with the dependence on $n_f$ suppressed,
and $\mu$ is the factorization scale.
The exact results for $P_{ij}$ are known up to three loops~\cite{hep-ph/0403192,hep-ph/0404111}.
The charm-quark PDF at arbitrary scales can be derived from the boundary
conditions at $\mu=m_c$ by evolving upward.
Note that starting at ${\mathcal O}(\alpha_{\scriptscriptstyle S}^2)$ the charm-quark PDF has
a small discontinuity at $\mu=m_c$.
We can expand the charm-quark PDF in the strong coupling constant,
\begin{align}\label{eq:expa}
f_c^{(n_f=4)}(x, \mu^2)&=\Delta^{(2)}+\left({\alpha_{\scriptscriptstyle S}(m_c^2)\over 2\pi}\right)
\left\{L(P^{(0)}_{cg}\otimes f^{(n_f=4)}_g(x,m_c^2))\right\}
+ \left({\alpha_{\scriptscriptstyle S}(m_c^2)\over 2\pi}\right)^2\nonumber \\
&\Big\{
L(\sum_i P^{(1)}_{ci}\otimes f^{(n_f=4)}_i(x,m_c^2))
+{L^2\over 2}(\sum_i P^{(0)}_{cg}\otimes P^{(0)}_{gi}\otimes f^{(n_f=4)}_i(x,m_c^2) \nonumber \\
& -\beta_0 P^{(0)}_{cg}\otimes f^{(n_f=4)}_g(x,m_c^2))\Big\}+{\mathcal O}(\alpha_{\scriptscriptstyle S}^3),
\end{align}
where $\Delta^{(2)}$ is of ${\mathcal O}(\alpha_{\scriptscriptstyle S}^2)$ due to the discontinuity
when crossing the heavy-quark threshold, and $L=\ln(\mu^2/m^2_c)$.
It is understood that the strong coupling constant, the one- and two-loop
splitting functions $P^{(0,1)}_{ij}$, and the one-loop $\beta$ function in
Eq.~(\ref{eq:expa}) are all evaluated with $n_f=4$.
We can translate the strong coupling constant and PDFs with
$n_f=4$ to those with $n_f=3$ via matching at the charm-quark threshold~\cite{hep-ph/9612398}.
Furthermore, we can expand them in $\alpha_{\scriptscriptstyle S}(\mu^2)$ instead.
We arrive at an expanded solution,
\begin{align}\label{eq:expb}
f_c^{(n_f=4)}(x, \mu^2)&=\Delta^{(2)}+\left({\alpha_{\scriptscriptstyle S}(\mu^2)\over 2\pi}\right)
\left\{L(P^{(0)}_{cg}\otimes f^{(n_f=3)}_g(x,\mu^2))\right\}
+ \left({\alpha_{\scriptscriptstyle S}(\mu^2)\over 2\pi}\right)^2\nonumber \\
&\Big\{
L(\sum_i P^{(1)}_{ci}\otimes f^{(n_f=3)}_i(x,\mu^2))
-{L^2\over 2}(\sum_i P^{(0)}_{cg}\otimes P^{(0)}_{gi}\otimes f^{(n_f=3)}_i(x,\mu^2) \nonumber \\
& -\beta_0 P^{(0)}_{cg}\otimes f^{(n_f=3)}_g(x,\mu^2))\Big\}+{\mathcal O}(\alpha_{\scriptscriptstyle S}^3),
\end{align}
with $\beta$ functions and splitting functions for $n_f=4$.
Those ${\mathcal O}(\alpha_{\scriptscriptstyle S})$ and ${\mathcal O}(\alpha_{\scriptscriptstyle S}^2)$ logarithmic contributions
have already been captured by our NLO and NNLO calculations respectively.
The differences between the evolved charm-quark PDF and the expansion in
Eq.~(\ref{eq:expb}) can serve as an estimate of the remaining logarithmic
contributions at higher orders.
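
To make the structure of the expanded solution concrete, the
${\mathcal O}(\alpha_{\scriptscriptstyle S})$ term of Eq.~(\ref{eq:expb}) can be
obtained by a single numerical convolution. The sketch below uses the
standard LO splitting function $P^{(0)}_{cg}(z)=T_R\,[z^2+(1-z)^2]$ with
$T_R=1/2$ in the $(\alpha_{\scriptscriptstyle S}/2\pi)$ normalization; the
gluon shape and the value of $\alpha_{\scriptscriptstyle S}$ are illustrative
assumptions, not the MSTW2008 input used in Fig.~\ref{fig:vfn}:
\begin{verbatim}
import math
from scipy import integrate

TR = 0.5
def P_cg(z):
    """LO gluon-to-quark splitting, (alpha_s/2pi) normalization."""
    return TR * (z**2 + (1.0 - z)**2)

def toy_gluon(x):
    """Toy gluon PDF; a stand-in for the n_f=3 gluon."""
    return 3.0 * x**-1.1 * (1.0 - x)**5

def convolve(P, f, x):
    """(P (x) f)(x) = int_x^1 dz/z P(z) f(x/z)."""
    val, _ = integrate.quad(lambda z: P(z) * f(x / z) / z, x, 1.0)
    return val

alpha_s, mc, mu = 0.35, 1.4, 10.0   # illustrative values (GeV)
L = math.log(mu**2 / mc**2)
for x in (0.01, 0.05, 0.2):
    fc = alpha_s / (2.0 * math.pi) * L * convolve(P_cg, toy_gluon, x)
    print(f"x = {x:5.2f}:  O(alpha_s) charm PDF ~ {fc:.4f}")
\end{verbatim}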
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.8\textwidth]{plot/VFNcomp.pdf}
\end{center}
\vspace{-2ex}
\caption{\label{fig:vfn}
%
Differences of an evolved charm-quark PDF $c_{evol.}$
at NNLO and the expanded solution $c_{exp.}$
up to ${\mathcal O}(\alpha_{\scriptscriptstyle S})$ (NLO) and ${\mathcal O}(\alpha_{\scriptscriptstyle S}^2)$ (NNLO) as a function of
$\mu=Q$ for several $x$ values, normalized to the effective strangeness PDF $s_{eff}$.
See text for more details.
}
\end{figure}
In Fig.~\ref{fig:vfn} we plot the differences of an evolved charm-quark PDF $c_{evol.}$
at NNLO (with 3-loop splitting functions) and the expanded solution $c_{exp.}$
up to ${\mathcal O}(\alpha_{\scriptscriptstyle S})$ and ${\mathcal O}(\alpha_{\scriptscriptstyle S}^2)$ as a function of
$\mu=Q$ for several $x$ values, normalized to the effective strangeness PDF $s_{eff}$,
which is a combination of the $s(\bar s)$ and $d(\bar d)$ PDFs.
The charm quark production cross section at LO is simply
proportional to the effective strangeness PDF with the slow rescaling.
We use the MSTW2008 NNLO PDFs~\cite{0901.0002} with $m_c=$ 1.4 GeV as an input.
For small $x$ values we can see that the FFN calculation at ${\mathcal O}(\alpha_{\scriptscriptstyle S})$
misses a large portion of the logarithmic contributions, which can
reach 10-20\% of the LO charm quark production cross sections for
$Q \sim 10$ GeV.
On the other hand, the NNLO calculation reproduces the resummed contributions
well, with remaining logarithmic contributions of about 2\% of the
LO cross sections for the same $Q$ values.
Note that the highest $Q$ value probed by the CCFR and NuTeV measurements
is around 10 GeV.
For large $x$ values the conclusion is similar for charm quark production.
For charm anti-quark production the charm-quark PDF has a relatively
larger weight due to the rapid fall-off of the sea-quark PDFs.
The contributions beyond NNLO can reach 5\% for $x=0.3$ and
$Q=10$ GeV.
\section{Fast interface}
The above calculation cannot be used directly in global analyses of QCD due
to the time-consuming nature of NNLO calculations and the fact that such analyses
involve scans over a large number of PDF ensembles.
Indeed, even the NLO calculation is too slow for direct use in the analysis.
PDF fitting groups instead need to rely on either K-factor approximations or
fast interfaces based on grid interpolations.
There have been a number of developments of fast interfaces for high-order perturbative
calculations, e.g., APPLgrid~\cite{Carli:2010rw}, FastNLO~\cite{Wobisch:2011ij},
and aMCfast~\cite{Bertone:2014zva}, starting from NLO in QCD and extended to NNLO most
recently~\cite{Czakon:2017dip}.
We have constructed a fast interface specialized for our calculation following
similar approaches.
First of all, the PDFs at an arbitrary scale can be approximated by an interpolation
on a one-dimensional grid in $x$,
\begin{equation}
f(x, \mu)=\sum_{i=0}^n f_{k+i}I_i^{(n)}\left(\frac{y(x)}{\delta y}-k\right),
\end{equation}
where we choose the interpolation variable $y(x)=x^{0.3}$ and the interpolation
order $n=4$, and $f_{j}$ is the PDF value at the $j$-th grid point.
$\delta y$ has been chosen so as to give 50 grid points between $x=1$ and a minimum
determined according to the specified kinematics.
We use an $n$-th order polynomial interpolating function $I_i^{(n)}$, and the
starting grid point $k$ is determined such that $x$ lies between the
$(k+1)$-th and $(k+2)$-th grid points.
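
A minimal sketch of this interpolation is given below; for simplicity it
assumes a uniform grid in $y$ between $y(x_{min})$ and $y(1)$ and fills the
grid with a toy PDF (the precise grid placement is a technical detail of the
implementation):
\begin{verbatim}
import numpy as np

def lagrange_basis(u, i, n):
    """I_i^(n)(u): Lagrange polynomial on unit-spaced nodes 0..n."""
    val = 1.0
    for j in range(n + 1):
        if j != i:
            val *= (u - j) / (i - j)
    return val

x_min, n_pts, order = 1e-5, 50, 4     # grid in y = x^0.3
y_min = x_min**0.3
dy = (1.0 - y_min) / (n_pts - 1)
x_grid = (y_min + dy * np.arange(n_pts))**(1.0 / 0.3)

def pdf_exact(x):                     # toy PDF used to fill the grid
    return x**-1.2 * (1.0 - x)**4

f_grid = pdf_exact(x_grid)

def pdf_interp(x):
    """Order-4 interpolation; k chosen so x lies between
    nodes k+1 and k+2, as described in the text."""
    u = (x**0.3 - y_min) / dy
    k = min(max(int(u) - 1, 0), n_pts - 1 - order)
    return sum(f_grid[k + i] * lagrange_basis(u - k, i, order)
               for i in range(order + 1))

for x in (1e-4, 1e-2, 0.3):           # deviation from 1 = interp. error
    print(x, pdf_interp(x) / pdf_exact(x))
\end{verbatim}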
The cross section in deep inelastic scattering can thus be expressed as
\begin{equation}\label{eq:int}
d\sigma_{bin}=\sum_{p}\sum_{m}\sum_i \left(\frac{\alpha_s(\mu)}{2\pi}\right)^m
{\mathcal B}(p,m,i)f_i,
\end{equation}
where the summation runs over different sub-channels $p$, perturbative orders $m$,
and the grid points $i$.
The interpolation coefficients ${\mathcal B}(p,m,i)$ which are independent of
the PDFs can be obtained by projecting
the event weight onto the corresponding grid points during the MC integration.
Once those interpolation coefficients are calculated and stored,
the cross sections with any PDFs can be obtained via Eq.~(\ref{eq:int})
without repeating the time-consuming calculations of the matrix elements.
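
The construction of the grid during the MC run and its later evaluation can
be sketched as follows, reusing the grid variables and the
\texttt{lagrange\_basis} helper from the sketch above; this is schematic,
e.g.\ the division by the PDFs used during generation and the handling of
scale logarithms are omitted:
\begin{verbatim}
n_channels, n_orders = 3, 2           # sub-channels p; orders m = 0, 1
B = np.zeros((n_channels, n_orders, n_pts))

def fill(p, m, x, weight):
    """Run once, inside the costly MC integration: spread each event
    weight over neighbouring grid points with interpolation weights."""
    u = (x**0.3 - y_min) / dy
    k = min(max(int(u) - 1, 0), n_pts - 1 - order)
    for i in range(order + 1):
        B[p, m, k + i] += weight * lagrange_basis(u - k, i, order)

def xsec(f_grid_by_channel, alpha_s):
    """Fast evaluation for any PDF set: one dot product per (p, m),
    no matrix-element recomputation."""
    return sum((alpha_s / (2 * np.pi))**m
               * np.dot(B[p, m], f_grid_by_channel[p])
               for p in range(n_channels) for m in range(n_orders))
\end{verbatim}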
In Table~\ref{tab:int} we show the typical time
cost of a direct calculation and of the interpolation for the NuTeV kinematics.
Also shown is the time cost for generating the interpolation grid.
The direct calculation involves intensive MC integration
as expected, costing about 60 CPU core-hours per data point of the double differential
distribution $d^2\sigma/dxdy$ in charm-quark production with NuTeV kinematics.
The grid generation costs four times more since it requires separation
of different sub-channels.
However, with the generated grid, for any PDFs the interpolation/calculation
takes less than a millisecond.
The precision of the interpolation is found to be at the level of a few permille at
NNLO, smaller than the typical errors from the MC integration.
In Fig.~\ref{fig:int} the solid line shows the ratio of the cross
sections from direct calculation and the fast interpolation using the grid
generated from the same run both using CT14 NNLO PDFs, for all the data
points in NuTeV and CCFR measurements with charm (anti-)quark production.
In this case deviations of the two predictions are simply due to the interpolation
errors.
Also shown in Fig.~\ref{fig:int} are comparisons of the interpolation
results for the MMHT2014 and NNPDF3.0 NNLO PDFs, using the grid generated from
CT14, with independent direct calculations using the same PDFs.
Here the two predictions for each PDF choice differ by at most half a percent due to the
MC integration errors in the direct calculations, as shown by the error bars.
\begin{table}[h!]
\centering
\begin{tabular}{c|cc}
\hline
& CPU core-hours (NLO) & CPU core-hours (NNLO) \tabularnewline
\hline
\hline
direct calculation & 0.5 & 60 \tabularnewline
\hline
grid generation & 1 & 280 \tabularnewline
\hline
interpolation & $10^{-7}$ & $10^{-7}$\tabularnewline
\hline
\end{tabular}
\caption{
Typical time cost (in CPU core-hours) for calculation and interpolation
of reduced cross section $d^2\sigma/dxdy$ (per data point) of charm-quark
production with NuTeV kinematics.
\label{tab:int}}
\end{table}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.7\textwidth]{plot/inteerrs1000m14.pdf}
\end{center}
\vspace{-2ex}
\caption{\label{fig:int}
%
Ratios of the interpolated NNLO cross sections and cross sections from direct
calculations using CT14, MMHT2014 and NNPDF3.0 PDFs all with the grid
generated from the same run of CT14 calculation.
}
\end{figure}
\section{Impact on strange-quark distributions}
Now we move on to discuss the potential impact of the NNLO calculations on constraining
parton distributions, especially the strange-quark distributions, by checking
the agreement of different theory predictions with the experimental data.
We select the NuTeV and CCFR measurements of charm (anti-)quark production
in the form of double differential cross sections $d^2\sigma/dxdy$, which
provide the dominant constraints on the strange-quark distributions, e.g., in the
MMHT2014 and CT14 global analyses.
The theoretical predictions used in those analyses are at NLO only.
We include only data points with $Q^2>4\,{\rm GeV}^2$ to leave out
the region where higher-twist corrections can be potentially large.
That results in 38(33) data points for charm (anti-)quark production in NuTeV,
and 40(38) data points for charm (anti-)quark production in CCFR.
Besides, we have simply corrected the data for nuclear effects of the iron
target~\cite{hep-ph/0312323,1203.1290} using a parametrization of the $F_2$ ratio
measured at SLAC and NMC, instead of including more sophisticated corrections
to individual parton flavors~\cite{0709.3038,0902.4154,1112.6324,1509.00792} in
the theory calculations.
That leads to corrections on the data of 2\% at $x\sim 0.05$, -4\% at $x\sim 0.1$,
and $5\%$ at $x\sim 0.4$.
We did not include uncertainties on the nuclear corrections since the correction
itself is already small compared to the experimental errors for the $x$ range considered.
The experimental uncertainties include the total statistical and systematic
errors, which are treated as uncorrelated among different data points.
The total error for each data point has been scaled by the square root of its effective
number of degrees of freedom such that a reasonable fit should have $\chi^2/N_{pt}$ of one~\cite{Mason:2006qa}.
Besides, there is an additional systematic error of 10\% assumed due to
the input of the semi-leptonic decay branching ratio of the charm quark used when unfolding
the dimuon cross sections back to the charm production cross sections~\cite{Mason:2006qa}.
This normalization error is assumed to be fully correlated among
all data points in both NuTeV and CCFR measurements.
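
A minimal sketch of the resulting $\chi^2$ definition, with uncorrelated
point-by-point errors plus a single fully correlated normalization nuisance
parameter that is profiled analytically (details of our actual
implementation may differ):
\begin{verbatim}
import numpy as np

def chi2_with_norm(data, theory, sigma, norm_err=0.10):
    """chi2(lam) = sum_i ((d_i-(1+norm_err*lam) t_i)/sigma_i)^2 + lam^2,
    minimized analytically over the correlated shift lam."""
    r = (data - theory) / sigma
    b = norm_err * theory / sigma   # response to the normalization shift
    lam = np.dot(b, r) / (1.0 + np.dot(b, b))
    return np.sum((r - lam * b)**2) + lam**2, lam

rng = np.random.default_rng(1)
th = np.linspace(1.0, 2.0, 20)
dat = 0.95 * th * (1.0 + 0.05 * rng.standard_normal(20))  # data ~5% low
chi2, lam = chi2_with_norm(dat, th, 0.05 * th)
print(f"chi2 = {chi2:.1f}, shift = {lam:.2f} sigma")
\end{verbatim}
A negative profiled shift corresponds to the data preferring smaller cross
sections, matching the sign convention used in the tables below.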
In the following we first compare predictions with various PDFs to
the experimental measurement.
Later we show how the strange quark
distributions may change when using our NNLO results instead of the
NLO ones in the PDF analyses, by means of Hessian profiling~\cite{1503.05221}.
\subsection{Theory-data agreement with various PDFs}
We considered the updated NNLO PDFs from the major PDF fitting
groups, including CT14~\cite{Dulat:2015mca}, MMHT14~\cite{Harland-Lang:2014zoa},
NNPDF3.1 (both nominal set and set with only collider data)~\cite{Ball:2017nwa}, ABMP16~\cite{Alekhin:2017kpj},
HERAPDF2.0~\cite{1506.06042}, and ATLAS-epWZ16~\cite{1612.03016}.
For the case of NNPDF, the original PDF representation of MC replicas
has been transformed into a Hessian PDF set using the MC2H package~\cite{1505.06736}.
Note that we have used NNLO PDFs for both NLO and NNLO calculations.
In cases where the PDFs with $n_f=3$ are not publicly available, we evolve
the nominal PDFs with 3-loop DGLAP evolution and $n_f=3$, starting from
a scale below the charm-quark mass threshold.
We show the fits to NuTeV and CCFR data (149 data points in total) with
NLO and NNLO predictions for various choices of PDFs in Table~\ref{tab:chi2a}.
Here, throughout the calculations, we have used a charm-quark pole mass of 1.3 GeV
and a scale of $\mu=\sqrt{Q^2+m_c^2}$, despite the fact that different
PDF groups use a charm-quark pole mass ranging from 1.3
to 1.6 GeV.\footnote{ABMP16 uses the $\overline{\rm MS}$ mass as input and
treats it on the same footing as the other PDF parameters.}
In each fit we show the $\chi^2$ and the normalization shift (in units of the $1\,\sigma$ error)
of the central PDFs, without and with the full PDF uncertainties included.
In the latter case this gauges the overall agreement between data and theory
with both uncertainties included.
A shift with a minus sign indicates that the data prefer smaller values for the cross sections.
Each pair of eigenvector PDFs corresponds to one correlated theoretical error
with a symmetric Gaussian distribution when including the PDF uncertainties~\cite{1503.05221}.
For the HERA and ATLAS fits the PDF uncertainties include the model and parametrization
uncertainties as well.
Note that the $\chi^2$ values shown here may not represent the actual fit quality of the
same data in the respective global analyses, since different predictions or input
parameters are used there.
\begin{table}[h!]
\centering
\begin{tabular}{c|cc|cc}
\hline
$N_{pt}=149$ &\multicolumn{2}{c|}{NLO} & \multicolumn{2}{c}{NNLO}\tabularnewline
\hline
CT14 & 167.3(-1.0) & {\bf 130.2(1.1)} & 154.2(-0.4) &{\bf 132.9(1.3)} \tabularnewline
\hline
MMHT14 & 132.2(-1.0) & {\bf 118.6(0.1)} & 127.7(-0.3) &{\bf 118.8(0.1)} \tabularnewline
\hline
NNPDF3.1 & 157.8(-1.2) & {\bf 115.8(-1.0)} & 161.3(-0.5) &{\bf 115.1(-0.6)} \tabularnewline
\hline
ABMP16 & 189.3(-1.6) & {\bf 170.8(-0.8)} & 170.2(-1.0) &{\bf 157.6(-0.3)} \tabularnewline
\hline
HERAPDF2.0 & 258.4(-0.8) & {\bf 130.3(0.3)} & 221.6(-0.1) &{\bf 132.0(0.5)} \tabularnewline
\hline
ATLAS-epWZ16 & 352.8(-4.0) & {\bf 246.6(-2.1)} & 321.5(-3.7) &{\bf 228.7(-1.6)} \tabularnewline
\hline
NNPDF3.1 (collider) & 513.4(-5.1) & {\bf 118.5(-2.3)} & 537.8(-4.8) &{\bf 114.0(-1.9)} \tabularnewline
\hline
\end{tabular}
\caption{
%
$\chi^2$ and normalization shift (in units of the $1\,\sigma$ error) of fits to
NuTeV and CCFR charm production data with various theoretical predictions
using $m_c=1.3$ GeV and $\mu=\sqrt{Q^2+m_c^2}$.
%
The shifts are shown in brackets, with a minus sign indicating that the data prefer
smaller values for the cross sections.
%
The numbers in bold font correspond to fits including
the full PDF uncertainties as well.
\label{tab:chi2a}}
\end{table}
The PDFs shown in Table~\ref{tab:chi2a} fall into two distinct groups, those without
including any dimuon data in the PDF analysis, namely the HERA, ATLAS, and NNPDF
collider-only PDFs, and the others with dimuon data, either from NuTeV, CCFR,
CHORUS, or NOMAD.
The $\chi^2/N_{pt}$ is about one for the PDFs in the second group, as all the dimuon data
consistently prefer a suppressed strangeness, as discussed in the
introduction.
Interestingly, CT14, MMHT14 and NNPDF3.1 show very similar results on $\chi^2$ and
also the normalization shift.
The NNLO predictions without PDF uncertainties give slightly smaller $\chi^2$ in
general for the specified mass
and scale compared to NLO,
though those PDFs are fitted with NLO or approximate NNLO predictions.
For PDFs from collider-only data, the fits are rather poor if the PDF
uncertainties are not taken into account.
One reason is that the ATLAS W/Z data prefer larger central values
of the strange-quark PDFs.
With the PDF uncertainties included, the $\chi^2/N_{pt}$ is reduced to
below one for the HERA and NNPDF collider-only PDFs, indicating consistency of these
PDFs with the dimuon data once both uncertainties are considered.
The situation is different for the ATLAS PDFs, where $\chi^2/N_{pt}$ is still
about 1.5 even for the NNLO predictions.
This can be further visualized by a direct comparison of theory and data,
as in Figs.~\ref{fig:dtcom_atlas16a} and~\ref{fig:dtcom_atlas16b} for
NuTeV charm quark and anti-quark production, respectively.
Most of the data points lie far outside the PDF error bands with a non-trivial
shape dependence.
The PDF uncertainties of the charm quark cross sections are in general smaller than
those of the charm anti-quark, since the former also involve contributions
from the $d$ valence quark, which is better constrained than the sea quarks.
We conclude that the ATLAS PDFs cannot describe the dimuon data well and that
the NNLO calculations bring only limited improvement.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.9\textwidth]{plot/normNTnuATLAS-epWZ16-FULL.pdf}
\end{center}
\vspace{-2ex}
\caption{\label{fig:dtcom_atlas16a}
%
NLO and NNLO predictions for the charm quark production cross sections
using the ATLAS-epWZ16 NNLO PDFs, compared with the NuTeV measurement.
%
Also shown are the 1 $\sigma$ PDF uncertainties.
%
The experimental data have been corrected for nuclear effects.
%
The error bars represent the total experimental uncertainties
rescaled by the square root of the effective number of degrees of freedom.
%
A 10\% normalization uncertainty due to the semi-leptonic
decay BR of the charm quark is not shown here.
}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.9\textwidth]{plot/normNTnbATLAS-epWZ16-FULL.pdf}
\end{center}
\vspace{-2ex}
\caption{\label{fig:dtcom_atlas16b}
%
Similar to Fig.~\ref{fig:dtcom_atlas16a} but for charm anti-quark production
using ATLAS-epWZ16 NNLO PDFs.
}
\end{figure}
We further compare predictions from the same PDF sets but with
different scale and charm quark mass inputs as shown in Tables~\ref{tab:chi2b}
and~\ref{tab:chi2c}.
From Table~\ref{tab:chi2b} we can see that using a scale of
twice the nominal choice deteriorates the agreement between the NLO predictions
and the data, especially in the case without PDF
uncertainties.
In comparison, the $\chi^2$ of the NNLO predictions is less sensitive to the
change of scale, both with and without PDF uncertainties.
The cross sections are reduced, especially at small $x$, when using a
larger charm-quark mass of 1.4 GeV, as shown already in Fig.~\ref{fig:mass1}.
As shown in Table~\ref{tab:chi2c}, this leads to a smaller $\chi^2$ at NLO in
general compared with Table~\ref{tab:chi2a} when no PDF uncertainties are included.
At NNLO the $\chi^2$ can either decrease or increase depending on the
PDFs considered, indicating a different preference for the charm-quark mass
at NNLO compared with NLO for certain PDFs.
The $\chi^2$ for the ATLAS PDFs is reduced as well.
However, in both cases
the $\chi^2$ is still over 200 for predictions with the ATLAS PDFs.
\begin{table}[h!]
\centering
\begin{tabular}{c|cc|cc}
\hline
$N_{pt}=149$ &\multicolumn{2}{c|}{NLO} & \multicolumn{2}{c}{NNLO}\tabularnewline
\hline
CT14 & 196.1(-1.3) & {\bf 131.6(1.2)} & 160.3(-0.7) &{\bf 130.5(1.3)} \tabularnewline
\hline
MMHT14 & 152.7(-1.3) & {\bf 123.1(0.0)} & 127.1(-0.6) &{\bf 117.7(0.2)} \tabularnewline
\hline
NNPDF3.1 & 163.1(-1.5) & {\bf 119.2(-1.2)} & 153.2(-0.8) &{\bf 114.4(-0.7)} \tabularnewline
\hline
ABMP16 & 223.5(-1.8) & {\bf 197.1(-1.1)} & 180.6(-1.3) &{\bf 161.8(-0.6)} \tabularnewline
\hline
HERAPDF2.0 & 308.4(-1.2) & {\bf 130.3(0.5)} & 238.9(-0.5) &{\bf 130.2(0.5)} \tabularnewline
\hline
ATLAS-epWZ16 & 391.5(-4.1) & {\bf 271.4(-2.4)} & 339.7(-3.8) &{\bf 239.4(-1.8)} \tabularnewline
\hline
NNPDF3.1 (collider) & 487.7(-5.1) & {\bf 124.1(-2.6)} & 521.0(-5.0) &{\bf 116.4(-2.0)} \tabularnewline
\hline
\end{tabular}
\caption{
%
Similar to Table~\ref{tab:chi2a} but with $\mu=2\sqrt{Q^2+m_c^2}$.
\label{tab:chi2b}}
\end{table}
\begin{table}[h!]
\centering
\begin{tabular}{c|cc|cc}
\hline
$N_{pt}=149$ &\multicolumn{2}{c|}{NLO} & \multicolumn{2}{c}{NNLO}\tabularnewline
\hline
CT14 & 158.2(-0.8) & {\bf 131.1(1.0)} & 150.5(-0.1) &{\bf 134.1(1.3)} \tabularnewline
\hline
MMHT14 & 128.2(-0.8) & {\bf 118.9(0.0)} & 129.4(-0.1) &{\bf 119.6(0.1)} \tabularnewline
\hline
NNPDF3.1 & 156.6(-1.0) & {\bf 115.9(-0.9)} & 166.4(-0.3) &{\bf 115.5(-0.5)} \tabularnewline
\hline
ABMP16 & 177.1(-1.4) & {\bf 162.6(-0.7)} & 163.2(-0.8) &{\bf 153.2(-0.1)} \tabularnewline
\hline
HERAPDF2.0 & 240.9(-0.6) & {\bf 130.5(0.2)} & 209.2(0.2) &{\bf 132.6(0.5)} \tabularnewline
\hline
ATLAS-epWZ16 & 332.8(-3.9) & {\bf 234.4(-2.0)} & 303.5(-3.5) &{\bf 218.9(-1.5)} \tabularnewline
\hline
NNPDF3.1 (collider) & 527.0(-5.0) & {\bf 116.4(-2.2)} & 553.7(-4.8) &{\bf 110.2(-1.9)} \tabularnewline
\hline
\end{tabular}
\caption{
%
Similar to Table~\ref{tab:chi2a} but with $m_c=1.4$ GeV and $\mu=Q$.
\label{tab:chi2c}}
\end{table}
\subsection{Hessian profiling of PDFs}
One main motivation of this paper is to investigate the impact of the NNLO
calculations on the extraction of the strange-quark PDFs in global analyses including
the dimuon data.
More precisely, we would like to see how the resulting strange-quark PDFs
change when using NNLO predictions instead of the NLO ones,
as only NLO predictions were available in previous PDF fits.
That could be done by individual PDF groups using the fast interface
and grids presented in this paper.
Alternatively we can estimate the possible shift of the PDFs
by means of Hessian profiling~\cite{1503.05221}.
In Hessian profiling the PDF parameters are assumed to follow a
prior multi-Gaussian distribution.
That corresponds effectively to a parabolic shape of the prior $\chi^2$
around the central PDF and with $\Delta\chi^2=1$ when reaching the
$1\,\sigma$ error in each eigenvector direction.
The $\chi^2$ of any new data set to be included also forms a
parabola in the PDF parameter space under linear approximation.
The profiled PDFs are thus determined according to profiles of
the total $\chi^2$, e.g., by minimization of the
total $\chi^2$ for the central value and $\Delta\chi^2=1$ for
the PDF uncertainties.
We stress that Hessian profiling can only serve as an estimate
of the effects of new data or theory on the PDFs, since an actual
PDF fit involves further complexities due to, e.g., the parametrization
dependence and the requirement of tolerance conditions.
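
Under these assumptions the profiling step reduces to a small linear-algebra
problem. A minimal sketch (with symmetrized eigenvector responses and
$\Delta\chi^2=1$; tolerance criteria and asymmetric errors are ignored):
\begin{verbatim}
import numpy as np

def profile(T0, dT, data, cov):
    """Hessian profiling in the linear approximation:
    chi2_tot(t) = t.t + (data - T0 - dT.T @ t)^T C^-1 (...),
    where dT[k] = (T(f_k^+) - T(f_k^-)) / 2 is the response of the
    theory to eigenvector direction k. Returns the central shifts t
    and their covariance (Delta chi2 = 1)."""
    Cinv = np.linalg.inv(cov)
    A = np.eye(dT.shape[0]) + dT @ Cinv @ dT.T  # Hessian of chi2 / 2
    b = dT @ Cinv @ (data - T0)
    t = np.linalg.solve(A, b)
    return t, np.linalg.inv(A)

# profiled central PDF: f(t) = f0 + sum_k t_k (f_k^+ - f_k^-) / 2
\end{verbatim}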
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.6\textwidth]{plot/profiled_HERAPDF20_NNLO_FULL.pdf}
\end{center}
\vspace{-2ex}
\caption{\label{fig:prof_hera}
%
PDF ratio of strange to non-strange sea-quarks at 2 GeV as a function of $x$
for the HERAPDF2.0 NNLO PDFs.
%
The solid black lines indicate the upper/lower uncertainties of original
PDFs.
%
The colored bands represent the central value and uncertainty
after profiling using the NuTeV and CCFR charm (anti-)quark production
data with the NLO and NNLO predictions.
}
\end{figure}
We start with the HERAPDF2.0 NNLO PDFs, which do not include any dimuon
data but rather implement certain model constraints on the strangeness fraction
and shape.
We show the PDF ratio of strange to non-strange sea quarks, $R_s$, at a scale of 2 GeV
in Fig.~\ref{fig:prof_hera}.
We can see a moderate suppression of the strangeness in the small to intermediate
$x$ region compared to the $u$ and $d$ sea quarks, and a rapid falloff at
large $x$.
The PDF uncertainties, as indicated by the solid black lines, are large,
more than 30\% in the entire $x$ range,
and include the experimental, model, and parametrization uncertainties.
The colored bands in Fig.~\ref{fig:prof_hera} are for the profiled PDFs
with the NuTeV and CCFR charm (anti-)quark production data together using the NLO
and NNLO theoretical predictions.
It is clear that the dimuon data prefer an even more suppressed strangeness, with
$R_s$ of about 0.5 in the full range of $x$.
The profiled PDFs lie at the lower edge of the $1\, \sigma$ error of the
original PDFs, indicating reasonable agreement between the original PDFs
and the dimuon data, as already seen in Table~\ref{tab:chi2a}.
The profiled PDFs have much smaller uncertainties on $R_s$ than the original
PDFs, as one expects.
We notice that the PDF uncertainties are also reduced significantly in the small-$x$
region $10^{-4}-10^{-2}$, which is beyond the coverage of the dimuon
data.
This is possibly due to the restricted parametrization form of the strange-quark
PDFs used in the HERA PDF analysis.
Importantly, we find that the NNLO predictions prefer higher values
of $R_s$ than the NLO ones, in this case well above the $1\,\sigma$
error band of the latter.
That can be understood since the NNLO corrections are negative in the bulk
of the data.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=0.6\textwidth]{plot/profiled_MMHT2014nnlo68cl_nf3.pdf}
\end{center}
\vspace{-2ex}
\caption{\label{fig:prof_mmht14}
%
Similar to Fig.~\ref{fig:prof_hera} but for profiling of
the MMHT2014 NNLO PDFs.
}
\end{figure}
We perform another profiling study with the MMHT2014 NNLO PDFs as shown
in Fig.~\ref{fig:prof_mmht14}.
Note that since the MMHT2014 analysis already includes the above dimuon data,
the study here is only meant to check the impact of the NNLO corrections.
We can see that the NNLO predictions prefer a larger strangeness than the NLO predictions
for $x$ up to a few times 0.1, by a similar amount as in
Fig.~\ref{fig:prof_hera}.
The shift of the central values of the NLO profiled PDFs compared to the original
PDFs, though still within the PDF error band, is due to several factors.
In the MMHT2014 fits~\cite{Harland-Lang:2014zoa} a charm-quark
pole mass of 1.4 GeV is used together with a semi-leptonic
decay branching ratio of the charm quark that is 7\% lower than the one extracted
by NuTeV and CCFR, both of which lead to an increase of the strange-quark PDFs.
Besides, there are also LHC data in the MMHT analysis that pull the ratio further up.
The uncertainties are largely reduced in the profiled PDFs mostly because
we use the $\Delta \chi^2=1$ criterion rather than a dynamic tolerance
condition as in the MMHT analysis.
We have also compared the profiled PDFs with alternative scale choices and
found that those obtained with the NNLO predictions are less sensitive to the choice of
scale.
\section{Conclusion}
In conclusion we have presented details on calculation of next-to-next-to-leading
order QCD corrections to massive charged-current coefficient functions in
deep-inelastic scattering.
We focus on the application to charm-quark production in neutrino scattering on
fixed target that can be measured via the dimuon final state as in the NuTeV
and CCFR experiments.
We construct a fast interface to the calculation so for any parton
distributions the dimuon cross sections can be evaluated within milliseconds
by using the pre-generated interpolation grids.
The NNLO predictions can thus be conveniently included in future global analyses
of QCD involving the dimuon data.
We further compare the dimuon data with the NNLO predictions using various PDFs and confirm
the pull of the ATLAS data on the strange-quark PDFs.
Moreover, we study the impact of the NNLO corrections on the extraction of the strange-quark
PDFs in the context of Hessian profiling, and find that with the NNLO predictions
the dimuon data tend to favor larger strange-quark PDFs than with the
NLO predictions.
The fast interface together with the interpolation grids for the dimuon
cross sections is publicly available upon request.
A definite conclusion on the potential inconsistency of the DIS and ATLAS
data awaits the ongoing global fits by the PDF analysis groups.
\begin{acknowledgments}
JG would like to thank E. Berger, P. Nadolsky, R. Thorne,
C.-P. Yuan and HX Zhu for useful conversations, and Southern Methodist University for
the use of the High Performance Computing facility ManeFrame.
The work of JG is sponsored by Shanghai Pujiang Program.
\end{acknowledgments}
\section{Introduction}
The anomalous magnetic moment of the muon constitutes both
experimentally and theoretically a clean quantity which makes it an ideal
object for precision studies. The current
experimental value for $a_\mu=(g-2)_\mu/2$
(see Refs.~\refcite{Bennett:2006fi,Roberts:2010cj})
\begin{eqnarray}
a_{\mu}^{\rm exp} &=& 116\,592\,089(63) \times 10^{-11}
\,,
\label{eq::amuexp}
\end{eqnarray}
matches the accuracy of the theoretical
prediction which is given by~\cite{Hagiwara:2011af} (see also
Refs.~\refcite{Jegerlehner:2011ti,Davier:2010nc})
\begin{eqnarray}
a_{\mu}^{\rm th} &=& 116\,591\,828(49) \times 10^{-11}
\,.
\label{eq::amuth}
\end{eqnarray}
However, the comparison of Eqs.~(\ref{eq::amuexp}) and~(\ref{eq::amuth})
shows that there is a discrepancy of about 3$\sigma$ which
has persisted for more than a decade.
The numerically most important contribution to $a_{\mu}^{\rm th}$ comes from
QED followed by hadronic, light-by-light and electroweak corrections which are
described in detail in the
reviews~\refcite{Melnikov:2006sr,Jegerlehner:2009ry}
and~\refcite{Miller:2012opa}. In this contribution recent four-loop QED
corrections~\cite{Lee:2013sx,KLMS13} are discussed which are based on
analytic calculations with the purpose to provide an independent check of
the purely numerical approach of Ref.~\refcite{Aoyama:2012wk}.
One of the motivations for such a cross-check is the fact that the
four-loop QED contribution is of the same order of magnitude as the difference
between Eqs.~(\ref{eq::amuexp}) and~(\ref{eq::amuth}). Beyond one-loop order
large QED corrections are obtained from Feynman diagrams containing closed
electron loops. Since the electron mass cannot be set to zero such diagrams
lead to sizeable logarithms $\ln(M_\mu/M_e) \approx 5.3$ which
occur up to third power at four loops. Among the various classes of Feynman
diagrams those where the external photon couples to a closed electron loop,
the so-called light-by-light-type diagrams, give the most important
contributions. At three loops, where analytic results are known, these
diagrams come with an additional factor $\pi^2$. In our approach the
light-by-light diagrams are technically quite demanding and have not yet been
considered. As a preparatory work we looked at other classes
which are discussed in the following two sections: the
contribution involving closed tau lepton loops (see Section~\ref{sec::tau})
and the one with two or three closed electron loops (see
Section~\ref{sec::electron}). The corresponding results have been obtained in
Refs.~\refcite{KLMS13} and~\refcite{Lee:2013sx}.
\section{\label{sec::tau}Closed tau loops}
Starting from two loops there are Feynman diagrams contributing to $a_\mu$
which contain a closed tau lepton loop. Actually there is only one such
diagram at two-loop order
(see Fig.~\ref{fig::diag_2l}) since the contributions where the
external photon couples to the closed fermion loop are zero due to Furry's
theorem. At three loops one has to deal with 60 and at four loops with 1169
Feynman diagrams. The four-loop diagrams can be subdivided into twelve
classes~\cite{Aoyama:2012wk} which are shown in Fig.~\ref{fig::diags_4l}.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.65]{dia2L.eps}
\end{center}
\vspace*{-1em}
\caption[]{\label{fig::diag_2l} Two-loop Feynman diagrams contributing to
$(g-2)_\mu$. Thin and thick solid lines represent muon and tau leptons,
respectively, and wavy lines denote photons.}
\end{figure}
\begin{figure}[t]
\begin{center}
\leavevmode
\epsfxsize=0.9\textwidth
\epsffile[80 450 500 750]{feyndias_4l.ps}
\end{center}
\vspace*{-1em}
\caption[]{\label{fig::diags_4l} Sample Feynman diagrams contributing to
$(g-2)_\mu$ at four-loop order. The symbols
label the individual diagram classes and are taken over from
Refs.~\refcite{Aoyama:2012wj,Aoyama:2012wk}.}
\end{figure}
The two-loop diagram can be computed exactly~\cite{Elend:1966}
in terms of a function which depends on $M_\mu/M_\tau$ (see also
Refs.~\refcite{Laporta:1992pa,Czarnecki:1998rc,Passera:2006gc}).
We nevertheless want to use this simple example in order to
demonstrate the method which we apply at four loops where an exact
calculation is out of reach with the currently available technology. The basic
idea is to obtain an expansion of $a_\mu$ in the limit $M_\tau \gg M_\mu$ by
Taylor expanding the integrand in certain kinematical regions. The latter is
visualized in Fig.~\ref{fig::ae} where the two contributions are shown which
arise after applying the rules of asymptotic expansion.\cite{Smirnov:2013} The
notation is as follows: left of the symbol $\otimes$ one finds the so-called
hard subgraphs which by definition contain all tau lepton propagators and which are
one-particle irreducible with respect to the light lines. The subgraphs are
expanded in all small quantities, in our case the external momenta and the
muon mass, and afterwards the integrations over the hard loop momenta are
performed. On the right of $\otimes$ one finds the co-subgraphs. They are
constructed from the original diagram by removing all lines which are part
of the subgraph. The blob indicates the position
where the result of the subgraph has to
be inserted before integration over the loop momenta of the co-subgraph.
At two-loop order the hard subgraphs lead to
either one- or two-loop vacuum integrals with one mass scale, $M_\tau$,
whereas for the co-subgraphs
tree-level contributions or one-loop on-shell integrals with $q^2=M_\mu^2$
have to be considered.
Analogously, at four
loops one has to deal with vacuum integrals up to four loops and on-shell
integrals up to three loops. Both classes of integrals are very well studied
up to this loop-order (see Ref.~\refcite{KLMS13} for references and
more details).
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.55]{asy2Ls1.eps}
\hfill \includegraphics[scale=0.55]{asy2Ls2.eps}
\end{center}
\caption[]{\label{fig::ae} Graphical representation of
the hard sub-graphs and co-subgraphs as obtained after applying
the rules for asymptotic expansion to the two-loop diagram
in Fig.~\ref{fig::diag_2l}.}
\end{figure}
In Ref.~\refcite{KLMS13} three expansion terms in $M_\mu^2/M_\tau^2\approx
0.0035$ have been computed for all twelve classes of diagrams shown in
Fig.~\ref{fig::diags_4l}. After adding all contributions one obtains
\begin{eqnarray}
A^{(8)}_{2,\mu}(M_\mu/M_\tau)&\approx&
0.0421670 + 0.0003257 + 0.0000015 \approx 0.0424941(2)(53)
\,,
\label{eq::A8mu}
\end{eqnarray}
where $(\alpha/\pi)^4 A^{(8)}_{2,\mu}(M_\mu/M_\tau)$ represents the four-loop
contribution to $a_\mu$ induced by virtual tau lepton loops. The first and
second uncertainty in Eq.~(\ref{eq::A8mu}) indicate the truncation error and
the error in the input quantity $M_\mu/M_\tau$, respectively. Due to the
smallness of the expansion parameter we observe a rapid convergence. Actually,
as can be seen after the first approximation sign in Eq.~(\ref{eq::A8mu}), each
subsequent term is about a factor 100 smaller than the previous one. Thus it
is safe to assign 10\% of the last computed term as the uncertainty of the
truncation after $(M_\mu^2/M_\tau^2)^3$. Comparing the result
of Eq.~(\ref{eq::A8mu}) with the one from Ref.~\refcite{Aoyama:2012wk}
based on numerical integration, $0.04234(12)$, shows good agreement. However,
the analytic result is significantly more precise.
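
For orientation, the corresponding contribution to $a_\mu$ is obtained by
multiplying with $(\alpha/\pi)^4$; a short numerical check:
\begin{verbatim}
import math
alpha = 1.0 / 137.035999     # fine-structure constant
A8 = 0.0424941               # A^(8)_{2,mu} from the text
print((alpha / math.pi)**4 * A8 * 1e11)  # ~0.12, in units of 10^-11
\end{verbatim}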
\section{\label{sec::electron}Closed electron loops}
In Ref.~\refcite{Lee:2013sx} a first step towards a systematic study of four-loop
on-shell integrals has been undertaken. More precisely, all classes of Feynman
integrals have been studied which are needed to compute QED or QCD corrections
to a massive fermion propagator with on-shell external momenta and two or
three closed massless loops. Thus, contributions to $a_\mu$ from diagrams as
shown in Fig.~\ref{fig::amu_e} can be evaluated.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.75]{gm2_nl2.eps}
\end{center}
\vspace*{-1em}
\caption[]{\label{fig::amu_e} Four-loop Feynman diagrams contributing to
$(g-2)_\mu$. In at least two of the closed fermion loops electrons are
present.}
\end{figure}
The four-loop integrals considered in Ref.~\refcite{Lee:2013sx} only contain
either massive of massless lines. For this reason, as a start, the electrons
have to be chosen as massless which leads to finite results as long as the
fine structure coupling is renormalized in a mass independent renormalization
scheme. Thus, initially, we renormalize the muon mass on-shell but $\alpha$ in
the $\overline{\rm MS}$ scheme. The (finite) result contains both constant
terms and logarithms $\ln(\mu^2/M_\mu^2)$ where $\mu$ is the renormalization
scale of the fine structure constant. Afterwards we transform $\alpha$ to
the on-shell scheme which introduces $\ln(\mu^2/M_e^2)$ terms. In this way the
$\mu$ dependence cancels and $\ln(M_e^2/M_\mu^2)$ terms remain.
By construction the described approach can only be applied to
those Feynman diagrams where the closed electron loops
are related to the renormalization of $\alpha$. This excludes the
light-by-light-type Feynman diagrams where the external photon couples to an
electron.
As an example we want to discuss the four-loop contribution to $a_\mu$ which
contains two closed electron loops but no additional closed muon loop. In
Ref.~\refcite{Lee:2013sx} this contribution has been denoted by
$a_\mu^{(42)a}$ and the analytic expression is given by\footnote{For similar
corrections of this type see also
Refs.~\refcite{Laporta:1993ds,Aguilar:2008qj}.}
\newcommand{\Mmu}{M_\mu} \newcommand{\Me}{M_e}
\newcommand{\Lmue}{L_{\mu e}}
\begin{eqnarray}
a_\mu^{(42)a} &=&
L_{\mu e}^2
\left[\pi ^2
\left(\frac{5}{36}-\frac{a_1}{6}\right)+\frac{\zeta_3}{4}
-\frac{13}{24}\right]
+ L_{\mu e} \left[-\frac{a_1^4}{9}+\pi ^2
\left(-\frac{2 a_1^2}{9}+\frac{5
a_1}{3}-\frac{79}{54}\right)
\right.\nonumber\\&&\left.\mbox{}
-\frac{8 a_4}{3}-3 \zeta_3+\frac{11 \pi ^4}{216}
+\frac{23}{6}\right]
-\frac{2 a_1^5}{45}+\frac{5 a_1^4}{9}
+\pi ^2 \left(-\frac{4
a_1^3}{27}+\frac{10 a_1^2}{9}
\right.\nonumber\\&&\left.\mbox{}
-\frac{235
a_1}{54}-\frac{\zeta_3}{8}+\frac{595}{162}\right)
+\pi ^4 \left(-\frac{31
a_1}{540}-\frac{403}{3240}\right)+\frac{40 a_4}{3}+\frac{16
a_5}{3}-\frac{37 \zeta_5}{6}
\nonumber\\&&\mbox{}
+\frac{11167 \zeta_3}{1152}-\frac{6833}{864}
\,,
\label{eq::amu_e}
\end{eqnarray}
with $a_1=\ln2$, $a_n=\mbox{Li}_n(1/2)$ ($n\ge1$), $\zeta_n$ is Riemann's zeta
function and $\Lmue=\ln(\Mmu^2/\Me^2)$. It is interesting to note that
quantities up to transcendentality level five appear in Eq.~(\ref{eq::amu_e}).
The numerical evaluation leads to
$a_\mu^{(42)a} = -3.62427$ which should be compared to
$-3.64204(112)$.\cite{Kinoshita:2004wi,Aoyama:2012wk} The difference is of
order $10^{-2}$ (i.e. $0.5\%$) and can be explained by missing
$M_e/M_\mu$ terms in the analytic expression.
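
The expression in Eq.~(\ref{eq::amu_e}) is straightforward to evaluate
numerically. The following sketch uses PDG lepton masses (which may differ
in the last digits from the inputs of Ref.~\refcite{Lee:2013sx}) and
reproduces the quoted value:
\begin{verbatim}
from mpmath import mp, mpf, log, pi, zeta, polylog

mp.dps = 20
a1 = log(2)
a4, a5 = polylog(4, mpf(1)/2), polylog(5, mpf(1)/2)
z3, z5 = zeta(3), zeta(5)
L = 2*log(mpf('105.6583715')/mpf('0.510998928'))  # ln(Mmu^2/Me^2)

c2 = pi**2*(mpf(5)/36 - a1/6) + z3/4 - mpf(13)/24
c1 = (-a1**4/9 + pi**2*(-2*a1**2/9 + 5*a1/3 - mpf(79)/54)
      - 8*a4/3 - 3*z3 + 11*pi**4/216 + mpf(23)/6)
c0 = (-2*a1**5/45 + 5*a1**4/9
      + pi**2*(-4*a1**3/27 + 10*a1**2/9 - 235*a1/54
               - z3/8 + mpf(595)/162)
      + pi**4*(-31*a1/540 - mpf(403)/3240)
      + 40*a4/3 + 16*a5/3 - 37*z5/6
      + 11167*z3/1152 - mpf(6833)/864)
print(c2*L**2 + c1*L + c0)   # ~ -3.624
\end{verbatim}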
\section*{Acknowledgments}
I would like to thank Alexander Kurz, Roman Lee, Tao Liu, Peter Marquard,
Alexander Smirnov and Vladimir Smirnov for a fruitful collaboration on the
topics discussed in this contribution. This work was supported by the DFG
through the SFB/TR~9 ``Computational Particle Physics''.
\vspace*{-1em}
\section{Introduction}
The discovery of the Higgs boson completed the Standard Model (SM) of particle physics \cite{ATLAS:2012yve,CMS:2012qbp}. However, there are still some loose ends which the SM is unable to connect. For instance the nature of Dark Matter (DM), the generation of neutrino masses or the observed Baryon Asymmetry of the Universe (BAU) are open questions which call for an extension of the SM.
A very efficient theory framework that addresses these three questions simultaneously is the extension of the SM with three right-handed (or sterile) neutrinos.
Sterile neutrinos can be introduced explicitly in order to address the observations of neutrino oscillations, BAU, and DM in a minimal way. An excellent example is given by the so-called neutrino minimal Standard Model ($\nu$MSM) \cite{Asaka:2005pn, Asaka:2005an,Shaposhnikov:2008pf}, cf.\ also Ref.~\cite{Abazajian:2012ys} for a review.
It was shown by Akhmedov, Rubakov, and Smirnov that the existence of sterile neutrinos can explain the observed BAU via oscillations between the active and the sterile neutrino sectors \cite{Akhmedov:1998qx}, which is referred to as the ARS leptogenesis mechanism.
Within this mechanism CP violating interactions between sterile neutrinos and the active lepton sector result in a lepton number asymmetry in both the sterile and the active neutrino sector.
Sphaleron processes will subsequently convert the lepton asymmetry stored in the (left-handed) active sector into a baryon asymmetry \cite{Klinkhamer:1984di} thus explaining the Baryon Asymmetry of the Universe. Note that this mechanism, unlike standard thermal leptogenesis \cite{Fukugita:1986hr} or resonant leptogenesis \cite{Pilaftsis:2003gt}, requires the sterile neutrinos to be non-thermal, i.e.~the interaction rate with the active leptons must be small. Consequently, ARS leptogenesis requires the sterile neutrinos to have GeV-scale masses. See for example Refs.~\cite{Canetti:2012kh,Drewes:2017zyw} for recent reviews and Refs.~\cite{Canetti:2010aw,Eijima:2018qke} for investigations into the available parameter space of such models.
In contrast to adding sterile neutrinos explicitly to the SM, their existence is motivated naturally in non-minimal extensions of the SM, for instance in models where an additional gauge symmetry is introduced. For example, when the $B-L$ numbers of the SM fermions are gauged, a $U(1)_{B-L}$ gauge factor is introduced, together with an additional gauge field and a scalar to make the gauge field massive via spontaneous symmetry breaking.
The existence of sterile neutrinos is required in this model to keep the theory anomaly-free \cite{Montero:2007cd}. In this case an explicit Majorana mass for the sterile neutrinos would be forbidden, however, it can be introduced dynamically through the same scalar that makes the gauge boson massive.
Extensions of the minimal sterile-neutrino framework, and their effect on (ARS) leptogenesis, have been discussed in the context of many different theory frameworks:
in conformal models \cite{Khoze:2013oga,Khoze:2016zfi}; in $B-L$ extensions \cite{Caputo:2018zky}; in a Majoron model and axions \cite{Escudero:2021rfi}; in the context of inflatons \cite{Shaposhnikov:2006xi}.
Particularly interesting is the observation that resonant leptogenesis can be possible for heavy neutrinos as light as 500 GeV if their decays are assisted by additional scalars \cite{Alanne:2018brf}.
A smoking gun for a non-minimal neutrino sector would be the discovery of additional scalar resonances in the laboratory.
Scalar particles are searched for extensively at the LHC and excesses in recent data seem to point toward additional scalar degrees of freedom.
The CMS collaboration measured an excess in diphoton events \cite{CMS:2018cyk} that could be a scalar resonance at about 96 GeV, compatible with an excess in $b\bar b$ from LEP, cf.\ Refs.~\cite{Cao:2016uwt,Biekotter:2019kde}.
Moreover, there are excesses in multi-lepton final states pointing towards a heavy scalar with a mass of about 270 GeV \cite{vonBuddenbrock:2016rmr,vonBuddenbrock:2017gvy} that is connected to diphoton excesses in many signal channels pointing toward a resonance at 151 GeV \cite{Crivellin:2021ubm}.
Last but not least, there are also some less significant excesses in four-lepton final states with invariant masses around 400 GeV and above \cite{ATLAS:2020zms}.
Clearly more data is needed to determine if any of these excesses will turn into a discovery.
In this paper we consider how an additional thermalised scalar would affect the efficiency of the ARS mechanism to address the BAU. Therefore we extend the SM with two right-handed neutrinos and a scalar boson, corresponding to an effective model that can in principle explain both neutrino oscillations and the observed BAU. This model can be interpreted as an extension of the so-called $\nu$MSM \cite{Asaka:2005an} or as a $B-L$ symmetric model where the gauge boson is many orders of magnitude heavier than the other SM extending fields.
This article is structured as follows. In \cref{chapter2} we will start with a discussion of the model including the sterile neutrinos and the additional scalar. Following this, thermal effects of the scalar on the dynamics of the sterile neutrino will be summarized. Afterwards, the kinetic equations, needed to calculate the lepton asymmetry, will be reviewed and the effect of an additional scalar will be discussed. \Cref{chapter2} will end with a discussion on the timescales which are relevant for successful leptogenesis via oscillations. \Cref{chapter3} will start with a general discussion on the available parameter space, where general arguments are used to determine which parameter ranges could lead to non-trivial dynamics. In the remainder of this section results from explicit calculations will be given. The results can be used to determine how the additional scalar affects ARS leptogenesis. The article will be wrapped up with our conclusions.
\section{Theory framework}
\label{chapter2}
We consider a minimal extension of the scalar sector with a real scalar singlet and a minimal extension of the fermion sector with two right-handed neutrinos (or, analogously, sterile neutrinos).
The latter are motivated and constrained by the observed neutrino oscillations and we also consider LHC constraints on scalar resonances for the former, both of these constraints limit the model parameters at zero temperature. As we want to study early Universe cosmological implications of this framework, we also include finite temperature effects.
\subsection{The Model}
For concreteness we introduce $B-L$ symmetry with a corresponding $U(1)_{B-L}$ gauge factor that is spontaneously broken at an energy scale far above the electroweak scale.
In this model the field content beyond the SM is given by three additional sterile neutrinos $N_i,\,i=1,2,3$ and a complex scalar singlet $S$ that carries twice the charge of $N_i$ under the $B-L$ symmetry (typically lepton number $2$).
For the sake of minimality we make two assumptions:
the third sterile neutrino $N_3$ is decoupled and does not contribute to our discussion;
the gauge boson corresponding to the $B-L$ symmetry can be neglected, e.g.\ because its gauge couplings are sufficiently small or its mass is much larger than the other particle's masses.
This leaves us with a real scalar boson $S$ and two sterile neutrinos.
In this scenario the following Yukawa terms can be added to the Lagrangian of the SM:
\begin{equation}
{\cal L}_Y = - F_{\alpha i} \bar{L}_\alpha H N_i - \frac{1}{2}Y_{ij} S \bar N^c_i N_j + (h.c.) \,.
\label{eq:lagrangianSNN}
\end{equation}
Above, $H$ is the SM Higgs field, $L_\alpha$ are the left-handed lepton doublets with $\alpha=e,\ \mu,\ \tau$ and $N_i$ with $i = 1, \ 2$ are the right-handed neutrinos that couple to $S$ with Yukawa-like coupling matrix $Y$. $F$ is a Yukawa-like coupling matrix describing the interactions between the right-handed neutrinos, the lepton doublet, and the Higgs boson. We work in a basis where the mass matrix of the sterile neutrinos is diagonalized, i.e.~the Yukawa matrix $Y$ is a diagonal matrix.
We remark at this point that in this model the lightest active neutrino is exactly massless due to the decoupled third sterile neutrino $N_3$.
The scalar potential in our model can be expressed as
\begin{equation}
V(S,H) = -\frac{1}{2} \mu_S^2 S^2 - \mu_H^2 H^\dagger H + \frac{1}{4} \lambda_S S^4 + \lambda_H (H^\dagger H)^2 + \frac{1}{2}\lambda_{SH} H^\dagger H S^2 \,,
\label{eq:scalarpotential}
\end{equation}
where the $\mu_i$ are mass parameters and the $\lambda_i$ are coupling constants of the scalar fields $S$ and $H$, where the former is a real scalar field and the latter is the complex isospin doublet of the SM Higgs boson.
Notice that terms that are odd in $S$ can be neglected because of the $B-L$ symmetry.
The scalars $S$ and $H$ can develop non-zero vacuum expectation values (vevs) when $\mu_i^2 > 0$:
\begin{equation}
\langle S \rangle = v_S^0, \qquad \langle H \rangle = \begin{pmatrix} 0 \\ \frac{1}{\sqrt{2}} v_{EW} \end{pmatrix}\,.
\end{equation}
As long as the mixing between the new scalar and the Higgs is small, the physical Higgs is dominated by the neutral component of the doublet $H$, with $v_{EW} = 246$~GeV.
As $H$ has isospin and hypercharge, this leads to spontaneous breaking of the electroweak symmetry as in the SM.
\subsection{Parametrisation}
\label{subsec:casasibarra}
The scalar sector contains two independent parameters that will be relevant for our discussion below: the scalar-Higgs coupling $\lambda_{SH}$ and the scalar self-coupling $\lambda_S$.
When the scalars $S$ and $H$ develop non-zero vevs, Dirac ($M_D$) and Majorana ($M_M$) mass matrices emerge for the neutrinos:
\begin{equation}
M_D = F \cdot v_H,\qquad M_M = Y \cdot v_S^0 \,.
\end{equation}
In the type I seesaw approximation the small neutrino mass is given by $ M_D^2 / M_M$. Notice, however, that lepton number is broken in this setup only when the trace of $M_M$ is non-zero. Diagonalisation of the mass matrix yields the physical eigenstates, which are linear combinations of the interaction fields.
We anticipate the requirement from leptogenesis for the sterile neutrinos to be quasi degenerate in masses,\footnote{The leptogenesis mechanism requires the mass splitting between the sterile neutrinos to be small, i.e.~the sterile neutrino masses to be highly degenerate. This is true for the ARS mechanism and also for resonant leptogenesis, see refs.~\cite{Klaric:2020phc,Klaric:2021cpi} for an extended discussion.} and introduce the parametrisation:
\begin{equation}
M_{\pm} = M_N^0 (1 \pm \alpha)\,.
\end{equation}
The parameter $\alpha$ parametrises the mass difference of the two heavy neutrinos at zero temperature, and for $\alpha \ll 1$ we have $M_- = |Y_{11}| v_S^0 \simeq |Y_{22}| v_S^0 = M_+$ (in the basis where $M_M$ is diagonal) such that we can approximate the masses for both sterile neutrinos $M_\pm$ via the zero-temperature mass
\begin{equation}
M_N^0 = Y v_S^0\,,
\label{eq:Nbaremass}
\end{equation}
where we introduced the new parameter $Y= (|Y_{11}|+|Y_{22}|)/2$ which can be used instead of $M_N^0$.
This yields the two parameters $M_N^0$ (or $Y$) and $\alpha$, which are independent for $\alpha \ll 1$.
\begin{table}
\begin{center}
$\begin{array}{c|c|c|c|c|c}
m_1 &m_2 & m_3 & \sin^2\theta_{12} & \sin^2\theta_{13} & \sin^2\theta_{23} \\
\hline
0 \ \text{eV} & 8.68 \times 10^{-3} \ \text{eV} & 5.03 \times 10^{-2} \ \text{eV} & 0.307 & 0.0218 & 0.545
\end{array}$
\end{center}
\caption{Light neutrino observables used as input, taken from the Particle Data Group 2020 review~\cite{ParticleDataGroup:2020ssz}.}
\label{tab: variables}
\end{table}
The Yukawa coupling $F$ can be parametrised in a bottom-up and completely general way, based on observable low-energy data, the so-called Casas-Ibarra parametrisation for a $3+2$ neutrino sector \cite{Casas:2001sr}.
For this parametrisation we use the following input parameters: the three neutrino mixing angles $\theta_{ij}$, the three active neutrino masses $m_i$, the two heavy neutrino mass eigenvalues (parametrised by $M_N^0$ and $\alpha$) and the four phases $\xi$, $\eta$, $\delta$ and $\omega$.
The mixing angles and active neutrino masses are known from neutrino experiments \cite{ParticleDataGroup:2020ssz}, see \cref{tab: variables}.
In order to compare our model to the ARS leptogenesis mechanism in the $\nu$MSM we fix the phases to the values used in Ref.~\cite{Asaka:2011wq}: $\xi = 1$, $\omega = \pi/4 $, $\delta= 7 \pi /4$ and $\eta = \pi/3$.
We remark that we checked that random variations of the internal parameters within the $1\sigma$ limits of the experimental measurements lead to ${\cal O}(1)$ modifications in entries of the Yukawa coupling matrix $F$.
In summary, in our model there are a total of five parameters that are free within certain limits, namely: $Y$, $\alpha$, $v_S^0$, $\lambda_S$ and $\lambda_{SH}$. We will discuss the constraints on these parameters below.
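
As a minimal numerical illustration of the type I seesaw structure described
above (with toy values for $F$, $Y$ and $v_S^0$ rather than the actual
Casas-Ibarra output, and assuming the convention $M_D = F\,v_{EW}/\sqrt{2}$):
\begin{verbatim}
import numpy as np

v_EW, v_S0 = 246.0, 1e6                 # GeV; illustrative v_S^0
Y = np.diag([1e-5, 1.001e-5])           # quasi-degenerate, alpha ~ 5e-4
rng = np.random.default_rng(0)
F = 1e-7 * (np.ones((3, 2)) + 0.3j * rng.random((3, 2)))  # toy Yukawas

M_D = F * v_EW / np.sqrt(2.0)           # Dirac mass matrix (3 x 2)
M_M = Y * v_S0                          # Majorana mass matrix (2 x 2)

# type I seesaw: m_nu ~ - M_D M_M^{-1} M_D^T (rank 2)
m_nu = -M_D @ np.linalg.inv(M_M) @ M_D.T
print(np.linalg.svd(m_nu, compute_uv=False) * 1e9, "eV")
\end{verbatim}
The rank-two structure of $M_D$ forces the smallest singular value to vanish,
in line with the exactly massless lightest neutrino noted above.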
\subsection{Constraints}
Here we list the considered zero-temperature constraints on the masses of the scalar from the LHC and on the mixing between active and sterile neutrinos. Limits from Early Universe cosmology on the sterile neutrino masses will also be discussed.
\paragraph*{Sterile neutrinos:}
Active-sterile neutrino mixing is constrained from precision measurements of the PMNS matrix, cf.\ Ref.~\cite{Antusch:2014woa}. Our use of the Casas-Ibarra parametrisation and the considered masses $M_N^0\leq 100$~GeV render the resulting mixing parameters small compared to current limits.
For $M_N^0 < 1$~GeV, the decays of $N$ during the recombination period release entropy into the thermal bath and impact Big Bang Nucleosynthesis, which in general places strong limits on the mixing and mass parameters and requires in particular $M_N^0 \geq {\cal O}(0.1)$~GeV \cite{Canetti:2012kh}.
On the other hand, it has been shown that decaying heavy neutrinos with masses ${\cal O}(30)$~MeV \cite{Gelmini:2019deq} and sterile neutrinos that interact with additional scalars can alleviate the Hubble tension \cite{Fernandez-Martinez:2021ypo}.
We will limit our discussion to $M_N^0 \geq 0.1$~GeV in the following.
\paragraph*{Scalar bosons:}
Additional scalar degrees of freedom that decay into pairs of gauge bosons have been searched for at the LHC, cf.\ e.g.\ the CMS report in Ref.~\cite{Roy:2021ooe}.
Non-observation restricts these particles to have masses above a few TeV with current data.
On the other hand, the recent LHC data includes excesses in the four-lepton invariant mass spectra that hint at additional resonances around 700 GeV \cite{Cea:2018tmm,Richard:2020jfd}. Even more convincing signals have been reported for some time now in non-resonant multi-lepton channels \cite{vonBuddenbrock:2019ajh} and recently in diphoton channels with associated production \cite{Crivellin:2021ubm}, which point at scalar bosons with masses around the electroweak scale.
We therefore conclude that scalars with masses around the TeV scale are well motivated.
The measurement of the SM Higgs boson at 125 GeV limits its possible mixing with other scalar degrees of freedom. For the example of a single additional scalar resonance, this mixing can be constrained via precision measurements of the Higgs boson, and also with direct searches. Current constraints limit the sine of the mixing angle to ${\cal O}(0.1)$ \cite{Robens:2021rkl}.
The mixing angle $\theta$ and $\lambda_{SH}$ are related via
\begin{equation}
\sin \theta = \lambda_{SH} \frac{v_{EW} v_S^0}{M_S^2 - M_h^2}
\end{equation}
where $M_h = 125$~GeV is the mass of the observed Higgs boson.
If we assume that $M_h \ll M_S \simeq \sqrt{2 \lambda_S} v_S^0$, the limit on scalar mixing thus constrains
\begin{equation}
\lambda_{SH} \leq 2 \times 0.1 \lambda_S\frac{v_S^0}{v_{EW}} \,,
\label{eq:mixing}
\end{equation}
which implies that for $v_S^0 > {\cal O}(10) \times v_{EW}$ the interaction between the Higgs fields and $S$ can be strong, without affecting experimental constraints on the scalar-Higgs mixing.
As we shall see below, we will consider $v_S^0 \geq 10^6$~GeV, such that this limit can be met even if $\lambda_S \ll 1$.
\subsection{Finite Temperature effects}
In the early Universe both the scalar $S$ and the Higgs are in the symmetric phase, therefore, if $\mu_i^2 >0$, none of the particles have explicit mass terms. As discussed above, the scalars $S$ and $H$ can develop non-zero vacuum expectation values, which happens at a specific time in the early Universe, i.e.\ at $T_{EW} \simeq 140$~GeV for the Higgs boson, corresponding to the electroweak phase transition and sphaleron freezeout.
The $S$ symmetry breaking occurs at the temperature $T_S$.
For concreteness, we assume that this temperature is identical to the vev of $S$, i.e.\ $T_S = v_S^0$.
We implemented the time-dependent vev for $S$ through a numerical approximation of the Heaviside-theta function:
\begin{equation}
v_S(z) = v_S^0 \cdot \frac{1}{\mathrm{e}^{- 2 k (z- T_{EW}/v_S^0)} +1} \,,
\label{eq:vs}
\end{equation}
with $k =10^5$. Furthermore, $z = T_{EW}/T$ is used as the ``time'' variable.
We remark that we do not implement a similar time-dependence for $v_{EW}$ as we only consider temperatures above $T_{EW}$, where $v_{EW}=0$.
We notice that for $z<1$ or, equivalently, $T> T_{EW}$, the Higgs field remains in the unbroken phase, so that there is no mixing between the two scalars. Any mixing induced after electroweak symmetry breaking does not affect ARS leptogenesis.
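For reference, a minimal Python transcription of \cref{eq:vs} reads as follows (a sketch only; the function and variable names are ours):
\begin{verbatim}
import numpy as np

T_EW = 140.0    # sphaleron freeze-out temperature [GeV]
K_STEP = 1e5    # steepness parameter k of the smoothed step

def v_S(z, v_S0):
    # Smoothed Heaviside step: the S vev turns on at
    # z_S = T_EW / v_S0, i.e. at the temperature T = v_S0.
    return v_S0 / (np.exp(-2.0 * K_STEP * (z - T_EW / v_S0)) + 1.0)
\end{verbatim}
With $k=10^5$ the step turns on over a width $\Delta z \sim 10^{-5}$, which is fast compared to all other timescales considered below.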
In general particles receive a thermal mass from their interactions with the thermal bath.
In particular, the scalar $S$ can be thermalised via its interactions with the Higgs field for sufficiently large $\lambda_{SH}$; its thermal mass at one loop is given by
\begin{equation}
M_S(T)^2 = 2 \lambda_S (v_S(T))^2 +\frac{1}{4}\lambda_S T^2 + \frac{1}{6} \lambda_{SH} T^2\,,
\label{eq:scalarmass}
\end{equation}
with $v_S(T)$ defined through \cref{eq:vs}. The first term corresponds to the zero-temperature mass $M_S(T=0) = M_S^0$ in the limit where $\lambda_{SH}$ is negligible. In the following we approximate it as
\begin{equation}
M_S^0 = \sqrt{2\lambda_S} v_S^0\,.
\label{eq:Sbaremass}
\end{equation}
This is an excellent approximation for $\lambda_S \gg {v_{EW} \over v_S^0} \lambda_{SH}$, and we require $v_S^0 \geq 10^6$~GeV and $\lambda_{SH} \geq 10^{-4}$ as discussed below.
When the scalar $S$ is in equilibrium with the thermal bath the sterile neutrinos can also obtain a thermal mass from their Yukawa interactions with $S$ \cite{Khoze:2013oga}:
\begin{equation}
(M_N^2(T))_{ii} = (Y \cdot Y)_{ii} v_S(T)^2 + \frac{2}{3}\frac{1}{8} (Y \cdot Y)_{ii} T^2
\label{eq:Nmass}
\end{equation}
for $i = 2, 3$, with $v_S(T)$ defined through \cref{eq:vs}.
We remark that these thermal masses do not affect the Casas-Ibarra parametrisation, which is defined at zero temperature.
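As an illustration, the thermal masses in \cref{eq:scalarmass} and \cref{eq:Nmass} translate directly into code (a sketch, using the function \texttt{v\_S} above and writing $(Y\cdot Y)_{ii} = Y_{ii}^2$ for the diagonal Yukawa matrix):
\begin{verbatim}
def M_S_sq(T, z, lam_S, lam_SH, v_S0):
    # Scalar thermal mass squared, Eq. (eq:scalarmass)
    return (2.0 * lam_S * v_S(z, v_S0)**2
            + lam_S * T**2 / 4.0 + lam_SH * T**2 / 6.0)

def M_N_sq(T, z, Y_ii, v_S0):
    # Sterile-neutrino thermal mass squared (i = 2, 3), Eq. (eq:Nmass)
    return Y_ii**2 * v_S(z, v_S0)**2 + (2.0/3.0) * Y_ii**2 * T**2 / 8.0
\end{verbatim}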
\subsection{Leptogenesis}
\begin{figure}
\centering
\begin{subfigure}{0.3\linewidth}
\includegraphics[width=.8\linewidth]{finalplots/SNN.pdf}
{\bf a}
\end{subfigure}
~
\begin{subfigure}{0.3\linewidth}
\includegraphics[width=1\linewidth]{finalplots/NNHH.pdf}
{\bf b}
\end{subfigure}
~
\begin{subfigure}{0.3\linewidth}
\includegraphics[width=1\linewidth]{finalplots/NNHH_SSB.pdf}
{\bf c}
\end{subfigure}
\caption{Processes equilibrating $N$ through $S$.}
\label{fig:SNNprocesses}
\end{figure}
Leptogenesis relies on the existence of processes that fulfil the three Sakharov conditions: they must be out-of-equilibrium, they have to violate CP and they have to violate lepton number. The Sphaleron interactions then translate the lepton asymmetry in the active neutrino sector into a baryon asymmetry.
In models with 3 active neutrinos and 2 sterile neutrinos, a lepton asymmetry can be produced via oscillations between the active and the sterile neutrino sectors \cite{Asaka:2011wq}, where the dominant interaction is given by the Higgs boson-mediated process $N t \rightarrow L t$, with $L$ the lepton doublets, and $t$ the top quark.
The dynamics of the active leptons and right-handed neutrinos are determined via the kinetic equations, which describe the evolution of sterile neutrinos of each helicity $\rho_N$ and $\rho_{\bar{N}}$ as well as the evolution of the SM leptons.
For convenience we consider the chemical potential $\mu_\alpha$ with lepton flavor $\alpha =e,\mu,\tau$, instead of the number densities for each particle and anti-particle species.
The densities and chemical potentials in general depend on the specific momentum mode $x = k/T$, which makes solving the kinetic equations rigorously extremely difficult.
Fortunately, under the assumption that the sterile neutrino densities are proportional to the equilibrium density, i.e.~$\rho_N = R_N \cdot \rho_{eq}$, the kinetic equations
can be simplified by taking the thermal average, corresponding to $k= 2 T$, without significantly affecting the numerical precision \cite{Asaka:2011wq}. Note that, like the full kinetic equations, this approximation also conserves the total lepton number, which we explicitly checked.
With these approximations the kinetic equations, in terms of $x = k/T$ and $z = T_{EW}/T$, can be written as \cite{Asaka:2011wq,Shuve:2014zua}:
\begin{align}
\frac{d R_N}{d z} \frac{T_{EW}^2}{M_0 z} = & - i [\langle H_N^0 \rangle + \langle V_N \rangle, R_N] -\frac{3}{2} \langle \gamma_N^d \rangle \{F^\dagger.F, R_N -1\} + 2 \langle \gamma_N^d \rangle F^\dagger.(A-1).F \nonumber \\
& -\frac{\langle \gamma_N^d \rangle}{2} \{F^\dagger.(A^{-1}-1).F, R_N\} +\Gamma_S \\
\frac{d\mu_{\alpha}}{d z} \frac{T_{EW}^2}{M_0 z} = & -\gamma_\nu^d(T) [F.F^\dagger]_{\alpha \alpha} \tanh(\mu_\alpha) \nonumber \\ \nonumber
& + \frac{\gamma_\nu^d(T)}{4} \left( \left( 1 + \frac{2}{\cosh(\mu_\alpha) }\right) \left[ F.R_N. F^\dagger - F^*. R_{\bar{N}} .F^T \right]_{\alpha \alpha } \right. \\
& \left. - \tanh(\mu_\alpha) \left[ F.R_N.F^\dagger - F^*. R_{\bar{N}} . F^T \right]_{\alpha \alpha } \right)
\label{eq:kineticeq}
\end{align}
with $z=T_{EW}/T$ related to time through the Hubble rate, $H = \frac{1}{2t} = \frac{T^2}{M_0}$, with $M_0 = 7.12 \times 10^{17}~\text{GeV}$. Indeed, $T=T_{EW}/z$ gives $t = M_0 z^2/(2T_{EW}^2)$, so the time derivative is related to the $z$ derivative as $\frac{\partial }{\partial t} = \frac{T_{EW}^2}{M_0 z} \frac{\partial}{\partial z}$. Definitions of $\gamma_N^d$ and $\gamma_\nu^d$, coming from the $Nt \rightarrow Lt$ interactions, can be found in Ref.~\cite{Asaka:2011wq}. Note that $\langle \ \rangle$ denotes thermal averaging; for all terms $\sim 1/k$ this corresponds to $k \rightarrow 2T$ in the Maxwell-Boltzmann approximation.
By taking the complex conjugate of the kinetic equation of $R_N$ the kinetic equation of $R_{\bar{N}}$ can be obtained straightforwardly.
The kinetic equations used in this paper, like the Boltzmann equations, are valid for relativistic systems close to equilibrium \cite{Lindner:2007am}. A full calculation requires the use of so-called Kadanoff-Baym equations, however, it was shown that in the context of thermal leptogenesis the Boltzmann equations are actually able to predict the lepton asymmetry relatively well, see e.g.~Ref.~\cite{Anisimov:2010dk}. Considering the uncertainty in the predicted BAU due to the other simplifications we have imposed on the kinetic equations, we deem it sufficient to use the kinetic equations as stated above to estimate the BAU.
$H_N^0$ is the free Hamiltonian $\sqrt{k^2 +(M_N^0)_{ii}^2} \cdot \delta_{ij}$, and $V_N$ is the effective potential that contains the medium effects. As mentioned before, the tree-level mass $M_N^0$ is defined as
\begin{equation}
(M_N^0)_{ii} = Y_{ii} v_S(z) \,,
\label{eq:Nbaremass}
\end{equation}
with $v_S(z)$ defined in \cref{eq:vs}.
We remark that we are using Maxwell-Boltzmann statistics throughout, unless stated otherwise, thus $\rho_{eq} = \mathrm{e}^{-x}$.
The effective potential $V_N$ of the sterile neutrinos, which describes their interaction with the plasma, is given by
\begin{equation}
V_N = \frac{N_D T^2}{16 k} F^\dagger F + \frac{(M_N(T))^2}{2 k} = \frac{N_D T^2}{16 k} F^\dagger F + \frac{2}{3}\frac{T^2}{16 k} Y\cdot Y \,,
\end{equation}
with $k=2T$ in the thermal averaged approximation. Whereas the first term comes from interactions of the sterile neutrino with the SM bath and is included within the $\nu$MSM formalism \cite{Asaka:2011wq}, the second term is due to interactions with the scalar.
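To make the structure of \cref{eq:kineticeq} explicit, a schematic Python implementation of the right-hand side of the $R_N$ equation could look as follows (a sketch only: all quantities are thermally averaged matrices, $A$ encodes the flavoured chemical potentials, and the returned value still has to be multiplied by $M_0 z/T_{EW}^2$ to yield $\mathrm{d}R_N/\mathrm{d}z$):
\begin{verbatim}
import numpy as np

def anticomm(X, Y):
    return X @ Y + Y @ X

def rhs_RN(R_N, H_eff, F, gamma_N, A, Gamma_S):
    # H_eff = <H_N^0> + <V_N>; F: active-sterile Yukawa matrix;
    # gamma_N = <gamma_N^d>; Gamma_S: S -> N N source term
    # (assumed diagonal in sterile flavour space).
    one = np.eye(R_N.shape[0])
    FdF = F.conj().T @ F
    drift = -1j * (H_eff @ R_N - R_N @ H_eff)
    relax = -1.5 * gamma_N * anticomm(FdF, R_N - one)
    gain  = 2.0 * gamma_N * (F.conj().T @ (A - one) @ F)
    loss  = -0.5 * gamma_N * anticomm(
        F.conj().T @ (np.linalg.inv(A) - one) @ F, R_N)
    return drift + relax + gain + loss + Gamma_S
\end{verbatim}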
The interaction terms in \cref{eq:lagrangianSNN} and \cref{eq:scalarpotential} introduce processes that connect the sterile neutrinos with the thermal bath, as shown in \cref{fig:SNNprocesses}. These are the scalar decay (and inverse decay) process {\bf (a)}, $t$-channel $N$-scalar boson scattering {\bf (b)}, and $s$-channel $N$-Higgs boson scattering {\bf (c)}.
We notice that the process {\bf (c)} occurs only after $S$ symmetry breaking and is proportional to the product $(Y \lambda_{SH})^2$ (and is further suppressed by a factor $(T/m_S)^4$ for $T< m_S$), while the process {\bf (b)} is proportional to $Y^4$. The decay process {\bf (a)} on the other hand is proportional to $Y^2$, which makes it the dominant process for $Y,\lambda_{SH} \ll 1$, such that we neglect the other terms in the following.
We remark that the $U(1)_{B-L}$ gauge boson brings about further interactions between the sterile neutrinos and the SM fermions. If the gauge boson is massless prior to $S$ symmetry breaking, $N$ will be in thermal equilibrium at early times. In the following, we shall assume that the gauge boson has interaction rates that are sufficiently suppressed, for instance through a combination of tiny couplings or large gauge boson masses, such that $R_N=0$ at times that are early compared to the timescale at which the ARS leptogenesis mechanism is efficient.
The process {\bf (a)} adds a new term to the kinetic equations, which corresponds to sterile neutrino production from the decays of the scalar $S$. While $S$ is thermalised with the SM particles, it can act as a source for $N$ and $\bar N$ production, via its decay \cite{Shaposhnikov:2006xi,Drewes:2015eoa}
\begin{equation}
\Gamma_S = \frac{Y\cdot Y}{16 \pi} \frac{1}{\rho^{eq}(x)} \frac{M_S(z)^2}{T_{EW}}\frac{z}{x^2} \int_{y_0}^\infty n_s(y) \mathrm{d} y\,,
\label{eq:scalardecay}
\end{equation}
with $y_0 = x+\frac{z^2}{4x}\frac{M_S^2}{T_{EW}^2}$ and $n_s(y) = \mathrm{e}^{-y}$, i.e.~also here the equilibrium density is approximated by the Maxwell-Boltzmann distribution.
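Since $n_s(y)=\mathrm{e}^{-y}$, the momentum integral can be performed in closed form, $\int_{y_0}^\infty \mathrm{e}^{-y}\,\mathrm{d}y = \mathrm{e}^{-y_0}$, so that \cref{eq:scalardecay} becomes
\begin{equation*}
\Gamma_S = \frac{Y\cdot Y}{16 \pi} \frac{1}{\rho^{eq}(x)} \frac{M_S(z)^2}{T_{EW}}\frac{z}{x^2}\, \exp\!\left(-x-\frac{z^2}{4x}\frac{M_S^2}{T_{EW}^2}\right)\,,
\end{equation*}
which is the form that is convenient for numerical evaluation.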
We remark here that we consider only production of sterile neutrinos of momentum $k=2T$, which is fixed through the parameter $x$ in \cref{eq:scalardecay}.
Sterile neutrino distributions from scalar decay that are not Boltzmann-like should lead to very similar results, since this is the most relevant momentum mode for the ARS mechanism.
The source term in \cref{eq:scalardecay} depends on the coupling parameters, $Y,\lambda_S,\lambda_{SH}$ and $v_S^0$ through the thermal mass $M_S(T)$ (or $M_S(z)$).
Notice that the same process contributes to $\rho_{\bar{N}}$, which is accounted for with a factor $1/2$ compared to the decay rate stated in Ref.~\cite{Khoze:2013oga}. We remark that this term also acts as a sink for the sterile-neutrino sector through inverse decays, $\bar{N}^c N \to S$. However, in the following we consider sterile neutrinos that are out of equilibrium, such that inverse decays can be neglected. If the scalar-sterile neutrino interaction were to equilibrate long before Sphaleron freeze-out, washout in the sterile-neutrino sector would remove any produced asymmetry.
\subsection{Time-scales}
It is useful to consider the time scales for understanding the dynamics of leptogenesis \cite{Shuve:2014zua}. Within ARS leptogenesis there are several important timescales, as we discuss below.
\paragraph*{Sphaleron freeze-out:} Possibly the most important time scale is set by the Sphaleron freeze-out temperature, $T \sim T_{EW}$. In terms of our time variable $z$ this temperature corresponds to $z=1$. In order to have efficient Baryon Asymmetry production from the lepton asymmetry in the active sector, the lepton asymmetry must be produced \textit{before} Sphaleron freeze-out. Any lepton asymmetry produced after Sphaleron freeze-out is irrelevant for the BAU. The total baryon asymmetry is given by
\begin{equation}
Y_{\Delta B} = -\frac{28}{79} (Y_{\Delta L_e} +Y_{\Delta L_\mu}+Y_{\Delta L_\tau})\,,
\end{equation}
where the $Y_{\Delta L_\alpha}$ correspond to the asymmetries in the lepton flavours $\alpha$.
\paragraph*{$S$ symmetry breaking:} At the temperature $T_S$ the scalar $S$ develops its vev $v_S^0$, and as discussed above we assume for simplicity that $T_S = v_S^0$. This implies that at the time $z_S = T_{EW}/T_S = T_{EW}/v_S^0$ the sterile neutrinos and the scalar receive bare masses as defined in \cref{eq:Nbaremass} and \cref{eq:Sbaremass}, respectively.
We remark that $S$ can remain thermalised until $z=1$ as is discussed below.
\paragraph*{Oscillations:} Another relevant timescale is related to the oscillations within the sterile neutrino sector, $t_{osc}$. Due to the small mass splitting between the two heavy sterile neutrinos, each sterile neutrino propagates at a slightly different speed through the plasma; this results in a phase shift between the two sterile neutrinos, which is crucial for developing the lepton asymmetry. This timescale, defined as the time it takes to build up an $\mathcal{O}(1)$ phase difference, is given by
\begin{equation}
1 = \int_0^{t_{osc}} \frac{\Delta M^2}{4T} \mathrm{d} t \,,
\label{eq:zosc}
\end{equation}
where we introduced the sterile neutrino thermal mass splitting:
\begin{equation}
\Delta M^2 = |(M_{N_1}(T))^2 - (M_{N_2}(T))^2|\,.
\label{eq:oscillationtime}
\end{equation}
We notice that the sterile neutrino mass splitting depends on the temperature, such that $z_{osc}$ has to be evaluated via \cref{eq:zosc} with the time-dependent splitting.
We remark that for $z \gg z_{osc}$ the oscillations become increasingly fast, and solving the full differential equations becomes computationally expensive. Following \cite{Shuve:2014zua} we solve the full calculations up to $z = N z_{osc}$, and for $z_{osc} < z \leq 1$ only the diagonal parts of the differential equation are solved.
This is done by setting all off-diagonal components of the right hand side of eq.~\eqref{eq:kineticeq} to zero at $z=z_{osc}$.
The factor $N=20$ is chosen such that the Baryon Asymmetry agrees within 0.5\% with the full calculation, as was explicitly checked for benchmark point A in \cref{tab:points}.
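For illustration, $z_{osc}$ can be obtained numerically from \cref{eq:zosc} after rewriting $\mathrm{d}t = (M_0 z/T_{EW}^2)\,\mathrm{d}z$ and $T = T_{EW}/z$ (a sketch only; \texttt{dM2} is the temperature-dependent splitting of \cref{eq:oscillationtime}, supplied as a function of $z$):
\begin{verbatim}
from scipy.integrate import quad
from scipy.optimize import brentq

M0 = 7.12e17   # GeV
T_EW = 140.0   # GeV

def phase(z_max, dM2):
    # Accumulated oscillation phase: integrand of Eq. (eq:zosc)
    # in the variable z, i.e. dM2(z) * M0 * z^2 / (4 T_EW^3).
    f = lambda z: dM2(z) * M0 * z**2 / (4.0 * T_EW**3)
    return quad(f, 0.0, z_max)[0]

def z_osc(dM2):
    # Root of phase(z) = 1; the bracket assumes z_osc < 1.
    return brentq(lambda z: phase(z, dM2) - 1.0, 1e-6, 1.0)
\end{verbatim}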
\section{Analysis and results}
\label{chapter3}
In the following we consider sterile neutrino masses below the electroweak scale, i.e. $M_N^0 \leq 100$~GeV.
Such masses are too small to allow for the standard thermal leptogenesis or resonant leptogenesis to produce the observed Baryon Asymmetry.
Instead, we will focus on the so-called ARS leptogenesis mechanism, where the asymmetry is produced through oscillations between the active and sterile neutrino sectors.
\subsection{Discussion of the parameter space}
Among the five model parameters, the limits in \cref{eq:mixing} allow for reasonably large mixing, e.g.\ $\lambda_{SH} \sim \lambda_S$ is allowed for $M_S \geq {\cal O}(1)$~TeV and $v_S^0\geq{\cal O}(10)$~TeV. We therefore consider $\lambda_S$ and $\lambda_{SH}$ to be free parameters.
The other three parameters, $\alpha$, $Y$, $v_S^0$, are subject to a number of constraints, following the considerations below.
\begin{figure}[!]
\centering
\begin{subfigure}{0.3\textwidth}
\subcaption*{$\alpha = 0.1$}
\includegraphics[width = \textwidth]{finalplots/paraA.pdf}
\end{subfigure}
~
\begin{subfigure}{0.3\textwidth}
\subcaption*{$\alpha = 10^{-3}$}
\includegraphics[width=\textwidth]{finalplots/paraB.pdf}
\end{subfigure}
\begin{subfigure}{0.3\textwidth}
\subcaption*{$\alpha = 10^{-8}$}
\includegraphics[width=\textwidth]{finalplots/paraC.pdf}
\end{subfigure}
~
\begin{subfigure}{0.3\textwidth}
\subcaption*{$\alpha = 10^{-8}$}
\includegraphics[width=\textwidth]{finalplots/paraD.pdf}
\end{subfigure}
\caption{Parameter space with limits from successful leptogenesis as discussed in the text, in the plane of $Y$ and $v_S^0$. Areas where the process {\bf (a)} ($S\rightarrow NN$) starts thermalising the sterile neutrinos are shown in blue, for $\lambda_S = 10^{-1}, 10^{-3}, 10^{-5}$.
Areas where the active-sterile oscillations occur after Sphaleron freeze-out are shown in pink.
The green area denotes where $z_S > z_{osc}$.
The black hashed corner indicates sterile neutrino masses $M_N \geq 100$~GeV, and the dashed line corresponds to $M_N = 0.1$~GeV.
Plotted are the benchmark points A, C and D, cf.\ \cref{tab:points}. For the lower right panel, $M_S^0=10$~TeV was fixed.
Throughout, $\lambda_{SH} = 10^{-3}$ is fixed.}
\label{fig:parameters}
\end{figure}
\paragraph*{Dominant scalar decay process:}
We remind ourselves that we assume the dominant $N$ interaction with the thermal bath to be given by process {\bf (a)}, $S\to \bar N N$. This process is only kinematically allowed if the induced thermal mass of the scalar is larger than the thermal mass of the sterile neutrinos, and in particular \cref{eq:scalardecay} is only correct if $M_N\ll M_S$.
Therefore, our kinetic equations are valid if and only if $\sqrt{\lambda_S} \gg Y$.
On the other hand, when the process $S \rightarrow N \bar{N}$ is kinematically forbidden one would have to consider the processes {\bf (b)} and {\bf (c)} of $N$-scalar scattering instead, cf.~\cref{fig:SNNprocesses}, which is beyond the scope of our discussion.
\paragraph*{Out-of-equilibrium $N$:}
In \cref{eq:scalardecay} we neglected the inverse decays, which implies that the sterile neutrinos must have small number densities and thus be out of equilibrium. Explicitly, we chose the condition $\rho_N /\rho_{eq}<0.15$ at $z=1$.
This choice is conservative: our numerical estimates show that, for some parameter choices, inverse decays are numerically negligible up to $R_N \sim 0.5$.
The two considerations above give conditions on both $v_S^0$ and $Y$, as functions of $\lambda_S$. These constraints are contained in the blue areas in \cref{fig:parameters}, for different values of $\lambda_S$ and fixed $\lambda_{SH}=10^{-3}$.
\paragraph*{Early oscillations:}
As discussed above, active-sterile oscillations need to happen before Sphaleron freeze-out, such that the phase difference between the sterile neutrinos can create a Lepton asymmetry in the active sector that can be translated into a baryon asymmetry. Since Sphalerons freeze-out at $z=1$ the oscillations need to produce an order one phase shift before this time, which requires for the oscillation time: $z_{osc}<1$.
For different values of the mass splitting $\alpha$, this condition constrains $Y$ and $v_S^0$ through the definition of the thermal mass in \cref{eq:Nmass}. The regions where oscillations are too slow are denoted by the pink areas in \cref{fig:parameters}.
\paragraph*{Relativistic $N$:}
Sterile neutrinos must remain relativistic up to $z=1$, such that the two helicity states of the sterile neutrino remain distinct.
Moreover, $N$ being relativistic also suppresses the decays $N \rightarrow L H$ compared to the $2 \rightarrow 2$ interactions, which justifies neglecting this decay throughout.
These considerations limit the sterile neutrino masses to $M_N^0 \leq T_{EW}$. This implies that the black hashed area in \cref{fig:parameters} is unphysical.
\paragraph*{Thermalised scalar:}
Our kinetic equations, as well as the decay rate into sterile neutrinos, make the implicit assumption that $S$ is in thermal equilibrium with the thermal bath for $v_S^0 \geq T \geq T_{EW}$. For these temperatures the dominant interaction between $S$ and the SM is given by the $SSH^\dagger H$ term in \cref{eq:scalarpotential}.
We compute the interaction rate for the process $H^\dagger H \leftrightarrow SS$ as:
\begin{equation}
\Gamma = \sigma n(T)\,,
\end{equation}
where $n \propto T^3$ is the density of scalar bosons in the thermal plasma and $\sigma\propto \lambda_{SH}^2/T^2$ is the thermal cross section for this process. We evaluate the thermal cross section with the simplifying assumptions of massless Higgs bosons and all external scalars having energies $E=2T$. With this, and neglecting the finite mass $M_S$,\footnote{The interaction rate drops quickly for $T\leq M_S$ due to phase space suppression. This is accounted for in the definition of the $S$ decay rate into sterile neutrinos, \cref{eq:scalardecay}.} the reaction rate exceeds the Hubble rate under the condition
\begin{equation}
\lambda_{SH} > 2.4\cdot 10^{-7} \sqrt{\frac{T}{\text{GeV}}}\,.
\label{eq:lambdaSHcondition}
\end{equation}
Since we know that the relevant dynamics require $T$ not to be too much larger than $T_{osc}$, let us consider $T\leq 10^{4} \cdot T_{EW}$.
The assumption that $S$ is thermalised for $z \geq 10^{-4}$ thus yields the condition $\lambda_{SH} \geq 3 \cdot 10^{-4}$; indeed, inserting $T = 10^4\, T_{EW} \simeq 1.4\times 10^6$~GeV into \cref{eq:lambdaSHcondition} gives $\lambda_{SH} > 2.4\cdot 10^{-7} \sqrt{1.4\times 10^6} \simeq 2.8\cdot 10^{-4}$.
In the following we shall always consider values for $\lambda_{SH}$, such that the condition in \cref{eq:lambdaSHcondition} is met.
\paragraph*{Time of scalar symmetry breaking:}
We have to consider the ordering of the two times $z_S$ and $z_{osc}$.
The parameter choice $z_S > z_{osc}$ indicates that $S$ symmetry breaking occurs relatively late, and that the sterile neutrino dynamics are dominated by their thermal mass rather than a fixed mass as is the case in the $\nu$MSM. This regime is shown by the green areas in the upper panels of \cref{fig:parameters}.
The parameter choice $z_S < z_{osc}$ is expected to be dynamically closer to the $\nu$MSM.
\subsection{Successful leptogenesis}
\begin{table}[]
\centering
\begin{tabular}{c|c|c|c|c|c|c}
Point & $\alpha$ & $v_S^0$ [GeV] & $Y$ & $\lambda_S$ & $Y_{\Delta B}$ & remarks \\ \hline
A & $10^{-8}$ & $10^{7.5}$ & $10^{-6.5}$ & $10^{-2}$ & $5.04\times 10^{-11}$& equivalent to $\nu$MSM\\ \hline
B & $10^{-1}$ & $10^{3.5}$ & $10^{-6}$ & $10^{-5}$ & $\sim 0 $& within green area \\ \hline
C & $10^{-8}$ & $10^{7}$ & $10^{-6}$ & $10^{-2}$ & $4.96 \times 10^{-11}$ & relevant production of N\\ \hline
D & $10^{-8}$ & $10^{6.5}$ & $10^{-5.5}$ & $10^{-2}$ & $ 1.46 \times 10^{-11}$ & large production of N \\ \hline
E & $10^{-8}$ & $10^{8}$ & $10^{-6.5}$ & $10^{-9}$ & $ 2.5 \times 10^{-10}$ & ``enhancement''\\ \hline
\end{tabular}
\caption{
Considered parameter space points A--E. The parameters are the relative mass splitting $\alpha$, the zero-temperature vev of the scalar singlet $v_S^0$, the sterile neutrino Yukawa coupling $Y$, and the scalar singlet self coupling $\lambda_S$. Given is the produced Baryon Asymmetry of the Universe $Y_{\Delta B}$ for each point, where the scalar-Higgs coupling $\lambda_{SH} = 10^{-3}$ has been fixed.
}
\label{tab:points}
\end{table}
It is important to realise that the process {\bf (a)}, cf.~\cref{eq:scalardecay}, creates sterile neutrinos and sterile anti-neutrinos in equal numbers and therefore by itself does not produce any asymmetry in the sterile sector.
However, this process acts as a source for sterile neutrinos and thus increases $R_N$ and $R_{\bar{N}}$, which affects the lepton asymmetry production in the active sector via the kinetic equations.
For the discussion below we define a number of benchmark parameter points, listed in tab.~\ref{tab:points}, that correspond to different parameter space regions where successful leptogenesis is possible in principle.
\paragraph*{A: The limit of the $\nu$MSM:}
First, we consider the kinetic equations in \cref{eq:kineticeq} only, and use a fixed mass for the sterile neutrinos, $M_N^0 = 10$~GeV, $\alpha = 10^{-8}$, which corresponds to the case considered in Ref.~\cite{Asaka:2011wq}. Solving the kinetic equations for $R_N(z)$, $R_{\bar{N}}(z)$ and $\mu_\alpha(z)$ with the initial conditions $R_N=0$, $R_{\bar{N}}=0$, $\mu_\alpha=0$, we find the total baryon asymmetry $Y_{\Delta B} = -\frac{28}{79} Y_{\Delta L_{tot}} = 5.05 \times 10^{-11}$.
This value differs by a factor of about five from the result in \cite{Asaka:2011wq}, namely $Y_B = 2.73 \times 10^{-10}$, which we checked is due to the different set of neutrino parameters.
Next we consider the benchmark point A, as defined in \cref{tab:points}, with the choice of $T_S \gg T_{EW}$. The small Yukawa coupling $Y$ makes the thermal contributions to the sterile neutrino mass negligible for $z\sim z_{osc}$, compared to its vev-induced mass of 10 GeV.
This benchmark point corresponds to the limiting case where the scalar interactions are negligible, and indeed the resulting asymmetry is identical to the one evaluated above.
\paragraph*{B: Late $S$ symmetry breaking:}
The case where the scalar $S$ develops its vev after the onset of neutrino oscillations defines $z_S > z_{osc}$.
This, combined with the conditions discussed above, shows that only benchmark points with large relative mass splitting, small $v_S^0$, and relatively large Yukawa couplings can at least in principle generate an asymmetry, cf.\ the left panel of \cref{fig:parameters}. These parameters result in small sterile neutrino masses after the electroweak symmetry breaking, which in turn suppresses the magnitude of the Yukawa matrix $F$.
As analytic estimates from Ref.~\cite{Akhmedov:1998qx} lead us to expect, the resulting baryon asymmetry for benchmark point B is consistent with zero within computational uncertainties, cf.~\cref{tab:points}.
\paragraph*{C, D: Early $S$ symmetry breaking:}
Early breaking of the symmetry related to $S$ implies $z_S <z_{osc}$.
In this regime the sterile neutrino mass is generally dominated by the zero temperature mass, i.e.\ it is temperature independent in very good approximation, and the oscillations are controlled by $M_N^0 = Y v_S^0$ for $T<v_S^0$. The dynamics are very similar to that of the $\nu$MSM, except for the additional $N$ production via the process {\bf (a)}.
The boundary of the blue area in the four panels of \cref{fig:parameters} indicates where $N$ production from $S$ decays increases the abundance of sterile neutrinos in the thermal bath to the point where inverse decays become relevant.
Parameter space points that are in the white area and close to the boundary with the blue area are expected to have enhanced production of sterile neutrinos.
The benchmark point C with $M_N^0 = 10$~GeV is close to this boundary for $\lambda_S = 10^{-3}$ and we notice that the resulting asymmetry is slightly reduced, compared to the result from benchmark point A.
For comparison we also show the benchmark point D, which is inside the blue area and has a further reduced asymmetry compared to C.
Notice that, strictly speaking, the point D violates our assumption that the sterile neutrino densities are negligible. Estimates of the predicted BAU that include inverse decays show that for moderate sterile neutrino production the suppression of the lepton asymmetry production is actually weaker, as a consequence of the reduced sterile neutrino abundance once inverse decays are included. However, to fully understand the dynamics in the blue region more precise calculations are required, ideally including the momentum dependence or additional production channels, which is beyond the scope of this work.
\subsection{The effect of enhanced $N$ production}
We noticed above that leptogenesis can be successful only in the white regions of the parameter space, and that quantitative differences to the $\nu$MSM are to be expected only when $N$ production is not negligible; on the other hand, we expect that too large an $N$ production will suppress the asymmetry production. Therefore we inspect more closely the parameter space points at the boundary between the white and the blue areas in \cref{fig:parameters}.
\begin{figure}
\centering
\begin{subfigure}{0.3\textwidth}
\includegraphics[width = \linewidth]{finalplots/BPA.pdf}
\subcaption*{A}
\end{subfigure}
~
\begin{subfigure}{0.3\textwidth}
\includegraphics[width = \linewidth]{finalplots/BPC.pdf}
\subcaption*{C}
\end{subfigure}
~
\begin{subfigure}{0.3\textwidth}
\includegraphics[width = \linewidth]{finalplots/BPD.pdf}
\subcaption*{D}
\end{subfigure}
\caption{Production of the sterile neutrino density $R_N$ as a function of the time parameter $z$ for the benchmark points A, C, D. The orange and blue lines denote the total $N$ production and the $N$ production via process {\bf (a)}, respectively. The green line denotes sterile neutrino production from $\nu$MSM dynamics. The vertical dashed lines indicate the time where $S$ decays are relevant; the dash-dotted lines correspond to $z=z_{osc} = 0.028$. }
\label{fig:RNprod}
\end{figure}
\paragraph*{Evolution of $R_N$:}
The evolution of the sterile neutrino density $R_N$ with $z$ for the benchmark points A, C and D, is shown in \cref{fig:RNprod}, wherein the blue line denotes production only via $S$ decays while the orange line includes the complete kinetic equations.
(Notice that the evolution of $R_{\bar N}$ is almost identical, apart from phase differences and from the relatively tiny difference that constitutes the asymmetry.)
The figure shows that for the benchmark points A and C the sterile neutrino density $R_N$ remains below the equilibration limit of 0.15 at $z=1$. The point D, however, reaches equilibration for $z\simeq 10^{-3}$, which renders its result unphysical as the inverse decays have been neglected.
We observe that the main production of sterile neutrinos through scalar decay occurs for $T \sim \mathcal{O}(0.1) M_S(z)$; this region is denoted by the dashed grey lines in the plots. For comparison, the oscillation timescale is also shown in the plots as the dash-dotted lines.
\begin{figure}
\centering
\begin{subfigure}{0.45 \linewidth}
\includegraphics[width= \linewidth]{finalplots/MNchange.png}
\end{subfigure}
\begin{subfigure}{0.45 \linewidth}
\includegraphics[width= \linewidth]{finalplots/lSchange.png}
\end{subfigure}
\caption{Baryon asymmetry production as a function of the $S$ vacuum expectation value $v_S^0$. {\it Left:} The three lines correspond to fixed sterile neutrino masses $M_N^0=1,10,50$~GeV. The $S$ self coupling has been fixed to $\lambda_S = 10^{-3}$.
{\it Right:} The three lines correspond to $S$ self couplings $\lambda_S = 10^{-5}, 10^{-3}, 10^{-1} $. The sterile-neutrino mass has been fixed to $M_N^0=10$~GeV.
For this figure $\lambda_{SH} = 10^{-3}$ has been fixed.}
\label{fig:scans}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}{0.7 \linewidth}
\includegraphics[width = \linewidth]{finalplots/parallel.pdf}
\end{subfigure}
\caption{Total baryon asymmetry production as a function of the sterile neutrino mass $M_N^0$. The three lines $L_1,L_2,L_3$ are chosen parallel to the blue area. The remaining parameters are fixed to $\lambda_{SH}=10^{-3}$, $\lambda_S= 10^{-2}$ and $\alpha = 10^{-8}$. See text for more details.}
\label{fig:Mndependence}
\end{figure}
\paragraph*{Varying the scalar vev:}
As discussed above, in the limit of large vev and early $S$ symmetry breaking, we reproduce the results of the $\nu$MSM.
Considering the effect of increased $N$ production, we keep the zero temperature neutrino mass $M_N^0$ fixed and vary $v_S^0$, which implies that $Y$ co-varies with the vev as $M_N^0/v_S^0$.
The left panel of \cref{fig:scans} shows three lines for fixed $M_N^0=1, 10, 50$~GeV, each with the self-coupling fixed to $\lambda_S=10^{-3}$.
The right panel shows three lines for fixed $\lambda_{S}=10^{-5},10^{-3},10^{-1}$, and $M_N^0=10$~GeV is fixed.
In both panels $\lambda_{SH}=10^{-3}$ is used.
Both panels show that the asymmetry is reduced for smaller $v_S^0$.
The figure shows clearly how the asymmetry converges towards a fixed value when the interaction rate drops below some critical threshold.
Conversely, the asymmetry drops with decreasing $v_S^0$ due to the increased washout from additional sterile neutrino production through $S$ decays, as $Y$ increases.
The onset of the asymmetry reduction depends on the value of $Y$, as shown in the left panel, as well as the scalar mass $M_S$, as shown in the right panel.
\paragraph*{Varying the sterile neutrino mass:}
Here we consider the effect of varying sterile neutrino mass on the total baryon asymmetry production for different benchmark points.
Therefore we consider pairs of parameters $(v_S^0,Y)$ that are on a line parallel to the blue boundary in \cref{fig:parameters}.
We parametrise this line as
\begin{equation}
\log(v_S^0) = 2 \log(Y) +L_i\,,
\label{eq:parallel}
\end{equation}
where we fix $L_i=20, 18.5, 17.5$, which is, respectively, far away, close to, and inside the blue area for $\lambda_{S} = 10^{-2}$.
We pick parameter points on these lines for 1~GeV~$\leq M_N^0\leq 100$~GeV, and we fix $\lambda_{SH} = 10^{-3}$ and $\alpha=10^{-8}$ for definiteness.
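Explicitly, assuming base-10 logarithms, each line corresponds to $v_S^0 = 10^{L_i}\, Y^2$, so that a given mass $M_N^0 = Y v_S^0 = 10^{L_i}\, Y^3$ fixes $Y = (M_N^0 \cdot 10^{-L_i})^{1/3}$ and thereby also $v_S^0$.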
The resulting baryon asymmetry for each mass is shown in \cref{fig:Mndependence}, where the lines $L_1,L_2,L_3$ are denoted by the blue, orange, and green line, respectively, and where we show the result for the $\nu$MSM with the red line for comparison.
The figure shows clearly how the distance from the blue boundary determines the amount of washout and thus reduces the asymmetry production.
The enhancement for $M_N^0\sim 60$~GeV can be explained as follows: increasing $M_N^0$ also increases the Yukawa coupling between the sterile and active sector (through the Casas-Ibarra parametrisation), which increases the oscillation speed and therefore the asymmetry production. However, at some point the Yukawa coupling becomes so large that the sterile neutrinos start to thermalise with the SM bath, and the resulting washout again reduces the produced asymmetry.
\subsection{Discussion}
\begin{figure}
\centering
\includegraphics{finalplots/mass-mass.pdf}
\caption{Scalar zero-temperature mass versus sterile neutrino zero-temperature mass. Under the assumption of a value for $v_S^0$, successful leptogenesis requires the two masses to lie above the corresponding colored line.}
\label{fig:mass-mass}
\end{figure}
\paragraph*{Implications of successful leptogenesis:}
We have seen that the dominant effect of additional scalars, in the parameter ranges discussed, is increased washout, such that successful leptogenesis imposes strong constraints on the possible values of Yukawa couplings $Y$, scalar self couplings $\lambda_S$, and the vev $v_S^0$.
Parameters that do not interfere with the leptogenesis mechanism are tiny Yukawa couplings with $Y\leq {\cal O}(10^{-7})$ and/or large vevs with $v_S^0 \geq {\cal O}(10^8)$~GeV, which corresponds to the decoupling limit.
Conversely, combinations of parameters where $Y$ and $v_S^0$ are both very small lead to very small $M_N^0$ and thus to oscillations that are too slow to generate an appreciable asymmetry before Sphaleron freeze out.
The domain with moderate vevs below the decoupling limit $10^6$~GeV~$\leq v_S^0 \leq 10^8$~GeV and Yukawa couplings $Y \geq 10^{-7}$ warrants further scrutiny.
In general also in this domain the generated asymmetry is reduced compared to the decoupling limit. For a fixed value of $v_S^0$ we can define the limiting value for $\lambda_S$ (or equivalently $M_S^0$)\footnote{We remark that the mass $M_S^0$ is obtained from diagonalising the scalar mass matrix, which includes an off-diagonal entry proportional to $\lambda_{SH} v_S^0 v_{EW}$. The condition that $S$ is thermalised, $\lambda_{SH} \geq {\cal O}(10^{-4})$ then gives a lower limit for $M_S^0$ for $\lambda_S < v_{EW}/v_S^0 \lambda_{SH}$.} where the generated asymmetry is half that of the asymmetry in the decoupling limit.
This allows us to define a minimum scalar mass for each sterile neutrino mass, for which leptogenesis is successful.
The resulting limits are shown in \cref{fig:mass-mass}, where the colored lines correspond to different values for $v_S^0$.
This figure can be interpreted as follows. If scalar particles and sterile neutrinos are discovered, and their masses are below one of the colored lines, the corresponding vev has to be larger, or leptogenesis is not successful. As an example, consider $M_S^0=270$~GeV as motivated by the LHC multi-lepton anomalies \cite{vonBuddenbrock:2016rmr,vonBuddenbrock:2017gvy}. Our findings imply that the corresponding $M_N^0$ has to be smaller than $1$~GeV or $12$~GeV if $v_S^0=10^6$~GeV or $v_S^0=10^7$~GeV, respectively. If $N$ with larger masses are discovered, this implies $v_S^0 \geq 10^8$~GeV, or that the BAU has to be generated in another way.
\paragraph*{Generalisation to multiple scalars:}
Here we considered the extension of the SM with sterile neutrinos and a single scalar field.
In scenarios where the SM is extended with sterile neutrinos and $n$ scalar singlet fields, the sterile neutrinos can be even more strongly connected to the thermal plasma. The zero-temperature masses of the sterile neutrinos and the sterile neutrino source terms are then given by, respectively,
\begin{equation}
M_N^0 = \sum_i Y_i v_{S_i}^0\,, \qquad \Gamma_{S_i} = \frac{Y_i\cdot Y_i}{16 \pi} \frac{1}{\rho^{eq}(x)} \frac{M_{S_i}(z)^2}{T_{EW}}\frac{z}{x^2} \int_{y_{0_i}}^\infty n_{s}(y) \mathrm{d} y\,,
\end{equation}
where $Y_i$ and $v_{S_i}^0$ are the Yukawa coupling and vev of the scalar $S_i$.
The $\Gamma_{S_i}$ are relevant for our discussion if and only if $S_i$ is thermalised and its mass $M_{S_i}^0$ is comparable to the temperature at the oscillation time, $T_{EW}/z_{osc}$.
This brings the additional degree of freedom to increase the zero-temperature mass of the sterile neutrinos without increasing the washout, if a dominant contribution stems from a non-thermalised or very heavy scalar. This is comparable to allowing for Majorana mass terms.
In general we expect that in these scenarios the resulting asymmetry will be reduced by additional washout.
\paragraph*{Enhanced asymmetry production:}
A limited enhancement of the produced BAU is found for extremely small values of $\lambda_S$, relatively small values of $Y$ and large values of $v_S^0$, i.e.\ inside the blue area in \cref{fig:parameters}.
The enhancement seems to occur when the timescales of scalar decays and sterile neutrino oscillations coincide. As an explicit example we discuss benchmark point E, see \cref{tab:points}, for which the BAU is enhanced by about $10\%$ compared to the decoupling limit: from $2.34 \times 10^{-10}$ in the decoupling limit to $2.5 \times 10^{-10}$ for benchmark point E.
This enhancement occurs for rather large $R_N$ production ($R_{N_2} \sim \mathcal{O}(1)$ at $z=1$). Thus, for a proper calculation of the BAU, inverse decay processes should be taken into account. As discussed before, this will reduce the predicted enhancement, according to our estimates the enhancement is in fact almost completely removed. We consider it unlikely that the enhancement will increase significantly in a full treatment when for example other momentum modes or inverse decays are taken into account properly.
Furthermore, we noticed that the sterile neutrino thermal mass (cf.~\cref{eq:Nmass}) increases the oscillations in the sterile sector, which in turn enhances the asymmetry production in the active one.
However, in regions of parameter space where this effect is relevant, it is overcompensated by the enhanced washout from scalar decays.
If these two effects could be separated, a significant enhancement of the asymmetry production would be possible.
One way of separating these two effects is to have the time of $S$ symmetry breaking after the onset of oscillations, $z_S>z_{osc}$.
Parameters that realise this are denoted by the green area in \cref{fig:parameters}. However, they all lead to thermalisation of $N$.
In general, the asymmetry production is enhanced when the sterile neutrinos are more degenerate in mass.
However, the asymmetry production can also be enhanced without strong mass degeneracy when three flavors of sterile neutrinos are considered, as discussed in e.g.~Ref.~\cite{Abada:2018oly}.
Another way that allows to separate zero-temperature sterile neutrino mass, finite temperature sterile neutrino oscillations, and the scalar decay into sterile neutrinos is given by a combination of thermalised and non-thermalised scalars, as discussed above. However, this goes beyond the scope of this work.
\paragraph*{Time of scalar symmetry breaking:}
For our numerical evaluation we have set the time scale $T_S$ at which the $S$ symmetry breaks and $v_S(T) \simeq v_S^0$ equal to the vev itself: $T_S = v_S^0$.
The time of symmetry breaking can be evaluated analytically if the field content of the theory is fixed, as is done for the case of the SM, for instance in Ref.~\cite{Dine:1992vs}.
From these arguments we expect that the time of symmetry breaking is proportional to
\begin{equation}
T_S \propto \frac{v_S^0}{\sqrt{\lambda_S}}
\end{equation}
while the proportionality factor involves ratios of masses of possible additional field content. It is worth pointing out that, in the case of the SM, the energy scale of the symmetry-breaking time satisfies $T_{EW} < v_{EW}$.
For our numerical evaluations we find that the exact time of symmetry breaking is irrelevant, as long as it occurs before the relevant time scales of leptogenesis.
In particular, symmetry breaking has to occur before the oscillations, which typically take place at $z_{osc} = {\cal O}(0.01)$, i.e.\ $T_{osc} = {\cal O}(100)\, T_{EW}$. Therefore the corresponding energy scale $T_S > 10^4$~GeV is sufficient not to introduce numerical effects on the asymmetry calculation.
\section{Conclusions}
Sterile neutrinos are well motivated by the light neutrino oscillations, and they have been shown to successfully explain the Baryon Asymmetry of the Universe (BAU) through so-called ARS leptogenesis.
Sterile neutrinos can be added in theories that include also other new fields, such as scalar bosons, which brings about the possibility of further interactions between the sterile neutrinos and the SM.
In this paper we considered an extension of the SM with two sterile neutrinos and one scalar singlet field in order to study the robustness of the ARS leptogenesis mechanism with respect to scalar extensions.
We took into account constraints from the light neutrino parameters and also discussed limits on the scalar sector from LHC searches.
We investigated the effect that the thermalised scalar has on the ARS leptogenesis mechanism.
We found that in our model the BAU of the $\nu$MSM is reproduced when the vev is at least as large as ${\cal O}(10^8)$~GeV and the Yukawa and scalar self couplings are at most of ${\cal O}(10^{-6})$, which we refer to as the decoupling limit.
In most of the remaining parameter space the thermalised scalar leads to enhanced sterile neutrino production at early times, resulting in a reduction of the predicted BAU compared to the decoupling limit.
A small enhancement of the BAU of ${\cal O}(10\% )$ is present for parameters close to the decoupling limit, i.e.\ $v_S^0\sim 10^8$~GeV and for scalar and heavy neutrino masses around and below the weak scale, respectively.
Our results are general for models with scalar singlets and with extended gauge sectors, provided that the additional field content does not thermalise the sterile neutrinos at any point of the Universe's history.
They can also be generalised to models with more than one scalar field, in which case the sterile neutrino zero-temperature mass and the scalar decay rate are sums over the scalar field content. In such models the zero-temperature sterile neutrino mass could be dominated by a scalar that is not thermalised, such that the Yukawa couplings in the sterile neutrino masses can be different from those in the decay rate of the thermalised scalar.
Our results can be useful when sterile neutrinos and scalar particles are discovered in the laboratory, such that their masses and the Yukawa coupling are known. In this case it is possible to infer whether or not the ARS mechanism is a valid possibility to create the BAU, or if another mechanism has to be invoked.
\bibliographystyle{unsrt}
\section{Introduction}
\vspace{-.4cm}
As the dimensions of an electronic device are reduced, the power consumption, and concomitant heat generation, increases.
Therefore, a detailed understanding of heat transport at the nanoscale
is critical for the future
development of stable high-density integrated circuits.
Fourier's law of heat conduction is an empirical relationship stating that the flow of heat is linearly related to an applied temperature gradient
via a geometry independent, but material dependent, thermal conductivity.
Although Fourier's law accurately describes heat transport in macroscopic samples, at the nanoscale heat is carried by quantum excitations
(e.g., electrons, phonons, etc.) which are generally strongly influenced by the microscopic details of a system.
For instance, violations of Fourier's law have been observed in graphene nanoribbons, where the system could be tuned between the ballistic phonon regime
and the diffusive regime by altering the edge state disorder \cite{bae2013ballistic}.
Violations in carbon nanotubes have also been observed \cite{PhysRevLett.101.075903}.
Investigations into the origin of Fourier's law generally focus on ballistic phonon heat transport.
However, the electronic heat current can dominate in a variety of systems (e.g., metals, conjugated molecule heterojunctions, etc).
Unlike phonons, only electrons in the vicinity of a contact's Fermi energy can flow, meaning that wave interference effects play an important role
in thermal conduction \cite{Bergfield2013demon,bergfield2015tunable}. In addition, the lattice (phonon) and electronic temperatures generally differ
for systems without strong electron-phonon coupling. In this article,
we investigate the onset of Fourier's law in the electronic temperature distribution where quantum effects cause the maximal deviations
from classical predictions.
Previously, Dubi and DiVentra showed that Fourier's law for the electronic temperature could be recovered from a quantum description via two mechanisms:
dephasing and disorder\cite{dubi2009fourier, dubi2009reconstructing}. Although valid for some model systems,
these mechanisms cannot provide a general framework to understand the emergence of Fourier's Law in quantum electron systems.
The principal shortcoming of these mechanisms, when applied to real nanostructures, is that
the magnitude of dephasing or disorder required to recover Fourier's relation
is so strong that the covalent bonding of the system would be disrupted,\cite{Bergfield2013demon} effectively disintegrating any real material.
In this work, we utilize a state-of-the-art nonequilibrium quantum description of heat transport to investigate the onset of Fourier's law in a
nanoscale device.
Using a non-invasive probe theory \cite{stafford2016local,stafford2017local} in which the spatial resolution of the temperature measurement is limited by
fundamental thermodynamic relationships rather than by the structure and composition of the probe, we find that Fourier's law emerges in the limit
where many quantum states contribute to the heat transport. That is, when the energy-level spacing of the quantum states of the system is small compared to
the coupling of the system to the source and drain reservoirs, so that
the density of states of the system becomes smooth.
Finally, we apply a thermal resistor network analysis to the simulated temperature profiles and
observe the emergence of a geometry-independent thermal conductivity.
\begin{figure*}[tb]
\centering
\subfloat[Contact type I - Classical]{\includegraphics[width=0.35\linewidth]{ClassicalType1.jpg}\label{fig:ClassicalType1}}%
\subfloat[Contact type I - Quantum]{\includegraphics[width=0.35\linewidth]{mu01_C1_gamma3-eps-converted-to.pdf}\label{fig:mu01_C1_gamma3}}%
\subfloat[Contact type I - Quantum]{\includegraphics[width=0.35\linewidth]{mu06_C1_gamma3-eps-converted-to.pdf}\label{fig:mu06_C1_gamma3}}%
\\
\vspace{-0.35cm}
\subfloat[Contact type II - Classical]{\includegraphics[width=0.35\linewidth]{ClassicalType2.jpg}\label{fig:ClassicalType2}}%
\subfloat[Contact type II - Quantum]{\includegraphics[width=0.35\linewidth]{mu01_C5_gamma3-eps-converted-to.pdf}\label{fig:mu01_C5_gamma3}}%
\subfloat[Contact type II - Quantum]{\includegraphics[width=0.35\linewidth]{mu06_C5_gamma3-eps-converted-to.pdf}\label{fig:mu06_C5_gamma3}}%
\vspace{-0.2cm}
\caption{Classical (panels a,d) and quantum temperature profiles of a graphene flake under thermal bias for two contact geometries.
The hot electrode (red) is held at 110K and the cold electrode (dark blue) is held at 90K, where red and blue squares indicate the carbon atoms covalently
bonded to the hot and cold electrodes, respectively.
In contact type I (upper panels), only the left and right edges of the flake couple to the electrodes, while in contact type II, the coupling to the
electrodes wraps around three edges each, leading to three times stronger coupling to the electrodes.
The quantum calculations are at Fermi energies $\mu_0=-0.1$eV (b,e) and $-0.6$eV (c,f), relative to the Dirac point.
The quantum temperature distributions for contact type I exhibit strong oscillations that depend sensitively on $\mu_0$, while for contact type II, the
temperature distributions resemble pixelated versions of the classical distribution.
}
\label{fig:temperature_profile}%
\end{figure*}
\section{Theory of local temperature measurement}
Fourier's law for the heat current density $J_q = -\kappa \nabla T$
establishes a local linear relationship between an applied temperature gradient $\nabla T$
and the heat flow, and is generally accurate for macroscopic, dissipative systems.
In quantum systems, the local temperature $T({\bf x})$ must be thought of as the result of a local {\em measurement},
and can vary due to quantum interference effects,\cite{bergfield2015tunable,Bergfield2013demon} quantum chaos \cite{lepri1997heat},
disorder\cite{dubi2009reconstructing}, and dephasing\cite{dubi2009fourier} of the heat carriers in the sample.
The local temperature distribution of a nonequilibrium quantum system is defined by introducing a floating thermoelectric probe
\cite{Bergfield2013demon,Meair14,bergfield2015tunable,shastry2016temperature}.
The probe exchanges charge and heat with the system via a local coupling until it reaches equilibrium with
the system:
\begin{equation}
I_p^{(\nu)} =0, \; \nu=0,1,
\label{eq:def_probe}
\end{equation}
where $-eI^{(0)}_p$ and $I^{(1)}_p$ are the electric current and heat current, respectively, flowing into the probe.
The probe is then in local equilibrium with a quantum system which is itself out of equilibrium.
In the linear-response regime, for a thermal bias applied between electrodes 1 and 2, forming an open electric circuit, the heat current into electrode $\alpha$ is given by
\begin{equation}
I^{(1)}_\alpha = \sum_\beta \tilde{\kappa}_{\alpha\beta} (T_\beta - T_\alpha),
\label{eq:heat_current}
\end{equation}
where $\alpha$ and $\beta$ label one of the three electrodes (1, 2, or the probe). Solving this set of linear equations, we arrive at the local temperature
distribution \cite{Bergfield2013demon}
\begin{equation}
T_p(x,y)=\frac{\tilde{\kappa}_{p1}(x,y) T_1 + \tilde{\kappa}_{p2}(x,y) T_2 + \kappa_{p0} T_0}{
\tilde{\kappa}_{p1}(x,y) + \tilde{\kappa}_{p2}(x,y) + \kappa_{p0}}.
\label{eq:Tp}
\end{equation}
Here $\tilde{\kappa}_{p\beta}(x,y)$ is the
position-dependent
thermal conductance between electrode $\beta$ and the probe, and $\kappa_{p0}$ is the thermal
coupling of the probe to the ambient environment at temperature $T_0$.
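As an illustration, \cref{eq:Tp} is simply a conductance-weighted average and can be coded in a few lines (a sketch; the function and variable names are ours):
\begin{verbatim}
def probe_temperature(kp1, kp2, T1, T2, kp0=0.0, T0=300.0):
    # Floating-probe temperature, Eq. (Tp): a thermal-conductance-
    # weighted average of the electrode temperatures T1, T2 and
    # the ambient temperature T0.
    return (kp1 * T1 + kp2 * T2 + kp0 * T0) / (kp1 + kp2 + kp0)
\end{verbatim}
In the limit $\kappa_{p0}\to 0$ assumed below, $T_p$ interpolates between $T_1$ and $T_2$ according to the local ratio $\tilde\kappa_{p1}/\tilde\kappa_{p2}$.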
In the absence of an external magnetic field,
the
effective two-terminal thermal conductances
are given by \cite{Bergfield2013demon}
\begin{eqnarray}
\tilde{\kappa}_{\alpha\beta} &=& \frac{1}{T}\left[\Lfun{2}{\alpha\beta}
-\frac{\left[\Lfun{1}{\alpha\beta}\right]^2}{\tilde{\gcal L}_{\alpha\beta}^{(0)}} \right. \nonumber \\
& -& \left. {\gcal L}^{(0)} \!
\left(
\frac{\Lfun{1}{\alpha\gamma}\Lfun{1}{\alpha\beta}}{\Lfun{0}{\alpha\gamma}\Lfun{0}{\alpha\beta}}
+\frac{\Lfun{1}{\gamma\beta}\Lfun{1}{\alpha\beta}}{\Lfun{0}{\gamma\beta}\Lfun{0}{\alpha\beta}}
-\frac{\Lfun{1}{\alpha\gamma}\Lfun{1}{\gamma\beta}}{\Lfun{0}{\alpha\gamma}\Lfun{0}{\gamma\beta}}
\right)
\right],
\label{eq:kappatilde}
\end{eqnarray}
where $\Lfun{\nu}{\alpha\beta}$ is an Onsager linear
response
function
\cite{Onsager31},
\begin{equation}
\tilde{\gcal L}_{\alpha\beta}^{(0)}= \Lfun{0}{\alpha\beta}+
\frac{\Lfun{0}{\alpha\gamma}\Lfun{0}{\gamma\beta}}{\Lfun{0}{\alpha\gamma}+\Lfun{0}{\gamma\beta}},
\label{eq:three_term_L0}
\end{equation}
and
\begin{equation}
\frac{1}{{\gcal L}^{(0)}} = \frac{1}{\Lfun{0}{12}} + \frac{1}{\Lfun{0}{13}} + \frac{1}{\Lfun{0}{23}}.
\label{eq:L0_series}
\end{equation}
Following the methods of Refs.~\onlinecite{Sivan86,Bergfield09b,Bergfield10},
the linear-response coefficients may be calculated in the elastic cotunneling regime as
\begin{equation}
\Lfun{\nu}{\alpha\beta} (\mu) = \frac{1}{h} \int dE \left(E-\mu \right)^\nu \left(-\frac{\partial f}{\partial E}\right) T_{\alpha\beta}(E),
\label{eq:Lnu}
\end{equation}
where $f(E)$ is the equilibrium Fermi-Dirac distribution, and $T_{\alpha\beta}(E)$ is the transmission probability from contact $\beta$ to contact $\alpha$
for an electron of energy $E$, which may be found using the usual nonequilibrium Green's function (NEGF) methods.
The details of our computational methods may be found in the Supporting Information.
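For concreteness, once the transmission function $T_{\alpha\beta}(E)$ has been tabulated on an energy grid, \cref{eq:Lnu} reduces to a simple quadrature (a sketch only, with constants in eV units):
\begin{verbatim}
import numpy as np

KB = 8.617e-5         # Boltzmann constant [eV/K]
H_PLANCK = 4.136e-15  # Planck constant [eV s]

def onsager_L(nu, mu, T, E, T_ab):
    # Linear-response coefficient L^(nu)_{ab}(mu) of Eq. (Lnu).
    # E: energy grid [eV]; T_ab: transmission T_ab(E) on that grid.
    x = (E - mu) / (KB * T)
    minus_df_dE = 1.0 / (4.0 * KB * T * np.cosh(x / 2.0)**2)
    return np.trapz((E - mu)**nu * minus_df_dE * T_ab, E) / H_PLANCK
\end{verbatim}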
In the simulations discussed below, we consider an ideal broad-band probe with perfect spatial resolution
coupled weakly to the system.\cite{stafford2016local,stafford2017local}
Furthermore, we assume $\kappa_{p0} \ll \tilde{\kappa}_{p1}, \, \tilde{\kappa}_{p2}$, so that we can unambiguously determine the fundamental value of the
local temperature in the nonequilibrium system. Any actual scanning probe will not achieve this resolution; instead, a convolution between the intrinsic profile
and the probe's resolution will be measured. The advantage of considering a probe in this limit is that we can investigate the onset of Fourier's law without the complications introduced by the probe's apex wavefunction geometry.
\section{Results for $T_p(x,y)$ in graphene nanojunctions}
We investigate heat transport and temperature distributions in a
graphene flake coupled to two macroscopic metal electrodes under a thermal bias.
The electrodes are covalently bonded to the edges of the graphene flake.
See Supporting Information for details of the model.
\subsection{Emergence of Fourier's law}
The classical temperature distribution for a graphene flake with two different contact geometries
is shown in panels a and d of Fig.~\ref{fig:temperature_profile}. The behavior predicted by Fourier's law is clearly visible in the
characteristic linear temperature gradient across the sample from the hot to the cold electrode.
This behavior is to be contrasted with the temperature distributions calculated using quantum heat transport theory, shown in panels b, c, e, and f.
Figs.~\ref{fig:temperature_profile}b, c show the electron temperature distributions for two different values of the Fermi energy
($\mu_0=-0.1$~eV, $-0.6$~eV relative to the Dirac point) for contact type I, where the hot and cold electrodes are covalently
bonded to the right and left edges of the graphene flake at the sites indicated by the red and blue squares, respectively.
The temperature exhibits large quantum oscillations\cite{bergfield2015tunable} that depend sensitively on the Fermi energy $\mu_0$, obscuring any possible
resemblance to the classical temperature distribution shown in Fig.\ \ref{fig:temperature_profile}a. The electron temperature
distributions for contact type II are shown for the same two values of $\mu_0$ in Figs.~\ref{fig:temperature_profile}e, f. In this case, although there are
atomistic deviations from Fourier's law, nonetheless the resemblance to the classical distribution shown in Fig.\ \ref{fig:temperature_profile}d is
unmistakable, and there is not a strong dependence on $\mu_0$.
The different nature of thermal transport for contact types I and II can be understood by considering the density of states (DOS) $g(E)$ of the system,
shown in Fig.~\ref{fig:DOS}. For contact type I, $g(E)$ exhibits a sequence of well
defined peaks, corresponding to the energy eigenfunctions of the graphene flake broadened by coupling to the leads.
In contrast, contact type II, where the broadening is three times as large, has a smooth, almost featureless DOS for $E<1.2$eV.
A sharply-peaked DOS indicates that the system is in the {\em resonant-tunneling regime} where
thermal transport is controlled by the wavefunction of a single resonant state [or a few (nearly) degenerate states], while a smooth DOS indicates that
many quantum states contribute to thermal transport, so that quantum oscillations tend to average out.
We find that a necessary and sufficient condition to recover Fourier's law is that many (nondegenerate) quantum states contribute with comparable strength
to the thermal transport. When transport occurs in or near the resonant-tunneling regime, on the other hand,
there is no classical limit for the temperature distribution.
\begin{figure}[tb]
\centering
\includegraphics[width=3in]{TDOS-eps-converted-to.pdf}
\caption{The calculated density of states (DOS) $g(E)$ of a graphene flake junction for two different contact geometries, defined in
Fig.\ \ref{fig:temperature_profile}.
The DOS for contact type I exhibits a sequence of sharp peaks corresponding to the energies of individual energy eigenstates
[or manifolds of (nearly) degenerate eigenstates] of the flake, broadened by coupling to the electrodes. Contact type II, for which
the broadening is three times as large, exhibits a smooth, nearly featureless DOS for $E<1.2$eV.
}
\label{fig:DOS}
\end{figure}
\begin{figure}[htb]
\centering
\subfloat[Contact type I]{\includegraphics[width=\linewidth]{ContactType1_mu22-eps-converted-to.pdf}}
\\
\vspace{-.4cm}
\subfloat[Contact type II]{\includegraphics[width=\linewidth]{ContactType2_mu01-eps-converted-to.pdf}}
\caption{Top panel: The heat current density $\vec{J}_q$ for contact type I at $\mu_0 = -2.2$~eV calculated using classical and quantum transport theories, indicated with the red and blue arrows, respectively. Bottom panel: $\vec{J}_q$ for contact type II at $\mu_0 = -0.1$~eV
calculated using classical and quantum transport theories. As highlighted by the swirling blue arrows, the heat current profile of the junction shown in the top panel is highly non-classical, while the heat transport of the bottom panel's junction is well represented by a classical description.
}
\label{fig:heat_flux}
\end{figure}
We note that for nanostructures amenable to simulation (a few hundred atoms or less),
a very large coupling to the electrodes is necessary to push the system out of the resonant-tunneling regime, and we speculate that this may be the
reason why attempts to study the quantum to classical crossover in electron thermal transport via simulation have so far proven problematic.
Thermal transport experiments
are routinely conducted with much larger quantum systems, however, where this condition is well satisfied.
A direct test of Fourier's law involves not only the temperature distribution but also the heat current density $J_q$, which may be calculated using
NEGF methods (see Supporting Information).
The simulated heat flow patterns are shown in Fig.~\ref{fig:heat_flux} for both classical and quantum thermal transport in both contact geometries.
The quantum heat flow in contact type I bears little relation to the classical flow, but instead exhibits vortices and fine structure that is strongly
energy dependent, similar to the
local charge current structure \cite{solomon2010exploring}.
In contrast, the quantum heat flow in contact type II is nearly classical, except that it is concentrated along the C---C bonds, which serve as conducting
channels. The heat flow patterns shown in Fig.\ \ref{fig:heat_flux} confirm that the crossover to the classical thermal transport regime requires
many quantum states of the graphene flake to contribute comparably (smooth DOS).
\subsection{Thermal resistor network model}
\begin{figure*}[htb]%
\centering
\includegraphics[width=.45\linewidth]{TR_n3-eps-converted-to.pdf}\quad\quad
\includegraphics[width=.45\linewidth]{TR_n4-eps-converted-to.pdf}\\
\includegraphics[width=.45\linewidth]{TR_n5-eps-converted-to.pdf}\quad\quad
\includegraphics[width=.45\linewidth]{TR_n6-eps-converted-to.pdf}
\caption{Thermal resistance values as a function of Fermi energy $\mu_0$
for four different sized hexagonal graphene flake junctions with contact type II, where $N$ is the number of atoms in the flake.
$R_s$ is the sample thermal resistance and $R_1$ and $R_2$ are the contact thermal resistances, defined in Eqs.\ (\ref{eq:Rs})--(\ref{eq:R2}),
respectively.
The contact resistances are nearly universal in this transport regime: $R_1,\,R_2\approx R_0/N_c$, where $R_0$ is the thermal resistance quantum and
$N_c$ is the number of atoms bonded to each contact.
The sample thermal resistance $R_s$ is inversely correlated with the density of states per unit area times the sample length,
$g(\mu_0)L$, as expected based on semiclassical
Boltzmann transport theory.
}
\label{fig:R_vs_size}
\end{figure*}
The thermal conductivity $\kappa$ in Fourier's law is material dependent but independent of the sample dimensions.
In the regime of quantum transport \cite{Datta95}, linear response theory instead treats the thermal conductance (also traditionally denoted by the symbol
$\kappa$), which depends in detail on the dimensions and structure of the conductor.
In order to investigate the cross-over between these regimes, we develop a thermal circuit model and apply
it to the temperature profiles calculated using our theory.
The temperature probe acts as a third terminal in the thermoelectric circuit, and affects the thermal conductance between the hot and cold electrodes.
Starting from Eq.~(\ref{eq:heat_current}), the heat current flowing into electrode $1$ may be expressed as
\begin{equation}
I_1^{(1)}= \tilde{\tilde{\kappa}}_{12} (T_2-T_1),
\label{eq:def_Rth}
\end{equation}
where the thermal conductance between source and drain in the presence of the thermal probe is
\begin{equation}
\tilde{\tilde{\kappa}}_{12}=\tilde{\kappa}_{12} + \frac{\tilde{\kappa}_{p1} \tilde{\kappa}_{p2}}{\tilde{\kappa}_{p1}+\tilde{\kappa}_{p2}}.
\label{eq:kappa12}
\end{equation}
The thermal resistance of the junction may be written as
\begin{equation}
R_{\rm th} \equiv \tilde{\tilde{\kappa}}_{12}^{-1} = R_s + R_1 + R_2,
\label{eq:R_network}
\end{equation}
where $R_s$ is the ``intrinsic'' thermal resistance of the system, and
$R_1$ and $R_2$ are thermal contact resistances associated with the interfaces between the quantum system and electrodes 1 and 2,
respectively.
The individual resistances in the network are defined as follows:
\begin{eqnarray}
R_s & = & \frac{|T_{1s} - T_{2s}|}{|I_1^{(1)}|},
\label{eq:Rs}
\\
R_1 & = & \frac{|T_{1} - T_{1s}|}{|I_1^{(1)}|},
\label{eq:R1}
\\
R_2 & = & \frac{|T_{2s} - T_{2}|}{|I_1^{(1)}|},
\label{eq:R2}
\end{eqnarray}
where $T_{\alpha s}$ is the temperature averaged over the atoms bonded to electrode $\alpha$.
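The network resistances in Eqs.\ (\ref{eq:Rs})--(\ref{eq:R2}) are straightforward to extract from a simulated temperature profile. The following Python sketch illustrates the bookkeeping; all inputs (site temperatures, contact index sets, heat current) are placeholders rather than values from our calculations:
\begin{verbatim}
import numpy as np

def resistor_network(T1, T2, T_sites, c1, c2, I1):
    """Thermal resistor network of Eqs. (Rs)-(R2).
    T_sites: local temperatures of all atoms; c1, c2: indices of
    atoms bonded to electrodes 1 and 2; I1: heat current into
    electrode 1. Returns (R_s, R_1, R_2)."""
    T1s = T_sites[c1].mean()       # average temperature at contact 1
    T2s = T_sites[c2].mean()       # average temperature at contact 2
    R_s = abs(T1s - T2s)/abs(I1)   # "intrinsic" sample resistance
    R_1 = abs(T1 - T1s)/abs(I1)    # contact resistance, electrode 1
    R_2 = abs(T2s - T2)/abs(I1)    # contact resistance, electrode 2
    return R_s, R_1, R_2

# Consistency check with Eq. (R_network): R_s + R_1 + R_2 should
# equal 1/kappa_12, the inverse effective source-drain conductance.
\end{verbatim}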
The contact resistances $R_1$, $R_2$, and sample thermal resistance $R_s$
are shown for four different sized graphene flakes
with contact type II as a function of Fermi energy in Fig.~\ref{fig:R_vs_size}.
Here $N$ is the number of atoms in the hexagonal flake.
The resistances are normalized by the quantum of thermal resistance $R_{0} = 3h/(\pi^2 k_B^2 T_0) \simeq 1.2\times 10^{9}$~K/W at $T_0=100$~K.
For these junctions, the contact resistances exhibit nearly universal behavior
\begin{equation}
R_1, \, R_2 \approx R_0/N_c,
\label{eq:R_1,2}
\end{equation}
where $N_c$ is the number of atoms bonded to each contact, with only small deviations that decrease in amplitude with increasing flake size.
To study the crossover to the classical transport regime, it is useful to compare the sample thermal resistance $R_s$ to the classical result derived from
a two-dimensional Boltzmann equation in the relaxation-time approximation
\begin{equation}
R_{\rm cl} = \frac{2 R_0}{h L g(\mu_0) v_F},
\label{eq:R_cl}
\end{equation}
where $g(E)$ is the density of states per unit area of the graphene flake, $v_F$ is the Fermi velocity, and we have set the scattering time
$\tau=L/v_F$ for these ballistic conductors, where $L$ is the distance between the source and drain electrodes.
Note that these hexagonal flakes have equal width and length, so the geometric factor in
$R_{\rm cl}$ is unity.
Eq.\ (\ref{eq:R_cl}) implies that near the Dirac point in graphene, where $v_F\approx \mbox{const.}$, $R_{\rm cl} \propto 1/g(\mu_0)L$.
Fig.\ \ref{fig:R_vs_size} shows that indeed the variations of $R_s$ with Fermi energy are correlated with the variations of $1/g(\mu_0)L$ for various
flake sizes, confirming the classical nature of transport in junctions with contact type II.
An improved fit might be obtained by including the variation of $v_F$ with $\mu_0$, which is important far from the Dirac point.
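As a rough numerical illustration of Eq.\ (\ref{eq:R_cl}), the following sketch evaluates $R_{\rm cl}/R_0$; the junction parameters below are assumed, not fitted, values (only the graphene Fermi velocity is a standard literature number):
\begin{verbatim}
import numpy as np

h, kB = 6.62607e-34, 1.380649e-23   # SI units
T0 = 100.0                          # reference temperature [K]
R0 = 3*h/(np.pi**2*kB**2*T0)        # thermal resistance quantum

eV = 1.602177e-19
L  = 5e-9          # source-drain distance [m] (assumed)
g  = 1e18/eV       # DOS per unit area [1/(J m^2)] (assumed)
vF = 1e6           # graphene Fermi velocity [m/s]

R_cl = 2*R0/(h*L*g*vF)              # Eq. (R_cl); scales as 1/(g L)
print(f"R_cl/R0 = {R_cl/R0:.3g}")
\end{verbatim}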
Although the temperature distribution can approach the classical
limit in some cases via coarse graining \cite{Bergfield2013demon}, the thermal resistor network
model is found to be quantitatively consistent with Fourier's law only for the nearly classical transport regime,
where multiple resonances contribute to the transport.
In the quantum transport regime, where individual resonances are important, the contact resistances are not universal,
but exhibit large oscillations as well,
making the identification of a ``sample thermal resistance'' problematic.
\section{Conclusions}
Thermal transport in quantum electron systems was investigated, and the crossover from the quantum transport regime to the classical
transport regime, where Fourier's law holds sway, was analyzed. In the quantum regime of electron thermal transport, the local temperature
distributions exhibit large oscillations due to quantum interference\cite{DiVentra09,Bergfield2013demon,Meair14,bergfield2015tunable}
(see Fig.\ \ref{fig:temperature_profile}b,c),
and the heat flow pattern exhibits vortices and other nonclassical features (see Fig.\ \ref{fig:heat_flux}a),
while in the classical regime, the heat flow is laminar (see Fig.\ \ref{fig:heat_flux}b)
and the temperature drops monotonically from the hot to the cold electrode (see Fig.\ \ref{fig:temperature_profile}e,f).
A satisfactory understanding of the quantum to classical crossover in electron thermal transport has been lacking for a number of years. Perhaps the most
promising explanation advanced early on\cite{Dubi09b} was in terms of dephasing of the electron waves: for sufficiently large inelastic scattering in the
system, the electron thermal transport becomes classical. However, many nanostructures of interest for technology, such as graphene, have very weak
inelastic scattering \cite{Hwang08}, and the origin of Fourier's law cannot be explained in such systems by this mechanism.
In this article, it was shown that a sufficient condition for a quantum electron system to cross over into the classical thermal transport regime is
for the broadening of the energy levels of the system to exceed their separation, so that the DOS
becomes smooth.
In this limit, the transport involves contributions from multiple resonances above and below the Fermi level, so that interference effects average out.
This condition is challenging to achieve in simulations, requiring
almost the entire edge of the largest 2D system studied to be covalently bonded to one of the two electrodes (see Fig.\ \ref{fig:temperature_profile}d--f).
For smaller systems, unphysically large
electrode coupling would be required to reach the classical regime. However, for the larger systems routinely studied in experiments
\cite{Jiamin12}, it may be quite
typical for thermal transport to occur in the classical regime, since the level spacing scales inversely with the system size.
In addition to recovering a nearly classical temperature profile in the limit where the DOS is smooth, it was also shown that the thermal resistance of the
junction could be explained using a thermal resistor network model consistent with Fourier's law in this limit. The contact thermal resistances were
found to take on universal quantized values, while the sample thermal resistance was found to be inversely proportional to the DOS per unit area times
the sample length, as expected based
on semiclassical Boltzmann transport theory (see Fig.\ \ref{fig:R_vs_size}). In contrast, in the quantum regime, where thermal transport occurs predominantly
via a single energy eigenstate (or a few closely-spaced states near the Fermi level),
the thermal resistor network model was not found to be useful in analyzing the transport.
In this sense, coarse graining of the temperature distribution due to limited spatial resolution of the probe, which leads in many cases
\cite{Bergfield2013demon} to a rather classical temperature profile, is not sufficient to explain the onset of Fourier's law, since the underlying
thermal transport remains quantum mechanical.
\begin{acknowledgments}
We acknowledge useful discussions with Brent Cook during the early stages of this project.
J.P.B. was supported by an Illinois State University NFIG grant.
C.A.S.\ was supported by the U.S.\ Department of Energy (DOE), Office of Science under Award No.\ DE-SC0006699.
\end{acknowledgments}
\section{Appendix}
We utilize a standard nonequilibrium Green's function (NEGF) framework\cite{Datta95,Bergfield09a}
to describe the quantum transport through a three-terminal junction composed
of a graphene flake coupled to source and drain electrodes, and a scanning probe. We focus on transport in the elastic cotunneling regime, where
the linear response coefficients may be calculated from the transmission coefficients $T_{\alpha\beta}(E)$ using
Eq.\ (\ref{eq:Lnu}).
The transmission function may be expressed in terms of the junction Green's functions as \cite{Datta95,Bergfield09a}
\begin{equation}
T_{\alpha\beta}(E)={\rm Tr}\left\{ \Gamma^\alpha(E) G^r(E) \Gamma^\beta(E) G^a(E)\right\},
\label{eq:transmission_prob}
\end{equation}
where $\Gamma^\alpha(E)$ is the tunneling-width matrix for lead $\alpha$
and $G^r(E)$ and $G^a(E)$ are the retarded and advanced Green's functions of the junction, respectively.
In the general many-body problem $G(E)$ must be approximated.
In the context of the examples discussed here we consider an effective single-particle description such that
\begin{equation}
G^r(E) = \left({\bf S}E-H - \Sigma^r_{T} \right)^{-1},
\end{equation}
where $H$ is the Hamiltonian of the nanostructure, ${\bf S}$ is an overlap matrix which reduces to the identity matrix in an orthonormal basis,
and $\Sigma_{T}$ is the tunneling self-energy.
The tunneling-width matrix for contact $\alpha$ (source, drain, or probe) may be expressed as
\begin{equation}
\left[\Gamma^\alpha(E)\right]_{nm} = 2\pi \sum_{k\in\alpha} V_{nk} V_{mk}^\ast\, \delta(E-\epsilon_k),
\end{equation}
where $n$ and $m$ label $\pi$-orbitals within the graphene flake,
and $V_{nk}$ is the coupling matrix element
between orbital $n$ of the graphene and a single-particle energy eigenstate of energy $\epsilon_k$ in electrode $\alpha$.
The thermal probe is treated as an ideal broad-band probe with perfect spatial resolution
\begin{equation}
\Gamma^p = \gamma_p \delta({\bf x}-{\bf x}_p),
\end{equation}
while the coupling to the hot and cold electrodes is taken to be diagonal in the graphene atomic basis
with a per-bond broad-band coupling strength of 3~eV.
In the broad-band limit, the tunneling self-energy is a constant matrix given by
\begin{equation}
\Sigma^r_{T} = -\frac{i}{2} \sum_\alpha \Gamma^\alpha.
\end{equation}
In the low-energy regime (i.e., near the Dirac point), a simple tight-binding Hamiltonian has been shown to accurately describe the
$\pi$-band dispersion of graphene \cite{Reich02}.
The Hamiltonian of the graphene flake is taken as
\begin{equation}
H_{\rm graphene} = \sum_{\langle ij \rangle} t_{ij} d_i^\dagger d_j + {\rm H.c.},
\end{equation}
where $t_{ij}=t=-2.7$~eV is the nearest-neighbor hopping matrix element between 2p$_z$ carbon orbitals of the graphene flake with
lattice constant 2.5~\AA, and $d_i^\dagger$ creates an electron on the $i^{\rm th}$ 2p$_z$ orbital.
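As a concrete illustration of Eq.\ (\ref{eq:transmission_prob}) and the broad-band self-energy, the following Python sketch computes the two-terminal transmission for a toy nearest-neighbor tight-binding chain (a stand-in for the graphene flake, with ${\bf S}$ equal to the identity); it is not the code used for the figures:
\begin{verbatim}
import numpy as np

t, N = -2.7, 20                      # hopping [eV]; toy chain length
H = t*(np.eye(N, k=1) + np.eye(N, k=-1))

gamma = 3.0                          # broad-band coupling [eV]
Gam1 = np.zeros((N, N)); Gam1[0, 0] = gamma    # electrode 1, site 0
Gam2 = np.zeros((N, N)); Gam2[-1, -1] = gamma  # electrode 2, site N-1
Sigma = -0.5j*(Gam1 + Gam2)          # broad-band tunneling self-energy

def transmission(E):
    """T_12(E) = Tr[Gamma1 Gr Gamma2 Ga] for the two-terminal chain."""
    Gr = np.linalg.inv(E*np.eye(N) - H - Sigma)  # retarded GF
    return np.trace(Gam1 @ Gr @ Gam2 @ Gr.conj().T).real

for E in (-1.0, 0.0, 1.0):           # energies in eV
    print(E, transmission(E))
\end{verbatim}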
The heat current density plotted in Fig.\ \ref{fig:heat_flux} is given within NEGF theory by
\begin{equation}
J_q = \frac{\hbar}{2m} \lim_{{\bf x}^\prime\rightarrow{\bf x}} \int \frac{dE}{2\pi}\, (E-\mu_0) \left(\nabla - \nabla^\prime\right) G^<({\bf x},{\bf x}^\prime; E),
\end{equation}
where $G^<$ is the Keldysh lesser Green's function (see, e.g., Ref.\ \onlinecite{stafford2016local} for its definition).
\section{Introduction}
The experimental realization of highly population-imbalanced atomic gases has
dramatically improved our understanding of the properties of mobile impurities in a quantum
medium. Using Feshbach resonances~\cite{Chin2010} to tune the interaction
between the impurity and the reservoir, cold-atom experiments have systematically explored the properties
of impurities first in fermionic~\cite{Schirotzek2009,Kohstall2012,Koschorreck2012} and recently also in
bosonic~\cite{Jorgensen2016,Hu2016} reservoirs. While there are many similarities between impurities
in fermionic and bosonic reservoirs (termed the Fermi and Bose polaron, respectively), there are also
important differences. For instance, whereas the Fermi polaron has a sharp transition to a molecular state with increasing attraction~\cite{Chevy2006,Prokofev2008,Mora2009,Punk2009,Combescot2009,Cui2010,Massignan2011,Massignan_Zaccanti_Bruun,Cui2015}, the Bose polaron exhibits a smooth crossover
instead, either to a molecular state~\cite{Rath2013} or the lowest Efimov trimer~\cite{Levinsen2015} depending on the value of the three-body parameter. The Bose polaron has also been proposed to be unstable towards
other lower lying states~\cite{Shchadilova2016,Grusdt2017}.
Here, we investigate a unique feature of the Bose polaron (polaron from now on):
The medium exhibits a phase transition between a Bose-Einstein condensate (BEC) and a normal gas.
The effect of such a transition on the quasiparticle properties has not been explored in previous finite-temperature studies of the Bose polaron~\cite{Boudjema2014,Schmidt2016}.
Using perturbation theory valid for
weak coupling, we show that this transition gives rise to several interesting effects. Both the energy and the damping of the polaron depend strongly and
in a non-trivial way on the temperature in the region around the critical temperature $T_c$.
More generally, these effects are relevant to the behavior of quasiparticles near a phase transition that breaks a continuous symmetry of the system.
We discuss how these effects can be
measured. Very recently, the temperature dependence of the polaron was investigated for strong coupling~\cite{Guenther2017}. Our present study focuses instead on the weak-coupling regime where rigorous results can be derived.
The paper is organized as follows. In Sec.~\ref{sec:model} we describe the model and introduce the perturbative framework. Our main results are presented in Sec.~\ref{sec:results}. Here we describe the polaron properties in three different temperature regimes: at low temperature, in the region close to the critical temperature for Bose-Einstein condensation, and all the way to high temperature. We conclude in Sec.~\ref{sec:conc}.
\section{Model and methods}
\label{sec:model}
We consider an impurity of mass $m$ in a gas of bosons with
mass $m_{\textnormal{B}}$. The Hamiltonian is
\begin{align}
H=&\sum_\k\epsilon_{\k}^{\textnormal{B} \vphantom{\dagger}}b^\dagger_\k b_\k^{\vphantom{\dagger}}
+ \frac{g_\textnormal{B}}{2} \sum_{\k,\k',{\bf q}}
b^\dagger_{{\k}+{{\bf q}}}b_{{\k}'-{\bf q}}^\dagger
b_{{\k}'}^{\vphantom{\dagger}} b_{{\k}}^{\vphantom{\dagger}}
\nonumber\\
& +\sum_{{\k}}\epsilon_{\k}^{\vphantom{\dagger}} c^\dagger_{{\k}}c_{{\k}}^{\vphantom{\dagger}}
+g \sum_{{\k},{\k}',{{\bf q}}}c_{{\k}+{\bf q}}^\dagger b^\dagger_{{\k}'-{{\bf q}}} b_{{\k}'}^{\vphantom{\dagger}} c_{{\k}}^{\vphantom{\dagger}},
\end{align}
where the operators $b^\dag_{{\k}}$ and $c^\dag_{{\k}}$ create a boson and the impurity, respectively, with momentum ${\k}$ and free dispersions
$\epsilon_{\k}^{\textnormal{B}}=k^2/2m_{\textnormal{B}}$ and $\epsilon_{\k}={k^2}/{2m}$. The boson-boson and the boson-impurity interactions are short range with coupling strengths $g_\textnormal{B}$ and $g$, respectively, and
we work in units where the volume, $\hbar$, and $k_B$ are 1.
The Bose gas is taken to be weakly interacting, i.e.,
$na_{\rm B}^3\ll1$, where $n$ is the boson density and $a_{\rm B}>0$ is the
boson-boson scattering length. As we are interested in deriving rigorous results, we use Popov theory to describe the Bose gas.
Below the BEC critical temperature
$T_c\simeq \frac{2\pi}{[\zeta(3/2)]^{2/3}}\frac{n^{2/3}}{m_\textnormal{B}}$, we have the usual Bogoliubov dispersion
$E_{\mathbf k}=[\epsilon_{\k}^\textnormal{B}(\epsilon_{\k}^\textnormal{B}+2{\cal T}_{\text{B}} n_0)]^{1/2}$, where
$n_0$ is the condensate density, and ${\cal T}_{\text{B}}=4\pi a_{\rm B}/m_{\text{B}}$ the boson vacuum scattering matrix.
Below $T_c$, we have the normal and anomalous propagators for the
bosons in the BEC,
\begin{align}
G_{11}(\mathbf k,i\omega_s)&=\frac{u^2_\mathbf k}{i\omega_s-E_{\mathbf k}}-\frac{v^2_{\mathbf k}}{i\omega_s+E_{\mathbf k}}\nonumber\\
G_{12}({\mathbf k},i\omega_s)=G_{21}({\mathbf k},i\omega_s)
&=\frac{u_{\mathbf k} v_{\mathbf k}}{i\omega_s+E_{\mathbf k}}-\frac{u_{\mathbf k} v_{\mathbf k}}{i\omega_s-E_{\mathbf k}},
\label{BoseGreens}
\end{align}
where $u_{\mathbf k}^2=1+v_{\mathbf k}^2=[(\epsilon_{\k}^\textnormal{B}+{\cal T}_{\text{B}} n_0)/E_{\mathbf k}+1]/2$ are the coherence factors, and $\omega_s=2\pi sT$ is a bosonic Matsubara frequency with $s$ an integer.
The condensate density
is then found self-consistently from the condition
\begin{align}
n&=n_0-T
\sum_{\omega_s,\k}e^{i\omega_s0_+}
G_{11}(\mathbf k,i\omega_s)\nonumber\\
& = n_0+\frac{8n_0}{3\sqrt\pi}(n_0a_B^3)^{1/2}+
\sum_\k\frac{\epsilon_\k^{\rm{B}} + {\cal T}_{\text{B}} n_0}{E_\k} f_\k,
\label{eq:npopov}
\end{align}
where
$f_\k=[\exp(E_\k/T)-1]^{-1}$ is the Bose distribution function for temperatures $T<T_c$.
Popov theory
provides an accurate description
except in a narrow critical region
determined by $|T-T_c|/T_c\lesssim n^{1/3}a_{\rm B}$~\cite{Shi1998}.
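The condition \eqref{eq:npopov} is easily solved numerically for the condensate density. A minimal Python sketch, in units $\hbar=k_B=m_{\rm B}=n=1$ and for an illustrative gas parameter, reads:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import zeta

aB = 0.04                      # gas parameter n^{1/3} a_B (example)
g  = 4*np.pi*aB                # T_B = 4 pi a_B / m_B
Tc = 2*np.pi/zeta(1.5)**(2/3)  # ideal-gas critical temperature

def depletion(n0, T):
    """Thermal term of Eq. (npopov): sum_k (eps + g n0)/E_k f_k."""
    def f(k):
        eps = 0.5*k*k
        E = np.sqrt(eps*(eps + 2*g*n0))
        x = E/T
        return 0.0 if x > 700 else \
            k*k/(2*np.pi**2)*(eps + g*n0)/E/np.expm1(x)
    return quad(f, 1e-8, 40.0, limit=200)[0]

def gap(n0, T):                # root gives the condensate density
    qdep = 8*n0/(3*np.sqrt(np.pi))*np.sqrt(n0*aB**3)
    return n0 + qdep + depletion(n0, T) - 1.0

for T in (0.2*Tc, 0.5*Tc, 0.8*Tc):
    n0 = brentq(gap, 1e-6, 1.0, args=(T,))
    print(f"T/Tc = {T/Tc:.1f}: n0/n = {n0:.3f}")
\end{verbatim}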
\begin{figure}[tbp]
\centering
\includegraphics[width=\columnwidth]{fig_1stand2ndOrderDiagrams_oneline}
\caption{(a-b) First and (c-h) second order diagrams for the impurity self-energy. The impurity propagator is shown as the bottom red lines, and the external impurity propagators attach to the red dots. The boson normal and anomalous
propagators are shown as the upper solid black lines, while dashed lines are condensed bosons.
The wavy vertical lines denote the impurity-boson
scattering matrix ${\cal T}_v$.}
\label{fig:diagrams}
\end{figure}
\subsection{Perturbation theory}
We use perturbation theory in powers of the impurity-boson scattering length $a$ to analyze the impurity problem. At $T=0$, this approach has yielded important information. For instance,
the impurity energy was shown to depend logarithmically on $a$ at third order~\cite{Christensen2015},
similarly to the energy of a weakly interacting Bose gas beyond Lee, Huang, and Yang \cite{Wu1959,Sawada1959}.
The first order self-energy in Fig.~\ref{fig:diagrams}(a,b) gives the mean-field energy shift $\Sigma_1={\cal T}_vn$, where ${{\cal T}}_v=2\pi a/m_{r}$ is the boson-impurity scattering amplitude at zero energy, with $m_r=m_{\textnormal{B}}m/(m_{\textnormal{B}}+m)$ the reduced mass. This shift is independent of temperature, and in order to get a non-trivial $T$-dependence, we need to go to second order.
The six possible second order diagrams are shown in Fig.\ \ref{fig:diagrams}. Diagrams (c-f) yield the ``Fr\"ohlich'' contribution
\begin{align}
&\Sigma_2^F({\bf p},\omega)=n_0(T){\cal T}_v^2
\sum_\k\left[\frac{1}{\epsilon_{\k}^{\textnormal{B}}+\epsilon_{\k}}\right.\nonumber \\
&\left.+\frac{\epsilon_{\k}^\textnormal{B}}{E_{\mathbf k}}\left(\frac{1+f_\k}{\omega - E_{\mathbf k}-\epsilon_{{\mathbf k}+{\mathbf p}}}+\frac{f_\k}{\omega + E_{\mathbf k}-\epsilon_{{\mathbf k}+{\mathbf p}}}\right)\right],
\label{Frohlich}
\end{align}
where the frequency $\omega$ is taken to have an infinitesimal positive imaginary part.
The first term
in the integrand comes from replacing the bare boson-impurity interaction
$g$ with the scattering matrix ${\cal T}_v$ (see, e.g., Ref.~\cite{Christensen2015}).
These diagrams are non-zero only for $T\le T_c$, as they correspond to the scattering of a boson into or out of the condensate.
The term $\Sigma_2^F$ can also be obtained from the Fr\"ohlich model~\cite{Huang2009,Novikov2009,Casteels2014}.
The ``bubble" diagrams (g-h) of Fig.~\ref{fig:diagrams} give
\begin{gather} \notag
\Sigma_2^B({\bf p},\omega) = {\cal T}_v^2
\sum_\k
\left[v_{\mathbf k}^2(1+f_\k)\Pi_{11}(\k+{\bf p},\omega-E_\k)\right.
\\
-u_{\mathbf k}v_{\mathbf k}
[(1+f_\k)\Pi_{12}(\k+{\bf p},\omega-E_\k)+\nonumber\\
\left. f_\k\Pi_{12}(\k+{\bf p},\omega+E_\k)]
+u_{\mathbf k}^2f_\k\Pi_{11}(\k+{\bf p},\omega+E_\k)\right],
\label{Bubble}
\end{gather}
where the pair propagators $\Pi_{11}$ and $\Pi_{12}$ are given in Appendix \ref{pairprops}.
The bubble diagrams have not previously been evaluated, as they require particles excited out of the condensate and consequently are suppressed by a factor $\sqrt{n_0a_{\rm B}^3}$ for $T\ll T_c$ compared with the Fr{\"o}hlich diagrams.
Their magnitude, however, increases with $T$ as particles get thermally excited out of the BEC, and $\Sigma_2^B$ is indeed the only non-zero contribution to second order for $T>T_c$. Note that the Fr\"ohlich model does not include
$\Sigma^B_2$
and therefore cannot describe the polaron correctly for finite $T$ \cite{Christensen2015}.
\begin{figure}[t]
\centering
\includegraphics[width=.9\columnwidth]{deltaE}
\caption{(a) Second order energy shift and (b) decay rate for $m=m_{\text{B}}$. The lines are for $n^{1/3}a_{\rm B}$ taking the values $0.04$ (solid), $0.1$ (dashed), and $0.25$ (short dashed).
In (a) we also show the $T=T_c^-$ prediction \eqref{TcFrohlichFinal} for the three interaction values (dots), as well as the low-temperature prediction to fourth order in $T/T_c$ (thin, black).
The shaded region illustrates where Popov theory is expected to fail.
\label{fig:deltaE}}
\end{figure}
\section{Bose polaron at finite temperature}
\label{sec:results}
The polaron energy $E_{{\bf p}}$ for a given momentum ${\bf p}$ is found by solving $E_{{\bf p}}=\epsilon_{\bf p}+{\text{Re}}[\Sigma({\bf p},E_{\bf p})]$. Here, we focus on an
impurity
with momentum ${\bf p}={\bf 0}$. To second order in $a$, it is sufficient to evaluate the self-energy for zero frequency~\cite{Christensen2015},
and the equation for the polaron energy therefore simplifies to
\begin{align}
E=\text{Re}[\Sigma({\bf 0},0)]={\cal T}_vn+\text{Re}[\Sigma_2^F({\bf 0},0)+\Sigma_2^B({\bf 0},0)].
\label{PolaronEnergy}
\end{align}
The broadening of the polaron is given by $\Gamma=-{\rm Im}[\Sigma_2^F({\bf 0},0)+\Sigma_2^B({\bf 0},0)]$. To simplify the notation,
we will suppress the momentum and energy arguments of the self-energy, as these are zero.
Instead, we will write $\Sigma(T)$ to focus on the $T$-dependence.
Our main results for the second-order polaron energy shift, $\Delta E\equiv E-{\cal T}_vn$, and broadening $\Gamma$
are shown in Fig.~\ref{fig:deltaE} for $m=m_{\text{B}}$.
We observe a strong temperature dependence, along with an intriguing non-monotonic behavior across the phase transition. We discuss the various regimes and limiting cases in the following.
For concreteness, we mainly discuss the case of equal masses $m_{\text{B}}=m$, with the equations for $m_{\text{B}}\neq m$ relegated to the appendices.
\subsection{Low-temperature behavior}
The term $\Sigma_2^F(T)$ can be evaluated analytically for $T=0$, giving~\cite{Novikov2009,Casteels2014,Christensen2015}
\begin{align}
\Sigma_2^F(0)=\frac{32\sqrt{2}}3\frac{a^2n_0}{m\xi_0},
\end{align}
where $\xi_0$ is the healing length $\xi=1/\sqrt{8\pi n_0a_{\rm B}}$ evaluated at zero temperature. An analytic expression for general mass ratio is given in Ref.~\cite{Novikov2009}.
When evaluating $\Sigma_2^B$, we find that it contains terms that diverge logarithmically at large momentum. This is similar
to the third order logarithmic divergence in the polaron energy at $T=0$~\cite{Christensen2015}.
The divergence can be cured by including the momentum dependence of the scattering matrix, which
provides an ultraviolet cut-off at the
scale $1/k=a^*\sim{\rm max}(a,a_{\rm B})$. Since the healing length sets the lower limit in the momentum integral, we find
\begin{align}
\Sigma_2^B(0)\simeq
\frac{4\sqrt{6\pi} a^2 n_0}{m\xi_0}
\left(\frac{2\pi}{3\sqrt{3}}-1\right)\sqrt{n_0a_{\rm B}^3}\ln(a^*/\xi),
\label{eq:S2B}
\end{align}
where we ignore terms of order $(n_0 aa_{\rm B})^2$. Equation~\eqref{eq:S2B} is suppressed by $(n_0a_{\rm B}^3)^{1/2}$ compared with $\Sigma_2^F(0)$, and we thus ignore
the terms in $\Sigma_2^B$
that give rise to this
divergence
and focus on the remainder,
denoted
$\tilde \Sigma_2^B(T)$ (see Appendix \ref{app:bubble} for details).
Note that a divergent term of the form \eqref{eq:S2B} in the self-energy is to be expected, since at $a=a_{\rm B}$ the polaron ground state energy must correspond to the chemical potential of a weakly interacting Bose gas, i.e., $E=\partial E_\text{WS}/\partial n$, with $E_\text{WS}$ the energy of the weakly interacting Bose gas including the correction by Wu and Sawada \cite{Wu1959,Sawada1959}. From this argument, we also conclude that there must be a similar contribution arising from the Fr{\"o}hlich type diagrams if we treat the excitations of the BEC beyond Bogoliubov theory. Such an investigation is beyond the scope of this work.
To proceed, we take advantage of how the self-energy below $T_c$ simplifies into a product of a $T$-dependent prefactor and a function of $\xi/\lambda$, where
$\lambda=(2\pi/m_{\text{B}} T)^{1/2}$ is the de Broglie wavelength.
Specifically
\begin{align}
\Sigma_2^F(T)=\Sigma_2^F(0)\left(\frac{n_0(T)}{n_0(0)}\right)^{3/2}[1+{\cal I}_F(\xi/\lambda)].\label{eq:frohlich}
\end{align}
Here ${\cal I}_{F}$ is a
dimensionless form of the integral appearing in (\ref{Frohlich}), see Appendix \ref{app:bubble} for details. It vanishes at $T=0$ and its imaginary part at low temperature is only non-zero when $m<m_B$ (Appendix \ref{app:im}). Similarly to Eq.~\eqref{eq:frohlich}, an expression
for $\tilde\Sigma_2^B(T)$ which explicitly contains the additional suppression factor
$(n_0a_B^3)^{1/2}$ is given in Appendix~\ref{app:bubble}.
Due to the suppression factor, at low temperature we neglect $\tilde\Sigma_2^B$ and focus on $\Sigma_2^F$. Here,
the superfluid density $n_0(T)$ decreases as $T^2$ for $T\ll T_c$~\cite{Glassgold1960,Shi1998}, which from Eq.~\eqref{eq:frohlich} gives a $T^2$ decrease in the polaron energy. Indeed, expanding Eq.~\eqref{eq:npopov} at low temperature yields
\begin{align}
\frac{n-n_0(T)}n \simeq
\frac{\pi^{3/2}\left(T/T_c\right)^2}{6\zeta(\frac32)^{4/3}(na_{\rm B}^3)^{1/6}}
-\frac{\pi^{7/2}\left(T/T_c\right)^4}{480\zeta(\frac32)^{8/3}(na_{\rm B}^3)^{5/6}},
\label{eq:popovlowT3}
\end{align}
where at each order in $T/T_c$ we keep only the leading order contribution in $na_{\rm B}^3$.
However, we find that ${\cal I}_F(\xi/\lambda)\propto(na_B^3)^{-4/3}(T/T_c)^4$ for $T\ll T_c$, and since this increase is
proportional to $(na_{\rm B}^3)^{-4/3}$, it quickly dominates for a
weakly interacting BEC. As a result, we obtain
\begin{align}
E(T)\simeq E(0)+\frac{\pi^2}{60}\frac{a^2}{a_{\rm B}^2}\frac{T^4}{nc^3},
\label{EFlowT}
\end{align}
where we have introduced the speed of sound in the BEC: $c=(4\pi a_{\rm B} n)^{1/2}/m$.
Interestingly, the low $T$ dependence
of the polaron energy \eqref{EFlowT} can be related to the free energy of phonons in a weakly interacting BEC for $T\ll T_c$: $F_{\rm{ph}}=-\pi^2 T^4/(90c^3)$~\cite{Khalatnikov2000}. Indeed, setting $a=a_{\rm B}$ we find that \eqref{EFlowT} exactly matches the change in the BEC chemical potential due to the thermal excitation of phonons, i.e.~$\Delta\mu=\left.\partial F_{\rm ph}/\partial n\right|_{T,V}$. To our knowledge, this $T^4$ increase in the chemical potential of a weakly interacting BEC has never been measured. Our result thus suggests a way to measure this effect using for instance
radio-frequency (RF) spectroscopy on the impurity~\cite{Jorgensen2016,Hu2016}.
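To illustrate the magnitude of the $T^4$ shift in Eq.~\eqref{EFlowT}, one may evaluate it directly; the sketch below uses $\hbar=k_B=m=n=1$ (equal masses) and example scattering lengths, and applies only deep in the $T\ll T_c$ regime:
\begin{verbatim}
import numpy as np
from scipy.special import zeta

aB = 0.003                     # n^{1/3} a_B (Aarhus-like value)
a  = 0.1                       # n^{1/3} a (example)
c  = np.sqrt(4*np.pi*aB)       # speed of sound, sqrt(4 pi aB n)/m
Tc = 2*np.pi/zeta(1.5)**(2/3)  # ideal-gas Tc
E_mf = 4*np.pi*a               # mean-field shift T_v n for m = m_B

for x in (0.005, 0.01, 0.02):  # T/Tc, well below Tc
    dE = np.pi**2/60*(a/aB)**2*(x*Tc)**4/c**3
    print(f"T/Tc = {x}: dE/E_mf = {dE/E_mf:.2e}")
\end{verbatim}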
\subsection{Behavior close to $T_c$}\label{ClosetoTc}
We now turn our attention to temperatures close to $T_c$. From Eq.\ (\ref{Frohlich}) it follows
that $\Sigma_2^F(T)\propto n_0(T)$ and one would at first sight expect that it vanishes as $T\rightarrow T_c^-$. This is in fact \emph{not} the case when $m=m_{\text{B}}$.
Expanding Eq.\ (\ref{Frohlich}) to lowest order in $n_0$ yields
\begin{align}
\Sigma_2^F(T_c^-)=\frac{{\cal T}_v^2}{{\mathcal T}_{\textnormal{B}}}
\sum_\k f_\k=4\pi\frac{na^2}{ma_{\rm B}}.
\label{TcFrohlichFinal}
\end{align}
Thus, $\Sigma_2^F(T)$ has a \emph{non-zero} value $\propto 1/a_{\rm B}$ when $T\rightarrow T_c^-$.
Since $\Sigma_2^F$ obviously is zero for $T>T_c$, this means that it is
\emph{discontinuous} at $T_c$. The origin of this surprising result is that the low energy spectrum of the Bose gas changes from linear to quadratic in momentum at $T_c$, increasing the density-of-states dramatically. Consequently,
the diagram given by Fig.\ \ref{fig:diagrams}(d), describing the scattering of the impurity on a thermally excited boson,
develops an infrared divergence for $n_0\rightarrow 0$ when $m=m_B$.
For $m\neq m_B$, we on the other hand find $\Sigma_2^F(T_c^-)=0$ so that $\Sigma_2^F$ is continuous across $T_c$, see Appendix \ref{app:bubble}.
Above $T_c$, $\tilde\Sigma_2^B(T)$ is the only non-zero second-order term and
Eq.\ \eqref{Bubble} simplifies considerably since $v_{\mathbf k}=0$
and $E_{\mathbf k}$ becomes $\epsilon_k^{\textnormal{B}}+{\cal T}_{\text{B}} n-\mu$; i.e.\ Popov theory corresponds to
the Hartree-Fock approximation for $T>T_c$. The boson
chemical potential
is therefore $\mu=\mu_{\text{id}}+{\cal T}_{\text{B}} n$,
with
$\mu_\text{id}$ the chemical potential of an ideal Bose gas. We obtain
\begin{gather}
\hspace{-16mm}
\frac{\Sigma_2(T>T_c)}{\Sigma_2^F(T=0)} =-\frac1{\sqrt{n_0^{1/3}(0)a_{\rm B}}}\left[{\cal I}_N (T/T_c)\vphantom{\left(\frac T{T_c}\right)^2}\right.
\nonumber \\\left.
+i\frac{3\sqrt{\pi}[{\rm Li}_2(z)+\frac12\log^2(1-z)]}{16\zeta^{4/3}(3/2)}\left(\frac T{T_c}\right)^2\right],
\label{eq:aboveTc}
\end{gather}
where we have used the ideal Bose gas relation
$n\lambda^3={\rm Li}_{3/2}(z)$, with ${\rm Li}$ the polylogarithm and $z\equiv\exp(\mu_{\rm id}/T)$ the fugacity.
The dimensionless function ${\cal I}_N (T/T_c)$ is given in Appendix \ref{app:aboveTc}.
It follows from Eq.~\eqref{eq:aboveTc} that the imaginary part of the self-energy
diverges as $\log^2(1-z)$ when $z\rightarrow 1$ for $T\rightarrow T_c^+$.
This comes from infrared divergences in the integrals containing the
Bose distribution function. Physically, it
means that the polaron becomes strongly damped close to $T_c$.
The real part of $\Sigma_2(T)$
can also be shown
to diverge
when $T\rightarrow T_c^+$ as outlined in Appendix \ref{app:aboveTc}.
\begin{figure}
\centering
\includegraphics[width=.95\columnwidth]{Eplot2}
\caption{Polaron energy as a function of interaction strength. (a) $m=m_{\text{B}}$ and $n^{1/3}a_{\rm B}=0.003$ as in the Aarhus experiment~\cite{Jorgensen2016} for $T=0$ (solid line) and $T=T_c/10$ (dashed). (b) $m/m_{\text{B}}=40/87$ and $n^{1/3}a_{\rm B}=0.03$ as in the JILA experiment~\cite{Hu2016} with $T=0$ (solid line) and $T=T_c/2$ (dashed). The lines are thinner in the regime $a^2>a_{\rm B}\xi_0$ where the polaron ceases to be a well-defined quasiparticle~\cite{Christensen2015}, and they are only plotted in the range where the finite-temperature 2nd order shift is smaller than the mean-field energy. Note that our perturbative results are reliable at a higher temperature in the JILA experiment since the gas parameter $n^{1/3}a_{\rm B}$ is larger than in the Aarhus experiment.
\label{fig:Eplot}}
\end{figure}
\subsection{High-temperature behavior}
Finally, we consider the limit $T\gg T_c$. Expanding the self-energy to lowest order in the fugacity $z$ yields
\begin{align}
\frac{\Sigma_2^B(T)}
{\Sigma_2^F(0)}& \simeq -\kappa
\left[0.315 \frac{T_c}T+i\frac{3\sqrt{\pi}}{16\zeta(3/2)^{1/3}}\sqrt{\frac T{T_c}}
\right]
\label{HighT}
\end{align}
with $\kappa=[n_0(0)a_{\rm B}^3]^{-1/6}$.
Thus, whereas the energy shift of the polaron decreases
with increasing temperature, the polaron becomes increasingly damped as the impurity collides with more and more energetic bosons.
\subsection{Validity of perturbation theory}
At $T=0$, the small parameter of perturbation theory
is $a/\xi$ and we
additionally require $a^2/a_{\rm B}\xi\ll1$ for the polaron to be
well-defined~\cite{Christensen2015}. In general,
we expect perturbation theory to be valid provided $\Sigma_2<\Sigma_1$. From this, we derive the condition $|a|\ll a_{\rm B}$, valid close to $T_c$, by
comparing (\ref{TcFrohlichFinal}) with the first order shift ${\cal T}_vn$. For a small gas parameter, $n^{1/3}a_{\rm B}$, this condition is much stricter than the $T=0$ conditions. We therefore
expect perturbation theory to break down earlier for temperatures close to $T_c$.
Above $T_c$, perturbation theory is accurate when
$n^{-1/3},\lambda\gg |a|$.
Note also that perturbation theory breaks
down in the critical region $|T-T_c|/T_c\lesssim n^{1/3}a_{\rm B}$~\cite{Shi1998, Andersen2004}, which is the origin of the
infrared divergences as
$T\rightarrow T_c$. However, the critical region is narrow for a weakly interacting BEC, making our results reliable except very close to $T_c$.
\subsection{Numerical results}
In Fig.~\ref{fig:deltaE}, we plot the second-order self-energy $\Sigma_2$ as a function of $T$, evaluated numerically using Eq.~\eqref{eq:frohlich} for various values of the gas parameter. We see an intriguing non-monotonic temperature dependence of both the
polaron energy shift
and damping. For $T< T_c$,
the energy shift increases and the numerical results recover our predicted $T^4$ behavior
in Eq.~(\ref{EFlowT}) for
$T\ll T_c$. In particular, the rate of the increase scales with $a_{\rm B}^{-7/2}$ so
that there is a strong temperature dependence when the gas parameter of the BEC is small. The damping of the polaron, $\Gamma=-\rm{Im}\,\Sigma_2$, also increases with $T$ as more thermally excited bosons scatter on the impurity.
Both the energy shift and the damping vary strongly close to $T_c$.
This reflects both the logarithmic divergences discussed above as well as the
discontinuous jump in the Fr\"ohlich self-energy at $T_c$
given by Eq.\ (\ref{TcFrohlichFinal}), which is indicated by $\bullet$'s in Fig.~\ref{fig:deltaE}.
Since perturbation theory breaks
down close to $T_c$, we do not plot the numerical results in this region.
For $T>T_c$, the energy shift of the polaron decreases and it vanishes as $T\to \infty$. The predicted increase in the damping rate for $T\gg T_c$ in Eq.\ (\ref{HighT}) is not visible in the range of temperatures shown in Fig.~\ref{fig:deltaE} which focuses on the phase transition region.
In Fig.~\ref{fig:Eplot}, we plot the total polaron energy $\Sigma_1+\Sigma_2$
as a function of the interaction parameter $1/n^{1/3}a$ for zero and finite temperature.
We consider both the Aarhus $^{39}$K experiment and the JILA $^{40}$K-$^{87}$Rb experiment, where the latter corresponds to the case of a light impurity.
In the region where we expect perturbation theory to be reliable, we see that the polaron energy for the equal-mass Aarhus case is shifted significantly higher by temperature, even when $T \ll T_c$. Moreover, we find a small decay rate $\Gamma \ll \Delta E$ in this regime.
Thus, the polaron energy shift should be measurable, as we discuss below.
On the other hand, the light impurity in the JILA case has a finite-temperature energy shift that is negative rather than positive. The reason is that --- contrary to the equal mass case ---
$\Sigma_2^F(T)$ is now continuous across $T_c$ where it goes to zero, as discussed in Sec.\ \ref{ClosetoTc}. Its positive contribution to the polaron energy is therefore much smaller, and the overall temperature shift becomes negative. The decay rate $\Gamma$ on the other hand, is comparable to $|\Delta E|$ in the regime where $|\Delta E|$ is significant
for the JILA parameters. This can be traced to the fact that $\Sigma_2^F(T)$ develops a pole and corresponding imaginary part when $m< m_{\text{B}}$ --- see Appendix \ref{app:im}
for an analytic expression for ${\rm Im} \Sigma_2^F$.
Physically the pole originates from processes where thermally excited Bogoliubov modes scatter resonantly on the polaron. These scattering processes are possible since the equation $\epsilon_{\mathbf k}=E_{\mathbf k}$ has
a solution for $m<m_B$, and they lead to decay.
\section{Discussion and conclusion}
\label{sec:conc}
The non-trivial temperature dependence of the impurity properties close to $T_c$
is due to quite generic physics and is not limited to the specific system at hand. It originates from the change of
the dispersion
from quadratic to linear at $T_c$,
which is
a consequence of the $U(1)$ symmetry breaking resulting from the formation of a condensate. This dramatically changes the low-energy density of states of the Bose gas, which impacts the excitations that couple strongly to the impurity. Thus, similar effects should occur in other systems involving impurities coupled to a reservoir that undergoes a phase transition where a continuous symmetry is broken. This includes impurities in helium mixtures~\cite{BaymPethick1991book},
conventional or high $T_c$ superconductors~\cite{Dagotto1994},
magnetic systems~\cite{Kaminski2002}, and nuclear matter~\cite{Bishop1973}.
The temperature dependence of the polaron energy can be investigated by RF spectroscopy of $^{39}$K atoms. In these experiments, an RF pulse transfers a small fraction of atoms from a BEC in the $\ket{F=1,m_F=-1}$~state into the $\ket{1,0}$~state, such that they form mobile impurities. The impurity-BEC interaction is highly tunable using a Feshbach resonance and thus the polaron energy can be
obtained both for attractive and repulsive interactions. As shown in Fig.~\ref{fig:Eplot},
the energy shift due to a finite temperature is sizable in the regime where perturbation theory should be reasonable: at $1/(n^{1/3}a)=10$ the energy at $T=T_c/10$ compared to $T=0$ corresponds to a RF frequency shift of $\sim7$~kHz, which is comparable to the experimental resolution.
Since the temperature dependence of the polaron energy scales with $\Sigma_2^F(0)\propto a^2 n_0$, it is favorable to access a given interaction strength by choosing a large scattering length and accordingly small density.
To conclude, using perturbation theory valid in the weak coupling regime, we investigated the properties of the Bose polaron as a function of temperature. We derived
analytical results for low temperatures $T\ll T_c$, for $T\simeq T_c$, and for high temperatures $T\gg T_c$.
These results show that the superfluid phase transition of the surrounding Bose gas has strong effects on the properties of the polaron. The energy depends in a non-trivial way on $T$ with a pronounced non-monotonic behavior around $T_c$, and the damping
increases sharply as $T_c$ is approached. We argued that these effects should occur in a wide range of systems consisting of impurities immersed in an environment
undergoing a phase transition. Finally, we discussed how
this intriguing temperature dependence can be detected experimentally.
\acknowledgements
We thank M.~W.~Zwierlein for pointing out the interesting analogy between Eq.~(\ref{EFlowT}) and the energy of a phonon gas in a BEC.
We appreciate useful discussions with B.~Zhu.
JL, MMP, and GMB acknowledge financial support
from the Australian Research Council via Discovery Project
No.~DP160102739. JL is supported through the Australian
Research Council Future Fellowship FT160100244.
JL and MMP acknowledge funding from the Universities Australia -- Germany Joint Research Co-operation Scheme.
GMB wishes to acknowledge the support of the Villum Foundation via grant VKR023163.
JA acknowledges support from the Danish Council for Independent Research
and the Villum Foundation.
This work was performed in part at the Aspen Center for Physics, which is supported by the National Science Foundation Grant No.~PHY-1607611.
\section{Introduction}
\label{section_introduction}
Dust from the Martian surface is regularly swept up by winds to form
local, regional, or sometimes even planet-wide dust storms.
The airborne dust particles scatter and absorb solar radiation
and are therefore very important for the thermal structure of
the thin Martian atmosphere and for the temperature of the Martian
surface \citep[see e.g.][and references therein]{2006GeoRL..3302203B,
2005Icar..175...23G,2004Icar..167..148S,1972JAtS...29..400G}.
The interaction between radiation and the dust particles thus
has to be taken into account when studying, for
example, the local and global climate on Mars.
In particular, one has to account for the spatial distribution of the
dust particles, their number density, and their optical
properties.
For a given wavelength, the optical properties of the dust particles
depend on their composition, sizes and shapes.
Despite the important role of dust particles in the Martian atmosphere,
surprisingly little is known about their optical properties
\citep[for an overview, see][]{2005AdSpR..35...21K}.
Consequently, in radiative transfer calculations that are used to
interpret space or ground-based observations of Mars, various
assumptions are made regarding the dust optical properties.
In particular, it is common to assume homogeneous, spherical or
spheroidal dust particles \citep{1995JGR...100.5235P,1999JGR...104.8987T},
although dust particles on Earth are known to be irregularly shaped.
The optical properties of homogeneous, spherical particles can
straightforwardly be calculated using Lorenz-Mie theory
\citep[][]{1957lssp.book.....V,1984A&A...131..237D}, and those
of homogeneous, spheroidal particles using
e.g. the so-called T-matrix method \citep[][]{1994OptCo.109...16M,2006JGRD..11111208D}.
The optical properties of spherical particles can, however, differ
significantly from those of irregularly shaped particles, even if
their composition and/or size distribution is the same.
Therefore, assuming spherical instead of
irregularly shaped particles in radiative transfer calculations
that are used for example to analyze observations, can lead to
significant errors in retrieved atmospheric parameters, such as the
dust optical thickness and/or dust particle size distributions
\citep[for a discussion on such errors,
see e.g.][]{2003SoSyR..37...87D,2002SoSyR..36..367D}.
For irregularly shaped particles, which are very common in nature,
the scattering matrix elements can in principle be calculated with
numerical methods such as those based on the so--called Discrete Dipole
Approximation (DDA) method \citep[see e.g.][]{1994OSAJ...11.1491D}.
However, the vast amounts of computing time that such
numerical calculations require make it at least very impractical
to calculate complete scattering matrices for a sample of
particles with various (irregular) shapes and various sizes,
in particular if the particles are large compared to the
wavelength of the scattered light.
Alternatively, employing geometrical optics for unrealistically
spiky particles combined with an imaginary part of the refractive
index that is rather small compared to the typical values
used in the literature,
appears to be useful to reproduce the scattering behavior of
irregularly shaped mineral particles
\citep[see][]{2003JGRD.108aAAC12N}.
As suggested by e.g. \citet{1999JGR...104.8987T,2003SoSyR..37...87D,2003JGRE.108i....1W},
a more practical method to obtain elements of the scattering matrix
for an ensemble of irregularly shaped particles is to measure the
elements in a laboratory.
Note that measured scattering matrix elements are also essential
for validating numerical methods and approximations.
In this article, we present measurements of ratios of elements of the scattering
matrix of irregularly shaped, randomly oriented
Martian analogue palagonite particles, described by \citet{1997JGR...10213341B},
as functions of the scattering angle.
The material palagonite is believed to be a reasonable, but not perfect, analogue for the Martian surface and atmospheric dust particles.
Terrestrial palagonite particles (i.e. terrestrial weathering
products of basaltic ash or glass) have been put forward as
Martian dust analogues \citep[][]{1981LPI....12..271E,1995JGR...100.5309R}
because of spectral similarities observed with visible and near-infrared
spectroscopic observations using both Earth-based telescopes and
several Mars orbiting spacecraft
\citep[][]{1985AdSpR...5Q..59S,1992mars.book..557S}.
The measurements have been performed using a HeNe laser, which has
a wavelength of 632.8~nm, and for
scattering angles, $\Theta$, ranging from 3$^\circ$ (near-forward scattering)
to 174$^\circ$ (near-backward scattering).
Other examples of such measurements for irregularly shaped mineral particles obtained with the
same experimental set-up have been reported by e.g.
\citet{2005JQSRT..90..191V} and references therein.
Since we have measured ratios of all (non-zero) elements of the
scattering matrix as functions of the scattering angle,
our results can be used for radiative transfer
calculations that include multiple scattering and polarization.
As described by e.g. \citet{1998GeoRL..25..135L}, ignoring polarization, i.e. using only scattering matrix element (1,1) (the so-called phase function), in
multiple scattering calculations, induces errors in calculated fluxes.
The use of only the phase function should be limited to single scattering calculations for unpolarized incident light.
A practical limitation of our experimental method is that we
cannot measure close ($< 6^\circ$) to the exact backscattering direction
($\Theta=180^\circ$), because there our detector would
interfere with the incoming beam of light,
nor close ($< 3^\circ$) to the exact forward scattering
direction ($\Theta=0^\circ$),
because there our detector would intercept the unscattered
part of the incident beam.
These two scattering directions are, however, important for
radiative transfer applications. This holds in particular for the near-forward
scattering direction,
since a significant fraction of the light that is incident
on a particle is generally scattered in the near-forward direction.
A solution to the lack of measurements in the near-forward and
near-backward scattering directions is to add artificial
data points. For the near-forward scattering direction, where a
strong peak in the phase function is expected, one can add
artificial data points calculated using e.g. Lorenz-Mie calculations.
This has been done before, e.g. by
\citet{2007JQSRT.103...27K,2003JQSRT..79..903L,2004GeoRL..3104113V,2005JGRD..11010S02H}.
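For example, assuming the availability of a Lorenz-Mie code such as the Python package miepython (whose conventions for absorbing media and for the normalization of the amplitudes should be checked against its documentation), artificial near-forward data points for projected-surface-area equivalent spheres may be generated along the following lines:
\begin{verbatim}
import numpy as np
import miepython   # assumed available

m = complex(1.5, -1e-3)     # refractive index (Im(m) <= 0 here)
x = 2*np.pi*4.46/0.6328     # size parameter, r = 4.46 um at 632.8 nm

theta = np.radians(np.linspace(0.0, 3.0, 31))  # missing forward range
S1, S2 = miepython.mie_S1_S2(m, x, np.cos(theta))
phase = 0.5*(np.abs(S1)**2 + np.abs(S2)**2)    # unpolarized phase fn.
phase /= phase[-1]   # scale to the value at 3 deg so the artificial
                     # points join the measured F11 continuously
\end{verbatim}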
In this article, we use a method similar to that of \cite{2003JQSRT..79..903L},
that was also used by e.g. \cite{2007JGRD..11213215M}, but we extend
this by using expansion coefficients
which result from a Singular Value Decomposition (SVD)
fit to the measurements with generalized
spherical functions \citep[][]{1963Gelfand,1983A&A...128....1H,1984A&A...131..237D,
2004Hovenier}.
With the added artificial data points and the expansion coefficients, we construct a so--called synthetic
scattering matrix, which is normalized so that the average of the synthetic phase function over all directions equals unity and covers the whole scattering angle range
(i.e. from 0$^\circ$ to 180$^\circ$).
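To illustrate the fitting step for the phase function alone, for which the generalized spherical functions reduce to Legendre polynomials, the SVD-based least-squares fit can be performed with standard tools. In the following Python sketch the ``measured'' values are dummies; the actual procedure treats all matrix elements and includes the added artificial points:
\begin{verbatim}
import numpy as np
from numpy.polynomial import legendre

theta = np.radians(np.linspace(3, 174, 50))  # measured angles
F11 = np.exp(4*np.cos(theta))                # placeholder "data"

lmax, mu = 12, np.cos(theta)
# design matrix: columns are Legendre polynomials P_l(cos Theta)
A = np.stack([legendre.legval(mu, np.eye(l+1)[l])
              for l in range(lmax+1)], axis=1)
coef, *_ = np.linalg.lstsq(A, F11, rcond=None)  # SVD least squares

# normalize: the average of F11 over all directions equals unity,
# i.e. (1/2) int_{-1}^{1} F11(mu) dmu = 1, which fixes coef[0] = 1
coef /= coef[0]
mu_full = np.cos(np.radians(np.arange(181)))    # 0..180 deg
F11_synth = legendre.legval(mu_full, coef)      # synthetic F11
\end{verbatim}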
Tables of the measurements, the synthetic
scattering matrix elements and the expansion coefficients will be
available from the Amsterdam
Light Scattering Database\footnote{The Amsterdam Light Scattering
Database is located at: http://www.astro.uva.nl/scatter}. See
\citet{2005JQSRT..90..191V,2006JQSRT.100..437V} for a description of this database.
The structure of this article is as follows.
In Sect.~\ref{section_palagonite}, we describe the microphysical
properties of our Martian analogue palagonite dust particles.
In Sect.~\ref{section_scatteringmatrix}, we define
the scattering matrix, describe the experimental set-up,
and present the measurements and an auxiliary scattering matrix.
In Sect.~\ref{section_expansioncoefs}, we introduce the
expansion coefficients, describe the Singular Value Decomposition
fitting method, and present the derived expansion coefficients and
synthetic scattering matrix.
In Sect.~\ref{section_summary}, finally, we summarize and discuss our results.
\section{Martian analogue palagonite particles}
\label{section_palagonite}
Palagonite is a fine--grained weathering product of basaltic glass.
At visible wavelengths it has a refractive index, $m$, typical for
silicate materials, i.e. Re($m$) is about 1.5 and Im($m$) is
in the range $10^{-3}$ to $10^{-4}$ \citep[][]{1995JGR...100.5251C}.
Palagonite contains a considerable fraction
(about 10\% by mass) of iron (III) oxide (Fe$_{2}$O$_{3}$)
\citep[][]{1995JGR...100.5309R}, which gives Mars its reddish color.
The sample in this study is sample 91-16 that is
described in detail by \citet{1997JGR...10213341B}.
Note that there is another Martian analogue palagonite sample
described in the literature, namely sample 91-1
\citep[see][]{1995JGR...100.5309R}.
Palagonite sample 91-1 appears to contain more sodium than sample 91-16
because of evaporation and deposition of salt due to the proximity of its
retrieval site to the Pacific Ocean.
Palagonite sample 91-16 was retrieved at the top of Hawaii's
Mauna Kea volcano, about 4~km above sea level, where it was formed
in a semi--arid environment likely associated with ephemeral
melting water from ice. Hence, sample 91-16 is
considered to be the better alternative for Martian dust
of the two Martian palagonite analogues.
Before using sample 91-16 in our light scattering experiment,
we removed the millimeter-sized particles by using a sieve with
a 200-$\mu$m grid width, to avoid clogging
the aerosol generator.
Figure~\ref{fig_sem} shows an image of the palagonite particles
obtained with a scanning electron microscope (SEM).
This image clearly shows the irregular shapes of the palagonite
particles.
It should be noted that SEM images are not necessarily representative
of the size distribution of the particles.
The normalized projected-surface-area distribution of the dust
particles was measured by using a laser particle sizer
that is based on diffraction without
making assumptions about the refractive indices of the materials
of the particles \citep[][]{Konert1997}.
From the projected-surface-area distribution, we derive the
number distribution and the volume distribution of the particles
because these distributions are often required for numerical
applications.
Figure~\ref{fig_size} shows the normalized number, volume, and
projected-surface-area distributions of our Martian analogue
palagonite particles as functions of $\log r$, with $r$ the radius
of a projected-surface-area equivalent sphere
\citep[for details on these size
distributions, see Appendix A of][]{2005JQSRT..90..191V}.
The number distribution of our palagonite particles was
approximated by a log-normal distribution, yielding an effective radius,
$r_{\rm eff}$, of 4.46~$\mu$m
and an effective variance, $v_{\rm eff}$, of 7.29.
Note that $v_{\rm eff}$ is a dimensionless parameter.
For precise definitions of $r_{\rm eff}$ and $v_{\rm eff}$
see \citet{1974SSRv...16..527H}, Eqs. (2.53) and (2.54), respectively.
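For reference, $r_{\rm eff}$ and $v_{\rm eff}$ are obtained from the number distribution $n(r)$ by weighting with the projected area $\pi r^2$. The following Python sketch evaluates them for a log-normal $n(r)$ with illustrative parameters (not the fitted values for sample 91-16):
\begin{verbatim}
import numpy as np

rg, sg = 0.5, 3.0             # geometric mean radius [um], geom. std.
r = np.logspace(-2, 3, 4000)  # radii [um]
n = np.exp(-0.5*(np.log(r/rg)/np.log(sg))**2) \
    / (r*np.log(sg)*np.sqrt(2*np.pi))  # log-normal number distribution

G = np.trapz(np.pi*r**2*n, r)                 # total projected area
r_eff = np.trapz(r*np.pi*r**2*n, r)/G         # Hansen & Travis (2.53)
v_eff = np.trapz((r-r_eff)**2*np.pi*r**2*n, r)/(r_eff**2*G)  # (2.54)
print(f"r_eff = {r_eff:.2f} um, v_eff = {v_eff:.2f}")
\end{verbatim}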
We are well aware of the fact that the sizes of real Martian dust particles
can be very different from those in our sample. Indeed,
sizes of dust particles on Mars will probably vary from location to location, and
from time to time, especially when in local or global storms, dust particles
are lifted up from the surface to be deposited somewhere else:
depending on the atmospheric turbulence, the particles in the Martian
atmosphere could have very different size distributions than those on
the surface.
The effective radius of 4.46~$\mu$m of our sample particles
is a factor of 2 to 3 larger than the values put forward
for the effective radius of Martian dust by \cite{2003JGRE.108i....1W},
who analyzed observations by the Thermal Emission Spectrometer (TES)
on-board the Mars Global Surveyor, and by \citet{1995JGR...100.5235P},
who analyzed observations performed by the Viking Lander.
In particular, \citet{1995JGR...100.5235P} derived
an effective radius of 1.85~$\pm~0.3~\mu$m.
From Pathfinder measurements, \citet{1999JGR...104.8987T}
derived an effective radius of $1.6 \pm 0.15$ $\mu$m.
\citet{2004Sci...306.1753L} derived values from observations by the
Mars Exploration Rovers Spirit and Opportunity that are similar to
those of \citet{1995JGR...100.5235P} and \citet{1999JGR...104.8987T}.
Although our sample particles thus seem to be rather large,
it should be noted that particle sizes as derived from
observations will depend on the observing method, e.g. looking at
diffuse skylight or at the surface, as well as on the retrieval method.
In particular, according to numerical simulations by \citet{2002SoSyR..36..367D}, effective radii
that are derived for spheroidal dust particles at visible wavelengths,
under the assumption that these particles are spherical, can be
significantly underestimated. At infrared wavelengths,
\citet{2003A&A...404...35M} and \citet{2001A&A...378..228F} show that absorption and emission
processes, even in the small size parameter regime (i.e. $2 \pi r_{\rm eff}/\lambda \leq 1$),
depend on the particle shape, too.
Clearly, because Martian dust is expected to show a great
variety in microphysical properties,
our results should simply be regarded as an example of what can be
expected for the scattering properties of irregularly shaped particles.
\section{The scattering matrix}
\label{section_scatteringmatrix}
\subsection{Definition of the scattering matrix}
\label{section_definitionmatrix}
The flux and state of polarization of a quasi-monochromatic
beam of light can be described by means of a so-called flux vector.
If such a beam of light is scattered by an ensemble of randomly oriented
particles, separated by distances larger than
their linear dimensions and in the absence of multiple scattering as
in our experimental set-up
(see Sect.~\ref{section_setup}),
the flux vectors of
the incident beam, $\pi{\bf\Phi_{0}}(\lambda)$, and scattered beam,
$\pi{\bf\Phi}(\lambda,\Theta)$, are, for each scattering direction,
related by a $4 \times 4$ matrix, as follows
\citep[][]{1957lssp.book.....V,2006JQSRT.100..437V}:
\begin{equation}
{\bf\Phi}(\lambda, \Theta)
= \frac{\lambda^{2}}{4\pi^{2}D^{2}}
\left( \begin{array}{c c c c}
F_{11}&F_{12}& F_{13} & F_{14}\\
F_{12}&F_{22}& F_{23} & F_{24}\\
-F_{13}&-F_{23}& F_{33} & F_{34}\\
F_{14}&F_{24}& -F_{34} & F_{44} \\
\end{array} \right) {\bf\Phi_{0}}(\lambda),
\label{eq_scatmat}
\end{equation}
where the first elements of the column vectors are
fluxes divided by $\pi$ and the other elements describe the state
of polarization of the beams by means of Stokes parameters.
Furthermore, $\lambda$ is the wavelength, and $D$ is the
distance between the ensemble of particles and the detector.
The scattering plane, i.e. the plane containing the directions
of the incident and scattered beams, is the plane of reference
for the flux vectors.
The matrix, ${\bf F}$, with elements $F_{ij}$ is called the
scattering matrix of the ensemble.
The scattering matrix elements $F_{ij}$
are dimensionless, and depend on the number of the particles and on their microphysical properties
(size, shape and refractive index), the wavelength
of the light, and the scattering direction.
For randomly oriented particles, the scattering
direction is fully described by the scattering angle $\Theta$,
the angle between the directions of propagation of the
incident and the scattered beams.
According to Eq.~(\ref{eq_scatmat}), a scattering matrix
has in general 10 different matrix elements.
For randomly oriented particles with equal
amounts of particles and their mirror particles, as we
can assume applies for the particles of our ensemble,
the four elements $F_{13}(\Theta)$, $F_{14}(\Theta)$,
$F_{23}(\Theta)$, and $F_{24}(\Theta)$ are zero over
the entire scattering angle range \citep[see][]{1957lssp.book.....V}.
This leaves us only six non-zero scattering matrix
elements, as follows
\begin{equation}
{\bf F}(\Theta)= \left[ \begin{array}{cccc}
F_{11}(\Theta) & F_{12}(\Theta) & 0 & 0 \\
F_{12}(\Theta) & F_{22}(\Theta) & 0 & 0 \\
0 & 0 & F_{33}(\Theta) & F_{34}(\Theta) \\
0 & 0 & -F_{34}(\Theta) & F_{44}(\Theta)
\end{array} \right],
\label{eq_scatteringmatrix}
\end{equation}
where $|F_{ij}(\Theta)/F_{11}(\Theta)| \leq 1$
\citep[][]{1986A&A...157..301H}.
For unpolarized incident light, matrix element $F_{11}(\Theta)$ is
proportional to the flux of the singly scattered light and is also called the phase function.
Also, for unpolarized incident light,
the ratio $-F_{12}(\Theta)/F_{11}(\Theta)$ equals the
degree of linear polarization of the scattered light.
The sign indicates the direction of polarization:
a negative degree of polarization indicates that the
scattered light is polarized parallel to the reference plane, whereas
a positive degree of polarization indicates that the light is polarized perpendicular to the
reference plane. In calculations for fluxes only and where light
is scattered only once, $F_{11}(\Theta)$ is the only matrix element
that is required. Ignoring the other matrix elements, and hence the state
of polarization of the light, in multiple
scattering calculations, usually leads to errors in calculated fluxes
\citep[see e.g.][]{1998GeoRL..25..135L,2002Icar..156..474M,2005A&A...444..275S}.
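As a minimal numerical illustration of these definitions (ours; the matrix entries are made-up values, not measurements, and the geometrical prefactor of Eq.~(\ref{eq_scatmat}) is omitted):
\begin{verbatim}
import numpy as np

def scatter_unpolarized(F):
    # Apply a block-diagonal scattering matrix to an unpolarized
    # incident flux vector; returns the relative scattered flux
    # (proportional to F11) and the degree of linear polarization,
    # -F12/F11, of the singly scattered light.
    phi0 = np.array([1.0, 0.0, 0.0, 0.0])
    phi = F @ phi0
    return phi[0], -phi[1] / phi[0]

F = np.array([[ 1.00, -0.10,  0.00,  0.00],    # illustrative values only
              [-0.10,  0.60,  0.00,  0.00],
              [ 0.00,  0.00, -0.40,  0.05],
              [ 0.00,  0.00, -0.05, -0.30]])
flux, dolp = scatter_unpolarized(F)            # -> (1.0, 0.1)
\end{verbatim}
The positive value of the second returned quantity corresponds to light polarized perpendicular to the reference plane.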
\subsection{The experimental set-up}
\label{section_setup}
Our measurements have been performed with the light scattering
experiment located in Amsterdam, the Netherlands
\citep[see e.g.][]{2005JQSRT..90..191V,
2003JQSRT..79..741H,Mishchenko2000_hovenier,
2001JGR...10617375V,2000A&A...360..777M}.
Figure~\ref{fig_setup} shows a sketch of the experimental set-up.
In our experimental apparatus, we use a HeNe laser ($\lambda$=632.8 nm, 5 mW)
as a light source. The laser light passes through a polarizer and an electro-optic
modulator. The modulated light is subsequently scattered by an ensemble of
randomly oriented particles from the sample, located in a
jet stream produced by an aerosol generator.
The scattered light may pass through a quarter-wave plate and an
analyzer, depending on the scattering matrix element of interest
\citep[for details see e.g.][]{2005JQSRT..90..191V},
and is then detected by a photomultiplier
tube which moves in steps along a ring with radius $D$ (see Eq.~(\ref{eq_scatmat}))
around the ensemble of particles; in this way a range of scattering angles
from 3$^\circ$ (nearly forward scattering) to 174$^\circ$
(nearly backward scattering) is covered in the measurements.
We cannot measure close ($< 3^\circ$) to the exact forward scattering
direction, because there our detector would intercept the unscattered
part of the incident beam, nor can we measure close ($< 6^\circ$) to the
exact backscattering direction, because there our detector would
interfere with the incoming beam of light.
A photomultiplier placed at a fixed position (i.e. at a fixed scattering angle)
is used to correct the measured scattered fluxes for time fluctuations in
the particle stream. It can safely
be assumed that during the measurements, the particles are in the
single scattering regime \citep[][]{2003JQSRT..79..741H}.
Due to the lack of measurements between 0$^\circ$ and 3$^\circ$
and between 174$^\circ$ and 180$^\circ$, we cannot measure the absolute
angular dependency of the phase function, e.g. normalized to unity when
averaged over all scattering directions. Instead, we normalize the
measured phase function to unity at a scattering angle of 30$^\circ$.
We present the other scattering matrix elements divided by
the original measured phase function. We thus present ratios of
elements of the scattering matrix instead of the elements themselves.
\subsection{Measurements}
\label{section_measuredscatteringmatrix}
Figure~\ref{fig_matrix} shows the six measured ratios of elements of the
scattering matrix of the Martian analogue palagonite particles that are
not identically zero (cf. Eq.~(\ref{eq_scatteringmatrix})), as functions
of the scattering angle $\Theta$, together with the experimental errors.
We have verified that the measured ratios of the elements of the scattering
matrix satisfy the Cloude coherency matrix test
\citep[][]{2004Hovenier} within the experimental errors.
We also verified that the other measured ratios of the elements of the
scattering matrix, i.e. $F_{13}(\Theta)/F_{11}(\Theta)$, $F_{23}(\Theta)/F_{11}(\Theta)$,
$F_{14}(\Theta)/F_{11}(\Theta)$, and $F_{24}(\Theta)/F_{11}(\Theta)$,
do not differ from zero by more than the experimental errors
(see Eq.~(\ref{eq_scatteringmatrix})).
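For completeness we note how such a test can be implemented. One common formulation (our own sketch, not the code used for the verification; it builds the Hermitian coherency matrix in the Pauli-spin basis and the tolerance handling is an assumption) checks that all eigenvalues are non-negative within the errors:
\begin{verbatim}
import numpy as np

PAULI = [np.eye(2),
         np.array([[1, 0], [0, -1]]),
         np.array([[0, 1], [1,  0]]),
         np.array([[0, -1j], [1j, 0]])]

def coherency_eigenvalues(M):
    # Coherency matrix T = (1/4) sum_ij M_ij (sigma_i kron sigma_j*);
    # M corresponds to a physically realizable ensemble if and only
    # if all eigenvalues of T are non-negative.
    T = sum(M[i, j] * np.kron(PAULI[i], PAULI[j].conj())
            for i in range(4) for j in range(4)) / 4.0
    return np.linalg.eigvalsh(T)

def passes_cloude_test(M, tol=0.0):
    # tol can be set from the propagated experimental errors
    return bool(np.all(coherency_eigenvalues(M) >= -tol))
\end{verbatim}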
To illustrate the influence of the particle shape on the scattering
behavior of the palagonite particles, the measurements in Fig.~\ref{fig_matrix}
are presented together with results of Lorenz-Mie calculations
\citep[][]{1957lssp.book.....V,1984A&A...131..237D} for homogeneous,
optically nonactive, spherical particles at a wavelength of 632.8~nm.
For the Lorenz-Mie calculations we employed the number size distribution,
$n(r)$, derived from the measured projected-surface-area distribution,
and the refractive index was fixed to $m=1.5+0.0005i$
(cf. Sect.~\ref{section_palagonite} and Fig.~\ref{fig_size}).
As can be seen in Fig.~\ref{fig_matrix}, the measured phase function, i.e.
$F_{11}(\Theta)/F_{11}(30^\circ)$, of the irregularly shaped Martian
analogue palagonite particles covers almost three orders of
magnitude between $\Theta=3^\circ$ and $\Theta=174^\circ$,
with a strong peak towards the smallest scattering angles (the
so-called forward scattering peak) and a smooth drop-off towards
the largest scattering angles. The measured phase
function is very flat for scattering over intermediate
($70^\circ < \Theta < 150^\circ$) and large ($\Theta > 150^\circ$) scattering angles.
The relatively flat appearance of the phase function of the palagonite
particles at large scattering angles appears to be a general behavior for
(terrestrial) irregularly shaped mineral particles with moderate
refractive indices
\citep[see e.g.][]{2001JGR...10617375V,2000A&A...360..777M,
2001JGR...10622833M}.
Our palagonite phase function resembles the phase functions measured in-situ
with the Viking \citep[][]{1995JGR...100.5235P} and Pathfinder missions
\citep[][]{1999JGR...104.8987T}.
In Sect.~\ref{section_summary}, we make a more detailed comparison between
our phase function and those presented by \citet{1999JGR...104.8987T}.
As mentioned before (see Sect.~\ref{section_definitionmatrix}),
the ratio $-F_{12}(\Theta)/F_{11}(\Theta)$ represents the degree
of linear polarization of the singly scattered light for incident
unpolarized light. For the irregularly shaped palagonite particles,
Fig.~\ref{fig_matrix} shows that this ratio has a characteristic
(positive) bell shape at intermediate scattering
angles and a small negative branch for $\Theta \gtrsim 160^\circ$.
For scattering angles larger than about 140$^\circ$, the
scattering angle dependence of our measured ratio
$-F_{12}(\Theta)/F_{11}(\Theta)$ resembles Earth-based
observations of the planetary phase angle dependence of the
degree of linear polarization of Mars
\citep[][]{2003SoSyR..37...87D,2005Icar..176....1S}
(the planetary phase angle equals 180$^\circ - \Theta$
for single scattering).
This suggests that the polarization opposition effect
that is observed at small phase angles for most solid solar
system bodies
\citep[see e.g.][and references therein]{2005Icar..179..490R}
can be explained, at least partly, by single scattering by small
irregular particles.
Here, it should be noted that the observations discussed by
\citet{2003SoSyR..37...87D} and \citet{2005Icar..176....1S}
pertain to light that has been scattered in the Martian
atmosphere combined with light that has been reflected by the surface.
It is thus not purely representative of airborne dust particles.
The most striking difference between the measured and the calculated ratios
$-F_{12}(\Theta)/F_{11}(\Theta)$ (Fig.~\ref{fig_matrix})
is their sign, hence the direction of polarization of the scattered
light for unpolarized incident
light. The irregularly shaped particles mostly yield scattered light
polarized perpendicular to the reference plane, while the
spherical particles yield scattered light polarized parallel to this plane.
Another difference is that for the irregularly shaped particles,
ratio $-F_{12}(\Theta)/F_{11}(\Theta)$ is a smooth, almost
featureless function of $\Theta$, while for the spherical
particles, the ratio shows strong angular features,
especially at large scattering angles.
Scattering matrix element ratio $F_{22}(\Theta)/F_{11}(\Theta)$
is often used as a measure for the non-sphericity of the scattering
particles, since for homogeneous, optically inactive spheres,
this ratio equals unity at all scattering angles.
As can be seen in Fig.~\ref{fig_matrix}, for the irregularly
shaped palagonite particles,
$F_{22}(\Theta)/F_{11}(\Theta)$ deviates significantly from unity at
all but the smallest scattering angles. Indeed, with increasing scattering
angle, it decreases to slightly
below 0.4 at $\Theta \approx 130^\circ$, and then increases
again to 0.5 when $\Theta$ approaches 180$^\circ$.
The scattering angle dependence measured for the palagonite
particles is similar in shape to that reported for irregularly shaped
mineral aerosol particles \citep[][]{2001JGR...10617375V},
and for e.g. various types of volcanic ashes
\citep[][]{2004JGRD..10916201M}. According to
\citet{2001JGR...10617375V}, the minimum value at intermediate
scattering angles and the maximum value at the largest scattering
angles are affected by the size and refractive index of the
particles.
Another indication of the shape of the scattering particles
is provided by the ratios $F_{33}(\Theta)/F_{11}(\Theta)$ and
$F_{44}(\Theta)/F_{11}(\Theta)$. As can also be seen in
Fig.~\ref{fig_matrix}, for homogeneous, optically
inactive spheres, $F_{33}(\Theta) \equiv F_{44}(\Theta)$
\citep[][]{2004Hovenier},
whereas we find significant differences between the
measured $F_{44}(\Theta)/F_{11}(\Theta)$ and
$F_{33}(\Theta)/F_{11}(\Theta)$ for the palagonite sample.
The ratio $F_{33}(\Theta)/F_{11}(\Theta)$ is zero at a smaller
scattering angle than $F_{44}(\Theta)/F_{11}(\Theta)$,
and has a lower minimum ($-0.5$ versus $-0.2$).
Indeed, for the irregularly shaped particles,
these ratios show an apparently typical behavior
for non-spherical particles \citep[][]{Mishchenko2000}, namely,
at large scattering angles, $F_{44}(\Theta)/F_{11}(\Theta)$
is larger than $F_{33}(\Theta)/F_{11}(\Theta)$.
Finally, scattering matrix element ratio
$F_{34}(\Theta)/F_{11}(\Theta)$ of the irregularly shaped
particles shows a shallow bell shape
with slightly negative branches for $\Theta < 30^\circ$
and for $\Theta > 165^\circ$.
This scattering angle dependence is commonly found
for irregularly shaped silicate particles
\citep[e.g.][]{2005JQSRT..90..191V,2001JGR...10617375V,
2000A&A...360..777M}.
Interestingly, whereas for the irregularly shaped particles,
$F_{34}(\Theta)/F_{11}(\Theta)$ is very similar to
$-F_{12}(\Theta)/F_{11}(\Theta)$, for the spherical particles,
these ratios differ strongly from each other, both in sign and
in shape, as can be seen in Fig.~\ref{fig_matrix}.
Comparison between the measured and the calculated scattering matrix
element ratios in Fig.~\ref{fig_matrix} supports the idea that
scattering by non-spherical particles generally leads to
smoother functions of the scattering angle than
scattering by spherical particles. This smooth scattering behavior
by irregularly shaped particles proves to be very difficult
to simulate numerically without taking into account
the irregular shape of the particles
\citep[][]{2003JGRD.108aAAC12N,2003JQSRT..79.1031N,
2003JPhD...36..915K}.
An electronic table of the measured ratios of the elements of the
scattering matrix will be available from the Amsterdam Light Scattering
Database \citep[][]{2005JQSRT..90..191V,2006JQSRT.100..437V}.
\subsection{The auxiliary scattering matrix}
\label{section_auxiliarymatrix}
It appears to be difficult to directly use the measured ratios of elements of the scattering
matrix in radiative transfer calculations, because of
the lack of measurements below $\Theta=3^\circ$ and above
$\Theta=174^\circ$.
In particular, it would be interesting to have the forward scattering
peak in the phase function since it contains a large fraction of the
scattered energy (see Fig.~\ref{fig_matrix}), and is thus very
important for the accurate modelling of scattered light
in e.g. planetary atmospheres.
In addition, the lack of measurements at small and large scattering
angles inhibits the normalization of scattering matrix elements
such that the average of the phase function over all scattering
directions equals unity.
With such a normalization and a value
for the single scattering albedo of the scattering particles,
one could model the absolute amount of radiation that is
scattered in a given direction.
To facilitate the use of the measured ratios of elements of the scattering
matrix in radiative transfer calculations, we construct from these a so--called {\em auxiliary
scattering matrix}, ${\bf F^{\rm au}}$, which satisfies (for $i,j=1$ to 4, with the exception of $i=j=1$)
\begin{equation}
F^{\rm au}_{ij}(\Theta) = \frac{F_{ij}(\Theta)}{F_{11}(\Theta)} F_{11}^{\rm au}(\Theta),
\label{equation_auxiliary}
\end{equation}
where the auxiliary phase function ${F_{11}^{\rm au}}$ is equal to
\begin{equation}
F_{11}^{\rm au}(\Theta) =
\frac{F_{11}(\Theta)}{F_{11}(30^{\circ})} F_{11}^{\rm au}(30^{\circ})
\hspace*{1cm} {\rm for} \hspace*{0.5cm}
3^\circ \leq \Theta \leq 174^\circ.
\label{ratios}
\end{equation}
This auxiliary phase function is normalized
according to
\begin{equation}
\frac{1}{4\pi} \int_{4\pi} F^{\rm au}_{11}(\Theta) \, d\omega = 1,
\label{eq_normalization}
\end{equation}
where $d \omega$ is an element of solid angle.
Combining Eq.~(\ref{ratios}) and Eq.~(\ref{eq_normalization}) and
setting $F^{\rm au}_{11}(30^\circ)$ equal to $1/C$ leads to
\begin{equation}
\frac{1}{4\pi} \int_{4\pi}
\frac{F_{11}(\Theta)}{F_{11}(30^\circ)} \, d\omega = C,
\label{eq_normalization2}
\end{equation}
where $C$ is a normalization constant.
This constant can in principle be obtained by
evaluating the integral on the left-hand side of Eq.~(\ref{eq_normalization2}),
provided the function to be integrated is known over the full range of scattering angles.
Therefore, we added artificial data points at $\Theta=0^\circ$ and $\Theta=180^\circ$ to the measured values
of $F_{11}(\Theta)/F_{11}(30^\circ)$. At $\Theta=180^\circ$,
the smoothness of the measured phase function allows us to simply add
an artificial data point to $F_{11}(\Theta)/F_{11}(30^\circ)$
by spline extrapolation \citep[][]{1992nrfa.book.....P}
of the measured data points.
Adding an artificial data point to the measured $F_{11}(\Theta)/F_{11}(30^\circ)$
at $\Theta=0^\circ$, is more complicated. Numerical tests with the calculated phase
function for the hypothetical homogeneous, spherical palagonite particles
(see Fig.~\ref{fig_matrix}) show that extrapolation of the
calculated phase function at $\Theta \leq 3^\circ$ towards
$\Theta=0^\circ$, using e.g. splines \citep[][]{1992nrfa.book.....P},
fails to reproduce the strength of the calculated forward scattering
peak. We thus decided not to extrapolate the measured phase function
from $\Theta=3^\circ$ towards $\Theta=0^\circ$. Instead we add an artificial
data point to the measured $F_{11}(\Theta)/F_{11}(30^\circ)$ at $\Theta=0^\circ$ using
the phase function that
we calculated for the projected-surface-area equivalent, homogeneous,
spherical particles.
The rationale for this approach, which is similar to that used by
\citet{2003JQSRT..79..903L} and \citet{2007JGRD..11213215M},
is that the forward scattering peak results mainly from the
diffraction of the incident
light. The strength of the diffraction peak and its scattering
angle dependence appear to depend mainly on the size of
the particles and to be fairly shape independent for
projected-surface-area equivalent convex particles in random orientation
\citep[][]{2002sael.book.....M}.
Because our normalization of the measured phase function, $F_{11}(\Theta)/F_{11}(30^\circ)$, at
$\Theta=30^\circ$ is rather arbitrary (we could have chosen
a different value of $\Theta$ for the normalization), we scale the phase function
as calculated for the spherical palagonite particles to the measured phase function.
For this, we use the following equation,
\begin{equation}
\frac{F_{11}(0^\circ)}{F_{11}(30^\circ)} =
\frac{F_{11}^{\rm s}(0^\circ)}{F_{11}^{\rm s}(3^\circ)}
\frac{F_{11}(3^\circ)}{F_{11}(30^\circ)},
\label{eq_exp200}
\end{equation}
with $F_{11}(0^\circ)/F_{11}(30^\circ)$ the artificial data point at
$\Theta=0^\circ$ and the superscript ``s'' indicating the phase function as calculated
for the spherical particles.
We now have data points available across the full scattering angle
range to evaluate the integral in Eq.~(\ref{eq_normalization2})
and to obtain the normalization constant $C$.
The numerical evaluation of this integral, however, appears to be
difficult because of the steep slope between $\Theta=0^\circ$ and
$\Theta=3^\circ$, where no measured data points are available.
Therefore, we have chosen a different method to obtain the normalization
constant $C$ in Eq.~(\ref{eq_normalization2}).
This method is based on the expansion of the measured phase function
$F_{11}(\Theta)/F_{11}(30^\circ)$, including the added artificial
data points at $\Theta=0^\circ$ and $\Theta=180^\circ$,
as a function of the scattering angle into so--called generalized spherical
functions \citep[][]{1963Gelfand,1983A&A...128....1H,1984A&A...131..237D,
2004Hovenier}. This method of obtaining expansion coefficients from scattering
matrix elements is explained in detail in Sect.~\ref{section_expansioncoefs}.
The expansion of $F_{11}(\Theta)/F_{11}(30^\circ)$ yields expansion coefficients
$\alpha^l_1$ (with $l \geq 0$). The first of these expansion coefficients,
$\alpha^0_1$, is equal to constant $C$ in Eq.~(\ref{eq_normalization2}) \citep[][]{2004Hovenier}.
Having obtained $C$ in this way, and thus $F_{11}^{\rm au}(30^{\circ})$,
we readily find $F_{11}^{\rm au}(\Theta)$ from
the measured ratio $F_{11}(\Theta)/F_{11}(30^\circ)$ and Eq.~(\ref{ratios}).
Next, given the auxiliary phase function, we derive the auxiliary matrix elements
$F_{ij}^{\rm au}(\Theta)$ for $i,j=1$ to 4 with the exception of $i=j=1$ from
the measured ratios $F_{ij}(\Theta)/F_{11}(\Theta)$, using Eq.~(\ref{equation_auxiliary}).
To obtain also the complete scattering angle range
for the other scattering matrix element ratios, we extrapolate the
measured $F_{ij}(\Theta)/F_{11}(\Theta)$ ($i,j=1$ to 4 with the exception of $i=j=1$)
towards $\Theta=0^\circ$ and $180^\circ$.
At these two scattering angles, the following equalities should hold
\citep[see Display 2.1 in][]{2004Hovenier}:
\begin{eqnarray}
& & F_{12}(0^\circ)/F_{11}(0^\circ) = F_{34}(0^\circ)/F_{11}(0^\circ) = 0, \label{eq6} \\
& & F_{22}(0^\circ)/F_{11}(0^\circ) = F_{33}(0^\circ)/F_{11}(0^\circ), \label{eq5} \\
& & F_{22}(180^\circ)/F_{11}(180^\circ) = -F_{33}(180^\circ)/F_{11}(180^\circ), \label{eq7} \\
& & F_{12}(180^\circ)/F_{11}(180^\circ) = F_{34}(180^\circ)/F_{11}(180^\circ) = 0, \label{eq8} \\
& & F_{44}(180^\circ)/F_{11}(180^\circ) = 1 - 2 F_{22}(180^\circ)/F_{11}(180^\circ). \label{eq9}
\end{eqnarray}
Following Eq.~(\ref{eq6}), ratios $F_{12}(0^\circ)/F_{11}(0^\circ)$ and
$F_{34}(0^\circ)/F_{11}(0^\circ)$ are set equal to zero. Following Eq.~(\ref{eq5}),
we use splines to extrapolate the ratios $F_{22}(\Theta)/F_{11}(\Theta)$
and $F_{33}(\Theta)/F_{11}(\Theta)$ towards $\Theta=0^\circ$,
and we set both $F_{22}(0^\circ)/F_{11}(0^\circ)$ and $F_{33}(0^\circ)/F_{11}(0^\circ)$
equal to the average of the two extrapolated values.
Ratio $F_{44}(0^\circ)/F_{11}(0^\circ)$ is obtained
by extrapolating (with splines) the ratio $F_{44}(\Theta)/F_{11}(\Theta)$
from $\Theta=3^\circ$ towards $\Theta=0^\circ$.
In the backward scattering direction, the measured scattering matrix
element ratios $F_{ij}(\Theta)/F_{11}(\Theta)$ appear to be smooth
functions of $\Theta$ (see Fig.~\ref{fig_matrix}). We use splines
\citep[][]{1992nrfa.book.....P} to extrapolate
$F_{22}(\Theta)/F_{11}(\Theta)$, and $F_{33}(\Theta)/F_{11}(\Theta)$
from $\Theta=174^\circ$ to $\Theta=180^\circ$.
Because $F_{22}(180^\circ)/F_{11}(180^\circ)$ should be equal to
$-F_{33}(180^\circ)/F_{11}(180^\circ)$ (see Eq.~(\ref{eq7})), we set both
$F_{22}(180^\circ)/F_{11}(180^\circ)$ and
$-F_{33}(180^\circ)/F_{11}(180^\circ)$ equal to the average of the two
extrapolated values.
Following Eq.~(\ref{eq8}), we set $F_{12}(180^\circ)/F_{11}(180^\circ)$ and
$F_{34}(180^\circ)/F_{11}(180^\circ)$ equal to zero,
and calculate $F_{44}(180^\circ)/F_{11}(180^\circ)$ using Eq.~(\ref{eq9})
with $F_{22}(180^\circ)/F_{11}(180^\circ)$.
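A compact sketch of this endpoint construction could look as follows (ours, not the code actually used; SciPy's cubic splines stand in for the spline routine of \citet{1992nrfa.book.....P}, and the variable names are placeholders):
\begin{verbatim}
import numpy as np
from scipy.interpolate import CubicSpline

def endpoint_ratios(theta, r22, r33, r44):
    # Sketch: spline extrapolation of the measured ratios F22/F11,
    # F33/F11 and F44/F11 (tabulated on 3..174 deg) to 0 and 180 deg,
    # with the forward/backward identities quoted above imposed.
    s22 = CubicSpline(theta, r22)    # CubicSpline extrapolates by default
    s33 = CubicSpline(theta, r33)
    s44 = CubicSpline(theta, r44)
    fwd = {'r12': 0.0, 'r34': 0.0}                  # zero at 0 deg
    fwd['r22'] = fwd['r33'] = float(0.5 * (s22(0.0) + s33(0.0)))
    fwd['r44'] = float(s44(0.0))
    r22b = float(0.5 * (s22(180.0) - s33(180.0)))   # average of r22 and -r33
    bwd = {'r12': 0.0, 'r34': 0.0,                  # zero at 180 deg
           'r22': r22b, 'r33': -r22b,
           'r44': 1.0 - 2.0 * r22b}                 # backward F44 identity
    return fwd, bwd
\end{verbatim}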
In the following, we will refer to the elements of our auxiliary
scattering matrix ${\bf F}^{\rm au}(\Theta)$ as
\citep[see][]{2004Hovenier}:
\begin{equation}
{\bf F}^{\rm au}(\Theta)= \left[ \begin{array}{cccc}
a_1(\Theta) & b_1(\Theta) & 0 & 0 \\
b_1(\Theta) & a_2(\Theta) & 0 & 0 \\
0 & 0 & a_3(\Theta) & b_2(\Theta) \\
0 & 0 & -b_2(\Theta) & a_4(\Theta)
\end{array} \right].
\label{eq_scatteringmatrixS}
\end{equation}
\section{Expansion coefficients}
\label{section_expansioncoefs}
\subsection{Definitions of the expansion coefficients}
For use in numerical radiative transfer algorithms, it is often
advantageous to expand the elements of a scattering matrix
as functions of the scattering angle into so--called
generalized spherical functions
\citep[][]{1963Gelfand,1983A&A...128....1H,1984A&A...131..237D,
2004Hovenier}.
The advantage of using the coefficients of this expansion,
the so--called expansion coefficients, instead
of the elements of a scattering matrix themselves is that it
can significantly speed up multiple scattering calculations,
which is of particular importance when polarization is taken
into account \citep[][]{2004Hovenier,1987A&A...183..371D}.
We indicate the generalized spherical functions by
$P_{m,n}^l(\cos{\Theta})$ with the indices $m$ and $n$
equal to +2, +0, -0, or -2,
and with $l \geq {\rm max} \{ |m|,|n| \}$.
Note that generalized spherical function $P_{0,0}^l$ is
simply a Legendre polynomial.
The expansion of the elements of auxiliary scattering
matrix ${\bf F}^{\rm au}(\Theta)$ (see Eq.~(\ref{eq_scatteringmatrixS}))
into generalized spherical functions is as follows:
\begin{eqnarray}
a_1(\Theta) &=&
\sum_{l=0}^\infty \alpha_1^l P_{0,0}^l(\cos{\Theta}),
\label{eq_exp1} \\
a_2(\Theta) + a_3(\Theta) &=&
\sum_{l=2}^\infty (\alpha_2^l + \alpha_3^l) P_{2,2}^l(\cos{\Theta}),
\label{eq_exp2} \\
a_2(\Theta) - a_3(\Theta) &=&
\sum_{l=2}^\infty (\alpha_2^l - \alpha_3^l) P_{2,-2}^l(\cos{\Theta}),
\label{eq_exp3} \\
a_4(\Theta) &=&
\sum_{l=0}^\infty \alpha_4^l P_{0,0}^l(\cos{\Theta}),
\label{eq_exp4} \\
b_1(\Theta) &=&
\sum_{l=2}^\infty \beta_1^l P_{0,2}^l(\cos{\Theta}),
\label{eq_exp5} \\
b_2(\Theta) &=&
\sum_{l=2}^\infty \beta_2^l P_{0,2}^l(\cos{\Theta}).
\label{eq_exp6}
\end{eqnarray}
Here, $\alpha_1^l$, $\alpha_2^l$, $\alpha_3^l$, $\alpha_4^l$,
$\beta_1^l$, and $\beta_2^l$ are the expansion coefficients.
For each value of integer $l$, the expansion coefficients can be
derived from the auxiliary scattering matrix elements
using the definitions of the generalized spherical functions
and their orthogonality relations \citep[see][]{2004Hovenier}.
A similar expansion in generalized spherical functions can be made
for any scattering matrix of the form given by Eq.~(\ref{eq_scatteringmatrix}).
The coefficient $\alpha^0_1$ is always equal to the average of the one-one element
over all directions \citep[][]{2004Hovenier}.
So, for the auxiliary phase function $a_1(\Theta)$ we have,
according to Eq.~(\ref{eq_normalization}), $\alpha^0_1 = 1$, and for the measured phase
function $F_{11}(\Theta)/F_{11}(30^{\circ})$ we have $\alpha^0_1 = C$,
as mentioned in Sect.~\ref{section_auxiliarymatrix}.
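To illustrate how $\alpha^0_1$, and hence $C$, follows from the expansion (a sketch of ours; it assumes the phase function is available as a smooth interpolant over the full angle range, and the use of Gauss--Legendre quadrature is our own choice), the orthogonality of the Legendre polynomials gives $\alpha_1^l = \frac{2l+1}{2}\int_{-1}^{1} a_1(u)\,P_l(u)\,{\rm d}u$ with $u=\cos\Theta$:
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss, legvander

def legendre_coefficients(phase_func, lmax, nquad=512):
    # Sketch: alpha_1^l = (2l+1)/2 * int_{-1}^{1} f(u) P_l(u) du,
    # u = cos(Theta), by Gauss-Legendre quadrature; phase_func is
    # assumed to take Theta in degrees.
    u, w = leggauss(nquad)
    f = phase_func(np.degrees(np.arccos(u)))
    P = legvander(u, lmax)                 # P[i, l] = P_l(u_i)
    l = np.arange(lmax + 1)
    return (2 * l + 1) / 2.0 * (P.T * f) @ w

# The normalization constant is the l = 0 coefficient:
# C = legendre_coefficients(measured_ratio, lmax)[0]
\end{verbatim}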
\subsection{The Singular Value Decomposition method}
To derive the expansion coefficients of the measured phase function,
including the data points added at $\Theta=0^\circ$ and $180^\circ$,
and of all six elements of the auxiliary scattering matrix, we write each of
Eqs.~(\ref{eq_exp1})--(\ref{eq_exp6})
in the following general form:
\begin{equation}
y(\Theta) = \sum_{l=n}^{m} \gamma^l X^l(\Theta),
\label{eq_exp7}
\end{equation}
where $y(\Theta)$ represents the value of
the measured phase function or an auxiliary scattering matrix
element at scattering angle $\Theta$, or,
in the case of $a_2$ and $a_3$, their sum, as in Eq.~(\ref{eq_exp2}),
or their difference, as in Eq.~(\ref{eq_exp3}).
The functions $X^l(\Theta)$ in Eq.~(\ref{eq_exp7})
are the basis functions,
for which we choose the appropriate generalized spherical functions
\citep[][]{2004Hovenier,1987A&A...183..371D}.
The parameters $\gamma^l$ in Eq.~(\ref{eq_exp7}) represent the
expansion coefficients, or, in the case of
$\alpha_2^l$ and $\alpha_3^l$, their sum (Eq.~(\ref{eq_exp2}))
or their difference (Eq.~(\ref{eq_exp3})).
Furthermore in Eq.~(\ref{eq_exp7}), $n$~equals~0 or~2, depending on
the auxiliary scattering matrix element under consideration
(see Eqs.~(\ref{eq_exp1})--(\ref{eq_exp6})), and, although
theoretically $m$~equals~$\infty$
(see Eqs.~(\ref{eq_exp1})--(\ref{eq_exp6})), in practice
$m$ is restricted to the number of scattering angles
at which values of the auxiliary scattering matrix elements
$y(\Theta)$ are available.
For a linear model such as that represented by Eq.~(\ref{eq_exp7}),
the merit function $\chi^2$ is generally defined as:
\begin{equation}
\chi^2 = \sum_{i=1}^{k}\left[
\frac{y(\Theta_i)-\sum_{l=n}^{m} \gamma^l
X^l(\Theta_i)}{\sigma_i} \right]^2,
\label{eq_exp100}
\end{equation}
where $k$ is the number of available data points and $\sigma_i$
is the error associated with data point $y(\Theta_i)$.
We use the Singular Value Decomposition (SVD) method
\citep[][]{1992nrfa.book.....P} to solve Eq.~(\ref{eq_exp100})
for the expansion coefficients $\gamma^l$, because
this method is only slightly susceptible to roundoff errors and
provides a solution that is the best approximation in
the least-squares sense, both for overdetermined systems (in which the
number of data points is larger than the required number of
expansion coefficients) and underdetermined systems (in which the
number of data points is smaller than the required number of
expansion coefficients).
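The weighted least-squares solution by SVD can be sketched as follows (our illustration; the cutoff handling and the names are assumptions, and the design matrix stands in for the generalized spherical functions):
\begin{verbatim}
import numpy as np

def svd_fit(X, y, sigma, rcond=1e-10):
    # Least-squares coefficients gamma^l minimizing the chi^2 merit
    # function defined above; X[i, l] = X^l(Theta_i) is the design
    # matrix of basis functions, y the data and sigma the errors.
    A = X / sigma[:, None]               # weight each row by 1/sigma_i
    b = y / sigma
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > rcond * s[0], 1.0 / s, 0.0)  # damp tiny values
    return Vt.T @ (s_inv * (U.T @ b))
\end{verbatim}
The same solution is returned by \texttt{np.linalg.lstsq} applied to the weighted system; the explicit decomposition merely makes the small-singular-value cutoff visible.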
To test the robustness and the quality of the fit method based
on the SVD method, we applied it to scattering matrix elements
that we calculated for the hypothetical, spherical palagonite
particles, and that are shown in Fig.~\ref{fig_matrix}
(note that our Mie-algorithm \citep[][]{1984A&A...131..237D} provides
matrix elements normalized according to
Eq.~(\ref{eq_normalization}), although in Fig.~\ref{fig_matrix},
the normalization of the elements has been adapted to correspond to that
of the measurements). For the test
we compare the matrix elements calculated with our Mie-algorithm
with the matrix elements obtained using the expansion
coefficients derived with the SVD method and
Eqs.~(\ref{eq_exp1})-(\ref{eq_exp6}).
For the relative errors in the matrix elements
we adopt the values of the experimental errors.
We tested two aspects of the application of the SVD method.
First, we applied the method to matrix elements calculated
at the same set of scattering angles as the measured ratios of matrix
elements, i.e. having a typical angular resolution of 5$^\circ$
and scattering angles ranging from $3^\circ$ to
174$^\circ$. Comparing the matrix elements obtained using
the expansion coefficients that were derived
with the SVD method with the directly calculated
matrix elements, we found that the relatively coarse
angular sampling and the lack of data points below
$\Theta=3^\circ$ gave rise to strong oscillations in the
matrix elements that were calculated from the derived
expansion coefficients. In addition, with the derived expansion
coefficients, we could not reproduce the strong forward
scattering peak in the phase function.
The lack of data points above $\Theta=174^\circ$ appeared to
be less of a problem, probably because of the smoothness
of the matrix elements at those scattering angles.
Second, we applied the SVD method to matrix elements calculated
at an angular resolution of 1$^\circ$ and covering the full
scattering angle range, i.e. from 0$^\circ$ to 180$^\circ$.
The matrix elements obtained using the expansion coefficients
derived in this way coincided, within the numerical precision, with
the directly calculated matrix elements.
From this we conclude that our implementation of the SVD method
is reliable, but that we have to apply it to the whole
scattering angle range, and with a relatively high
angular resolution.
Finally, we found that averaging two sets of derived
expansion coefficients, one of which has one coefficient
more than the other,
removes most of the remaining oscillations in the scattering
matrix element that is calculated from the coefficients.
\subsection{The derived expansion coefficients}
\label{section_derivedcoefs}
For deriving the expansion coefficients of the scattering
matrix elements of the Martian analogue palagonite particles,
we use the elements of the auxiliary scattering matrix
${\bf F}^{\rm au}(\Theta)$ (see Sect.~\ref{section_auxiliarymatrix}),
because they cover the whole scattering angle range and are normalized
according to Eq.~(\ref{eq_normalization}).
We increase the angular sampling of the auxiliary scattering matrix by adding
artificial data points by spline interpolation to the dataset between $\Theta=3^\circ$ and $174^\circ$.
Starting with 44 measured data points per matrix element, adding the
artificial data points results in 220 data points for each of the auxiliary scattering matrix elements.
This number of artificial data points includes the values at the forward and backward scattering
angles at respectively $\Theta=0^\circ$ and $\Theta=180^\circ$, as described in
Sect.~\ref{section_auxiliarymatrix}.
The 220 data points proved necessary to follow the steep slopes
of $a_1(\Theta)$, $a_2(\Theta)$, $a_3(\Theta)$ and $a_4(\Theta)$ between
$\Theta=0^\circ$ and $\Theta=3^\circ$,
where no artificial data points were added, without introducing unwanted oscillations.
As 220 data points are needed to obtain the auxiliary phase function $a_1(\Theta)$,
it proved practical to also extend $F_{12}(\Theta)/F_{11}(\Theta)$ and
$F_{34}(\Theta)/F_{11}(\Theta)$ to the same number of 220 data points,
so that Eq.~(\ref{equation_auxiliary}) can be applied straightforwardly to obtain
$b_1(\Theta)$ and $b_2(\Theta)$.
The optimal number of expansion coefficients for each of the elements
was obtained in an iterative process: unrealistic oscillations
at large scattering angles are suppressed when using a smaller number of
expansion coefficients,
while the fit of the steep phase function near $0^\circ$ is improved when
using a larger number of expansion coefficients.
Applying our SVD method to the auxiliary scattering matrix elements
leaves us with 185 expansion coefficients for each of $a_1(\Theta)$,
$a_2(\Theta)$, $a_3(\Theta)$, and $a_4(\Theta)$, 130 expansion
coefficients for $b_1(\Theta)$, and 46 expansion coefficients for $b_2(\Theta)$.
Figure~\ref{fig_expansioncoef} shows
the expansion coefficients $\alpha^l_1$, $\alpha^l_2$,
$\alpha^l_3$, $\alpha^l_4$, $\beta^l_1$, and $\beta^l_2$,
derived from the auxiliary scattering matrix
of the Martian analogue palagonite particles.
The expansion coefficients are plotted with error
bars that originate from the experimental errors in the measurements.
An electronic table of the expansion coefficients
will be available from the Amsterdam
Light Scattering Database\footnote{Website: http://www.astro.uva.nl/scatter}
\citep[][]{2005JQSRT..90..191V,2006JQSRT.100..437V}.
\subsection{The synthetic scattering matrix}
\label{section_syntheticscatteringmatrix}
Employing Eqs.~(\ref{eq_exp1})--(\ref{eq_exp6}) with
the expansion coefficients presented in Sect.~\ref{section_derivedcoefs},
we can now calculate, at an arbitrary angular resolution,
the so--called {\em synthetic scattering matrix}
\citep[see also][]{2004JGRD..10916201M}
which covers the complete scattering angle range,
i.e. from 0$^\circ$ to 180$^\circ$, and which
is normalized according to Eq.~(\ref{eq_normalization}).
Figure~\ref{fig_fitmeasurement} shows the calculated synthetic scattering
matrix elements.
The elements of the synthetic scattering matrix are also listed
in Table~\ref{table1} at an angular resolution of 1 to 5 degrees.
An electronic table of the synthetic scattering matrix elements
will be available from the Amsterdam
Light Scattering Database\footnotemark[\value{footnote}]
\citep[][]{2005JQSRT..90..191V,2006JQSRT.100..437V}.
As a check we used the synthetic scattering matrix to compute
the same ratios of elements as have been measured. We found the differences
to lie within the ranges of experimental uncertainties or very nearly so.
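For the two elements that are expanded in ordinary Legendre polynomials, the reconstruction is a one-line sum (a sketch of ours; the $b$-type elements additionally require the functions $P^l_{0,2}$ and $P^l_{2,\pm2}$, which we omit here):
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import legval

def synthetic_a1(theta_deg, alpha1):
    # a1(Theta) = sum_l alpha_1^l P_l(cos Theta), evaluated at an
    # arbitrary angular resolution (a4 works identically with alpha_4^l).
    return legval(np.cos(np.radians(theta_deg)), alpha1)

theta = np.arange(0.0, 180.5, 0.5)     # any desired resolution
# a1 = synthetic_a1(theta, alpha1)     # alpha1 from the SVD fit
\end{verbatim}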
\section{Summary and discussion}
\label{section_summary}
We present measured ratios of elements of the scattering matrix of
irregularly shaped Martian analogue palagonite particles
\citep[][]{1995JGR...100.5309R,1997JGR...10213341B} as functions of the scattering angle $\Theta$
($3^\circ \leq \Theta \leq 174^\circ$) at a wavelength of 632.8~nm.
Our measured ratios of scattering matrix elements differ strongly
from those calculated for homogeneous, spherical particles with
the same size and refractive index.
In particular, the measured phase function
(ratio $F_{11}(\Theta)/F_{11}(30^\circ)$)
shows a very strong (almost three orders of magnitude) forward scattering
peak and a smooth drop-off towards the largest scattering angles, whereas the
phase function of the spherical particles shows many more angular features,
especially in the backward scattering direction. Clearly, using scattering matrix
elements calculated for homogeneous, spherical particles in radiative transfer
calculations, e.g. for the interpretation of remote-sensing observations, when
irregularly shaped particles are to be expected, can lead to errors in retrieved
dust properties (microphysical parameters and/or optical thicknesses).
To facilitate the use of our measurements in radiative transfer calculations for
e.g. Mars, we have first constructed an auxiliary scattering matrix from
the measured scattering matrix. This auxiliary scattering matrix covers
the whole scattering angle range (i.e. from 0$^\circ$ to 180$^\circ$),
and its elements have been normalized such that
the average of the phase function over all scattering directions equals unity.
The value of the phase function at $\Theta=0^\circ$ has been computed from the phase
function calculated with Mie-theory for homogeneous, spherical particles with the same size
and composition. The normalization of the auxiliary phase function, and hence the
normalization of the other elements too, is obtained by applying a Singular
Value Decomposition (SVD) method to fit an expansion in generalized spherical
functions \citep[][]{1963Gelfand,2004Hovenier}
to the measured phase function, including artificial data points.
The first expansion coefficient yields the required
normalization constant. After the normalization of the auxiliary phase function,
the SVD method is applied to the auxiliary scattering matrix
and its expansion coefficients are obtained.
With the expansion coefficients, a synthetic scattering matrix is computed
for the complete scattering range. It is normalized so that the average
of its one-one element over all directions equals unity.
The synthetic scattering matrix elements can also straightforwardly be used in
radiative transfer calculations. The need to include all scattering matrix elements,
instead of only the phase function, is obvious for the interpretation
of polarization observations. However, even for flux calculations, all scattering
matrix elements should be used, because ignoring polarization, i.e. using only
the phase function, in multiple scattering calculations induces errors in calculated
fluxes \citep[e.g.][]{1998GeoRL..25..135L}. The use of only the phase function should
be limited to single scattering calculations for unpolarized incident light.
Figure~\ref{fig_map_vs_tomasko} shows a comparison between our synthetic phase function
and two phase functions presented by \citet{1999JGR...104.8987T}
as derived from diffuse skylight observations of the Imager for Mars Pathfinder. Each of the
phase functions has its own normalization. The phase functions of \citet{1999JGR...104.8987T} show
the same general angular behavior as our synthetic phase function: a strong forward
scattering peak and a smooth drop-off towards the largest scattering angles.
The forward scattering peak of our phase function appears to be stronger than
the peaks of the phase functions of \citet{1999JGR...104.8987T}. This can easily be due to
the size difference of the particles. The dust particles in our sample have an
effective radius that is a factor of 2 to 3 larger than that of \citet{1999JGR...104.8987T}
(4.46~$\mu$m versus $1.6 \pm 0.15~\mu$m). The slopes of the phase functions in
the backward scattering direction are very similar, and appear to be typical
for (terrestrial) irregularly shaped mineral particles with moderate refractive
indices \citep[see e.g.][]{2001JGR...10617375V,2000A&A...360..777M,2001JGR...10622833M}.
This smooth slope cannot easily be anticipated from Mie-theory.
The expansion coefficients, the synthetic scattering matrix elements,
and the measured ratios of elements of the scattering matrix of the Martian
analogue palagonite particles will all be available from the Amsterdam Light
Scattering Database
\citep[for details, see][]{2005JQSRT..90..191V,2006JQSRT.100..437V}.
The Amsterdam Light Scattering Database contains a collection of measured
scattering matrix element ratios, including information on particle sizes and
their composition, for various types of irregularly shaped particles. The SVD
method presented in this article can straightforwardly be applied to measured
scattering matrix elements of particles other than the Martian analogue palagonite
particles.
\ack
We are grateful to Ben Veihelmann for helping with the SEM
image of Fig.~\ref{fig_sem}.
It is a pleasure to thank Martin Konert of the Vrije Universiteit in
Amsterdam for performing the size distribution measurements and Michiel Min
of the University of Amsterdam for fruitful discussions.
\label{lastpage}
\section{Experimental results}\label{sec:results}
UED measurements of pulsed laser deposited VO$_2$ films (50 nm) reveal rich pump-fluence dependent dynamics up to the damage threshold of $\sim$40~mJ/cm$^2$ (35 fs, 800 nm, $f_{\textup{rep}}=50-200~$Hz). Figure~\ref{FIG:pump-probe}~(a) shows a typical one-dimensional powder diffraction pattern for equilibrium VO$_2$ in the $M_1$ phase and identifies the (200), (220) and (30$\bar{2}$) peaks. The (30$\bar{2}$) peak acts as an order parameter for the $M_1\rightarrow R$ transition, since it is forbidden by the symmetry of the $R$ phase, while the (200) and (220) peaks are present in all equilibrium phases. Consistent with previous work~\citep{Morrison2014a}, the pump-induced changes to diffracted intensity (Fig.~\ref{FIG:pump-probe}~(b), 23 mJ/cm$^2$) indicate two distinct and independent photo-induced structural transformations. The first is a rapid ($\tau_{(30\bar{2})}\approx 300~$fs) non-thermal melting of the periodic lattice distortion (dimerized V--V pairs) present in $M_1$, evident in Fig.~\ref{FIG:pump-probe}~(b) and Fig.~\ref{FIG:ued_thz_time_traces}~(a) as a suppression of the ($30\bar{2}$) and related peak intensities. The second is a slower ($\tau_{(200),(220)}\approx 2~$ps) transformation associated with a significant increase in the intensity of the (200), (220) and other low-index peaks whose time-dependence is also shown in Fig.~\ref{FIG:pump-probe}~(b) and Fig.~\ref{FIG:ued_thz_time_traces}~(a). As we will show, at low pump fluences ($\sim$3--8 mJ/cm$^2$) the slow process is exclusively observed, while at high pump fluences ($>35~$mJ/cm$^2$) the fast process dominates; these are independent structural transitions: the slow process does not follow the fast process.
\begin{figure}
\centering
\includegraphics[width = 0.4\textwidth]{fig2.eps}
\caption{Comparison of ultrafast electron diffraction and time-resolved terahertz spectroscopy measurements. (a) The red triangles show the suppression of the (30$\bar{2}$) peak ($\sim350~$fs, red line) associated with the $M_1$--$R$ transition (see Figure~\ref{FIG:pump-probe}). The blue circles show the intensity increase of the (200) and (220) peaks, which are associated with the slow re-organization ($\sim 2~$ps) and the formation of the $\mathscr{M}$ phase. (b) The transient change in THz conductivity (spectrally integrated from 2--6~THz, shown as grey circles) is well-described by a bi-exponential function comprised of both fast ($\sim350~$fs, red line) and slow ($\sim2.5~$ps, blue line) time constants which are in quantitative agreement with those of the structural transitions observed with UED.}
\label{FIG:ued_thz_time_traces}
\end{figure}
Complementary TRTS measurements were performed on the same samples under identical excitation conditions to determine the associated changes in the time-dependent complex conductivity, $\tilde{\sigma}(\omega,\tau)$ (see Fig.~\ref{FIG:pump-probe}~(c) and (d)). The pump-induced changes in real conductivity, $\sigma_1(\omega,\tau)$, over the $\sim$2--20 THz frequency range (Fig.~\ref{FIG:pump-probe}~(d)) also exhibit fast ($\Delta\sigma_1^{\textup{fast}}$) and slow ($\Delta\sigma_1^{\textup{slow}}$) dynamics, consistent in terms of timescales and fluence dependence with those described above for the UED measurements and with similar measurements performed on sputtered VO$_2$ films in the 0.5--2 THz window~\cite{Cocker2012}. Additional structure at higher frequencies is due to optically active phonons associated with O--cage vibrations around V atoms~\cite{Kubler2007}. We connect the observed THz response to the two structural transformations by focusing on the integrated spectral region from 2--6~THz, which includes exclusively electronic contributions to the conductivity (Drude-like) and omits phonon resonances~\cite{Kubler2007,Pashkin2011,wall2012}. Figure~\ref{FIG:ued_thz_time_traces}~(b) shows an example of the transient real conductivity measured at 22~mJ/cm$^2$ along with the fast and slow exponential components plotted individually. We find that these time constants are in excellent agreement with those of the fast and slow processes determined from the UED measurements (Fig.~\ref{FIG:ued_thz_time_traces}). This correspondence holds over the entire range of fluences investigated.
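The decomposition into fast and slow components can be sketched as follows (our illustration; the rising-exponential model form, the parameter names, and the starting values are assumptions, with time constants seeded near the reported values):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def biexp_rise(t, A_f, tau_f, A_s, tau_s):
    # Assumed model: transient conductivity as a sum of fast and
    # slow rising exponentials (t in ps, t >= 0).
    return A_f * (1 - np.exp(-t / tau_f)) + A_s * (1 - np.exp(-t / tau_s))

# t, dsigma = ...               # delay axis, 2-6 THz integrated signal
# p0 = (1.0, 0.35, 1.0, 2.5)    # seeds near 350 fs and 2.5 ps
# popt, pcov = curve_fit(biexp_rise, t, dsigma, p0=p0)
\end{verbatim}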
\subsection{Structure of the monoclinic metallic phase}\label{sec:structure}
Since its discovery~\citep{Morrison2014a}, the structure of the photoinduced $\mathscr{M}$ phase and its relationship to the parent $M_1$ phase have remained unclear. Here we use measured UED intensities to determine the changes in the electrostatic crystal potential, $\Phi(\vec{x})$, associated with the transformation between the $M_1$ and $\mathscr{M}$ phases. The centro-symmetry of the monoclinic and rutile phases provides a solution to the phase problem~\cite{Elsaesser2010} and allows for the reconstruction of the full three-dimensional real-space electrostatic potential from each one-dimensional diffraction pattern obtained using UED.
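Schematically (a sketch of ours, not the actual reconstruction code; the sign assignment and array layout are assumptions), centro-symmetry reduces each structure-factor phase to a sign, so the potential follows from a cosine Fourier synthesis of the measured intensities:
\begin{verbatim}
import numpy as np

def potential_map(x, G, I_G, signs):
    # Sketch: Phi(x) ~ sum_G s_G sqrt(I_G) cos(G.x), with s_G = +/-1
    # fixed from a reference structure model; x has shape (npts, 3)
    # and G shape (nrefl, 3) in reciprocal coordinates.
    amplitudes = signs * np.sqrt(I_G)     # real structure factors
    return np.cos(x @ G.T) @ amplitudes
\end{verbatim}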
Figure~\ref{FIG:atomic_potential_maps} shows slices of $\Phi(\vec{x})$ for VO$_2$ in the $R$ (b) and $M_1$ (c) phases obtained using this procedure. The slices shown are aligned vertically along the rutile $\vec{c}_R$ axis, and horizontally cut the unit cell along $\vec{a}_R + \vec{b}_R$ as indicated in the 3D structural model of $M_1$ VO$_2$ (Fig.~\ref{FIG:atomic_potential_maps}~(f)). In this plane, adjacent vanadium chains are rotated by 90$^{\circ}$, with dimers tilting either in or orthogonal to the plane of the page as indicated in Fig.~\ref{FIG:atomic_potential_maps}~(c) and (f). The lattice parameters obtained from these reconstructions are in excellent agreement with published values for the two equilibrium phases (Fig.~\ref{FIG:atomic_potential_maps}~(a)). The autocorrelation of $\Phi(\vec{x})$ is also in quantitative agreement with the Patterson function computed directly from the UED data (See Fig.~S6 in~\cite{SM}).
\begin{figure*}
\centering
\includegraphics[width = 0.8\textwidth]{fig3.eps}
\caption{A real-space view of the photoinduced changes to $\Phi(\boldit{x})$ in VO$_2$. The plane of interest is spanned by $\boldit{c}_{R}$ and $\boldit{a}_{R}+\boldit{b}_{R}$. (a) Line-cuts of $\Phi(\boldit{x})$ along the red and blue dashed lines shown in (b) and (c) respectively. (b) $\Phi(\boldit{x})$ for $R$--phase VO$_2$ showing the absence of V--V dimerization and tilting. (c) $\Phi(\boldit{x})$ for $M_1$--phase VO$_2$ showing V--V dimerization and tilting illustrated by solid black (in-plane tilt) and red (out-of-plane tilt) lines which are also illustrated in the 3D structure ((f)). The line cuts in (a) show the expected undimerized V--V and dimerized V--V lengths of 2.9~\AA~and 2.6~\AA~respectively. (d) $\Delta\Phi(\boldit{x})$ for $\mathscr{M}$--phase VO$_2$ 10~ps after photoexcitation at a fluence of 6~mJ/cm$^2$. V--V dimerization from (c) is preserved and O atoms in the octahedra nearest to V atoms display an increase in $\Phi(\boldit{x})$ resulting in anti-ferroelectric order up the $\boldit{c}_{R}$ axis indicated by arrows. (e) Line-cut of $\Delta\Phi(\boldit{x})$ shown in (d) along the dashed black line which intersects a chain of O atoms. The anti-ferroelectric order is seen as an increase in $\Phi(\boldit{x})$ on alternating O atoms. (f) 3-dimensional structure of VO$_2$. V atoms appear as large red spheres and O atoms are grey (white) as per the anti-ferroelectric ordering depicted in (d) and (e). In-plane (out-of-plane) V--V dimers are connected by black (red) lines. (g) 3-dimensional structure of VO$_2$ looking down the $c_R$ axis.
}
\label{FIG:atomic_potential_maps}
\end{figure*}
In Fig.~\ref{FIG:atomic_potential_maps}~(d) the changes in $\Phi(\vec{x})$ associated with the $M_1$--$\mathscr{M}$ transition are revealed. This map is computed from the measured $\Delta I_{\vec{G}}$ between the $\mathscr{M}$ and $M_1$ phases 10 ps after photoexcitation at 6 mJ/cm$^2$. The preservation of $M_1$ crystallography is clear, \textit{i.e.}, V--V dimerization and tilting along the $\vec{c}_R$ axis. Also evident is the transition to a novel 1D anti-ferroelectric charge order along $\vec{c}_R$. In the equilibrium phases all oxygen atoms are equivalent, but in the $\mathscr{M}$ phase there is a periodic modulation in $\Phi(\vec{x})$ at the oxygen sites along the $\vec{c}_R$ axis indicated by arrows. This modulation is commensurate with the lattice constant (Fig.~\ref{FIG:atomic_potential_maps}~(c)-(e)). The oxygen atoms exhibiting the largest changes are those associated with the minimum V--O distance in the octahedra and, therefore, the V--V dimer tilt. This emphasizes the importance of the lattice distortion to the emergence of the $\mathscr{M}$ phase. The anti-ferroelectric lattice distortion in $M_1$ was already emphasized by Goodenough~\cite{GOODENOUGH1971} in his seminal work on VO$_2$. Significant changes in electrostatic potential are also visible between vanadium atoms in the octahedrally-coordinated chains along $\vec{c}_R$, consistent with a delocalization or transfer of charge from the V--V dimers to the region between dimers. All of these observations suggest that the $\mathscr{M}$ phase emerges from a collective reorganization in the electron system alone.
\subsection{Fluence dependence}\label{sec:fluence_dependence}
We have established that there are two qualitatively distinct ultrafast photo-induced phase transitions in vanadium dioxide. The pump-fluence dependence of the sample response, specifically the heterogeneous character of the film following photoexcitation (due to both $M_1\rightarrow\mathscr{M}$ and $M_1\rightarrow R$ transformations) and the corresponding changes in conductivity, is addressed in this section. UED intensities report on the fluence dependence of both structural phase transitions (Fig.~\ref{FIG:ued_thz_measurements}~(a)). As described earlier, the change in the ($30\bar{2}$) peak intensity provides an order parameter exclusively for the $M_1\rightarrow R$ transition, while the (200) and (220) peak intensities report on both $M_1\rightarrow\mathscr{M}$ and $M_1\rightarrow R$ transformations. Measurements of the (30$\bar{2}$) peak intensity (Fig.~\ref{FIG:ued_thz_measurements}~(a) red triangles) clearly demonstrate a fluence threshold of $\sim8~$mJ/cm$^2$ for the $M_1\rightarrow R$ transformation that is consistent with previous work~\cite{Morrison2014a,Baum2007,Cavalleri2001}. Above this threshold the suppression of the (30$\bar{2}$) peak increases approximately linearly with fluence up to a magnitude greater than $75\%$ at $\sim30$~mJ/cm$^2$, a result that is inconsistent with a ``two-step'' model that involves fast V--V dimer dilation followed by slow dimer rotation. Complete V--V dimer dilation yields a maximum (30$\bar{2}$) peak suppression of $\sim$50\%. Instead, this data is consistent with a picture where the PLD of the $M_1$ phase simply melts in $\sim300~$fs. The photoinduced fraction of $R$--phase VO$_2$ reaches $\sim75\%$ of the film on this timescale at the highest pump fluences reported. The (200) and (220) peaks (Fig.~\ref{FIG:ued_thz_measurements}~(a) blue and green circles) show a more complicated fluence dependence, reaching a maximum change in intensity in the 20~mJ/cm$^2$ range. At the highest excitation fluences reported the intensity changes in the (200) and (220) peaks correspond to the relative increase expected for the $R$ phase compared to the $M_1$ phase of VO$_2$. The maximum at $\sim20$~mJ/cm$^2$ is entirely due to the presence of the $\mathscr{M}$ phase as we demonstrate by converting the changes in UED intensities to phase volume fractions~\cite{SM}.
\begin{figure*}
\centering
\includegraphics[width = 0.8\textwidth]{fig4.eps}
\caption{(a) Fluence dependence of diffraction peak intensity changes ($\Delta I_{hkl}$) determined by ultrafast electron diffraction. Fast non-thermal melting of $M_1\rightarrow~R$ crystallites occurs above a threshold fluence of $\sim 8~$mJ/cm$^2$ and yields a reduction of the (30$\bar{2}$) peak intensity (red triangles). Solid lines serve as a guide to the eye. The slower charge re-organization transition of $M_1\rightarrow~\mathscr{M}$ crystallites increases the (200) and (220) peak intensities (blue and green circles) which exhibit a maximum increase in the vicinity of 20~mJ/cm$^2$. At higher fluence the increase becomes smaller as the sample becomes predominantly $R$. The blue and green horizontal dashed lines represent the difference in the (200) and (220) peak amplitudes between the equilibrium $R$ and $M_1$ phases. (b) Phase volume fractions for the $\mathscr{M}$ (blue line) and $R$ (red line) phases calculated using the ultrafast electron diffraction data shown in (a) via the volume phase fraction model (see supplementary material~\cite{SM}). Red triangles are the ($30\bar{2}$) data points from (a) and blue circles are the (200) data points in (a) (scaled for clarity) with the $M_1\rightarrow R$ phase transition contribution subtracted. (c) Fluence dependence of the fast $\Delta\sigma_1^{\textup{fast}}$ and slow $\Delta\sigma_1^{\textup{slow}}$ components of the transient terahertz optical conductivity (red and blue respectively). Solid lines serve as a guide to the eye. $\Delta\sigma_1^{\textup{fast}}$ increases steadily and corresponds to the formation of $R$ crystallites, and $\Delta\sigma_1^{\textup{slow}}$ attains a maximum at 25~mJ/cm$^2$ and corresponds to the formation of $\mathscr{M}$ crystallites. }
\label{FIG:ued_thz_measurements}
\end{figure*}
We denote $F_R$ as the phase volume fraction for the $R$ phase and $F_{\mathscr{M}}$ for the $\mathscr{M}$ phase. The results of the model are shown in Fig.~\ref{FIG:ued_thz_measurements}~(b). For fluences below the structural IMT fluence threshold of $\sim 8~$mJ/cm$^2$, we observe clearly that only a small percentage ($\sim 10\%$) of $\mathscr{M}$ crystallites have been formed by photoexcitation. As the fluence increases, the photoexcitation of $R$ crystallites begins at the threshold and increases roughly linearly afterwards. The $\mathscr{M}$ phase achieves a maximum in the vicinity of 20~mJ/cm$^2$ (consistent with Fig.~\ref{FIG:ued_thz_measurements}~(a)) where we determine that $F_{\mathscr{M}}=45\pm13\%$. At greater fluences, $F_{\mathscr{M}}$ decreases as the material becomes increasingly $R$ phase due to stronger photoexcitation. The data points shown in Fig.~\ref{FIG:ued_thz_measurements}~(b) as blue circles are an average of the (200) and (220) data points from (a) with the contribution from the $M_1\rightarrow R$ phase transition subtracted.
We obtain quantitatively consistent results for the fluence dependence of the transient conductivity obtained by TRTS, firmly establishing a link between the differential structure and differential electronic response. Figure~\ref{FIG:ued_thz_measurements}~(c) shows the fluence dependence of the fast ($\Delta\sigma_1^{\textup{fast}}$) and slow ($\Delta\sigma_1^{\textup{slow}}$) conductivity terms. The $\Delta\sigma_1^{\textup{fast}}$ component corresponds to the conductivity response associated with the transition from $M_1\rightarrow R$ as it increases steadily with fluence in accordance with $F_R$ shown in Fig.~\ref{FIG:ued_thz_measurements}~(b). Furthermore, we clearly observe that $\Delta\sigma_1^{\textup{slow}}$ achieves a maximum at a fluence of $\sim$20~mJ/cm$^2$ beyond which it decreases, consistent with the behaviour of the (200) and (220) diffraction peaks and $F_{\mathscr{M}}$ shown in Fig.~\ref{FIG:ued_thz_measurements}~(a) and (b). By analyzing the conductivity terms in an effective medium model, we find good agreement for $F_{\mathscr{M}}$ in the metallic limit~\cite{SM}.
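The conversion from peak intensity changes to volume fractions can be sketched as a linear mixture (our simplified illustration of the model detailed in the supplementary material~\cite{SM}, which may differ; the variable names are placeholders):
\begin{verbatim}
import numpy as np

def phase_fractions(dI_obs, dI_R, dI_M):
    # Sketch: least-squares (F_R, F_M) assuming the observed relative
    # intensity changes are a linear mixture of those expected for
    # complete M1 -> R and M1 -> M transformations, one entry per
    # reflection.
    A = np.column_stack([dI_R, dI_M])
    f, *_ = np.linalg.lstsq(A, dI_obs, rcond=None)
    return np.clip(f, 0.0, 1.0)          # restrict to physical fractions
\end{verbatim}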
\begin{figure}
\centering
\includegraphics[width = 0.4\textwidth]{fig5.eps}
\caption{Time constants for the photoinduced phase transitions in VO$_2$. Time constants for the slow ((200) blue, (220) green) and fast ((30$\bar{2}$), black) peak dynamics as determined from the UED data. THz time-domain spectroscopy measurements of the time constants pertaining to the slow and fast components of $\sigma_1$ are depicted by grey and red diamonds, respectively. The red shaded area represents the temporal region ($350\pm150$~fs) associated with the photo-induced structural phase transition from $M_1\rightarrow R$, which dominates at high pump fluence. The solid blue line is an exponential fit to the (200) peak time constants $\tau_{\textup{slow}}$. \textbf{Inset:} Plot of $\ln\left(h\tau_{\textup{slow}}^{-1}/k_B T_e\right)$ vs. inverse electron temperature $1/k_B T_e$ using values of $\tau_{\textup{slow}}$. The solid line is a fit to~\eqref{eqn:EyringPolanyi}, from which the activation energy $E_A$ and entropy $\Delta S^{\ddagger}$ are determined.}
\label{FIG:time_consts}
\end{figure}
\subsection{Activation energy and kinetics}\label{sec:kinetics}
The $M_1\rightarrow R$ and $M_1\rightarrow \mathscr{M}$ transitions exhibit qualitatively different kinetic behaviour, as evidenced by the fluence dependence of the time constants $\tau_{\textup{fast}}$ and $\tau_{\textup{slow}}$ obtained from both UED and TRTS (Fig.~\ref{FIG:time_consts}). The time constant for the $M_1\rightarrow R$ transition ($\tau_{\textup{fast}}$), captured by the dynamics of the (30$\bar{2}$) peak in the UED measurements and by $\tau_{\textup{fast}}$ in the THz measurements, is $350\pm100$~fs, \textit{independent} of fluence. This demonstrates that the photo-induced $M_1\rightarrow R$ transition -- the melting of the periodic lattice distortion -- is non-thermal and barrier free. The results for the $R$--phase volume fraction (Fig.~\ref{FIG:ued_thz_measurements}), however, also show that the excitation threshold for this non-thermal phase transition is heterogeneous in PLD grown films, depending on local crystallite size and strain conditions. This was also previously observed by Zewail and co-workers using UEM~\cite{Lobastov2007} and by others using nanoscopy~\cite{ocallahan2015,Donges2016}. The $M_1\rightarrow \mathscr{M}$ time constant, conversely, decreases significantly with pump fluence, as seen in the (200) and (220) peak dynamics from UED and in $\tau_{\textup{slow}}$ from TRTS. The exponential increase in the $M_1\rightarrow \mathscr{M}$ rate with excitation energy deposited in the electron system strongly suggests that the $M_1\rightarrow \mathscr{M}$ transition is an activated process. We can extract the activation energy $E_A$ from this data by determining the electronic excitation energy $k_BT_e$ as a function of pump fluence~\cite{SM} and invoking the Eyring-Polanyi equation from transition state theory~\cite{EyringPolyani}
\begin{equation}\label{eqn:EyringPolanyi}
\ln \left({\frac {h \tau_{\textup{slow}}^{-1}}{k_B T_e}}\right)=-\frac{E_A}{k_B T_e}+\frac{\Delta S^{\ddagger }}{k_B},
\end{equation}
where $\Delta S^{\ddagger}$ is the entropy of activation. We take the values of $\tau_{\textup{slow}}$ for the (200) peak shown in Fig.~\ref{FIG:time_consts} and plot $\ln\left(h\tau_{\textup{slow}}^{-1}\big/k_B T_e\right)$ vs. $1\big/k_BT_e$ which is shown in the inset of Fig.~\ref{FIG:time_consts}. By fitting to Eqn.~\eqref{eqn:EyringPolanyi}, we determine $E_A=304\pm 109~$meV. This describes a fundamental property of the photo-induced $M_1\rightarrow\mathscr{M}$ transition. Furthermore, the fluence required to deposit $E_A$ per unit cell is $\mathscr{F}\approx 3.7~$mJ/cm$^2$, which is the value previously attributed to the $M_1\rightarrow R$ IMT threshold~\cite{Becker1994,Cavalleri2001,Kubler2007}. This is also in agreement with the fluence threshold extracted from the low fluence data points in Fig.~\ref{FIG:ued_thz_measurements}~(a).
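In practice, this amounts to a simple linear regression in the transformed coordinates. The sketch below illustrates such a fit with SciPy; the arrays of time constants and electron energies are placeholders, not the measured values, which come from the UED data and the analysis in the supplementary material:
\begin{verbatim}
import numpy as np
from scipy.stats import linregress
from scipy.constants import h, e  # Planck constant, elementary charge

# Placeholder values of tau_slow (s) and k_B*T_e (converted from eV to J):
tau_slow = np.array([5.0e-12, 2.0e-12, 1.0e-12, 0.6e-12])
kT_e = np.array([0.20, 0.30, 0.40, 0.50]) * e

# Eyring-Polanyi: ln(h/(tau*kT)) = -E_A * (1/kT) + dS/k_B
y = np.log(h / (tau_slow * kT_e))
fit = linregress(1.0 / kT_e, y)
E_A_meV = -fit.slope / e * 1e3   # slope is -E_A (in joules)
dS_over_kB = fit.intercept       # entropy of activation in units of k_B
print(E_A_meV, dS_over_kB)
\end{verbatim}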
\section{Discussion and conclusion}\label{sec:discussion}
We have demonstrated that photoexcitation of $M_1$ VO$_2$ yields a complex, heterogeneous, multiphase film whose structure and properties are both time and fluence dependent. The character of the fluence-dependent transformation is summarized in Fig.~\ref{FIG:summary}. At pump fluences below $\sim3$~mJ/cm$^2$ there is no long-lived ($>1~$ps) transformation of the $M_1$ structure, and VO$_2$ behaves like other Mott insulators insofar as optical excitation induces a relatively small, impulsive increase in conductivity followed by a complete recovery of the insulating state~\cite{Pashkin2011,Cocker2012,Morrison2014a,Wegkamp2014}. Above $\sim3$~mJ/cm$^2$, however, photoexcitation stimulates a phase transition in the electron system that stabilizes metallic properties through an orbitally selective charge re-organization: the $\mathscr{M}$ phase. Between 3--8~mJ/cm$^2$ photoexcitation exclusively yields the $\mathscr{M}$ phase, which populates $\sim$15--20\% of the film by 8~mJ/cm$^2$. In this fluence range, time-resolved photoemission experiments show a complete collapse of the bandgap~\cite{Wegkamp2014}, TRTS experiments show a dramatic increase in conductivity~\cite{Pashkin2011,Cocker2012} and optical studies show large changes in the dielectric function~\cite{Cavalleri2001,wall2012,Jager2017}, all of which are persistent, long-lived and characteristic of a phase transition. Given the nature of the equilibrium phase diagram, these observations were previously interpreted as evidence of the $M_1\rightarrow R$ transition. The $M_1\rightarrow R$ transition, however, exhibits a minimum fluence threshold of $\sim$8--9~mJ/cm$^2$, consistent with surface sensitive experiments~\cite{Baum2007,Lobastov2007} and coherent phonon investigations~\cite{wall2012}. Above 8~mJ/cm$^2$ photoexcitation yields a heterogeneous response with both $\mathscr{M}$ and $R$ phase fractions increasing with fluence up to approximately 20~mJ/cm$^2$, where each phase occupies $\sim50\%$ of the film. At higher fluences $M_1\rightarrow R$ dominates and the $\mathscr{M}$ phase occupies a decreasing proportion of the film.
Non-thermal melting as a route to the control of material structure and properties with femtosecond laser excitation has been known for some time, and there are examples in several material classes. Much more novel is the $M_1\rightarrow \mathscr{M}$ transition, which has no equilibrium analog and represents a new direction for using optical excitation to control the properties of strongly correlated materials. The $M_1\rightarrow \mathscr{M}$ transition is thermally activated and does not involve a significant lattice structural component, representing a phase transition in the electron system alone. Our results are consistent with the recent computations of He and Millis~\cite{Millis2016}, which indicate that an orbital selective transition can be driven in $M_1$ VO$_2$ through the increase in electron temperature following femtosecond laser excitation. This transition depletes the occupancy of the V--$3d_{x^2-y^2}$ band that is split by the V--V dimerization in favour of the V--$3d_{xz}$ band that mixes strongly with the O--$2p$ orbitals due to the anti-ferroelectric tilting of the V--V dimers (Fig.~\ref{FIG:atomic_potential_maps}~(f)). The $M_1$ bandgap collapses along with this transition, yielding a metallic phase. The three salient features of this picture are in agreement with our observations: thermal activation on the order of $100~$meV, orbital selection and bandgap collapse to a metallic phase. Of interest is the fact that depletion of the V--$3d_{x^2-y^2}$ band, where states are expected to be localized on the V--V dimers, does not seem to significantly lengthen the V--V dimer bond. The question arises whether this phenomenon can be entirely understood within a picture that treats $M_1$ VO$_2$ as a $d^1$ system, or whether more than a single V-$3d$ electron is involved, as DMFT calculations suggest~\cite{Weber2012}. Such DMFT results from Weber~\cite{Weber2012} suggest that $M_1$ VO$_2$ is a paramagnetic metal with antiferroelectric character, like that shown in Fig.~\ref{FIG:atomic_potential_maps}~(d), when intra-dimer correlations are not included.
The combination of UED and TRTS measurements also makes it possible to address the question of a structural bottleneck associated with the photoinduced IMT in $M_1$ VO$_2$. Here we definitively show that the timescale associated with the IMT, i.e.\ the timescale associated with the emergence of metallic conductivity similar to that of the equilibrium metallic phase, is determined by that of the structural phase transitions (Figs.~\ref{FIG:ued_thz_time_traces} and~\ref{FIG:time_consts}). Following photoexcitation at sufficient fluence, there is overwhelming evidence of an impulsive collapse of the bandgap in $M_1$ VO$_2$ with equally rapid changes in optical properties. However, the emergence of metallic transport properties occurs on the same timescale as the structural phase transitions. Clearly the localization-delocalization transition that leads to a five order of magnitude increase in conductivity is inseparable from the structural phase transitions.
In conclusion, we have combined UED and TRTS measurements of VO$_2$ and decoupled the concurrent structural phase transitions along with their contributions to the multiphase heterogeneity of the sample following photoexcitation. We have shown that the monoclinic metal phase is the product of a thermally activated transition in the electron system, which provides a new avenue for the optical control of strongly correlated material properties.
\begin{figure*}[t!]
\centering
\includegraphics[width = 0.85\textwidth]{fig6.eps}
\caption{Illustration of the multi-phase heterogeneity of photo-excited VO$_2$. \textbf{Top: (Left to right)} Scanning electron microscope image of polycrystalline VO$_2$ grown by pulsed laser deposition. Graphical images of crystallite phases with increasing photoexcitation strength. Below the fluence threshold of $\sim 4~$mJ/cm$^2$ the response of the material is Mott-Hubbard like: instantaneous transient metallization from excited carriers followed by a recovery to the insulating phase within $\sim 100~$fs. When the fluence threshold for the $M_1\rightarrow \mathscr{M}$ IMT is reached in a particular crystallite, the gap collapses and the crystallite transitions to the $\mathscr{M}$ phase for $\sim 100~$ps. Eventually the crystallographic structural IMT fluence threshold ($\sim 9~$mJ/cm$^2$) is reached and crystallites transform to the rutile phase. \textbf{Bottom left:} Schematic representation of the free energy landscape. The monoclinic metal phase is described by a free energy minimum and the associated IMT is driven by electron temperature with an activation energy of $E_A=304\pm 109$ meV. The occupancy of the $d_{xz}$ orbital changes from partial to full filling during this process. The monoclinic-rutile structural IMT has a barrier which is removed when the threshold fluence of $\sim 9~$mJ/cm$^2$ is reached, leading to an increase in the V-V distance (removal of dimerization).}
\label{FIG:summary}
\end{figure*}
\bibliographystyle{apsrev}
Modeling human behavior has long been a goal of scientists in several fields, and social networks allow this task to be approached from many perspectives. Social networks allow people to interact on the Internet as they do in the real world, sharing their lives through text messages, photos and videos, and connecting to friends with comments, likes, quizzes and games. It is important to state that we follow the definition of \cite{Wellman1996} regarding social networks, who states that when computer networks link people as well as machines, they become social networks. Some social networks in particular focus on sharing users' short text messages. These are called microblogs, since they are similar to web blogs but restricted to just a few words, which makes them very attractive for mobile devices. The most popular microblog is Twitter, and due to an easy-to-use API it is widely used on many mobile and desktop platforms. Twitter was launched in 2006, and after 6 years it has around 140 million active users sending an average of 340 million tweets, those short messages, per day\footnote{http://blog.twitter.com/2012/03/twitter-turns-six.html}. The default public visibility of tweets enables research in areas ranging from natural language processing and data mining to public health analysis. We suggest reading the first quantitative study \cite{Kwak2010} of the entire Twitter network and its information diffusion to better understand Twitter's topology, the identification of influential users and the behavior of trending topics.
Using Twitter on mobile devices makes it possible to embed geographical information in the tweets. Tweets stored with GPS coordinates or political division names enable us to identify where these messages were sent from and to conduct socio-geographic analyses.
Socio-geographic data are very difficult to obtain. Cellular service providers, vehicle GPS trackers and credit card companies are some examples of businesses that hold such data, but they lock them behind strict security \cite{Ferrari2012}. Some academic studies even had to build their own datasets in order to study socio-geographic patterns \cite{Li2008} \cite{Lerin2011}.
This is why public data from social networks bring this research to a new level, providing live, organic data in enormous amounts. In this way, human behavior can be modeled by identifying what users from a certain city or place are saying about a specific topic and why, i.e., what their impressions are.
Thanks to its real-time nature, Twitter can be used as a live sensor network, for instance to detect earthquakes and typhoons \cite{Sakaki2010} or local social events \cite{Lee2010}. In this paper, a topic is some subject referred to in a document and which users are talking about at any particular time, and an event means a unique thing that happens at some point in time \cite{Allan1998} \cite{Allan2002}.
In this context, this paper proposes a new method for using the vast volume of Twitter user messages to identify location-based events such as concerts, festivals, disasters, political demonstrations, etc., without having to select keywords. This is our main contribution to event detection: changing the dimensional space from keywords to places. To this end, Twitter's Streaming API\footnote{https://dev.twitter.com/docs} is used to retrieve geo-tagged and time-stamped short text messages with worldwide coverage. Simple metrics are extracted from these messages, using political divisions as partitions, to create time series that serve as input to a neural network \cite{HEINEN2011}, which models the input data with a regression technique and identifies outliers. Text messages are then parsed to provide semantic information about the detected events.
The paper is organized as follows: section \ref{sec:related} presents related works; in section \ref{sec:method}, we present the proposed approach for location-based event detection; section \ref{sec:experiments} illustrates the experimental results and more detail on how the approach solves this task; and section \ref{sec:conclusion} provides the conclusions and discussion of further works.
\section{Related Works}\label{sec:related}
This section presents and discusses related works in the fields of Geo-social analysis and event detection, which are the main applications of our work.
\subsection{Geo-Social Analysis}\label{sec:GSA}
Despite the early stage of location-based social networks (or social networks with some location information), much research is being conducted to extract knowledge from geo-social relations, for instance to predict the location of individuals in a social network more accurately than IP-based geo-location. Backstrom et al. \cite{Backstrom2010} used user-supplied addresses and the network of relations between profiles of the Facebook social network. Besides achieving 69.1\% accuracy with their best method, against 57.2\% for IP-based location, some interesting geo-social relations were confirmed, matching intuition: people living in metropolitan areas are more cosmopolitan; they are more likely to have ties to distant places; the higher the population density, the lower the probability of knowing a person within a square mile; and, in their data, 96\% of people live in areas less dense than 50 people per square mile.
For the analysis of geographic mood characteristics, Mislove et al. \cite{Mislove2010} analyzed tweets posted from September 2006 to August 2009, extracting words carrying psychological ratings according to the ANEW system \cite{Bradley1999} and matching them with the user profile location to identify mood variations over the week, over the hours of the day and across the coasts of the United States. These messages suggest that the West coast is happier than the East coast, and that happiness peaks occur each Sunday morning, with a trough on Thursday evenings, with the early morning and late evening showing the highest levels of happy tweets. These works model some aspects of human behavior, but use static geographical information. Our study focuses on information that changes in time and space at a much higher rate.
Due to its real-time property and massive worldwide adoption, Twitter can be used as a sensor network for natural and social event detection, sometimes before coverage by the news media or the government. Sakaki et al. \cite{Sakaki2010} use geo-located tweets containing keywords related to natural hazards, such as \textit{earthquake} or \textit{shaking}, to detect such events. With particle filtering, they can estimate the centers of earthquakes and the trajectories of typhoons, detecting 96\% of the earthquakes with seismic intensity scale of 3 or more registered by Japan's Meteorological Agency.
In a recent work, Lee \cite{Lee2010} developed a system to discover unusual regional social activities using Twitter geo-tagged information. Their framework has four steps: collecting crowd experiences via Twitter, establishing natural socio-geographic regions, estimating the geographical regularity of local crowd behavior, and detecting unusual geo-social events. The first step uses a divide-and-conquer solution to work around the Twitter Search API restriction of 1,500 results per query. The second uses the K-Means clustering algorithm with Voronoi diagrams \cite{kmeans} to create socio-geographic regions, a step that can impact the performance of an online system. In the third, three metrics are estimated hourly for each cluster: number of tweets, number of users, and movement of the local crowd. The last step divides the day into 6-hour periods and calculates the regularities of each cluster's metrics using box plots, which can also detect unusual statuses.
This method detected 903 unusual activities out of 7,200 possible ones (300 clusters $\times$ 6 days $\times$ 4 six-hour periods). Compared to an investigated list of 50 events from Japan's local event guide site, 32 of them could be found, resulting in a recall of 64\% (32/50) and a precision of 3.54\% (32/903). We must consider that this list is somewhat restricted, because other unexpected events, off the list, occurred and were detected. Despite the great advances in local event detection, driven primarily by the movement-of-local-crowd metric, the approach retains some outdated design choices, unnecessary steps and heavy processing.
\subsection{Event Detection}\label{sec:EDT}
Event detection and tracking is a subset of problems from topic detection and tracking (TDT). The early definitions are from \cite{Allan1998,Allan2002}, in an initiative to investigate the state of the art in finding and following new events in a stream of broadcast news stories. With the huge amount of information available on-line, the World Wide Web is a fertile source for that kind of event detection, and web mining research sits at the crossroads of several research communities \cite{Kosala2000}. Over the last 10 years, user-generated content has come to dominate a large portion of the web, and a real-time web has arisen to challenge a number of research areas, notably information retrieval and web data mining \cite{Bermingham2010}.
Becker \cite{Becker2011} presents a task of event identification on Twitter that is based on text analysis and clustering approaches, and shows numerous categories of features that must be considered: temporal, social, topical, and Twitter-centric. The author also analyzes the different features that can impact the performance of a real-time system for event detection. The proposed technique for event identification offers a significant improvement over other approaches, showing that it can identify real-world event content in a large-scale stream of Twitter data. The use of location-based signals in event identification is suggested as future work.
Using a filtered stream of tweets to automatically identify events of interest based only on the volume of tweets generated at any moment of an event was suggested by \cite{Lanagan2011}, providing a very accurate means of event detection as well as an automatic method for tagging events with representative words from the tweet stream. That approach, however, requires choosing a set of words and tags that represent a field of interest, missing any event that does not match it.
\section{The proposed approach}\label{sec:method}
To detect location-based events using the huge amount of data provided by Twitter, we adopted the simplest data flow that leads to this goal. Figure \ref{figFramework} shows this flow, as described below:
\begin{itemize}
\item Tweets: A crawler collects tweets from Twitter using Streaming API service;
\item Places Metrics: Creates two \textit{time series} from the number of tweets and users in a \textit{time instance} (or \textit{bin});
\item IGMN: The neural network is used to create data models and identify outliers;
\item Place Outliers: Consist in the time instances that were detected as outliers in both time series;
\item Events Description: Through the messages contained in the \textit{time instance} outliers it is possible to evaluate and understand the triggered event.
\end{itemize}
\begin{figure}[t]
\centering
\includegraphics[width=110mm]{aug_SBBDFramework}
\caption{Proposed data flow\label{figFramework}}
\end{figure}
In relation to the crawler, it is important to state that Twitter's Streaming API is one of Twitter's many public services. It allows real-time access to various subsets of public tweets with high throughput. Any message sent to the social network with public permission that matches a given query will be delivered to the crawler. This service has filter parameters such as tracking keyword occurrences in status messages, following tweets from a specific set of users, or specifying a set of geographic bounding boxes to track. In this respect, it is important to state that, since September 2010, the bounding box can be of worldwide coverage, allowing the retrieval of all tweets in a single query, and thus there is no longer a need to build a monitoring system as Lee \cite{Lee2010} suggests.
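As an illustration, a minimal crawler could be written with the tweepy client library (one of several possible choices; the listener below, its field handling and the tweepy~3.x API it assumes are illustrative, not the exact implementation used in this work):
\begin{verbatim}
import json
import tweepy

# Placeholder credentials from a registered Twitter application.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")

class GeoListener(tweepy.StreamListener):
    def on_status(self, status):
        # Keep only the fields needed downstream.
        record = {
            "id": status.id,
            "user_id": status.user.id,
            "created_at": str(status.created_at),
            "text": status.text,
            "coordinates": status.coordinates,  # may be None
            "place": status.place.full_name if status.place else None,
        }
        print(json.dumps(record))

    def on_error(self, status_code):
        return status_code != 420  # stop on rate-limit disconnects

stream = tweepy.Stream(auth=auth, listener=GeoListener())
# Bounding box covering the entire globe, as used in our experiments.
stream.filter(locations=[-179.99, -89.99, 179.99, 89.99])
\end{verbatim}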
Each status message given by this API contains the text of the message, its creation date/time, the message's id, the id and full profile information of the user who sent the message, and, sometimes, both a place/country name and latitude/longitude, or just one of them. This happens because this information is sensitive, and for the sake of privacy the user may choose whether to share the specific latitude/longitude information or just the place's name. Current localization technology used by Twitter comprises GPS and assisted GPS (which provide latitude/longitude information) and the originating IP (which does not). The location technology used can also be retrieved, if allowed by the user, in addition to the information given by Twitter's geographic database (which does not contain the names of all of the world's countries, provinces/states, cities, neighborhoods and areas).
For the last problem, we use a geographic database source\footnote{http://geocommons.com/overlays/85161} to translate latitude/longitude pairs into names that are not known by Twitter. For instance, many Eastern countries and cities have blank names in the service API. This step is important because all our analysis is based on grouping tweets into sets of places, as shown in Figure \ref{figFramework}. This location identification process is performed during real-time streaming consumption.
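A minimal sketch of this reverse geocoding step is shown below; in our system the lookup is done with a PostGIS server (see Section \ref{sec:experiments}), while this pure-Python version assumes country polygons loaded from a GeoJSON file whose path and property key are placeholders:
\begin{verbatim}
import json
from shapely.geometry import Point, shape

# Load country boundary polygons (placeholder file and key names).
with open("country_boundaries.geojson") as f:
    features = json.load(f)["features"]
countries = [(feat["properties"]["name"], shape(feat["geometry"]))
             for feat in features]

def country_of(lon, lat):
    """Return the name of the country containing the point, if any."""
    point = Point(lon, lat)  # GeoJSON order: longitude, latitude
    for name, polygon in countries:
        if polygon.contains(point):
            return name
    return None
\end{verbatim}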
Once the messages are localized (i.e., have location information), the next step consists in identifying events. For this task, as stated before, we use a neural network (IGMN) to analyze time series and find outliers. A time series is a sequence of observations occurring at equal time intervals, with some basic properties/components \cite{Brockwell1986}. A time series may contain different components, for instance a seasonal component, a trend component, and so on. The seasonal component describes when the data experience regular changes which recur in some period of time (e.g., daily, weekly, monthly, and so on). The trend component indicates a series with an upward or downward long-term movement. A series is stationary when its mean, variance and autocorrelation structure do not change over time and it has no trend. A multivariate time series has more than one variable, while a univariate time series has only one variable. Our data can be described as a stationary, seasonal and univariate time series.
After the time series analysis is performed, we apply specific metrics to detect events. The metrics used in this work are extracted by grouping the text messages into sets of cities, provinces/states or countries, depending on the amount of information in each instance, and then computing the number of users and the number of tweets, creating two separate time series. We have chosen simple metrics like these because our intention was to develop a real-time on-line event detection system, so we needed to decrease the framework's processing time. The usage of geographic names improved the framework in two ways:
\begin{itemize}
\item Despite the linear complexity of K-Means, used in \cite{Lee2010}, there is no need to use clustering algorithms, since the message clustering is based on political divisions;
\item We increased the amount of analyzed tweets using all types of messages:
\begin{itemize}
\item With and without GPS features; and/or
\item With and without places' names.
\end{itemize}
\end{itemize}
Once this splitting is done, we have a set of \textit{m} messages for each chosen political division. The metrics are then collected for each time instance (1 minute, 10 minutes, 1 hour, 6 hours, etc.) during a period of \textit{d} days, creating a time series. Lee's approach \cite{Lee2010} splits the day into 6-hour periods and uses box plot statistical analysis to detect outliers. We have discovered that this 6-hour period can hide interesting detailed information about events happening in these political division areas, because the slope of the curve is significant within a 6-hour slice, smoothing out outliers in the middle of the period and over-emphasizing outliers at its beginning/end. Beyond that, the box plot is a univariate statistical tool \cite{Hardle2012}, and the Twitter stream has a temporal dependency, as can be observed in Figure \ref{figOsloMunichSP}. The term univariate has a different meaning in time series analysis: it refers to a time series that consists of single (scalar) observations recorded sequentially over equal time increments; time is in fact an implicit variable in the time series\footnote{http://www.itl.nist.gov/div898/handbook/pmc/section4/pmc44.htm}.
For the outlier detection task we use the Incremental Gaussian Mixture Network (IGMN) \cite{HEINEN2011}, a neural network that creates and continually adjusts probabilistic models consistent with all sequentially presented data, after each data point presentation, and without the need to store any past data points. Its learning process is aggressive, or "one-shot", meaning that only a single scan through the data is necessary in order to obtain a consistent model. Compared to (S)ARIMA \cite{Brockwell1986}, it achieves an equivalent root mean square error without the need to understand the time series components and data correlations in advance, as (S)ARIMA's parameters require, which facilitates the process of adding new places to the framework. The incremental process is another advantage over (S)ARIMA, which needs a long period of data to model a time series; it makes it possible to extend the framework to real-time analysis of the Twitter stream.
After the outlier detection phase, each outlier represents a time instance that is analyzed for its content: which event triggered it? We collect all messages in this time instance and process them in a search for the most frequent words, ignoring stop words. The stop word database needs to be rebuilt for the short-text-message context, which uses many abbreviations. These top-ranked words give us a good idea of the triggering event, which can then be confirmed by web and news searches over the Internet.
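A minimal sketch of this term-ranking step is given below; the stop-word list shown is a placeholder, since in practice it must be rebuilt for the short-message context:
\begin{verbatim}
import re
from collections import Counter

# Placeholder stop-word list; the real one must include slang
# and the abbreviations common in short text messages.
STOP_WORDS = {"a", "o", "de", "que", "e", "rt", "the", "to", "in"}

def top_terms(messages, k=5):
    """Return the k most frequent non-stop-word terms."""
    counts = Counter()
    for text in messages:
        for word in re.findall(r"\w+", text.lower(), re.UNICODE):
            if word not in STOP_WORDS:
                counts[word] += 1
    return counts.most_common(k)

# Example: top_terms(msgs_in_outlier_bin) -> [("corinthians", 42), ...]
\end{verbatim}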
\section{Experiments}\label{sec:experiments}
To perform our experiments we have collected data from Twitter since January 2011. We have set the \textit{locations} parameter of the Twitter Streaming API to the bounding box (-179.99, -89.99, 179.99, 89.99), which corresponds to the entire globe. Today we count more than 1.4 billion geo-tagged messages from around 10 million users. In this dataset, these users produced about 4.1 million geo-tagged tweets per day, of which 42.25\% contained geographic coordinates and 93.49\% contained place names.
Within the on-line collecting system, a routine computed the country of messages with no country set, using the country boundary geographic database in a PostGIS server\footnote{http://postgis.refractions.net/}. Data were stored in a MySQL\footnote{http://www.mysql.com/} database with a simple structure: tweets and users tables, with indexes on the \textit{message id}, \textit{user id}, \textit{created at} timestamp, \textit{country} and \textit{city} columns for faster GROUP BY clauses. A 3-tier architecture provided more concurrency in order to avoid overloading the database: one server is the \textit{collector}, sending packages of 30 minutes of data to the \textit{data storage}, which is consumed by the \textit{processor} that generates the time series, detects the outliers and fetches the most frequent words used to describe the event.
The first step to create our time series is to choose a political division or place. Following Twitter's definitions, we have five types of places, from wide to narrow areas: country, admin (province/state), city, neighborhood and POI (i.e., points of interest like restaurants, stores, museums, etc.). The wider the area, the more tweets per second are generated, but some places have a greater rate than others. Besides that, the more restricted the area, the more local the event; still, we need a minimum number of messages per time instance in order to keep the time series smooth. If we get few tweets per bin, the time series fluctuates strongly. The bin size, which determines the amount of messages, needs to be evaluated for each place in order to identify which value yields the best event detection.
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{aug_OsloMunichSP}
\caption{Sample of events from Olso, Munich and São Paulo \label{figOsloMunichSP}}
\end{figure}
Figure \ref{figOsloMunichSP} shows, for visualization purposes, some samples of tweet time series generated with a bin of 10 minutes, in which it is easy to see a pattern of daily seasonality, represented by 144 values per day. Ordered by the volume of messages per bin, this figure shows events with different characteristics, all of them identified as outliers by our approach. The real date and time at which each event starts is indicated in the figure by its disturbance of the time series:
\begin{itemize}
\item Oslo bombing event: great disturbance on time series and long duration;
\item Munich soccer match: great disturbance and short duration;
\item São Paulo carnival vote counting: small disturbance and short duration.
\end{itemize}
Once the bin size is chosen, two time series are made:
\begin{itemize}
\item Tweets time series (\textit{TweetsTS}): each value represents the amount of messages sent to Twitter server in one time instance;
\item Users time series (\textit{UsersTS}): each value represents the number of unique users who have sent messages to Twitter at that time instance.
\end{itemize}
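Both series can be built with a few lines of code, as sketched below; here we assume the collected tweets are available as a pandas DataFrame with created\_at, user\_id and city columns (the column names and file are placeholders):
\begin{verbatim}
import pandas as pd

# One row per tweet, with columns created_at, user_id, city.
df = pd.read_csv("tweets.csv", parse_dates=["created_at"])
sp = df[df.city == "Sao Paulo"].set_index("created_at")

binned = sp.resample("10min")           # 10-minute bins
tweets_ts = binned.size()               # TweetsTS: messages per bin
users_ts = binned["user_id"].nunique()  # UsersTS: unique users per bin
\end{verbatim}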
To obtain the relevant outliers, each time series is modeled by the neural network, which returns the outliers of each one. An outlier is considered relevant when a time instance is detected as an outlier in both time series. It is noteworthy that the IGMN considers values that fall above or below the local likelihood as outliers. However, in this work, we are only interested in the values above such likelihood, since they represent data beyond the normal volume.
\begin{equation}\label{equation1}
\mathrm{Outliers} = \mathrm{Intersect}\left( \textit{TweetsTS}.\mathit{outliers\_above},\; \textit{UsersTS}.\mathit{outliers\_above} \right)
\end{equation}
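The intersection itself is straightforward, as sketched below; since we do not reproduce the IGMN here, a simple rolling z-score detector is used as a stand-in for its above-likelihood outlier flags:
\begin{verbatim}
import numpy as np

def outliers_above(ts, k=3.0, window=144):
    """Flag bins exceeding the rolling mean by k std. deviations."""
    values = np.asarray(ts, dtype=float)
    flags = np.zeros(len(values), dtype=bool)
    for i in range(window, len(values)):
        past = values[i - window:i]
        mu, sigma = past.mean(), past.std() + 1e-9
        flags[i] = values[i] > mu + k * sigma
    return flags

relevant_bins = np.flatnonzero(
    outliers_above(tweets_ts) & outliers_above(users_ts))
\end{verbatim}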
Another parameter can be tuned to result in better quality events. The IGMN adjusts its models to the presented data using clustering techniques, and the similarity between the inputs is measured by the probability of each input belonging to the existing clusters. In this sense, the standard deviation may be used to indicate when a new cluster must be created, i.e., if the new data is too different from any cluster, this parameter is used to detect if a given input should be considered an outlier, based on the local likelihood.
For a preliminary analysis, and to evaluate the method's precision over different parameters, we have chosen the city of São Paulo, Brazil, as the place (political division), because it is the number one city in the world in volume of tweets with geographic information. For this article, the period from 2012-02-19 to 2012-02-24 was selected for these tests.
We begin by examining the performance of the outlier detection in terms of the number of occurred, unique, duplicated and missed events. Occurred events are events that happened in the real world; they were verified by matching the most frequent words in the messages of each bin against the results of a local newspaper's web search, using the time instance date as a filter. We test the bin size parameter with values of 1, 5 and 10 minutes over the same period (Figure \ref{figSaoPauloDiffTime}); the precision rate is presented along with the mentioned metrics (Table \ref{tabDiffTime}). As the bin size increases, it smooths the local data likelihood, making outliers the only values with a significant difference. On the other hand, some less substantial occurred events are missed.
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{aug_SaoPauloDiffTime}
\caption{Tweets time series on different bin's size and the detected outliers \label{figSaoPauloDiffTime}}
\end{figure}
\begin{table}[t]
\caption{Precision rate scores on different bin's size\label{tabDiffTime}}
\centering{
\begin{tabular}{|r|p{35pt}|p{48pt}|p{35pt}|p{50pt}|p{35pt}|p{36pt}|}
\hline
\textbf{Bin Size} & \textbf{Total Outliers} & \textbf{Detected Happened Events} &
\textbf{Unique Events} & \textbf{Duplicate Detections} & \textbf{Missed Events} & \textbf{Precision Rate}\\
\hline
1 minute & 90 & 22 & 6 & 16 & 0 & 24.44\%\\
\hline
5 minutes & 20 & 12 & 4 & 8 & 2 & 60.00\%\\
\hline
10 minutes & 7 & 5 & 3 & 2 & 3 & 71.43\%\\
\hline
\end{tabular}}
\end{table}
The next parameter evaluated, the standard deviation, was tested with a time instance of 1-minute size and different numbers of deviations, i.e., 3, 4 and 5. Not surprisingly, the number of outliers detected decreased as the deviation increased (Figure \ref{figSaoPauloDiffDev}), but the precision rate did not evolve as in the previous experiment (Table \ref{tabDiffDev}). Our first assumption is that the 1-minute bin makes the time series rough and sensitive to any minimal disturbance, making tuning of the deviation parameter incapable of producing better results. On the other hand, simply increasing the bin size sacrifices the real-time capability of the approach, as well as some events. Therefore, a suggested approach is to combine the tuning of these parameters (a task that is reserved for future work).
\begin{figure}[t]
\centering
\includegraphics[width=1\textwidth]{aug_SaoPauloDiffDev}
\caption{Tweets time series on different deviation and the detected outliers \label{figSaoPauloDiffDev}}
\end{figure}
\begin{table}[t]
\caption{Precision rate scores on different deviations\label{tabDiffDev}}
\centering{
\begin{tabular}{|p{48pt}|p{40pt}|p{50pt}|p{35pt}|p{48pt}|p{36pt}|p{36pt}|}
\hline
\textbf{Standard Deviations} & \textbf{Total Outliers} & \textbf{Detected Happened Events} &
\textbf{Unique Events} & \textbf{Duplicate Detections} & \textbf{Missed Events} & \textbf{Precision Rate}\\
\hline
3 & 90 & 22 & 6 & 16 & 0 & 24.44\%\\
\hline
4 & 31 & 11 & 5 & 6 & 1 & 35.48\%\\
\hline
5 & 12 & 8 & 3 & 5 & 3 & 66.67\%\\
\hline
\end{tabular}}
\end{table}
In the task of matching the outliers with real-world events, the use of the most frequent terms allows us to understand the kinds of topics that trigger Twitter users to post significantly more messages than usual. First, we must note that cultural aspects can influence the usage of social media services, so our findings so far consider only São Paulo's social behavior. All occurred events detected by our framework had televised coverage, but some have broad and others only local geographical interest (Table \ref{tabEvents}). This opens new perspectives for specializing in the detection of events with only local relevance.
\begin{table}[b]
\caption{Events identified by the proposed approach\label{tabEvents}}
\centering{
\begin{tabular}{|p{160pt}|p{150pt}|p{57pt}|}
\hline
\textbf{Event Description} & \textbf{Terms} & \textbf{Geographical Interest}\\
\hline
Soccer match for Copa Libertadores in Venezuela & Corinthians, jogo, libertadores, gol, timão & Broad\\
\hline
National reality TV show & Yuri, fael, bbb, lider, ganhar & Broad\\
\hline
Soccer match on regional championship out the city & Corinthians, willian, douglas, gol, jogo & Broad\\
\hline
Riots at carnival vote counting & Gavi\~{o}es, carnaval, nota, fogo, apura\c{c}\~{a}o, escola & Local\\
\hline
Two soccer games in the regional championship out the city & Gol, jogo, bragantino, time, corinthians & Broad\\
\hline
Soccer match on regional championship in the city & Ganhar, vergonha, deus, palmeiras & Local\\
\hline
\end{tabular}}
\end{table}
\section{Conclusions}\label{sec:conclusion}
This paper presented a new method to discover location-based events over the Twitter stream using time series analysis, and showed how this approach can lead to representative outliers with no need to select keywords in advance, nor to use clustering algorithms for geographic location grouping. This work provides the first step in a series of methods to improve the detection of events with local relevance.
In future work, we will generate statistical measures of performance and compare our proposal with Lee's and Becker's methods, examining how those frameworks behave in a real-time environment, which can show how IGMN reuse benefits performance. To do this comparison, we need to compute Lee's aggregation and dispersion metric, but other metrics with linear processing time can be built in order to consider the users' movement. To compute our method's precision and recall rates, we intend to use human annotators and a news database to automate the evaluation of events. A visualization system is also suggested, to provide more relevant information to the end user.
\bibliographystyle{jidm}
Recently, there has been an increased research effort in exploring adversarial examples which fool machine learning classifiers~\citep{Goodfellow2015, Kurakin2017AdversarialWorld, Szegedy2014,Yuan2017}. The majority of the existing research focuses on the image domain, where an example is generated by making small perturbations to input pixels in order to make a large change in the distribution of predicted class probabilities. We are particularly interested in adversarial attacks for \textit{malware detection}, which is the task of determining if a file is benign or malicious. This involves a real-life adversary (the malware author) who is attempting to subvert detection tools, such as anti-virus programs.
With machine learning approaches to malware detection becoming more prevalent \citep{MalConv, export:249072, Saxe2015, Sahs2012}, this is an area that urgently requires solutions to the adversarial problem.
Because an adversary is actively attempting to subvert outputs, small decreases in accuracy when not under attack are an acceptable cost for remediating targeted attacks. In this scenario, the effective accuracy of the system would be the accuracy under attack, which will be at or near zero without proper defenses.
For example, \citet{MalConv} trained a convolutional neural network called MalConv to distinguish between benign and malicious Windows executable files.
When working with images, any pixel can be arbitrarily altered, but this freedom does not carry over to the malware case. The executable format follows stricter rules which constrain the options available to the attacker \citep{Kreuk2018,Russu:2016:SKM:2996758.2996771,DBLP:journals/corr/GrossePM0M16,Suciu2018}.
Perturbing an arbitrary byte of an executable file will most likely change the functionality of the file or prevent it from executing entirely. This property is useful for defending against an adversarial attack, as a malware author needs to evade detection with a \textit{working} malicious file.
\citet{Kreuk2018} were able to bypass these limitations by applying gradient-based attacks to create perturbations which were restricted to bytes located in unused sections of malicious executable files. The adversarial examples remained just as malicious, but the classifier was fooled by the introduction of overwhelmingly benign yet unused sections of the file.
This is possible because the adversary controls the input,
and the EXE format allows unused sections.
Because of the complications and obfuscations that are available to malware authors, it is not necessarily possible to tell that a section is unused,
even if its contents appear random. This is an \textit{additive only} adversary --- i.e., the attacker can only add features --- which has been widely used and will be the focus of our study.
An analogy to the image domain would be an attacker that could create new pixels which represent the desired class and put them outside of the cropping box of the image, such that they would be in the digital file, but never be seen by a human observer. This contrasts with a standard adversarial attack on images, since the attacker is typically limited to changing the values of existing pixels in the image rather than introducing new pixels entirely.
Given these unique characteristics and costs, we note that the malware case is one where we care \textit{only} about targeted adversarial attacks. The adversary always wants to fool detectors into calling malicious files benign. As such, we introduce an approach to tackle targeted adversarial attacks by exploiting non-negative learning constraints. We will highlight related work in \autoref{sec:related}. In \autoref{sec:method} we will detail our motivation for non-negative learning for malware, as well as how we generalize its use to multi-class problems like image classifiers. The attack scenario and experiments on malware, spam, and image domains will be detailed in \autoref{sec:experiments}. In \autoref{sec:results} we will demonstrate how our approach reduces evasions to almost 0\% for malware and exactly 0\% spam detection. On images we show improvements to robustness against confident adversarial attacks against images, showing that there is potential for non-negativity to aid in non-binary problems. We will end with our conclusions in \autoref{sec:conclusion}.
\section{Related Work} \label{sec:related}
The issues of targeted adversarial binary classification problems, as well as the additive adversary, was first brought up by \citet{Dalvi:2004:AC:1014052.1014066}, who noted its importance in a number of domains like fraud detection, counter terrorism, surveillance, and others. There have been several attempts at creating machine learning classifiers which can defend against such adversarial examples. \citet{Yuan2017} provide a thorough survey of both attacks and defenses specifically for deep learning systems. Some of these attacks will be used to compare the robustness of our technique to prior methods.
In our case we are learning against a real-life adversary in a binary classification task, similar to the initial work in this space on evading spam filters \citep{Lowd2005a,Dalvi:2004:AC:1014052.1014066,Lowd:2005:AL:1081870.1081950}. Our malware case gives the defender a slight comparative advantage, as the attack is constrained to produce a working binary, whereas spam authors can insert more arbitrary content.
Prior works have looked at similar weight constraint approaches to adversarial robustness. \citet{citeulike:7099488} uses a technique to keep the distribution of learned weights associated with features as even as possible during training. By preventing any one feature from becoming overwhelmingly predictive, they force the adversary to manipulate many features in order to cause a misclassification.
Similarly, \citet{DBLP:journals/corr/GrossePM0M16} tested a suite of feature reduction methods specifically in the malware domain. First, they used the mutual information between features and the target class in order to limit the representation of each file to the most informative features. Like \citeauthor{citeulike:7099488}, they created an alternative feature selection method to limit training to features which carried near-equal importance. They found both of these techniques to be ineffective.
Our approach is also a feature reduction technique. The difference is that we train on all features, but only retain the capacity to distinguish a reduced number of features at test time --- namely, only those indicative of the positive class. Training on all features allows the model to automatically determine which are important for the target class and utilizes the other features to accurately set a threshold, represented by the bias term, for determining when a requisite quantity of features are present for assigning samples to the target class.
\citet{Chorowski2015} used non-negative weight constraints in order to train more interpretable neural networks. They found that the constraints caused the neurons to isolate features in meaningful ways. We build on this technique in order to isolate features while also preventing our models from using the features predictive of the negative class.
\citet{Goodfellow2015} used RBF networks to show that low capacity models can be robust to adversarial perturbations but found they lack the ability to generalize. With our methods we find we are able to achieve generalization while also producing low confidence predictions during targeted attacks.
\section{Isolating Classes with Non-Negative Weight Constraints} \label{sec:method}
We will start by building intuition for how logistic regression with non-negative weight constraints assigns predictive power only to features indicative of the positive ($+$) class while ignoring those associated with the negative ($-$) class.
Let $\bm{C}(\cdot)$ be a trained logistic regression binary classifier of the form $\bm{C}(\bm{x}) = \sign \left( \bm{w}^{\mathsf{T}}\bm{x} + b \right)$,
where $\bm{w}$ is the vector of non-negative learned coefficients of $\bm{C}(\cdot)$, $\bm{x}$ is a vector of boolean features for a given sample, and $b$ is a scalar bias. The decision boundary of $\bm{C}(\cdot)$ exists where $\bm{w}^{\mathsf{T}}\bm{x}+b = 0$, and because $\bm{w}^{\mathsf{T}}\bm{x} \geq 0$ $ \forall $ $\bm{x}$, the bias $b$ must be strictly negative in order for $\bm{C}(\cdot)$ to have the capacity to assign samples to both classes. The decision function can then be rewritten as:
\begin{equation}\label{eq:logreg_constrain}
\bm{C}(\bm{x}) =
\begin{cases}
(+) & \bm{w}^{\mathsf{T}}\bm{x} \geq |b| \\
(-) & \bm{w}^{\mathsf{T}}\bm{x} < |b|
\end{cases}
\end{equation}
Because $\bm{w}$ is non-negative, the presence of any feature $x_i \in \bm{x}$ can only increase the result of the dot product, thus pushing the classification toward $(+)$. Weights associated to features that are predictive of class $(-)$ will therefore be pushed toward $0$ during training. When no features are present $(\bm{x} = \vec{0})$ the model defaults to a classification of $(-)$ due to the negative bias $b$. Unless a sufficient number of features predictive of class $(+)$ are present in the sample, the decision will remain unchanged. A classifier trained in this way will use features indicative of the $(-)$ class to set the bias term, but will not allow those features to participate in classification at test time. The same logic follows for logistic regression with non-boolean features if the features are also non-negative or scaled to be non-negative before training.
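This behavior is easy to reproduce: the sketch below trains such a constrained logistic regression by gradient descent with a projection step that clips negative weights to zero after each update (an illustrative sketch on synthetic data, not our production training code):
\begin{verbatim}
import numpy as np

def train_nonneg_logreg(X, y, lr=0.1, epochs=200):
    """Logistic regression with w >= 0 via projected gradient descent.

    X: (n, d) non-negative feature matrix; y: (n,) labels in {0, 1}.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid
        grad = p - y                 # dLoss/dlogit for cross-entropy
        w -= lr * (X.T @ grad) / n
        b -= lr * grad.mean()
        w = np.maximum(w, 0.0)       # projection: clip negative weights
    return w, b
\end{verbatim}
After training on data where the negative class has its own indicative features, one can verify that those features receive weights at or near zero and that the learned bias $b$ is strictly negative, exactly as the argument above requires.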
Given a problem with asymmetric misclassification goals, we can leverage this behavior to build a defense against adversarial attacks. For malware detection, the malware author wishes to avoid detection as malware $(+)$ and instead induce a false detection as benign $(-)$. However, there is no desire for the author of a benign program to have their applications detected as malicious. Thus, if we model malware as the positive class with non-negative weights, \textit{nothing can be added} to the file to make it seem more benign to the classifier $\bm{C}(\cdot)$. Because executable programs must maintain functionality, the malware author cannot trivially remove content to reduce the malicious score either. This leaves the attacker with no recourse but to re-write their application, or to perform more non-trivial acts such as packing to obscure information. Such obfuscations can then be remediated through existing approaches like dynamic analysis \citep{Ugarte-Pedrero:2016:RRP:2976956.2976970,Chistyakov2017}.
Notably, this method also applies to neural networks with a sigmoid output neuron as long as the input to the final layer and the final layer's weights are constrained to be non-negative. The output layer of such a network is identical to our logistic regression example. The cumulative operation of the intermediate layers $\bm{\phi}(\cdot)$ can be interpreted as a re-representation of the features before applying the logistic regression such as $\bm{C}(\bm{x}) = \sign \left( \bm{w}^{\mathsf{T}}\bm{\phi}(\bm{x}) + b \right)$.
We will denote when a model is trained in a non-negative fashion by appending "\hspace{1pt}\textsuperscript{+}\hspace{1pt}" to its name.
The ReLU function is a good choice for intermediate layers as it maintains the required non-negative representation and is already found in most modern neural networks.
For building intuition, in \autoref{fig:binary_mnist_gradient} we provide an example of how this works for neural networks using MNIST. To fool the network into predicting the positive class (one) as the negative class (zero), the adversary must now make larger removals of content --- to the point that the non-negative attack is no longer a realistic input.
\begin{figure}[!h]
\vspace{0.25\baselineskip}
\centering
\includegraphics[width=1.0\columnwidth]{Images/binary_mnist_gradient.png}
\caption{Left: Original Image; Middle: Gradient attack on LeNet; Right: Gradient attack on non-negative LeNet\textsuperscript{+}. The attack on the standard model was able to add pixel intensity in a round, zero-shaped area to fool the classifier into thinking this was a zero. The attack on the constrained model was forced to remove pixel intensity from the one rather than adding in new values elsewhere.}
\label{fig:binary_mnist_gradient}
\end{figure}
It should be noted that constraining a model in this way does reduce the amount of information available for discriminating samples at inference time, and a drop in classification accuracy is likely to occur for most problems. The trade-off between adversarial robustness and performance should be analyzed for the specific domain and use case.
A practical benefit of our approach is that it is simple to implement. In the general case, one can simply use gradient descent with a projection step that clips negative values to zero after each update. We implemented our approach in Keras \citep{chollet2015keras} by simply adding the "NonNeg" constraint to each layer in the model.\footnote{\url{https://keras.io/constraints/}}
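For illustration, a small binary classifier with this constraint might look as follows (a sketch using the tf.keras API; the layer sizes and feature count are placeholders, not our exact architectures). Note that the bias terms are deliberately left unconstrained, since the output bias must be free to become negative:
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, constraints

n_features = 1024  # placeholder input dimensionality

model = tf.keras.Sequential([
    # ReLU keeps the intermediate representation non-negative.
    layers.Dense(256, activation="relu",
                 kernel_constraint=constraints.NonNeg(),
                 input_shape=(n_features,)),
    layers.Dense(1, activation="sigmoid",
                 kernel_constraint=constraints.NonNeg()),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
\end{verbatim}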
\subsection{Non-Negativity and Multi-Class Classification} \label{sec:image}
While the primary focus of our work is on binary tasks like malware and spam detection, it is also worth asking whether the approach can be applied to multi-class problems.
In this work we show that non-negativity can still have some benefit in this scenario, but we find it necessary to re-phrase how such tasks are handled. Normally, one would use the softmax
($\text{softmax}(\bm{v})_i = {\exp(v_i)}/{\sum_{j=1}^{n} \exp(v_j)}$)
on the un-normalized probabilities $\bm{v}$ given by the final layer. The probability of a class $i$ is then taken as $\text{softmax}(\bm{v})_i$. However we find that the softmax activation makes it easier to attack networks.
Take the non-attacked activation pattern $\bm{v}$, where $v_i > v_j$ $ \forall $ $j \neq i$. Now consider the new activation pattern $\bm{\hat{v}}$, which is produced by an adversarially perturbed input with the goal of inducing a prediction of class $q$ instead of $i$. It is then necessary to force $\hat{v}_q > \hat{v}_i$. Yet even if $\hat{v}_i \approx v_i$, the probability of class $i$ can be made arbitrarily low by continuing to maximize the response $\hat{v}_q$. This means we are able to diminish the apparent probability of class $i$ without having impacted the model's response to class $i$. Phrased analogously as an image classification problem, adversaries don't need to remove the amount of "cat" in a photo to induce a decision of "potato," but only increase the amount of "potato."
In addition \citet{Chorowski2015} proved that a non-negative network trained with softmax activation can be transformed into an equivalent unconstrained network. This means there is little reason to expect our non-negative approach to provide benefit if we stay with the softmax activation, as it has an equivalent unconstrained form and should be equally susceptible to all adversarial attacks. As such we must move away from softmax to get the benefits of our non-negative approach in a multi-class scenario.
Instead, we can look at the classification problem in a one-vs-all fashion by replacing the softmax activation over $K$ classes with $K$ independent classifications trained with the binary cross-entropy loss and using the sigmoid activation $\sigma(z) = 1/(1+\exp(-z))$. Final probabilities after training are obtained by normalizing the sigmoid responses to sum to one. We find that this strategy, combined with non-negative learning, provides some robustness against an adversary producing targeted high-confidence attacks (e.g., the network is 99\% sure the cat is a potato). The one-vs-all component makes it such that increasing the confidence of a new class eventually requires reducing the confidence of the original class. The non-negativity increases the difficulty of this removal step, resulting in destructive changes to the image.
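Continuing the tf.keras sketch from above, the multi-class head then becomes $K$ independent sigmoid units trained with binary cross-entropy against one-hot targets, with probabilities normalized only at inference time (again a sketch with placeholder sizes, not our exact training script):
\begin{verbatim}
import tensorflow as tf
from tensorflow.keras import layers, constraints

n_features, num_classes = 1024, 10  # placeholders

inputs = tf.keras.Input(shape=(n_features,))
hidden = layers.Dense(256, activation="relu",
                      kernel_constraint=constraints.NonNeg())(inputs)
outputs = layers.Dense(num_classes, activation="sigmoid",
                       kernel_constraint=constraints.NonNeg())(hidden)
model = tf.keras.Model(inputs, outputs)

# With one-hot targets, binary cross-entropy trains K independent
# one-vs-all classifiers instead of a coupled softmax.
model.compile(optimizer="adam", loss="binary_crossentropy")

def predict_proba(m, x):
    s = m.predict(x)                         # independent sigmoids
    return s / s.sum(axis=1, keepdims=True)  # normalize to sum to one
\end{verbatim}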
We make two important notes on how we apply non-negative training for image classification. First, we pre-train the network using the standard softmax activation, and then re-train the weights with our one-vs-all style and non-negative constraints on the final fully connected layers. Doing so, we find only a small difference in accuracy between results, whereas training non-negative networks from scratch often yields reduced accuracy. Second, we continue to use batch normalization without constraints. This is because batch normalization can be rolled into the bias term and a re-scaling of the weights, and so does not break the non-negative constraint in any way. We find its positive impact on convergence to be greater when training with the non-negative constraints.
\section{Experimental Methodology} \label{sec:experiments}
Having defined the mechanism by which we will defend against targeted adversarial attacks, we will investigate its application to two malware detection models, one spam detection task, and four image classification tasks.
We will spend more time introducing the malware attacks, as readers may not have as much experience with this domain.
For malware, we will look at MalConv \citep{MalConv}, a recently proposed neural network that learns from raw bytes. We will also consider an N-Gram based model~\citep{raff_ngram_2016}. Both of these techniques are applied to the raw bytes of a file. We use the same 2,000,000 training and 80,000 testing examples as used in \citet{raff_shwel}.
Following recommendations by \citet{Biggio2014} we will specify the threat model under which we perform our evaluations. In all cases, our threat model will assume a white-box adversary that has full knowledge of the models, their weights, and the training data. For our binary classification problems, we assume in the threat model that our adversary can only add new features to the model (i.e., in the feature vector space they can change a zero valued feature to non-zero, but can not alter an already non-zero value).
We recognize that this threat model does not encompass all possible adversaries, but note that it is one of the most commonly used adversarial models spanning many domains. The "Good Word" attack on spam messages is itself an example of this threat model's action space, and one of the initial works in adversarial learning noted its wide applicability \citep{Lowd:2005:AL:1081870.1081950}. In a recent survey, \citet{Maiorca2018} found that 9 out of 10 works in evading malicious PDF detectors used the additive-only threat model, and these additive adversaries succeeded in both white-box and black-box attacks. \citet{Demontis2017} considered both the additive-only adversary, as well as one which could add or remove features, as applied to Android malware detection. On their Android data, they demonstrate a learning approach which provides bounds on adversary success under both adversary action models, making it robust but still vulnerable. Under the white-box additive attack scenario, their Secure-SVM detection rate drops from 95\% on normal test data down to 60\% when attacked. Finally, for the case of Windows PE data, three different works have attacked the MalConv model using the additive adversary~\citep{Kreuk2018,Kolosnjaji2018,Suciu2018}.
The additive threat model makes sense to study, as it is easier to implement for the adversary and currently successful in practice. For this reason, it makes little sense for the adversary to consider a more powerful threat model (e.g., adding and removing features) which would increase their costs and effort, when the simpler and cheaper alternative works. We will show in \autoref{sec:results} that while not perfect, our non-negative defense is the first to demonstrably thwart the additive adversary while still obtaining reasonable accuracies. This forces a potential adversary to "step up" to a more powerful model, which increases their effort and cost. We contend this is of intrinsic value \textit{eo ipso}. Below we will review additional details regarding the threat model for each data-type we consider (Windows PE, emails, and images). This is followed by specifics on how the attacks are carried out for each classification algorithm as the details are different in all cases due to model and problem diversity.
\paragraph{Windows PE Threat Model Specifics}
For PE malware we use the appending of an unused section as the attack vector, for technical simplicity. The adversary is allowed to append any desired number of bytes into an added, unused section of the binary file; in our experiments we grow this section until no further change in the evasion rate occurs. Our approach should still work if the adversary performed insertions between functions rather than at the end of the file.
Real malware authors often employ packing to obfuscate the entire binary. This work does not consider defenses against packing obfuscation, except to note that the common defensive technique is to employ dynamic analysis. Our non-negative approach can be applied to features derived from dynamic analysis as well, but this is beyond the scope of this paper. The possibility of evading non-negativity on dynamic features requires addressing the cat-and-mouse game around VM detection, stealthy malware, and the nature of the features used. This discussion is important, but beyond the current ambit, which we limit to static analysis. We are interested here in whether or not non-negativity has benefit against the additive adversary, not against more sophisticated ones.
\paragraph{Spam Threat Model Specifics}
For spam detection the adversary will be restricted to the insertion of new content into an existing spam message. This is because we are interested in the lower-effort "good word" attack scenario. Despite being less sophisticated, it remains effective today. Tackling wholly changed and newly crafted spam messages is beyond our current purview.
\paragraph{Image Threat Model Specifics}
Image classification does not exhibit the asymmetric error costs that malware and spam do. The purpose of studying it is to determine whether our non-negativity can benefit multi-class problems. It is intuitive that the answer would be "no," but we nevertheless find that some limited benefit exists.
In this threat model, there is no "adding" or "removing" of features, due to the intrinsic nature of images. As such we consider the $L_1$ distance between an original image $x$ and its adversarially perturbed counterpart $\hat{x}$. The adversary may arbitrarily alter any pixels, so long as $\|x-\hat{x}\|_1 < \epsilon$, where $\epsilon$ is a problem-dependent maximum distance.
\subsection{Attacking MalConv}\label{sec:malconv_exper}
MalConv is the primary focus of our interest, as gradient based attacks cannot be naively applied to its architecture. Only recently have attacks been proposed \citep{Kolosnjaji2018,Kreuk2018}, and we will show that non-negativity allows us to thwart these adversaries.
In MalConv, raw bytes of an executable are passed through a learned embedding layer which acts as a lookup table to transform each byte into an 8-dimensional vector of real values. This representation is then passed through a 1-dimensional gated convolution, global max pooling, and then a fully connected layer with sigmoid output. To handle varying file sizes, all sequences of bytes are padded to a fixed length of 2,100,000\ using a special "End of File" value (256) from outside of the normal range of bytes (0--255).
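For concreteness, the sketch below captures this architecture in PyTorch. Only the 8-dimensional embedding and the 257-value vocabulary (bytes 0--255 plus the EOF value) are taken from the description above; the channel count and kernel size are illustrative.
\begin{verbatim}
# Illustrative sketch of the MalConv architecture described above.
# Channel count and kernel size are placeholders.
import torch
import torch.nn as nn

class MalConvSketch(nn.Module):
    def __init__(self, emb=8, channels=128, kernel=512):
        super().__init__()
        self.embed = nn.Embedding(257, emb)  # bytes 0-255 plus EOF (256)
        self.conv = nn.Conv1d(emb, channels, kernel, stride=kernel)
        self.gate = nn.Conv1d(emb, channels, kernel, stride=kernel)
        self.fc = nn.Linear(channels, 1)

    def forward(self, x_bytes):                  # x_bytes: (B, L) integers
        z = self.embed(x_bytes).transpose(1, 2)  # (B, emb, L)
        h = self.conv(z) * torch.sigmoid(self.gate(z))  # gated convolution
        h = h.max(dim=2).values                  # global max pooling
        return torch.sigmoid(self.fc(h))         # P(file is malicious)
\end{verbatim}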
The raw bytes are both discrete and non-ordinal, which prevents gradient based attacks from manipulating them directly. \citet{Kreuk2018} (and independently \citet{Kolosnjaji2018}) devised a clever way of modifying gradient based attacks to work on EXEs, even with a non-differentiable embedding layer, and we will briefly recap their approach. This is done by performing the gradient search of an adversarial example in the 8-dimensional vector space produced by the embedding layer. A perturbed vector is then mapped to the byte which produces the nearest neighbor in the embedding space. Keeping with the notation of \citeauthor{Kreuk2018}, let $\bm{M} \in \mathbb{R}^{n\times{}d}$ be the lookup table from the embedding layer such that $\bm{M}: \bm{X} \to \bm{Z}$ where $\bm{X}$ is the set of $n$ possible bytes and $\bm{Z} \subseteq \mathbb{R}^d$ is the embedding space.
Then for some sequence of bytes $\bm{x} = (x_0, x_1, \ldots, x_L)$, we generate a sequence of vectors $\bm{z} = (\bm{M}[x_0], \bm{M}[x_1], \ldots, \bm{M}[x_L])$ where $\bm{M}[x_i]$ indicates row $x_i$ of $\bm{M}$. Now we generate a new vector $\bm{\widetilde{z}} = \bm{z} + \bm{\delta}$ where $\bm{\delta}$ is a perturbation generated from an adversarial attack. We map each element $\widetilde{z_i} \in \bm{\widetilde{z}}$ back to byte space by finding the nearest neighbor of $\widetilde{z_i}$ among the rows of $\bm{M}$. By applying this technique to only specific safe regions of a binary, the execution of gradient based attacks against MalConv is possible without breaking the binary. To ensure that a "safe" area exists, they append an unused section to the binary. The larger this appended section is, the more space the adversary has to develop a strong enough signal of "benign-ness" to fool the algorithm.
We replicate the attack done by \citet{Kreuk2018} which uses the \textit{fast gradient sign method} (FGSM) \citep{Goodfellow2015} to generate an adversarial example in the embedding space. We find our $\bm{\widetilde{z}}$ by solving:
$\bm{\widetilde{z}} = \bm{z} + \epsilon \cdot \sign \left(\nabla_{\bm{z}}\widetilde{\ell} \left( \bm{z},y;\bm{\theta} \right) \right)$,
where $\widetilde{\ell}(\cdot)$ is the loss function of our model parameterized by $\bm{\theta}$ and $\bm{z}$ is the embedded representation of some input with label $y$. The new $\bm{\widetilde{z}}$ is then mapped back into byte space using the method previously discussed. We performed the attack on 1000 randomly selected malicious files, varying the size of the appended section used to generate the adversarial examples.
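Put together, one iteration of the attack looks roughly as follows. The helper \texttt{forward\_from\_embedding} (a hook that evaluates the rest of the network and the loss from the embedded representation), along with the other names, is an assumption made for exposition and not a description of our exact implementation.
\begin{verbatim}
# Illustrative sketch of the embedding-space FGSM attack; the hook
# `forward_from_embedding` and all names are expository assumptions.
import torch

def fgsm_byte_attack(forward_from_embedding, M, x_bytes, y, region, eps):
    z = M[x_bytes].clone().requires_grad_(True)  # embed the bytes
    loss = forward_from_embedding(z, y)
    loss.backward()
    z_tilde = z + eps * z.grad.sign()            # step in embedding space
    # Map each perturbed vector to the byte with the nearest embedding.
    nearest = torch.cdist(z_tilde.detach(), M).argmin(dim=1)
    x_adv = x_bytes.clone()
    x_adv[region] = nearest[region]  # only rewrite the appended section
    return x_adv
\end{verbatim}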
For MalConv, adding an unused section allows an attacker to add benign features which overwhelm the classification. Our hypothesis is that MalConv\textsuperscript{+}\xspace should be immune to the attack since it only learns to look for maliciousness, defaulting to a decision of benign when no other evidence is present. We also note that this corresponds well with how anti-virus programs prefer to have lower false positive rates to avoid interfering with users' applications.
\subsection{Attacking N-Gram}\label{sec:ngram_exper}
The N-Gram model was trained using lasso regularized logistic regression on the top million most frequent 6-byte n-grams found in our 2 million file training set. The 6-byte grams are used as boolean features, where a 1 represents the n-gram's presence in a file. Lasso performed feature selection by assigning a weight of 0 to most of the n-grams. The resulting model had non-zero weights assigned to approximately 67,000 of the features.
We devise a white-box attack similar to the attack \citet{Kreuk2018} used against MalConv in that we inject benign bytes into an unused section appended to malicious files. Specifically, we take the most benign 6-grams by sorting them based on their learned logistic regression coefficients. We add benign 6-grams one at a time to the malicious file until a misclassification occurs. This ends up being the same kind of approach \citet{Lowd2005a} used to perform "Good Word" attacks on spam filters, except we assume the adversary has perfect knowledge of the model. The simplicity of the N-Gram model allows us to do this targeted attack, and specifically look at the evasion rate as a function of the number of inserted features.
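The sketch below captures this attack; \texttt{coef} and \texttt{intercept} stand for the trained logistic regression parameters, and all names are illustrative.
\begin{verbatim}
# Illustrative sketch of the additive "good n-gram" attack.
import numpy as np

def good_ngram_attack(coef, intercept, x):
    x = x.astype(float).copy()  # boolean feature vector of one file
    for added, j in enumerate(np.argsort(coef), start=1):
        if coef[j] >= 0:              # no benign n-grams left to insert
            break
        x[j] = 1.0                    # insert this benign 6-gram
        if x @ coef + intercept < 0:  # model now says "benign"
            return x, added
    return x, None                    # evasion failed
\end{verbatim}
Note that against a model trained with non-negative constraints, the loop above terminates immediately: there is no feature with a negative coefficient left to insert.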
To prevent these attacks, we train N-Gram\textsuperscript{+}\xspace using non-negative weight constraints on the same data. This model is prevented from assigning negative weights to any of the features. We also remove the lasso regularization from N-Gram\textsuperscript{+}\xspace as the constraints are already performing feature selection by pushing the weights of benign features to zero.
\subsection{Spam Filtering}
As mentioned in the previous section, \citet{Lowd2005a} created "Good Word" attacks to successfully evade spam filters without access to the model. These attacks append common words from normal emails into spam in order to overwhelm the spam filter into thinking the email is legitimate.
In their seminal work, they noted that it was unrealistic to assume that an adversary would have access to the spam filter; the adversary would thus need to guess which words are good words, or query the spam filter to steal information about which words are good. Others have simply used the most frequent words from the ham messages as a proxy for good word selection that an adversary could replicate~\citep{Jorgensen:2008:MIL:1390681.1390719,Zhou2007}. We take the more pessimistic approach that the adversary has full access to our model, and can simply select the words that have the largest negative coefficients (i.e., the most benign-looking words) for their attack.
This is the same assumption we make in attacking the n-gram model.
By showing that our non-negative learning approach eliminates the possibility of good word attacks in this pessimistic case, we intrinsically cover all weaker cases of an adversary's ability. We note as well that \citeauthor{Lowd2005a} speculated the only effective solution to stop the Good Word attack would be to periodically re-train the model. By eliminating the possibility of performing Good Word attacks, we increase the cost to operate for the adversary, as they must now exert more effort into crafting significantly novel spam to avoid detection. By eliminating the lowest-effort approach the adversary can take, we remediate a sub-component of the spam problem, but not spam as a whole.
We train two logistic regression models on the TREC 2006 and 2007 Spam Corpora.\footnote{See \url{https://trec.nist.gov/data/spam.html}} The 2006 dataset contains 37,822 emails with 24,912 being spam. The 2007 dataset contains 75,419 messages with 50,199 of them being spam. We performed very little text preprocessing and represented each email as a vector of boolean features corresponding to the top 10,000 most common words in the corpus. The first model is trained with lasso regularization in a traditional manner. The second model is trained with non-negative constraints on the coefficients in order to isolate only the features predictive of spam during inference.
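One simple way to impose the constraint is projected gradient descent, clipping the coefficients to be non-negative after every update. The sketch below is one possible realization, included for illustration rather than as a description of our exact training setup.
\begin{verbatim}
# Illustrative sketch: logistic regression with optional non-negative
# coefficients via projected gradient descent.
import numpy as np

def fit_logreg(X, y, nonneg=False, lr=0.5, epochs=200):
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        err = p - y
        w -= lr * (X.T @ err) / n               # gradient of the log-loss
        b -= lr * err.mean()
        if nonneg:
            w = np.maximum(w, 0.0)  # project onto the constraint set
    return w, b
\end{verbatim}
With \texttt{nonneg=True}, no word can ever receive a negative coefficient, so no word can vote for "ham".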
\subsection{Targeted Attacks on Image Classification}
For our image classification experiments we follow the recommendations of \citet{Carlini:2017:AEE:3128572.3140444} for evaluating an adversarial defense. In addition to the FGSM attack, we will also use a stronger iterated gradient attack. Specifically, we use the Iterated Gradient Attack (IGA) introduced by \citet{Kurakin2017}, using Keras for our models and Foolbox \citep{Rauber2017} for the attack implementations. We evaluated the confidences at which such attacks can succeed against the standard and our non-negative models on MNIST, CIFAR 10 and 100, and Tiny ImageNet.
We note explicitly that the IGA attack is not the most powerful adversary we could use. Other attacks like Projected Gradient Descent (PGD) and the C\&W attack~\citep{Carlini2017} are more successful, and defeat our multi-class generalization of non-negative learning. We study IGA to show that there is some benefit, but that overall the multi-class case is a weakness of our approach. We find the results interesting and informative because our prior belief would have been that non-negativity would produce no benefit to the defender at all, which is not the case.
We are specifically interested in defending against an adversary creating a high-confidence targeted attack (e.g., an image was previously classified as "cat", but is now classified as "potato" with a probability of 99\%). As such we will look at the evasion rate for an adversary altering an image to other classes over a range of target probabilities $p$. The goal is to see the non-negative trained network have a lower evasion rate, especially for $p \geq 90\%$.
For MNIST and CIFAR 10, since there are only 10 classes, we calculate the evasion rate at a certain target probability $p$ as the average rate at which an adversary can successfully alter the network's prediction to every other class and reach a minimum probability $p$. For CIFAR 100 and Tiny ImageNet, the larger number of classes prohibits this exhaustive pairwise comparison. Instead we evaluate the evasion rate against a randomly selected alternative class.
On MNIST, CIFAR 10, and CIFAR 100, due to their small image sizes ($\leq 32\!\times\!32$), we found that adversarial attacks would often "succeed" by changing the image to an unrecognizable degree. For this reason we set a threshold of 60 on the $L_1$ distance between the original image and the adversarial modification. If the adversarially modified image exceeded this threshold, we counted the attack as a failure. This threshold was determined by examining several images; more information can be found in the appendix.
For Tiny ImageNet this issue was not observed, and Foolbox's default threshold was used.
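For clarity, the success criterion we apply can be sketched as follows (the helper name is illustrative):
\begin{verbatim}
# Illustrative sketch of the L1 success criterion described above.
import numpy as np

def counts_as_evasion(x, x_adv, reached_target, l1_budget=60.0):
    # An attack only counts if the target confidence was reached AND
    # the image stayed recognizable, i.e. within the L1 budget.
    return reached_target and float(np.abs(x - x_adv).sum()) < l1_budget
\end{verbatim}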
\section{Results} \label{sec:results}
Having reviewed the method by which we will fight targeted adversarial attacks, and how the malware attacks will be applied, we will now present the results of our non-negative networks. First we will review those related to malware and spam detection, showing that non-negative learning effectively neutralizes evasion by a malware author. Then we will show how non-negative learning can improve robustness on several image classification benchmarks.
\subsection{Malware Detection}\label{sec:malconv_result}
Using the method outlined in \autoref{sec:experiments}, \citet{Kreuk2018} reported a 100\% evasion rate of their model. As shown in \autoref{fig:malconv_evade}, our replication of the attack yielded similar results for MalConv, which
was evaded successfully for 95.4\% of the files. The other 4.6\% of files were all previously classified as malware with a sigmoid activation of 1.0 at machine precision
due to floating-point rounding.
The attack fails for these cases since there is no valid gradient for this output. A persistent adversary could still create a successful adversarial example by replacing the sigmoid output with a linear activation function before running the attack.
\begin{figure}[!h]
\centering
\begin{adjustbox}{max size={1.0\columnwidth}{0.85\textheight}}
\begin{tikzpicture}
\begin{groupplot}[
group style={
group name=myplot, group size=1 by 3,
vertical sep=2.5cm,%
},
enlarge x limits=true,
]
\centering
\nextgroupplot[
title=Evasion Rate as Size of Appended Section Increases,
legend style={at={(0.97,0.5)},anchor=east},
ymax=120,
xlabel=Appended Section Size as Percent of File,
ylabel=Evasion Rate (\%),
ytick={0, 20, 40, 60, 80, 100},
]
\addplot +[red,mark=o,dashdotted,mark options={solid}] table [x=Percent, y=MalConv, col sep=comma] {CSVs/malconv_evade.csv};
\addplot +[blue,mark=x,loosely dashed,mark options={solid}] table [x=Percent, y=MalConv+, col sep=comma] {CSVs/malconv_evade.csv};
\legend{MalConv,MalConv\textsuperscript{+}}
\nextgroupplot[
title=Evasion Rate as Top Benign N-Grams are Added,
legend style={at={(0.97,0.5)},anchor=east},
ymax=120,
xlabel=Number of Benign N-Grams Added,
ylabel=Evasion Rate (\%),
ytick={0, 20, 40, 60, 80, 100},
]
\addplot +[red,mark=o,dashdotted,mark options={solid}] table [x=Percent, y=N-Gram, col sep=comma] {CSVs/ngram_evade.csv};
\addplot +[blue,mark=x,loosely dashed,mark options={solid}] table [x=Percent, y=N-Gram+, col sep=comma] {CSVs/ngram_evade.csv};
\legend{N-Gram,N-Gram+}
\end{groupplot}
\end{tikzpicture}
\end{adjustbox}
\caption{
Evasion rate (y-axis) for MalConv and N-Gram based models.
Top
figure shows MalConv evasion as the appended section size increases, and
bottom
figure shows the N-Gram evasion as the number of benign n-grams are added.
The number of files that evade increases as the size of the appended section increases. The evasion rates remained fixed for all section sizes greater than 25\% of the file size.
}
\label{fig:malconv_evade}
\end{figure}
Our non-negative learning provides an effective defense, with only 0.6\% of files able to evade MalConv\textsuperscript{+}\xspace. Theoretically we would expect an evasion rate of 0.0\%. Investigating these successful evasions uncovered a hidden weakness in the MalConv architecture. We found that both MalConv and MalConv\textsuperscript{+}\xspace learned to give a small amount of malicious predictive power to the special End of File (EOF) padding value. This is most likely a byproduct of the average malicious file size being less than the average benign file size in our training set, which causes the supposedly neutral EOF value itself to be seen as an indicator of maliciousness.
The process of adding an unused file section necessarily reduces the number of EOF padding tokens given to the network: as the file grows in size (pushing it closer to the 2.1MB processing limit), the new section replaces EOF tokens. Replacing the slightly malicious EOF tokens with benign content reduces the network's confidence in the file being malicious.
The 0.6\% of files that evaded MalConv\textsuperscript{+}\xspace only did so when the files were small and the appended section ended up comprising 50\% of the resulting binary. The slight maliciousness from the EOF tokens was the feature needed to push the network to a decision of "malicious." The removal of EOF tokens by the unused section removed this slight signal, pushing the decision back to "benign."
If we instead replace the bytes of the unused section with random bytes from the uniform distribution, the files still evade detection. This means the evasion is not a function of the attack itself, but of the modification to the binary that removes EOF tokens.
A simple fix to this padding issue is to force the row of the embedding table corresponding to the special byte to be the zero vector during training. This would prevent the EOF token from providing any predictive power during inference.
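Assuming a model with an \texttt{embed} lookup table as in our earlier sketch, the fix amounts to pinning one row to zero, shown below for illustration:
\begin{verbatim}
# Illustrative sketch of the proposed fix for the EOF padding issue.
import torch

def zero_eof_embedding(model, eof_index=256):
    # Pin the embedding row of the EOF padding value to the zero
    # vector so that it carries no predictive signal.
    with torch.no_grad():
        model.embed.weight[eof_index].zero_()
    # During training this would be re-applied after every optimizer
    # step (or enforced via a hook) so the row stays pinned at zero.
\end{verbatim}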
We observed similar results for the N-Gram model. The evasion rate increases rapidly as benign features are added to the malicious files. We found that appending the top 41 most benign features resulted in a 100\% evasion rate. This attack is completely mitigated by N-Gram\textsuperscript{+}\xspace since none of its features have negative weights supporting the benign class. The only way to alter the classification would be to remove malicious n-grams from the files. Our results for both models are depicted in \autoref{fig:malconv_evade}.
\subsection*{Accuracy vs Defense}
The only drawback of this approach is the possible reduction in overall accuracy. Limiting the available information at inference time will likely reduce performance for most classification tasks. However, many security related applications exist precisely because adversaries are present in the domain. We have shown that under attack our normal classifiers completely fail; therefore a reduction in overall accuracy may be well worth the increase in model defensibility.
\autoref{tab:1} shows metrics from our models under normal conditions for comparison.
\begin{table}[tb]
\centering
\caption{Out of sample performance on malware detection in the absence of attack. }
\label{tab:1}
\begin{adjustbox}{max width=\columnwidth}
\begin{tabular}{@{}lcccc@{}}
\toprule
Classifier & Accuracy \% & Precision & Recall & AUC \%\\ \midrule
\textbf{MalConv} & 94.1 & 0.913 & 0.972 & 98.1 \\
\textbf{MalConv\textsuperscript{+}} & 89.4 & 0.908 & 0.888 & 95.3 \\
\textbf{N-Gram} & 95.5 & 0.926 & 0.987 & 99.6 \\
\textbf{N-Gram\textsuperscript{+}} & 91.1 & 0.915 & 0.885 & 95.5 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
Since different members of the security community have different desires with respect to the true positive vs. false positive trade-off, we also report the ROC curves for the MalConv, MalConv\textsuperscript{+}\xspace, N-Gram, and N-Gram\textsuperscript{+}\xspace classifiers in \autoref{fig:malware_roc}.
\begin{figure}[tb]
\centering
\vspace{1.0\baselineskip}
\begin{adjustbox}{max width=1.0\columnwidth}
\begin{tikzpicture}
\begin{axis}[
xlabel={False Positive Rate},
ylabel={True Positive Rate},
legend pos=south east,
]
\addplot+[dashed, mark=none] table [each nth point=73, x=fpr, y=tpr, col sep=comma] {CSVs/roc_malcon.csv};
\addplot+[mark=none] table [each nth point=163, x=fpr, y=tpr, col sep=comma] {CSVs/roc_malconP.csv};
\addplot+[dashed, mark=none] table [each nth point=500, x=fpr, y=tpr, col sep=comma] {CSVs/roc_ngram.csv};
\addplot+[mark=none] table [each nth point=500, x=fpr, y=tpr, col sep=comma] {CSVs/roc_ngramP.csv};
\legend{MalConv, MalConv+, N-Gram, N-Gram+}
\end{axis}
\end{tikzpicture}
\end{adjustbox}
\caption{
ROC curves for MalConv and N-Gram malware classifiers, with and without non-negative restraints, in the absence of attack.
}
\label{fig:malware_roc}
\end{figure}
While our non-negative approach has paid a penalty in accuracy, this penalty comes predominantly from a reduction in recall. Because features can only indicate maliciousness, some malicious binaries are labeled as benign due to a lack of information. This scenario corresponds to the preferred deployment scenario of security products in general, which is to have a lower false positive rate (benign files marked malicious) at the expense of false negatives (malicious files marked benign)
\citep{Ferrand2016,Zhou:2008:MDU:1456377.1456393,learning-at-low-false-positive-rates}.
As such the cost of non-negativity in this scenario is well aligned with its intended use case, making the cost palatable and the trade-off especially effective.
To us it seems reasonable to accept a small loss in accuracy when not under attack in exchange for a large increase in accuracy when under attack. An interesting solution could be employing a pair of models, one constrained and one not, in addition to some heuristic indicating an adversarial attack is underway. The unconstrained model would generate labels during normal operations and fail over to the constrained model during attack. The confidence of the constrained model could be used as this switching heuristic as we empirically observe that the confidences during attack are much lower.
Those who work in an industry environment and produce commercial grade AV products may object that our accuracy numbers do not reflect the same levels obtained today. We remind these readers that we do not have access to the same amount of data or the resources necessary to produce training corpora of similar quality, and so it should not be expected that we would obtain the same levels of accuracy as production systems. The purpose of this work is to show that a large class of models that have been used and attacked in prior works can be successfully defended against this common threat model. This comes at a minor price, as just discussed, but this is the first technique shown to be wholly protective.
\subsection{Spam Filtering}
The accuracies for our traditional, unconstrained models were high on both datasets, but both were susceptible to our version of \citeauthor{Lowd2005a}'s "Good Word" attack. Both classifiers were evaded 100\% of the time by appending only 7 words to each message in the 2006 case and only 4 words in the 2007 case. These words correspond to the features with the lowest regression coefficients (i.e., negative values with high magnitude) for each model.
Use of the non-negative constraint lowers our accuracy for both datasets when not under attack, but \textit{completely eliminates} susceptibility to these attacks as all "Good Words" have coefficients of 0.0. The spam author would only be able to evade detection by removing words indicative of spam from their message. A comparison of performance is shown in \autoref{tab:2}.
\begin{table}[tbh]
\centering
\caption{Out of sample performance on spam filtering in the absence of attack.}
\label{tab:2}
\begin{adjustbox}{max width=\columnwidth}
\begin{tabular}{@{}lccccc@{}}
\toprule
Classifier & Accuracy \% & Precision & Recall & AUC \% &F1 Score\\ \midrule
\textbf{2006 Lasso} & 96.5 & 0.974 & 0.993 &97.1 & 0.983 \\
\textbf{2006 Non-Neg.} & 82.6 & 0.912 & 0.820 &83.5 & 0.864 \\
\textbf{2007 Lasso} & 99.7 & 0.999 & 0.999 &99.7 & 0.999 \\
\textbf{2007 Non-Neg.} & 93.6 & 0.962 & 0.940 &93.0 & 0.951 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\end{table}
\begin{figure*}[!h]
\centering
\begin{adjustbox}{max size={1.00\textwidth}{0.85\textheight}}
\begin{tikzpicture}
\begin{groupplot}[
group style={
group name=myplot,
group size=4 by 1,
vertical sep=2.0cm,%
},
enlarge x limits=true,
]
\centering
\nextgroupplot[
title=MNIST,
legend pos=north east,
ymax=30,
xlabel=Target Confidence,
ylabel=Evasion Rate,
title style={font=\LARGE},
label style={font=\Large},
tick label style={font=\large},
legend style={font=\large},
]
\addplot +[red,mark=square,dashdotted,mark options={solid}] table [x=p, y=softmax_evade] {CSVs/mnist_fgsm.dat};
\addplot +[red,mark=triangle,dashed,mark options={solid}] table [x=p, y=softmax_evade] {CSVs/mnist_iga.dat};
\addplot +[blue,mark=o,dashdotted,mark options={solid}] table [x=p, y=nonneg_evade] {CSVs/mnist_fgsm.dat};
\addplot +[blue,mark=otimes,dashed,mark options={solid}] table [x=p, y=nonneg_evade] {CSVs/mnist_iga.dat};
\legend{Softmax FGSM, Softmax IGA, Non-Neg FGSM, Non-Neg IGA}
\nextgroupplot[
title=CIFAR10,
legend pos=north east,
ymax=30,
xlabel=Target Confidence,
title style={font=\LARGE},
label style={font=\Large},
tick label style={font=\large},
]
\addplot +[red,mark=square,dashdotted,mark options={solid}] table [x=p, y=softmax_evade] {CSVs/cifar10_target_fsgm.dat};
\addplot +[red,mark=triangle,dashed,mark options={solid}] table [x=p, y=softmax_evade] {CSVs/cifar10_target_iga.dat};
\addplot +[blue,mark=o,dashdotted,mark options={solid}] table [x=p, y=nonneg_evade] {CSVs/cifar10_target_fsgm.dat};
\addplot +[blue,mark=otimes,dashed,mark options={solid}] table [x=p, y=nonneg_evade] {CSVs/cifar10_target_iga.dat};
\nextgroupplot[
title=CIFAR100,
legend pos=north east,
ymax=8,
xlabel=Target Confidence,
xmode=log,
title style={font=\LARGE},
label style={font=\Large},
tick label style={font=\large},
]
\addplot +[red,mark=square,dashdotted,mark options={solid}] table [x=p, y=softmax_evade] {CSVs/cifar100_target_fgsm.dat};
\addplot +[red,mark=triangle,dashed,mark options={solid}] table [x=p, y=softmax_evade] {CSVs/cifar100_target_iga.dat};
\addplot +[blue,mark=o,dashdotted,mark options={solid}] table [x=p, y=nonneg_evade] {CSVs/cifar100_target_fgsm.dat};
\addplot +[blue,mark=otimes,dashed,mark options={solid}] table [x=p, y=nonneg_evade] {CSVs/cifar100_target_iga.dat};
\nextgroupplot[
title=Tiny ImageNet,
legend pos=north east,
ymax=8,
xlabel=Target Confidence,
xmode=log,
title style={font=\LARGE},
label style={font=\Large},
tick label style={font=\large},
]
\addplot +[red,mark=square,dashdotted,mark options={solid}] table [x=p, y=softmax_evade] {CSVs/mininet_fgsm.dat};
\addplot +[red,mark=triangle,dashed,mark options={solid}] table [x=p, y=softmax_evade] {CSVs/mininet_iga.dat};
\addplot +[blue,mark=o,dashdotted,mark options={solid}] table [x=p, y=nonneg_evade] {CSVs/mininet_fgsm.dat};
\addplot +[blue,mark=otimes,dashed,mark options={solid}] table [x=p, y=nonneg_evade] {CSVs/mininet_iga.dat};
\end{groupplot}
\end{tikzpicture}
\end{adjustbox}
\caption{
Targeted evasion rate (y-axis) as a function of the desired misclassification confidence $p$ (x-axis) for four datasets. Due to the differing ranges of interest, the right two figures are shown with a logarithmic x-axis.
}
\label{fig:image_target_resist}
\end{figure*}
Despite the drops in accuracy imposed by our non-negative constraint, the results are better than prior works in defending against weaker versions of the "Good Word" attack. For example, \citet{Jorgensen:2008:MIL:1390681.1390719} developed a defense based on multiple instance learning. Their approach, when attacked with all of their selected good words, had a precision of 0.772 and a recall of 0.743 on the 2006 TREC corpus. This was the best result of all their tested methods, but our non-negative approach achieves a superior 0.912 precision and 0.820 recall. While spam authors are less restricted in how they may modify their inputs, our approach forces them to move up to a more expensive threat model (removing and modifying features, rather than just adding), which we argue is of intrinsic value.
\citet{Demontis2017} had concluded that there existed an "implicit trade-off between security and sparsity" in building a secure model in their Android malware work. At least for the additive adversary, we provide evidence with our byte n-grams and spam models that this is not an absolute. In both cases we begin with a full feature set and the non-negative approach learns a sparse model, where all "good words" (or bytes) are given coefficient values of zero. As such we see that sparsity and security occur together to defend against the additive adversary.
\subsection{Image Classification}
Having investigated the performance of non-negative learning for malware detection, we now look at its potential for image classification. In particular, we find it is possible to leverage non-negative learning as discussed in \autoref{sec:image} to provide robustness against confident targeted attacks. That is, if the predicted class is $y_i$, the adversary wants to trick the model into predicting a class $y_j$, $j \neq i$, with prediction confidence $\geq p$.
For MNIST we will use LeNet. Our out of sample accuracy using a normal model is 99.2\%, while the model with non-negative constrained dense layers achieves 98.6\%. For CIFAR 10 and 100 we use a ResNet based architecture.\footnote{v1 model taken from \url{https://tinyurl.com/keras-cifar10-restnet}} For CIFAR 10 we get 92.3\% accuracy normally, and 91.6\% with our non-negative approach. On CIFAR 100 the same architecture gets 72.2\% accuracy normally, and 71.7\% with our non-negative approach. For Tiny ImageNet, we also use a ResNet architecture with the weights of all but the final dense layers initialized from pretraining on ImageNet.\footnote{ResNet50 built-in application from \url{https://keras.io/applications/\#resnet50}} The normal model has an accuracy of 56.6\%, and the constrained model 56.3\%.
The results as a function of the target confidence $p$ can be seen in \autoref{fig:image_target_resist}.
An interesting artifact of our approach is that the non-negative networks are easier to fool for low-confidence errors. We posit this is due to the probability distribution over classes becoming nearly uniform under attack. On CIFAR100 the y-axis is truncated for legibility, since the evasion rate of FGSM reaches 93\% and that of IGA 99\%. Similarly, for non-negative Tiny ImageNet, FGSM and IGA achieve 14\% and 17\% evasion rates when $p=0.005$.
Despite these initially high evasion rates, we can see that in all cases the success of targeted adversarial attacks reaches 0\% as the desired probability $p$ increases. For MNIST and CIFAR10, which have only 10 classes, this occurs by a target confidence of at most 30\%. As more classes are added, the difficulty of the attack increases: for Tiny ImageNet and CIFAR100, targeted attacks already fail at target confidences of $\leq 2\%$.
If \textit{targeted} attacks from IGA were the only type of attack we needed to worry about, these results would also allow us to use the confidence as a method of detecting attacks. For example, CIFAR10 had the weakest results, needing a target confidence of 30\% before targeted attacks failed. The average predicted confidence of the non-negative network on the test set was 93.8\%, so the confidence itself can serve as a measure of network robustness. If we default to a "no-answer" for everything with a confidence of 40\% or less on CIFAR10, treating anything at or below that level as an attack and hence an error, the accuracy would have only gone down 1.2\%.
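This rule can be sketched as follows; the threshold and helper name are illustrative.
\begin{verbatim}
# Illustrative sketch of the confidence-based "no-answer" rule.
import numpy as np

def accuracy_with_rejection(probs, labels, threshold=0.4):
    # Predictions at or below `threshold` are refused and counted as
    # errors, so this is accuracy over ALL inputs.
    pred = probs.argmax(axis=1)
    answered = probs.max(axis=1) > threshold
    return float(((pred == labels) & answered).mean())
\end{verbatim}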
In order to determine if non-negative constraints are merely acting as a gradient obfuscation technique \citep{Athalye2018}, we also attempted a black-box attack by attacking a substitute model without the non-negative constraints and assessing whether the perturbed images created by this attack were able to fool the constrained model. In order to make the attack as strong as possible, the unconstrained network was the same as the network that was used to warm-start the non-negative training. This should maximize the transferability of attacks from one to the other. Despite this similarity, transferred attacks had only a 1.042\% success rate, which is one reason we believe that non-negative constraints are not merely a form of gradient obfuscation.
We emphasize that these results are evidence that we can extend non-negativity to provide benefit in the multi-class case. Our approach appears to have a lower cost in the multi-class case than in the binary one, as accuracy drops by less than 1\% on each dataset. While the cost is lower, its utility is lower as well. Our multi-class non-negative approach provides no benefit against \textit{untargeted} attacks --- where any error by the model is acceptable to the attacker --- even under the weaker FGSM attack. When even stronger attacks like Projected Gradient Descent are used, our approach is also defeated in the targeted scenario. Under the moderate-strength IGA attack, we also see that susceptibility to evasion is increased for low-confidence evasions. In total, we view these results as indicative that non-negativity can have utility for the multi-class case and provide some level of benefit that is intrinsically interesting, but more work is needed to determine a better way to apply the technique.
\section{Conclusion} \label{sec:conclusion}
We have shown that an increased robustness to adversarial examples can be achieved through non-negative weight constraints. Constrained binary classifiers can only identify features associated with the positive class during test time. Therefore, the only method for fooling the model is to remove features associated with that class. This method is particularly useful in security-centric domains like malware detection, which have well-known adversarial motivation. Forcing adversaries to remove maliciousness in these domains is the desired outcome. We have also described a technique to generalize this robustness to multi-class domains such as image classification. We showed a significant increase in robustness to targeted adversarial attacks while minimizing the amount of accuracy lost in doing so.
|
2,869,038,155,492 | arxiv | \section{Introduction}
Inverse semigroups provide an algebraic framework to study partial dynamical systems and groupoids. This relation is described by noncommutative Stone duality, extending the classical duality between locally compact, totally disconnected Hausdorff spaces and generalized Boolean algebras to a duality between ample Hausdorff groupoids and so-called Boolean inverse semigroups. This cornerstone of the modern theory of inverse semigroups was introduced by Lawson and Lenz \cite{lawson2010, lawson2012, lawsonlenz13} following ideas of Exel \cite{exel2009} and Lenz \cite{lenz2008}. In a nutshell, noncommutative Stone duality associates to an ample Hausdorff groupoid the Boolean inverse semigroup of its compact open bisections. Various filter constructions can be employed to describe the inverse direction of this duality \cite{lawson2012, lawsonmargolissteinberg2013, armstrongclarkhuefjoneslin2020}.
Since the discovery of noncommutative Stone duality, an important aspect of this theory has been to establish a dictionary between properties of ample groupoids and Boolean inverse semigroups. A summary of the results obtained so far can be found in Lawson's survey article \cite{lawson2019-survery}. From an operator algebraic and representation theoretic point of view, one fundamental aspect of groupoids and semigroups is the property of being CCR or type I. These notions arise upon considering groupoid and semigroup \ensuremath{\text{C}^*}-algebras. So far, neither of these properties has been addressed by the dictionary of noncommutative Stone duality. Our first two main results fill this gap and establish algebraic characterizations of CCR and type I Boolean inverse semigroups, matching Clark's characterization of the respective properties for groupoids \cite{clark07}.
Our characterisation roughly takes the form of forbidden subquotients, in analogy with the theme of forbidden minors in graph theory and other fields of combinatorics. The Boolean inverse semigroup $B_\ensuremath{{(\mathrm{T}_1)}}$ featuring in the next statement is introduced in \th\ref{ex:btone}. It is the algebraically simplest possible Boolean inverse semigroup which is not CCR. We refer the reader to Sections \ref{sec:inverse-semigroups} and \ref{sec:nc-stone-duality} for further information about Boolean inverse semigroups, their corners and group quotients.
\begin{introtheorem}
\th\label{introthm:ccr-groupoid}
Let $\mathcal{G}$ be a second countable, ample Hausdorff groupoid. Then $\mathcal{G}$ is CCR if and only if the following two conditions are satisfied.
\begin{itemize}
\item No corner of $\Gamma(\mathcal{G})$ has a non virtually abelian group quotient, and
\item $\Gamma(\mathcal{G})$ does not have $B_\ensuremath{{(\mathrm{T}_1)}}$ as a subquotient.
\end{itemize}
\end{introtheorem}
\begin{introtheorem}
\th\label{introthm:type-I-groupoid}
Let $\mathcal{G}$ be a second countable, ample Hausdorff groupoid. Then $\mathcal{G}$ is type ${\rm I}$ if and only if the following two conditions are satisfied.
\begin{itemize}
\item No corner of $\Gamma(\mathcal{G})$ has a non virtually abelian group quotient, and
\item $\Gamma(\mathcal{G})$ does not have an infinite, monoidal and $0$-simplifying subquotient.
\end{itemize}
\end{introtheorem}
Historically, the notions of CCR and type I \ensuremath{\text{C}^*}-algebras were motivated by problems in representation theory. Roughly speaking, groups enjoying one of these properties have a well-behaved unitary dual. These properties were originally studied in the context of Lie groups and algebraic groups \cite{harish-chandra53, dixmier57, bernstein74-type-I,bekkaechterhoff2020}, and other classes of non-discrete groups were considered more recently \cite{ciobotaru15,houdayerraum16-non-amenable}. The question which discrete groups are CCR and type I was answered conclusively by Thoma \cite{thoma68}, characterizing them as the virtually abelian groups. A more direct proof of Thoma's result was obtained by Smith \cite{smith72}, and Thoma's original proof was the basis for a Plancherel formula for general discrete groups recently obtained by Bekka \cite{bekka2020-plancherel}. The fundamental nature of CCR and type I \ensuremath{\text{C}^*}-algebras also led to the study of other group-like objects, such as the aforementioned characterisation of groupoids with these properties by Clark \cite{clark07}. Compared with Thoma's characterisation of discrete type I groups, our Theorems \ref{introthm:ccr-groupoid} and \ref{introthm:type-I-groupoid} can be considered an analogue for Boolean inverse semigroups. Naturally, the question arises whether a similar characterisation can be obtained for inverse semigroups. The bridge between these structures is provided by the booleanization of an inverse semigroup \cite{lawsonlenz13, lawson2019}, which we denote by $B(S)$. In view of the direct algebraic construction of the booleanization we expose in Section \ref{sec:inverse-semigroups}, the next two results give an intrinsic characterisation of CCR and type I inverse semigroups.
\begin{introtheorem}
\th\label{introthm:ccr-semigroup}
Let $S$ be a discrete inverse semigroup. Then $S$ is CCR if and only if the following two conditions are satisfied.
\begin{enumerate}
\item $S$ does not have any non virtually abelian group subquotient, and
\item $B(S)$ does not have $B_\ensuremath{{(\mathrm{T}_1)}}$ as a subquotient.
\end{enumerate}
\end{introtheorem}
\begin{introtheorem}
\th\label{introthm:type-I-semigroup}
Let $S$ be an inverse semigroup. Then $S$ is type I if and only if the following two conditions are satisfied.
\begin{enumerate}
\item $S$ does not have any non virtually abelian group subquotient, and
\item $B(S)$ does not have an infinite, monoidal, $0$-simplifying subquotient.
\end{enumerate}
\end{introtheorem}
The booleanization of an inverse semigroup was first introduced in \cite{lawson2019}. Our description is more natural from an operator algebraic point of view and equivalent to the original definition of Lawson.
This paper contains five sections. After the introduction, we expose necessary preliminaries on (Boolean) inverse semigroups, ample groupoids and noncommutative Stone duality. In Section \ref{sec:separation-properties}, we study separation properties of the orbit space of a groupoid and obtain algebraic characterisations of groupoids with $\ensuremath{{(\mathrm{T}_1)}}$ and $\ensuremath{{(\mathrm{T}_0)}}$ orbit spaces. In Section~\ref{sec:isotropy-groups}, we relate isotropy groups of a groupoid to subquotients of the associated Boolean inverse semigroup. The proofs of our main results are collected in Section \ref{sec:main-proofs}.
\subsection*{Acknowledgements}
S.R.\ was supported by the Swedish Research Council through grant number 2018-04243 and by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement no. 677120-INDEX). He would like to thank Piotr Nowak and Adam Skalski for their hospitality during the author's stay at IMPAN. G.F.\ would like to thank Piotr Nowak for the hospitality and financial support during his visit to IMPAN in autumn 2020.
\section{Preliminaries}
\label{sec:preliminaries}
In this section, we recall all notations relevant to our work and introduce some elementary constructions that will be important throughout the text. For inverse semigroups the standard reference is \cite{lawson1998}. The survey article \cite{lawson2019-survery} describes recent advances and the state of the art in inverse semigroup theory. Further, \cite{lawson2019} and \cite[Sections 3 and 4]{lawson2019-survery} provide an introduction to Boolean inverse semigroups. It has to be pointed out that the definition of Boolean inverse semigroups used in the literature has changed over the years, so that care is due when consulting older material. The standard reference for groupoids and their \ensuremath{\text{C}^*}-algebras is \cite{renault80}. For groupoids attached to inverse semigroups and Boolean inverse semigroups, we refer to \cite{li2017-mfo, lawson2012}.
\subsection{Inverse semigroups and Boolean inverse semigroups}
\label{sec:inverse-semigroups}
In this section we recall the notions of inverse semigroups, Boolean inverse semigroups and the link between them provided by the universal enveloping Boolean inverse semigroup of an inverse semigroup, termed booleanization.
An \emph{inverse semigroup} $S$ is a semigroup in which for every element $s \in S$ there is a unique element $s^* \in S$ satisfying $ss^*s = s$ and $s^*ss^* = s^*$. The set of idempotents $E(S) \subseteq S$ forms a commutative meet-semilattice, when endowed with the partial order $e \leq f$ if $e f = e$. We denote by $\text{supp} s = s^*s$ and $\ensuremath{\mathop{\mathrm{im}}} s = ss^*$ the support and the image of an element $s \in S$, which are idempotents. The partial order on $E(S)$ extends to $S$ by declaring $s \leq t$ if $\text{supp} s \leq \text{supp} t$ and $t \text{supp} s = s$. Given an inverse semigroup $S$, we denote by $S_0$ the inverse semigroup with zero obtained by formally adjoining an absorbing idempotent $0$. In particular, we will make use of groups with zero. A \textit{character} on $E(S)$ is a non-zero semilattice homomorphism to $\{ 0, 1 \}$. We will denote by $\widehat{E(S)}$ the space of characters on $E(S)$ equipped with the topology of pointwise convergence.
Let $S$ be an inverse semigroup with zero. Two elements $s,t \in S$ are called \emph{orthogonal}, denoted $s \perp t$, if $s t^* = 0 = s^*t$. A \emph{Boolean inverse semigroup} is an inverse semigroup with zero, whose semilattice of idempotents is a generalized Boolean algebra such that finite families of pairwise orthogonal elements have joins. Recall that a generalized Boolean algebra can be conveniently described as a Boolean rng. Given two Boolean inverse semigroups $B$ and $C$, and an inverse semigroup morphism $\phi : B \to C$, we say that $\phi$ is a morphism of Boolean inverse semigroups if it preserves the joins of orthogonal elements.
To every inverse semigroup $S$, one associates the \emph{enveloping Boolean inverse semigroup} or \emph{booleanization} $S \subseteq B(S)$, which satisfies the universal property that for every Boolean inverse semigroup $B$ and every semigroup homomorphism $S \to B$, there is a unique extension to $B(S)$ such that the following diagram commutes.
\begin{gather*}
\xymatrix{
S \ar[r] \ar[d] & B \\
B(S) \ar[ur]
}
\end{gather*}
The enveloping Boolean inverse semigroup is conveniently described as the left adjoint of the forgetful functor from Boolean inverse semigroups to inverse semigroups with zero \cite{lawson2019}. We will use the following concrete description. Consider an inverse semigroup $S$. The semigroup algebra $I(S) = F_2[E(S)]$ is a Boolean rng whose characters (as a rng) are in one-to-one correspondence with characters of $E(S)$. We consider the following set of formal sums.
\begin{gather*}
C(S) = \{\sum_i s_i e_i \mid s_i \in S, e_i \in I(S) \text{ and } (e_i)_i \text{ and } (s_ie_is_i^*)_i \text{ are pairwise orthogonal}\}.
\end{gather*}
We consider the equivalence relation given by the following condition. We have $\sum_i s_i e_i \sim \sum_j t_j f_j$ if and only if $e_i f_j \neq 0$ implies that there is some $p \in E(S)$ such that $s_i p = t_j p$. One readily checks that the quotient of $C(S)$ by this equivalence relation is a Boolean inverse semigroup. Thanks to the existence of joins of orthogonal families, every map of semigroups with zero into a Boolean inverse semigroup $S_0 \to B$ extends uniquely to a map ${C(S)}/{\sim} \to B$. By uniqueness of adjoint functors, this shows that ${C(S)}/{\sim} \cong B(S)$ is the booleanization of $S$.
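To illustrate this description in the simplest case, consider a group $G$ and the associated group with zero $G_0$; we include this example only for orientation. Here $E(G_0) = \{0, 1\}$ is a generalized Boolean algebra, and no two non-zero elements of $G_0$ are orthogonal, since $st^* \neq 0$ for all $s, t \in G$. Orthogonal families in $G_0$ therefore contain at most one non-zero element, so their joins exist trivially and $G_0$ is itself a Boolean inverse semigroup. Since every homomorphism of semigroups with zero out of $G_0$ automatically preserves these trivial joins, $G_0$ satisfies the universal property of its own booleanization, whence $B(G_0) \cong G_0$.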
\subsection{Ample groupoids}
\label{sec:ample-groupoids}
In this section we fix our notation for groupoids and recall some basic results. We recommend \cite{renault80} and \cite{paterson1999} as resources on the topic.
Given a groupoid $\mathcal{G}$, we denote its set of units by $\ensuremath{\G}^{(0)}$ and the range and source map by $r: \mathcal{G} \to \ensuremath{\G}^{(0)}$ and $d: \mathcal{G} \to \ensuremath{\G}^{(0)}$, respectively. Throughout the text, $\mathcal{G}$ will be a \textit{topological groupoid} meaning it is equipped with a topology making the multiplication and inversion continuous. A \textit{bisection} of a topological groupoid $\mathcal{G}$ is a subset $U \subseteq \mathcal{G}$ such that the restrictions $d|_U$ and $r|_U$ are homeomorphisms onto their images. We call $\mathcal{G}$ \emph{{\'e}tale} if its topology has a basis consisting of bisections. It is called \emph{ample} if its topology has a basis consisting of compact open bisections.
If $\mathcal{G}$ is a groupoid and $A \subseteq \ensuremath{\G}^{(0)}$ is a set of units, we denote by $\mathcal{G}|_A = \{g \in \mathcal{G} \mid d(g), r(g) \in A\}$ the restriction of $\mathcal{G}$ to $A$. It need not be {\'e}tale, even if $\mathcal{G}$ is so. If $\mathcal{G}$ is {\'e}tale and $A \subseteq \ensuremath{\G}^{(0)}$ is open, then $\mathcal{G}|_A$ is {\'e}tale too.
The isotropy at $x \in \ensuremath{\G}^{(0)}$ is $\mathcal{G}|_x = \mathcal{G}|_{\{x\}}$. We denote by $\text{Iso}(\mathcal{G})$ the union over all isotropy groups, considered as subsets of $\mathcal{G}$. Then $\mathcal{G}$ is \emph{effective} if the interior of $\text{Iso}(\mathcal{G}) \setminus \ensuremath{\G}^{(0)}$ is empty. Given a unit $x \in \ensuremath{\G}^{(0)}$, its orbit is denoted by $\mathcal{G} x \subseteq \ensuremath{\G}^{(0)}$. We call $\mathcal{G}$ \emph{minimal} if all its orbits are dense.
The set of orbits of a topological groupoid inherits a natural topology. We will be interested in separation properties of this \emph{orbit space}. First, we observe that the orbit space of a groupoid is a \ensuremath{{(\mathrm{T}_1)}}-space if and only if its orbits are closed. An analogous characterisation of groupoids whose orbit space is a \ensuremath{{(\mathrm{T}_0)}}-space is the subject of the Ramsey-Effros-Mackey dichotomy, which we now recall.
\begin{proposition}
\th\label{prop:tzero-characterisations}
Let $\mathcal{G}$ be a second countable ample groupoid. Then the following statements are equivalent.
\begin{itemize}
\item The orbit space of $\mathcal{G}$ is \ensuremath{{(\mathrm{T}_0)}}.
\item No orbit of $\mathcal{G}$ is self-accumulating.
\item Orbits of $\mathcal{G}$ are locally closed.
\end{itemize}
\end{proposition}
\begin{proof}
The equivalence between the first two items follows from \cite[Theorem 2.1, (2) and (4)]{ramsey1990}. In order to prove the equivalence to the last item, we want to apply \cite[Theorem 2.1, (4) and (5)]{ramsey1990}. To this end, we have to show that the equivalence relation induced by $\mathcal{G}$ on $X = \ensuremath{\G}^{(0)}$ is an $F_\sigma$-subset of $X \times X$. The map $(r, d): \mathcal{G} \to X \times X$ restricted to any compact open bisection of $\mathcal{G}$ has a closed image. We conclude with the observation that there are only countably many compact open bisections, since $\mathcal{G}$ is second countable.
\end{proof}
\subsection{Noncommutative Stone duality}
\label{sec:nc-stone-duality}
Classical Stone duality establishes an equivalence of categories between locally compact totally disconnected Hausdorff topological spaces and generalized Boolean algebras. If $X$ is such a space, the generalized Boolean algebra associated with it is $\ensuremath{\mathrm{CO}}(X)$, the algebra of compact open subsets of $X$. Vice versa, given a generalized Boolean algebra $B$, its spectrum $\widehat{B}$ is a locally compact totally disconnected Hausdorff topological space. Noncommutative Stone duality generalises this correspondence to an equivalence between ample Hausdorff groupoids and Boolean inverse semigroups. We refer the reader to \cite{lawson2010,lawson2012}.
Given an ample Hausdorff groupoid $\mathcal{G}$, we denote by $\Gamma(\mathcal{G})$ the set of compact open bisections of $\mathcal{G}$, which is a Boolean inverse semigroup. Conversely, given a Boolean inverse semigroup $B$, the set of ultrafilters for the natural order on $B$ forms an ample Hausdorff groupoid $\mathcal{G}(B)$. It follows from \cite[Duality Theorem]{lawson2012} that these two operations are dual to each other. See also \cite[Theorem 4.4]{lawson2019-survery} and \cite[Theorem 4.2]{lawsonvdovina2019}. We will not need to specify the morphisms of the categories involved in this duality. Invoking \cite[Theorem 1.2]{lawsonvdovina2019}, the Paterson groupoid of an inverse semigroup $S$ can now be identified with $\mathcal{G}(B(S))$.
Noncommutative Stone duality establishes a dictionary between properties of Boolean inverse semigroups and ample Hausdorff groupoids. We recall some of its aspects that will be needed in this piece.
\subsubsection*{Corners and subgroupoids}
Given an ample Hausdorff groupoid $\mathcal{G}$, idempotents in $\Gamma(\mathcal{G})$ correspond to compact open subsets of $\ensuremath{\G}^{(0)}$. Given such an idempotent $p \in E(\Gamma(\mathcal{G}))$ corresponding to $U \subseteq \ensuremath{\G}^{(0)}$, the \emph{corner} $p \Gamma(\mathcal{G}) p$ is naturally isomorphic with $\Gamma(\mathcal{G}|_U)$. Further, there is a one-to-one correspondence between open subgroupoids of $\mathcal{G}$ and Boolean inverse subsemigroups $B \subseteq \Gamma(\mathcal{G})$, assigning to a Boolean inverse semigroup $B \subseteq \Gamma(\mathcal{G})$ the groupoid $\bigcup B \subseteq \mathcal{G}$.
\subsubsection*{Morphisms}
Noncommutative Stone duality does not cover all morphisms that one would naturally consider in the respective category. This is why it is necessary and useful to note the following two statements. They ensure compatibility of noncommutative Stone duality with restriction maps. The next proposition is a reformulation of \cite[Proposition 5.10]{lawsonvdovina2019}. See also \cite{lenz2008}.
\begin{proposition}
\th\label{prop:restriction-induces-bis-map}
Let $\mathcal{G}$ be an ample groupoid, $A \subseteq \ensuremath{\G}^{(0)}$ a closed $\mathcal{G}$-invariant set. Then the restriction map $\ensuremath{\mathrm{CO}}(\ensuremath{\G}^{(0)}) \to \ensuremath{\mathrm{CO}}(A)$ extends to a unique homomorphism $\Gamma(\mathcal{G}) \to \Gamma(\mathcal{G}|_A)$ with the universal property that for every other homomorphism $\pi: \Gamma(\mathcal{G}) \to B$ such that
\begin{gather*}
\xymatrix{
\ensuremath{\mathrm{CO}}(\ensuremath{\G}^{(0)}) \ar[r]^{\pi|_{\ensuremath{\mathrm{CO}}(\ensuremath{\G}^{(0)})}} \ar[d]_{\text{res}_A} & E(B) \\
\ensuremath{\mathrm{CO}}(A) \ar@{-->}[ur]
}
\end{gather*}
commutes, there is a unique extension to a commutative diagram
\begin{gather*}
\xymatrix{
\Gamma(\mathcal{G}) \ar[r]^{\pi} \ar[d]_{\text{res}_A} & B \\
\Gamma(\mathcal{G}|_A) \ar@{-->}[ur]
}
\end{gather*}
\end{proposition}
The following converse to \th\ref{prop:restriction-induces-bis-map} can be considered a reformulation of \cite[Lemma 5.6]{lawsonvdovina2019}.
\begin{lemma}
\th\label{lem:restriction-implies-invariant}
Let $\mathcal{G}$ be an ample groupoid and $A \subseteq \ensuremath{\G}^{(0)}$ a closed subset, such that the restriction $\mathrm{res}_A: \mathrm{CO}(\ensuremath{\G}^{(0)}) \to \mathrm{CO}(A)$ extends to a map of inverse semigroups $\Gamma(\mathcal{G}) \to B$, for some inverse semigroup $B$. Then $A$ is $\mathcal{G}$-invariant.
\end{lemma}
\begin{proof}
Denote by $\pi: \Gamma(\mathcal{G}) \to B$ a map as in the statement of the lemma. Take $s \in \Gamma(\mathcal{G})$ satisfying $\text{supp}(s) \cap A = \emptyset$, so that $\pi(s^*s) = \mathrm{res}_A(s^*s) = 0$. Then the calculation
\begin{gather*}
\mathrm{res}_A(ss^*) = \pi(ss^*) = \pi(s) \pi(s^*s) \pi(s^*) = 0
\end{gather*}
shows that $\ensuremath{\mathop{\mathrm{im}}}(s) \cap A = \emptyset$. So $A$ is indeed $\mathcal{G}$-invariant.
\end{proof}
\subsubsection*{Minimal groupoids and $0$-simplifying Boolean inverse semigroups}
We introduce the algebraic notion corresponding to minimality of groupoids following \cite{steinbergszakacs2020}.
\begin{definition}
\th\label{def:zero-simplifying}
Let $B$ be a Boolean inverse semigroup. An ideal $I$ of $B$ is called \emph{additive}, if it is closed under joins of orthogonal elements. Further, $B$ is called \emph{$0$-simplifying} if its only additive ideals are $\{0\}$ and $B$.
\end{definition}
The name $0$-simplifying stems from the fact that additive ideals are exactly the kernels $\pi^{-1}(0)$ of homomorphisms of Boolean inverse semigroups $B \to C$.
\begin{proposition}[{\cite[Proposition 2.7]{steinbergszakacs2020}}]
\th\label{prop:0simplifying}
Let $\mathcal{G}$ be an ample Hausdorff groupoid. Then $\mathcal{G}$ is minimal if and only if $\Gamma(\mathcal{G})$ is $0$-simplifying.
\end{proposition}
\subsection{CCR and type I groupoids}
\label{sec:ccr-type-I}
We refer the reader to \cite{murphy90} for the basic theory of \ensuremath{\text{C}^*}-algebras. To each of the objects considered in Sections \ref{sec:inverse-semigroups} and \ref{sec:ample-groupoids} one can associate a \ensuremath{\text{C}^*}-algebra. For groupoid \ensuremath{\text{C}^*}-algebras $\ensuremath{\text{C}^*}(\mathcal{G})$, we refer the reader to \cite{renault80}. The \ensuremath{\text{C}^*}-algebras $\ensuremath{\text{C}^*}(S)$ associated with inverse semigroups are explained in \cite{kumjian1984, paterson1999}. In particular, it is known that $\ensuremath{\text{C}^*}(S) \cong \ensuremath{\text{C}^*}(\mathcal{G}(S))$ canonically. We refer to \cite[Section 5.6]{murphy90} for details on the following two notions from representation theory of \ensuremath{\text{C}^*}-algebras.
\begin{definition}
\th\label{def:ccr-type-I}
Let $A$ be a \ensuremath{\text{C}^*}-algebra. Then $A$ is called \emph{CCR} if the image of every irreducible *-representation of $A$ equals the compact operators. We call $A$ \emph{GCR} or \emph{type I} if the image of every irreducible *-representation of $A$ contains the compact operators.
Inverse semigroups and groupoids are called CCR or type I, respectively, if their \ensuremath{\text{C}^*}-algebras have this property.
\end{definition}
In this article, the notions of CCR and type I are accessed solely through the following special case of a result of Clark combined with the characterisation of discrete type I groups by Thoma.
\begin{theorem}[{\cite[Theorems 1.3 and 1.4]{clark07} and \cite{thoma68}}]
\th\label{thm:clark}
Let $\mathcal{G}$ be a second countable, {\'e}tale, Hausdorff groupoid. Then $\mathcal{G}$ is CCR if and only if all the isotropy groups are virtually abelian and the orbit space of $\mathcal{G}$ is \ensuremath{{(\mathrm{T}_1)}}. Further, $\mathcal{G}$ is type I if and only if all the isotropy groups are virtually abelian and the orbit space of $\mathcal{G}$ is \ensuremath{{(\mathrm{T}_0)}}.
\end{theorem}
\section{Separation properties of orbit spaces}
\label{sec:separation-properties}
In this section we consider separation properties of orbit spaces and provide algebraic characterisations of ample groupoids whose orbit space is a \ensuremath{{(\mathrm{T}_1)}}-space and a \ensuremath{{(\mathrm{T}_0)}}-space, respectively.
Let us start by introducing the simplest example of a groupoid whose orbit space is not \ensuremath{{(\mathrm{T}_1)}}.
\begin{example}
\th\label{ex:btone}
Let $\mathcal{G}_\ensuremath{{(\mathrm{T}_1)}}$ be the groupoid associated with the equivalence relation $\{(n,m) \in (\mathbb{N} \cup \{\infty\})^2 \mid n + m < \infty\} \cup \{(\infty, \infty)\}$ on $\mathbb{N} \cup \{\infty\}$. Denote by $B_\ensuremath{{(\mathrm{T}_1)}} = \Gamma(\mathcal{G}_\ensuremath{{(\mathrm{T}_1)}})$ the Boolean inverse semigroup of compact open bisections of $\mathcal{G}_\ensuremath{{(\mathrm{T}_1)}}$. Then $B_\ensuremath{{(\mathrm{T}_1)}}$ admits a presentation with generators $(s_n)_{n \in \mathbb{N}}$ and $f$ satisfying the relations
\begin{align*}
& \ensuremath{\mathop{\mathrm{im}}} s_n = \text{supp} s_{n+1} & \text{ for all } n \\
& \text{supp} s_n \perp \text{supp} s_m & \text{ for all } n \neq m \\
& f^2 = f \geq \text{supp} s_n & \text{ for all } n
\end{align*}
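Note that the unit space of $\mathcal{G}_\ensuremath{{(\mathrm{T}_1)}}$ is $\mathbb{N} \cup \{\infty\}$ with exactly two orbits,
\begin{gather*}
\mathcal{G}_\ensuremath{{(\mathrm{T}_1)}} n = \mathbb{N} \text{ for } n \in \mathbb{N}
\qquad \text{and} \qquad
\mathcal{G}_\ensuremath{{(\mathrm{T}_1)}} \infty = \{\infty\}.
\end{gather*}
Since $\infty \in \overline{\mathbb{N}}$, the orbit $\mathbb{N}$ is not closed, so the orbit space of $\mathcal{G}_\ensuremath{{(\mathrm{T}_1)}}$ is indeed not \ensuremath{{(\mathrm{T}_1)}}.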
\end{example}
\begin{proposition}
\th\label{prop:tone}
Let $\mathcal{G}$ be an ample Hausdorff groupoid. Then the orbit space of $\mathcal{G}$ is not $\ensuremath{{(\mathrm{T}_1)}}$ if and only if $\Gamma(\mathcal{G})$ has $B_{\ensuremath{{(\mathrm{T}_1)}}}$ as a subquotient.
\end{proposition}
\begin{proof}
Assume that the orbit space of $\mathcal{G}$ is not $\ensuremath{{(\mathrm{T}_1)}}$. Then by \th\ref{prop:tzero-characterisations} there is some non-closed orbit, say $\mathcal{G} x \subseteq \ensuremath{\G}^{(0)}$. Let $(x_n)_{n \in \mathbb{N}}$ be a convergent sequence from $\mathcal{G} x$ whose limit $x_\infty = \lim x_n$ does not lie in $\mathcal{G} x$. Since $\mathcal{G}$ is ample, there are bisections $(s_n)_{n \in \mathbb{N}}$ in $\Gamma(\mathcal{G})$ such that $s_n \cap d^{-1}(x_n) \cap r^{-1}(x_{n+1}) \neq \emptyset$ for all $n \in \mathbb{N}$. Without loss of generality, we may assume that $(\text{supp} s_n)_n$ are pairwise disjoint subsets of $\ensuremath{\G}^{(0)}$. Let $B \subseteq \Gamma(\mathcal{G})$ be the Boolean inverse semigroup generated by all $(s_n)_{n \in \mathbb{N}}$ together with the idempotents of $\Gamma(\mathcal{G})$. Denote by $\ensuremath{\mathcal{H}} = \bigcup B$ the open subgroupoid of $\mathcal{G}$ associated with $B$. The subset $A = \{x_n \mid n \in \mathbb{N} \cup \{\infty\}\} \subseteq \ensuremath{\G}^{(0)}$ is $\ensuremath{\mathcal{H}}$-invariant, so that by \th\ref{prop:restriction-induces-bis-map} there is a quotient map of Boolean inverse semigroups $B \cong \Gamma(\ensuremath{\mathcal{H}}) \to \Gamma(\ensuremath{\mathcal{H}}|_A)$. It suffices to note that $\ensuremath{\mathcal{H}}|_A \cong \ensuremath{\mathcal{G}}_{\ensuremath{{(\mathrm{T}_1)}}}$ so that $\Gamma(\ensuremath{\mathcal{H}}|_A) \cong B_{\ensuremath{{(\mathrm{T}_1)}}}$ follows.
Assume now that $B_\ensuremath{{(\mathrm{T}_1)}}$ is a subquotient of $\Gamma(\mathcal{G})$. Denote by $(s_n)_{n \in \mathbb{N}}$ and $f$ preimages in $\Gamma(\mathcal{G})$ of the generators of $B_\ensuremath{{(\mathrm{T}_1)}}$. Replacing $s_n$ by $s_n f$, we may suppose that $\text{supp} s_n \leq f$ holds for all $n \in \mathbb{N}$. Further, writing $p_n = \bigvee_{k < n} \text{supp} s_k$ and replacing $s_n$ by $s_n(f - p_n)$, we may assume that $(\text{supp} s_n)_n$ are pairwise orthogonal. Write $t_n = s_{n-1} \dotsm s_0$ and $q_n = t_n^* (\text{supp} s_n) t_n$. Then every finite subfamily of $(q_n)_{n \in \mathbb{N}}$ has a non-zero meet, since the same statement holds true for their images in $B_\ensuremath{{(\mathrm{T}_1)}}$. Denoting by $U_n \subset \ensuremath{\G}^{(0)}$ the compact open subset corresponding to $q_n$, it follows that $\bigcap_{n \in \mathbb{N}} U_n \neq \emptyset$. Choose some $x_0$ in this intersection and define $x_n = t_n x_0 \in \text{supp} s_n$. Since $\text{supp} s_n \leq f$ for all $n \in \mathbb{N}$, the sequence $(x_n)_n$ lies in the compact set corresponding to $f$, so that there is a convergent subsequence $(x_{n_k})_k$ of $(x_n)$. So the orbit $\mathcal{G} x_0$ has an accumulation point, which proves that the orbit space of $\mathcal{G}$ is not $\ensuremath{{(\mathrm{T}_1)}}$.
\end{proof}
\begin{remark}
\th\label{rem:tone-no-corner}
Comparing the statement of \th\ref{prop:tone} with \th\ref{prop:tzero}, it is natural to ask whether it is possible to find a corner of $\Gamma(\mathcal{G})$ that has $B_\ensuremath{{(\mathrm{T}_1)}}$ as a quotient, rather than finding $B_\ensuremath{{(\mathrm{T}_1)}}$ as a subquotient of $\Gamma(\mathcal{G})$. This is not possible, as the following example shows. We consider the groupoid $\mathcal{G}$ arising from the equivalence relation on $\beta \mathbb{N}$, the Stone-{\v C}ech compactification of $\mathbb{N}$, that relates all elements from $\mathbb{N}$, and nothing else. It is straightforward to check that $\mathcal{G}$ is an ample groupoid, since every point of $\mathbb{N}$ is isolated in $\beta \mathbb{N}$. Corners of $\Gamma(\mathcal{G})$ correspond to restrictions of $\mathcal{G}$ to compact open subsets, as explained in Section \ref{sec:nc-stone-duality}. Compact open subsets of $\beta \mathbb{N}$ are exactly of the form $U = \overline{D}$ for subsets $D \subset \mathbb{N}$. Clearly if $D$ is finite, $\mathcal{G}_\ensuremath{{(\mathrm{T}_1)}}$ cannot arise as a restriction of $\mathcal{G}|_U = \mathcal{G}|_D$. But every infinite subset of $\mathbb{N}$ has infinitely many accumulation points in $\beta \mathbb{N}$, so $\mathcal{G}_\ensuremath{{(\mathrm{T}_1)}}$ cannot arise as a restriction of $\mathcal{G}$ at all.
\end{remark}
\begin{proposition}
\th\label{prop:tzero}
Let $\mathcal{G}$ be a second countable, ample Hausdorff groupoid. Then the following statements are equivalent.
\begin{itemize}
\item The orbit space of $\mathcal{G}$ is not $\ensuremath{{(\mathrm{T}_0)}}$.
\item A corner of $\Gamma(\mathcal{G})$ has an infinite, monoidal and $0$-simplifying quotient.
\item There is an infinite, monoidal and $0$-simplifying subquotient of $\Gamma(\mathcal{G})$.
\end{itemize}
\end{proposition}
\begin{proof}
Assume first that the orbit space of $\mathcal{G}$ is not $\ensuremath{{(\mathrm{T}_0)}}$. By \th\ref{prop:tzero-characterisations}, there exists a self-accumulating orbit of $\mathcal{G}$. Denote its closure by $A$. Then $A$ is a $\mathcal{G}$-invariant subset of $\ensuremath{\G}^{(0)}$, so that by \th\ref{prop:restriction-induces-bis-map} the restriction to $A$ induces a quotient map $\Gamma(\mathcal{G}) \to \Gamma(\mathcal{G}|_A)$. The groupoid $\mathcal{G}|_A$ has a dense orbit, so that by \cite[Lemma 3.4]{steinberg2019} the set of its units with a dense orbit is comeager. In particular, there is a compact open subset $U \subseteq A$ such that every point of $U$ has a dense $\mathcal{G}|_A$-orbit. Note that since $U \subseteq A$ is open, $\mathcal{G}|_U$ is {\'e}tale. Further, since $U$ is compact open, $\Gamma(\mathcal{G}|_U)$ is a corner of $\Gamma(\mathcal{G}|_A)$. We note that $A$ is infinite, since it is self-accumulating, so that also $U$ is infinite. Hence $\Gamma(\mathcal{G}|_U)$ is infinite. Further, $\Gamma(\mathcal{G}|_U)$ is monoidal, since $U$ is compact. It is $0$-simplifying by \th\ref{prop:0simplifying}, since $\mathcal{G}|_U$ is minimal. If $V \subseteq \ensuremath{\G}^{(0)}$ denotes any compact open subset such that $V \cap A = U$, then the quotient map $\Gamma(\mathcal{G}) \to \Gamma(\mathcal{G}|_A)$ maps $\Gamma(\mathcal{G}|_V)$ onto $\Gamma(\mathcal{G}|_U)$. So $\Gamma(\mathcal{G}|_U)$ is a quotient of a corner of $\Gamma(\mathcal{G})$.
If $\Gamma(\mathcal{G})$ has a corner with an infinite, monoidal and $0$-simplifying quotient, then it is a subquotient of $\Gamma(\mathcal{G})$. So let us assume that there is an infinite, monoidal and $0$-simplifying subquotient of $\Gamma(\mathcal{G})$. We will show that the orbit space of $\mathcal{G}$ is not $\ensuremath{{(\mathrm{T}_0)}}$. Write $\Gamma(\mathcal{G}) \supset C \ensuremath{\twoheadrightarrow} B$ for the given subquotient. Choosing an idempotent preimage $p \in \ensuremath{\mathrm{E}}(C)$ of the unit of $B$, we obtain a surjection $p C p \ensuremath{\twoheadrightarrow} B$. Let $V \subset \ensuremath{\G}^{(0)}$ be the compact open subset corresponding to $p$. Replacing $C$ by $pCp$, we thus find a unital inclusion $\Gamma(\mathcal{G}|_V) \supset C$ and a quotient map $\pi: C \ensuremath{\twoheadrightarrow} B$. Let $\ensuremath{\mathcal{H}}$ be the ample groupoid associated with $C$ by noncommutative Stone duality, that is $\Gamma(\ensuremath{\mathcal{H}}) \cong C$. Write $X = \ensuremath{\mathcal{H}}^{(0)}$. Considering the restriction of $\pi$ to idempotents, we find a closed subset $A \subseteq X$ such that the following diagram commutes.
\begin{gather*}
\xymatrix{
\ensuremath{\mathrm{CO}}(X) \ar[r]^{\pi} \ar[d]_{\text{res}_A} & \ensuremath{\mathrm{E}}(B) \\
\ensuremath{\mathrm{CO}}(A) \ar@{-->}[ru]_{\cong}
}
\end{gather*}
Let $x \in A$. We will show that $\ensuremath{\mathcal{H}} x \cap A$ is self-accumulating. By noncommutative Stone duality, there is an ample groupoid $\ensuremath{\mathcal{K}}$ such that $\Gamma(\ensuremath{\mathcal{K}}) \cong B$. From the fact that $B$ is infinite, monoidal and 0-simplifying, it follows that $\ensuremath{\mathcal{K}}$ is infinite, has a compact unit space and is minimal. In particular, the orbits of $\ensuremath{\mathcal{K}}$ are self-accumulating. So for every compact open neighbourhood $U \subseteq A$ of $x$, there is some bisection $t \in \Gamma(\ensuremath{\mathcal{K}})$ such that $x \in \text{supp} t$ and $x \notin \ensuremath{\mathop{\mathrm{im}}} t \leq U$. Let $s \in \pi^{-1}(t)$ denote a preimage of $t$. Then $\text{supp} s \cap A = \pi(\text{supp} s) = \text{supp} \pi(s) = \text{supp} t$ and similarly $\ensuremath{\mathop{\mathrm{im}}} s \cap A = \ensuremath{\mathop{\mathrm{im}}} t$. It follows that $\ensuremath{\mathcal{H}} x \cap A$ and thus also the $\ensuremath{\mathcal{H}}$-orbit of $x$ is self-accumulating. Thus the orbit space of $\ensuremath{\mathcal{H}}$ is not \ensuremath{{(\mathrm{T}_0)}}. Consider now the surjective map $\varphi: V \to X$ which is dual to the inclusion $\ensuremath{\mathrm{CO}}(X) \subset \ensuremath{\mathrm{CO}}(V)$. Given $x,y \in X$ in the same $\ensuremath{\mathcal{H}}$-orbit, there is $s \in C$ such that $sx = y$. Let $u \in V$ be some preimage of $x$ under $\varphi$. Considering $s$ as a bisection of $\mathcal{G}|_V$, we define $v = su$, which lies in the same $\mathcal{G}|_V$-orbit as $u$ and satisfies $\varphi(v) = y$. This shows that every $\ensuremath{\mathcal{H}}$-orbit is contained in the image of a $\mathcal{G}|_V$-orbit. In particular, there is some orbit of $\mathcal{G}|_V$ that is not finite and hence not locally closed. So \th\ref{prop:tzero-characterisations} says that the orbit space of $\mathcal{G}$ is not \ensuremath{{(\mathrm{T}_0)}}.
\end{proof}
\section{Group quotients and isotropy groups}
\label{sec:isotropy-groups}
In this section, we relate the isotropy groups of an ample groupoid with certain subquotients of the Boolean inverse semigroup of its compact open bisections. This result will allow us to address the condition on isotropy groups from \cite{clark07}.
\begin{proposition}
\th\label{prop:isotropy-groups-characterisation}
Let $\mathcal{G}$ be a second countable, ample Hausdorff groupoid whose orbit space is \ensuremath{{(\mathrm{T}_0)}}. Let $x \in \ensuremath{\G}^{(0)}$ be a unit, write $G =\mathcal{G}|_x$ for the isotropy group at $x$ and denote by $G_0$ the associated group with zero. Then $G_0$ is a quotient of a corner of $\Gamma(\mathcal{G})$. Vice versa, if $\mathcal{G}$ is any ample Hausdorff groupoid and $G$ is a group such that $G_0$ is a quotient of a corner of $\Gamma(\mathcal{G})$, then $G$ is a quotient of a point stabiliser of $\mathcal{G}$.
\end{proposition}
\begin{proof}
Since the orbit space of $\mathcal{G}$ is assumed to be {\ensuremath{{(\mathrm{T}_0)}}} and $\mathcal{G}$ is second countable, its orbits are locally closed by \th\ref{prop:tzero-characterisations}. Let $U \subseteq \ensuremath{\G}^{(0)}$ be a compact open neighbourhood of $x$, such that $\mathcal{G} x \cap U$ is closed in $U$ and hence compact. Since $\mathcal{G}$ is {\'e}tale, we know that $\mathcal{G} x$ is countable, so that $\mathcal{G} x \cap U$ is actually finite. We may thus shrink $U$ so that $\mathcal{G} x \cap U = \{x\}$ holds. Denote by $p = U \in \Gamma(\mathcal{G})$ the idempotent bisection associated with $U$. Then the corner of $\Gamma(\mathcal{G})$ can be identified as $p \Gamma(\mathcal{G}) p = \Gamma(\mathcal{G}|_U)$. Since $x$ is fixed by $\mathcal{G}|_U$, the restriction from $U$ to $\{x\}$ induces a quotient of Boolean inverse semigroups $\Gamma(\mathcal{G}|_U) \to \Gamma(\mathcal{G}|_x) = (\mathcal{G}|_x)_0$ by \th\ref{prop:restriction-induces-bis-map}.
Let us now assume that $\mathcal{G}$ is any ample Hausdorff groupoid, let $G$ be a group and $p \in \Gamma(\mathcal{G})$ an idempotent, for which there is a quotient map $\pi: p\Gamma(\mathcal{G})p \ensuremath{\twoheadrightarrow} G_0$. Denote by $U \subseteq \ensuremath{\G}^{(0)}$ the compact open subset corresponding to $p$. Then there is a natural isomorphism $p \Gamma(\mathcal{G}) p \cong \Gamma(\mathcal{G}|_U)$. Since the algebra of idempotents of $G_0$ is trivial, $\pi|_{\ensuremath{\mathrm{E}}(\Gamma(\mathcal{G}|_U))}$ is a character. By Stone duality, there is $x \in U$ such that $\pi|_{\ensuremath{\mathrm{E}}(\Gamma(\mathcal{G}|_U))} = \ensuremath{\mathrm{ev}}_x$. In particular, $\{x\} \subseteq U$ is a $\mathcal{G}|_U$-invariant subset by Lemma \ref{lem:restriction-implies-invariant}. So by the universal property of the restriction map described in \th\ref{prop:restriction-induces-bis-map}, the homomorphism $\pi$ factors through $\Gamma(\mathcal{G}|_U) \to \Gamma(\mathcal{G}|_x) = (\mathcal{G}|_x)_0$. So $G_0$ is a quotient of $(\mathcal{G}|_x)_0$, which implies that $G$ is a quotient of $\mathcal{G}|_x$.
\end{proof}
\begin{example}
\th\label{ex:subquotient-groups-not-sufficient}
It might be tempting to admit arbitrary subquotients of $\Gamma(\mathcal{G})$ in the statement of Proposition \ref{prop:isotropy-groups-characterisation}; however, this does not suffice even under the condition that the orbit space of $\mathcal{G}$ is \ensuremath{{(\mathrm{T}_1)}}. Indeed, the topological full group of $\mathcal{G}$ is always a subgroup of $\Gamma(\mathcal{G})$, and it can be large even if $\mathcal{G}$ is effective. For example, the topological full group associated with $B_\ensuremath{{(\mathrm{T}_1)}}$ is $\ensuremath{\mathrm{Sym}}(\mathbb{N})$.
\end{example}
We next formulate an appropriate version of Proposition \ref{prop:isotropy-groups-characterisation} for inverse semigroups. Let us start with a short lemma relating quotients of an inverse semigroup and its booleanization.
\begin{lemma}
\th\label{lem:quotient-semigroup-Boolean-envelop}
Let $S$ be an inverse semigroup and $B(S) \ensuremath{\twoheadrightarrow} G_0$ a quotient of its booleanization. Then the induced map $S_0 \to G_0$ is surjective.
\end{lemma}
\begin{proof}
Denote the quotient map $B(S) \ensuremath{\twoheadrightarrow} G_0$ by $\pi$ and let $g \in G$. Using the description of $B(S)$ presented in Section \ref{sec:inverse-semigroups}, there is some preimage $\sum_i t_i e_i \in B(S)$ of $g$. Since $G_0$ has only two idempotents, $\pi|_{E(B(S))}$ is a character. So there is a unique $i_0$ satisfying $\pi(e_{i_0}) = 1$. This implies $\pi(\sum_i t_i e_i) = \pi(t_{i_0})$, showing that $g \in \pi(S_0)$.
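Spelled out, using that $\pi$ preserves joins of orthogonal elements and that $\pi(e_i) = 0$ for $i \neq i_0$, the last step reads
\begin{gather*}
\pi\Bigl(\sum_i t_i e_i\Bigr) = \bigvee_i \pi(t_i) \pi(e_i) = \pi(t_{i_0}) \pi(e_{i_0}) = \pi(t_{i_0}).
\end{gather*}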
\end{proof}
\begin{proposition}
\th\label{prop:isotropy-groups-characterisation-semigroups}
Let $S$ be a countable inverse semigroup such that the orbit space of $\mathcal{G} = \mathcal{G}(S)$ is \ensuremath{{(\mathrm{T}_0)}}. Let $x \in \ensuremath{\G}^{(0)}$ and write $G = \mathcal{G}\vert_x$. Then the group with zero $G_0$ is a quotient of a corner of $S_0$.
\end{proposition}
\begin{proof}
Fix $x \in \ensuremath{\G}^{(0)}$. Since $\ensuremath{\G}^{(0)} \cong \widehat{E(S)}$, there is $q \in E(S)$ such that $x(q) = 1$. Denote by $U \subseteq \ensuremath{\G}^{(0)}$ the compact open subset corresponding to $q$. As in the proof of Proposition \ref{prop:isotropy-groups-characterisation}, the fact that orbits of $\mathcal{G}$ are locally closed implies that $\mathcal{G} x \cap U$ is finite. Fix an enumeration $x = x_0, x_1, \dotsc, x_n$ of $\mathcal{G} x \cap U$. For $i \in \{1, \dotsc, n\}$ there is $q_i \in E(S)$ such that $x_0(q_i) = 1$ and $x_i(q_i) = 0$. Put $p = q \cdot q_1 \dotsm q_n$ and let $V \subseteq \ensuremath{\G}^{(0)}$ be the compact open subset corresponding to $p$. Then the identification of corners of Boolean inverse semigroups says that
\begin{gather*}
\mathcal{G}(pSp) = \mathcal{G}(B(pSp)) \cong \mathcal{G}(pB(S)p) \cong \mathcal{G}|_V.
\end{gather*}
Since $x \in V$ is $\mathcal{G}|_V$-fixed, there is a quotient map $\Gamma(\mathcal{G}|_V) \to \Gamma(\mathcal{G}|_x) \cong (\mathcal{G}|_x)_0$. The identification $\Gamma(\mathcal{G}|_V) \cong B(pSp)$ shows that there is a surjection $B(pSp) \to (\mathcal{G}|_x)_0$. By \th\ref{lem:quotient-semigroup-Boolean-envelop} its restriction $p S_0 p \cong (pSp)_0 \to (\mathcal{G}|_x)_0$ remains surjective.
\end{proof}
\section{Proof of the main results}
\label{sec:main-proofs}
We now prove our main theorems. Thanks to the preparation made in the previous sections, all proofs are rather similar and we spell out details only for \th\ref{introthm:ccr-groupoid}.
\begin{proof}[Proof of \th\ref{introthm:ccr-groupoid}]
Since $\mathcal{G}$ is an {\'e}tale, second countable Hausdorff groupoid, we may appeal to the results of Clark and Thoma described in \th\ref{thm:clark}. It follows that $\mathcal{G}$ is CCR if and only if all its isotropy groups are virtually abelian and its orbit space is \ensuremath{{(\mathrm{T}_1)}}. We can thus combine \th\ref{prop:isotropy-groups-characterisation} and \th\ref{prop:tone} to complete our proof.
\end{proof}
\begin{proof}[Proof of \th\ref{introthm:type-I-groupoid}]
Upon replacing the reference to \th\ref{prop:tone} by \th\ref{prop:tzero}, the same argument as used in the proof of \th\ref{introthm:ccr-groupoid} can be applied.
\end{proof}
\begin{proof}[Proof of \th\ref{introthm:ccr-semigroup}]
Replacing the reference to \th\ref{prop:isotropy-groups-characterisation} by \th\ref{prop:isotropy-groups-characterisation-semigroups}, and using the fact that $\Gamma(\mathcal{G}(S)) \cong B(S)$, the proof of \th\ref{introthm:ccr-groupoid} applies.
\end{proof}
\begin{proof}[Proof of \th\ref{introthm:type-I-semigroup}]
Making the same adaptations as in the passage from the proof of \th\ref{introthm:ccr-groupoid} to \th\ref{introthm:type-I-groupoid}, the proof of \th\ref{introthm:ccr-semigroup} applies.
\end{proof}
\begin{remark}
\label{rem:no-quotient-semigroup}
In view of our construction of $B(S)$ described in Section \ref{sec:inverse-semigroups}, the booleanization of an inverse semigroup can be concretely calculated, so that the conditions on $B(S)$ in Theorems \ref{introthm:ccr-semigroup} and \ref{introthm:type-I-semigroup} can be checked. Specifically for Theorem \ref{introthm:type-I-semigroup} we remark that the groupoid $\mathcal{G}(S)$ associated with an inverse semigroup always has a fixed point (whose isotropy group is the maximal group quotient of $S$). It corresponds to the trivial character on $E(S)$ that maps every idempotent to $1$. Since $0$-simplifying Boolean inverse semigroups correspond to minimal groupoids, it is not possible to directly translate the condition on $B(S)$ in Theorem \ref{introthm:type-I-semigroup} to an algebraic statement about $S$ itself.
\end{remark}
\printbibliography
\vspace{2em}
\begin{minipage}[t]{0.45\linewidth}
\small
Gabriel Favre \\
Department of Mathematics \\
Stockholm University \\
106 91 Stockholm \\
Sweden \\[1em]
[email protected]
\end{minipage}
\begin{minipage}[t]{0.45\linewidth}
\small
Sven Raum \\
Department of Mathematics \\
Stockholm University \\
106 91 Stockholm \\
Sweden \\[1em]
and \\[1em]
Institute of Mathematics of the \\ Polish Academy of Sciences \\
ul.\ \'Sniadeckich 8 \\
00-656 Warszawa \\
Poland \\[1em]
[email protected]
\end{minipage}
\end{document}
\section{Introduction}
Decarbonising global energy systems is at the core of climate change mitigation. The expansion of renewable energies is one important measure to attain this goal \cite{Jaegemann2013,Dagoumas2019}.
Globally, wind power and solar PV have been the renewable energy sources with the highest growth rates in recent years. While the installed capacity on a global level is similar for PV (579 GW) and wind power (594 GW), wind power generation (1195 TWh) is substantially higher than electricity generation from PV (550 TWh) \cite{irena2020}.
This trend of a higher share of wind power generation is likely to continue for some world regions, e.g. Europe \cite{ECA2019}. Scenarios have explored the importance of wind power in future energy systems, with shares of around 50\% of global power demand in 2030 \cite{Jacobson2011}, 74\% in Europe by 2050 \cite{Zappa2018}, or even 80\% to 90\% of the European VRES mix \cite{Eriksen2017}.\\
For an adequate assessment of the impacts of high shares of renewable electricity, and in particular of wind power generation, on power systems, long time series of spatially and temporally highly resolved renewable power generation are necessary to represent short- and long-term changes in resource availability \cite{Collins2018}. At least one climatological normal of 30 years should be used to understand variability \cite{WMO2017}.\\
Reanalysis climate data sets are frequently used to generate such time series.
Two of the most prominent global reanalyses are NASA's MERRA and MERRA-2 and the more recent ERA5 provided by the European Centre for Medium-Range Weather Forecasts.
The MERRA reanalyses were used for example for estimating the global technical onshore and offshore wind power generation potentials \cite{Bosch2017,Bosch2018}, or the integration of renewables into the European power system \cite{Huber2014}. Also correlations between wind power generation in European countries \cite{Olauson_2016}, extreme events in Britain \cite{Cannon_2015}, or the impacts of uncertainty factors \cite{Monforti_2017} and ageing \cite{Soares2020} in wind power simulation were studied.
With ERA5, the global \cite{Soares2020} and Lebanese \cite{IbarraBerastegi2019} offshore wind power potentials, as well as electricity demand and renewable generation in Europe \cite{Bloomfield2020a} and West Africa \cite{Sterl_2018}, were estimated.
While global reanalysis data sets offer the advantage of conducting multi-country or global analyses without the need for country or region-specific climate data sources, they also come with their drawbacks.
Although the temporal resolution is usually high at one hour or even less, the spatial resolution is rather coarse at a grid size of several kilometres (e.g. MERRA-2: about 50 km). Therefore, those data sets, in contrast to regional reanalyses such as COSMO-REA \cite{CosmoREA2}, are limited in representing local climatic conditions in sufficient detail, as required for the simulation of wind power generation \cite{Staffell_2016}.
It is known that reanalysis data are subject to bias \cite{Cannon_2015,Pfenninger_2016,Olauson_2016}. To increase simulation quality, efforts should be made to correct the bias \cite{Monforti_2017,Henckes2020}, as the bias of reanalysis data may result in differences of up to 20\% in model-derived installed capacities \cite{Henckes2020}.
In many cases, however, reanalysis data is used directly \cite{Ren2019, Monforti_2017, Cannon_2015, Cradden2017, Kubik2013, Camargo2019, Camargo2019a}. If it is corrected, observed wind power generation data is mostly used \cite{Olauson_2018, Staffell_2016, Olauson_2015, Olauson_2016, Camargo2019b}. This approach is not globally applicable, as observations of wind power generation are unavailable for many world regions. Additionally, data quality and the level of temporal and spatial aggregation varies between countries.\\
Therefore, other forms of bias correction are required when conducting global analyses \cite{Staffell_2016}. Here, we aim at reducing the bias in reanalysis data by applying the Global Wind Atlas \cite{GWA3}. Recently, the Global Wind Atlas Version 3.0 has been released, and we put a particular focus on assessing the quality of this latest version compared to the previous version 2.1. GWA 3.0 has, at the moment, only been assessed for Pakistan, Papua New Guinea, Vietnam, and Zambia for wind speeds \cite{GWA3val}, however not for the purpose of wind power simulation.\\
Of course, the GWA may not necessarily decrease bias. It is therefore of great interest to validate simulated wind power simulation data against observed generation - for both, raw reanalysis data and reanalysis data corrected with the GWA. Other work has mainly focused on validating raw wind power simulation data:
\citeauthor{Staffell_2016} validate wind power simulations derived from MERRA and MERRA-2 against observed generation data for 23 European countries and find significant bias.
\citeauthor{Olauson_2015} \cite{Olauson_2015} used the MERRA data set to model Swedish wind power generation, and production data from the Swedish TSO to validate and bias-correct their modelled data. In a comparison of MERRA-2 and ERA5 for the use of wind power simulation, time series for four European countries and one region in the USA were validated\cite{Olauson_2018}.
\citeauthor{Jourdier2020} compared MERRA-2 and ERA5 \cite{Jourdier2020} to simulations of French wind power generation based on two high-resolution models (COSMO-REA6 and AROME) and a mesoscale model (NEWA) and validated all datasets against observed wind speed and power generation data.\\
Since most of the previous analyses only assessed one particular reanalysis data set, we focus on the comparison of ERA5 and MERRA-2, on the quality of results, and on the additional use of the GWA for bias-correction.
As Europe has already been studied in several other analyses \cite{Staffell_2016,Olauson_2015, Jourdier2020,Monforti_2017,GonzalezAparicio2017} and to cover different global climatic conditions, we study the following non-European countries: Brazil, USA, South Africa and New Zealand. These countries are spatially very diverse, host significant wind power capacities, and provide timeseries of wind power generation that can be used for validation.
Furthermore, we contribute to a better understanding of the role of spatial and temporal resolution by assessing simulation quality on different levels of spatial and temporal aggregation. This is highly relevant information for users of power and energy system models \cite{Bloomfield_2020}.
In particular, we answer the following research questions: (1) Does the newer reanalysis ERA5 with higher spatial resolution perform better than the older MERRA-2 when validated against historical wind power generation data? (2) Does bias-correction with the spatially highly resolved GWA increase simulation quality? (3) Does the GWA 3.0 perform better than the previous GWA 2.1? (4) Does aggregating single wind parks to larger systems decrease the error due to spatial complementarity and error compensation effects, as indicated by Goi\'{c} et al. \cite{Goic2010} and Santos-Alamillos et al. \cite{SantosAlamillos2015}? (5) Does temporal aggregation reduce errors?
We assess those questions by simulating wind power generation in the four countries for all wind parks, using both ERA5 and MERRA-2 with and without bias-correction with the GWA. We validate simulated against observed generation on different spatial levels and compare quality between all simulations.
\section{Data}
We use several data sets for simulation, bias correction and validation: wind speeds are taken from the MERRA-2 and ERA5 reanalysis data sets. The GWA is used for mean bias correction. Information on wind park locations and the used turbine technology is collected from different country specific data sources (see section \ref{subsection:windpark_info}). Similarly, country specific wind power generation data is gathered to perform the final validation. \\
\subsection{Reanalysis data}
From MERRA-2 \cite{MERRA2}, we use the time-averaged, single-level, assimilation, single-level diagnostics (tavg1\_2d\_slv\_Nx) dataset, while we use hourly data on single levels from 1950 to present from ERA5 \cite{ERA5}. MERRA-2 reanalysis data are provided by the National Aeronautics and Space Administration via the Goddard Earth Sciences Data and Information Services Center and succeed the earlier MERRA data set, while ERA5 is the follow-up product of ERA-Interim provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). MERRA-2 is available for circa 40 years (since 1980), while ERA5 has recently been extended to reach back to 1950. While both exhibit a temporal resolution of one hour, the spatial resolution is higher in the more recent ERA5 data set ($\sim$31 km) than in MERRA-2 ($\sim$50 km).\\
The climate input data is downloaded for time periods corresponding to the temporal availability of validation data. Spatial boundaries are defined by the size of the respective country.
The downloaded parameters are eastward (u) and northward (v) wind speeds at two different heights for each reanalysis data set (ERA5: 10 m and 100 m above surface, MERRA-2: 10 m above displacement height and 50 m above surface), as well as the displacement height for MERRA-2.
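For reproducibility, the retrieval of the ERA5 variables used here can be scripted; the following is a minimal sketch using the Copernicus CDS API client, where the year, the bounding box and the target file name are placeholders and the request keys follow the CDS catalogue:
\begin{verbatim}
# Sketch of an ERA5 retrieval via the Copernicus CDS API (requires an
# account and a configured ~/.cdsapirc). Year, area and file name are
# placeholders.
import cdsapi

c = cdsapi.Client()
c.retrieve(
    "reanalysis-era5-single-levels",
    {
        "product_type": "reanalysis",
        "variable": [
            "10m_u_component_of_wind", "10m_v_component_of_wind",
            "100m_u_component_of_wind", "100m_v_component_of_wind",
        ],
        "year": "2018",
        "month": [f"{m:02d}" for m in range(1, 13)],
        "day": [f"{d:02d}" for d in range(1, 32)],
        "time": [f"{h:02d}:00" for h in range(24)],
        "area": [5, -74, -33, -34],  # N, W, S, E: rough box around Brazil
        "format": "netcdf",
    },
    "era5_wind_brazil_2018.nc",
)
\end{verbatim}
The MERRA-2 fields are obtained analogously through the GES DISC services.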
\subsection{Global Wind Atlas}
The Global Wind Atlas \cite{GWA3} provided by the Technical University of Denmark (DTU) is used to spatially downscale the reanalysis data to a resolution of 250 m, in order to take into account local variations of mean wind speeds. The current version, GWA 3.0, was derived from the ERA5 reanalysis and provides mean wind speeds and mean power densities at five different heights (10, 50, 100, 150 and 200 m), as well as mean capacity factors for three different turbine classes according to IEC\footnote{
International Electrotechnical Commission} for the period 2008-2017. Furthermore, there are layers describing the terrain surface and a validation layer showing in which countries and for which wind measurement stations the GWA has been validated.\\
The previous version, GWA 2.1, which is also used in this analysis, provides wind speeds at only three heights (50, 100 and 200 m) at the same spatial resolution and was derived from ERA-Interim, the preceding data set of ERA5 \cite{Badger2019} for the period 1987-2016.\\
For the purpose of mean bias correction, the wind speed layers at 50 m and 100 m height are downloaded for each country. They correspond to the upper layer of reanalysis wind speeds in MERRA-2 and ERA5, respectively. Since the GWA2 is no longer available at the official GWA homepage, data were extracted from the stored global data set \cite{GWA2} around the country boundaries.
\subsection{Wind park information}
\label{subsection:windpark_info}
For the simulation of wind power generation, we use turbine specific information on location, installed capacity, hub height and rotor diameter. The spatial distribution of wind power plants is shown in Figure \ref{fig:wp_map}. In countries where turbine specific location information is not available, we use wind park specific data. This information is retrieved from freely available country level data sets (see Table \ref{tab:TURB_table}).\\
For Brazil, two data sets, the Geographic Information System of the Electrical Sector (SIGEL) \cite{SIGEL} and the Generation Database (BIG) \cite{BIG}, from the National Electrical Energy Agency (ANEEL) \cite{ANEEL} are combined using the wind park codes.
The use of both datasets is necessary, as SIGEL data contains only the location, installed capacity, hub height and rotor diameter, while the state and the commissioning dates are added from the BIG database.
Two wind turbines in the BIG dataset have a hub height and rotor diameter of 0 meters. They are replaced by values from turbines with similar capacity.\\
The information on ten wind parks with available production data is collected from the New Zealand Wind Energy Association \cite{NZWEA}. Similarly, the information on 39 wind parks in South Africa is gathered from the Renewable Energy Data and Information Service (REDIS) \cite{ZAFwp}, while rotor diameters, hub heights and capacities are complemented with information from The Wind Power \cite{TWP}. Since several data points were obviously erroneous or missing, the database was completed with an online search (see Table \ref{tab:zaf_turb_complete}). The resulting South African wind park data set is available online for further use \cite{GDWA}.\\
The information on the over 60 000 wind turbines in the USA is obtained from the US Wind Turbine Data Base (Version 3.2) \cite{USWTDB}, which comprises most of the necessary data. Missing information\footnote{Lacking data of commissioning date: 1540 turbines, turbine capacity: 5530 turbines, hub height: 7790 turbines, and rotor diameter: 6728 turbines} is replaced by the yearly mean (installed capacities, hub heights) or the overall mean (commissioning year), and rotor diameters are completed by fitting a linear model to the hub heights. In some cases, the specific power calculated from rotor diameter and capacity is too low (below 100 W/m\textsuperscript{2}), resulting in unrealistic power curves; it is thus replaced by the mean specific power of turbines with the same capacity\footnote{This applies to 49 wind turbines, of which 48 have incomplete turbine specifications}.
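To illustrate the specific power check, the following sketch flags turbines below the threshold of 100 W/m\textsuperscript{2} and replaces their value by the mean specific power of turbines with the same rated capacity; the column names are ours and do not stem from the USWTDB:
\begin{verbatim}
# Sketch: flag implausible specific power (< 100 W/m^2) and replace it
# by the mean specific power of turbines with the same rated capacity.
# Column names (capacity_kw, rotor_diameter_m) are illustrative.
import numpy as np
import pandas as pd

def clean_specific_power(turbines: pd.DataFrame) -> pd.DataFrame:
    area = np.pi * (turbines["rotor_diameter_m"] / 2) ** 2  # m^2
    turbines = turbines.assign(
        specific_power=1000 * turbines["capacity_kw"] / area  # W/m^2
    )
    low = turbines["specific_power"] < 100
    mean_sp = (
        turbines.loc[~low].groupby("capacity_kw")["specific_power"].mean()
    )
    turbines.loc[low, "specific_power"] = (
        turbines.loc[low, "capacity_kw"].map(mean_sp)
    )
    return turbines
\end{verbatim}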
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.7]{map_windparks}
\caption{Locations of wind parks in Brazil, New Zealand, USA and South Africa}
\label{fig:wp_map}
\end{figure}
\begin{table}[ht]
\centering
\caption{Wind turbine and wind park data sets applied for simulation}
\makebox[\textwidth][c]{}
\begin{tabular}{p{1.3cm}p{1.3cm}p{1.3cm}p{1.3cm}p{1.3cm}p{1.3cm}p{1.3cm}p{1.3cm}p{1.3cm}p{1.3cm}}
\hline
Country & Source & Availability
& turbines & parks & total capacity [MW] & avg. park capacity [MW]
& avg. turbine capacity [kW] & avg. rotor diameter [m] & avg. hub height [m] \\
\hline
\hline
Brazil & ANEEL (BIG, SIGEL) \cite{ANEEL,BIG,SIGEL} & turbines
& 7438 & 603 & 15190 & 25
& 2031 & 98 & 87 \\
New Zealand & NZWEA \cite{NZWEA} & wind parks
& 405 & 10 & 564 & 56
& 1719 & 61 & 53 \\
South Africa & REDIS \cite{ZAFwp} and various & wind parks
& 1466 & 39 & 3545 & 90
& 1719 & 84 & 95 \\
USA & USWTDB \cite{USWTDB} & turbines
& 63002 & 1565 & 108301 & 69
& 2525 & 105 & 75 \\
\hline
\end{tabular}
\label{tab:TURB_table}
\end{table}
\subsection{Wind power generation data for validation}
The validation of the simulated wind power generation time series is based on observed generation at different spatial and temporal resolutions, gathered from country specific data sources. While there is data available on all time scales (hourly, daily and monthly) for each of the four studied countries or regions in those countries, historical wind power generation records on the level of wind parks are available only for Brazil and New Zealand. In South Africa, the country's observed power generation is only available per Cape (Eastern, Northern and Southern Cape), while for the USA the smallest level of spatial disaggregation available is the state level.\\
Temporal availability of the generation time series varies depending on the data source and commissioning dates of wind parks. The highest resolution of data is given in Brazil, where the National Electrical System Operator (ONS) \cite{ONS} provides data on three temporal (hourly, daily, monthly), as well as four spatial levels (wind park, state, subsystem, country). Of the 174 wind parks in Brazil for which hourly data are available in the ONS dataset, 70 can be matched by their name to simulated wind parks based on ANEEL data, and 53 show sufficient data quality (also see Table \ref{tab:data_cleaning_bra}). They are consequently used for the further analysis. Due to data quality issues and the requirement of consistency, only hourly data on the wind park level were used and aggregated spatially and temporally (also see section \ref{subsection:data_cleaning}).
In New Zealand, wind park specific generation data is also available, however only for ten wind parks. The information on historical wind power and other generation is provided by the Electricity Market Information (EMI) \cite{EMI} half hourly and aggregated to hourly production values for validation against hourly simulated values.\\
In South Africa, generation data is provided by REDIS \cite{REDIS} as capacity factors. For observed power generation in the USA, several data sources are used. The U.S. Energy Information Administration (EIA) \cite{EIA} provides monthly resolved generation data for the USA, its 51 states and 10 sub-regions\footnote{New England, Mid-Atlantic, East North Central, West North Central, South Atlantic, East South Central, West South Central, Mountain, Pacific Continental and Pacific Non-Continental}. For New England\footnote {Connecticut, New Hampshire, Maine, Massachusetts, Rhode Island and Vermont}, monthly data are retrieved from ISO New England \cite{isoNE}\footnote {Data from EIA were discarded due to poor quality (nearly constant/fluctuating generation instead of seasonal pattern and some very low production months, see Figure \ref{fig:regionsm}) and instead ISO New England data are used}. The Electric Reliability Council of Texas (ERCOT) \cite{ERCOT} provides hourly generation data for Texas. The 5-minute wind power generation data in the Bonneville Power Administration (BPA) \cite{BPA}, which is responsible for 49 wind parks in the regions of Oregon and Washington, is aggregated to hourly output.\\
Table \ref{tab:valdata_tab} summarises the data sources used for validation.\\
\begin{table}[ht]
\centering
\caption{Data sets applied for validation}
\begin{tabular}{llll}
\hline
Country & Regions & Temporal resolution & Source \\
\hline
\hline
Brazil & 42 wind parks, 4 states, country & hourly, daily, monthly & ONS \cite{ONS}\\ \hline
New Zealand & 10 wind parks, country & hourly, daily, monthly & EMI \cite{EMI}\\ \hline
South Africa & 3 capes, country & hourly, daily, monthly & REDIS \cite{REDIS}\\ \hline
USA & 25 states, 8 regions, country & monthly & EIA \cite{EIA} \\
& Texas & hourly, daily, monthly & ERCOT \cite{ERCOT} \\
& New England & monthly & ISO New England \cite{isoNE} \\
& BPA & hourly, daily, monthly & BPA \cite{BPA} \\
\hline
\end{tabular}
\label{tab:valdata_tab}
\end{table}
\subsection{Data cleaning}
\label{subsection:data_cleaning}
In a preliminary screening, parts of the available observed wind power generation time series showed long sequences of missing data and unlikely generation patterns, such as long periods of constant output. We therefore applied a thorough cleaning procedure.
\subsubsection{Brazil}
First, wind park names in the ANEEL and the ONS data set have to be matched in order to validate the simulation with observed generation from the corresponding wind park. Due to the large number of available wind park data, this step is performed using fuzzy matching, ignoring special characters and case sensitivity. Only wind parks with a matching score of 100 are used for validation. From a total of 174 parks, only 72 satisfied this criterion. \\
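As an illustration, the name matching can be sketched as follows, using the \texttt{thefuzz} package; the normalisation drops case and special characters as described above, and only perfect scores are kept:
\begin{verbatim}
# Sketch: match wind park names between the ONS and ANEEL registries,
# keeping only perfect matching scores (100).
import re
from thefuzz import fuzz

def normalise(name: str) -> str:
    return re.sub(r"[^a-z0-9]", "", name.lower())

def match_parks(ons_names, aneel_names, min_score=100):
    matches = {}
    for ons_name in ons_names:
        best_score, best_match = max(
            (fuzz.ratio(normalise(ons_name), normalise(a)), a)
            for a in aneel_names
        )
        if best_score >= min_score:
            matches[ons_name] = best_match
    return matches
\end{verbatim}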
For these wind parks, leading and trailing series of zero production are removed from hourly generation time series at wind park level. For constant parts of time series, two different approaches are taken. If those parts are 0, they indicate either (a) a long period of very low or very high wind speeds (i.e. either below cut-in or above cut-out wind speed), (b) a downtime of the turbine due to e.g. maintenance, or (c) an error in the observed data. Filtering out all instances of 0 wind power production would remove all three events; however, this would be inconsistent with other countries, where this approach cannot be taken (as wind power generation on the level of wind parks is not available). We therefore opted for removing constant periods of 0 generation longer than the longest period of 0 generation in the simulated timeseries, which amounts to 180 hours.\\
For other constant parts of the timeseries, which are above 0, we removed them if the period was longer than 24 hours. Time series which contain less than 2 years of data are excluded from the analysis to guarantee capturing seasonal effects. We stress that the two years of data do not necessarily occur consecutively.
Furthermore, the data are assessed with respect to their capacity factors. We removed all instances in the timeseries where capacity factors above 1 were observed. Table \ref{tab:data_cleaning_bra} gives an overview of how many locations were affected by the performed data cleaning in Brazil.\\
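The run-removal rules above can be sketched as follows, assuming an hourly indexed pandas series of capacity factors (the thresholds of 24 h and 180 h are the ones motivated above):
\begin{verbatim}
# Sketch: remove runs of constant generation longer than a threshold
# (24 h for non-zero values, 180 h for zeros).
import pandas as pd

def drop_constant_runs(cf: pd.Series,
                       max_const: int = 24,
                       max_zero: int = 180) -> pd.Series:
    run_id = (cf != cf.shift()).cumsum()   # label runs of equal values
    run_len = cf.groupby(run_id).transform("size")
    too_long = ((cf == 0) & (run_len > max_zero)) | \
               ((cf != 0) & (run_len > max_const))
    return cf[~too_long]
\end{verbatim}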
\begin{table}[ht]
\centering
\caption{Data cleaning steps and remaining wind parks for validation in Brazil}
\begin{tabular}{lrr}
\hline
& Applies to & Remaining \\&&wind parks \\
\hline
\hline
- total number of observed wind park time series
& & 174 \\ \hline
1. matching of ONS and ANEEL
& & 72 \\
- keep only 100 matching score
& & 70 \\ \hline
2. data cleaning & &\\
- remove constant parts of time series except 0 ($>$24h)
& 50 & 70 \\
- remove constant parts of 0 generation ($>$180h)
& 28 & 70\\
- remove capacity factors $>$ 1
& 59 & 70\\
- remove short time series ($<$2y)
& 17 & 53 \\
\hline
\end{tabular}
\label{tab:data_cleaning_bra}
\end{table}
In order to ensure consistent data quality throughout the evaluation, instead of applying the temporally and spatially aggregated data sets provided by ONS, we aggregate the hourly wind power generation time series on wind park level spatially and temporally ourselves. This is necessary since the daily data on the ONS site are simply aggregated hourly data. Such aggregation, however, ignores missing or erroneous data, resulting in lower power generation in periods where generation data are missing for at least one of the wind parks in a particular region. We therefore remove time steps from the simulated data whenever data are missing for some wind parks in the validation data, and aggregate after this correction.
Furthermore, hourly and daily data are not consistent with monthly data. As the applied aggregation method is not made explicit, the reason for the inconsistency remains unclear. To overcome the inconsistency, aggregation of validation data is performed starting at the highest spatio-temporal resolution of the available data, i.e. at the hourly wind park data. This approach allows to remove missing data from all spatial and temporal scales, improving the fit of observed and simulated data.\\
\subsubsection{USA}
In the USA, different measures were applied depending on the data source. In the EIA data set, leading zero production is removed. Since before 2010 the fit of simulation to validation data is low, the installed capacity in the USA from the USWTDB is compared to the yearly cumulative installed wind power capacity as provided by IRENA \cite{IRENA}. This comparison shows large inconsistencies (see Figure \ref{fig:uswtdb_irena}). Therefore, wind power generation is analysed for the past ten years only, starting in 2010. This approach notably improves results (see Figure \ref{fig:2000vs2010}).
Despite the cleaning measures, several regions still result in unusually low correlations and high errors. A visual inspection of the monthly time series shows that the observed generation of several states and regions is nearly constant or repetitively fluctuating between different generation levels over long parts of the time series. This contrasts with our expectation of observing seasonal patterns (see section \ref{subsection:quality_USA}). For this reason, seven states and three regions affected by this issue are discarded for further use, while in nine states, only part of the time series is used for validation. These are indicated in Figure \ref{fig:statesm}.
In the BPA data set, some observations are missing. As the data is available at a 5 minutes resolution, the missing values are interpolated. The maximum consecutive missing observations is one hour.\\
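The gap filling and aggregation of the BPA data can be sketched as follows, assuming a regular 5-minute pandas series with a datetime index:
\begin{verbatim}
# Sketch: fill gaps of at most one hour (12 five-minute steps) in the
# BPA series and aggregate to hourly mean output.
import pandas as pd

def bpa_to_hourly(gen_5min: pd.Series) -> pd.Series:
    filled = gen_5min.interpolate(limit=12)  # 12 x 5 min = 1 hour
    return filled.resample("60min").mean()
\end{verbatim}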
\subsubsection{New Zealand and South Africa}
In New Zealand, constant output over more than 24 hours is removed from the time series. No further data cleaning operations are applied. In South Africa, a limited number of capacity factors larger than 1 is observed. These time steps are removed.\\
\section{Methods}
\subsection{Wind power simulation}
Wind power is simulated based on reanalysis data and mean wind speeds in the GWA. In a preparatory step, effective wind speeds are calculated from eastward (u) and northward (v) wind speed components in reanalysis data according to the Pythagorean theorem for the two heights available.
From the effective wind speeds at the two available heights, the Hellmann exponent $\alpha$, describing the influence of the surface structure on the vertical wind profile, is calculated. Using the location information of wind turbines or wind parks, reanalysis and GWA wind speeds are interpolated to the nearest neighbour and extrapolated to hub height using Hellmann's power law.\\
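This wind speed processing can be summarised in a short sketch; the numerical values in the example are illustrative only:
\begin{verbatim}
# Sketch: effective wind speed from u/v components, Hellmann exponent
# from the two reanalysis heights and extrapolation to hub height via
# the power law v(h) = v(h2) * (h / h2)**alpha.
import numpy as np

def effective_wind_speed(u, v):
    return np.sqrt(u ** 2 + v ** 2)

def hellmann_exponent(w_low, w_high, h_low, h_high):
    return np.log(w_high / w_low) / np.log(h_high / h_low)

def extrapolate_to_hub(w_high, h_high, h_hub, alpha):
    return w_high * (h_hub / h_high) ** alpha

# Example for ERA5 (10 m and 100 m layers, 120 m hub height):
w10 = effective_wind_speed(3.1, -2.4)
w100 = effective_wind_speed(4.8, -3.5)
alpha = hellmann_exponent(w10, w100, 10.0, 100.0)
w_hub = extrapolate_to_hub(w100, 100.0, 120.0, alpha)
\end{verbatim}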
When bias correction is applied, mean wind speeds are retrieved from the GWA at the location closest to the wind park or turbine and divided by the average of the reanalysis wind speed time series at the specific locations at the same height, i.e. 50 m for MERRA-2 and 100 m for ERA5, as these are the heights closer to hub height.
This quotient is used as a bias correction factor to shift reanalysis wind speeds interpolated to hub height up or down according to the GWA.\\
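The correction itself then reduces to a single scaling factor per location; a sketch:
\begin{verbatim}
# Sketch: mean bias correction with the GWA. The GWA mean wind speed at
# the nearest 250 m pixel is divided by the long-run mean of the
# reanalysis series at the matching height (100 m for ERA5, 50 m for
# MERRA-2), and the hub-height series is scaled by this factor.
import numpy as np

def bias_correct(wind_hub, wind_ref, gwa_mean):
    factor = gwa_mean / np.mean(wind_ref)
    return np.asarray(wind_hub) * factor
\end{verbatim}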
In order to convert wind speeds to wind power, the power curve model introduced by Ryberg et al. \cite{Ryberg2019} is applied and scaled to the installed capacity of the turbines.
This model estimates power curves empirically from the specific power, i.e. the installed capacity per rotor swept area, of wind turbines. It therefore does take into account differences in the power output according to specific power, but additional technology or turbine specific effects are not considered. We follow this approach, as otherwise we would have to manually research power curves for 283 different turbine models, and as additionally turbine models are not known for 865 cases.
Wind power generation is simulated for the whole country-specific time period, but generation is set to 0 for periods before the commissioning date of the respective wind park.
If only the month of commissioning is known, we assume the middle of the month as commissioning date. For the USA, only the commissioning year is known.
In order to avoid large increments of wind power generation on any particular date, the capacity installed within a year is linearly interpolated from the 1st of January to the end of the year.\\
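A sketch of this phase-in, with illustrative units and function names:
\begin{verbatim}
# Sketch: if only the commissioning year is known (USA), phase in the
# capacity linearly from 1 January to the end of the year.
import numpy as np
import pandas as pd

def phase_in_capacity(capacity_kw: float, year: int) -> pd.Series:
    hours = pd.date_range(f"{year}-01-01", f"{year}-12-31 23:00",
                          freq="60min")
    ramp = np.linspace(0.0, 1.0, len(hours))
    return pd.Series(capacity_kw * ramp, index=hours)
\end{verbatim}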
\subsection{Validation}
218 different data sets of observed generation are suitable for validation. 10 data sets are on country scale, 58 on state or regional scale, and 150 on wind park scale. 62 of those have hourly resolution, 62 daily, and 94 monthly. Due to data quality issues, not all available time series could be used (see section \ref{subsection:data_cleaning}).
In order for results to be comparable between different levels of spatial and temporal aggregation, as well as countries, generation time series are normalised to capacity factors.\\
Validation of the simulated time series was performed using three statistical parameters to assess quality. Pearson correlation, RMSE (root mean square error) and MBE (mean biased error) were used, as suggested by Borsche et al. \cite{Borsche2015}.
The RMSE is an indicator that increases if (a) there is a significant difference in the level of simulated and observed timeseries, and (b) there is a temporal mismatch between the two. As we use capacity factors, which are comparable in scale between regions, the RMSE does not have to be normalised. To assess the different components of mismatch, i.e. temporal mismatch and mismatch in the level of production, we additionally calculate the Pearson correlation, which indicates whether the temporal profiles of simulated and observed generation are similar. To assess differences in levels, including over- or underestimation, we determine the MBE.
Since the proposed model does not consider losses due to wakes or down-times due to maintenance, a slight overestimation of generation is expected. I.e. slightly overestimating models tend to represent actual generation better than underestimating ones.
Results for different regions and temporal aggregation levels are compared in notched boxplots. The notches indicate if the medians differ significantly at the 95\% level.\footnote{The notches are determined according to $M \pm 1.57 \cdot IQR / \sqrt{n}$, with M being the median, IQR the interquartile range and n the number of samples. If the notches of two boxes do not overlap, the difference between their medians is statistically significant at the 0.05 level \cite{Chambers1983}.} As we cannot assume that our sample of wind parks and regions represents a random sample of global wind power generation locations, and as there is a bias in the amount of timeseries available for different regions, we report on different results for different countries whenever they deviate from the generally observed pattern. Respective figures are put into the appendix.\\
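For reference, the three measures are computed on aligned capacity factor series as follows:
\begin{verbatim}
# Sketch: validation measures on aligned simulated (sim) and observed
# (obs) capacity factor series.
import numpy as np

def pearson_r(sim, obs):
    return np.corrcoef(sim, obs)[0, 1]

def rmse(sim, obs):
    sim, obs = np.asarray(sim), np.asarray(obs)
    return np.sqrt(np.mean((sim - obs) ** 2))

def mbe(sim, obs):
    sim, obs = np.asarray(sim), np.asarray(obs)
    return np.mean(sim - obs)
\end{verbatim}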
In order to estimate the effect of system size on simulation quality, a system size parameter is introduced.
It measures the number of reanalysis grid cells occupied by wind turbines or parks, e.g. per wind park or region (see Figure \ref{fig:syssize}). Individual wind turbines therefore always have size 1. Wind parks can have a size larger than 1 if they cover more than one grid cell, but this is mostly not the case. Countries always cover more than one grid cell.\\
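A sketch of this parameter, snapping coordinates to the nearest grid cell centre (the grid resolutions given are approximate and would be replaced by the exact reanalysis grids):
\begin{verbatim}
# Sketch: system size as the number of distinct reanalysis grid cells
# occupied by a set of turbine locations (ERA5: ~0.25 deg).
def system_size(lons, lats, dlon=0.25, dlat=0.25):
    cells = {(round(lon / dlon), round(lat / dlat))
             for lon, lat in zip(lons, lats)}
    return len(cells)

# Example: three turbines, two of which share an ERA5 grid cell
system_size([-43.10, -43.12, -44.60], [-12.50, -12.51, -12.90])  # -> 2
\end{verbatim}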
\begin{figure}[!ht]
\centering
\includegraphics[scale=0.5]{system_sizes.png}
\caption{System sizes per country and data set (non-normalised)}
\label{fig:syssize}
\end{figure}
\section{Results}
\label{section:results}
In this section we first present how the choice of the reanalysis dataset affects simulation quality. Subsequently, we investigate whether the use of the GWA for mean bias correction can improve our simulation's goodness of fit. Finally, we assess the effect of spatial and temporal aggregation of wind power generation on simulation quality.
\subsection{Impact of choice of reanalysis dataset on simulation quality}
Here we assess the difference in simulation quality as implied by using different reanalysis data sets, i.e. MERRA-2 and the more recent ERA5.\\
Figure \ref{fig:era5_vs_merra2_all} presents a comparison of statistical parameters between simulations based on ERA5 and MERRA-2 reanalyses for all analysed regions, i.e. wind parks, states, regions, and countries. While ERA5 correlations (median: 0.82) are higher than the ones achieved with MERRA-2 (median: 0.77) and while MERRA-2 has a larger spread of correlations, one of them being even negative, the difference in correlations is not significant. Overall, there is a significant (notches do not overlap) difference in RMSEs (median ERA5: 0.15, MERRA-2: 0.19). Regarding the MBEs, there is a significant difference between the median MBE of ERA5 (-0.05) and MERRA-2 (0.09), with ERA5 MBEs slightly underestimating generation on average, while MERRA-2 MBEs overestimate generation quite substantially (by approx. 1\%). Underestimation of ERA5 can be as low as almost 40\% for some locations, while MERRA-2 overestimates generation by as much as 40\%. In general, both data sets seem to underestimate wind power generation in New Zealand, which is the only region where this occurs.\\
On a country level (see Figure \ref{fig:era5_vs_merra2_dif}), these results are replicated, with the exception of New Zealand, where all indicators, i.e. correlations, RMSE, and MBE, are better for MERRA-2. However, only for the MBE is the advantage of MERRA-2 over ERA5 significant.
The differences in correlations between countries indicate that the ERA5-based simulation achieves a higher correlation than the MERRA-2-based one in most regions, except for New Zealand (see also Figure \ref{fig:era5_vs_merra2}).
In summary, using ERA5 as the data source for wind power simulation will result in time series that are better than, or at least as good as, those based on MERRA-2. On average, quality indicators are reasonable, but extreme outliers are observed for both data sets. As these outliers mostly occur for both reanalysis data sets, they may also point to a lack of data quality in the observed wind power generation.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.7]{era5_vs_merra2_all}
\caption{Comparison of statistical parameters for simulations with ERA5 and MERRA-2 reanalyses for all analysed regions. Non-overlapping notches indicate a difference in the medians statistically significant at the 95\% level.}
\label{fig:era5_vs_merra2_all}
\end{figure}
\subsection{Bias correction with GWA}
In order to adjust the mean bias of the wind speeds taken from reanalysis data, we use the Global Wind Atlas. Due to its higher spatial resolution compared to the reanalysis data sets, we expect an improvement in particular in RMSE and MBE. The effect of bias correction on correlations depends on the non-linear relationship between wind speeds and wind power: shifting wind speeds by a constant factor does not imply a proportional shift in wind power output. Hence, bias correction may impact correlations, too. In most cases, however, this impact is small and not significant (see Figure \ref{fig:era5_gwa_all}). Correlations are slightly increased in New Zealand with GWA2 and in South Africa with either version of the GWA; however, these increases are not significant (Figure \ref{fig:era5_gwa}). \\
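The core of the mean bias correction can be sketched as follows (a simplification under the assumption that reanalysis wind speeds are rescaled so that their long-term mean matches the GWA mean at the site; the interpolation and vertical extrapolation steps of the actual procedure are omitted):
\begin{verbatim}
import numpy as np

def bias_correct(ws_reanalysis, mean_gwa):
    # Rescale reanalysis wind speeds so that their long-term mean
    # matches the (higher-resolution) GWA mean wind speed at the site.
    factor = mean_gwa / np.mean(ws_reanalysis)
    return factor * ws_reanalysis
\end{verbatim}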
The RMSEs are slightly decreased by GWA2 in comparison to simulations without bias correction, but the medians do not differ significantly. The simulation with GWA3, however, shows a significant increase in the median RMSE, compared both to GWA2 and to the simulation without mean bias correction. On a regional level, this significant difference between GWA3 and the other simulations is only found in the USA, as well as between GWA2 and GWA3 in New Zealand (see Figure \ref{fig:era5_gwa}); the overall results are thus mainly driven by the USA and New Zealand.\\
Measured by MBEs, a similar conclusion can be drawn: GWA2 reduces the median error and shifts it closer to 0. Even though this shift is not significant over all regions combined, a significant shift towards 0 is seen in all countries besides New Zealand.
GWA3, in contrast, leads to a large increase in the MBE. This also applies in New Zealand and South Africa, while for Brazil GWA2 is less recommended.\\
To sum up, in most of the investigated regions GWA2 may be used to increase correlations (New Zealand, South Africa), decrease the RMSE (all countries), and shift the MBE closer to 0 or to a small positive value (all countries except Brazil). Based on our results, GWA3 is not recommended for bias correction, as it increases the errors (RMSEs, as well as MBEs in three out of four countries, see Figure \ref{fig:era5_gwa_all}).\\
A similar analysis was conducted by applying the GWA to the MERRA-2 based wind power simulation; the results can be found in section \ref{subsection:merra2_gwa}. For MERRA-2, using the GWA for bias correction has ambiguous impacts on the results, and we therefore do not fully recommend it as a means of bias correction.\\
\begin{figure}[!h]
\centering
\includegraphics[scale=0.7]{ERA5_GWA_all}
\caption{Comparison of statistical parameters for simulations with ERA5 and different versions of the GWA for all analysed regions. Non-overlapping notches indicate difference in medians statistically significant at the 95\% significance level.}
\label{fig:era5_gwa_all}
\end{figure}
\subsection{Impact of spatial and temporal aggregation}
In this section we assess the impact of spatial and temporal aggregation on the quality of wind power simulations. The impact on the correlation cannot be derived analytically: while aggregating two capacity factor time series lowers the variance of the combined series below the maximum of the variances of the original series, the change in the covariance of the combined series relative to the single locations depends on the covariances of the wind patterns at the two locations and thus cannot be derived in general (see Appendix \ref{subsection:aggregation_timeseries}).\\
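To make the variance part of this argument explicit (our notation, not taken from the appendix), consider the equally weighted aggregate of two capacity factor time series $X_1$ and $X_2$:
\begin{align*}
\mathrm{Var}\Big(\frac{X_1+X_2}{2}\Big)
&= \frac{1}{4}\Big(\mathrm{Var}(X_1)+\mathrm{Var}(X_2)+2\,\mathrm{Cov}(X_1,X_2)\Big) \\
&\le \bigg(\frac{\sqrt{\mathrm{Var}(X_1)}+\sqrt{\mathrm{Var}(X_2)}}{2}\bigg)^{2}
\le \max\big(\mathrm{Var}(X_1),\mathrm{Var}(X_2)\big),
\end{align*}
where the first inequality follows from the Cauchy-Schwarz bound on the covariance. The variance of the aggregate can therefore never exceed the larger of the two variances; the correlation with observed generation, in contrast, involves the covariance with a third time series and admits no such general bound.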
Therefore, we assess empirically how aggregation impacts time series quality. For this analysis, the wind power simulations with ERA5 data and GWA2 bias correction for Brazil and New Zealand (the only countries for which wind park level data are available) are used, as this combination showed decent simulation quality for all regions.
Figure \ref{fig:spatial_res_all} shows the resulting simulation quality indicators. Overall, a tendency can be observed that simulation quality increases with system size, i.e. RMSEs decrease. In particular, the largest system (Brazil) has a significantly lower median RMSE than the smaller systems, although single outliers among the smaller systems can reach the simulation quality of the largest system. For individual countries, this is difficult to assess, since there is a lack of variety in system sizes. Nevertheless, in the USA and Brazil simulation quality increases with system size, as can be observed in Figure \ref{fig:spatial_res}.
With regard to spatial relations, we also assess how geography might impact simulation accuracy. We therefore examine the correlations of the best simulation (ERA5 with GWA2 mean bias correction) in Brazil and New Zealand (where validation data at wind park level are available). Figure \ref{fig:map_corr} indicates that in Brazil southern wind parks have higher correlations, whereas in New Zealand the highest correlations are found in proximity to the coast.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.7]{spatial_resolution_BRA_NZ}
\caption{Impact of spatial resolution (system size 1: wind parks (system size parameter (ssp) $<$ 5), system size 2: states of Brazil and New Zealand (5 $\leq$ ssp $<$ 25), system size 3: Brazil (ssp $\geq$ 25)) on simulation quality in Brazil and New Zealand. Non-overlapping notches indicate a statistically significant difference in the medians at the 95\% level.}
\label{fig:spatial_res_all}
\end{figure}
When assessing the impact of temporal resolution on simulation quality, some locations in the USA had to be excluded, as they do not provide hourly time resolution; therefore, for the USA only the regions of Texas and the Bonneville Power Administration were included. In all other countries, all locations are available at hourly resolution.
The medians of the correlations increase significantly from hourly to daily as well as from daily to monthly resolution (Figure \ref{fig:temporal_res_all}). While the increase from daily to monthly correlation is around 5 \% points, daily correlations are around 15 \% points higher than hourly ones. This is observed in all individual countries; however, only Brazil shows significant changes in median correlation for both temporal aggregation steps (Figure \ref{fig:temporal_res}).\\
The RMSE is reduced by temporal aggregation: from hourly to daily by about 12 \% points, and from daily to monthly by around 10 \% points on average. In all countries except Brazil, this decrease in RMSE is significant (Figure \ref{fig:temporal_res}).
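The temporal aggregation itself is straightforward; a sketch using pandas (assuming hourly capacity factor series with a datetime index; 'D' and 'M' denote daily and monthly means):
\begin{verbatim}
import pandas as pd

def aggregate_quality(sim, obs, freq):
    # Correlation and RMSE after resampling hourly capacity factors
    # to daily ('D') or monthly ('M') means.
    s = sim.resample(freq).mean()
    o = obs.resample(freq).mean()
    corr = s.corr(o)
    rmse = ((s - o) ** 2).mean() ** 0.5
    return corr, rmse
\end{verbatim}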
\begin{figure}[ht]
\centering
\includegraphics[scale=0.7]{temporal_resolution_all}
\caption{Impact of temporal resolution on simulation quality. Non-overlapping notches indicate a statistically significant difference in the medians at the 95\% level.}
\label{fig:temporal_res_all}
\end{figure}
To sum up, simulation quality tends to increase rather strongly with temporal aggregation. The effect of spatial aggregation is somewhat ambiguous, but when comparing very low to very high resolutions, an effect can also be detected.
\section{Discussion}
In this work we compare the capabilities of the two reanalyses MERRA-2 and ERA5 as data sources for wind power simulation in several countries around the world and analyse the suitability of the Global Wind Atlas to increase the quality of the simulated time series.
With a few exceptions, ERA5 performs better than MERRA-2 with respect to the chosen quality measures and the selected samples. The better performance may be partly due to the higher spatial resolution of the input data set, but also due to the use of a more recent climate model assimilating a large amount of observed data \cite{era5_indat}. The capability of representing wind conditions, especially in complex terrain, is therefore expected to be improved \cite{Olauson_2018}.
This result is not supported by Lileó et al. \cite{lileo2013long}, who claim, in a similar assessment for wind speeds, that an increase in spatial resolution does not necessarily result in higher correlations between reanalyses and local wind measurements.
Our results coincide with findings of Olauson \cite{Olauson_2018}, who studied the performance of these two reanalysis data sets for wind power simulation in four European countries and a region in the USA, as well as Jourdier \cite{Jourdier2020} who compared MERRA-2, ERA5, two high-resolution models and the New European Wind Atlas for the purpose of wind power simulation in France.
Olauson found hourly correlations of over 0.94 for all regions investigated (except the BPA with MERRA-2, where it is 0.75), which is higher than the correlations identified in our study. For most locations, we find correlations above 0.7; only in South Africa are they around 0.6 (ERA5) or even below (MERRA-2). This coincides with the correlations found by Olauson for individual wind parks in Sweden, which are above 0.5 (MERRA-2) and 0.8 (ERA5).
While Olauson finds an increase in correlation of ERA5 compared to MERRA-2 of less than 1 \% point in three of the examined regions (i.e. Germany, Denmark, and France), in our study correlations of ERA5 are up to 10 \% points higher, with a larger increase in some exceptional cases. This is in the range of the increase in correlation reported by Jourdier \cite{Jourdier2020} for France and its subregions, with correlations being 0.15 higher for ERA5 compared to MERRA-2. However, in our analysis there are also cases with lower correlations for ERA5-based simulations compared to MERRA-2, especially in New Zealand. An interesting result is that in \cite{Olauson_2018} the highest increase in correlation, by nearly 20 \% points, is seen in the BPA in the USA, which agrees with the results of the present study.
Only for the USA did we estimate RMSEs comparable to the results in \cite{Olauson_2018}, with values between 2.35 \% and 9.1 \% for ERA5, and 2.82 \% and 18.4 \% for MERRA-2. In the other regions (Brazil, New Zealand, South Africa), the RMSE is higher, with about 75 \% of the locations showing RMSEs above 10 \%. These differences may be explained on the one hand by differing quality of the validation data, and on the other hand by a better fit of the data for the USA and Europe compared to other world regions (South America, Africa, or Oceania).
Regarding the comparison of the two reanalyses, Olauson found that the RMSE was between 20 \% and 50 \% lower for ERA5 than for MERRA-2 (except in Denmark, where there was hardly any impact). In absolute terms, this means a decrease of up to 0.02 (except for the BPA, with over 0.09), while we found that in some locations the RMSE was up to 0.2 lower for ERA5 than for MERRA-2. In other, though fewer, locations, particularly in New Zealand, the RMSE was up to 0.2 higher for ERA5-based simulations compared to MERRA-2.
The GWA does not improve simulation quality consistently across all locations. While GWA2 showed potential to decrease RMSEs, GWA3 rather increases them. Considering the MBEs, the results are ambiguous: GWA3 often increased errors and performed worse than GWA2. Although an analysis showed that ERA5 performs better than ERA-Interim \cite{Rivas2019}, this cannot be confirmed for GWA3 and GWA2, respectively, which are based on these two reanalysis data sets. So far, no other study using GWA3 has been conducted, but results from analyses of the previous version showed that applying the GWA for downscaling MERRA reanalysis wind speeds (EMHIRES dataset \cite{gonzalez2016emhires}) has no unambiguously positive effect on simulation quality when compared to TSO time series. Despite the authors' claim that the simulation based on MERRA data underestimates variability compared to the GWA-downscaled dataset (EMHIRES) and that downscaling improves results, their statistical results indicate that neither correlations increase (13 of 24 investigated countries have higher correlations with EMHIRES than with MERRA), nor do RMSEs (9 countries) or biases (7 countries) decrease consistently \cite{GonzalezAparicio2017}. This fits well with the results of our study, in which countries and regions differ in whether the GWA improves the quality of simulated wind power time series. Another study, which uses the GWA and MERRA-2 for wind power simulation in Brazil, finds that bias correction in general improves results \cite{Gruber2019}.
A further subject we investigated is the implication of spatial and temporal aggregation for the applied quality measures. The expectation was that the higher the level of spatial or temporal aggregation, the lower the error, since compensating effects of negative and positive biases reduce errors. For temporal aggregation, this was confirmed by the analysed data. It is also confirmed by Staffell and Pfenninger, who compute higher correlations for eight European countries on a monthly than on an hourly basis \cite{Staffell_2016}.
For spatial aggregation, however, we could not consistently confirm such an effect. This matches the results of an analysis conducted in Europe using MERRA and MERRA-2 reanalysis data: monthly correlations at country level were lower than correlations at the European level in only some of the 13 studied countries (9 for MERRA and 7 for MERRA-2), and the median of the correlations per country was above the correlation of the aggregated data \cite{Staffell_2016}. In contrast, Olauson \cite{Olauson_2018} finds higher correlations, as well as lower RMSEs and errors, for Sweden as a whole compared to 1051 individual wind turbines when simulating wind power with MERRA-2 and ERA5.
This study was limited by data availability and data quality. For future research, validation in additional countries is desirable. Moreover, higher-quality validation data could greatly increase the validity of the results. Nevertheless, we are confident that our results hold when comparing different simulations, despite some of the validation timeseries being of lesser quality.\\
\section{Conclusions}
In this paper we assessed how the choice of reanalysis data set for wind power simulation in different regions of the world, as well as the use of the Global Wind Atlas for mean bias correction of reanalysis wind speeds, affects simulation quality. We additionally looked into the implications of spatial and temporal aggregation for the quality measures.
Our main conclusions are (1) that ERA5 performs better than MERRA-2 in all regions and for all indicators, with ERA5 showing approximately 0.05 higher correlations and 0.05 lower RMSEs than MERRA-2 in most regions. (2) No version of the GWA consistently improves simulation quality. GWA2 may be used; however, improvements over using no bias correction may be minor and, in some cases, simulation results may even deteriorate. We discourage the use of GWA3. (3) Temporal aggregation improves quality indicators due to compensating effects, with an increase of about 0.2 in correlation and about 0.1 to 0.2 lower RMSEs in most regions when aggregating from hourly to monthly time series. (4) For spatial aggregation, a much more limited effect was found: only when comparing very low and very high spatial aggregation was an increase in quality observed.\\
The results of our analysis\footnote{The resulting time series, aggregated per wind park, will be made available in an online repository after submission.} can be used as a basis for future wind power simulation efforts and are the foundation of a new global dynamic wind atlas. Access to this global dynamic wind atlas is enabled by making our code openly available \cite{GDWA}.
The tool can generate wind power generation timeseries for any location worldwide, for use in energy system models or for studying the variability of wind power generation. Furthermore, our results allow estimating the magnitude of error to be expected when relying on reanalysis data for wind power simulation. These conclusions are important for energy system modellers designing highly renewable energy systems.
\section{Acknowledgements}
This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 758149).
\sloppy
\printbibliography
\newpage
\section{Introduction}
\label{sec:intro}
Modern machine learning (ML) methods are driven by complex, high-dimensional, and nonparametric models that can capture highly nonlinear phenomena. These models have proven useful in wide-ranging applications including vision, robotics, medicine, and natural language. At the same time, the complexity of these methods often obscures their decisions and in many cases can lead to wrong decisions by failing to properly account for---among other things---spurious correlations, adversarial vulnerability, and invariances \citep{bottou2015two,scholkopf2019causality,buhlmann2018invariance}. This has led to a growing literature on correcting these problems in ML systems. A particular example that has received widespread attention in recent years is the problem of causal inference, which is closely related to these issues. While substantial methodological progress has been made towards embedding complex methods such as deep neural networks and RKHS embeddings into learning causal graphical models \citep{huang2018generalized,mitrovic2018causal,zheng2019learning,yu2019dag,lachapelle2019gradient,zhu2019causal,ng2019masked}, theoretical progress has been slower and typically reserved for particular parametric models such as linear \citep{chen2018causal,wang2018nongauss,ghoshal2017ident,ghoshal2017sem,loh2014causal,geer2013,aragam2015ccdr,aragam2015highdimdag,aragam2019globally} and generalized linear models \citep{park2017,park2018learning}.
In this paper, we study the problem of learning directed acyclic graphs (DAGs) from data in a nonparametric setting. Unlike existing work on this problem, we do not require linearity, additivity, independent noise, or faithfulness.
Our approach is model-free and nonparametric, and uses nonparametric estimators (kernel smoothers, neural networks, splines, etc.) as ``plug-in'' estimators. As such, it is agnostic to the choice of nonparametric estimator chosen. Unlike existing consistency theory in the nonparametric setting \citep{peters2014,hoyer2009,buhlmann2014,rothenhausler2018causal,huang2018generalized,tagasovska2018nonparametric,nowzohour2016}, we provide explicit (nonasymptotic) finite sample complexity bounds and show that the resulting method has polynomial time complexity.
The method we study is closely related to existing algorithms that first construct a variable ordering \citep{ghoshal2017ident,chen2018causal,ghoshal2017sem,park2020identifiability}.
Despite this being a well-studied problem, to the best of our knowledge our analysis is the first to provide explicit, simultaneous statistical and computational guarantees for learning nonparametric DAGs.
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.5\textwidth}
\includegraphics[width=\textwidth]{figures/counter_example1}
\caption{}
\label{fig:intro:camfail}
\end{subfigure}
\hspace{1em}
\begin{subfigure}[t]{0.43\textwidth}
\includegraphics[width=\textwidth]{figures/eqvar_layers.pdf}
\caption{}
\label{fig:intro:layers}
\end{subfigure}
\caption{(a) Existing methods may not find a correct topological ordering in simple settings when $d=3$. (b) Example of a layer decomposition $L(\mathsf{G})$ of a DAG on $d=6$ nodes.}
\label{fig:intro}
\end{figure}
\paragraph{Contributions}
Figure~\ref{fig:intro:camfail} illustrates a key motivation for our work: While there exist methods that obtain various statistical guarantees, they lack provably efficient algorithms, or vice versa.
As a result, these methods can fail in simple settings.
Our focus is on \emph{simultaneous} computational and statistical guarantees that are explicit and nonasymptotic in a model-free setting.
More specifically, our main contributions are as follows:
\begin{itemize}
\item We show that the algorithms of \citet{ghoshal2017ident} and \citet{chen2018causal} rigorously extend to a model-free setting, and provide a method-agnostic analysis of the resulting extension (Theorem~\ref{thm:main:sample}). That is, the time and sample complexity bounds depend on the choice of estimator used, and this dependence is made explicit in the bounds (Section~\ref{sec:ident:alg}, Section~\ref{sec:sample}).
\item We prove that this algorithm runs in at most $O(nd^{5})$ time and succeeds with $n=\Omega((d^{2}/\varepsilon)^{1+d/2})$ samples (Corollary~\ref{cor:main:sample}).
Moreover, the exponential dependence on $d$ can be improved by imposing additional sparsity or smoothness assumptions, and can even be made polynomial (see Section~\ref{sec:sample} for discussion). This is an expected consequence of our estimator-agnostic approach.
\item We show how existing identifiability results based on ordering variances can be unified and generalized to include model-free families (Theorem~\ref{thm:gen:ident}, Section~\ref{sec:ident:nonpar}).
\item We show that greedy algorithms such as those used in the CAM algorithm \citep{buhlmann2014} can provably fail to recover an identifiable DAG (Example~\ref{ex:cam:fail}), as shown in Figure~\ref{fig:intro:camfail} (Section~\ref{sec:ident:comparison}).
\item Finally, we run a simulation study to evaluate the resulting algorithm in a variety of settings against seven state-of-the-art algorithms (Section~\ref{sec:exp}).
\end{itemize}
Our simulation results can be summarized as follows: When implemented using generalized additive models \citep{hastie1990generalized}, our method outperforms most state-of-the-art methods, particularly on denser graphs with hub nodes.
We emphasize here, however, that our main contributions lie in the theoretical analysis, specifically providing a polynomial-time algorithm with sample complexity guarantees.
\paragraph{Related work}
The literature on learning DAGs is vast, so we focus only on related work in the nonparametric setting.
The most closely related line of work considers additive noise models (ANMs) \citep{peters2014,hoyer2009,buhlmann2014,chicharro2019conditionally}, and proves a variety of identifiability and consistency guarantees. Compared to our work, the identifiability results proved in these papers require that the structural equations are (a) nonlinear with (b) additive, independent noise. Crucially, these papers focus on (generally asymptotic) \emph{statistical} guarantees without any computational or algorithmic guarantees.
There is also a closely related line of work for bivariate models \citep{mooij2014,monti2019causal,wu2020causal,mitrovic2018causal} as well as the post-nonlinear model \citep{zhang2009}.
\citet{huang2018generalized} proposed a greedy search algorithm using an RKHS-based generalized score, and proves its consistency assuming faithfulness. \citet{rothenhausler2018causal} study identifiability of a general family of partially linear models and prove consistency of a score-based search procedure in finding an equivalence class of structures. There is also a recent line of work on embedding neural networks and other nonparametric estimators into causal search algorithms \citep{lachapelle2019gradient,zheng2019learning,yu2019dag,ng2019masked,zhu2019causal} without theoretical guarantees. While this work was in preparation, we were made aware of the recent work \citep{park2020condvar} that proposes an algorithm that is similar to ours---also based on \citep{ghoshal2017ident} and \citep{chen2018causal}---and establishes its sample complexity for linear Gaussian models.
In comparison to these existing lines of work, our focus is on simultaneous computational and statistical guarantees that are explicit and nonasymptotic (i.e. valid for all finite $d$ and $n$), for the fully nonlinear, nonparametric, and model-free setting.
\paragraph{Notation}
Subscripts (e.g. $X_{j}$) will always be used to index random variables and superscripts (e.g. $X_{j}^{(i)}$) to index observations.
For a matrix $W=(w_{kj})$, $w_{\cdot j}\in\mathbb{R}^{d}$ is the $j$th column of $W$.
We denote the indices by $[d]=\{1,\ldots,d\}$, and frequently abuse notation by identifying the indices $[d]$ with the random vector $X=(X_{1},\ldots,X_{d})$.
For example, nodes $X_{j}$ are interchangeable with their indices $j$ (and subsets thereof), so e.g. $\var(j\given A)$ is the same as $\var(X_{j}\given X_{A})$.
\section{Background}
\label{sec:background}
Let $X=(X_{1},\ldots,X_{d})$ be a $d$-dimensional random vector and $\mathsf{G}=(V,E)$ a DAG where we implicitly assume $V=X$. The \emph{parent set} of a node is defined as
$\pa_{\mathsf{G}}(X_{j})=\{i: (i,j)\in E\}$,
or simply $\pa(j)$ for short.
A \emph{source} node is any node $X_{j}$ such that $\pa(j)=\emptyset$ and an \emph{ancestral set} is any set $A\subset V$ such that $X_{j}\in A\implies\pa(j)\subset A$.
The graph $\mathsf{G}$ is called a \emph{Bayesian network} (BN) for $X$ if it satisfies the Markov condition, i.e. that each variable is conditionally independent of its non-descendants given its parents.
Intuitively, a BN for $X$ can be interpreted as a representation of the direct and indirect relationships between the $X_{j}$, e.g. an edge $X_{i}\to X_{j}$ indicates that $X_{j}$ depends directly on $X_{i}$, and not vice versa.
Under additional assumptions such as causal minimality and no unmeasured confounding, these arrows may be interpreted causally; for more details, see the surveys \citep{buhlmann2018invariance,scholkopf2019causality} or the textbooks \citep{lauritzen1996,koller2009,spirtes2000,pearl2009,peters2017elements}.
The goal of structure learning is to learn a DAG $\mathsf{G}$ from i.i.d. observations $X^{(i)}\overset{\text{iid}}{\sim}\mathbb{P}(X)$. Throughout this paper, we shall exploit the following well-known fact: To learn $\mathsf{G}$, it suffices to learn a topological sort of $\mathsf{G}$, i.e. an ordering $\prec$ such that $X_{i}\to X_{j}\implies X_{i}\prec X_{j}$. A brief review of this material can be found in the supplement.
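To illustrate the second step---recovering the edges once an ordering is known---consider the following sketch, which regresses each node on its predecessors and keeps those with nonnegligible influence (the random forest and the threshold \texttt{tau} are our illustrative choices; the implementation in Section~\ref{sec:exp} uses GAM-based variable selection instead):
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def parents_from_order(X, order, tau=0.05):
    # Given a topological sort, estimate each parent set by regressing
    # a node on its predecessors and thresholding feature importances.
    parents = {order[0]: []}
    for i in range(1, len(order)):
        j, preds = order[i], list(order[:i])
        f = RandomForestRegressor(n_estimators=200)
        f.fit(X[:, preds], X[:, j])
        parents[j] = [p for p, w in zip(preds, f.feature_importances_)
                      if w > tau]
    return parents
\end{verbatim}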
\paragraph{Equal variances}
Recently, a new approach has emerged that was originally cast as a way to learn equal variance DAGs \citep{ghoshal2017ident,chen2018causal}, although it has since been generalized beyond the equal variance case \citep{ghoshal2017sem,park2020identifiability,park2020condvar}. An equal variance DAG is a linear structural equation model (SEM) that satisfies
\begin{align}
\label{eq:eqvar:lin}
X_{j}
= \ip{w_{\cdot j},X} + z_{j},
\quad
\var(z_{j})
= \sigma^{2},
\quad
z_{j}\indep\pa(j),
\quad
w_{kj} = 0\iff k\notin\pa(j)
\end{align}
for some weights $w_{kj}\in\mathbb{R}$.
Under the model \eqref{eq:eqvar:lin}, a simple algorithm can learn the graph $\mathsf{G}$
by first learning a topological sort $\prec$. For these models, we have the following decomposition of the variance:
\begin{align}
\label{eq:var:decomp:lin}
\var(X_{j})
&= \var(\ip{w_{\cdot j},X}) + \var(z_{j}).
\end{align}
Thus, as long as $\var(\ip{w_{\cdot j},X})>0$, we have $\var(X_{j})>\var(z_{j})$. It follows that as long as $\var(z_{j})$ does not depend on $j$, it is possible to identify a source in $\mathsf{G}$ by simply minimizing the residual variances.
This is the essential idea behind algorithms based on equal variances in the linear setting \citep{ghoshal2017ident,chen2018causal}.
Alternatively, it is possible to iteratively identify best sinks by minimizing marginal precisions.
Moreover, this argument shows that the assumption of linearity is not crucial, and this idea can readily be extended to ANMs, as in \citep{park2020condvar}. Indeed, the crucial assumption in this argument is the independence of the noise $z_{j}$ and the parents $\pa(X_{j})$; in the next section we show how these assumptions can be removed altogether.
\paragraph{Layer decomposition of a DAG}
Given a DAG $\mathsf{G}$, define a collection of sets as follows: $L_{0}:=\emptyset$, $A_{j}=\cup_{m=0}^{j}L_{m}$ and for $j>0$, $L_{j}$ is the set of all source nodes in the subgraph $\mathsf{G}[V-A_{j-1}]$ formed by removing the nodes in $A_{j-1}$. So, e.g., $L_{1}$ is the set of source nodes in $\mathsf{G}$ and $A_{1}=L_{1}$. This decomposes $\mathsf{G}$ into layers, where each layer $L_{j}$ consists of nodes that are sources in the subgraph $\mathsf{G}[V-A_{j-1}]$, and $A_{j}$ is an ancestral set for each $j$.
Let $r$ denote the number of ``layers'' in $\mathsf{G}$, and let $L(\mathsf{G}):=(L_{1},\ldots,L_{r})$ be the corresponding layers.
The quantity $r$ effectively measures the depth of a DAG. See Figure~\ref{fig:intro:layers} for an illustration.
Learning $\mathsf{G}$ is equivalent to learning the sets $L_{1},\ldots,L_{r}$, since any topological sort $\pi$ of $\mathsf{G}$ can be determined from $L(\mathsf{G})$, and from any sort $\pi$, the graph $\mathsf{G}$ can be recovered via variable selection. Unlike a topological sort of $\mathsf{G}$, which may not be unique, the layer decomposition $L(\mathsf{G})$ is always unique.
Therefore, without loss of generality, in the sequel we consider the problem of identifying and learning $L(\mathsf{G})$.
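Computing $L(\mathsf{G})$ from a known graph is a simple iterative peeling procedure; a sketch (assuming the DAG is given as a map from each node to the set of its parents):
\begin{verbatim}
def layer_decomposition(parents):
    # Layers L_1, ..., L_r of a DAG given as {node: set_of_parents}.
    # Each layer consists of the sources of the remaining subgraph;
    # terminates because the graph is acyclic.
    remaining = dict(parents)
    layers = []
    while remaining:
        sources = [v for v, pa in remaining.items()
                   if not (pa & remaining.keys())]
        layers.append(sources)
        for v in sources:
            del remaining[v]
    return layers

# Example: 1 -> 2, 1 -> 3, 2 -> 3 yields [[1], [2], [3]].
print(layer_decomposition({1: set(), 2: {1}, 3: {1, 2}}))
\end{verbatim}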
\section{Identifiability and algorithmic consequences}
\label{sec:ident}
This section sets the stage for our main results on learning nonparametric DAGs: First, we show that existing identifiability results for equal variances generalize to a family of model-free, nonparametric distributions. Second, we show that this motivates an algorithm very similar to existing algorithms in the equal variance case. We emphasize that various incarnations of these ideas have appeared in previous work \citep{ghoshal2017ident,chen2018causal,ghoshal2017sem,park2020identifiability,park2020condvar}, and our effort in this section is to unify these ideas and show that the same ideas can be applied in more general settings without linearity or independent noise.
Once this has been done, our main sample complexity result is presented in Section~\ref{sec:sample}.
\subsection{Nonparametric identifiability}
\label{sec:ident:nonpar}
In general, a BN for $X$ need not be unique, i.e. $\mathsf{G}$ is not necessarily identifiable from $\mathbb{P}(X)$. A common strategy in the literature to enforce identifiability is to impose structural assumptions on the conditional distributions $\mathbb{P}(X_{j}\given \pa(j))$, for which there is a broad literature on identifiability. Our first result shows that identifiability is guaranteed as long as the residual variances $\mathbb{E}\var(X_{j}\given \pa(j))$ do not depend on $j$. This is a natural generalization of the notion of equality of variances for linear models \citep{peters2013,ghoshal2017ident,chen2018causal}.
\begin{thm}
\label{thm:gen:ident}
If $\mathbb{E}\var(X_{j}\given \pa(j))\equiv\sigma^{2}$ does not depend on $j$, then $\mathsf{G}$ is identifiable from $\mathbb{P}(X)$.
\end{thm}
The proof of Theorem~\ref{thm:gen:ident} can be found in the supplement.
This result makes no structural assumptions on the local conditional probabilities $\mathbb{P}(X_{j}\given \pa(j))$. To illustrate, we consider some examples below.
\begin{ex}[Causal pairs, \citep{mooij2014}]
Consider a simple model on two variables: $X\to Y$ with $\mathbb{E}\var(Y\given X)=\var(X)$. Then as long as $\mathbb{E}[Y\given X]$ is nonconstant,
Theorem~\ref{thm:gen:ident} implies the causal order is identifiable. No additional assumptions on the noise or functional relationships are necessary.
\end{ex}
\begin{ex}[Binomial models, \citep{park2017}]
Assume $X_{j}\in\{0,1\}$ and $X_{j}=\BernoulliDist(f_{j}(\pa(j)))$ with $f_{j}(\pa(j))\in[0,1]$. Then Theorem~\ref{thm:gen:ident} implies that if $\mathbb{E} f_{j}(\pa(j))(1-f_{j}(\pa(j)))\equiv\sigma^{2}$ does not depend on $j$, then $\mathsf{G}$ is identifiable.
\end{ex}
\begin{ex}[Generalized linear models]
The previous example can of course be generalized to arbitrary generalized linear models: Assume $\mathbb{P}[X_{j}\given \pa(j)]\propto \exp(X_{j}\theta_{j}-K(\theta_{j}))$, where $\theta_{j}=f_{j}(\pa(j))$ and $K(\theta_{j})$ is the partition function. Then Theorem~\ref{thm:gen:ident} implies that if $\mathbb{E}[K''(f_{j}(\pa(j)))]\equiv\sigma^{2}$ does not depend on $j$, then $\mathsf{G}$ is identifiable.
\end{ex}
\begin{ex}[Additive noise models, \citep{peters2014}]
Finally, we observe that Theorem~\ref{thm:gen:ident} generalizes existing results for ANMs:
In an ANM, we have $X_{j}=f_{j}(\pa(j))+z_{j}$ with $z_{j}\independent\pa(j)$. If $\var(z_{j})=\sigma^{2}$, then an argument similar to \eqref{eq:var:decomp:lin} shows that ANMs with equal variances are identifiable.
Theorem~\ref{thm:gen:ident} applies to more general additive noise models $X_{j}=f_{j}(\pa(j))+g_{j}(\pa(j))^{1/2}z_{j}$ with heteroskedastic, uncorrelated (i.e. not necessarily independent) noise.
\end{ex}
\paragraph{Unequal variances}
Early work on this problem focused on the case of equal variances \citep{ghoshal2017ident,chen2018causal}, as we have done here. This assumption illustrates the main technical difficulties in proving identifiability, and it is well-known by now that equality of variances is not necessary, and a weaker assumption that allows for heterogeneous residual variances suffices in special cases \citep{ghoshal2017sem,park2020identifiability}. Similarly, the extension of Theorem~\ref{thm:gen:ident} to such heterogeneous models is straightforward, and omitted for brevity; see Appendix~\ref{app:unequal} in the supplement for additional discussion and simulations. In the sequel, we focus on the case of equality for simplicity and ease of interpretation.
\subsection{A polynomial-time algorithm}
\label{sec:ident:alg}
The basic idea behind the top-down algorithm proposed in \citep{chen2018causal} can easily be extended to the setting of Theorem~\ref{thm:gen:ident}, and is outlined in Algorithm~\ref{alg:eqvaranm:pop}. The only modification is to replace the error variances $\var(z_{j})=\sigma^{2}$ from the linear model \eqref{eq:eqvar:lin} with the corresponding residual variances (i.e. $\mathbb{E}\var(X_{\ell}\given S_{j})$), which are well-defined for any $\mathbb{P}(X)$ with finite second moments.
A natural idea to translate Algorithm~\ref{alg:eqvaranm:pop} into an empirical algorithm is to replace the residual variances with an estimate based on the data.
One might then hope to use similar arguments as in the linear setting to establish consistency and bound the sample complexity. Perhaps surprisingly, this does not work unless the topological sort of $\mathsf{G}$ is unique. When there is more than one topological sort, it becomes necessary to uniformly bound the errors of all possible residual variances---and in the worst case there are exponentially many ($d2^{d-1}$ to be precise) possible residual variances.
The key issue is that the sets $S_{j}$ in Algorithm~\ref{alg:eqvaranm:pop} are \emph{random} (i.e. data-dependent), and hence unknown in advance.
This highlights a key difference between our algorithm and existing work for linear models such as \citep{ghoshal2017ident,ghoshal2017sem,chen2018causal,park2020condvar}: In our setting, the residual variances cannot be written as simple functions of the covariance matrix $\Sigma:=\mathbb{E} X\!X^{T}$, which simplifies the analysis for linear models considerably.
Indeed, although the same exponential blowup arises for linear models, in that case consistent estimation of the covariance matrix $\Sigma:=\mathbb{E} X\!X^{T}$ provides \emph{uniform} control over all possible residual variances (e.g., see Lemma~6 in \citep{chen2018causal}). In the nonparametric setting, this reduction no longer applies.
To get around this technical issue, we modify Algorithm~\ref{alg:eqvaranm:pop} to learn $\mathsf{G}$ one layer $L_{j}$ at a time, as outlined in Algorithm~\ref{alg:eqvaranm:emp} (see Section~\ref{sec:background} for details on $L_{j}$). As a result, we need only estimate $\sigma_{\ell j}^{2}:=\mathbb{E}\var(X_{\ell}\given A_{j})$, which involves regression problems with at most $|A_{j}|$ nodes.
We use the plug-in estimator \eqref{eq:plugin} for this, although more sophisticated estimators are available \citep{doksum1995nonparametric,robins2008higher}. This also requires the sample splitting in Step 3(a) of Algorithm~\ref{alg:eqvaranm:emp}, which is needed for the theoretical arguments but not in practice.
The overall computational complexity of Algorithm~\ref{alg:eqvaranm:emp}, which we call \text{NPVAR}{}, is $O(ndrT)$, where $T$ is the complexity of computing each nonparametric regression function $\widehat{f}_{\ell j}$. For example, if a kernel smoother is used, $T=O(d^{3})$ and thus the overall complexity is $O(nrd^{4})$. For comparison, an oracle algorithm that knows the true topological order of $\mathsf{G}$ in advance would still need to compute $d$ regression functions, and hence would have complexity $O(dT)$. Thus, the extra complexity of learning the topological order is only $O(nr)=O(nd)$, which is linear in the dimension and the number of samples. Furthermore, under additional assumptions on the sparsity and/or structure of the DAG, the time complexity can be reduced further; however, our analysis makes no such assumptions.
\begin{algorithm}[t]
\caption{Population algorithm for learning nonparametric DAGs}
\label{alg:eqvaranm:pop}
\begin{enumerate}
\item Set $S_{0}=\emptyset$ and for $j=0,1,2,\ldots$, let
\begin{align*}
k_{j}
&=\argmin_{\ell\notin S_{j}}\mathbb{E}\var(X_{\ell}\given S_{j}), \qquad
S_{j+1}
= S_{j} \cup \{k_{j}\}.
\end{align*}
\item Return the DAG $\mathsf{G}$ that corresponds to the topological sort $(k_{1},\ldots,k_{d})$.
\end{enumerate}
\end{algorithm}
\begin{algorithm}[t]
\caption{\text{NPVAR}{} algorithm}
\label{alg:eqvaranm:emp}
\textbf{Input:} $X^{(1)},\ldots,X^{(n)}$, $\eta>0$.
\begin{enumerate}
\item Set $\widehat{L}_{0}=\emptyset$, $\widehat{\sigma}_{\ell 0}^{2}=\widehat{\var}(X_{\ell})$, $k_{0}=\argmin_{\ell}\widehat{\sigma}_{\ell 0}^{2}$, $\widehat{\sigma}_{0}^{2}=\widehat{\sigma}_{k_{0}0}^{2}$.
\item Set $\widehat{L}_{1}:=\{\ell : |\widehat{\sigma}_{\ell 0}^{2} - \widehat{\sigma}_{0}^{2}| < \eta\}$.
\item For $j=2,3,\ldots$:
\begin{enumerate}
\item Randomly split the $n$ samples in half and let $\widehat{A}_{j}:=\cup_{m=1}^{j}\widehat{L}_{m}$.
\item For each $\ell\notin\widehat{A}_{j}$, use the first half of the sample to estimate $f_{\ell j}(X_{\widehat{A}_{j}})=\mathbb{E}[X_{\ell}\given \widehat{A}_{j}]$ via a nonparametric estimator $\widehat{f}_{\ell j}$.
\item For each $\ell\notin\widehat{A}_{j}$, use the second half of the sample to estimate the residual variances via the plug-in estimator
\begin{align}
\label{eq:plugin}
\widehat{\sigma}_{\ell j}^{2}
&= \frac1{n/2}\sum_{i=1}^{n/2}(X_{\ell}^{(i)})^{2} - \frac1{n/2}\sum_{i=1}^{n/2}\widehat{f}_{\ell j}(X_{\widehat{A}_{j}}^{(i)})^{2}.
\end{align}
\item Set $k_{j} =\argmin_{\ell\notin\widehat{A}_{j}}\widehat{\sigma}_{\ell j}^{2}$ and $\widehat{L}_{j+1} = \{\ell : |\widehat{\sigma}_{\ell j}^{2} - \widehat{\sigma}_{k_{j}j}^{2}| < \eta, \,\ell\notin\widehat{A}_{j}\}$.
\end{enumerate}
\item Return $\widehat{L}=(\widehat{L}_{1},\ldots,\widehat{L}_{\widehat{r}})$.
\end{enumerate}
\end{algorithm}
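For intuition, the main loop of Algorithm~\ref{alg:eqvaranm:emp} can be sketched as follows. This simplified version omits the sample splitting of Step~3(a), uses the residual mean square in place of the plug-in estimator \eqref{eq:plugin}, and plugs in a random forest as the nonparametric regressor; any estimator satisfying Condition~\ref{cond:est} could be substituted:
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def npvar_layers(X, eta):
    # X: (n, d) data matrix; eta: layer threshold.
    n, d = X.shape
    found = np.zeros(d, dtype=bool)
    marg = X.var(axis=0)
    first = np.where(marg <= marg.min() + eta)[0]
    layers = [list(first)]
    found[first] = True
    while not found.all():
        A = np.where(found)[0]          # estimated ancestral set
        rest = np.where(~found)[0]
        rv = np.full(d, np.inf)
        for l in rest:                  # residual variance per node
            f = RandomForestRegressor(n_estimators=100)
            f.fit(X[:, A], X[:, l])
            rv[l] = np.mean((X[:, l] - f.predict(X[:, A])) ** 2)
        new = rest[rv[rest] <= rv[rest].min() + eta]
        layers.append(list(new))
        found[new] = True
    return layers
\end{verbatim}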
\subsection{Comparison to existing algorithms}
\label{sec:ident:comparison}
Compared to existing algorithms based on order search and equal variances, \text{NPVAR}{} applies to more general models without parametric assumptions, independent noise, or additivity.
It is also instructive to make comparisons with greedy score-based algorithms such as causal additive models (CAM, \citep{buhlmann2014}) and greedy DAG search (GDS, \citep{peters2013}). We focus here on CAM since it is more recent and applies in nonparametric settings, however, similar claims apply to GDS as well.
CAM is based around greedily minimizing the log-likelihood score for additive models with Gaussian noise. In particular, it is not guaranteed to find a global minimizer, which is as expected since it is based on a nonconvex program. This is despite the global minimizer---if it can be found---having good statistical properties.
The next example shows that, in fact, there are identifiable models for which CAM will find the wrong graph with high probability.
\begin{ex}%
\label{ex:cam:fail}
Consider the following three-node additive noise model with $z_{j}\sim\mathcal{N}(0,1)$:
\begin{align}
\label{eq:cam:fail}
\begin{aligned}
X_{1}
&= z_{1}, \\
X_{2}
&= g(X_{1}) + z_{2}, \\
X_{3}
&= g(X_{1}) + g(X_{2}) + z_{3}.
\end{aligned}
\end{align}
In the supplement (Appendix~\ref{app:cam}), we show the following: \emph{There exist infinitely many nonlinear functions $g$ for which the CAM algorithm returns an incorrect order under the model \eqref{eq:cam:fail}.}
This is illustrated empirically in Figure~\ref{fig:intro:camfail} for the nonlinearities $g(u)=\text{sgn}(u)|u|^{1.4}$ and $g(u)=\sin u$. In each of these examples, the model satisfies the identifiability conditions for CAM as well as the conditions required in our work.
\end{ex}
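The counterexample is easy to reproduce; the following sketch (our code) draws samples from the model \eqref{eq:cam:fail} for the nonlinearity $g(u)=\sin u$, on which CAM can then be run:
\begin{verbatim}
import numpy as np

def sample_counterexample(n, g=np.sin, seed=0):
    # Draw n samples from the three-node ANM above with g(u) = sin(u).
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, 3))
    x1 = z[:, 0]
    x2 = g(x1) + z[:, 1]
    x3 = g(x1) + g(x2) + z[:, 2]
    return np.column_stack([x1, x2, x3])
\end{verbatim}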
We stress that this example does not contradict the statistical results in \citet{buhlmann2014}: It only shows that the \emph{algorithm} may not find a global minimizer and as a result, returns an incorrect variable ordering. Correcting this discrepancy between the algorithmic and statistical results is a key motivation behind our work. In the next section, we show that \text{NPVAR}{} provably learns the true ordering---and hence the true DAG---with high probability.
\section{Sample complexity}
\label{sec:sample}
Our main result analyzes the sample complexity of \text{NPVAR}{} (Algorithm~\ref{alg:eqvaranm:emp}). Recall the layer decomposition $L(\mathsf{G})$ from Section~\ref{sec:background} and define $d_{j}:=|A_{j}|$. Let $f_{\ell j}(X_{A_{j}})=\mathbb{E}[X_{\ell}\given A_{j}]$.
\begin{cond}[Regularity]
\label{cond:reg}
For all $j$ and all $\ell\notin A_{j}$,
(a)~$X_{j}\in[0,1]$,
(b)~$f_{\ell j}:[0,1]^{d_{j}}\to[0,1]$,
(c)~$f_{\ell j}\in L^{\infty}([0,1]^{d_{j}})$, and
(d)~$\var(X_{\ell}\given A_{j}) \le \zeta_{0}<\infty$.
\end{cond}
These are the standard regularity conditions from the literature on nonparametric statistics~\citep{gyorfi2006distribution,tsybakov2009introduction}, and can be weakened (e.g. if the $X_{j}$ and $f_{\ell j}$ are unbounded, see \citep{kohler2009optimal}). We impose these stronger assumptions in order to simplify the statements and focus on technical details pertinent to graphical modeling and structure learning. The next assumption is justified by Theorem~\ref{thm:gen:ident}, and as we have noted, can also be weakened.
\begin{cond}[Identifiability]
\label{cond:ident}
$\mathbb{E}\var(X_{j}\given \pa(j))\equiv\sigma^{2}$ does not depend on $j$.
\end{cond}
Our final condition imposes some basic finiteness and consistency requirements on the chosen nonparametric estimator $\widehat{f}$, which we view as a function for estimating $\mathbb{E}[Y\given Z]$ from an arbitrary distribution over the pair $(Y,Z)$.
\begin{cond}[Estimator]
\label{cond:est}
The nonparametric estimator $\widehat{f}$ satisfies (a) $\mathbb{E}[Y\given Z]\in L^{\infty}\implies \widehat{f}\in L^{\infty}$ and (b) $\mathbb{E}_{\widehat{f}}\norm{\widehat{f}(Z) - \mathbb{E}[Y\given Z]}_{2}^{2}\to 0$.
\end{cond}
This is a mild condition that is satisfied by most popular estimators including kernel smoothers, nearest neighbours, and splines, and in particular, Condition~\ref{cond:est}(a) is only used to simplify the theorem statement and can easily be relaxed.
\begin{thm}
\label{thm:main:sample}
Assume Conditions~\ref{cond:reg}-\ref{cond:est}.
Let $\Delta_{j}>0$ be such that $\mathbb{E}\var(X_{\ell}\given A_{j})>\sigma^{2}+\Delta_{j}$ for all $\ell\notin A_{j}$ and define $\Delta:=\inf_{j}\Delta_{j}$.
Let $\delta^{2}:=\sup_{\ell,j}\mathbb{E}_{\widehat{f}_{\ell j}}\norm{f_{\ell j}(X_{A_{j}})-\widehat{f}_{\ell j}(X_{A_{j}})}_{2}^{2}$.
Then for any $\delta\sqrt{d}<\eta<\Delta/2$,
\begin{align}
\label{eq:thm:main:sample}
\mathbb{P}(\widehat{L} = L(\mathsf{G}))
&\gtrsim 1 - \frac{\delta^{2}}{\eta^{2}}rd%
\end{align}
\end{thm}
Once the layer decomposition $L(\mathsf{G})$ is known, the graph $\mathsf{G}$ can be learned via standard nonlinear variable selection methods (see Appendix~\ref{app:order} in the supplement).
A feature of this result is that it is agnostic to the choice of estimator $\widehat{f}$, as long as it satisfies Condition~\ref{cond:est}. The dependence on $\widehat{f}$ is quantified through $\delta^{2}$, which depends on the sample size $n$ and represents the rate of convergence of the chosen nonparametric estimator. Instead of choosing a specific estimator, Theorem~\ref{thm:main:sample} is stated so that it can be applied to general estimators. As an example, suppose each $f_{\ell j}$ is Lipschitz continuous and $\widehat{f}$ is a standard kernel smoother. Then
\begin{align*}
\mathbb{E}_{\widehat{f}_{\ell j}}\norm{f_{\ell j}(X_{A_{j}})-\widehat{f}_{\ell j}(X_{A_{j}})}_{2}^{2}
\le \delta^{2}
\lesssim n^{-\tfrac{2}{2+d}}.
\end{align*}
Thus we have the following special case:
\begin{cor}
\label{cor:main:sample}
Assume each $f_{\ell j}$ is Lipschitz continuous. Then $\widehat{L}$ can be computed in $O(nd^{5})$ time and $\mathbb{P}(\widehat{L} = L(\mathsf{G}))\ge1-\varepsilon$ as long as $n=\Omega((rd/(\eta^{2}\varepsilon))^{1+d/2})$.
\end{cor}
This is the best possible rate attainable by any algorithm without imposing stronger regularity conditions (see e.g. \S5 in \citep{gyorfi2006distribution}). Furthermore, $\delta^{2}$ can be replaced with the error of an arbitrary estimator of the residual variance itself (i.e. something besides the plug-in estimator \eqref{eq:plugin}); see Proposition~\ref{prop:sample:bound} in Appendix~\ref{app:proof:main} for details.
To illustrate these results, consider the problem of finding the direction of a Markov chain $X_{1}\to X_{2}\to\cdots\to X_{d}$ whose transition functions $\mathbb{E}[X_{j}\given X_{j-1}]$ are each Lipschitz continuous. Then $r=d$, so Corollary~\ref{cor:main:sample} implies that $n=\Omega((d^{2}/(\eta\sqrt{\varepsilon}))^{1+d/2})$ samples are sufficient to learn the order---and hence the graph as well as each transition function---with high probability. Since $r=d$ for any Markov chain, this particular example maximizes the dependence on $d$; at the opposite extreme a bipartite graph with $r=2$ would require only $n=\Omega((\sqrt{d}/(\eta\sqrt{\varepsilon}))^{1+d/2})$. In these lower bounds, it is not necessary to know the type of graph (e.g. Markov chain, bipartite) or the depth $r$.
\paragraph{Choice of $\eta$}
The lower bound $\eta>\delta\sqrt{d}$ is not strictly necessary, and is only used to simplify the lower bound in \eqref{eq:thm:main:sample}. In general, taking $\eta$ sufficiently small works well in practice. The main tradeoff in choosing $\eta>0$ is computational: A smaller $\eta$ may lead to ``splitting'' one of the layers $L_{j}$. In this case, \text{NPVAR}{} still recovers the structure correctly, but the splitting results in redundant estimation steps in Step 3 (i.e. instead of estimating $L_{j}$ in one iteration, it takes multiple iterations to estimate correctly). The upper bound, however, is important: If $\eta$ is too large, then we may include spurious nodes in the layer $L_{j}$, which would cause problems in subsequent iterations.
\paragraph{Nonparametric rates}
Theorem~\ref{thm:main:sample} and Corollary~\ref{cor:main:sample} make no assumptions on the sparsity of $\mathsf{G}$ or smoothness of the mean functions $\mathbb{E}[X_{\ell}\given A_{j}]$. For this reason, the best possible rate for a na\"ive plug-in estimator of $\mathbb{E}\var(X_{\ell}\given A_{j})$ is bounded by the minimax rate for estimating $\mathbb{E}[X_{\ell}\given A_{j}]$. For practical reasons, we have chosen to focus on an agnostic analysis that does not rely on any particular estimator. Under additional sparsity and smoothness assumptions, these rates can be improved, which we briefly discuss here.
For example, by using adaptive estimators such as RODEO \citep{lafferty2008rodeo} or GRID \citep{giordano2020grid}, the sample complexity will depend only on the sparsity of $f_{\ell j}(X_{A_j})$, i.e. $d^{*}=\max_j\max_{\ell\notin A_j}|\{k \in A_j : \partial_k f_{\ell j} \ne 0 \}|$, where $\partial_k$ is the $k$th partial derivative.
Another approach that does not require adaptive estimation is to assume $|L_{j}|\le w$ and define $r^{*} := \sup\{ |i-j| : e=(e_{1},e_{2})\in E, e_{1}\in L_{i}, e_{2}\in L_{j}\}$.
Then $\delta^{2}\asymp n^{-2/(2+wr^{*})}$, and the resulting sample complexity depends on $wr^{*}$ instead of $d$.
For a Markov chain with $w=r^{*}=1$ this leads to a substantial improvement.
Instead of sparsity, we could impose stronger smoothness assumptions: Let $\beta_{*}$ denote the smallest H\"older exponent of any $f_{\ell j}$. If $\beta_{*}\ge d/2$, then one can use a one-step correction to the plug-in estimator \eqref{eq:plugin} to obtain a root-$n$ consistent estimator of $\mathbb{E}\var(X_{\ell}\given A_{j})$ \citep{robins2008higher,kandasamy2015nonparametric}. Another approach is to use undersmoothing \citep{doksum1995nonparametric}. In this case, the exponential sample complexity improves to polynomial sample complexity. For example, in Corollary~\ref{cor:main:sample}, if we replace Lipschitz continuity with the stronger condition that $\beta_{*}\ge d/2$, then the sample complexity improves to $n=\Omega(rd/(\eta^{2}\varepsilon))$.
\section{Experiments}
\label{sec:exp}
Finally, we perform a simulation study to compare the performance of \text{NPVAR}{} against state-of-the-art methods for learning nonparametric DAGs. The algorithms are:
\text{RESIT}~\citep{peters2014},
\text{CAM}~\citep{buhlmann2014},
\text{EqVar}~\citep{chen2018causal},
\text{NOTEARS}~\citep{zheng2019learning},
\text{GSGES}~\citep{huang2018generalized},
\text{PC}~\citep{spirtes1991}, and
\text{GES}~\citep{chickering2003}.
In our implementation of \text{NPVAR}{}, we use generalized additive models (GAMs) for both estimating $\widehat{f}_{\ell j}$ and variable selection.
One notable detail is our implementation of \text{EqVar}{}, which we adapted to the nonlinear setting by using GAMs instead of subset selection for variable selection (the order estimation step remains the same).
Full details of the implementations used as well as additional experiments can be found in the supplement.
Code implementing the \text{NPVAR}{} algorithm is publicly available at \url{https://github.com/MingGao97/NPVAR}.
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.9\textwidth}
\includegraphics[width=1.\textwidth]{figures/mainFigure_shd_vs_n.pdf}
\caption{SHD vs. $n$ ($d=20$).}
\label{fig:exp:shd_vs_n}
\end{subfigure}
\\
\begin{subfigure}[t]{0.9\textwidth}
\includegraphics[width=1.\textwidth]{figures/mainFigure_shd_vs_d.pdf}
\caption{SHD vs. $d$ ($n=1000$).}
\label{fig:exp:shd_vs_d}
\end{subfigure}
\caption{Structural Hamming distance (SHD) as a function of sample size ($n$) and number of nodes ($d$). Error bars denote $\pm$1 standard error. Some algorithms were only run for sufficiently small graphs due to high computational cost.}
\label{fig:exp:shd}
\end{figure}
We conducted a series of simulations on different graphs and models, comparing performance in both order recovery and structure learning. Due to space limitations, only the results for structure learning in the three most difficult settings are highlighted in Figure~\ref{fig:exp:shd}. These experiments correspond to non-sparse graphs with non-additive dependence given by either a Gaussian process (GP) or a generalized linear model (GLM):
\begin{itemize}
\item \emph{Graph types.} We sampled three families of DAGs: Markov chains (MC), Erd\"os-R\'enyi graphs (ER), and scale-free graphs (SF). For MC graphs, there are exactly $d$ edges, whereas for ER and SF graphs, we sample graphs with $kd$ edges on average. This is denoted by ER4/SF4 for $k=4$ in Figure~\ref{fig:exp:shd}. Experiments on sparser DAGs can be found in the supplement.
\item \emph{Probability models.} For the Markov chain models, we used two types of transition functions: An additive sine model with $\mathbb{P}(X_{j}\given X_{j-1})=\mathcal{N}(\sin(X_{j-1}),\sigma^{2})$ and a discrete model (GLM) with $X_{j}\in\{0,1\}$ and $\mathbb{P}(X_{j}\given X_{j-1})\in\{p,1-p\}$. For the ER and SF graphs, we sampled $\mathbb{E}[X_{j}\given\pa(j)]$ from both additive GPs (AGP) and non-additive GPs (NGP).
\end{itemize}
Full details as well as additional experiments on order recovery, additive models, sparse graphs, and misspecified models can be found in the supplement (Appendix~\ref{app:exp}).
\paragraph{Structure learning}
To evaluate overall performance, we computed the structural Hamming distance (SHD) between the learned DAG and the true DAG. SHD is a standard metric used for comparison of graphical models.
According to this metric, the clear leaders are \text{NPVAR}{}, \text{EqVar}{}, and \text{CAM}{}. Consistent with previous findings, existing methods tend to suffer as the edge density and dimension of the graphs increase; \text{NPVAR}{}, however, is more robust in these settings. Surprisingly, the \text{CAM}{} algorithm remains quite competitive for non-additive models, although both \text{EqVar}{} and \text{NPVAR}{} clearly outperform it. On the GLM model, which illustrates a non-additive model with non-additive noise, \text{EqVar}{} and \text{NPVAR}{} performed best, although \text{PC}{} showed good performance with $n=1000$ samples. Both \text{CAM}{} and \text{RESIT}{} terminated with numerical issues on the GLM model.
These experiments serve to corroborate our theoretical results and highlight the effectiveness of the \text{NPVAR}{} algorithm, but of course there are tradeoffs. For example, algorithms such as CAM which exploit sparse and additive structure perform very well in settings where sparsity and additivity can be exploited, and indeed outperform \text{NPVAR}{} in some cases. Hopefully, these experiments can help to shed some light on when various algorithms are more or less effective.
\paragraph{Misspecification and sensitivity analysis}
We also considered two cases of misspecification: In Appendix~\ref{app:unequal}, we consider an example where Condition~\ref{cond:ident} fails, but \text{NPVAR}{} still successfully recovers the true ordering. This experiment corroborates our claims that this condition can be relaxed to handle unequal residual variances. We also evaluated the performance of \text{NPVAR}{} on linear models as in \eqref{eq:eqvar:lin}, and in all cases it was able to recover the correct ordering.
\section{Discussion}
\label{sec:disc}
In this paper, we analyzed the sample complexity of a polynomial-time algorithm for estimating nonparametric causal models represented by a DAG. Notably, our analysis avoids many of the common assumptions made in the literature. Instead, we assume that the residual variances are equal, similar to assuming homoskedastic noise in a standard nonparametric regression model. Our experiments confirm that the algorithm, called \text{NPVAR}{}, is effective at learning identifiable causal models and outperforms many existing methods, including several recent state-of-the-art methods. Nonetheless, existing algorithms such as CAM are quite competitive and apply in settings where NPVAR does not.
We conclude by discussing some limitations and directions for future work.
Although we have relaxed many of the common assumptions made in the literature, these assumptions have been replaced by an assumption on the residual variances that may not hold in practice. An interesting question is whether or not there exist provably polynomial-time algorithms for nonparametric models under less restrictive assumptions.
Furthermore, although the proposed algorithm is polynomial-time, the worst-case $O(d^{5})$ dependence on the dimension is of course limiting. This can likely be reduced by developing more efficient estimators of the residual variance that do not first estimate the mean function. This idea is common in the statistics literature, however, we are not aware of such estimators specifically for the residual variance (or other nonlinear functionals of $\mathbb{P}(X)$).
Finally, our general approach can be fruitfully applied to study various parametric models that go beyond linear models, for which both computation and sample efficiency would be expected to improve. These are interesting directions for future work.
\paragraph{Acknowledgements}
We thank the anonymous reviewers for valuable feedback, as well as Y. Samuel Wang and Edward H. Kennedy for helpful discussions. B.A. acknowledges the support of the NSF via IIS-1956330 and the Robert H. Topel Faculty Research Fund. Y.D.'s work has been partially supported by the NSF (CCF-1439156, CCF-1823032, CNS-1764039).
\section{Introduction}
Mechanical ventilation is a widely used treatment with applications spanning anaesthesia \citep{coppola2014protective}, neonatal intensive care \citep{van2019modes}, and life support during the current COVID-19 pandemic \citep{meng2020intubation, wunsch2020mechanical, mohlenkamp2020ventilation}. This life-sustaining treatment has two common modes: invasive ventilation, where the patient is fully sedated, and assist-control ventilation, where the patient can initiate breaths \citep{patientventasynch}.
Even though mechanical ventilation has been deployed in ICUs for decades, several challenges remain that can lead to ventilator-induced lung injury (VILI) for patients \citep{vili}. In pressure-support ventilation, a form of assist-control ventilation, evidence suggests that a combination of high peak pressure and high tidal volume can lead to tissue injury in the lung \citep{vili_1}. Pressure-support ventilation also suffers from patient-ventilator asynchrony, where the patient's breathing pattern does not match the ventilator's, and can result in hypoxemia (low level of blood oxygen), cardiovascular compromise, and patient discomfort \citep{asynchrony}.
However, the risk of developing VILI depends not only on factors related to the ventilator, but also on intrinsic characteristics of the patient's lung \citep{vili}. These characteristics usually cannot be directly observed, so trained clinicians must continuously monitor the patient. Given the highly manual process of mechanical ventilation, it is desirable to have control methods that can better track prescribed pressure targets and are robust to variations of the patient's lung.
Motivated by this potential to improve patient health, we focus on pressure-controlled invasive ventilation (PCV) \citep{rittayamai2015pressure} as a starting point. In this setting, an algorithm controls two valves that let air in and out of a patient's lung according to a target waveform of lung pressure (see Figure \ref{fig:tracking}). We consider the control task only on ISO-standard \citep{ISO68844} artificial lungs.
\paragraph{State of the art.}
Despite its importance, ventilator control has remained largely unchanged for years, relying on PID \citep{pid2} controllers and similar variants to track patient state according to a prescribed target waveform.
However, this approach is not optimal in terms of tracking---PID can overshoot, undershoot, and exhibit ringing behavior for certain lungs. It is also not sufficiently robust---ventilators are carefully tuned during design, manufacture, and maintenance \citep{ziegler1942optimum,chen2012control} and any changes in ventilator dynamics (e.g., tubing, response delay), environment (e.g., atmospheric pressure), or patient must be accounted for and continuously monitored by trained clinicians via various physical controls on the ventilator \citep{rees2006using}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{figures/circuit.pdf}
\caption{A simplified respiratory circuit showing the airflow through the inspiratory pathway, into and out of the lung, and out the expiratory pathway. We shade the components that our algorithms can control in green.}
\label{fig:circuit}
\end{figure}%
\begin{figure}[!h]
\centering
\includegraphics[width=0.45\textwidth]{figures/tracking.pdf}
\caption{An example run of three breaths where PID (dark gray line) controls lung pressure (blue line) according to a prescribed target waveform (orange line).}
\label{fig:tracking}
\end{figure}
\paragraph{Challenges of ventilator control.}
A ventilator controller must adapt quickly and reliably across the spectrum of clinical conditions, which are only indirectly observable given a single measurement of pressure.
A model that is highly expressive may learn the dynamics of the underlying systems more precisely and thus adapt faster to the patient's condition. However, such models usually require a large amount of data to train, which can take prohibitively long to collect by purely running the ventilator. We opt instead to learn a simulator to generate artificial data, though learning such a simulator for a partially observed non-linear system is itself a difficult problem.
\subsection{Our contributions}
We present better-performing, more robust control results, and provide resources for future researchers. Specifically,
\begin{enumerate}
\item We demonstrate that learning a controller as a neural network correction to PID outperforms its uncorrected counterpart (optimality).
\item We show that a single learned controller trained on several ISO lung settings outperforms the PID controller that performs best across the same settings (robustness).
\item We provide self-contained differentiable simulators for the ventilation problem. These simulators reduce the entrance cost for future researchers to contribute to invasive mechanical ventilation.
\item We conduct a methodological study of reinforcement learning techniques, both model-based and model-free, including policy gradient, Q-learning, and other variants. We conclude that model-based approaches are more sample- and computation-efficient.
\end{enumerate}
Of note, we limit our investigation to open-source ventilators. Control methods used by proprietary ventilators cannot be modified or assessed independently from their hardware and such equipment are cost-prohibitive for academic research.
We see this study as a preliminary investigation of machine learning for ventilator control. In future work, we hope to extend this methodology to non-invasive ventilation, pressure-support ventilation, and conduct clinical trials.
\subsection{Related work}\label{sec:related}
The modern positive-pressure ICU mechanical ventilator dates back to the 1940s \citep{kacmarek11} with many open-source ventilator designs \citep{ventlist} published during the COVID-19 pandemic.
Yet at their core, ventilators all rely on controlling air in and out of an elastic lung via a respiratory circuit, as described in many physics-based models \citep{PMID:8420408}. Such simple operation masks the complexity of treatment \citep{chatburn2011closed} and recent work on augmenting PID controllers with adaptive methods \citep{9122946,PPR:PPR169448} have sought to address more advanced clinical needs. To the best of our knowledge, our data-driven approach of learning both simulator and controller is novel in this field.
\paragraph{Control and RL in virtual and physical systems.} Much progress has been made on learning dynamics when the dynamics themselves exist \emph{in silico}: MuJoCo physics \citep{hafner2019learning}, Atari games \citep{kaiser2019model}, and board games \citep{schrittwieser2020mastering}. Combining such data-driven models with either pixel-space or latent-space planning has been shown to be effective for policy learning; \citet{ha2018recurrent} is an example of this research program in the deep learning era. Progress on deploying end-to-end learned agents (i.e. controllers) in the physical world is more limited in comparison, due to difficulties in scaling parallel data collection and higher variability in real-world data. \citet{bellemare2020autonomous} present a case study on autonomous balloon navigation using a Q-learning approach, rather than a model-based one like ours. \citet{akkaya2019solving} use domain randomization with non-differentiable simulators for a difficult dexterous manipulation task.
\paragraph{System identification and residual policy learning. } System identification has been studied for decades in control and reinforcement learning,
see e.g. \citep{schoukens2019nonlinear,billings1980identification} for nonlinear system identification. Deep neural networks have been used to represent nonlinear dynamics, see e.g. \citet{helicopter}. Residual policy learning \citep{rpl} is a model-free analogue of our controller design: it learns a correction term on an initial, imperfect policy, and is shown to be more data-efficient than learning from scratch, especially for complex robotic tasks. More recently, concurrent work by \citet{pidcar} uses residual policy learning to improve PID for the car suspension control problem.
\paragraph{Multi-task reinforcement learning.} Part of our methodology has close parallels in multi-task reinforcement learning \citep{taylor2009transfer}, where the objective is to learn a policy that performs well across diverse environments. To make our controllers more robust, we optimize our policy simultaneously on an ensemble of learned models corresponding to different physical settings, similar to the work of \citet{rajeswaran2016epopt} and \citet{chebotar2019closing} on robotic manipulation.
\paragraph{Machine learning for health applications.} Healthcare offers a multitude of opportunities and challenges for machine learning; for a survey, see \citep{ghassemi2020review}. Specifically, reinforcement learning and control have found numerous applications \citep{yu2020reinforcement}, and recently for weaning patients off mechanical ventilators \citep{prasad2017reinforcement,yu2019inverse,yu2020supervised}. As far as we know, there is no prior work on improving the control of ventilators using machine learning.
\section{Scientific background}
\label{sec:background}
\subsection{Control of dynamical systems}
We begin with some formalisms of the control problem. A partially-observable discrete-time dynamical system is given by the following equation:
$$ x_{t+1} = f(x_t, u_t), o_{t+1} = g(x_{t+1})$$
where $x_t$ is the underlying state of the dynamical system, $o_t$ is the observation of the state available to the controller, $u_t$ is the control input, and $f,g$ are the transition and observation functions, respectively. Given a dynamical system, the control problem is to minimize the sum of cost functions over a long-term horizon:
\begin{align*}
& \min_{u_{1:T} } \sum_{t=1}^T c_t(x_t, u_t) \quad \text{s.t.}\;\; x_{t+1} = f_t(x_t, u_t).
\end{align*}
This problem is in general computationally intractable, and theoretical guarantees are available for special cases of dynamics (notably linear dynamics) and perturbations. For an in-depth exposition on the subject, see the textbooks by \citet{Bertsekas17,kemin,tedrake}.
\paragraph{PID control.} A ubiquitous technique for the control of dynamical systems is the use of linear error-feedback controllers, i.e. policies that choose a control based on a linear function of the current and past errors vs. a target state. That is,
$$ u_{t+1} = \sum_{i=0}^k \alpha_i \epsilon_{t-i} , $$
where $\epsilon_t = x_t - x^{\star}_t$ (or $\epsilon_t = o_t - o^{\star}_t$ if the system is partially-observable) is the deviation from the target state at time $t$, and $k$ represents the history length of the controller. PID applies a linear control with \emph{proportional}, \emph{integral}, and \emph{differential} coefficients,
$$ u_t = \alpha \epsilon_{t} + \beta \sum_{i=0}^k \epsilon_{t-i} + \gamma (\epsilon_{t} - \epsilon_{t-1}) . $$
This special class of linear error-feedback controllers, motivated by physical laws, is a simple, efficient and widely used technique \citep{PID1}. It is currently the industry standard for (open-source) ventilator control.
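As a concrete reference point, the following is a minimal Python sketch of such a discrete PID step. It is illustrative only; the coefficient names follow the $\alpha, \beta, \gamma$ notation above, and sign conventions vary: with $\epsilon_t$ defined as measured minus target, negative coefficients implement negative feedback.
\begin{verbatim}
class PID:
    """Discrete PID step tracking a target pressure (sketch only)."""
    def __init__(self, alpha, beta, gamma, k=100):
        self.alpha, self.beta, self.gamma, self.k = alpha, beta, gamma, k
        self.errors = []  # error history, most recent last

    def step(self, measured, target):
        eps = measured - target
        self.errors.append(eps)
        window = self.errors[-self.k:]          # last k errors
        prev = self.errors[-2] if len(self.errors) > 1 else eps
        return (self.alpha * eps                # proportional
                + self.beta * sum(window)       # integral
                + self.gamma * (eps - prev))    # differential
\end{verbatim}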
\subsection{The physics of ventilation}
\label{subsec:physics}
In invasive ventilation, the ventilator is connected to a patient's main airway, and applies pressure in a cyclic manner to simulate healthy breathing. During the inspiratory phase, the target applied pressure increases to the peak inspiratory pressure (PIP). During the expiratory phase, the target decreases to the positive-end expiratory pressure (PEEP), maintained in order to prevent the lungs from collapsing. The PIP and PEEP values, along with the durations of these phases, define the time-varying target \emph{waveform}, specified by the clinician.
The goal of ventilator control is to regulate the pressure sensor measurements to follow the target waveform $p_t^{\star}$ by controlling the air-flow into the system, which forms the control input $u_t$. As a dynamical system, we can denote the underlying state of the ventilator-patient system as $x_t$, evolving as $x_{t+1} = f(x_t, u_t)$ for an unknown $f$; the pressure sensor measurement $p_t$ is the observation available to us. The cost function can be defined as a measure of the deviation from the target, e.g. the absolute deviation $c_t(p_t, u_t) = |p_t - p_t^{\star}|$. The objective is to design a controller that minimizes the total cost over $T$ time steps.
A ventilator needs to take into account the structure of the lung to determine the optimal pressure to induce. Such structural factors include \textit{compliance} ($C$), or the change in lung volume per unit pressure, and \textit{resistance} ($R$), or the change in pressure per unit flow.
\paragraph{Physics-based model.} A simplistic formalization of the ventilator-lung dynamical system can be derived from the physics of a connected two-balloon system, with a \emph{latent state} $v_t$ representing the volume of air inside the lung. The dynamics equations can be written as
$$v_{t+1} = v_t + u_t \cdot \Delta_t$$ $$p_t = p_0 + \left( 1 - \left( \frac{r_t}{r_0} \right)^6 \right) \cdot \frac{1}{r_t r_0^2}, \;\;r_t = \left( \frac{3 v_t}{4 \pi} \right)^{1/3},$$ where $p_t$ is the measured pressure, $v_t$ is volume, $r_t$ is radius of the lung, $u_t$ is the input air flow rate, and $\Delta_t$ is the time increment. $u_t$ originates from a pressure difference between lung-pressure $p_t$ and supply-pressure $p_\text{supply}$, regulated by a valve: $u_t = \frac{p_\text{supply} - p_t}{R_\text{in}}$. The resistance of the valve is $R_\text{in}\propto 1/d^4$ (Poiseuille's law) where $d$, the opening of the valve, is controlled by a motor. The constants $p_0,r_0$ depend on both the lung and ventilator. In \citet{nadeem_2021}, several physics-based models are benchmarked, showing errors that are an order of magnitude larger than what can be achieved with a data-driven approach. While the interpretability of such models is appealing, their low fidelity is prohibitive for offline reinforcement learning.
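For illustration, a discretized step of this idealized model can be written as follows. This is a sketch only: the constants and the valve coefficient are hypothetical stand-ins for the latent lung/ventilator parameters above, and the valve opening is assumed positive.
\begin{verbatim}
import numpy as np

def physics_step(v_t, d_t, p_supply, p0, r0, k_valve, dt):
    """One discretized step of the two-balloon model (sketch).

    p0, r0 are the latent lung/ventilator constants; the valve
    resistance follows Poiseuille's law R_in = k_valve / d^4,
    with valve opening d_t > 0 commanded by the controller.
    """
    r_t = (3.0 * v_t / (4.0 * np.pi)) ** (1.0 / 3.0)       # lung radius
    p_t = p0 + (1.0 - (r_t / r0) ** 6) / (r_t * r0 ** 2)   # pressure
    R_in = k_valve / d_t ** 4                              # valve resistance
    u_t = (p_supply - p_t) / R_in                          # inflow rate
    return v_t + u_t * dt, p_t                # next volume, current pressure
\end{verbatim}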
\subsection{Challenges and benefits of a model-based approach}
The physics-based dynamics models described above are highly idealized, and are suitable only to provide coarse predictions for the behaviors of very simple controllers. We list some sources of error arising from using physics equations for model-based control:
\begin{itemize}
\item \emph{Idealization of physics:} oversimplifying fluid flow and turbulence via ideal incompressible gas assumptions; linearizing the dynamics of the lung and ventilator components.
\item \emph{Lagged and partial observations:} assuming instantaneous changes to volume and pressure across the system. In reality, there are non-negligible propagation times for pressure impulses, delayed pressure feedback arising from lung elasticity, and computational latency.
\item \emph{Underspecification of variability:} different patients' clinical scenarios, captured by the latent constants $p_0, r_0$, may intrinsically vary in more complex (i.e. higher-dimensional) ways.
\end{itemize}
Due to the reasons listed above, it is highly desirable to adopt a learned model-based approach in this setting because of its sample-efficiency and reusability. A reliable simulator enables much cheaper and faster data collection for training a controller, and allows us to incorporate multitask objectives and domain randomization (e.g. different waveforms, or even different patients). An additional goal is to make the simulator \emph{differentiable}, enabling direct gradient-based policy optimization through the system's dynamics (rather than stochastic estimates thereof).
We show that in this partially-observed (but single-input single-output) system, we can query a reasonable amount of training data in real time from the test lung, and use it offline to learn a differentiable simulator of its dynamics (\emph{``real2sim''}). Then, we complete the pipeline by leveraging interactive access to this simulator to train a controller (\emph{``sim2real''}). We demonstrate that this pipeline is sufficiently robust that the learned controllers can outperform PID controllers tuned directly on the test lung.
\section{Experimental Setup}
\label{sec:hardware}
To develop simulators and control algorithms, we run mechanical ventilation tasks on a physical test lung \citep{ingmar_medical_2020} using the open-source ventilator designed by Princeton University's People's Ventilator Project (PVP) \citep{pvp2020}.
\subsection{Physical test lung}
For our experiments, we use the commercially-available adult test lung, ``QuickLung'', manufactured by IngMar Medical. The lung has three lung compliance settings ($C=\{10, 20, 50\}$ mL/cmH2O) and three airway resistance settings ($R=\{5, 20, 50\}$ cmH2O/L/s) for a total of 9 settings, which are specified by the ISO standard for ventilatory support equipment \citep{ISO68844}. An operator can change the lung's compliance and resistance settings manually. We connect the test lung to the ventilator via a respiratory circuit \citep{canadian1986canadian, parmley1972disposable} as shown in Figure \ref{fig:circuit}. Figure \ref{fig:vent-farm} shows a snapshot of our hardware setup.
\begin{figure}[!h]
\centering
\includegraphics[width=7cm]{figures/vent-farm.jpg}
\caption{The ventilator cluster we constructed to run our experiments, featuring 10 ventilators, 4 air compressors, and 2 control servers. Each ventilator is re-calibrated after each experimental run for consistency across ventilators and over time.}
\label{fig:vent-farm}
\end{figure}
\subsection{Mechanical ventilator}
There are many forms of ventilator treatment. In addition to various pressure target trajectories, clinicians may want to focus on other factors, such as volume and flow \citep{chatburn2007classification}. The PVP ventilator focuses on targeting pressure for a completely sedated patient (i.e., the patient does not initiate any breaths) and comprises two pathways (see Figure \ref{fig:circuit}): (1) the inspiratory pathway that regulates airflow into the lung and (2) the expiratory pathway for airflow out of the lung. A software controller is able to adjust one valve for each pathway. The inspiratory valve is a proportional control flow valve that allows control in a continuous range from fully closed to fully open. The expiratory valve is a binary on-off valve that only permits zero or maximum airflow.
To prevent damage to the ventilator and/or injury to the operator, we implement software overrides that abort a given run: 1) if pressure or volume in the lung exceeds certain thresholds, 2) if tubing disconnects, or 3) if there is significant software delay. The PVP pneumatic design also includes a safety valve in case software overrides fail.
\subsection{Abstraction of the simulation task}
We treat the mechanical ventilation task as episodic by separating each inspiratory phase (e.g., light gray regions in Figure \ref{fig:lung-task}) from the breath timeseries and treating those as individual episodes. This approach reflects both physical and medical realities. Mechanically ventilated breaths are by their nature highly regular and feature long expiratory phases (dark gray regions in Figure \ref{fig:lung-task}) that end with the ventilator-lung system close to its initial state, justifying the episodic abstraction. Further, the inspiratory phase is the most relevant to clinical treatment and the harder regime to control, with prevalent problems of under- or over-shooting the target pressure and ringing. We therefore attempt to learn a simulator for the ventilator-lung dynamics of the inspiratory phase alone; repeated inspiratory phases then serve as simplified, faithful units of training data.
\begin{figure}
\centering
\includegraphics[width=8cm]{figures/lung-task.pdf}
\caption{PID controllers exhibit their suboptimal behavior (under- or over-shooting, ringing) primarily during the inspiratory phase. Note that we use a hard-coded controller during expiratory phases to ensure safety. This choice does not affect our results.}
\label{fig:lung-task}
\end{figure}
\section{Learning a data-driven simulator} \label{sec:sim}
With the hardware setup outlined in Section~\ref{sec:hardware}, we have a physical system suitable for benchmarking, in place of a true patient's lung. In this section, we present our approach to learning a simulator for the inspiratory phase of this ventilator-lung system, subject to the practical constraints of real-time data collection. Two main considerations drive our simulator training and evaluation design:
First, the evaluation for any simulator can only be performed using a {\bf black-box metric}, since we do not have explicit access to the system dynamics, and existing physics models are poor approximations to the empirical behavior.
Second, the dynamical system we simulate is very challenging to cover comprehensively across all modalities, and in particular exhibits chaotic behavior in boundary cases. Therefore, since the end goal for the simulator is better control, we only evaluate the simulator on ``reasonable'' scenarios that are relevant to the control task.
\subsection{Black-box simulator evaluation}\label{sec:sim_metric}
The learned simulators we consider are deep neural networks, so in addition to lacking explicit access to the true system dynamics, the simulator dynamics themselves are complex non-linear operations. We therefore deviate from standard distance metrics between the simulator and the true system considered in the literature, such as those of \citet{ferns2005metrics}, since they explicitly involve the value function over states, transition probabilities, or other unknown quantities. Rather, we consider metrics based on the evolution of the dynamics, as studied in \citet{vishwanathan2007binet}.
However, unlike the latter work, we take into account the particular distribution over control sequences that we expect to search around during the controller training phase. We thus define the following distance between dynamical systems.
Let $f_1,f_2$ be two dynamical systems over the same state-action spaces.
Let $\mathcal{D}$ be a distribution over sequences of controls denoted $\mathbf{u} = \{u_1,u_2,...,u_T\}$.
We define the {\bf open-loop distance} w.r.t. horizon $T$ and control sequence distribution $\mathcal{D}$ as
\begin{align*}
\|f_1 - f_2\|_{ol} \stackrel{\text{def}}{=} \mathbb{E}_{\mathbf{u} \sim \mathcal{D}} \left[ \sum_{t=1}^T \| f_1(x_{t,1},u_t) - f_2(x_{t,2},u_t) \| \right] . \end{align*}
We use the Euclidean norm over the states in the inner loop, although this can be generalized to any metric. Compared to metrics involving feedback from the simulator, the open-loop distance is a more reliable description of transfer, since it minimizes hard-to-analyze interactions between the policy and the simulator.
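A Monte-Carlo estimate of this quantity is straightforward to compute given black-box access to both systems. The sketch below assumes both simulators expose a (state, control) $\to$ next-state interface and are rolled out from a common initial state.
\begin{verbatim}
import numpy as np

def open_loop_distance(f1, f2, control_seqs, x0):
    """Monte-Carlo estimate of the open-loop distance (sketch).

    f1, f2: callables (state, control) -> next state.
    control_seqs: control sequences sampled from D, each of length T.
    Both systems receive the identical controls (no feedback), and
    per-step Euclidean distances between their states are summed.
    """
    total = 0.0
    for u in control_seqs:
        x1 = x2 = x0
        for u_t in u:
            x1, x2 = f1(x1, u_t), f2(x2, u_t)
            total += np.linalg.norm(np.atleast_1d(x1) - np.atleast_1d(x2))
    return total / len(control_seqs)
\end{verbatim}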
We evaluate our data-driven simulator using the open-loop distance metric, and we illustrate a result in the top half of Figure~\ref{fig:sim_testing}. In the bottom half, we show a sample trajectory of our simulator and the ground truth. See Section \ref{sec:sim_model} for experimental details.
\begin{figure}
\centering
\includegraphics[width=8cm]{figures/sim-testing.pdf}
\caption{Performance of a learned simulator for the ventilator with a particular lung setting ($R=5, C=50$). The upper plot shows the \textit{open-loop distance} and the lower shows a sample trajectory under a fixed sequence of controls (open-loop controls). In the former, we see low error as we increase the number of projected steps, and in the latter, we see that our simulated trajectory tracks the true trajectory quite closely.}\label{fig:sim_testing}
\end{figure}
\subsection{Data-collection via targeted exploration}\label{sec:safe_exp}
Motivated by the black-box metric described above, we focus on collecting trajectories comprising control sequences and the pressure sequences measured upon executing them, to form a training dataset. Due to safety and complexity issues, we cannot hope to exhaustively explore the space of all trajectories. Instead, keeping the eventual control task in mind, we choose to explore trajectories \textit{near} the control sequence generated by a baseline PID controller. The goal is to have the simulator faithfully capture the true dynamics in a reasonably large vicinity of the optimal control trajectory on the true system. To this end, for each of the lung settings, we collect data by choosing a safe PID controller baseline and introducing random exploratory perturbations according to the following two policies:
\begin{enumerate}
\item Boundary exploration: To the very beginning of the inhalation, add an additional control sampled uniformly from $(c^a_{\min}, c^a_{\max})$ and decrease this additive control linearly to zero over a time frame sampled randomly from $(t^a_{\min}, t^a_{\max})$;
\item Triangular exploration: sample a maximal additional control from a range $(c^b_{\min}, c^b_{\max})$ and an interval $(t^b_{\min}, t^b_{\max})$ within the inhalation. Start from $0$ additional control at time $t^b_{\min}$, increase the additional control linearly until $(t^b_{\min} + t^b_{\max})/2$, and then decrease it to $0$ linearly until $t^b_{\max}$.
\end{enumerate}
For each breath during data collection, we choose policy $(a)$ with probability $p_a$ and policy $(b)$ with probability $(1-p_a)$. The ranges in $(a)$ and $(b)$ are lung-specific. We give the exact values used in the Appendix.
This protocol balances the need to explore a significant part of the state space with the need to ensure safety. The boundary exploration capitalizes on the fact that at the beginning of the breath, exploration is safer and also more valuable. The former, due to the lung being at steady state and the latter due to the fact that the typical target waveform for inhalation requires a rapid pressure increase with a quick switch to stabilization, leading to a need for better understanding of dynamics in the early phases of a breath. The structure for the triangular exploration is inspired by the need for a persistent exploration strategy (similar ideas exist in \cite{dabney2020temporally}) which can capture intrinsic delay in the system. We illustrate this approach in Figure \ref{fig:exp_data}: control inputs used in our exploration policy are shown on the top, and the pressure measurements of the ventilator-lung system are shown on the bottom. Precise parameters for our exploration policy are listed in Table \ref{table:appendix-sim-data} in the Appendix.
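For concreteness, the following sketch generates the additive perturbation for a single inhalation; the range arguments are hypothetical stand-ins for the lung-specific values in Table \ref{table:appendix-sim-data}, and time ranges are assumed to start at $1$ or later.
\begin{verbatim}
import numpy as np

def exploration_offset(T, rng, p_a, c_a, t_a, c_b, t_b):
    """Additive control perturbation for one inhalation of T steps.

    Sketch of the two exploration policies; c_a, t_a, c_b, t_b are
    (min, max) ranges, rng is a numpy random Generator.
    """
    offset = np.zeros(T)
    if rng.random() < p_a:
        # (1) Boundary: extra control at the start of the inhalation,
        # decayed linearly to zero over a random time frame.
        c = rng.uniform(*c_a)
        t_end = int(rng.integers(t_a[0], t_a[1]))
        offset[:t_end] = c * (1.0 - np.arange(t_end) / t_end)
    else:
        # (2) Triangular: ramp linearly up to a random peak at the
        # midpoint of a random interval, then back down to zero.
        c = rng.uniform(*c_b)
        t0, t1 = sorted(rng.integers(t_b[0], t_b[1], size=2))
        mid = (t0 + t1) // 2
        if mid > t0:
            offset[t0:mid] = c * np.arange(mid - t0) / (mid - t0)
        if t1 > mid:
            offset[mid:t1] = c * (1.0 - np.arange(t1 - mid) / (t1 - mid))
    return offset
\end{verbatim}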
\begin{figure}\centering
\includegraphics[width=8cm]{figures/exploratory.pdf}
\caption{We overlay the controls and pressures from all inspiratory phases in the upper and lower plots, respectively. From this example of the simulator training data (lung setting $R=5, C=50$), we see that we explore a wide range of control inputs (upper plot), but a more limited ``safe'' range around the resulting pressures.}\label{fig:exp_data}
\end{figure}
\subsection{Model architecture}\label{sec:sim_model}
Now we describe the architectural details of our data-driven simulator. Due to the inherent differences across lungs, we opt to learn a different simulator for each of the tasks, which we can wrap into a single meta-simulator through code that selects the appropriate model based on a user's input of $R$ and $C$ parameters.
\paragraph{Training Task(s).} The simulator aims to learn the unknown dynamics of the inhalation phase. We approximate the state of the system (which is not observable to us) by the sequence of past pressures and controls, up to history lengths $H_p$ and $H_c$ respectively. The task of the simulator can now be distilled down to predicting the next pressure $p_{t+1}$ based on the past $H_c$ controls $u_{t},\ldots, u_{t-H_c}$ and $H_p$ pressures $p_{t},\ldots, p_{t-H_p}$. We define the training task by constructing a regression dataset whose inputs come from contiguous overlapping windows of lengths $H_p, H_c$ within the collected trajectories; the task is to predict the following pressure.
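A sketch of this dataset construction for a single collected inhalation might look as follows (the feature layout is illustrative; any consistent ordering of the pressure and control histories works):
\begin{verbatim}
import numpy as np

def make_training_pairs(pressures, controls, H_p, H_c):
    """Turn one collected inhalation into regression pairs (sketch).

    Each input concatenates the last H_p pressures and H_c controls;
    the target is the next pressure reading.
    """
    X, y = [], []
    for t in range(max(H_p, H_c), len(pressures) - 1):
        feats = np.concatenate([pressures[t - H_p + 1 : t + 1],
                                controls[t - H_c + 1 : t + 1]])
        X.append(feats)
        y.append(pressures[t + 1])
    return np.array(X), np.array(y)
\end{verbatim}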
\paragraph{Boundary model.} Towards further improving simulator performance, we found that the dynamics behave differently during the ``rise'' and ``stabilize'' phases of an inhalation, and the model needs to distinguish between them. We therefore learned a collection of individual models for the very beginning of the inhalation/episode and a general model for the rest of the inhalation, mirroring our choice of exploration policies. This proves to be very helpful, as the dynamics at the very beginning of an inhalation are transient, and also extremely important to get right due to downstream effects. Concretely, our final model stitches together a list of $N_B$ boundary models and a general model, whose training tasks are as described earlier (details found in Appendix \ref{app:sim}, Table \ref{table:appendix-sim-training}).
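The stitching itself is simple routing on the within-episode time index; a minimal sketch, assuming each model exposes a scikit-learn-style \texttt{predict}:
\begin{verbatim}
def stitched_predict(boundary_models, general_model, features, t):
    """Route a one-step prediction through the stitched simulator.

    Sketch: step t of an inhalation uses its own boundary model for
    t < N_B (transient dynamics); later steps use the general model.
    """
    model = boundary_models[t] if t < len(boundary_models) else general_model
    return model.predict(features.reshape(1, -1))[0]
\end{verbatim}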
\section{Learning controllers from learned physics} \label{sec:control}
In this section we describe the following two controller tasks:
\begin{enumerate}
\item {\bf Performance:} improve performance for tracking desired waveform in ISO-specified benchmarks. Specifically, we minimize the combined $L_1$ deviation from the target inhalation behavior across all target pressures on the simulator corresponding to a single lung setting of interest.
\item {\bf Robustness:} improve performance using a {\bf single} trained controller. Specifically, we minimize the combined $L_1$ deviation from the target inhalation behavior across all target pressures \textit{and} across the simulators corresponding to \textit{several} lung settings of interest.
\end{enumerate}
\paragraph{Controller architecture.}
Our controller comprises a PID baseline upon which we learn a deep network correction, scaled by a regularization parameter $\lambda$. This \textit{residual} setup can be seen as a regularization against the gap between the simulator and the real dynamics. In particular, it prevents the controller training from over-fitting on the simulator. We found this approach to be significantly better than directly using the best (and perhaps over-fitted) controller on the simulator. We provide further details about the architecture and ablation studies in the Appendix.
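A minimal PyTorch sketch of this residual architecture is given below; the feature dimension and network width are illustrative, and \texttt{pid.step} is assumed to implement a baseline of the form described in Section \ref{sec:background}.
\begin{verbatim}
import torch
import torch.nn as nn

class ResidualController(nn.Module):
    """PID baseline plus a learned correction (sketch).

    The network output is scaled by lam, regularizing how far the
    learned policy may stray from the PID prior.
    """
    def __init__(self, pid, feat_dim=10, lam=0.1):
        super().__init__()
        self.pid, self.lam = pid, lam
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, features, measured, target):
        base = self.pid.step(measured, target)   # PID baseline control
        return base + self.lam * self.net(features).squeeze(-1)
\end{verbatim}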
\subsection{Experiments}
For our experiments, we use the physical test lung to run our proposed controllers (trained on the simulators) and compare them against the PID controllers that perform best on the physical lung.
To make comparisons, we compute a score for each controller on a given test lung setting (e.g., $R=5, C=50$) by averaging the $L_1$ deviation from a target pressure waveform for all inspiratory phases, and then averaging these average $L_1$ errors over six waveforms specified in \citet{ISO68844}. We choose $L_1$ as an error metric so as not to over-penalize breaths that fall short of their target pressures and to avoid engineering a new metric. We determine the best performing PID controller for a given setting by running exhaustive grid searches over $P,I,D$ coefficients for each lung setting (details for both our score and the grid searches can be found in the Appendix).
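For clarity, the scoring procedure can be summarized in a few lines (a sketch; \texttt{runs} is assumed to map each of the six ISO waveforms to its list of per-phase achieved/target pressure arrays):
\begin{verbatim}
import numpy as np

def controller_score(runs):
    """Average L1 tracking error (sketch; lower is better).

    Averages the per-step absolute deviation over the inspiratory
    phases of each waveform, then averages over the six waveforms.
    """
    per_waveform = []
    for phases in runs.values():   # one entry per ISO target waveform
        errs = [np.mean(np.abs(p - p_star)) for p, p_star in phases]
        per_waveform.append(np.mean(errs))
    return float(np.mean(per_waveform))
\end{verbatim}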
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{figures/local.png}
\caption{We show that for each lung setting, the controller we trained on the simulator for that setting outperforms the best-performing PID controller found on the physical test lung.}\label{fig:results-tracking}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{figures/global.png}
\caption{The controller we trained on all six simulators outperforms the best PID found over the same six settings on the physical test lung. Of note, our wins are proportionally greater when trained across all six settings, whereas individual lung settings are more achievable by PID alone.}\label{fig:results-generalization}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{figures/traj-compare.pdf}
\caption{As an example, we compare our method (learned controller on learned simulator) to the best P-only, I-only, and PID controllers relative to a target waveform (dotted line). Whereas our controller rises quickly and stays very near the target waveform, the other controllers take significantly longer to rise, overshoot, and, in the case of P-only and PID, ring throughout the entire inspiratory phase.}\label{fig:traj}
\end{figure}
\section{Comparison of RL methods} \label{sec:benchmarks}
As part of our investigation, we benchmarked several reinforcement learning (RL) methods for policy optimization on the simulator before settling on the analytic policy gradient approach that leverages the ability to differentiate through the simulated dynamics, as outlined above. We consider popular RL algorithms, namely PPO~\citep{ppo} and DQN~\citep{dqn}, and compare them to direct analytic policy gradient descent. These algorithms are representative of the two mainstream RL paradigms, policy gradient and Q-learning, respectively. We performed experiments on simulators representing lungs with different $R, C$ parameters. The metric, as earlier, is the per-step $L_1$ distance between target and achieved lung pressure. To ensure a fair comparison, we used the same state featurization (as described in the previous section) for all algorithms and performed extensive hyperparameter searches for our baselines during the training phase. Results are shown in Figure~\ref{fig:baselines}. Our algorithm achieves scores comparable to the baselines across all simulators.
Importantly, our analytic gradient-based method achieves scores comparable to PPO/DQN with orders of magnitude fewer samples. This sample-efficiency property is clearly visible in Figure~\ref{fig:samplecomplexity}: our method converges within ${\sim}100$ episodes of training, while the other methods require tens of thousands of episodes. Further, our algorithm has a stable training process, in contrast to the notable training instability of the baselines. Furthermore, our method is robust with respect to hyperparameter tuning, unlike the baselines, which require an extensive search over hyperparameters to achieve comparable performance. Such an extensive hyperparameter search is infeasible in resource-constrained or online learning scenarios, which are typical use cases for these control systems. Specifically, for the results reported here, we conducted 720 trials with different hyperparameter configurations for PPO and 180 trials for DQN. In contrast, our method requires only a few trials of standard optimizer learning-rate tuning, the minimum effort in deep learning practice.
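To illustrate the analytic policy gradient approach, the sketch below unrolls a differentiable simulator for one inhalation and backpropagates the $L_1$ tracking loss directly into the policy. The \texttt{initial\_state} interface and episode structure are assumptions for illustration, not our exact training loop.
\begin{verbatim}
import torch

def train_policy(policy, simulator, targets, epochs=100, lr=1e-3):
    """Analytic policy gradient through a differentiable simulator.

    Sketch: `simulator` is a differentiable torch module mapping
    (state, control) -> (next_state, pressure); `policy` maps
    (state, target) -> control. Gradients of the L1 tracking loss
    flow through the entire rollout into the policy, rather than
    being estimated from sampled returns as in PPO/DQN.
    """
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for _ in range(epochs):
        state = simulator.initial_state()   # assumed interface
        loss = 0.0
        for p_star in targets:              # one inhalation rollout
            u = policy(state, p_star)
            state, pressure = simulator(state, u)
            loss = loss + (pressure - p_star).abs()
        opt.zero_grad()
        loss.backward()                     # backprop through the dynamics
        opt.step()
\end{verbatim}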
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{figures/2rowbar.pdf}
\caption{Performance comparison of our controller with PPO/DQN. The score is calculated by average per-step L1 distance between target and achieved pressure.}\label{fig:baselines}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{figures/samplecomplexity.pdf}
\caption{Convergence behavior: our method converges within ${\sim}100$ training episodes, while PPO and DQN require tens of thousands.}\label{fig:samplecomplexity}
\end{figure}
\section{Conclusions and future work} \label{sec:discussion}
We have presented a machine learning approach to ventilator control, demonstrating the potential of end-to-end learned controllers by obtaining improvements over industry-standard baselines.
Our main conclusions are
\begin{enumerate}
\item The nonlinear ventilator-lung dynamical system can be modeled by a neural network more accurately than by previously studied physics-based models.
\item Controllers based on deep neural networks can outperform PID controllers across multiple clinical settings (waveforms), and can generalize better across patients' lung characteristics, despite having significantly more parameters.
\item
Direct policy optimization for differentiable environments has the potential to significantly outperform Q-learning and (standard) policy gradient methods in terms of sample and computational complexity.
\end{enumerate}
There remain a number of areas to explore, mostly motivated by medical need. The lung settings we examined are by no means representative of all lung characteristics (e.g., neonatal, child, non-sedated) and lung characteristics are not static over time; a patient may improve or worsen, or begin coughing. Ventilator costs also drive further research. As an example, inexpensive valves have less consistent behavior and longer reaction times, which exacerbate bad PID behavior (e.g., overshooting, ringing), yet are crucial to bringing down costs and expanding access. Learned controllers that adapt to these deficiencies may obviate the need for such trade-offs.
\section{Experimental Setup}
\label{sec:hardware}
To develop simulators and control algorithms, we run mechanical ventilation tasks on a physical test lung \citep{ingmar_medical_2020} using the open-source ventilator designed by Princeton University's People's Ventilator Project (PVP) \citep{pvp2020}.
\subsection{Physical test lung}
For our experiments, we use the commercially-available adult test lung, ``QuickLung'', manufactured by IngMar Medical. The lung has three lung compliance settings ($C=\{10, 20, 50\}$ mL/cmH2O) and three airway resistance settings ($R=\{5, 20, 50\}$ cmH2O/L/s) for a total of 9 settings, which are specified by the ISO standard for ventilatory support equipment \citep{ISO68844}. An operator can change the lung's compliance and resistance settings manually. We connect the test lung to the ventilator via a respiratory circuit \citep{canadian1986canadian, parmley1972disposable} as shown in Figure \ref{fig:circuit}. Figure \ref{fig:vent-farm} shows a snapshot of our hardware setup.
\begin{figure}[!h]
\centering
\includegraphics[width=7cm]{figures/vent-farm.jpg}
\caption{The ventilator cluster we constructed to run our experiments, featuring 10 ventilators, 4 air compressors, and 2 control servers. Each ventilator is re-calibrated after each experimental run for consistency across ventilators and over time.}
\label{fig:vent-farm}
\end{figure}
\subsection{Mechanical ventilator}
There are many forms of ventilator treatment. In addition to various pressure target trajectories, clinicians may want to focus on other factors, such as volume and flow \citep{chatburn2007classification}. The PVP ventilator focuses on targeting pressure for a completely sedated patient (i.e., the patient does not initiate any breaths) and comprises two pathways (see Figure \ref{fig:circuit}): (1) the inspiratory pathway that regulates airflow into the lung and (2) the expiratory pathway for airflow out of the lung. A software controller is able to adjust one valve for each pathway. The inspiratory valve is a proportional control flow valve that allows control in a continuous range from fully closed to fully open. The expiratory valve is a binary on-off valve that only permits zero or maximum airflow.
To prevent damage to the ventilator and/or injury to the operator, we implement software overrides that abort a given run: 1) if pressure or volume in the lung exceeds certain thresholds, 2) if tubing disconnects, or 3) if there is significant software delay. The PVP pneumatic design also includes a safety valve in case software overrides fail.
\subsection{Abstraction of the simulation task}
We treat the mechanical ventilation task as episodic by separating each inspiratory phase (e.g., light gray regions in Figure \ref{fig:lung-task}) from the breath timeseries and treating those as individual episodes. This approach reflects both physical and medical realities. Mechanically ventilated breaths are by their nature highly regular and feature long expiratory phases (dark gray regions in Figure \ref{fig:lung-task}) that end with the ventilator-lung system close to its initial state, thereby justifying the episodic nature. Further, the inspiratory phase is indeed the most relevant to clinical treatment and the harder regime to control with prevalent problems of under- or over-shooting the target pressure and ringing. Naturally thus, we attempt to learn a simulator for the ventilator-lung dynamics for the inspiratory phase. To this end repeated episodes of inspiratory phases are thus simplified, faithful units of training data.
\begin{figure}
\centering
\includegraphics[width=8cm]{figures/lung-task.pdf}
\caption{PID controllers exhibit their suboptimal behavior (under- or over-shooting, ringing) primarily during the inspiratory phase. Note that we use a hard-coded controller during expiratory phases to ensure safety. This choice does not affect our results.}
\label{fig:lung-task}
\end{figure}
\section{Introduction}
Mechanical ventilation is a widely used treatment with applications spanning anaesthesia \citep{coppola2014protective}, neonatal intensive care \citep{van2019modes}, and life support during the current COVID-19 pandemic \citep{meng2020intubation, wunsch2020mechanical, mohlenkamp2020ventilation}. This life-sustaining treatment has two common modes: invasive ventilation, where the patient is fully sedated, and assist-control ventilation, where the patient can initiate breaths \citep{patientventasynch}.
Even though mechanical ventilation has been deployed in ICUs for decades, several challenges remain that can lead to ventilator-induced lung injury (VILI) for patients \citep{vili}. In pressure-support ventilation, a form of assist-control ventilation, evidence suggests that a combination of high peak pressure and high tidal volume can lead to tissue injury in the lung \citep{vili_1}. Pressure-support ventilation also suffers from patient-ventilator asynchrony, where the patient's breathing pattern does not match the ventilator's, and can result in hypoxemia (low level of blood oxygen), cardiovascular compromise, and patient discomfort \citep{asynchrony}.
However, the risk of developing VILI depends not only on factors related to the ventilator, but also on intrinsic characteristics of the patient's lung \citep{vili}. These characteristics usually cannot be directly observed, so trained clinicians must continuously monitor the patient. Given the highly manual process of mechanical ventilation, it is desirable to have control methods that can better track prescribed pressure targets and are robust to variations of the patient's lung.
Motivated by this potential to improve patient health, we focus on pressure-controlled invasive ventilation (PCV) \citep{rittayamai2015pressure} as a starting point. In this setting, an algorithm controls two valves that let air in and out of a patient's lung according to a target waveform of lung pressure (see Figure \ref{fig:tracking}). We consider the control task only on ISO-standard \citep{ISO68844} artificial lungs.
\paragraph{State of the art.}
Despite its importance, ventilator control has remained largely unchanged for years, relying on PID \citep{pid2} controllers and similar variants to track patient state according to a prescribed target waveform.
However, this approach is not optimal in terms of tracking---PID can overshoot, undershoot, and exhibit ringing behavior for certain lungs. It is also not sufficiently robust---ventilators are carefully tuned during design, manufacture, and maintenance \citep{ziegler1942optimum,chen2012control} and any changes in ventilator dynamics (e.g., tubing, response delay), environment (e.g., atmospheric pressure), or patient must be accounted for and continuously monitored by trained clinicians via various physical controls on the ventilator \citep{rees2006using}.
\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{figures/circuit.pdf}
\caption{A simplified respiratory circuit showing the airflow through the inspiratory pathway, into and out of the lung, and out the expiratory pathway. We shade the components that our algorithms can control in green.}
\label{fig:circuit}
\end{figure}%
\begin{figure}[!h]
\centering
\includegraphics[width=0.45\textwidth]{figures/tracking.pdf}
\caption{An example run of three breaths where PID (dark gray line) controls lung pressure (blue line) according to a prescribed target waveform (orange line).}
\label{fig:tracking}
\end{figure}
\paragraph{Challenges of ventilator control.}
A ventilator controller must adapt quickly and reliably across the spectrum of clinical conditions, which are only indirectly observable given a single measurement of pressure.
A model that is highly expressive may learn the dynamics of the underlying systems more precisely and thus adapt faster to the patient's condition. However, such models usually require a large amount of data to train, which can take prohibitively long to collect by purely running the ventilator. We opt instead to learn a simulator to generate artificial data, though learning such a simulator for a partially observed non-linear system is itself a difficult problem.
\subsection{Our contributions}
We present better-performing, more robust results and present resources for future researchers. Specifically,
\begin{enumerate}
\item We demonstrate that learning a controller as a neural network correction to PID outperforms its uncorrected counterpart (optimality).
\item We show that a single learned controller trained on several ISO lung settings outperforms the PID controller that performs best across the same settings (robustness).
\item We provide self-contained differentiable simulators for the ventilation problem. These simulators reduce the entrance cost for future researchers to contribute to invasive mechanical ventilation.
\item We conduct a methodological study of reinforcement learning techniques, both model-based and model-free, including policy gradient, Q-learning and other variants. We conclude that model-based approaches are more sample and computationally efficient.
\end{enumerate}
Of note, we limit our investigation to open-source ventilators. Control methods used by proprietary ventilators cannot be modified or assessed independently from their hardware and such equipment are cost-prohibitive for academic research.
We see this study as a preliminary investigation of machine learning for ventilator control. In future work, we hope to extend this methodology to non-invasive ventilation, pressure-support ventilation, and conduct clinical trials.
\subsection{Related work}\label{sec:related}
The modern positive-pressure ICU mechanical ventilator dates back to the 1940s \citep{kacmarek11} with many open-source ventilator designs \citep{ventlist} published during the COVID-19 pandemic.
Yet at their core, ventilators all rely on controlling air in and out of an elastic lung via a respiratory circuit, as described in many physics-based models \citep{PMID:8420408}. Such simple operation masks the complexity of treatment \citep{chatburn2011closed} and recent work on augmenting PID controllers with adaptive methods \citep{9122946,PPR:PPR169448} have sought to address more advanced clinical needs. To the best of our knowledge, our data-driven approach of learning both simulator and controller is novel in this field.
\paragraph{Control and RL in virtual and physical systems.} Much progress has been made on learning dynamics when the dynamics themselves exist \emph{in silico}: MuJoCo physics \citep{hafner2019learning}, Atari games \citep{kaiser2019model}, and board games \citep{schrittwieser2020mastering}. Combining such data-driven models with either pixel-space or latent-space planning has been shown to be effective for policy learning. \citep{ha2018recurrent} is an example of this research program for the deep learning era. Progress on deploying end-to-end learned agents (i.e. controllers) in the physical world is more limited in comparison, due to difficulties in scaling parallel data collection and higher variability in real-world data. \citep{bellemare2020autonomous} present a case study on autonomous balloon navigation using a Q-learning approach, rather than a model-based one like ours. \citep{akkaya2019solving} use domain randomization with non-differentiable simulators for a difficult dexterous manipulation task.
\paragraph{System identification and residual policy learning. } System identification has been studied for decades in control and reinforcement learning,
see e.g. \citep{schoukens2019nonlinear,billings1980identification} for nonlinear system identification. Deep neural networks have been used to represent nonlinear dynamics, see e.g. \cite{helicopter}. Residual policy learning \citep{rpl} is a model-free analogue of our controller design: it learns a correction term on an initial, imperfect policy, and is shown to be more data-efficient than learning from scratch, especially for complex robotic tasks. More recently, concurrent work by \citep{pidcar} uses residual policy learning to improve PID for the car suspension control problem.
\paragraph{Multi-task reinforcement learning.} Part of our methodology has close parallels in multi-task reinforcement learning \citep{taylor2009transfer}, where the objective is to learn a policy that performs well across diverse environments. To make our controllers more robust, we optimize our policy simultaneously on an ensemble of learned models corresponding to different physical settings, similar to the work of \citep{rajeswaran2016epopt, chebotar2019closing} on robotic manipulation.
\paragraph{Machine learning for health applications.} Healthcare offers a multitude of opportunities and challenges for machine learning; for a survey, see \citep{ghassemi2020review}. Specifically, reinforcement learning and control have found numerous applications \citep{yu2020reinforcement}, and recently for weaning patients off mechanical ventilators \citep{prasad2017reinforcement,yu2019inverse,yu2020supervised}. As far as we know, there is no prior work on improving the control of ventilators using machine learning.
\section{Scientific background}
\label{sec:background}
\subsection{Control of dynamical systems}
We begin with some formalisms of the control problem. A partially-observable discrete-time dynamical system is given by the following equation:
$$ x_{t+1} = f(x_t, u_t), o_{t+1} = g(x_{t+1})$$
where $x_t$ is the underlying state of the dynamical system, $o_t$ is the observation of the state available to the controller, $u_t$ is the control input and $f,g$ are the transition function and observation functions respectively. Given a dynamical system, the control problem is to minimize the sum of cost functions over a long-term horizon:
\begin{align*}
& \min_{u_{1:T} } \sum_{t=1}^T c_t(x_t, u_t) \quad \text{s.t.}\;\; x_{t+1} = f_t(x_t, u_t).
\end{align*}
This problem is in general computationally intractable, and theoretical guarantees are available for special cases of dynamics (notably linear dynamics) and perturbations. For an in-depth exposition on the subject, see the textbooks by \citet{Bertsekas17,kemin,tedrake}.
\paragraph{PID control.} A ubiquitous technique for the control of dynamical systems is the use of linear error-feedback controllers, i.e. policies that choose a control based on a linear function of the current and past errors vs. a target state. That is,
$$ u_{t+1} = \sum_{i=0}^k \alpha_i \epsilon_{t-i} , $$
where $\epsilon_t = x_t - {x}^{\star_t}$ (or $\epsilon_t = o_t - {o}^{\star_t}$ if the system is partially-observable) is the deviation from the target state at time $t$, and $k$ represents the history length of the controller. PID applies a linear control with \emph{proportional}, \emph{integral}, and \emph{differential} coefficients,
$$ u_t = \alpha \epsilon_{t} + \beta \sum_{i=0}^k \epsilon_{t-i} + \gamma (\epsilon_{t} - \epsilon_{t-1}) . $$
This special class of linear error-feedback controllers, motivated by physical laws, is a simple, efficient and widely used technique \citep{PID1}. It is currently the industry standard for (open-source) ventilator control.
\subsection{The physics of ventilation}
\label{subsec:physics}
In invasive ventilation, the ventilator is connected to a patient's main airway, and applies pressure in a cyclic manner to simulate healthy breathing. During the inspiratory phase, the target applied pressure increases to the peak inspiratory pressure (PIP). During the expiratory phase, the target decreases to the positive-end expiratory pressure (PEEP), maintained in order to prevent the lungs from collapsing. The PIP and PEEP values, along with the durations of these phases, define the time-varying target \emph{waveform}, specified by the clinician.
The goal of ventilator control is to regulate the pressure sensor measurements to follow the target waveform $p_t^{\star}$ via controlling the air-flow into the system which forms the control input $u_t$. As a dynamical system, we can denote the underlying state of the ventilator-patient system as $x_t$ evolving as $x_{t+1} = f(x_t, u_t),$ for an unknown $f$ and the pressure sensor measurement $p_t$ is the observation available to us. The cost function can be defined to be a measure of the deviation from the target; e.g. the absolute deviation $c_t(p_t, u_t) = |p_t - p_t^{\star}|$. The objective is to design a controller that minimizes the total cost over $T$ time steps.
A ventilator needs to take into account the structure of the lung to determine the optimal pressure to induce. Such structural factors include \textit{compliance} ($C$), or the change in lung volume per unit pressure, and \textit{resistance} ($R$), or the change in pressure per unit flow.
\paragraph{Physics-based model.} A simplistic formalization of the ventilator-lung dynamical system can be derived from the physics of a connected two-balloon system, with a \emph{latent state} $v_t$ representing the volume of air inside the lung. The dynamics equations can be written as
$$v_{t+1} = v_t + u_t \cdot \Delta_t$$ $$p_t = p_0 + \left( 1 - \left( \frac{r_t}{r_0} \right)^6 \right) \cdot \frac{1}{r_t r_0^2}, \;\;r_t = \left( \frac{3 v_t}{4 \pi} \right)^{1/3},$$ where $p_t$ is the measured pressure, $v_t$ is volume, $r_t$ is radius of the lung, $u_t$ is the input air flow rate, and $\Delta_t$ is the time increment. $u_t$ originates from a pressure difference between lung-pressure $p_t$ and supply-pressure $p_\text{supply}$, regulated by a valve: $u_t = \frac{p_\text{supply} - p_t}{R_\text{in}}$. The resistance of the valve is $R_\text{in}\propto 1/d^4$ (Poiseuille's law) where $d$, the opening of the valve, is controlled by a motor. The constants $p_0,r_0$ depend on both the lung and ventilator. In \cite{nadeem_2021}, several physics-based models are benchmarked, showing errors that are an order of magnitude larger than what can be achieved with a data driven approach. While the interpretability of such models is appealing, their low fidelity is prohibitive for offline reinforcement learning.
\subsection{Challenges and benefits of a model-based approach}
The physics-based dynamics models described above are highly idealized, and are suitable only to provide coarse predictions for the behaviors of very simple controllers. We list some sources of error arising from using physics equations for model-based control:
\begin{itemize}
\item \emph{Idealization of physics:} oversimplifying fluid flow and turbulence via ideal incompressible gas assumptions; linearizing the dynamics of the lung and ventilator components.
\item \emph{Lagged and partial observations:} assuming instantaneous changes to volume and pressure across the system. In reality, there are non-negligible propagation times for pressure impulses, delayed pressure feedback arising from lung elasticity, and computational latency.
\item \emph{Underspecification of variability:} different patients' clinical scenarios, captured by the latent constants $p_0, r_0$, may intrinsically vary in more complex (i.e. higher-dimensional) ways.
\end{itemize}
Due to the reasons listed above, it is highly desirable to adopt a learned model-based approach in this setting because of its sample-efficiency and reusability. A reliable simulator enables much cheaper and faster data collection for training a controller, and allows us to incorporate multitask objectives and domain randomization (e.g. different waveforms, or even different patients). An additional goal is to make the simulator \emph{differentiable}, enabling direct gradient-based policy optimization through the system's dynamics (rather than stochastic estimates thereof).
We show that in this partially-observed (but single-input single-output) system, we can query a reasonable amount of training data in real time from the test lung, and use it offline to learn a differentiable simulator of its dynamics (\emph{``real2sim''}). Then, we complete the pipeline by leveraging interactive access to this simulator to train a controller (\emph{``sim2real''}). We demonstrate that this pipeline is sufficiently robust that the learned controllers can outperform PID controllers tuned directly on the test lung.
\section{Learning controllers from learned physics} \label{sec:control}
In this section we describe the following two controller tasks:
\begin{enumerate}
\item {\bf Performance:} improve performance for tracking desired waveform in ISO-specified benchmarks. Specifically, we minimize the combined $L_1$ deviation from the target inhalation behavior across all target pressures on the simulator corresponding to a single lung setting of interest.
\item {\bf Robustness:} improve performance using a {\bf single} trained controller. Specifically, we minimize the combined $L_1$ deviation from the target inhalation behavior across all target pressures \textit{and} across the simulators corresponding to \textit{several} lung settings of interest.
\end{enumerate}
\paragraph{Controller architecture.}
Our controller comprises a PID baseline upon which we learn a deep network correction, weighted by a regularization parameter $\lambda$. This \textit{residual} setup can be seen as a regularization against the gap between the simulator and the real dynamics; in particular, it prevents the controller training from over-fitting on the simulator. We found this approach to be significantly better than directly using the best (and perhaps over-fitted) controller on the simulator. We provide further details about the architecture and ablation studies in the Appendix.
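As an illustration of this residual architecture, consider the following minimal PyTorch sketch; the network width, the featurization, the P-I-only baseline (the D term is omitted for brevity) and the coefficient values are placeholder assumptions rather than the exact choices used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

class ResidualController(nn.Module):
    """PID baseline plus a lambda-scaled learned correction (illustrative)."""

    def __init__(self, feat_dim, k_p=1.0, k_i=0.5, lam=0.1):
        super().__init__()
        self.k_p, self.k_i, self.lam = k_p, k_i, lam
        self.correction = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, err, err_integral, features):
        pid_term = self.k_p * err + self.k_i * err_integral  # hand-tuned baseline
        residual = self.correction(features).squeeze(-1)     # learned correction
        return pid_term + self.lam * residual                # regularized sum
\end{verbatim}
Setting $\lambda = 0$ recovers the plain PID baseline, which makes the regularization interpretation explicit.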
\subsection{Experiments}
For our experiments, we use the physical test lung to run our proposed controllers (trained on the simulators) and compare them against the PID controllers that perform best on the physical lung.
To make comparisons, we compute a score for each controller on a given test lung setting (e.g., $R=5, C=50$) by averaging the $L_1$ deviation from a target pressure waveform for all inspiratory phases, and then averaging these average $L_1$ errors over six waveforms specified in \citet{ISO68844}. We choose $L_1$ as an error metric so as not to over-penalize breaths that fall short of their target pressures and to avoid engineering a new metric. We determine the best performing PID controller for a given setting by running exhaustive grid searches over $P,I,D$ coefficients for each lung setting (details for both our score and the grid searches can be found in the Appendix).
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{figures/local.png}
\caption{We show that for each lung setting, the controller we trained on the simulator for that setting outperforms the best-performing PID controller found on the physical test lung.}\label{fig:results-tracking}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{figures/global.png}
\caption{The controller we trained on all six simulators outperforms the best PID found over the same six settings on the physical test lung. Of note, our wins are proportionally greater when trained on all six settings, whereas individual lung settings are more achievable by PID alone.}\label{fig:results-generalization}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{figures/traj-compare.pdf}
\caption{As an example, we compare our method (learned controller on learned simulator) to the best P-only, I-only, and PID controllers relative to a target waveform (dotted line). Whereas our controller rises quickly and stays very near the target waveform, the other controllers take significantly longer to rise, overshoot, and, in the case of P-only and PID, ring the entire inspiratory phase.}\label{fig:traj}
\end{figure}
\section{Comparison of RL methods} \label{sec:benchmarks}
As part of our investigation, we benchmarked several Reinforcement Learning (RL) methods for policy optimization on the simulator before settling on the analytic policy gradient approach outlined before, which leverages the ability to differentiate through the simulated dynamics. We consider popular RL algorithms, namely PPO~\citep{ppo} and DQN~\citep{dqn}, and compare them to direct analytic policy gradient descent. These algorithms are representative of the two mainstream RL paradigms, policy gradient and Q-learning, respectively. We performed experiments on simulators that represent lungs with different $R, C$ parameters. As before, the metric is the per-step $L_1$ distance between the target and achieved lung pressure. To ensure a fair model comparison, we used the same state featurization (as described in the previous section) for all algorithms and performed an extensive hyperparameter search for our baselines during the training phase. Results are shown in Figure~\ref{fig:baselines}. Our algorithm achieves comparable scores to the baselines across all simulators.
Importantly, our analytic gradient-based method achieves a score comparable to PPO/DQN with orders of magnitude fewer samples. This sample efficiency is clearly visible in Figure~\ref{fig:samplecomplexity}: our method converges within roughly 100 episodes of training, while the other methods require tens of thousands of episodes. Further, our algorithm has a stable training process, in contrast to the notable training instability of the baselines. It is also robust with respect to hyperparameter tuning, unlike the baselines, which require an extensive search over hyperparameters to achieve comparable performance. Such an extensive hyperparameter search is infeasible in resource-constrained or online learning scenarios, which are typical use cases for these control systems. Specifically, for the results provided here, we conducted 720 trials with different hyperparameter configurations for PPO and 180 trials for DQN. In contrast, using our method only involves a few trials of standard optimizer learning-rate tuning, a minimal effort by deep learning standards.
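The core of the analytic policy gradient step can be sketched as follows: unroll the differentiable simulator, accumulate the $L_1$ tracking loss, and backpropagate through the dynamics. The \texttt{controller} and \texttt{simulator} call signatures here are placeholders; the real featurization carries histories of pressures and controls.
\begin{verbatim}
import torch

def analytic_policy_gradient_step(controller, simulator, targets, optimizer):
    """One training step by backpropagating through the unrolled simulator."""
    pressure = torch.zeros(1)
    loss = torch.zeros(1)
    for p_star in targets:                       # target waveform p_t^*
        u = controller(pressure, p_star)         # differentiable policy
        pressure = simulator(pressure, u)        # differentiable learned dynamics
        loss = loss + (pressure - p_star).abs()  # per-step L1 deviation
    optimizer.zero_grad()
    loss.backward()                              # exact gradient through dynamics
    optimizer.step()
    return float(loss)
\end{verbatim}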
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{ML4H 2021/figures/2rowbar.pdf}
\caption{Performance comparison of our controller with PPO/DQN. The score is calculated by average per-step L1 distance between target and achieved pressure.}\label{fig:baselines}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[width=8cm]{ML4H 2021/figures/samplecomplexity.pdf}
\caption{Convergence behavior demonstration. }\label{fig:samplecomplexity}
\end{figure}
\section{Conclusions and future work} \label{sec:discussion}
We have presented a machine learning approach to ventilator control, demonstrating the potential of end-to-end learned controllers by obtaining improvements over industry-standard baselines.
Our main conclusions are
\begin{enumerate}
\item The nonlinear lung-ventilator dynamical system can be modeled by a neural network more accurately than by previously studied physics-based models.
\item Controllers based on deep neural networks can outperform PID controllers across multiple clinical settings (waveforms), and can generalize better across patients' lung characteristics, despite having significantly more parameters.
\item
Direct policy optimization for differentiable environments has the potential to significantly outperform Q-learning or (standard) policy gradient methods in terms of sample and computational complexity.
\end{enumerate}
There remain a number of areas to explore, mostly motivated by medical need. The lung settings we examined are by no means representative of all lung characteristics (e.g., neonatal, child, non-sedated) and lung characteristics are not static over time; a patient may improve or worsen, or begin coughing. Ventilator costs also drive further research. As an example, inexpensive valves have less consistent behavior and longer reaction times, which exacerbate bad PID behavior (e.g., overshooting, ringing), yet are crucial to bringing down costs and expanding access. Learned controllers that adapt to these deficiencies may obviate the need for such trade-offs.
\section{Learning a data-driven simulator} \label{sec:sim}
With the hardware setup outlined in Section~\ref{sec:hardware}, we have a physical system suitable for benchmarking, in place of a true patient's lung. In this section, we present our approach to learning a simulator for the inspiratory phase of this ventilator-lung system, subject to the practical constraints of real-time data collection. Two main considerations drive our simulator training and evaluation design:
First, the evaluation for any simulator can only be performed using a {\bf black-box metric}, since we do not have explicit access to the system dynamics, and existing physics models are poor approximations to the empirical behavior.
Second, the dynamical system we simulate is very challenging to cover comprehensively across all modalities and, in particular, exhibits chaotic behavior in boundary cases. Therefore, since the end goal for the simulator is better control, we only evaluate the simulator on ``reasonable'' scenarios that are relevant to the control task.
\subsection{Black-box simulator evaluation}\label{sec:sim_metric}
The learned simulators we consider are deep neural networks, so in addition to the lack of explicit access to the system dynamics, the simulator dynamics themselves are complex non-linear operations. We therefore deviate from the standard distance metrics (between the simulator and the true system) considered in the literature, such as \citet{ferns2005metrics}, as they explicitly involve the value function over states, transition probabilities or other unknown quantities. Rather, we consider metrics that are based on the evolution of the dynamics, as studied in \citet{vishwanathan2007binet}.
However, unlike the latter work, we take into account the particular distribution over control sequences that we expect to search around during the controller training phase. We thus define the following distance between dynamical systems.
Let $f_1,f_2$ be two dynamical systems over the same state-action spaces.
Let $\mathcal{D}$ be a distribution over sequences of controls denoted $\mathbf{u} = \{u_1,u_2,...,u_T\}$.
We define the {\bf open-loop distance} w.r.t. horizon $T$ and control sequence distribution $\mathcal{D}$ as
\begin{align*}
\|f_1 - f_2\|_{ol} \stackrel{\text{def}}{=} \mathbb{E}_{\mathbf{u} \sim \mathcal{D}} \left[ \sum_{t=1}^T \| f_1(x_{t,1},u_t) - f_2(x_{t,2},u_t) \| \right] . \end{align*}
Here $x_{t,i}$ denotes the state of system $f_i$ at time $t$ when both systems are driven by the same control sequence $\mathbf{u}$ from a common initial state. We use the Euclidean norm over the states in the inner sum, although this can be generalized to any metric. Compared to metrics involving feedback from the simulator, the open-loop distance is a more reliable description of transfer, since it minimizes hard-to-analyze interactions between the policy and the simulator.
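A Monte-Carlo estimate of the open-loop distance is straightforward to compute; in the sketch below, \texttt{f1} and \texttt{f2} are step functions for the two systems and \texttt{control\_seqs} holds samples from $\mathcal{D}$ (all placeholder interfaces).
\begin{verbatim}
import numpy as np

def open_loop_distance(f1, f2, control_seqs, x0_1, x0_2):
    """Estimate E_u [ sum_t ||f1(x_{t,1},u_t) - f2(x_{t,2},u_t)|| ]."""
    total = 0.0
    for u_seq in control_seqs:                 # samples from D
        x1, x2 = x0_1, x0_2
        for u in u_seq:
            x1, x2 = f1(x1, u), f2(x2, u)      # evolve both systems open loop
            total += np.linalg.norm(np.asarray(x1) - np.asarray(x2))
    return total / len(control_seqs)           # Monte-Carlo expectation
\end{verbatim}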
We evaluate our data-driven simulator using the open loop distance metric, and we illustrate a result in the top half of Figure~\ref{fig:sim_testing}. In the bottom half, we show a sample trajectory of our simulator and the ground truth. See Section \ref{sec:sim_model} for experimental details.
\begin{figure}
\centering
\includegraphics[width=8cm]{figures/sim-testing.pdf}
\caption{Performance of a learned simulator for the ventilator with lung setting $R=5, C=50$. The upper plot shows the \textit{open-loop distance} and the lower plot shows a sample trajectory under a fixed sequence of controls (open-loop controls). In the former, we see low error as we increase the number of projected steps; in the latter, our simulated trajectory tracks the true trajectory quite closely.}\label{fig:sim_testing}
\end{figure}
\subsection{Data-collection via targeted exploration}\label{sec:safe_exp}
Motivated by the black-box metric described above, we focus on collecting trajectories, comprising control sequences and the pressure sequences measured upon their execution, to form a training dataset. Due to safety and complexity issues, we cannot hope to exhaustively explore the space of all trajectories. Instead, keeping the eventual control task in mind, we choose to explore trajectories \textit{near} the control sequence generated by a baseline PID controller. The goal is to have the simulator faithfully capture the true dynamics in a reasonably large vicinity of the optimal control trajectory on the true system. To this end, for each of the lung settings, we collect data by choosing a safe PID controller baseline and introducing random exploratory perturbations according to the following two policies:
\begin{enumerate}
\item Boundary exploration: To the very beginning of the inhalation, add an additional control sampled uniformly from $(c^a_{\min}, c^a_{\max})$ and decrease this additive control linearly to zero over a time frame sampled randomly from $(t^a_{\min}, t^a_{\max})$;
\item Triangular exploration: sample a maximal additional control from a range $(c^b_{\min}, c^b_{\max})$ and an interval $(t^b_{\min}, t^b_{\max})$ within the inhalation. Start from $0$ additional control at time $t^b_{\min}$, increase the additional control linearly until $(t^b_{\min} + t^b_{\max})/2$, and then decrease it linearly to $0$ by $t^b_{\max}$.
\end{enumerate}
For each breath during data collection, we choose policy $(a)$ with probability $p_a$ and policy $(b)$ with probability $(1-p_a)$. The ranges in $(a)$ and $(b)$ are lung-specific. We give the exact values used in the Appendix.
This protocol balances the need to explore a significant part of the state space with the need to ensure safety. The boundary exploration capitalizes on the fact that, at the beginning of the breath, exploration is both safer and more valuable: safer because the lung is at steady state, and more valuable because the typical target waveform for inhalation requires a rapid pressure increase followed by a quick switch to stabilization, making the dynamics in the early phases of a breath especially important to capture. The structure of the triangular exploration is inspired by the need for a persistent exploration strategy (similar ideas exist in \cite{dabney2020temporally}) that can capture the intrinsic delay in the system. We illustrate this approach in Figure \ref{fig:exp_data}: control inputs used in our exploration policy are shown on the top, and the pressure measurements of the ventilator-lung system on the bottom. Precise parameters for our exploration policy are listed in Table \ref{table:appendix-sim-data} in the Appendix.
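A minimal sketch of the perturbation sampling is given below. The conversion of the time ranges in Table \ref{table:appendix-sim-data} (given in seconds) to discrete steps via an assumed step length \texttt{dt} of 0.03\,s, and the dictionary-based configuration, are illustrative assumptions.
\begin{verbatim}
import random

def sample_perturbation(T, cfg, dt=0.03, p_a=0.25):
    """Additive exploration signal for one breath of T steps (illustrative)."""
    extra = [0.0] * T
    if random.random() < p_a:                             # (a) boundary exploration
        c = random.uniform(*cfg["c_a"])
        t_end = min(int(random.uniform(*cfg["t_a"]) / dt), T)
        for t in range(t_end):
            extra[t] = c * (1.0 - t / t_end)              # linear decay to zero
    else:                                                 # (b) triangular exploration
        c = random.uniform(*cfg["c_b"])
        t0, t1 = sorted(min(int(random.uniform(*cfg["t_b"]) / dt), T)
                        for _ in range(2))
        mid = (t0 + t1) // 2
        for t in range(t0, t1):
            rise = (t - t0) / max(mid - t0, 1)
            fall = (t1 - t) / max(t1 - mid, 1)
            extra[t] = c * (rise if t <= mid else fall)   # linear rise then fall
    return extra
\end{verbatim}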
\begin{figure}\centering
\includegraphics[width=8cm]{figures/exploratory.pdf}
\caption{We overlay the controls and pressures from all inspiratory phases in the upper and lower plots, respectively. From this example of the simulator training data (lung setting $R=5, C=50$), we see that we explore a wide range of control inputs (upper plot), but a more limited ``safe'' range around the resulting pressures.}\label{fig:exp_data}
\end{figure}
\subsection{Model architecture}\label{sec:sim_model}
Now we describe the architectural details of our data-driven simulator. Due to the inherent differences across lungs, we opt to learn a different simulator for each of the tasks, which we can wrap into a single meta-simulator through code that selects the appropriate model based on a user's input of $R$ and $C$ parameters.
\paragraph{Training task(s).} The simulator aims to learn the unknown dynamics of the inhalation phase. We approximate the state of the system (which is not observable to us) by the sequences of past pressures and controls, up to history lengths of $H_p$ and $H_c$ respectively. The task of the simulator can now be distilled down to predicting the next pressure $p_{t+1}$, based on the past $H_c$ controls $u_{t},\ldots, u_{t-H_c}$ and the past $H_p$ pressures $p_{t},\ldots, p_{t-H_p}$. We define the training task by constructing a regression dataset whose inputs come from contiguous overlapping windows of lengths $H_p, H_c$ within the collected trajectories, and whose target is the following pressure.
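The construction of this regression dataset can be sketched as follows; storing each trajectory as a (pressures, controls) pair of equal-length arrays is an assumed convention.
\begin{verbatim}
import numpy as np

def build_dataset(trajectories, H_p, H_c):
    """Slice trajectories into (history window -> next pressure) pairs."""
    X, y = [], []
    H = max(H_p, H_c)
    for pressures, controls in trajectories:
        pressures, controls = np.asarray(pressures), np.asarray(controls)
        for t in range(H - 1, len(pressures) - 1):
            window = np.concatenate(
                [pressures[t - H_p + 1 : t + 1],   # last H_p pressures
                 controls[t - H_c + 1 : t + 1]])   # last H_c controls
            X.append(window)
            y.append(pressures[t + 1])             # target: next pressure
    return np.stack(X), np.array(y)
\end{verbatim}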
\paragraph{Boundary model.} Towards further improving simulator performance, we found that the model must distinguish between the behavior of the dynamics during the ``rise'' and ``stabilize'' phases of an inhalation. We therefore learn a collection of individual models for the very beginning of the inhalation/episode and a general model for the rest of the inhalation, mirroring our choice of exploration policies. This proves to be very helpful, as the dynamics at the very beginning of an inhalation are transient, and also extremely important to get right due to downstream effects. Concretely, our final model stitches together a list of $N_B$ boundary models and a general model, whose training tasks are as described earlier (details found in Appendix \ref{app:sim}, Table \ref{table:appendix-sim-training}).
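A minimal sketch of the stitching logic, assuming one boundary model per initial time step and callable per-model predictors over a placeholder feature sequence:
\begin{verbatim}
def predict_episode(boundary_models, general_model, feature_seq):
    """Stitch N_B per-step boundary models with one general model."""
    N_B = len(boundary_models)
    preds = []
    for t, feats in enumerate(feature_seq):
        # Transient steps use dedicated boundary models; later steps
        # fall back to the general model.
        model = boundary_models[t] if t < N_B else general_model
        preds.append(model(feats))
    return preds
\end{verbatim}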
\section{Data collection}
\label{app:data}
\subsection{PID residual exploration} The following table describes the settings for determining policies $(a)$ and $(b)$ for collecting simulator training data as described in Section \ref{sec:sim}.
\label{app:data-sim-explore}
\hfill \break
\begin{table}[!h]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$(R, C)$ & $(P, I, D)$ & $(c^a_{\min}, c^a_{\max})$ & $(t^a_{\min}, t^a_{\max})$ & $(c^b_{\min}, c^b_{\max})$ & $(t^b_{\min}, t^b_{\max})$ & $p_a$\\
\hline
(5, 10) & (1, 0.5, 0) & (50, 100) & (0.3, 0.6) & (-20, 40) & (0.1, 0.5) & 0.25 \\
(5, 20) & (1, 3, 0) & (50, 100) & (0.4, 0.8) & (-20, 60) & (0.1, 0.5) & 0.25 \\
(5, 50) & (2, 4, 0) & (75, 100) & (1.0, 1.5) & (-20, 60) & (0.1, 0.5) & 0.25 \\
(20, 10) & (1, 0.5, 0) & (50, 100) & (0.3, 0.6) & (-20, 40) & (0.1, 0.5) & 0.25 \\
(20, 20) & (0, 3, 0) & (30, 60) & (0.5, 1.0) & (-20, 40) & (0.1, 0.5) & 0.25 \\
(20, 50) & (0, 4, 0) & (70, 100) & (1.0, 1.5) & (-20, 40) & (0.1, 0.5) & 0.25 \\
\hline
\end{tabular}
\caption{Parameters for exploring in the boundary of a PID controller}
\label{table:appendix-sim-data}
\end{table}
\subsection{PID grid search}
\label{app:data-best-pid}
For each lung setting, we run a grid search over the $P$ and $I$ coefficients (with values $[0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, $ $1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]$ each, and $D$ fixed to $0$). For each grid point, we target six different waveforms (with identical PEEP and breaths per minute, but varying PIP over $[10, 15, 20, 25, 30, 35]$ cmH2O). This gives us 2,400 trajectories for each lung setting. We determine a score for the run by averaging the $L_1$ loss between the actual and target pressures, ignoring the first breath. Each run lasts 300 time steps (approximately 9 seconds, or three breaths), which we have found to give sufficiently consistent results compared with a longer run.
Of note, some of our coefficients reach the maximum grid value (i.e., 10.0). We explored going beyond 10 but found that performance actually degrades quickly, since a quickly rising pressure is offset by subsequent overshooting and/or ringing.
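The grid search can be sketched as follows; \texttt{score\_fn} is a placeholder that runs one 300-step trial on the physical lung for the given coefficients and target PIP and returns its average $L_1$ deviation.
\begin{verbatim}
import itertools

COEFFS = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9,
          1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
PIPS = [10, 15, 20, 25, 30, 35]

def grid_search(score_fn):
    """Exhaustive P/I grid search (D fixed to 0), scored by mean L1 error."""
    best = None
    for p, i in itertools.product(COEFFS, COEFFS):
        score = sum(score_fn(p, i, 0.0, pip) for pip in PIPS) / len(PIPS)
        if best is None or score < best[0]:
            best = (score, p, i)
    return best  # (best score, P, I)
\end{verbatim}
With 20 values each for $P$ and $I$ and six waveforms, this reproduces the 2,400 trials per lung setting quoted above.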
\hfill \break
\begin{table}[H]
\centering
\begin{tabular}{ |c|c|c|c| }
\hline
$(R, C)$ & $P$ & $I$ & $D$\\
\hline
(5, 10) & 10.0 & 0.2 & 0.0\\
(5, 20) & 10.0 & 10.0 & 0.0\\
(5, 50) & 10.0 & 10.0 & 0.0\\
(20, 10) & 8.0 & 1.0 & 0.0\\
(20, 20) & 5.0 & 10.0 & 0.0\\
(20, 50) & 5.0 & 10.0 & 0.0\\
\hline
\end{tabular}
\caption{$P$ and $I$ coefficients that give the best L1 controller performance relative to the target waveform averaged across the six waveforms associated with $PIP=[10, 15, 20, 25, 30, 35]$.}
\label{table:appendix-best-pid}
\end{table}
\section{Simulator details}
\label{app:sim}
\subsection{Evaluation}
\label{app:sim-evaluation}
\paragraph{Open-loop test.} To validate the simulator's performance, we hold out 20\% of the trajectory data we collected, including residual exploration. We replay on the simulator the exact sequence of controls recorded during the lung execution. We define the point-wise error as the absolute distance between the pressure observed on the real lung and the corresponding output of the simulator, i.e. $\text{err}_t = \lvert p_t^{sim} - p_t^{lung} \rvert$. We assess the MAE loss corresponding to the errors accumulated across all test trajectories.
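A sketch of this open-loop evaluation, assuming a placeholder \texttt{simulator.reset()}/\texttt{simulator.step(u)} rollout interface:
\begin{verbatim}
import numpy as np

def open_loop_mae(simulator, test_trajectories):
    """Average |p_sim - p_lung| over held-out trajectories."""
    errs = []
    for pressures, controls in test_trajectories:
        simulator.reset()
        for p_true, u in zip(pressures, controls):
            p_sim = simulator.step(u)          # replay the recorded control
            errs.append(abs(p_sim - p_true))   # point-wise open-loop error
    return float(np.mean(errs))
\end{verbatim}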
The following table contains the optimal objective values achieved via the above training and evaluation, along with an architecture search over the parameters $H_p$ (pressure window), $H_c$ (control window), $W$ (width), $d$ (depth), and $N_B$ (number of boundary models).
\begin{table}[]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\toprule
(R,C) & Open-loop Average MAE & $d$ & $N_B$ & $H_p$ & $W$ & $H_c$ \\
\midrule
(5,10) & 0.64 & 9.0 & 1.0 & 5.0 & 150.0 & 10.0 \\
(5,20) & 0.72 & 6.0 & 1.0 & 5.0 & 100.0 & 5.0 \\
(5,50) & 0.39 & 6.0 & 1.0 & 10.0 & 150.0 & 10.0 \\
(20, 10) & 0.72 & 9.0 & 1.0 & 10.0 & 100.0 & 10.0 \\
(20, 20) & 0.60 & 9.0 & 1.0 & 10.0 & 150.0 & 10.0 \\
(20, 50) & 0.85 & 9.0 & 1.0 & 10.0 & 150.0 & 10.0 \\
\bottomrule
\end{tabular}
\caption{Mean absolute error for the open-loop test for each lung setting, under the optimal architectural parameters.}
\label{table:appendix-sim-training}
\end{table}
\begin{table}[]
\centering
\begin{tabular}{ccccc}
\begin{tabular}{|c|c|}
\toprule
$d$ & MAE \\
\midrule
3.0 & 0.696342 \\
6.0 & 0.649357 \\
9.0 & 0.643421 \\
\bottomrule
\end{tabular} &
\begin{tabular}{|c|c|}
\toprule
$N_B$ & MAE \\
\midrule
1.0 & 0.643421 \\
3.0 & 0.676724 \\
5.0 & 0.718358 \\
\bottomrule
\end{tabular} &
\begin{tabular}{|c|c|}
\toprule
$H_p$ & MAE \\
\midrule
3.0 & 0.649357 \\
5.0 & 0.643421 \\
10.0 & 0.647772 \\
\bottomrule
\end{tabular} &
\begin{tabular}{|c|c|}
\toprule
$W$ & MAE \\
\midrule
50.0 & 0.679061 \\
100.0 & 0.650625 \\
150.0 & 0.643421 \\
\bottomrule
\end{tabular} &
\begin{tabular}{|c|c|}
\toprule
$H_c$ & MAE \\
\midrule
3.0 & 0.675960 \\
5.0 & 0.647772 \\
10.0 & 0.643421 \\
\bottomrule
\end{tabular}
\end{tabular}
\caption{Open-loop errors across multiple dimensions of the architecture search for one lung setting ($R=5, C=10$). We see that while more expressive networks and featurizations lead to gains, the relative gains plateau quickly. Similar trends are observed across lung settings.}
\label{tab:my_label}
\end{table}
\paragraph{Trajectory comparison.} In addition to the open-loop test, we compare the true trajectories to simulated ones as described in Section \ref{sec:sim}.
\hfill \break
\begin{table}[H]
\centering
\begin{tabular}{ccc}
\includegraphics[width=55mm]{figures/sim-testing-R5-C10.pdf} &
\includegraphics[width=55mm]{figures/sim-testing-R5-C20.pdf} &
\includegraphics[width=55mm]{figures/sim-testing-R5-C50.pdf} \\
\small R=5, C=10 & R=5, C=20 & R=5, C=50 \\
\includegraphics[width=55mm]{figures/sim-testing-R20-C10.pdf} &
\includegraphics[width=55mm]{figures/sim-testing-R20-C20.pdf} &
\includegraphics[width=55mm]{figures/sim-testing-R20-C50.pdf} \\
\small R=20, C=10 & R=20, C=20 & R=20, C=50
\end{tabular}
\caption{We plot both open-loop testing and pressure trajectories for each of the six simulators corresponding to the six lung settings under consideration. These plots are described further in Section \ref{sec:sim}.}
\label{table:appendix-sim-plots}
\end{table}
\section{Controller details}
\label{app:controller}
\subsection{Training hyperparameters}
\label{app:controller-hyperparmeters}
We use an initial learning rate of $10^{-1}$ and weight decay $10^{-5}$ over 30 epochs.
\subsection{Training a controller across multiple simulators}
\label{app:controller-global}
For the generalization task, we train controllers across multiple simulators corresponding to several lung settings ($R = 20$, $C \in \{10, 20, 50\}$ in our case). For each target waveform (there are six, one for each PIP in $[10, 15, 20, 25, 30, 35]$ cmH2O) and each simulator, we train the controller round-robin (i.e., one after another, sequentially) once per epoch. We zero out the gradients between epochs.
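A sketch of this round-robin schedule is shown below; \texttt{rollout\_loss} is a placeholder returning the differentiable $L_1$ tracking loss of one simulated inhalation, and the exact gradient-accumulation scheme is our reading of the description above.
\begin{verbatim}
def train_global(controller, simulators, waveforms, optimizer,
                 rollout_loss, epochs=30):
    """Round-robin training of one controller across several simulators."""
    for _ in range(epochs):
        for wf in waveforms:                 # six target waveforms
            for sim in simulators:           # visit each simulator in turn
                loss = rollout_loss(controller, sim, wf)
                loss.backward()              # accumulate within the epoch
                optimizer.step()
        optimizer.zero_grad()                # zero gradients between epochs
    return controller
\end{verbatim}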
\chapter*{Executive Summary}
\addtocentrydefault{chapter}{}{Executive Summary}
\markboth{Executive Summary}{Executive Summary}
\ifx1\undefined\else
This deliverable reports on the activities in EuroLab-4-HPC WP2 Research during the final year (M13 to M24) of the project. It contains the EuroLab-4-HPC Long-Term Vision for High Performance Computing, as well as a final chapter proposing topics for the future EuroLab-4-HPC Centre of Excellence portfolio.
The Long-Term Vision also exists as a separate public document that will be printed and distributed within the HPC community.
\fi
Radical changes in computing are foreseen for the next decade. The US IEEE society wants to ``reboot computing'' and the HiPEAC Vision 2017 sees the time to ``re-invent computing'', both by challenging its basic assumptions. This document presents the ``EuroLab-4-HPC Long-Term Vision on High-Performance Computing'' of August 2017, a road mapping effort within the EC CSA\footnote{European Commission Community and Support Action} Eurolab-4-HPC that targets potential changes in hardware, software, and applications in High-Performance Computing (HPC).
The objective of the Eurolab-4-HPC vision is to provide a long-term roadmap from
2023 to 2030 for High-Performance Computing (HPC). Because of the long-term perspective and its speculative nature, the authors started with an assessment of future computing technologies that could influence HPC hardware and software. The proposal on research topics is derived from the report and discussions within the road mapping expert group. We prefer the term ``vision'' over ``roadmap'', firstly because timings are hard to predict given the long-term perspective, and secondly because EuroLab-4-HPC will have no direct control over the realization of its vision.
\subsection*{The Big Picture}
High-performance computing (HPC) typically targets scientific and engineering simulations with numerical programs mostly based on floating-point computations. We expect the scaling of such scientific and engineering applications to continue well beyond Exascale computers. As just one example, the NASA CFD roadmap from 2014 envisions scaling of Computational Fluid Dynamics to Zetascale by 2030\footnote{CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences. NASA/CR-2014-218178}.
However, three trends are changing the landscape for high-performance computing and supercomputers. The first trend is the emergence of data analytics complementing simulation in scientific discovery. While simulation still remains a major pillar for science, there are massive volumes of scientific data that are now gathered by sensors augmenting data from simulation available for analysis.
High-Performance Data Analysis (HPDA) will complement simulation in future HPC applications.
The second trend is the emergence of cloud computing and warehouse-scale computers (also known as data centres). Data centres consist of low-cost volume processing, networking and storage servers, aiming at cost-effective data manipulation at unprecedented scales. The scale at which they host and manipulate (e.g., personal, business) data has led to fundamental breakthroughs in data analytics.
Massive data analytics faces a myriad of challenges, including management of highly distributed data sources; tracking of data provenance; data validation; mitigation of sampling bias and heterogeneity; data format diversity and integrity; integration, security, privacy, sharing and visualization; and massively parallel and distributed algorithms for incremental and/or real-time analysis.
Large datacentres are fundamentally different from traditional supercomputers in their design, operation and software structures. Particularly, big data applications in data centres and cloud computing centres require different algorithms and differ significantly from traditional HPC applications such that they may not require the same computer structures.
With modern HPC platforms being increasingly built using volume servers (90\% of the systems in the June 2017 TOP500 list are based on Intel Xeon), there are a number of features that are shared among warehouse-scale computers and modern HPC platforms, including dynamic resource allocation and management, high utilization, parallelization and acceleration, robustness and infrastructure costs. These shared concerns will serve as incentives for the convergence of the platforms.
There are, meanwhile, a number of ways that traditional HPC systems differ from modern warehouse-scale computers: efficient virtualization, adverse network topologies and fabrics in cloud platforms, low memory and storage bandwidth in volume servers. HPC customers must adapt to co-exist with cloud services; warehouse-scale computer operators must innovate technologies to support the workload and platform at the intersection of commercial and scientific computing.
It is unclear whether a convergence of HPC with big data applications will arise. Investigating hardware and software structures targeting such a convergence is of high research and commercial interest.
However, some HPC applications will be executed more economically on data centres. Exascale and post-Exascale supercomputers could become a niche for HPC applications.
The third trend arises from Deep Neural Networks (DNNs) for back-propagation learning of complex patterns, which have emerged as a new technique penetrating different application areas. DNN learning requires high performance and is often run on high-performance supercomputers. Recent GPU accelerators are seen as very effective for DNN computing thanks to enhancements such as support for 16-bit floating point and tensor processing units.
It is widely assumed that DNNs will be applied in future autonomous cars, thus opening a very large market segment for embedded HPC. DNNs will also be applied in engineering simulations traditionally running on HPC supercomputers.
Demand for embedded high-performance computing is emerging. It concerns smartphones, but also applications like autonomous driving that require on-board high-performance computers. In particular, the trend from current advanced ADAS (advanced driver assistance systems) to piloted driving (2018–2020) and to fully autonomous cars in the next decade will increase on-board performance requirements, and such systems may even be coupled with high-performance supercomputers in the Cloud. The target is to develop systems that adapt more quickly to changing environments, opening the door to highly automated and autonomous transport, capable of eliminating human error in control, guidance and navigation and so leading to more safety. High-performance computing devices in cyber-physical systems will have to fulfil further non-functional requirements such as timeliness, (very) low energy consumption, security and safety.
However, further applications will emerge that may be unknown today or that receive a much higher importance than expected today.
Power and thermal management is considered highly important and will remain a priority in the future. Post-Exascale computers will target more than 1 Exaflops with less than 30 MW of power consumption, requiring processors with a much better performance per Watt than available today. On the other hand, embedded computing needs high performance at low energy consumption. The target at the hardware level is largely the same: high performance per Watt.
In addition to mastering the technical challenges, reducing the environmental impact of upcoming computing infrastructures is also an important matter. Reducing CO\textsubscript{2} emissions and overall power consumption should be pursued. A combination of hardware techniques, such as new processor cores, accelerators, memory and interconnect technologies, and software techniques for energy and power management will need to be cooperatively deployed in order to deliver energy-efficient solutions.
Because of the foreseeable end of CMOS scaling, new technologies are under development, such as, for example, Die Stacking and 3D Chip Technologies, Non-volatile Memory (NVM) Technologies, Photonics, Resistive Computing, Neuromorphic Computing, Quantum Computing, Nanotubes, Graphene, and diamond-based transistors. Since it is uncertain if/when some of the technologies will mature, it is hard to predict which ones will prevail.
The particular mix of technologies that achieve commercial success will strongly impact the hardware and software architectures of future HPC systems, in particular the processor logic itself, the (deeper) memory hierarchy, and new heterogeneous accelerators.
There is a clear trend towards more complex systems, which is expected to continue over the next decade. These developments will significantly increase software complexity, demanding more and more intelligence across the programming environment, including compiler, run-time and tool intelligence driven by appropriate programming models. Manual optimization of the data layout, placement, and caching will become uneconomic and time consuming, and will, in any case, soon exceed the abilities of the best human programmers.
If accurate results are not necessarily needed, another speedup could emerge from more efficient special execution units, based on analog, or even a mix between analog and digital technologies. Such developments would benefit from more advanced ways to reason about the permissible degree of inaccuracy in calculations at run time. Furthermore, new memory technologies like memristors may allow on-chip integration, enabling tightly-coupled communication between the memory and the processing unit. With the help of memory computing algorithms, data could be pre-processed ``in-'' or ``near-'' memory.
But it is also possible that new hardware developments reduce software complexity. New materials like graphene, nanotubes and diamonds could be used to run processors at much higher frequencies than are currently possible, and with that, may even enable a significant increase in the performance of single-threaded programs.
Optical networks on die and Terahertz-based connections may eliminate the need for preserving locality since the access time to local storage may not be as significant in future as it is today. Such advancements will lead to storage-class memory, which features similar speed, addressability and cost as DRAM combined with the non-volatility of storage. In the context of HPC, such memory may reduce the cost of checkpointing or eliminate it entirely.
The adoption of neuromorphic, resistive and/or quantum computing as new accelerators may have a dramatic effect on the system software and programming models. It is currently unclear whether it will be sufficient to offload tasks, as on GPUs, or whether more dramatic changes will be needed. By 2030, disruptive technologies may have forced the introduction of new and currently unknown abstractions that are very different from today. Such new programming abstractions may include domain-specific languages that provide greater opportunities for automatic optimization. Automatic optimization requires advanced techniques in the compiler and runtime system. We also need ways to express non-functional properties of software in order to trade various metrics: performance vs. energy, or accuracy vs. cost, both of which may become more relevant with near threshold, approximate computing or accelerators.
Nevertheless, today's abstractions will continue to evolve incrementally and will continue to be used well beyond 2030, since scientific codebases have very long lifetimes, on the order of decades.
Execution environments will increase in complexity requiring more intelligence, e.g., to manage, analyse and debug millions of parallel threads running on heterogeneous hardware with a diversity of accelerators, while dynamically adapting to failures and performance variability. Spotting anomalous behavior may be viewed as a big data problem, requiring techniques from data mining, clustering and structure detection. This requires an evolution of the incumbent standards such as OpenMP to provide higher-level abstractions. An important question is whether and to what degree these fundamental abstractions may be impacted by disruptive technologies.
\subsection*{The Work Needed}
As new technologies require major changes across the stack, a vertical funding approach is needed, from applications and software systems through to new hardware architectures and potentially down to the enabling technologies. We see HP Labs' memory-driven computing architecture ``The Machine'' as an exemplary project that proposes a low-latency NVM (Non-Volatile Memory) based memory connected by photonics to processor cores. Projects could be based on multiple new technologies and similarly explore hardware and software structures and potential applications. The required research will be interdisciplinary. Stakeholders will come from academic and industrial research.
\subsection*{The Opportunity}
The opportunity is the development of competitive new hardware/software technologies, based on upcoming new technologies, that position European industry advantageously for the future. Target areas could be High-Performance Computing and embedded high-performance devices. The drawback could be that the chosen base technology does not prevail but is replaced by a different technology. For this reason, efforts should be made to ensure that aspects of the developed hardware architectures, system architectures and software systems can also be applied to alternative prevailing technologies. For instance, several NVM technologies will bring up new memory devices that are several orders of magnitude faster than current Flash technology, and the developed system structures may easily be adapted to the specific prevailing technologies, even if the project has chosen a different NVM technology as its basis.
\subsection*{EC Funding Proposals}
The Eurolab4HPC vision recommends the following funding opportunities for topics beyond Horizon 2020 (ICT):
\begin{itemize}
\item Convergence of HPC and HPDA:
\begin{itemize}
\item Data Science, Cloud computing and HPC: Big Data meets HPC
\item Inter-operability and integration
\item Limitations of clouds for HPC
\item Edge Computing: local computation for processing near sensors
\end{itemize}
\item Impact of new NVMs:
\begin{itemize}
\item Memory hierarchies based on new NVMs
\item Near- and in-memory processing: pre- and post-processing in (non-volatile) memory
\item HPC system software based on new memory hierarchies
\item Impact on checkpointing and resiliency
\end{itemize}
\item Programmability:
\begin{itemize}
\item Hide new memory layers and HW accelerators from users by abstractions and intelligent programming environments
\item Monitoring of a trillion threads
\item Algorithm-based fault tolerance techniques within the application as well as moving fault detection burden to the library, e.g. fault-tolerant message-passing library
\end{itemize}
\item Green ICT and Energy
\begin{itemize}
\item Integration of cooling and electrical subsystem
\item Supercomputer as a whole system for Green ICT
\end{itemize}
\end{itemize}
As remarked above, projects should be interdisciplinary, from applications and software systems through hardware architectures and, where relevant, enabling hardware technologies.
\section*{Overall Editors and Authors}
Prof. Dr. Theo Ungerer, University of Augsburg\\
Dr. Paul Carpenter, BSC, Barcelona
\vskip4\baselineskip
\section*{Authors}
\tabularz{p{4cm},p{4cm},X}{
Nader Bagherzadeh & University of California, Irvine & Die Stacking and 3D-Chips \\
Sandro Bartolini & University of Siena & Photonics \\
Luca Benini & ETH Zürich & Die Stacking and 3D-Chips \\
Koen Bertels & Delft University of Technology & Quantum Computing \\
Fran\c{c}ois Bodin & University of Rennes & Overall Comments \\
Jose Manuel García Carrasco & University of Murcia & Photonics \\
Koen De Bosschere & Ghent University & Overall Comments \\
Marc Duranton & CEA LIST DACLE & Overall Comments \\
Babak Falsafi & Ecole Polytechnique Federale de Lausanne & Data Centre and Cloud Computing, Green ICT and Resiliency \\
Dietmar Fey & University of Erlangen-Nuremberg & Memristors \\
Said Hamdioui & Delft University of Technology & Quantum and Resistive Computing \\
Christian Hochberger & Technical University of Darmstadt & Nanotubes and Nanowires, Graphene, Diamond Transistors \\
Avi Mendelson & Technion & Diamond Computing, Hardware Impact \\
Benjamin Pfundt & University of Erlangen-Nuremberg & 3D Stacking, Memristors, Resistive Computing \\
Ulrich Rückert & University of Bielefeld & Neuromorphic Computing \\
Igor Zacharov & Eurotech & Green ICT and Resiliency \\
}
\vfill
\section*{Compiled by}
\tabularz{X,p{4cm}}{
Rico Amslinger, Martin Frieb, Florian Haas, Christian Mellwig, Jörg Mische,\newline Alexander Stegmeier, Sebastian Weis & University of Augsburg \\
}
\vfill
\parbox[t][2\baselineskip][t]{\textwidth}{
We also acknowledge the members of the Working Groups during the first year, as
well as the numerous people that provided valuable feedback at the
roadmapping workshops at HiPEAC~CSW and HPC~Summit, to HiPEAC and EXDCI for hosting
the workshops and Xavier Salazar for the organizational support.
}
\chapter{Introduction}
Upcoming application trends and disruptive VLSI technologies will change the way computers will be programmed and used, as well as the way computers will be designed. New application trends such as High-Performance Data Analysis (HPDA) and deep learning will induce changes in High-Performance Computing; disruptive technologies will change the memory hierarchy and hardware accelerators, and may even lead to new ways of computing. The HiPEAC Vision 2017\footnote{\url{www.hipeac.net/publications/vision}} sees the time to revisit the basic concepts: the US wants to ``reboot computing''; the HiPEAC Vision proposes to ``re-invent computing'' by challenging basic assumptions such as binary coding, interrupts, and layers of memory, storage and computation.
Exascale does not merely refer to a LINPACK $R_{\mathrm{max}}$ of 1 Exaflops. The PathForward definition of a capable Exascale system is a good one, as it focuses on scientific problems rather than benchmarks, as well as raising the core challenges of power consumption and resiliency: ``a supercomputer that can solve science problems 50X faster (or more complex) than on the 20 Petaflop systems (Titan and Sequoia) of today in a power envelope of 20-30 megawatts, and is sufficiently resilient that user intervention due to hardware or system faults is on the order of a week on average''~\cite{Hemsoth2016missing}.
This document has been funded by the EC CSA Eurolab-4-HPC (Sept. 2015 – August 2017) project. It outlines a long-term vision for excellence in European High-Performance Computing research, with a timescale beyond Exascale computers, i.e. a timespan of approximately 2023-2030.
\section{Current Proposals for Exascale Machines}
\paragraph{USA:} Today's leading organizations are using machine learning-based tools to automate decision processes, and they are starting to experiment with more advanced uses of artificial intelligence (AI) for digital transformation. Corporate investment in artificial intelligence is predicted to triple in 2017, becoming a \$100 billion market by 2025~\cite{Wellers2017}.
The U.S. Department of Energy --- and the hardware vendors it partners with --- are set to enliven the Exascale effort with nearly a half billion dollars in research, development, and deployment investments. The push is led by the DoE's Exascale Computing Project and its extended PathForward program, landing us in the 2021 -- 2022 timeframe with ``at least one'' Exascale system. This roadmap was confirmed in June 2017 with a DoE announcement that backs six HPC companies as they create the elements for next-generation systems. The vendors on this list include Intel, Nvidia, Cray, IBM, AMD, and Hewlett Packard Enterprise (HPE)~\cite{hemsoth2017american}.
\paragraph{China} currently possesses the fastest supercomputer in the world, called the Sunway TaihuLight. The supercomputer is theoretically capable of 124.5 Petaflops of performance, making it the first computer system to surpass 100 Petaflops. Interestingly, this supercomputer contains entirely Chinese-made processing chips. In January, China said it would soon have the world's first Exascale supercomputer prototype up and running, and a completed Exascale supercomputer by 2020~\cite{tilley2017}.
This year, China is aiming for breakthroughs in high-performance processors and other key technologies to build the world's first prototype Exascale supercomputer, the Tianhe-3, said Meng Xiangfei, the director of applications at the National Super Computer Tianjin Center. ``The prototype is expected to be completed in early 2018. Tianhe-3 will be made entirely in China, from processors to operating system. It will be stationed in Tianjin and fully operational by 2020, earlier than the US plan for its Exascale supercomputer''~\cite{zhihao2017}.
The Exascale supercomputer will be able to analyse smog distribution on a national level, while current models can only handle a district. Tianhe-3 could also simulate earthquakes and epidemic outbreaks in more detail, allowing swifter and more effective government responses. The new machine will also be able to analyse gene sequences and protein structures at unprecedented scale and speed. That may lead to new discoveries and more potent medicine, he said~\cite{zhihao2017}.
\paragraph{Japan:} The successor to the K supercomputer, which is being developed under the Flagship2020 program, will use ARM-based processors, and these chips will be at the heart of a new system built by Fujitsu for RIKEN (Japan's Institute of Physical and Chemical Research) that would break the Exaflops barrier by 2020~\cite{morgan2016japan}.
\paragraph{European Community:} EC President Juncker has declared that the European Union has to be competitive in the international arena with regard to the USA, China, Japan and other stakeholders, in order to enhance and promote the European industry in the public as well as the private sector related to HPC.~\cite{emmen2017}
The first step will be ``Extreme-Scale Demonstrators'' (EsDs) that should provide pre-Exascale platforms deployed by HPC centres and used by Centres of Excellence for the production of new and relevant applications. Such demonstrators are planned by the ETP4HPC Initiative and included in the EC LEIT-ICT 2018 calls. At project end, the EsDs will have a high TRL (Technology Readiness Level) that will enable stable application production at reasonable scale~\cite{etp4hpcsra}.
The EuroHPC Initiative is based on a Memorandum of Understanding that was signed on March 23, 2017 in Rome. It plans for the creation of two pre-Exascale machines, followed by the delivery of two machines that are actually Exascale. There are many things to consider, such as the creation of a microprocessor with European technology and the integration of this microprocessor in the European Exascale machines~\cite{emmen2017}. IPCEI (Important Project of Common European Interest) is another parallel initiative, related to EuroHPC. The IPCEI for HPC at the moment involves France, Italy, Spain, and Luxembourg, but it is also open to other countries in the European Union. If all goes according to plan, the first pre-Exascale machine will be released by 2022 -- 2023. By 2024 -- 2025, the Exascale machines will be delivered~\cite{emmen2017}.
Partly other time lines are shown in the summary on Exascale race as seen by Hyperion at April 20, 2017~\cite{russell2017} (see Figure~\ref{fig-intro-hyperion}).
\begin{figure*}[ht]
\centering
\begin{tikzpicture}
{\sffamily\small
\ifx1\undefined
\fill[background] (-5,3.5) rectangle ++(17,-12);
\else
\fi
\setlength{\fboxsep}{0pt}
\node[align=left] (us) at (-0.5,0.5) {\parbox[t][7cm][c]{7cm}{
\begin{center}\fbox{\includegraphics[scale=0.046]{images/us}}
\parbox[b][0.8cm][c]{1cm}{~~\textbf{U.S.}}\end{center}
\begin{itemize}
\setlength\itemsep{0pt}
\setlength\parskip{0pt}
\item Sustained ES: 2023
\item Peak ES: 2021
\item Vendors: U.S.
\item Processors: U.S.
\item Initiatives: NSCI/ECP
\item Cost: \$\,300 -- \$\,500\,M per system, plus heavy R\&D investments
\end{itemize}
}};
\node[align=left] (eu) at (7,0.5) {\parbox[t][7cm][c]{7cm}{
\begin{center}\fbox{\includegraphics[scale=0.119]{images/eu}}
\parbox[b][0.8cm][c]{1cm}{~~\textbf{EU}}\end{center}
\begin{itemize}
\setlength\itemsep{0pt}
\setlength\parskip{0pt}
\item Sustained ES: 2023 -- 2024
\item Peak ES: 2021
\item Vendors: U.S., Europe
\item Processors: U.S., ARM
\item Initiatives: PRACE, ETP4HPC
\item Cost: \$\,300 -- \$\,350\,M per system, plus heavy R\&D investments
\end{itemize}
}};
\node[align=left] (cn) at (-0.5,-5.5) {\parbox[t][5.5cm][t]{7cm}{
\begin{center}\fbox{\includegraphics[scale=0.05]{images/cn}}
\parbox[b][0.8cm][c]{2cm}{~~\textbf{China}}\end{center}
\begin{itemize}
\setlength\itemsep{0pt}
\setlength\parskip{0pt}
\item Sustained ES: 2023
\item Peak ES: 2020
\item Vendors: Chinese
\item Processors: Chinese (plus U.S.?)
\item 13\textsuperscript{th} 5-Year Plan
\item Cost: \$\,350 -- \$\,500\,M per system, plus heavy R\&D investments
\end{itemize}
}};
\node[align=left] (jp) at (7,-5.5) {\parbox[t][5.5cm][t]{7cm}{
\begin{center}\fbox{\includegraphics[scale=0.05]{images/jp}}
\parbox[b][0.8cm][c]{2cm}{~~\textbf{Japan}}\end{center}
\begin{itemize}
\setlength\itemsep{0pt}
\setlength\parskip{0pt}
\item Sustained ES: 2023 -- 2024
\item Peak ES: Not planned
\item Vendors: Japanese
\item Processors: Japanese
\item Cost: \$\,600 -- \$\,850\,M, this includes both 1 system and the R\&D costs. Will also do many smaller size systems
\end{itemize}
}};
}
\draw[thick] (-3.5,-2.25) -- ++(14,0);
\draw[thick] (3.5,3) -- ++(0,-11);
\end{tikzpicture}
\caption{Summary of the Exascale race as seen by Hyperion on April 20, 2017~\cite{russell2017}}
\label{fig-intro-hyperion}
\end{figure*}
\section{Related Roadmapping Initiatives}
The Eurolab-4-HPC vision complements existing efforts such as the ETP4HPC Strategic Research Agenda (SRA). ETP4HPC is an industry-led initiative to build a globally competitive HPC system value chain. Development of the EuroLab-4-HPC vision is aligned with the ETP4HPC SRA in its latest version, scheduled for September 2017. SRA 2017 targets a roadmap towards Exascale computers that spans until approximately 2022, whereas the Eurolab-4-HPC vision targets the speculative period beyond Exascale, approximately 2023 -- 2030.
The EuroLab-4-HPC vision is developed in close collaboration with the ``HiPEAC Vision'' of the HiPEAC CSA, which covers the broader area of ``High Performance and Embedded Architecture and Compilation''. The EuroLab-4-HPC vision complements the HiPEAC Vision 2017 document with a stronger focus on disruptive technologies and HPC.
The current state of available roadmaps adjacent to the Eurolab-4-HPC vision is shown in Table~\ref{tbl-intro-roadmaps}.
\begin{table*}[ht]
\caption{Current state of available roadmaps adjacent to the Eurolab-4-HPC vision.}
\label{tbl-intro-roadmaps}
\tabularz[header]{p{2.5cm},X,p{2.1cm},p{1.1cm},p{3cm}}{
& Goal & Timespan & SWOT/\newline Political & Scope \\
HiPEAC Vision & Steer European academic research (driven by industry) & Short: 3 years,\newline Mid: 6 years,\newline Long: > 2020 & Yes & HPC + embedded \\
ETP4HPC SRA/EXDCI & Strengthening European (industrial) HPC ecosystem & 6 years\newline (2014 to 2020) & Yes & HPC except applications \\
PRACE Scientific Case &
(Academic) need for European HPC infrastructure &
8 years (2012 to 2020) &
Yes &
HPC applications \\
EESI (European Exascale Software Initiative) &
Development of efficient Exascale applications &
5 to 10 years &
No &
Exascale applications \\
BDVA (Big Data Value Association) &
Big Data technologies roadmap &
2020 &
-- &
Big data \\
Rethink Big &
Roadmap for European Technologies in Hardware and Networking for Big Data &
-- &
-- &
Big data \\
ECSEL MASRIA &
European leadership in enabling and industrial technologies. Competitive EU ECS industry. &
2015 roadmap to about 2025 &
Yes &
Electronic components and systems (ECS) \\
Next Generation Computing Roadmap &
Strengthening European industry &
2014: 10 to 15 years &
-- &
HPC
extensively covered \\
\textbf{Eurolab-4-HPC} &
\textbf{Academic excellence in HPC} &
\textbf{2023 -- 2030} &
\textbf{No} &
\textbf{Whole HPC stack }\\
}
\end{table*}
\section{Working Towards the Eurolab-4-HPC Roadmap/Vision}
The Eurolab-4-HPC vision has been developed as a research roadmap with a substantially longer-term time window than most of the roadmaps shown above. Since the beginning, it has been our target to stick to technical matters and provide an academic research perspective. Because targeting the post-Exascale era with a horizon of approximately 2022 -- 2030 is highly speculative, we proceeded as follows:
\begin{enumerate}
\item Select disruptive technologies that may be technologically feasible in the next decade.
\item Assess the potential hardware architectures and their characteristics.
\item Assess what that could mean for the different working group (WG) topics (concerns all WGs).
\end{enumerate}
The vision roughly follows the structure:
``\emph{IF} technology ready \emph{THEN} foreseeable impact on WG topic could be ...''
The first task performed was to select potentially disruptive technologies and to summarize their potential for the next decade, with the help of experts, in a ``Report on Disruptive Technologies''. The report has reached a stage of maturity, and its impact on hardware and software is presented by working group zero, which forms the basis for all other working groups.
\begin{enumerate}
\setcounter{enumi}{-1}
\item Impact of disruptive technologies\\
(Theo Ungerer, University of Augsburg, Germany)
\end{enumerate}
Aside from working group zero on disruptive technologies, we defined five more working groups and assigned working group leaders:
\begin{enumerate}
\item New technologies and hardware architectures\\
(Avi Mendelson, Technion, Haifa)
\item System software and programming environment\\
(Paul Carpenter, BSC, Barcelona)
\item HPC application requirements\\
(Paul Carpenter, BSC, Barcelona)
\item Vertical challenges: Green ICT, energy and resiliency\\
(Bastian Koller and Axel Tenschert, HLRS, Stuttgart)
\item Convergence of HPC, with IoT and the Cloud\\
(Babak Falsafi, EPFL, Lausanne)
\end{enumerate}
Altogether about 46 contributors signed up to work on the vision.
The timeline of the first year was as follows:
\begin{itemize}
\item 2016, April 30: we delivered input to the EC consultation process regarding ``game changing technology''\footnote{\url{ec.europa.eu/futurium/en/content/fet-proactive}}.
\item 2016, August 31: preliminary roadmap available\footnote{\url{www.eurolab4hpc.eu/roadmap/}}.
\end{itemize}
The preliminary roadmap deliverable was well received by EC reviewers as well as by the HPC public, as shown by an article in The Next Platform of October 12, 2016~\cite{hemsoth2016postexa}.
The EC reviewers' comments were to:
\begin{itemize}
\item enhance the roadmap with proposals of what the EC should be funding,
\item integrate/combine it with the EXDCI/ETP4HPC SRA.
\end{itemize}
Other observations were that the working groups proved not very effective and that the WG sections in the preliminary roadmap were not well aligned with each other.
The mission for the second year was to:
\begin{itemize}
\item form a single \emph{expert working group}:\\
Avi Mendelson, Luca Benini, Babak Falsafi, Sandro Bartolini, Dietmar Fey, Marc Duranton, Fran\c{c}ois Bodin, Simon McIntosh-Smith, Igor Zacharov, Paul Carpenter, Theo Ungerer
\item revisit disruptive technologies and their implications for the current roadmap
\item harmonize, restructure and revise the different roadmap sections
\item recommend potential EC funding opportunities
\end{itemize}
The working schedule for the second year was:
\begin{itemize}
\item 2017-01-23 DTHPC: Workshop on Disruptive Technologies in high-Performance Computing in the Next Decade, talk on roadmap
\item 2017-03-17 Kickoff Telco of expert working group
\item 2017-03-20 Discuss at ETP4HPC SRA Kickoff
\item 2017-04-28 HiPEAC CSW Zagreb: Workshop: ``Towards the Eurolab-4-HPC Long-term Roadmap on High-performance Computing in Years 2022 -- 2030''
\item 2017-05-03 Talk ``Potential Impact of Future Disruptive Technologies on Embedded Multicore Computing'', at AK Multicore, Munich
\item 2017-05-04 same talk at PARS Workshop, Hagen
\item 2017-05-17 at HPC Summit: Roadmapping talk at Workshop of EuroLab-4-HPC: the Future of High-Performance
\item 2017-05-29+30: 1\textonehalf-day roadmap meeting of the expert working group at EPFL Lausanne
\item 2017-06: Experts prepare inputs
\item 2017-07-31: Final Roadmap done
\item 2017-08-31: Final Roadmap deliverable due
\end{itemize}
\section{Document Structure}
The rest of this document is structured as follows: the next section provides some insights into evolutionary applications, i.e., the scaling of existing HPC engineering applications, and into potentially new upcoming applications. Section~\ref{sec-datacenter} covers data centres and cloud computing, eventually leading from HPC to HPDA. Section~\ref{sec-disruptive} focuses on disruptive technologies, followed by Section~\ref{sec-impact}, which summarizes the potential long-term impacts of disruptive technologies for HPC hardware and software in separate subsections. Section~\ref{sec-vertical} covers Green ICT and resiliency as vertical challenges, and finally Section~\ref{sec-system} focuses on system software and the programming environment.
\chapter{Evolutionary and New Upcoming Applications}
Industrial and scientific applications are the \emph{raison d'\^{e}tre} of high-performance computing. Production HPC systems should meet the needs of the users, and they must anticipate future evolutionary and disruptive changes in these requirements. This section collects the main requirements of HPC users, including applications, numerical libraries, and algorithms. The focus is on the impact of HPC requirements on HPC computing systems, rather than the applications themselves.
We expect a continuous scaling of existing HPC engineering applications, but also the combination of HPC engineering applications with data analysis (from HPC to HPDA) and AI techniques. It is important to note that scientific applications have very long lifetimes, measured in decades, which is much longer than in other software domains, and dramatically longer than hardware lifetimes. We also see new applications for HPC and HPDA that may influence future HPC systems. Such new applications can only partly be predicted.
The top three challenges for support of future applications in HPC and HPDA are:
\begin{itemize}
\item Programming environments: the need for suitable abstractions between the application and the underlying complex hardware and storage,
\item System software and management: scalable and smart runtime systems,
\item Big data and usage models: smart processing, visualization, quantification of uncertainties; post-processing takes more time than computation.
\end{itemize}
\section{Strong Scaling Evolutionary HPC Applications}
The workflows of HPC applications are becoming more complex, moving beyond code coupling, resilience, and reproducibility towards applications integrating multiphysics, multiphase models, data assimilation, data analytics, and edge computing.
\subsection*{Importance of Legacy Codes}
GPGPU programming, PGAS and other programming paradigms have not become as widespread as envisioned. Fortran, C++, OpenMP and MPI still dominate.
The cost of code is so huge that rewriting or re-architecting an application is almost infeasible, particularly due to the existence of community codes and codes developed by ISVs (independent software vendors), and due to the specific training of users.
In practice, changes happen in response to extreme events (e.g., the code stops working), not in anticipation; moreover, they cannot be anticipated without access to the new technologies (e.g., NVM).
The larger the code, the more difficult it becomes to change, since both the number of possible failures due to source modifications and the time to run validation tests increase dramatically with size.
The older the code, the higher the probability that the original authors are no longer around and that nobody really masters the innards of the source code.
The cost of evolution is also a slowing factor. A code developer produces around 10,000 lines of validated code (LOC) per year and a LOC costs between 10\,€ and 100\,€~\cite{prace2012}.
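As a rough back-of-the-envelope illustration (our own arithmetic based on the figures above, applied to a hypothetical one-million-LOC community code), the effort and cost embodied in such a code are on the order of
\[
\frac{10^6\ \mbox{LOC}}{10^4\ \mbox{LOC/year}} = 100\ \mbox{person-years},
\qquad
10^6\ \mbox{LOC} \times \mbox{10 \ldots 100\,€} = \mbox{10 \ldots 100\,M€},
\]
which makes clear why rewriting such a code from scratch is rarely an option.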
\section{Potentially New HPC Applications}
We expect a \emph{pull} by new applications and software technologies:
\begin{itemize}
\item High Performance Data Analysis (HPDA): data mining and analysis of big data
\item Pre- and post-processing, and data assimilation
\item Integrate simulation, big data and machine learning
\item Machine Learning: deep learning/neuromorphic in engineering simulation
\item Internet of Things: real-time and interactive analysis and visualisation (Industry 4.0, smart cities, connected autonomic cars)
\item New expert programming languages (DSLs)
\item Approximate computing (concerns both software and hardware)
\end{itemize}
\paragraph{HPDA} will be covered in detail in Section~\ref{sec-datacenter} ``Data Centre and Cloud Computing''.
\paragraph{Machine Learning} is a very popular approach to Artificial Intelligence that trains systems to learn how to make decisions and predict results on their own. Deep learning is a machine learning technique inspired by the neural learning process of the human brain. Deep learning uses deep neural networks (DNNs), so called because of their deep layering of many connected artificial neurons (sometimes called \emph{perceptrons}), which can be trained with enormous amounts of input data to quickly solve complex problems with high accuracy. Once a neural network is trained, it can be deployed and used to identify and classify objects or patterns in a process known as \emph{inference}.
Most neural networks consist of multiple layers of interconnected neurons. Each neuron and layer contributes towards the task that the network has been trained to execute. For example, AlexNet, the Convolutional Neural Network (CNN) that won the 2012 ImageNet competition, consists of eight layers, 650,000 interconnected neurons, and almost 60 million parameters. Today, the complexity of neural networks has increased significantly, with recent networks such as deep residual networks (for example ResNet-152) having more than 150 layers, and millions more connected neurons and parameters.~\cite{nvidiav100}
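To make the layered structure concrete, the following minimal sketch (our own illustration; the layer sizes and random weights are arbitrary placeholders, not those of AlexNet or ResNet) shows inference, i.e. a forward pass, through a tiny two-layer fully connected network in Python/NumPy:
\begin{verbatim}
import numpy as np

def relu(x):
    # Common non-linear activation function
    return np.maximum(0.0, x)

# Arbitrary sizes for illustration: 4 inputs -> 8 hidden neurons -> 3 outputs.
# In a trained network these weights would come from the training process.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

def infer(x):
    # Forward pass (inference) through the two-layer network.
    h = relu(W1 @ x + b1)   # hidden layer of 8 artificial neurons
    return W2 @ h + b2      # output scores, e.g. for 3 classes

print(infer(np.array([0.5, -1.2, 3.1, 0.0])))
\end{verbatim}
Real networks such as ResNet-152 follow the same principle, only with far more layers and parameters.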
Today's leading organizations are using machine learning-based tools to automate decision processes, and they are starting to experiment with more advanced uses of artificial intelligence (AI) for digital transformation. Corporate investment in artificial intelligence is predicted to triple in 2017, becoming a \$\,100 billion market by 2025.~\cite{Wellers2017}
Deep learning brings a shift in how we approach massive-scale simulations. Early applications of deep learning as an approximation approach in HPC take experimental or supercomputer simulation data and use it to train a neural network, then turn that network around in inference mode to replace or augment a traditional simulation. Such an approach is incredibly promising, but it raises the question of understandability and confidence in the results, unless the neural network is able to explain the reasons for its output. Ultimately, by allowing the simulation to become the training set, Exascale-capable resources can be used to scale a more informed simulation, or simply be used as the hardware base for a massively scalable neural network.
On the software side, it means that models can be trained on pre- and post-processing data and certain parts of the application can be scrapped in favour of AI (or numerical approaches can kick in at a certain point using trained data). Either way, applications will have to change.~\cite{hemsoth2017shift}
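A minimal sketch of this surrogate-model idea (our own illustration; the simulator, the sampling scheme and the model choice are placeholders) could look as follows in Python with scikit-learn:
\begin{verbatim}
import numpy as np
from sklearn.neural_network import MLPRegressor

def expensive_simulation(params):
    # Placeholder for an expensive HPC simulation kernel.
    x, y = params
    return np.sin(3 * x) * np.cos(2 * y)

# 1. Generate training data by running the expensive simulator.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(5000, 2))
t = np.array([expensive_simulation(p) for p in X])

# 2. Train a neural-network surrogate on the simulation outputs.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
surrogate.fit(X, t)

# 3. Inference now replaces or augments further expensive runs.
print(surrogate.predict([[0.2, -0.4]]))
\end{verbatim}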
\paragraph{Internet of Things (IoT)} is also having an impact on traditional high-performance computing because of a number of industrial applications that have historically adopted embedded technologies but can benefit from higher performance. Sensors and cyber-physical systems are prominent examples of embedded technologies that require managing and analysing massive amounts of data. In these applications, the embedded systems must collaborate hand in hand to filter and analyse data locally, due to the massive scale of the data generated, before consulting a cloud service for high-quality decisions.
\paragraph{New expert programming languages (DSLs)} Domain-specific languages (DSLs) are computer languages specialized to particular application domains, in contrast to general-purpose languages.~\cite{wikidsl}
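As a toy illustration (our own example, not taken from the cited source), a small embedded DSL in Python can let a domain scientist state a stencil formula while the framework hides how it is executed (vectorized, parallelized, or offloaded):
\begin{verbatim}
import numpy as np

def apply_stencil(grid, formula):
    # Framework side: gathers neighbours and applies the user's formula;
    # the execution strategy stays hidden from the domain scientist.
    c = grid[1:-1, 1:-1]
    n, s = grid[:-2, 1:-1], grid[2:, 1:-1]
    w, e = grid[1:-1, :-2], grid[1:-1, 2:]
    out = grid.copy()
    out[1:-1, 1:-1] = formula(c, n, s, w, e)
    return out

heat = np.zeros((6, 6))
heat[3, 3] = 100.0
# Domain-level statement: a 5-point Jacobi averaging step.
heat = apply_stencil(heat, lambda c, n, s, w, e: (n + s + w + e) / 4)
\end{verbatim}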
\paragraph{Approximate computing} refers to computation that returns a possibly inaccurate result rather than a guaranteed accurate one, for situations where an approximate result is sufficient for the purpose. One example of such a situation is a search engine, where no exact answer may exist for a certain search query and hence many answers may be acceptable. Similarly, the occasional dropping of some frames in a video application can go undetected due to the perceptual limitations of humans~\cite{approximate}. Scientific domains such as weather and climate prediction have had success using lower-precision calculations: 32 bits and even potentially 16 or 8 bits~\cite{dueben2014}.
Approximate computing trades off computation quality with effort expended, and as rising performance demands confront plateauing resource budgets, approximate computing has become not merely attractive, but even imperative. A survey of techniques for approximate computing is provided by~\cite{Mittal2016}.
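A tiny Python/NumPy sketch (our own illustration) of this quality/effort trade-off, where halving the bits per value cuts memory traffic at the cost of a small rounding error:
\begin{verbatim}
import numpy as np

x = np.random.default_rng(2).uniform(size=1_000_000)

exact  = np.sum(x, dtype=np.float64)       # 64-bit reference result
approx = np.sum(x.astype(np.float16),      # 16-bit values: 4x less memory,
                dtype=np.float32)          # at the cost of rounding error

print(exact, approx, abs(exact - approx) / exact)
\end{verbatim}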
\section{HPC Application Requirements}
\subsection{Need for More Performance}
There is no doubt that all user communities see a continued demand for ever more computational performance, well beyond Exascale. In addition, many users highlight increasing challenges related to data storage and processing. More quantitative details on future computational requirements are given in the U.S. Advanced Scientific Computing Advisory Committee (ASCAC) report~\cite{ascacreport2010} and the 2012 PRACE Scientific Case~\cite{prace2012}.
\subsection{Adapting Applications for Scalability and Heterogeneity}
HPC applications need to be adapted for Exascale systems and beyond. It will be some time after the introduction of the first Exaflops machine before more than a handful of applications are able to take full advantage of such a machine. The biggest issues relate to scalability (identifying and managing sufficient levels of parallelism), heterogeneity (including accelerators), and parallel I/O. Scientific codebases have very long lifetimes, on the order of decades, over which they have earned their users' trust~\cite{ascac2015}. For this reason, HPC application developers are reluctant to rewrite their software, and are keen to follow an incremental path~\cite{eesi2final}.
There is strong interest in higher-level programming abstractions to provide independence and portability from the details of particular hardware implementations and execution environments, including varying degrees of parallelism, application-specific designs, heterogeneity, accelerators, and complex (deeper) memory hierarchies~\cite{eesi2enabling}. Compilers and runtime systems should perform complex transformations such as overlapping computations and communications~\cite{eesi2final}, auto-tuning~\cite{eesi2enabling}, scheduling and load balancing (especially difficult with multi-scale multiphysics codes). New abstractions are needed to improve parallel I/O. Domain-Specific Languages (DSLs) help by separating domain science from the programming challenges~\cite{eesi2enabling}. Much more research is needed in these areas, but from the application point of view, the main barriers to their adoption are the lack of standardization or long-term support in compilers and libraries~\cite{eesi2report}, as well as difficulties in the interoperability of multiple programming models in large codebases. Regarding accelerators, there are currently too many incompatible programming interfaces, e.g. CUDA, OpenCL, OpenACC, and OpenMP 4.0, and consolidation on an open, vendor-neutral and widely used standard is needed~\cite{eesi2enabling}.
There are serious difficulties with performance analysis and debugging, and existing techniques based on printf, logging and trace visualization will soon be intractable. Existing debuggers are good for small problems, but more work is needed to (graphically) track variables to find out where the output first became incorrect, especially for bugs that are difficult to reproduce. Performance analysis tools require lightweight data collection using sampling, folding and other techniques, so as not to increase execution time or disturb application performance (leading to non-representative analysis). There is a need for both superficial on-the-fly analysis and in-depth AI and deep learning analytics. As compilers and runtime systems become more complex, there will be a growing gap between runtime behaviour and the changes in the application's source code required to improve performance, although this does not yet seem to be a significant problem.
There is a concern that future systems will have worse performance stability and predictability, due to complex code transformations, dynamic adapting for energy and faults, dynamically changing clock speeds, and migrating work~\cite{ascac2015}. This is problematic when predictability is required, e.g., for real-time applications such as weather forecasting and for making proposals for access to HPC resources (since proposals need an accurate prediction of application performance scalability).
\subsection{Need for Co-Design}
Application communities are keen to be involved in co-design activities, in order to ensure appropriate future system designs, with appropriate memory capacities, memory hierarchies, networks, topologies and storage systems well suited to a class of applications~\cite{eesi2enabling}. Users need early access to prototypes, in order to test algorithm performance and provide feedback to system designers~\cite{ascac2015}. LINPACK performance is seen as non-representative of real-world performance. Long-term partnerships are needed between vendors, HPC centres, research institutes and universities, as is being done in the U.S. ExMatEx (extreme materials), CESAR (advanced reactors) and ExaCT (combustion in turbulence) co-design centres.
\subsection{Extreme Data}
A new paradigm for scientific discovery is emerging due to the exponentially increasing volumes of data generated by HPC simulations and collected from telescopes, colliders, and other scientific instruments or sensors~\cite{ascac2013}. From the application point of view, the major problem is how to extract new knowledge or insights from the data~\cite{connolly,ascac2013}. Specific problems related to computing systems are \emph{managing data} (streaming data processing, archiving, curation, metadata, provenance, distribution and access), \emph{data analytics} (statistical streaming data analysis, machine learning on high-dimensional data), \emph{data-intensive simulation} (large-scale multi-physics and multi-scale simulations), \emph{data-driven inversion and assimilation} (high-dimensional Bayesian inference, e.g., Full Waveform Inversion for oil and gas), and \emph{statistics and stochastic methods} (direct-inverse uncertainties and extreme event statistics)~\cite{vilotte2016}. Users may wish to continue using a trusted (but inefficient) algorithm that has worked well on smaller data volumes~\cite{idcbigdata}.
Data movement is a major problem, including distributing data among scientists worldwide at acceptable cost and movement across infrastructure from the point of generation or collection. There is a need for \emph{in situ} analytics and data reduction, with pre-processing, simulation, post-processing and visualization executed on the same HPC cluster. This requires batch and interactive workloads to coexist and it needs interoperable file formats~\cite{eesi2final} and means of communication between HPC and analytics, such as databases or object stores.
More details on the convergence of HPC and big data are given in Section~\ref{sec-datacenter}.
\subsection{Interactivity and Usage Models}
There are two broad categories of HPC usage. Capability computing refers to very large jobs that use (almost) the entire machine, e.g., a brain simulation or a high-resolution turbulence model, and such a job must complete in the minimum time. Capacity (or throughput) computing refers to a large number of concurrent jobs, with a trade-off between minimising individual job execution time and maximising overall throughput. Capacity computing currently uses perhaps a few thousand cores per job, and it is commonly used for large ensembles of moderate-scale computations, e.g., for weather or climate simulations (in order to understand the distribution of possible outcomes) and for design space exploration.
There is increasing interest in ``real time'' and interactive supercomputing. High priority simulations are needed for extreme weather and mission-critical jobs (e.g. at NASA). Interactive jobs are also needed, as described above, for in situ visualization, as well as for computational steering: changing parameters in a simulation model as it runs, and changing resolutions in certain places of importance. Interactive and batch jobs should adapt to dynamic resource availability~\cite{eesi2enabling}, which is an opportunity for new algorithms and programming models.
Finally, there is an opportunity to execute HPC workloads in the cloud, especially for SMEs and to support real-time or high-priority jobs. There have been some pilots that show problems with the cost model, data security~\cite{prace2012} and privacy (e.g. for medical data), licensing, and data transfer costs.
\subsection{Other Application Issues}
\paragraph{Resiliency} is a vertical problem, and Application-Based Fault Tolerance (ABFT) techniques handle detectable, correctable and silent errors inside the application. Some algorithms have better fault tolerance than others; for example, iterative solvers, which are widely used in Computational Fluid Dynamics (CFD) and other areas, tolerate errors (or approximations like analog computing) by executing more iterations.
\paragraph{Energy Minimization} Since energy consumption is a major concern, users require better tools to measure the energy consumption. More importantly, they also need to be incentivized to minimise their energy use.
\paragraph{Other Application Issues} outside the scope of this roadmap (because they can be dealt with inside the application communities themselves) include: development of ultra-scalable solvers based on hierarchical algorithms~\cite{eesi2report}, mesh generation~\cite{eesi2report}, verification and validation and uncertainty quantification (VVUQ)~\cite{eesi2report}, difficulty of coupling models at different scales, etc.~\cite{ascac2013}, parallelization in time~\cite{eesi2report}, and methods to extract information/understanding from large quantities of scientific data~\cite{connolly}.
\chapter{Data Centre and Cloud Computing}
\label{sec-datacenter}
\section{Convergence of HPC and Cloud Computing}
High-performance computing refers to technologies that enable achieving a high level of computational capacity as compared to a general-purpose computer \cite{supercomputer}. High-performance
computing in recent decades has been widely adopted for both
commercial and research applications, including but not limited to high-frequency trading, genomics, weather prediction, and oil exploration. Since the inception of high-performance computing, these applications have primarily relied on simulation as a third paradigm for scientific discovery, together with empirical and theoretical science.
The technological backbone for simulation has been high-performance
computing platforms (also known as supercomputers), which are specialized computing instruments built to run simulations at maximum speed with less regard to cost. Historically, these platforms were designed from the ground up with specialized circuitry and architectures, with maximum performance as the only goal. While in the extreme such
platforms can be domain-specific \cite{anton-computer}, supercomputers
have been historically programmable to enable their use for a broad
spectrum of numerically-intensive computation. To benefit from the
economies of scale, supercomputers have increasingly relied on commodity components, starting with microprocessors in the eighties and nineties and moving to entire volume servers, with only specialized interconnects
\cite{crayxc} taking the place of fully custom-designed platforms
\cite{np-sc-strategy-shifts}.
In the past decade, there have been two trends that are changing the
landscape for high-performance computing and supercomputers. The first
trend is the emergence of data analytics as the fourth paradigm
\cite{ms-the-fourth-paradigm} complementing simulation in scientific
discovery. The latter is often referred to as High-Performance Data Analytics (HPDA). While simulation remains a major pillar of science, massive volumes of scientific data are now gathered by instruments and sensors, augmenting the data from simulation that is available for analysis. The Large Hadron Collider and the Square
Kilometre Array are just two examples of scientific experiments that
generate on the order of Petabytes of data a day. This recent trend
has led to the emergence of data science and data analytics as a
significant enabler not just for science but also for humanities.
The second trend is the emergence of cloud computing and
warehouse-scale computers (also known as data centres)
\cite{41606}. Today, the backbone of IT and the ``clouds'' are data
centres that are utility-scale infrastructure. Data centres consist of
low-cost volume processing, networking, and storage servers aiming at
cost-effective data manipulation at unprecedented scales. Data centre
owners prioritize capital and operating costs (often measured in
performance per watt) over ultimate performance. Typical high-end
data centres draw around 20\,MW, occupy an area equivalent to 17 football fields and require an investment of around 3 billion Euros. While
data centres are primarily designed for commercial use, the scale at
which they host and manipulate (e.g., personal, business) data has led
to fundamental breakthroughs in both data analytics and data
management. By pushing computing costs to unprecedented low limits and
offering data and computing services at a massive scale, the clouds
will subsume many of the embarrassingly parallel scientific workloads in
high-performance computing, thereby pushing custom infrastructure for
the latter to a niche.
\section{Massive Data Analytics}
We are witnessing a second revolution in IT, at the centre of which is
data. The emergence of e-commerce and massive data analytics for
commercial use in search engines, social networks and online shopping
and advertisement has led to widespread use of massive data analytics (on the order of Exabytes) for consumers. Data now also lies at the
core of the supply-chain for both products and services in modern
economies. Collecting user input (e.g., text search) and documents
online not only has led to ground-breaking advances in language
translation but is also in use by investment banks mining blogs to
identify financial trends. The IBM Watson experiment is a major milestone in both natural language processing and decision making, showcasing a question-answering system based on advanced data analytics that won a quiz show against human players.
The scientific community has long relied on generating (through
simulation) or recording massive amounts of data to be analysed
through high-performance computing tools on supercomputers. Examples
include meteorology, genomics, connectomics (connectomes:
comprehensive maps of connections within an organism's nervous
system), complex physics simulations, and biological and environmental
research. The proliferation of data analytics for commercial use on
the internet, however, is paving the way for technologies to collect,
manage and mine data in a distributed manner at an unprecedented scale
even beyond conventional supercomputing applications.
Sophisticated analytic tools beyond indexing and rudimentary
statistics (e.g., relational and semantic interpretation of underlying
phenomena) over this vast repository of data will not only serve as
future frontiers for knowledge discovery in the commercial world but
also form a pillar for scientific discovery \cite{NAP18374}. The
latter is an area where commercial and scientific applications
naturally overlap, and high-performance computing for scientific
discovery will benefit greatly from the momentum in e-commerce.
There are a myriad of challenges facing massive data analytics
including the management of highly distributed data sources, tracking of data provenance, data validation, mitigating sampling bias and
heterogeneity, data format diversity and integrity, integration,
security, sharing, visualization, and massively parallel and
distributed algorithms for incremental and/or real-time analysis.
With respect to algorithmic requirements and diversity, there are a
number of basic operations that serve as the foundation for
computational tasks in massive data analytics (often referred to as
\emph{dwarfs} \cite{Asanovic:EECS-2006-183} or \emph{giants}
\cite{NAP18374}). They include but are not limited to: basic
statistics, generalized n-body problems, graph analytics, linear
algebra, generalized optimization, computing integrals and data
alignment. Besides classical algorithmic complexity, these basic
operations all face a number of key challenges when applied to massive
data related to streaming data models, approximation and sampling,
high-dimensionality in data, skew in data partitioning, and sparseness
in data structures. These challenges not only must be handled at the
algorithmic level, but should also be put in perspective given
projections for the advancement in processing, communication and
storage technologies in platforms.
Many important emerging classes of massive data analytics also have
real-time requirements. In the banking/financial markets, systems
process large amounts of real-time stock information in order to
detect time-dependent patterns, automatically triggering operations in
a very specific and tight timeframe when some pre-defined patterns
occur. Automated algorithmic trading programs now buy and sell
millions of dollars of shares time-sliced into orders separated by 1\,ms.
Reducing the latency by 1\,ms can be worth up to \$\,100 million a
year to a leading trading house. The aim is to cut microseconds off the latency with which these systems can react to momentary variations in share prices \cite{algo-trading}.
\section{Warehouse-Scale Computers}
Large-scale internet services and cloud computing are now fuelled by
large data centres, which are warehouses full of computers. These
facilities are fundamentally different from traditional supercomputers
and server farms in their design, operation and software structures
and primarily target delivering a negotiated level of internet service
performance at minimal cost. Their design is also holistic because
large portions of their software and hardware resources must work in
tandem to support these services \cite{41606}.
High-performance computing platforms are also converging with
warehouse scale computers primarily due to the growth rate in cloud
computing and server volume in the past decade. James Hamilton, Vice
President and Distinguished Engineer at Amazon and the architect of
their data centres commented on the growth of Amazon Web Services (AWS)
stating in 2014 that ``every day AWS adds enough new server capacity to
support Amazon's global infrastructure when it was a \$7B annual
revenue enterprise (in 2004)''.
Silicon technology trends such as the end of Dennard Scaling
\cite{5967003} and the slowdown and the projected end of density
scaling \cite{te-after-moores-law} are pushing computing towards a new era of platform design termed ISA (integration, specialization, approximation): (1) technologies for tighter
integration of components (from algorithms to infrastructure), (2)
technologies for specialization (to accelerate critical services), and
(3) technologies to enable novel computation paradigms for
approximation. These trends apply to all market segments for digital
platforms and reinforce the emergence and convergence of volume
servers in warehouse-scale computers as the building block for
high-performance computing platforms.
With modern high-performance computing platforms being increasingly
built using volume servers, there are a number of salient features
that are shared among warehouse-scale computers and modern
high-performance computing platforms including dynamic resource
allocation and management, high utilization, parallelization and
acceleration, robustness and infrastructure costs. These shared
concerns will serve as incentive for the convergence of the platforms.
There are also a number of ways that traditional high-performance
computing ecosystems differ from modern warehouse-scale computers
\cite{hpc-cloud-bad}. With performance being a key criterion, there are a number of
challenges facing high-performance computing on warehouse-scale
computers. These include but are not limited to efficient
virtualization, adverse network topologies and fabrics in cloud
platforms, low memory and storage bandwidth in volume servers,
multi-tenancy in cloud environments, and open-source deep software
stacks as compared to traditional supercomputer custom stacks. As
such, high-performance computing customers must adapt to co-exist with
cloud services given these challenges, while warehouse-scale computer
operators must innovate technologies to support the workload and
platform at the intersection of commercial and scientific computing.
\section{Cloud-Embedded HPC and Edge Computing}
The emergence of data analytics for sciences and warehouse scale
computing will allow much of the HPC that can run on massively
parallel volume servers at low cost to be embedded in the clouds,
pushing infrastructure for HPC to a niche. While the cloud vendors
primarily target a commercial use of large-scale IT services and may
not offer readily available tools for HPC, there are a myriad of
opportunities to explore technologies that enable embedding HPC into
public clouds.
Large-scale scientific experiments will also rely heavily on edge
computing. The amount of data sensed and sampled is far beyond any
network fabric capabilities for processing in remote sites. For
example, in the Large Hadron Collider (LHC) at CERN, beam collisions occur every 25\,ns, producing up to 40 million events per second. All these events are pipelined with the objective of distinguishing between interesting and non-interesting events, to reduce the number of events to be processed to a few hundred events~\cite{supersymmetry}. These endeavours will need custom solutions with
proximity to sensors and data to enable information extraction and
hand in hand collaboration with either HPC sites or cloud-embedded HPC
services.
\subsection{Continuous CMOS scaling}
Continuing Moore's Law and managing power and performance tradeoffs remain the key drivers of the International Technology Roadmap for Semiconductors 2015 Edition (ITRS 2015) grand challenges. Silicon scales, according to the Semiconductor Industry Association's ITRS 2.0, Executive Summary 2015 Edition~\cite{itrs-exec}, to 11/10\,nm in 2017, 8/7\,nm in 2019, 6/5\,nm in 2021, 4/3\,nm in 2024, 3/2.5\,nm in 2027, and 2/1.5\,nm in 2030 for MPUs or ASICs.
DRAM half pitch (i.e., half the distance between identical features in an array) is projected to scale down to 10\,nm in 2025 and 7.7\,nm in 2028, allowing up to 32\,Gbit per chip. However, DRAM scaling below 20\,nm is very challenging. DRAM products are approaching fundamental limitations, as scaling DRAM capacitors is becoming very difficult in 2D structures. It is expected that these limits will be reached by 2024 and that after this year DRAM technology will saturate at the 32\,Gbit level unless some major breakthrough occurs~\cite{itrs-report-dram}. The same report foresees that by 2020 the 2D Flash topological method will reach a practical limit with respect to the cost-effective realization of pitch dimensions. 3D stacking is already extensively used to scale Flash memories, in the form of 3D Flash memories.
Process downscaling results in increasing costs below 10\,nm: the cost per wafer increases from one technology node to the next~\cite{hipeac-vision-2015}. The ITRS roadmap does not guarantee that silicon-based CMOS will extend that far, because transistors with a gate length of 6\,nm or smaller are significantly affected by quantum tunneling.
CMOS scaling depends on fabs that are able to manufacture chips at the highest technology level. Only four such manufacturers remain worldwide: GlobalFoundries, TSMC, Intel and Samsung.
\subsubsection{Current State}
Current (July 2017) high-performance multiprocessors feature 14- to 16-nm technology. 14-nm FinFET technology is available from Intel (Intel Kaby Lake) and GlobalFoundries. A 10-nm manufacturing process is expected for the 2nd half of 2017 or the beginning of 2018 from Intel (Cannonlake processor); Intel's difficulties and changed plans show the continuing challenge of keeping pace with Moore's law. Samsung and TSMC also apply 10-nm technology in 2017.
Samsung revealed in March 2017 that it had shipped over 70 thousand wafers processed using its first-generation 10\,nm FinFET fabrication process (10LPE). This manufacturing process allowed the company to make its chips 30\% smaller compared to ICs made using its 14LPE process, while reducing power consumption by 40\% (at the same frequency and complexity) or increasing frequency by 27\% (at the same power and complexity). Samsung applies this process to the company's own Exynos 9 Octa 8895 as well as to Qualcomm's Snapdragon 835, found in the Samsung Galaxy S8~\cite{anandtech-samsung-tsmc}.
\subsubsection{Company Roadmaps}
R\&D for 5\,nm has begun at all four remaining fabs (TSMC, GlobalFoundries, Intel and Samsung), and also beyond, towards 3\,nm. Both 5\,nm and 3\,nm present a multitude of unknowns and challenges. Regardless, based on the roadmaps from various chipmakers, Moore's Law continues to slow as process complexities and costs escalate at each new chip generation.
\textbf{Intel} plans 7\,nm FinFET for production in early to mid-2020, according to industry sources. Intel's 5\,nm production is targeted for early 2023, sources said, meaning its traditional 2-year process cadence is extending to roughly 2.5 to 3 years~\cite{semieng-uncertainty}.
\textbf{TSMC} plans to ship 5\,nm in 2020, which is also expected to be a FinFET. In reality, though, TSMC's 5\,nm will likely be equivalent in terms of specs to Intel's 7\,nm, analysts said~\cite{semieng-uncertainty}.
TSMC will be starting risk production of its 7\,nm process in early 2017 and is already actively developing 5\,nm process technology as well. Furthermore, TSMC is also developing 3\,nm process technology. Although 3\,nm process technology still seems far away, TSMC is further looking to collaborate with academia to begin developing 2\,nm process technology~\cite{cpcr-already}.
\textbf{Samsung}'s newest foundry process technologies and solutions, introduced at the annual Samsung Foundry Forum, include 8\,nm, 7\,nm, 6\,nm, 5\,nm and 4\,nm in its newest process technology roadmap~\cite{samsung-process-roadmap}. However, no time scale is provided.
\textbf{GlobalFoundries} decided to skip 10\,nm in favor of its next-generation 7\,nm manufacturing technology, with mass production of commercial chips planned to start in the second half of 2018. It is targeting high-performance components, such as CPUs, GPUs and SoCs for various applications (mobile, PC, servers, etc.)~\cite{anandtech-globalfoundries-roadmap}.
Compared to GlobalFoundries' current leading-edge 14LPP fabrication technology, the initial DUV-only (deep ultraviolet) 7\,nm process promises over 50\% area reduction as well as over 60\% power reduction (at the same frequency and complexity) or over 30\% performance improvement (at the same power and complexity). In practice, this means that in an ideal scenario GlobalFoundries will be able to double the amount of transistors per chip without increasing its die size while improving its performance per watt characteristics~\cite{anandtech-globalfoundries-roadmap}.
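As a quick sanity check (our own arithmetic, not taken from the cited source): if the same circuit occupies at most half of its former area, the achievable transistor count per die at least doubles,
\[
\frac{N_{7\,\mathrm{nm}}}{N_{\mathrm{14LPP}}}
= \frac{A_{\mathrm{14LPP}}}{A_{7\,\mathrm{nm}}}
\geq \frac{1}{1 - 0.5} = 2,
\]
consistent with the claim of doubling the number of transistors at constant die size.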
Initially, GlobalFoundries will continue to use DUV argon fluoride (ArF) excimer lasers with a 193\,nm wavelength for its 7\,nm production process, but over time it hopes to insert extreme ultraviolet lithography (EUV) tools with a 13.5\,nm wavelength into the production flow. GlobalFoundries does not reveal timeframes for its 7\,nm with EUV, but it is safe to say that EUV will be used in 2019 at the earliest. Samsung, Intel, and TSMC have also confirmed their intentions to pursue DUV-only 7\,nm process technologies~\cite{anandtech-globalfoundries-roadmap}.
\subsubsection{Research Perspective}
``It is difficult to shed a tear for Moore's Law when there are so many interesting architectural distractions on the systems horizon''~\cite{nextplatform-neuromorphic}. However, silicon technology scaling will still continue, and research in silicon-based hardware still prevails, in particular targeting specialized and heterogeneous processor structures and hardware accelerators.
However, each successive process shrink becomes more expensive, and therefore each wafer will be more expensive to deliver. One trend to improve the density on chips will be 3D integration, also of logic. Hardware structures that mix silicon-based logic with new NVM technologies are upcoming and are being intensely investigated. A revolutionary DRAM/SRAM replacement will be needed~\cite{itrs-exec}.
As a result, non-silicon extensions of CMOS, using III–V materials or Carbon nanotube/nanowires, as well as non-CMOS platforms, including molecular electronics, spin-based computing, and single-electron devices, have been proposed~\cite{itrs-exec}.
For a higher integration density, new materials and processes will be necessary. Since there is a lack of knowledge about the fabrication processes of such new materials, their reliability might be lower, which may result in the need for integrated fault-tolerance mechanisms~\cite{itrs-exec}.
Research in CMOS process downscaling and the building of fabs is driven by industry, not by academic research. Availability of such CMOS chips will be a matter of cost, not only of the availability of the technology.
\subsection{Die Stacking and 3D-Chip}
\begin{figure*}[ht]
\centering
\begin{tikzpicture}
{\sffamily
\ifx1\undefined
\fill[background] (-1.0,1.75) rectangle ++(17,5.5);
\else
\fi
\newcommand{\hbmdie}[1]{%
\draw[black,fill=white] (2,#1) rectangle node[black,anchor=east,align=left,xshift=-0.25cm,font=\scriptsize] {HBM DRAM Die} ++(5,0.5);
\fill[black] (5,#1) rectangle ++(0.05,0.5);
\fill[black] (5.25,#1) rectangle ++(0.05,0.5);
\fill[black] (5.5,#1) rectangle ++(0.05,0.5);
\fill[black] (5.75,#1) rectangle ++(0.05,0.5);
\fill[black] (4.95,#1) rectangle ++(0.15,-0.1);
\fill[black] (5.2,#1) rectangle ++(0.15,-0.1);
\fill[black] (5.45,#1) rectangle ++(0.15,-0.1);
\fill[black] (5.7,#1) rectangle ++(0.15,-0.1);
}
\newcommand{\mbump}[1]{%
\fill[black] (#1) rectangle ++(0.15,-0.1);
}
\newcommand{\bbump}[1]{%
\fill[black,rounded corners=2pt] (#1) rectangle ++(0.25,-0.2);
\fill[black] ([xshift=0.1cm]#1) rectangle ++(0.05,0.1);
}
\newcommand{\lbump}[1]{%
\draw[black,fill=vdarkgray,rounded corners=4pt] (#1) rectangle ++(0.35,-0.3);
}
\hbmdie{6}
\hbmdie{5.4}
\hbmdie{4.8}
\hbmdie{4.2}
\node[font=\scriptsize,anchor=west] (tsv) at (7.1,6.75) {TSV};
\node[font=\scriptsize,anchor=west] (mib) at (7.1,6.25) {Microbump};
\draw[-latex] (tsv.west) -- (5.07,6.3);
\draw[-latex] (mib.west) -- (5.87,5.95);
\draw[black,fill=white] (1.75,3.6) rectangle node[black,anchor=east,align=left,xshift=-1.3cm,font=\scriptsize] {Logic Die} ++(5.5,0.5);
\fill[black] (5,3.85) rectangle ++(0.05,0.25);
\fill[black] (5.25,3.85) rectangle ++(0.05,0.25);
\fill[black] (5.5,3.85) rectangle ++(0.05,0.25);
\fill[black] (5.75,3.85) rectangle ++(0.05,0.25);
\mbump{3.0,3.6}
\mbump{3.5,3.6}
\mbump{4.0,3.6}
\mbump{4.5,3.6}
\mbump{5.0,3.6}
\mbump{5.5,3.6}
\draw[black,fill=white] (6.25,3.6) rectangle node[black,yshift=0.1cm,font=\scriptsize] {PHY} ++(1,0.5);
\mbump{6.3,3.6}
\mbump{6.55,3.6}
\mbump{6.80,3.6}
\mbump{7.05,3.6}
\draw[black,fill=white] (7.55,3.6) rectangle node[black,anchor=west,align=right,xshift=0.2cm,font=\scriptsize] {GPU/CPU/SoC Die} ++(5.5,0.5);
\mbump{10.5,3.6}
\mbump{11.0,3.6}
\mbump{11.5,3.6}
\mbump{12.0,3.6}
\draw[black,fill=white] (7.55,3.6) rectangle node[black,yshift=0.1cm,font=\scriptsize] {PHY} ++(1,0.5);
\mbump{7.6,3.6}
\mbump{7.85,3.6}
\mbump{8.10,3.6}
\mbump{8.35,3.6}
\draw[black,fill=white] (1.25,3.0) rectangle node[black,anchor=east,align=left,xshift=-4.6cm,font=\scriptsize] {Interposer} ++(12.25,0.5);
\bbump{2.75,3.0};
\bbump{3.45,3.0};
\bbump{4.15,3.0};
\bbump{4.85,3.0};
\bbump{5.55,3.0};
\bbump{6.25,3.0};
\bbump{10.35,3.0};
\bbump{11.05,3.0};
\bbump{11.75,3.0};
\bbump{12.45,3.0};
\draw[black,fill=black] (0.75,2.3) rectangle node[white,anchor=east,align=left,xshift=-4.2cm,font=\scriptsize] {Package Substrate} ++(13.25,0.5);
\lbump{1.25,2.3};
\lbump{2.0,2.3};
\lbump{2.75,2.3};
\lbump{3.5,2.3};
\lbump{4.25,2.3};
\lbump{5.0,2.3};
\lbump{5.75,2.3};
\lbump{6.5,2.3};
\lbump{7.25,2.3};
\lbump{8.0,2.3};
\lbump{8.75,2.3};
\lbump{9.5,2.3};
\lbump{10.25,2.3};
\lbump{11.0,2.3};
\lbump{11.75,2.3};
\lbump{12.5,2.3};
\lbump{13.25,2.3};
\draw[thick] (5.025,3.85) -- ++(0,-0.19) -- ++(1.35,0) -- ++(0,-0.1);
\draw[thick] (5.275,3.85) -- ++(0,-0.14) -- ++(1.35,0) -- ++(0,-0.2);
\draw[thick] (5.525,3.85) -- ++(0,-0.09) -- ++(1.35,0) -- ++(0,-0.2);
\draw[thick] (5.775,3.85) -- ++(0,-0.04) -- ++(1.35,0) -- ++(0,-0.3);
\draw[thick] (6.375,3.65) -- ++(0,-0.40) -- ++(2.05,0) -- ++(0,0.24);
\draw[thick] (6.625,3.65) -- ++(0,-0.34) -- ++(1.55,0) -- ++(0,0.18);
\draw[thick] (6.875,3.65) -- ++(0,-0.28) -- ++(1.05,0) -- ++(0,0.12);
\draw[thick] (7.125,3.65) -- ++(0,-0.22) -- ++(0.55,0) -- ++(0,0.06);
\draw[thick] (3.075,3.6) -- ++(0,-0.35) -- ++(-0.2,0) -- ++(0,-0.15);
\draw[thick] (3.575,3.6) -- ++(0,-0.35) -- ++(0.0,0) -- ++(0,-0.15);
\draw[thick] (4.075,3.6) -- ++(0,-0.35) -- ++(0.2,0) -- ++(0,-0.15);
\draw[thick] (4.575,3.6) -- ++(0,-0.35) -- ++(0.4,0) -- ++(0,-0.15);
\draw[thick] (5.075,3.6) -- ++(0,-0.45) -- ++(0.6,0) -- ++(0,-0.05);
\draw[thick] (5.575,3.6) -- ++(0,-0.4) -- ++(0.8,0) -- ++(0,-0.1);
\draw[thick] (10.575,3.6) -- ++(0,-0.35) -- ++(-0.1,0) -- ++(0,-0.15);
\draw[thick] (11.075,3.6) -- ++(0,-0.35) -- ++(0.1,0) -- ++(0,-0.15);
\draw[thick] (11.575,3.6) -- ++(0,-0.35) -- ++(0.3,0) -- ++(0,-0.15);
\draw[thick] (12.075,3.6) -- ++(0,-0.35) -- ++(0.5,0) -- ++(0,-0.15);
}
\end{tikzpicture}
\caption{High Bandwidth Memory utilizing an active silicon Interposer~\cite{amdhbm}}
\label{fig-diestacking}
\end{figure*}
Die Stacking and 3D chip integration denote the concept of stacking integrated circuits (e.g. processors and memories) vertically in multiple layers. 3D packaging assembles vertically stacked dies in a package, e.g., system-in-package (SIP) and package-on-package (POP).
Die stacking can be achieved by connecting separately manufactured wafers or dies vertically, either wafer-to-wafer, die-to-wafer, or even die-to-die. The mechanical and electrical contacts are realized either by wire bonding, as in SIP and POP devices, or by microbumps. SIP is sometimes listed as a 3D stacking technology, although it would better be denoted as a 2.5D technology.
An evolution of the SIP approach consists of stacking multiple dies (called chiplets) on a large interposer that provides connectivity among the chiplets and to the package. The interposer can be passive or active. A passive interposer, often implemented with an organic material to reduce cost, provides multiple levels of metal interconnects and vertical vias for inter-chiplet connectivity and for the redistribution of connections to the package. It also provides micropads for the connection of the chiplets on top. Active silicon interposers offer the additional possibility of including logic and circuits in the interposer itself. This more advanced and high-cost integration approach is much more flexible than passive interposers, but it is also much more challenging for design, manufacturing, test and thermal management. Hence it is not yet widespread in commercial products, with the exception of three-dimensional DRAM memories (e.g. High-Bandwidth Memory (HBM, cf. Fig.~\ref{fig-diestacking}) and the Hybrid Memory Cube (HMC)), where the bottom layer is active and hosts the physical interface of the memory to the external system.
The advantages of 3D technology based on interposers are numerous. Firstly, the short communication distances between dies reduce the communication load and thus the communication power consumption. Secondly, dies from various heterogeneous technologies can be stacked, such as memory (Flash, non-volatile memories, or even photonic devices) on top of logic, in order to benefit from the best technology where it fits best. And thirdly, system yield and cost are improved by partitioning the system in a divide-and-conquer approach: multiple similar dies are fabricated, tested and sorted before the final 3D assembly, instead of fabricating ultra-large dies with much reduced yield. The main challenges to the diffusion of these technologies are manufacturing cost (setup and yield optimization) and thermal management, since cooling high-performance die stacks requires complex packages, thermal coupling materials and heat spreaders.
Die stacking can also be achieved by stacking active layers vertically on a single wafer in a monolithic approach. This kind of 3D chip integration does not use micro-pads or Through-Silicon Vias (TSVs) for communication; instead, it uses multi-level interconnects between layers, with a much finer pitch than that allowed by TSVs. The main challenge in monolithic integration is to ensure that the elementary devices (transistors) have a similar quality level and performance in all the silicon layers. This is a very challenging goal, since the manufacturing process is not identical for all the layers (low-temperature processes are needed for the layers grown on top of the bulk layer).
Some advanced solutions for vertical communication do not require an ohmic contact in metal: capacitive and inductive coupling as well as short-range RF communication solutions have been proposed instead, which do not require a flow of electrons passing through a continuous metal connection. These approaches are usable both in die-stacked and in monolithically integrated ICs, but the modulation and demodulation circuits take up space, and the vertical connectivity density is currently not better than that of TSVs.
\subsubsection{Current State}
The monolithic approach of die stacking is already used in 3D Flash memories from Samsung and also for smart sensors. Commercial prototypes of 3D technology date back to 2004, when Tezzaron released a 3D IC microcontroller~\cite{tezzaron2016}. Intel evaluated chip stacking for a Pentium 4 already in 2006~\cite{Black2006}. Recent multicore designs using Tezzaron's technology include the 64-core 3D-MAPS (3D MAssively Parallel processor with Stacked memory) research prototype from 2012~\cite{kim20123dmaps} and the Centip3De with 64 ARM Cortex-M3 cores, also from 2012~\cite{fick2012centipede}. Fabs are able to handle 3D packages (e.g.~\cite{amkor3d}). In 2011, IBM announced a 3D chip production process~\cite{ibm3dchip}. Intel announced ``3D XPoint'' memory in 2015 (assumed to offer 10x the capacity of DRAM and to be 1000x faster than NAND Flash~\cite{inteloptane}). Intel/Micron 3D XPoint memory has been available since March 2017 as the 375-GByte Optane SSD DC P4800X, stated to be 2.5 to 77 times better than NAND SSDs.
Both NVIDIA and AMD already exploit the high bandwidth and low latencies offered by 3D stacked memories in the form of high-density High-Bandwidth Memory (HBM). AMD's GPUs based on the Fiji architecture with HBM have been available since 2015, and NVIDIA released Pascal-based GPUs in 2016~\cite{nvidiapascal}. A direction towards future 3D stacking of memory dies on processor dies is the Hybrid Memory Cube from Micron~\cite{micron-hmc}. It stacks multiple DRAM dies and a separate layer for a controller, which is vertically linked with the DRAM dies. The interposer approach is used in high-end FPGAs to reduce cost.
\subsubsection{Perspective}
3D NAND Flash may become prevalent. 3D Flash memories may enable SSDs with up to 10\,TB of capacity in the short term~\cite{registernand}. In 2007, the earliest potential was seen in memory stacks for mobile applications~\cite{lu20073d}. It is to be expected that 3D chip technology will widely enter the market for mainstream architectures within the next 5 years. Representative of this development are, e.g., Intel's Xeon Phi Knights Landing processors, equipped with package-integrated DRAMs in 2016 as a result of Intel's cooperation with Micron. While memories already exploit the full spectrum of 3D integration options, logic processes lag behind in this respect, mostly for cost and thermal reasons. Since memories run much cooler than logic (due to their lower activity and operating frequency), their 3D integration is thermally sustainable and cost-effective today.
It is also to be expected that, in a long-term perspective, the technology will expand progressively from 3D packaging technologies towards real 3D chip stacking, and possibly towards 3D ICs in 3D packages, in order to profit from all the benefits such technology offers, in particular for HPC architectures.
The main challenge in establishing this 3D chip stacking technology is gaining control of the thermal problems that must be overcome to reliably realize very dense 3D interconnections. This requires the availability of appropriate design tools that explicitly support 3D layouts. Both topics represent important issues for research in the next 10 to 15 years.
\subsubsection{Impact on Hardware}
3D stacking has a series of beneficial impacts on hardware in general and on how future processor-memory architectures can be designed in particular. Wafers can be partitioned into smaller dies, because comparatively long horizontally running links are relocated to the third dimension, enabling smaller form factors. 3D stacking also enables heterogeneity, by integrating layers manufactured in different processes, e.g., different memory technologies like SRAM, DRAM, Spin-Transfer-Torque RAM (STT-RAM) and also memristor technologies, which would be mutually incompatible in monolithic circuits. Due to short connection wires, a reduction in power consumption is to be expected. At the same time, a high communication bandwidth between layers connected with TSVs can be expected, leading to particularly high processor-to-memory bandwidth.
The last-level caches will probably be the first components affected by 3D stacking technologies once they enter logic processes: 3D caches will increase bandwidth and reduce latencies through a large cache memory stacked on top of logic circuitry. A consequent further step is to expand 3D chip integration to main memory, in order to decisively reduce the current memory wall, one of the strongest obstructions to higher performance in HPC systems. Furthermore, possibly between 2026 and 2030, 3D arithmetic units will undergo the same changes, ending up in complete 3D many-core microprocessors that are optimized in power consumption due to reduced wire lengths.
A collateral but very interesting trend is 3D stacking of sensors. Olympus presented a technology in which more than 4 million microbumps were used to stack a 16-megapixel array sensor directly on top of a circuit implementing global shutter control logic. Sony used TSV technology to combine image sensors directly with column-parallel analogue-to-digital converters and logic circuits~\cite{kondo2015cmos,sonysensor}. This trend will open the opportunity for fabricating fully integrated systems that also include sensors and their analogue-to-digital interfaces.
\subsubsection{Funding Perspectives}
The main issue is that 3D as a technology requires heavy industrial investment, because it ultimately is a problem of reaching cost-effective volume production; this is probably beyond what can be funded by research money. However, more and more hardware devices will use 3D technology, so even system-level design will need to become 3D-aware. The EU therefore needs to invest in research on how to develop components and systems based on 3D technology. Even if 3D technology is not developed in the EU at production level, the EU should invest in research to design effective components and computing systems that use it.
\section{Sustaining Technology (improving HPC HW in ways that are generally expected)}
\label{sec-sustaining}
\subsection{Non-volatile Memory (NVM) Technologies}
\label{sec-nvm}
Computer architecture development in the last 15 years was primarily characterized by energy-driven advancement (better performance/Watt ratio) and by the transition from single- to multi-/many-core and to heterogeneous architectures consisting of a multi-core processor and an accelerator. New Non-volatile Memory (NVM) technologies will strongly influence the memory hierarchy and potentially lead to Resistive Computing and new Neuromorphic Computing chips (see section~\ref{sec-neuromorphic}).
Currently, NAND Flash is the most common NVM technology; it finds use in SSDs, memory cards and memory sticks. NAND Flash uses floating-gate transistors for storing single bits. This technology is facing a big challenge, because scaling down decreases its endurance and performance significantly \cite{shimpi2013samsung}. Hence the importance of other NVM technologies increases. Using emerging NVM technologies in computing systems is a further step towards energy-aware measures for future HPC architectures.
Resistive memories, i.e. memristors, are an emerging class of non-volatile memory technology. A memristor is defined by Leon Chua's system theory as a memory device with a pinched hysteresis loop, i.e., its I--V (current--voltage) curve passes through the origin of the coordinate system. PCM, ReRAM, CBRAM and STT-RAM belong to this memristor class. A memristor's electrical resistance is not constant but depends on the previously applied voltage and the resulting current. The device remembers its history, the so-called non-volatility property: when the electric power supply is turned off, the memristor retains its most recent resistance until it is turned on again \cite{wpMemristor}.
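
As a minimal illustration, the following toy model (a discrete-time sketch in the spirit of the HP linear ion-drift model; all parameter values are arbitrary assumptions, not device data) shows how the resistance depends on the history of the applied voltage and why the I--V curve is pinched at the origin:
\begin{verbatim}
import math

# Toy memristor: state w in [0,1] interpolates between R_ON and R_OFF.
# Parameter values are illustrative assumptions, not device data.
R_ON, R_OFF, K = 100.0, 16e3, 1e6   # ohms, ohms, drift constant

def step(w, v, dt=1e-4):
    """Advance the internal state w under applied voltage v."""
    i = v / (R_ON * w + R_OFF * (1.0 - w))  # Ohm's law, state-dependent R
    w = min(1.0, max(0.0, w + K * i * dt))  # polarity of v moves w up/down
    return w, i

w = 0.1
for t in range(1000):                       # one sinusoidal excitation period
    v = math.sin(2 * math.pi * t / 1000)
    w, i = step(w, v)
# At v = 0 the current is 0 regardless of w, so the I-V loop is "pinched";
# yet w (and hence the resistance) remembers the excitation history.
\end{verbatim}
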
Among the most prominent memristor candidates and close to commercialization are phase change memory (PCM) \cite{lee2010phase,lam2008cell,lee2009architecting,qureshi2009scalable,zhou2009durable}, metal oxide resistive random access memory (RRAM or ReRAM) \cite{xu2015overcoming,xu2011design}, and conductive bridge random access memory (CBRAM) \cite{wong2014conductive}.
PCM can be integrated in the CMOS process, and its read/write latency is only tens of nanoseconds slower than that of DRAM, which is roughly around 100\,ns. The write endurance is hundreds of millions of writes per cell at current processes, which is why PCM is currently positioned only as a Flash replacement \cite{lee2009architecting}. RRAM offers a simple cell structure which enables reduced processing costs. Its endurance can be more than 50 million cycles and its switching energy is very low \cite{govoreanu201110}. RRAM can deliver 100x lower read latency and 20x faster write performance compared to NAND Flash \cite{Crossbar}. CBRAM can also write with relatively low energy and at high speed, with read/write latencies close to DRAM.
Spintronics is the technology of manipulating the spin state of electrons: instead of the electron's charge, spin states can be utilized as a substitute in logic circuits or in traditional memory technologies like SRAM. A Spin-Transfer Torque magnetic random-access memory (STT-RAM) \cite{apalkov2013spin} cell stores data in a magnetic tunnel junction (MTJ). Each MTJ is composed of two ferromagnetic layers (a switchable free layer and a fixed reference layer) and one tunnel barrier layer (MgO). If the magnetization directions of the reference layer and the free layer are anti-parallel, a high resistance results; if they are parallel, a low resistance, representing a digital ``0'' or ``1''. Recently it was reported that, by adjusting intermediate magnetization angles in the free layer, 16 different states can be stored in one physical cell, enabling multi-level cell storage in MTJ technology \cite{bernard2016spintronic}.
The read latency and read energy of STT-RAM are expected to be comparable to those of SRAM. The expected 3x higher density and 7x lower leakage power make STT-RAM suitable for replacing SRAM when building large NVMs. However, a write operation in an STT-RAM memory consumes 8x more energy and exhibits a 6x longer latency than in SRAM. Therefore, minimizing the impact of inefficient writes is critical for successful applications of STT-RAM \cite{noguchi2013250}.
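
A back-of-the-envelope model makes the write-asymmetry problem concrete (only the 8x/6x write penalties are taken from the figures above; the normalized SRAM-like baseline of 1.0 per access is a placeholder assumption):
\begin{verbatim}
# Average access cost of an STT-RAM cache vs. write fraction.
# Only the 8x energy / 6x latency write penalties come from the text;
# the SRAM-like baseline of 1.0 per access is a placeholder.
SRAM_E, SRAM_T = 1.0, 1.0            # normalized read energy/latency
WRITE_E, WRITE_T = 8.0, 6.0          # write penalties relative to SRAM

for write_frac in (0.05, 0.20, 0.50):
    e = (1 - write_frac) * SRAM_E + write_frac * WRITE_E
    t = (1 - write_frac) * SRAM_T + write_frac * WRITE_T
    print(f"writes {write_frac:.0%}: energy {e:.2f}x, latency {t:.2f}x")
# Already a 20% write share more than doubles the average energy, which
# is why write-filtering and bypassing techniques matter for STT-RAM.
\end{verbatim}
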
NRAM, short for Nano-RAM, is a proprietary technology of Nantero. It uses a fabric of carbon nanotubes (CNTs) for storing bits: the resistive state of the CNT fabric determines whether a one or a zero is stored in a memory cell, and the resistance depends on whether the CNTs are in contact with each other. With the help of a small voltage, the CNTs can be brought into contact or be separated; reading out a bit means measuring the resistance. Nantero claims that their technology features the same read and write latencies as DRAM, has high endurance and reliability even in high-temperature environments, and is low power, with essentially zero power consumption in standby mode. Furthermore, NRAM is compatible with existing CMOS fabs without needing any new tools or processes, and it is scalable even below 5\,nm \cite{Nantero}.
\subsubsection{Current state}
IBM announced MLC-PCM technology as a Flash replacement, to be used, e.g., as storage-class memory (SCM) filling the latency gap between DRAM main memory and hard-disk-based background storage.
Intel and Micron announced the new ``breakthrough'' 3D XPoint memory technology \cite{evangelho2015intel} as a revolutionary Flash replacement. Their 375-GByte Optane DC P4800X SSD has been available since March 2017 and is said to be 2.5 to 77 times better than NAND SSDs. It is widely assumed, but not confirmed, that Optane is based on PCM. Intel and Micron expect that the XPoint technology could, within the next ten years, become the dominant alternative to RAM devices, offering non-volatility in addition.
Adesto is currently offering CBRAM technology in their serial memory chips \cite{Adesto}.
Nantero, together with Fujitsu, announced a multi-GB NRAM memory in carbon-nanotube technology expected for 2018. Everspin announced Spin-Transfer-Torque MRAMs (STT) in a perpendicular Magnetic Tunnel Junction (pMTJ) process as 256-MBit MRAMs and 1-GB SSDs expected in 2017. IBM also developed a neuromorphic core with a 64-K-PCM-cell synaptic array (256 axons $\times$ 256 dendrites) to implement SNNs (Spiking Neural Networks) \cite{hilson2015ibm}. The circuit-level performance, energy, and area model of the emerging non-volatile memory simulator NVSim \cite{dong2014nvsim} allows the investigation of architectural structures for future NVM-based high-performance computers.
\subsubsection{Perspective}
It is foreseeable that other NVM technologies will supersede current Flash memory; PCM, for instance, might be 1000 times faster and 1000 times more resilient. Some NVM technologies have also been considered as a feasible replacement for SRAM \cite{noguchi2014highly,noguchi20153,ahn2014dasca}. Studies suggest that replacing SRAM with STT-RAM could save 60\% of LLC energy with less than 2\% performance degradation \cite{noguchi2014highly}.
It is unclear when most of the new technologies may be mature enough and which of them will prevail.
\subsubsection{Impact on hardware}
Memristors will deliver non-volatile memory which can potentially be used in addition to DRAM, or as a complete replacement. The latter will lead to a new Storage-Class Memory (SCM), i.e., a technology that blurs the distinction between memory and storage by enabling new data access modes and protocols that serve both ``memory'' and ``storage''. These new SCM types of non-volatile memory could be integrated on-chip with the microprocessor cores, as they use CMOS-compatible sets of materials and device fabrication techniques different from those of Flash. In a VLSI post-processing step they can be integrated on top of the last metal layer (see the note on back-end-of-line service in section~\ref{sec-resistive}). One of the challenges for the next decade is the provision of appropriate interfacing circuits between the SCMs and the microprocessor cores: the benefits of memristor devices in integration density, energy consumption and access times must not be squandered by costly interface circuitry. This holds in particular for exploiting the multi-level cell storage capability of NVMs for future HPDA systems, e.g., for big-data applications. Moreover, memristors offer orders of magnitude faster read/write accesses and also much higher endurance. They are resistive switching memory technologies, and thus rely on different physics than storing charge on a capacitor, as is the case for SRAM, DRAM and Flash \cite{eleftheriou2015future}.
STT-RAM devices are also an important class of non-volatile memory that primarily targets the replacement of DRAM, e.g., in Last-Level Caches (LLCs). However, the asymmetric read/write energy and latency of NVM technologies introduces new challenges in designing memory hierarchies. Spintronics allows integration of logic and storage at lower power consumption.
Also, new hybrid PCM/Flash SSD chips could emerge, with a processor-internal last-level cache (STT-RAM), main processor memory (PCRAM), and storage-class memory (ReRAM) \cite{eleftheriou2015future}.
\subsection{Photonics}
The general idea of using photonics in computing systems is to replace electrons with photons in intra-chip, inter-chip and processor-to-memory connections, and maybe even in logic.
\subsubsection{Introduction to photonics and integrated photonics}
An optical transmission link is composed of some key modules: a laser light source, a modulator that converts electronic signals into optical ones, waveguides and other passive modules (e.g. couplers, photonic switching elements, splitters) along the link, possibly a drop filter to steer light towards the destination, and a photodetector to revert the signal into the electronic domain. The term \emph{integrated photonics} refers to a photonic interconnection where at least some of the involved modules are integrated into silicon~\cite{SiliconIntegrated}. \emph{Active} components (lasers, modulators and photodetectors) cannot be trivially implemented in a CMOS process, as they require materials different from silicon and, typically, not exactly compatible with it.
Optical communication nowadays features modulation frequencies of about 10--50\,GHz and can support wavelength-division multiplexing (WDM): up to 100+ \emph{colors} in fiber and 10+ in silicon (with more expected in the near future). Propagation loss is relatively small in silicon and polymer materials, so optical communication can be regarded as quite insensitive to chip- and board-level distances; where fiber can be employed (e.g. at rack and data-centre levels), attenuation is no problem. Optical communication can rely on extremely fast signal propagation (head-flit latency): around 15\,ps/mm in silicon and about 5.2\,ps/mm in polymer waveguides, i.e., traversing a 2\,cm $\times$ 2\,cm chip corner-to-corner in 0.6 and 0.2\,ns, respectively. However, conversions to/from the optical domain can erode some of this intrinsic low latency, as can network-level protocols and shared-resource management.
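
The corner-to-corner figures follow directly from the propagation constants; a quick check (assuming a Manhattan route of 2\,cm + 2\,cm across the die):
\begin{verbatim}
# Corner-to-corner head-flit latency on a 2 cm x 2 cm die,
# assuming a Manhattan (edge-along) route of 20 mm + 20 mm.
route_mm = 20 + 20
for name, ps_per_mm in (("silicon", 15.0), ("polymer", 5.2)):
    print(f"{name}: {route_mm * ps_per_mm / 1000:.2f} ns")
# silicon: 0.60 ns, polymer: 0.21 ns (matching the figures above).
\end{verbatim}
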
Manufacturing of \emph{passive} optical modules (e.g. waveguides, splitters, crossings, microrings) is relatively compatible with the CMOS process, and the typical cross-section of a waveguide (about 500\,nm) is not critical, except for the smoothness of the waveguide walls, which must keep light scattering small. Turns with radii of a few \textmu m and limited insertion loss are possible, as well as grating couplers to introduce/emit light from/into a fiber outside the chip. Even various 5x5 optical switches~\cite{gu2009cygnus} can be manufactured out of basic photonic switching elements relying on tunable micro-ring resonators. Combining these optical modules, various optical interconnection topologies and schemes can be devised: from all-to-all contention-less networks up to arbitrated ones which share optical resources among different possible paths.
In practice, WDM requires precision in microring manufacturing, runtime tuning (e.g. thermal) and alignment (multiple microrings with the same resonant frequency), and it complicates the management of multi-wavelength light, from generation, distribution, modulation and steering up to photo-detection. The more complex a topology, the more modules lie along the possible paths between source and destination, on- and off-chip, and the more laser power is needed to compensate their attenuation and meet the sensitivity of the detector. For these reasons, relatively simple topologies can be preferable so as to limit power consumption, and spatial division multiplexing (using multiple parallel waveguides) allows WDM to be traded for space occupation.
Optical inter-chip signals are also expected to be conveyed on different mediums to facilitate integrability with the CMOS process, e.g., polycarbonate, as in some IBM research prototypes and commercial solutions.
\subsubsection{Current status and current roadmaps}
Currently, optical communication is mainly used in HPC systems in the form of \emph{optical cables}, which have progressively substituted shorter and shorter electronic links: from 10+ meter inter-rack communication down to 1+ meter intra-rack and sub-meter intra-blade links.
A number of industrial and research roadmaps project this trend to continue within boards, with optical technology then crossing the chip boundary, connecting chips within silicon and later optical interposers, and eventually arriving at a complete integration of optics on a dedicated layer of traditional chips by around 2025. For this reason, the evolution of 2.5D/3D stacking technologies is also expected to enable and sustain this roadmap, up to the seamless integration of optical layers along with logic ones. The expected performance/consumption/density metrics are shown in the 2016 Integrated Photonic Systems Roadmap~\cite{photonicsroadmap} (see Table \ref{tbl-photonics}).
\begin{table*}
\caption{Expected performance evolution of optical interconnection~\cite{photonicsroadmap}.}
\label{tbl-photonics}
\tabularz[header]{p{2cm},X,X,X,X,X,X}{
Time Frame & \textasciitilde 2000 & \textasciitilde 2005 & \textasciitilde 2010 & \textasciitilde 2015 & \textasciitilde 2020 & \textasciitilde 2025 \\
Interconnect & Rack & Chassis & Backplane & Board & Module & Chip \\
Reach & 20 -- 100\,m & 2 -- 4\,m & 1 -- 2\,m & 0.1 -- 1\,m & 1 -- 10\,cm & 0.1 -- 3\,cm \\
Bandw. (Gb/s, Tb/s) & 40 -- 200\,G & 20 -- 100\,G & 100 -- 400\,G & 0.3 -- 1\,T & 1 -- 4\,T & 2 -- 20\,T \\
Bandw. Density (GB/s/cm\textsuperscript{2}) & \textasciitilde 100 & \textasciitilde 100 -- 400 & \textasciitilde 400 & \textasciitilde 1250 & > 10000 & > 40000 \\
Energy (pJ/bit) & 1000 $\rightarrow$ 200 & 400 $\rightarrow$ 50 & 100 $\rightarrow$ 25 & 25 $\rightarrow$ 5 & 1 $\rightarrow$ 0.1 & 0.1 $\rightarrow$ 0.01 \\
}
\end{table*}
IBM, HP, Intel, STM, CEA-LETI, Imec and Petra, to cite a few, essentially share a similar view on this roadmap and on the steps to increase the bandwidth density, power efficiency and cost effectiveness of the interconnections needed in Exascale and post-Exascale HPC systems. For instance, Petra labs demonstrated the first optical silicon interposer prototype~\cite{urino2014} in 2013, featuring a bandwidth density of 30\,TB/s/cm\textsuperscript{2}, and in 2016 they improved the consumption and high-temperature operation of the optical modules~\cite{Urino2016}. HP has announced The Machine system, which relies on the optical X1 photonic module capable of 1.5\,Tbps over 50\,m and 0.25\,Tbps over 50\,km. Intel has announced the Omni-Path interconnect architecture that will provide a migration path between copper and fiber for future HPC/data-centre interconnections. Optical Thunderbolt and optical PCI Express by Intel are other examples of optical cable solutions. IBM has been shipping polymer + micro-pod optical interconnections within HPC blades since 2012 and is moving towards module-to-module integration.
The main indications from current roadmaps and trends can be summarized as follows. Optical cables (optical links) are evolving in capability (bandwidth, integration and consumption) and are getting closer to the chips, leveraging more and more photonics in an integrated form. The packaging problem of photonics remains a major issue, especially where optical signals need to traverse the chip package. Also for these reasons, interposers (silicon and optical) appear to be the reasonable first step towards optically integrated chips. Beyond that, full 3D processing and hybrid material integration are expected from the process point of view.
Conversion between photons and electrons is costly; for this reason there are currently strong efforts to improve the crucial physical modules of an integrated optical channel (e.g. modulators, photodetectors, and thermally stable, efficiently integrated laser sources).
\subsubsection{Alternate and emerging technologies around photonics}
Photonics is in considerable evolution, driven by innovations in existing components (e.g. lasers, modulators and photodetectors) that push their features and applicability (e.g. high-temperature lasers). Consequently, its expected potential is a moving target based on the progress in the rated features of the various modules. At the same time, additional variations, techniques and approaches at the physical level of the photonic domain are being investigated and could create further discontinuities and opportunities in the adoption of photonics in computing. We cite here a few:
\begin{itemize}
\item Mode division multiplexing~\cite{huang2015}: multiple spatial modes of light carry data in parallel. This poses some criticalities but could allow parallelism to scale more easily than WDM and/or be an orthogonal source of optical bandwidth;
\item Free-air propagation: there are proposals to exploit light propagation within the chip package, without waveguides, to efficiently support some interesting communication patterns (e.g. fast signaling)~\cite{malik2015free};
\item Plasmonics: interconnects utilizing surface plasmon polaritons (SPPs) promise faster communication than photonics at far lower consumption, at the moment over relatively short distances (below 1\,mm)~\cite{gao2015chip};
\item Optical-domain buffering: recent results~\cite{Tsakmakidis2017} indicate the possibility of temporarily storing light and delaying its transmission. This could enable additional network topologies and schemes, otherwise impossible, for instance avoiding the reconversion to the electronic domain;
\item Photonic non-volatile memory~\cite{IntegratedAllPhotonic}: this could reduce memory access latencies by eliminating costly optoelectronic conversions, while dramatically reducing the speed gap between CPU and main memory in fully optical chips;
\item Optical computing: the Optalysys project\footnote{\url{www.optalysys.com}} computes in the optical domain by mapping information onto light properties and elaborating the latter directly in optics, in an extremely energy-efficient way compared to traditional computers~\cite{OpticalComputing}. This approach cannot suit every application, but a number of algorithms, like linear and convolution-like computations (e.g. FFT, derivatives and correlation pattern matching), are naturally compatible~\cite{Optalysys}. Recently, also bioinformatics sequence-alignment algorithms have been demonstrated to be feasible.
\end{itemize}
\subsubsection{Optical communication close to the cores and perspectives}
As highlighted above, the current trend is to bring optics closer and closer to the cores: from board-to-board, to chip-to-chip, and up to within chips. The closer optical links get to the cores, the more the managed traffic becomes processor-specific. Patterns due to the micro-architectural behaviour of the processing cores become visible and crucial to manage, along with cache-coherence and memory-consistency effects. This kind of traffic poses specific requirements on the interconnection sub-system, which can be quite different from those induced by traffic at a larger scale. In fact, at rack or inter-rack level, the aggregate, more application-driven traffic tends to smooth out individual core needs, so that ``average'' behaviours emerge.
For instance, inter-socket and intra-processor coherence and synchronization mechanisms have been designed and tuned over decades for electronic technology and may need to be optimized, or re-thought, to take maximum advantage of the emerging photonic technology.
Research groups and companies are progressing towards inter-chip interposer solutions and completely optical chips. In this direction researchers have already identified the \emph{crucial importance of a vertical cross-layer design} of a computer system endowed with integrated photonics. A number of studies have already proposed various kinds of on-chip and inter-chip optical networks designed around the specific traffic patterns of the cores and processing chips~\cite{pan2010,Vantrease2008,pan2009,petracca2008,Grani2014,oconnor2012}.
These studies also suggest that further challenges will arise from inter-layer design interference, i.e. lower-layer design choices (e.g. WDM, physical topology, access strategies, sharing of resources) can have a significant impact on higher layers of the design (e.g. NoC-wise, and up to memory-coherence and programming-model implications) and vice versa. This is mainly due to the scarce experience in using photonic technology to serve computing needs (close to processing-core requirements) and, most of all, to the intrinsic end-to-end nature of an efficient optical channel, which is conceptually opposed to the well-established and mature know-how of the ``store-and-forward'' electronic communication paradigm. Furthermore, the quick evolution of optical modules and the arrival of discontinuities in their development hamper the consolidation of layered design practices.
Lastly, the intrinsic low-latency properties of optical interconnection (on-chip and inter-chip) could imply a re-definition of what is local in a future computing system, at various scales, and specifically in a prospective HPC system, as has already partially happened within the HP \emph{Machine}. These revised locality features will then require modifications in the programming paradigms so that they can take advantage of the different organization of future HPC machines, including resource disaggregation. On this specific point, if other emerging technologies (e.g. NVM, in-memory computation, approximate and quantum computing) appear in future HPC designs, as is expected in order to meet performance/watt objectives, it is highly likely that, for the reasons above, photonic interconnections will need to be co-designed in integration with the whole heterogeneous HPC architecture.
\subsubsection{Funding opportunities}
Photonic technology at the physical and module level is quite well funded in the H2020 program~\cite{horizon2020photonics}, as it has been regarded as strategic by the EU for years. For instance, the Photonics21~\cite{photonics21} initiative gathers groups and researchers from a number of enabling disciplines for the wider adoption of photonics in general, and specifically also of integrated photonics. Typically, funding instruments and calls focus on basic technologies and specific modules, and in some cases on point-to-point links as a final objective (e.g. optical cables).
Conversely, as photonics comes closer to the processing cores, which expose quite different traffic behaviour and communication requirements compared to larger-scale interconnections (e.g. inter-rack or wide-area), it would be highly advisable to also promote a separate funding program investigating the specific issues and approaches for the effective adoption of integrated photonics at the inter-chip and intra-chip scale. In fact, the market is approaching the cores \emph{from the outside} with an \emph{optical cable} model that will be less and less suitable to serve the traffic as the communication distance decreases. Therefore, now could be just the right time to invest in chip-to-chip and intra-chip optical network research, in order to be prepared to apply it effectively in a few years, when current roadmaps expect optics to arrive there.
\section{Disruptive Technology in Hardware/VLSI (innovation that creates a new line of HPC hardware superseding existing HPC techniques)}
\label{sec-disruptive-hw}
\subsection{Resistive Computing}
\label{sec-resistive}
Apart from using memristors as non-volatile memory, there are several other ways to use memristors in computing \cite{DiVentra2013,Pershin2012}. Using memristors as memristive synapses in neuromorphic computing \cite{Pershin2012,Pickett2013a,Jo2010} and using memristors in quantum computing \cite{Pershin2012} are discussed in separate sections. In this section, resistive (or memristive) computing is discussed, in which logic circuits are built from memristors \cite{Borghetti2010}. Memristive gates have lower leakage power, but switch more slowly than CMOS gates \cite{Pershin2012}. However, the integration of memory into logic allows the logic to be reprogrammed, providing low-power reconfigurable components \cite{Borghetti2009}, and can in principle relax energy and area constraints thanks to the possibility of computing and storing in the same device (computing in memory). Memristors can also be arranged in parallel networks to enable massively parallel computing~\cite{Pershin2011}.
Resistive computing is one of the emerging and promising computing paradigms \cite{Borghetti2010,DiVentra2013a,Hamdioui2015}. It takes the data-centric computing concept much further by interweaving the processing units and the memory in the same physical location using non-volatile technology, thereby significantly reducing not only the power consumption but also the memory bottleneck. Resistive devices such as memristors have been shown to be able to perform both storage and logic functions \cite{Borghetti2010,Snider2005,Gao2013,Xie2015}. Resistive computing offers huge potential compared with the current state of the art:
\begin{itemize}
\item It significantly reduces the memory bottleneck as it interweaves
the storage, computing units and the communication
\cite{Borghetti2010,DiVentra2013a,Hamdioui2015}.
\item It features low power leakage \cite{Pershin2012}.
\item It enables maximum parallelism \cite{Hamdioui2015,Pershin2011}.
\item It allows full configurability and flexibility
\cite{Borghetti2009}.
\item It provides order-of-magnitude improvements in energy-delay product per operation, computation efficiency, and performance per area \cite{Hamdioui2015}.
\end{itemize}
Serial and parallel connections of memristors have been proposed for the realization of Boolean logic gates in so-called memristor ratioed logic. In such circuits, the ratio of the resistances stored in the memristor devices is exploited to set up Boolean logic. Memristive circuits realizing AND and OR gates and the implication function were presented in \cite{Yang2013,Kvatinsky2011,Tran2012}. Hybrid memristive computing circuits consist of memristors and CMOS gates. The work of Singh \cite{Singh2015}, Xia et al. \cite{Xia2009}, and Rothenbuhler et al.~\cite{Tran2012} is representative of numerous proposals for hybrid memristive circuits, in which most Boolean logic operations are handled in the memristors and the CMOS transistors are mainly used for level restoration to retain defined digital signals.
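
Since material implication (IMPLY) is the workhorse of several of these logic families, a small Boolean emulation (a sketch that abstracts away the actual memristor states and voltages) illustrates why IMPLY together with a FALSE (reset) operation forms a complete logic basis:
\begin{verbatim}
# Boolean emulation of memristive IMPLY logic: p IMP q = (NOT p) OR q.
# In a real circuit the result overwrites q's memristor state in place;
# here only the logic level is modelled.
def imply(p, q):
    return (not p) or q

def nand(p, q):
    # NAND(p, q) = p IMP (q IMP 0); "0" is the FALSE/reset operation.
    return imply(p, imply(q, False))

for p in (False, True):
    for q in (False, True):
        assert nand(p, q) == (not (p and q))
# Since NAND is universal, IMPLY + FALSE can in principle realize any
# Boolean circuit, e.g. mapped onto a memristor crossbar.
\end{verbatim}
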
\begin{figure*}[t]
\centering
\begin{tikzpicture}
{\sffamily\small
\ifx1\undefined
\fill[background] (-6.0,0.75) rectangle ++(16.25,-8.25);
\else\fi
\node[draw=black,fill=white,minimum width=4cm] (mem) at (0,0) {Memristive Computing};
\node[draw=black,fill=white,minimum width=4cm] (ana) at (-2.5,-1.5) {Analog Computing};
\node[draw=black,fill=white,minimum width=4cm] (dig) at (2.5,-1.5) {Digital Computing};
\node[draw=black,fill=table,ellipse,minimum width=4cm,align=center,inner sep=-0.01cm] (neural) at (-2.5,-3.25) {Neural networks,\\ neuromorphic processing,\\ STDP};
\node[draw=black,fill=white,minimum width=4cm,align=center] (hyb) at (7.5,-3.0) {Hybrid Approaches:\\ CMOS + Memristors};
\node[anchor=south east,yshift=-0.0cm,xshift=-0.25cm] at (neural.north) {\footnotesize{large field}};
% The following three nodes are drawn twice: first so that the fitted
% ellipse can enclose them, then again on top of the ellipse fill.
\node[draw=black,fill=white,minimum width=2.5cm,align=center] (ratioed) at (-0.5,-5.5) {Ratioed Logic};
\node[draw=black,fill=white,minimum width=2.5cm,align=center] (imply) at (2.5,-5.5) {IMPLY Logic};
\node[draw=black,fill=white,minimum width=2.5cm,align=center] (cmos) at (6.5,-5.5) {CMOS-circuit-like equivalent\\ Memristor Networks};
\node[ellipse,draw=black,fill=table,fit=(ratioed)(imply)(cmos),inner sep=-1.0cm, minimum height=3cm] (ell) {};
\node[anchor=south,yshift=0.25cm] at (ell.south) {\textbf{In-Memory Computing}};
\node[draw=black,fill=white,minimum width=2.5cm,align=center] (ratioed) at (-0.5,-5.5) {Ratioed Logic};
\node[draw=black,fill=white,minimum width=2.5cm,align=center] (imply) at (2.5,-5.5) {IMPLY Logic};
\node[draw=black,fill=white,minimum width=2.5cm,align=center] (cmos) at (6.5,-5.5) {CMOS-circuit-like equivalent\\ Memristor Networks};
\draw[-stealth,thick] (mem.south) -- (ana.north);
\draw[-stealth,thick] (mem.south) -- (dig.north);
\draw[-stealth,thick] (ana.south) -- (neural.north);
\draw[-stealth,thick] (dig.south) -- (hyb.west);
\draw[-stealth,thick] (dig.south) -- (ratioed.north);
\draw[-stealth,thick] (dig.south) -- (imply.north);
\draw[-stealth,thick] (dig.south) -- (cmos.north);
}
\end{tikzpicture}
\caption{Summary of activities on resistive and memristive computing.}
\label{fig-memristive}
\end{figure*}
Figure \ref{fig-memristive} summarizes the activities on resistive or memristive computing. There is the large field of neuromorphic processing with memristors (see section~\ref{sec-neuromorphic}) and the, judging by the number of published papers, probably smaller branch of digital memristive computing, with several sub-branches like ratioed logic, IMPLY logic, or CMOS-like equivalent memristor circuits in which Boolean logic is directly mapped onto crossbar topologies with memristors. These solutions refer to pure in-memory computing concepts. Besides that, proposals exist for hybrid solutions in which the memristors are used as memory for CMOS circuits.
\subsubsection{Current state} A couple of start-up companies appeared on the market in 2015 that offer memristor technology as a BEOL (back-end of line) service, in which memristive elements are post-processed in CMOS chips directly on top of the last metal layers. Also, some European institutes recently reported, at the workshop ``Memristors: at the crossroad of Devices and Applications'' of the EU COST Action IC1401 MemoCiS\footnote{\url{www.cost.eu/COST\_Actions/ict/IC1401}}, the possibility of BEOL integration of their memristive technology to allow experiments with such technologies. This offers new perspectives in the form of hybrid CMOS/memristor logic, which uses memristor networks for high-density resistive logic circuits and CMOS inverters for signal restoration to compensate the loss of full voltage levels in memristive networks. The multi-level cell capability of memristive elements can be used to face the challenge of handling the zettabytes of data expected to be produced annually within a couple of years. Besides, proposals exist to exploit the multi-level cell storing property for ternary carry-free arithmetic \cite{El-Slehdar2015,Fey2014}, or for both compact storing of keys and matching operations in future associative memories realized with memristors \cite{Junsangsri2014}.
\subsubsection{Impact on hardware} Using NVM technologies for resistive computing is a further step towards energy-aware measures for future HPC architectures. It supports the realization of both near-memory and in-memory computing concepts, which are important bricks for the realization of more energy-saving HPC systems (see Section \ref{Summary-of-Potential-Long-Term-Impacts-of-Disruptive-Technologies-for-HPC-Hardware}). Near-memory computing is currently based on 3D stacking of a logic layer with DRAMs, extending HBM, and may in future stack logic with NVMs. In-memory computing could be based on resistive computing techniques combined with resistive memory.
A further way to save energy, e.g. in near-memory computing schemes, is to use hybrid non-volatile register cells, in which each SRAM flip-flop cell is attached to an NVM cell, and to use NVM technology in the memory hierarchy as SCM. The NVMs, either as part of a flip-flop/memristor register pair or as a complete SRAM cell array with an attached memristor cell array, are used to keep data during periods in which it is not needed for computation. Other data, which has to be processed, is stored in conventional, faster SRAM/DRAM devices. Using pipelined schemes, e.g. under control of the OS, parts of the data are shifted from NVM to SRAM/DRAM before they are accessed in the fast memory. The latency of the data transfer from NVM to DRAM can be hidden by overlapping the transfer with simultaneous processing of other parts of the DRAM. The same latency-hiding principle applies in the opposite direction: newly computed data that is not needed in the next computing steps can be saved to NVM. It is to be expected that future HPC systems will feature SCMs as a near- and mid-term solution and, in a possible next step, also hybrid flip-flops as the realization of registers.
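
A minimal double-buffering sketch of this latency-hiding scheme (nvm_read and compute are hypothetical placeholders for the NVM-to-DRAM transfer and the processing step; a real implementation would live in the OS or runtime):
\begin{verbatim}
from concurrent.futures import ThreadPoolExecutor

def process(blocks, nvm_read, compute):
    """Overlap the NVM->DRAM fetch of block k+1 with compute on block k."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(nvm_read, blocks[0])   # fetch first block
        for nxt in blocks[1:]:
            current = pending.result()               # wait for fetched data
            pending = pool.submit(nvm_read, nxt)     # prefetch next block...
            compute(current)                         # ...while computing
        compute(pending.result())                    # last block
\end{verbatim}
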
\subsubsection{Perspective} Resistive computing, if successful, will significantly reduce power consumption and enable massive parallelism, and hence increase computing energy and area efficiency by orders of magnitude. This would transform computer systems into new highly parallel architectures and associated technologies, and enable the computation of currently infeasible big-data and data-intensive applications, fuelling important societal changes.
Research on resistive computing is still in its infancy, and the challenges are substantial at all levels, including material and technology, circuit and architecture, tools and compilers, and algorithms. As of today, most of the work is based on simulations and small circuit designs. It is still unclear when the technology will be mature and available. Nevertheless, some start-ups on memristor technologies are emerging, such as KNOWM\footnote{\url{www.knowm.org}}.
\subsection{Neuromorphic Computing}
\label{sec-neuromorphic}
Neuromorphic Computing (NMC), as developed by Carver Mead in the late 1980s, describes the use of large-scale adaptive analog systems to mimic organizational principles used by the nervous system. Originally, the main approach was to use elementary physical phenomena of integrated electronic devices (transistors, capacitors, \dots) as computational primitives~\cite{Mead1990}. In recent times, the term neuromorphic has also been used to describe analog, digital, and mixed-mode analog/digital hardware and software systems that transfer aspects of structure and function from biological substrates to electronic circuits (for perception, motor control, or multisensory integration). Today, the majority of NMC implementations is based on CMOS technology. Interesting alternatives are, for example, oxide-based memristors, spintronics, or nanotubes~\cite{Pershin2012,Pickett2013,Jo2010}. Such kind of research is still in its infancy.
The basic idea of NMC is to exploit the massive parallelism of such circuits and to create low-power and fault-tolerant information-processing systems. Aiming at overcoming the big challenges of deep-submicron CMOS technology (power wall, reliability, and design complexity), bio-inspiration offers alternative ways to (embedded) artificial intelligence. The challenge is to understand, design, build, and use new architectures for nanoelectronic systems, which unify the best of brain-inspired information processing concepts and of nanotechnology hardware, including both algorithms and architectures~\cite{Rueckert2016}. A key focus area in further scaling and improving of cognitive systems is decreasing the power density and power consumption, and overcoming the CPU/memory bottleneck of conventional computational architectures~\cite{Eleftheriou2015}.
\subsubsection{Current State} Large-scale neuromorphic chips based on CMOS technology exist, replacing or complementing conventional computer architectures with brain-inspired ones. Mapping brain-like structures and processes into electronic substrates has recently seen a revival with the availability of deep-submicron CMOS technology. Large programs on brain-like electronic systems have been launched worldwide; at present, the largest are the SyNAPSE program (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) in the US (launched in 2009~\cite{SyNAPSE2013}) and the EC flagship Human Brain Project (launched in 2013~\cite{HBP2017}).
SyNAPSE is a DARPA-funded program to develop electronic neuromorphic machine technology that scales to biological levels. More simply stated, it is an attempt to build a new kind of computer with similar form and function to the mammalian brain. Such artificial brains would be used to build robots whose intelligence matches that of mice and cats. The ultimate aim is to build an electronic microprocessor system that matches a mammalian brain in function, size, and power consumption. It should recreate 10 billion neurons and 100 trillion synapses, consume one kilowatt (the same as a small electric heater), and occupy less than two litres of space~\cite{SyNAPSE2013}.
The ``Cognitive Computing via Synaptronics and Supercomputing'' (C2S2) project is funded through DARPA's SyNAPSE initiative. Headed by IBM, the group turned to digital special-purpose hardware for brain emulation. The TrueNorth chip is an impressive outcome of this project, integrating a two-dimensional on-chip network of 4096 digital application-specific cores ($64 \times 64$) and over 400 million bits of local on-chip memory (\textasciitilde{}100\,Kb SRAM per core) to store synapses and neuron parameters, as well as 256 million individually programmable synapses on-chip. One million individually programmable neurons can be simulated time-multiplexed per chip, sixteen times more than on the previously largest neuromorphic chip. With about 5.4 billion transistors, fabricated in a \SI{28}{\nm} CMOS process (\SI{4.3}{\cm\squared} die size, \SI{240}{\um} $\times$ \SI{390}{\um} per core), TrueNorth is by device count the largest IBM chip ever fabricated and the second largest (CMOS) chip in the world. The total power while running a typical recurrent network at biological real-time is about \SI{70}{\milli\watt}, resulting in a power density of about \SI{20}{\milli\watt\per\cm\squared} (about \SI{26}{\pico\joule} per synaptic event), which is comparable to the cortex but three to four orders of magnitude lower than the 50--100\,\si{\watt\per\cm\squared} of a conventional CPU~\cite{Merolla2014}.
The Human Brain Project (HBP) is a European Commission Future and Emerging Technologies Flagship. The HBP aims to put in place a cutting-edge, ICT-based scientific research infrastructure that will allow scientific and industrial researchers to advance our knowledge in the fields of neuroscience, computing, and brain-related medicine. The project promotes collaboration across the globe, and is committed to driving forward European industry. Within the HBP the subproject SP9 designs, implements, and operates a Neuromorphic Computing Platform with configurable Neuromorphic Computing Systems (NCS). The platform provides NCS based on physical (analogue or mixed-signal) emulations of brain models, running in accelerated mode (NM-PM1, wafer-scale implementation of 384 chips with about 200,000 analog neurons on a wafer in \SI{180}{\nm} CMOS, 20 wafers in the full system), numerical models running in real time on a digital multicore architecture (NM-MC1 with 18 ARM cores per chip in \SI{130}{\nm} CMOS, 48 chips per board, and 1200 boards for the full system), and the software tools necessary to design, configure, and measure the performance of these systems. The platform will be tightly integrated with the High Performance Analytics and Computing Platform, which will provide essential services for mapping and routing circuits to neuromorphic substrates, benchmarking, and simulation-based verification of hardware specifications~\cite{HBP2017}. For both neuromorphic hardware systems new chip versions are under development within the HBP (NM-PM2: wafer-scale integration based on a new mixed-signal chip in \SI{65}{\nm} CMOS; NM-MC2: 68 ARM M4 cores per chip in \SI{28}{\nm} CMOS with floating-point support).
Closely related to the HBP is the Blue Brain Project~\cite{BBP2017}. The goal of the Blue Brain Project (EPFL and IBM, launched 2005): ``[...] is to build biologically detailed digital reconstructions and simulations of the rodent, and ultimately the human brain. The supercomputer-based reconstructions and simulations built by the project offer a radically new approach for understanding the multilevel structure and function of the brain.'' The project uses an IBM Blue Gene supercomputer (100 TFLOPS, 10TB) with currently 8,000 CPUs to simulate ANNs (at ion-channel level) in software~\cite{BBP2017}.
In the long run, the above-mentioned memristor technology (see section~\ref{sec-nvm} and section~\ref{sec-resistive}) is also heavily discussed in the literature for future neuromorphic computing. The idea, e.g. in so-called spike-timing-dependent plasticity (STDP) networks \cite{Snider08,memristor_STDP_13}, is to directly mimic the functional behaviour of a neuron. In STDP networks, the strength of a link to a cell is determined by the time correlation between the incoming signals along that link and the neuron's output spikes. The more closely an input pulse precedes the output spike, the more strongly the input link to the neuron is weighted; conversely, the further an input signal lags behind the output spike, the weaker the link is adjusted.
This process of strengthening or weakening the weights shall be directly mapped onto memristors by increasing or decreasing their resistance, depending on which voltage polarity is applied to the poles of the two-terminal memristive device. This direct mapping of an STDP network onto an analogue equivalent of the biological cells, with artificial memristor-based neuron cells, is expected to yield new, extremely low-energy neuromorphic circuits. Besides these memristor-based STDP networks, there are many proposals for neural networks realised with memristor-based crossbar and mesh architectures for cognitive detection and vision applications, e.g. \cite{Lim_2014}.
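
The pairwise STDP rule sketched above can be written as a simple weight update with an exponential time window (the time constant and learning rates below are generic textbook-style assumptions, not tied to a specific memristive device):
\begin{verbatim}
import math

# Pairwise STDP: dt = t_post - t_pre (ms). Pre-before-post (dt > 0)
# strengthens the synapse; post-before-pre (dt < 0) weakens it.
# a_plus, a_minus and tau_ms are generic illustrative constants.
def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

for dt in (2, 10, 50, -10, -2):
    print(f"dt={dt:+d} ms -> dw={stdp_dw(dt):+.5f}")
# In a memristive synapse, the sign of dw maps to the polarity of the
# voltage pulse applied across the device, raising or lowering its
# conductance (i.e. the stored weight).
\end{verbatim}
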
All of the above-mentioned projects have in common that they model spiking neurons, the basic information-processing elements in biological nervous systems. A more abstract implementation of biological neural systems are Artificial Neural Networks (ANNs). Popular representatives are Deep Neural Networks (DNNs), which have propelled an evolution in the machine-learning field. DNNs share some architectural features of nervous systems, some of which are loosely inspired by biological vision systems~\cite{LeCun1998}. DNNs dominate computer vision today and attract strongly growing interest for solving all kinds of classification, function-approximation, interpolation, and forecasting problems. Training DNNs is computationally intense. For example, Baidu Research\footnote{\url{www.baidu.com}} estimated that training one DNN for speech recognition can require up to 20 exaflops of compute ($20 \times 10^{18}$ floating-point operations), whereas the world's largest supercomputer delivers on the order of 100 petaflops ($10^{17}$ floating-point operations per second). Companies such as Facebook and Google have a nearly unlimited appetite for performance, because increasing the available computational resources enables more accurate models, newer models for high-value problems such as autonomous driving, and experiments with more advanced uses of artificial intelligence (AI) for digital transformation. Corporate investment in artificial intelligence is predicted to triple in 2017, becoming a \$100 billion market by 2025~\cite{Wellers2017}.
Hence, a variety of hardware and software solutions have emerged to slake the industry's thirst for performance. The currently most well-known commercial machines targeting deep learning are the TPUs of Google and the Nvidia Volta V100.
A tensor processing unit (TPU) is an ASIC developed by Google specifically for machine learning and designed for Google's TensorFlow framework. The first generation of TPUs performed 8-bit integer MAC (multiply-accumulate) operations and has been deployed in data centres since 2015 to accelerate the inference phase of DNNs. An in-depth analysis was recently published by Jouppi et al.~\cite{Jouppi2017}. The second-generation TPU was announced in May 2017: the individual TPU ASICs are rated at 45 TFLOPS and arranged into 4-chip, 180-TFLOPS modules, which are then assembled into 256-chip pods with 11.5 PFLOPS of performance~\cite{Wikipedia2017}. The new TPUs are optimized for both training and inference.
Nvidia's Tesla V100 GPU contains 640 Tensor Cores delivering up to 120 Tensor TFLOPS for training and inference applications. Tensor Cores and their associated data paths are custom-designed to dramatically increase floating-point compute throughput with high energy efficiency. For deep learning inference, V100 Tensor Cores provide up to 6x higher peak TFLOPS compared to standard FP16 operations on Nvidia Pascal P100, which already features 16-bit FP operations~\cite{NvidiaCorporation2017}.
Matrix-matrix multiplication operations are at the core of DNN training and inference, and are used to multiply large matrices of input data and weights in the connected layers of the network. Each Tensor Core operates on $4 \times 4$ matrices and performs the operation $D = A \times B + C$, where $A$, $B$, $C$, and $D$ are $4 \times 4$ matrices. Tensor Cores operate on FP16 input data with FP32 accumulation: each FP16 multiplication yields a full-precision product that is then accumulated, using FP32 addition, with the other intermediate products of the $4 \times 4 \times 4$ matrix multiply~\cite{NvidiaCorporation2017}. The new Nvidia DGX-1 system based on the Volta V100 GPUs will be delivered in the third quarter of 2017~\cite{Morgan2017}; it is claimed to be the world's first purpose-built system optimized for deep learning, with fully integrated hardware and software.
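
The mixed-precision numerics of this step can be mimicked in a few lines of NumPy (an emulation of the arithmetic only, of course, not of the hardware data path):
\begin{verbatim}
import numpy as np

# Emulate one Tensor Core step D = A x B + C on 4x4 tiles:
# FP16 inputs, products formed and accumulated in FP32.
A = np.random.randn(4, 4).astype(np.float16)
B = np.random.randn(4, 4).astype(np.float16)
C = np.random.randn(4, 4).astype(np.float32)

# Cast the FP16 operands up to FP32, so every partial product is
# formed and summed at full FP32 precision.
D = A.astype(np.float32) @ B.astype(np.float32) + C

# Contrast with a pipeline that rounds the product back to FP16,
# losing the benefit of wide accumulation.
D_lossy = (A @ B).astype(np.float32) + C
print(np.max(np.abs(D - D_lossy)))
\end{verbatim}
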
Many more options for DNN hardware acceleration are showing up~\cite{Chen2014}. AMD's forthcoming Vega GPU should offer 13 TFLOPS of single-precision and 25 TFLOPS of half-precision performance, whereas the machine-learning accelerators in the Volta-based Tesla V100 offer 15 TFLOPS single precision and 120 TFLOPS for deep-learning workloads. Microsoft has been using Altera FPGAs for similar workloads, though a performance comparison is tricky; the company has performed demonstrations of more than 1 exa-operation per second~\cite{Bright2017}. Intel offers the Xeon Phi 7200 family, and IBM's TrueNorth tackles deep learning as well~\cite{Gwennap2016}.
Other chip and IP (intellectual property) vendors, including Cadence, Ceva, Synopsys, and Qualcomm with its Zeroth platform, are touting DSPs for learning algorithms. Although these hardware designs are better suited than CPUs, none was originally developed for DNNs. Ceva's new XM6 DSP core\footnote{\url{www.ceva-dsp.com}} enables deep learning in embedded computer-vision (CV) processors. The synthesizable intellectual property (IP) targets self-driving cars, augmented and virtual reality, surveillance cameras, drones, and robotics. The normalization, pooling, and other layers that constitute a convolutional-neural-network model run on the XM6's 512-bit vector processing units (VPUs). The new design increases the number of VPUs from two to three, all of which share 128 single-cycle ($16 \times 16$)-bit MACs, bringing the XM6's total MAC count to 640. The core also includes four 32-bit scalar processing units.
Examples of start-ups are Nervana Systems\footnote{\url{www.nervanasys.com}}, KnuPath\footnote{\url{www.knupath.com}}, and Wave Computing\footnote{\url{www.wavecomp.com}}. The Nervana Engine will combine a custom \SI{28}{\nm} chip with 32 GB of high-bandwidth memory, replacing caches with software-managed memory. KnuPath's second-generation DSP, Hermosa, is positioned for deep learning as well as signal processing; the \SI{32}{\nm} chip contains 256 tiny DSP cores operating at \SI{1}{\GHz} along with 64 DMA engines, and burns \SI{34}{\watt}. The dataflow processing unit from Wave Computing implements \mbox{``tens of thousands''} of processing nodes and ``massive amounts'' of memory bandwidth to support TensorFlow and similar machine-learning frameworks. The design uses self-timed logic that reaches speeds of up to \SI{10}{\GHz}; the \SI{16}{\nm} chip contains 16,000 independent processing elements that generate a total of 180 tera 8-bit integer operations per second.
\subsubsection{Perspective} Brain-inspired hardware computing architectures have the potential to perform AI tasks better than conventional architectures in terms of performance, energy consumption, and resilience to defects. Neuromorphic Computing and Deep Neural Networks represent two approaches to taking inspiration from biological brains. Software implementations on HPC clusters, multi-cores (OpenCV), and GPGPUs (NVidia cuDNN) are already in commercial use, and FPGA acceleration of neural networks is available as well. In the short term, these software-implemented ANNs may be accelerated by commercial transistor-based neuromorphic chips or accelerators. Future emerging hardware technologies, like memcomputing and 3D stacking~\cite{Belhadj2014}, may bring neuromorphic computing to a new level and overcome some of the restrictions of von-Neumann-based systems in terms of scalability, power consumption, or performance.
Particularly attractive is the application of ANNs in those domains where, at present, humans outperform any currently available high-performance computer, e.g., in areas like vision, auditory perception, or sensory motor control. Neural information processing is expected to have a wide applicability in areas that require a high degree of flexibility and the ability to operate in uncertain environments where information usually is partial, fuzzy, or even contradictory. This technology is not only offering potential for large scale neuroscience applications, but also for embedded ones: robotics, automotive, smartphones, IoT, surveillance, and other areas. Even more computational power may be obtained by emerging technologies like quantum computing, molecular electronics, or novel nano-scale devices (memristors, spintronics, nanotubes (CMOL, i.e. combining CMOS with nanowire crossbars)~\cite{Rueckert2016}.
Neuromorphic computing appears as a key technology on several emerging-technology lists. Hence, neuromorphic technology developments are considered a powerful solution for future advanced computing systems~\cite{FET2017}. Neuromorphic technology is at an early stage, despite quite a number of applications appearing. To gain leadership in this domain, many important open questions still need urgent investigation (e.g. scalable resource-efficient implementations, online learning, and interpretability).
There is a need to continue maturing NMC systems and, at the same time, to demonstrate their usefulness in applications, for industry and also for society: more usability and more demonstrated applications. More focus on technology access might be needed in Europe. Regarding difficulties for NMC in EC framework programmes: integrated projects fitted the needs of NMC well in FP7 but are missing in H2020. For further research on neuromorphic technology, the FET-OPEN scheme could be a good path, as it requires several disciplines (computer science, material science, and engineering, in addition to neuroscience and modelling). Support is also needed for many small-scale interdisciplinary exploratory projects, to take advantage of newly emerging developments and to allow funding a new generation of developers with new ideas.
\subsubsection{Impact on Hardware} Creating the architectural design for NMC requires an integrative, interdisciplinary approach between computer scientists, engineers, physicists, and materials scientists. NMC would be efficient in energy and space and applicable as embedded hardware accelerator in mobile systems. The building blocks for ICs and for the Brain are the same at nanoscale level: electrons, atoms, and molecules, but their evolutions have been radically different. The fact that reliability, low-power, reconfigurability, as well as asynchronicity have been brought up so many times in recent conferences and articles, makes it compelling that the Brain should be an inspiration at many different levels, suggesting that future nano-architectures could be neural-inspired. The fascination associated with an electronic replication of the human brain has grown with the persistent exponential progress of chip technology. The present decade 2010–2020 has also made the electronic implementation more feasible, because electronic circuits now perform synaptic operations such as multiplication and signal communication at energy levels of 10 fJ, comparable to biological synapses. Nevertheless, an all-out assembly of $10^{14}$ synapses will remain a matter of a few exploratory systems for the next two decades because of several challenges~\cite{Rueckert2016}.
Up to now, there is little agreement on what a learning chip should actually look like. Companies have withheld details on the internal architecture of their learning accelerators. Most of the designs appear to focus on high throughput for low-precision data, backed by high-bandwidth memory subsystems. The effect of low precision on the learning result has not been analysed in detail yet. Recent work on low-precision implementations of backprop-based neural nets~\cite{Gupta2015} suggests that between 8 and 16 bits of precision can suffice for using or training DNNs with backpropagation. What is clear is that more precision is required during training than at inference time, and that some forms of dynamic fixed-point representation of numbers can be used to reduce how many bits are required per number. Using fixed-point rather than floating-point representations, and fewer bits per number, reduces the chip area, power requirements, and computing time needed for performing multiplications, and multiplications are the most demanding of the operations needed to use or train a modern deep network with backprop.
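To make the precision trade-off concrete, the following minimal Python sketch (our own illustration; the per-tensor power-of-two scaling scheme and all parameter values are assumptions, not taken from~\cite{Gupta2015}) quantizes a weight tensor to a dynamic fixed-point format, where a single scale shared by the whole tensor is chosen from the actual data range:
\begin{verbatim}
import numpy as np

def to_dynamic_fixed_point(x, bits=8):
    """Quantize x to a `bits`-bit dynamic fixed-point format.

    One power-of-two scale is shared by the whole tensor; "dynamic"
    refers to choosing that scale from the actual data range.
    """
    max_abs = np.max(np.abs(x))
    # Fractional bits still representable with a signed `bits`-bit value.
    frac_bits = bits - 1 - int(np.ceil(np.log2(max_abs + 1e-12)))
    scale = 2.0 ** frac_bits
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    q = np.clip(np.round(x * scale), qmin, qmax).astype(np.int32)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(scale=0.05, size=1000)      # typical DNN weight range
for bits in (8, 16):
    q, s = to_dynamic_fixed_point(w, bits=bits)
    print(bits, "bits, max error:", np.max(np.abs(w - q / s)))
\end{verbatim}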
\subsection{Quantum Computing}
Today's computers, both in theory (Turing machines) and in practice (personal computers) are based on classical bits which can be either 0 or 1 to perform operations. Modern Quantum Computing systems operate differently as they make use of quantum bits (qubits) which can be in a superposition state and entangled with other qubits \cite{QuantumComputing101}. Superposition and entanglement are thus the two main phenomena that one tries to exploit in quantum computing. Superposition implies that a qubit is both in the ground and the excited state. Entanglement means that two (or more) qubits can be combined with each other such that their states have become inseparable. This gives rise to very interesting properties that can be exploited algorithmically.
The computational power of a quantum computer is directly related to these phenomena and the number of qubits. A register of two qubits can be in a superposition of four basis states (00, 01, 10, and 11). With each added qubit, the state space of the quantum computer doubles and thus grows exponentially. All these qubit states (in superposition and entangled with each other) are manipulated in parallel when, e.g., gates are applied to them, which gives rise to the exponential computing power. The problem is that building a qubit is an extremely difficult task, as the required quantum state is very fragile and decoheres (i.e., loses its state information due to dynamic coupling with the external environment) rapidly. In addition, it is impossible to read out the state of a qubit, which ultimately is necessary to get the answer of a computation, without destroying the superposition state and thus the information contained in it. Upon measurement, a qubit basically turns into a classical bit that holds only a single value \cite{Metz2015}.
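A minimal state-vector simulation makes both phenomena concrete. The Python sketch below is purely illustrative (a classical simulation we add here, not how a physical quantum computer operates): a Hadamard gate puts the first qubit into superposition, and a CNOT entangles it with the second, producing a Bell state.
\begin{verbatim}
import numpy as np

# Two-qubit state vector over the basis states 00, 01, 10, 11,
# starting in |00>.
state = np.array([1, 0, 0, 0], dtype=complex)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],                  # control: first qubit
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

state = np.kron(H, I) @ state    # superpose the first qubit
state = CNOT @ state             # entangle it with the second

# Result: (|00> + |11>)/sqrt(2); measuring one qubit fixes the other.
print(np.round(state.real, 3))   # [0.707 0.    0.    0.707]
\end{verbatim}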
\subsubsection{Current State} A well-known but highly debated example of a quantum computer is the \mbox{D-Wave} machine built by the Canadian company \mbox{D-Wave} \cite{Metz2015}. \mbox{D-Wave} is based on quantum annealing and thus only usable for specific optimization problems. \mbox{D-Wave's} qubits are much easier to build than the equivalent in more traditional quantum computers, but their quantum states are also more fragile, and their manipulation less precise \cite{Gibney2017}. \mbox{D-Wave's} latest processor already has 4,000 qubits.
An alternative direction is to build a universal quantum computer based on quantum gates, such as the Hadamard gate, rotation gates, and CNOT. Google, IBM and Intel have all initiated research projects in this domain, and currently superconducting qubits seem to be the most promising direction \cite{Simonite2014, Simonite2015-IBM, Odeh2013, Clark2015}.
IBM has announced two new quantum computers as a continuation of the IBM~Q program. The first is a 16-qubit machine that will be used as a follow-on to the 5-qubit machine currently accessible through the IBM Quantum Experience program \cite{Quantumcomputingreport2017}. IBM~Q claims to have successfully built and tested its two most powerful universal quantum computing processors to date: 16 qubits for public use and a 17-qubit prototype commercial processor \cite{Ibmq}.
On the application side, Google, NASA, Lockheed Martin, Los Alamos National Lab, and Volkswagen have all focused on developing their own applications and software tools \cite{Hemsoth2017}. Volkswagen Group IT is cooperating with quantum computing company D-Wave on a research project for traffic flow optimization. The first research project is traffic flow optimization in the Chinese mega-metropolis of Beijing. Data scientists and AI specialists from Volkswagen have successfully programmed an algorithm to optimize the travel time of all public taxis in the city \cite{Volkswagen2017}.
Virginia Tech researchers are working on next-generation tools for D-Wave's 4,000-qubit quantum systems to help expand the application set and developer tools \cite{Hemsoth2017}.
A major threat to cybersecurity is that quantum computers could attack RSA and ECC encryption. Research towards Post-Quantum Cryptography (PQC) is a concern even for companies like Infineon Technologies \cite{Arnold2017}.
Currently, the European Commission is preparing the ground for the launch in 2018 of a €1 billion flagship initiative on quantum technologies \cite{Europa2016}.
\subsubsection{Perspective} Quantum computing promises enormous speed-ups for certain computations and even allows solving problems that are intractable for classical computing. Even though the challenges are substantial, they can be separated into physics-oriented and engineering-oriented ones. The physics challenges primarily concern the lifetime of qubits and the fidelity of qubit gate operations. The engineering challenges range from identifying relevant algorithms to providing compiler and runtime support. It is also clear that a quantum computer will require a supercomputer to provide the necessary quantum error correction mechanisms, as error rates of around $10^{-3}$ are not uncommon. As the quantum phenomena require mK (milli-Kelvin) conditions, the control logic should be brought as close as possible to the qubits, to reduce the transfer of data up to room-temperature computers. Understanding how conventional CMOS behaves under cryogenic conditions is another challenge.
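The scale of the error-correction overhead can be illustrated with a classical toy model. The Python sketch below is our own simplification (real quantum error correction, e.g. surface codes, must also handle phase errors and is far more involved); it shows how three-fold redundancy with majority voting suppresses a raw error rate of $10^{-3}$ to roughly $3 \times 10^{-6}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
trials, p = 1_000_000, 1e-3

# Encode logical 0 as 000 and apply independent bit flips to each copy.
flips = rng.random((trials, 3)) < p
decoded_wrong = flips.sum(axis=1) > 1   # majority vote fails only if
                                        # two or more copies flipped
print(decoded_wrong.mean())             # ~3 * p**2 = 3e-6
\end{verbatim}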
Quantum computing might have the advantage of solving some problems that cannot be solved with classical computers. One example is \textit{Shor's Algorithm} for decryption, which, at least assuming that a large-scale quantum computer consisting of millions of qubits can be built, could break a 2,000-bit encryption key in around one day, a task completely infeasible for conventional supercomputers.
In the short term, \textit{Quantum Key Distribution} (QKD) \cite{Odeh2013} can be used as a new key-exchange technology; it relies on the fact that, when a third party tries to eavesdrop, the entangled state is immediately destroyed and the intrusion is therefore detectable.
Further quantum algorithms are \cite{Kudrow2013}:
\begin{itemize}
\item \textit{Grover's Algorithm} is the second most famous result in quantum computing. Often referred to as ``quantum search'', Grover's Algorithm actually inverts an arbitrary function by searching $n$ input combinations for an output value in $O(\sqrt{n})$ time (see the sketch after this list).
\item \textit{Binary Welded Tree} is the graph formed by joining two perfect binary trees at the leaves. Given an entry node and an exit node, the Binary Welded Tree Algorithm uses a quantum random walk to find a path between the two. The quantum random walk finds the exit node exponentially faster than a classical random walk.
\item \textit{Boolean Formula Algorithm} can determine a winner in a two player game by performing a quantum random walk on a NAND tree.
\item \textit{Ground State Estimation Algorithm} determines the ground state energy of a molecule given a ground state wave function. This is accomplished using quantum phase estimation.
\item \textit{Linear Systems Algorithm} makes use of the quantum Fourier Transform to solve systems of linear equations.
\item \textit{Shortest Vector} is an NP-hard problem that lies at the heart of some lattice-based cryptosystems. The Shortest Vector Algorithm makes use of the quantum Fourier Transform to solve this problem.
\item \textit{Class Number} computes the class number of a real quadratic number field in polynomial time. This problem is related to elliptic-curve cryptography, which is an important alternative to the product-of-two-primes approach currently used in public-key cryptography.
\end{itemize}
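The Grover item above references the following Python sketch (a classical state-vector simulation we add for illustration), which shows the quadratic speed-up at work: for $n = 64$ database entries, about $\frac{\pi}{4}\sqrt{n} \approx 6$ iterations of the oracle and the ``inversion about the mean'' suffice to find the marked item with near certainty.
\begin{verbatim}
import numpy as np

n_items, marked = 64, 42
amp = np.full(n_items, 1 / np.sqrt(n_items))   # uniform superposition

iterations = int(round(np.pi / 4 * np.sqrt(n_items)))
for _ in range(iterations):
    amp[marked] *= -1              # oracle: flip the marked amplitude
    amp = 2 * amp.mean() - amp     # inversion about the mean

print(iterations)                          # 6 iterations for 64 items
print(np.argmax(amp**2), amp[marked]**2)   # item 42, probability ~0.997
\end{verbatim}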
It is expected that machine learning will be transformed into quantum learning: the prodigious power of qubits will narrow the gap between machine learning and biological learning \cite{Simonite2015-Google}.
In general, the focus is now on developing algorithms requiring a low number of qubits (a few hundred) as that seems to be the most likely reachable goal in the 10-15 year time frame.
\subsubsection{Impact on Hardware} An interesting point to investigate is a hardware architecture that better supports the power efficiency of quantum devices. If this is too complex, it should at least be possible to provide a hybrid architecture of both systems, enabling the simple sequences of an application to run as usual on classical computers and the complex ones on quantum co-processors. In this way, the system performance can be improved at runtime \cite{Clark2015}.
As pointed out earlier, a quantum computer will always be a heterogeneous computing platform in which conventional supercomputing facilities are combined with quantum processing units. How they interact and communicate is clearly a challenging line of research \cite{Kudrow2013}. Quantum computing looks more and more like a viable technology for the future, and Europe had best start developing serious activities, as indicated by the flagship project on quantum technologies that starts in 2018.
\section{Disruptive Technology (Alternative Ways of Computing)}
\label{sec-disruptive-alt}
\subsection{Nanotubes and Nanowires}
Nano structures like Carbon Nanotubes (CNT) or Silicon Nanowires (SiNW) exhibit a number of special properties which make them attractive for building logic circuits or memory cells.
CNTs are tubular structures of carbon atoms.
These tubes can be single-walled (SWNT) or multi-walled (MWNT) nanotubes.
Their diameter is in the range of a few nanometers.
Their electrical characteristics vary, depending on their molecular structure, between metallic and semiconducting \cite{wiki2017nanotube}.
A CNTFET consists of two metal contacts which are connected via a CNT.
These contacts are the drain and source of the transistor.
The gate is located next to or around the CNT and separated via a layer of silicon oxide \cite{rispal2009large}.
Also, crossed lines of appropriately selected CNTs can form a tunnel diode.
This requires the right spacing between the crossed lines.
The spacing can be changed by applying appropriate voltage to the crossing CNTs.
SiNWs can be formed in a bottom up self-assembly process.
This might lead to substantially smaller structures than those that can be formed by lithographic processes.
Additionally, SiNWs can be doped and thus, crossed lines of appropriately doped SiNW lines can form diodes.
Both, CNTs and SiNWs can be used to build nano-crossbars, which logically are similar to a PLA (programmable logic array).
They offer wired-AND conjunctions of the input signals.
Together with inversion/buffering facilities, they can create freely programmable logic structures.
The density of active elements is much higher than with individually formed CNTFETs.
\subsubsection{Current state} In September 2013, Max Shulaker from Stanford University published a computer with digital circuits based on carbon nanotubes.
It contains a 1-bit processor, consisting of 178 transistors, and runs at a frequency of 1 kHz~\cite{shulaker2013carbon}.
Nanotube-based RAM is a proprietary memory technology for non-volatile random access memory developed by Nantero (this company also refers to this memory as NRAM) and relies on crossing CNTs as described above.
An NRAM ``cell'' consists of a non-woven fabric matrix of CNTs located between two electrodes.
The resistance state of the fabric is high (representing the \emph{off} or $0$ state) when (most of) the CNTs are not in contact, and low (representing the \emph{on} or $1$ state) when they are.
Switching the NRAM is done by adjusting the space between the layers of CNTs.
In theory NRAM can reach the density of DRAM while providing performance similar to SRAM \cite{wiki2017nanoram}.
Nano crossbars have been created from CNTs and SiNWs \cite{devisree2016nanoscale}.
In both cases, the fundamental problem is the high defect density of the resulting circuits.
Under normal semiconductor classifications, these devices would be considered broken.
In fact, usage of these devices is only possible if the individual defects of the devices are respected during the logic mapping stage of the HW synthesis \cite{zamani2011self}, as the sketch below illustrates.
Currently, research on nanowires and nanotubes is less active than in the early 2000s.
Nevertheless, some groups are pushing the usage of nanowires and nanotubes for the creation of logic circuits.
At the same time, more research is going on to deal with the high defect density.
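The defect-aware mapping mentioned above can be illustrated with a deliberately simplified Python sketch (our own toy model; real defect-aware synthesis flows as in~\cite{zamani2011self} are far more sophisticated). Product terms are greedily assigned to crossbar rows whose required junctions are known, from a test phase, to be functional:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 8, 6
# True = junction usable; ~15% stuck-open defects, known from testing.
usable = rng.random((rows, cols)) > 0.15

# Each product term needs certain input columns to be programmable.
product_terms = [{0, 2}, {1, 3, 4}, {2, 5}, {0, 4}]

assignment, free_rows = {}, set(range(rows))
for term_id, needed in enumerate(product_terms):
    # Greedy: take the first free row whose needed junctions all work.
    for r in sorted(free_rows):
        if all(usable[r, c] for c in needed):
            assignment[term_id] = r
            free_rows.remove(r)
            break
    else:
        raise RuntimeError(f"term {term_id} is unmappable on this die")

print(assignment)
\end{verbatim}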
\subsubsection{Perspective} It will take an unknown number of years before NRAM drives might reach the production stage \cite{compworld2017nanomem}.
It is unclear whether the defect density can be substantially lowered by better fabrication processes.
\subsubsection{Impact on hardware} CNTs and SiNWs can be utilized for a lot of different applications in several areas of research.
The construction of carbon nanotube field-effect transistors (CNTFETs) and nanotube-based RAM (or Nano-RAM) are important for HPC.
CNTs are very good thermal conductors.
Thus, they could significantly improve conducting heat away from CPU chips \cite{extremtech2017conductive}.
Nano crossbar circuits are inherently programmable.
This leads to more freedom, if the programmability is taken into account during the HW design stage.
Potentially, customizable HW is available in each component once nano crossbars are employed as logic circuits.
\subsection{Graphene}
In 2010 two physicists at Manchester University in the U.K. shared a Nobel Prize in Physics for their work on a new wonder material: graphene, a flat sheet of carbon with the thickness of a single atom.
Konstantin Novoselov and Andre Geim discovered the material by applying plain old sticky tape to simple graphite \cite{moskvitch2015graphene}.
Graphene can be grown on a semiconductor, i.e. on the surface of a germanium crystal, which is seen as a big step towards manufacturability, see \cite{benchoff2015graphene,jacobberger2015direct}.
\subsubsection{Current state} In 2010, IBM researchers demonstrated a radio-frequency graphene transistor with a cut-off frequency of 100 Gigahertz.
This is the highest achieved frequency so far for any graphene device.
In 2014, engineers at IBM Research have built the world's most advanced graphene-based chip, with performance that's 10,000 times better than previous graphene ICs.
The key to the breakthrough is a new manufacturing technique that allows the graphene to be deposited on the chip without it being damaged \cite{ibm2010made}.
The Graphene Flagship is an EC flagship project with considerable research efforts in making graphene useful; however, it is still focused more on the materials-science perspective than on the potential usage of graphene for future computer technology.
Graphene is among the strongest materials known and has attractive potential also outside of computer technology, e.g., as electrodes for solar cells, for use in sensors, as the anode electrode material in lithium batteries and as efficient zero-band-gap semiconductors \cite{rodewald2008researchers}.
The use of graphene in CMOS circuits has been demonstrated in different settings \cite{chen2010fully,lee2010low}.
Also, graphene has been used in digital circuits as an interconnect material for an FPGA \cite{lee2013demonstration}.
In its most advanced form, graphene is subject to electrostatic doping which results in a behaviour that resembles classical p-type and n-type semiconductors.
Thus, graphene layers doped in this way can form p-n junctions, which in turn can be used to build so-called Pass-XNOR gates \cite{tanachutiwat2010reconfigurable}.
Unfortunately, these gates require a clocked evaluation signal which results in a two-phased operation and limits the operating frequency of the logic.
By clever combination of several such Pass-XNOR gates, one can create a real PLA \cite{tenace2016graphene}.
Currently, the focus of this research is on low-power operation, and the resulting circuit is not very fast due to the two-phased logic operation.
\subsubsection{Perspective} Graphene is a promising technology in laboratory.
Due to the fact that the new graphene manufacturing method is actually compatible with standard silicon CMOS processes, it will probably be possible to realize commercial graphene computer chips in the future \cite{anthony2014ibmbuilds}.
Graphene as an interconnect material offers many advantages which might play an important role in future chip architectures, since data transport over longer distances will be much faster and less power hungry than current metal based transmission structures.
Usage of graphene as active element in logic circuits is still in its infancy.
Electrostatically doped graphene layers can be used to build p-n junctions. Nevertheless, these junctions cannot yet be used to build high-speed, high-density logic circuits.
It is unclear whether other basic circuit design approaches will help to circumvent this drawback.
\subsubsection{Impact on hardware} Graphene has an excellent capacity for conducting heat and electricity.
New on-chip communication architectures might come up due to these good conductance values.
Using graphene as active element results in PLA structures.
Thus, similar opportunities and problems apply to these PLAs (programmability + defect density).
\subsection{Diamond Transistors}
Diamonds can be processed in a way that they act like a semiconductor.
Diamond-based transistors can be fabricated.
\subsubsection{Current state} Researchers at the Tokyo Institute of Technology fabricated a diamond junction field-effect transistor (JFET) with lateral p-n junctions.
The device shows excellent physical properties such as a wide band gap of 5.47 eV, a high breakdown field of 10 MV/cm (3–4 times higher than 4H-SiC and GaN), and a high thermal conductivity of 20 W/(cm\,K) (4–10 times higher than 4H-SiC and GaN).
It has been found that this diamond transistor works with excellent electrical characteristics, up to 723 K \cite{twasaki2013high}.
\subsubsection{Perspective} Currently the gate length of the fabricated diamond transistors is in the single-digit micrometer range.
Compared with the current 22nm technology with gate lengths of about 25nm \cite{wiki201722nanometer}, a reduction in size is absolutely necessary in order to allow fast working circuits (limitation of the propagation delays).
Producing reasonable diamond wafers for mass production could be possible with the method of \cite{aida2016fabrication}.
The time for producing diamond wafers is another factor that has to be reduced drastically to compete with other technologies.
\subsubsection{Impact on hardware} The high thermal conductivity of diamond, which is several magnitudes higher than that of conventional semiconductor material, allows faster heat dissipation.
This could solve the temperature problem of stacked dies.
Switching energy of a diamond based semiconductor is expected to be much smaller than silicon and the maximum operating temperature can be much higher.
It may "revive" the traditional Moore law.
\section{Beyond CMOS}
\label{sec-disruptive-beyond}
\chapter{Disruptive Technologies in Hardware/VLSI}
\label{sec-disruptive}
\section{Introduction}
Roadmapping beyond the upcoming Exascale machines (2023 -- 2030) is extremely speculative. The basic idea of the Eurolab-4-HPC vision is therefore to assess potentially disruptive technologies and summarize their impacts on HPC hardware as \emph{IF~...~THEN~...} statements, i.e. \emph{IF} a disruptive technology becomes available, \emph{THEN} the potential impact on hardware could be as described.
We survey the current state of research and development and its potential for the future of the following VLSI/hardware technologies:
\begin{itemize}
\setlength\itemsep{0.25\baselineskip}
\setlength\parskip{0pt}
\item CMOS scaling
\item Die stacking and 3D chip technologies
\item Non-volatile Memory (NVM) technologies
\item Photonics
\item Resistive Computing
\item Neuromorphic Computing
\item Quantum Computing
\item Nanotubes and Nanowires
\item Graphene
\item Diamond Transistors
\end{itemize}
\vskip\baselineskip
To sort the different technologies, we define three types of disruptive technology innovation besides sustaining technology. The above technologies are classified as follows:
\begin{itemize}
\item \emph{Sustaining:} An innovation that does not principally affect existing HPC. Innovations improving HPC hardware in ways that were generally expected.
\vspace{-1.5ex}
\begin{itemize}
\item CMOS scaling and Die stacking, see section~\ref{sec-sustaining}
\end{itemize}
\item \emph{Disruptive technologies that create a new line of HPC hardware} in a way generally expected.
\vspace{-1.5ex}
\begin{itemize}
\item NVM and Photonics, see section~\ref{sec-disruptive-hw}
\end{itemize}
\item \emph{Disruptive technologies that potentially create alternative ways of computing}.
\vspace{-1.5ex}
\begin{itemize}
\item Resistive, Neuromorphic, and Quantum Computing, see section~\ref{sec-disruptive-alt}
\end{itemize}
\item \emph{Disruptive technologies that potentially replace CMOS} for processor logic.
\vspace{-1.5ex}
\begin{itemize}
\item Nanotube, Graphene, and Diamond technologies, see section~\ref{sec-disruptive-beyond}
\end{itemize}
\end{itemize}
We summarize potential long-term impacts of Disruptive Technologies on HPC hardware in section~\ref{sec-impact} of the vision. Such impacts could concern the processor logic, the memory hierarchy, and potential hardware accelerators.
\chapter{Impact of Disruptive Technologies}
\label{sec-impact}
\section{Summary of Potential Long-Term Impacts of Disruptive Technologies for HPC Hardware}
\label{Summary-of-Potential-Long-Term-Impacts-of-Disruptive-Technologies-for-HPC-Hardware}
Potential long-term impacts of disruptive technologies could concern the processor logic, the processor-memory interface, the memory hierarchy, and future hardware accelerators.
\subsection{New Ways of Computing}
Processor logic could be totally different if materials like graphene, nanotubes or diamond were to replace classical integrated circuits based on silicon transistors, or could integrate effectively with traditional CMOS technology to overcome its current major limitations, such as limited clock rates and heat dissipation.
A physical property these materials share is their high thermal conductivity: diamond, for instance, can be used as a replacement for silicon, allowing diamond-based transistors with excellent electrical characteristics. Graphene and nanotubes are highly electrically conductive and could reduce the amount of heat generated because of the lower dissipation power, which makes them more energy efficient. With the help of these properties, less heat is to be expected in the critical spots, which allows much higher clock rates and highly integrated packages. Whether such new technologies will be suitable for computing in the next decade is very speculative.
Furthermore, Photonics, a technology that uses photons for communication, can be used to replace communication busses to enable a new form of inter- and intra-chip communication.
Current CMOS technology will presumably continue to scale over the next decade, down to 4 or 3 nm. However, scaling CMOS technology leads to steadily increasing costs per transistor and power consumption, and to less reliability. Die stacking could result in 3D many-core microprocessors with reduced intra-core wire lengths, enabling high transfer bandwidths, lower latencies and reduced communication power consumption.
\subsection{New Processor-Memory Interfaces}
Near-memory computing and in-memory computing will change the interface between the processor and the memory: in the future, memory will not only be accessed by loads and stores or cache-line misses, but will additionally provide a semantically stronger access pattern based on simple operations on a large number of memory cells.
\paragraph{Near-memory computing} to be considered a near- and mid-term realizable concept, is characterized by logic, e.g. small cores, located directly at the memory in order to carry out pre-processing steps, such as stencil or vector operations, on data stored in memory, caches or so-called storage class memory (SCM). It is accepted that, for energy reasons, it is preferable to process data in situ, directly where they are located, before they are sent to the processor, in particular if this pre-processing reduces the amount of data.
\paragraph{In-memory computing} goes a step further in that the NVM cell itself is not only a storage cell but becomes an integral part of the processing step. This can further reduce the energy consumption and the area requirement in comparison to near-memory computing. However, this technology still has to be improved and is therefore considered at least a mid-term or probably a more long-term solution.
In-memory computing uses the NVM cells not only for storing but as an inherent part of the processing step itself, e.g. to pre-process data in steps embedded in a much more complex processing task. This will help to face the challenge of processing and holding the large data volumes that we see in HPDA.
In-memory computing will also strongly influence edge computing approaches, for which new architectures have to be found that process data directly at the sensors where they are captured, in order to reduce, as described above, the amount of data that has to be transferred to coarser-grained cores for post-processing.
In-memory and near-memory computing concepts will rely on 3D stacking techniques for an efficient realization. Near-memory computing refers to the coupling of logic cores with, e.g., hybrid memory cells; in-memory concepts refer to the coupling of NVM cell arrays with conventional CMOS that works as a memory controller to control the NVMs. This coupling has to be realized in a so-called BEOL (back end of line) step, i.e. the NVM cells are deposited in a post-processing step on the top metal layer of a CMOS chip.
\subsection{New Memory Hierarchies}
3D stacking will also be used to scale Flash memories, because 2D NAND Flash technology does not scale further. In the long run, even 3D Flash memories will probably be replaced by memristor or other non-volatile memory (NVM) technologies. These, depending on the actual type, allow higher structural density, less leakage power, faster read and write accesses, and higher endurance, and can nevertheless be more cost efficient.
However, the whole memory hierarchy may change in the upcoming decade. DRAM scaling will only continue with new technologies, in fact NVMs, which will deliver non-volatile memory potentially replacing or being used in addition to DRAM. Some new non-volatile memory technologies could even be integrated on-chip with the microprocessor cores and offer orders of magnitude faster read/write accesses and also much higher endurance than Flash. Intel demonstrated the possible fast memory accesses of the 3D-XPoint NVM technology used in its Optane products. HP's computer architecture proposal called ``The Machine'' targets a machine based on new NVM memory and photonic busses. The Machine places memory, rather than processors, at the centre. This so-called Memory-Centric Computing unifies memory and storage into one vast pool of memory. HP proposes an advanced photonic fabric to connect the memory and processors. Using light instead of electricity is the key to rapidly accessing any part of the massive memory pool while using much less energy.
\begin{figure*}[ht]
\centering
\begin{tikzpicture}
{\sffamily\small
\fill[background] (-4.5,0.75) rectangle ++(15.0,-8.5);
\node (top) at (0,0) {};
\node (left) at (-4.0,-7.25) {};
\node (right) at (4.0,-7.25) {};
\draw[fill=table] (left.center) -- (top.center) -- (right.center) -- (left.center);
\begin{scope}
\clip (left.center) -- (top.center) -- (right.center) -- (left.center);
\draw (-5,-1.875) -- ++(10,0);
\draw (-5,-2.625) -- ++(10,0);
\draw (-5,-3.375) -- ++(10,0);
\draw (-5,-4.875) -- ++(10,0);
\draw (-5,-5.625) -- ++(10,0);
\draw (-5,-6.375) -- ++(10,0);
\end{scope}
\draw[ultra thick] (-4.3,-4.125) -- ++(7.8,0);
\node[align=center,anchor=south] (regs) at (0,-1.85) {CPU\\ Registers};
\node (l1) at (0,-2.25) {L1 \$};
\node (l2) at (0,-3) {L2 \$};
\node (l3) at (0,-3.75) {L3 \$};
\node (mem) at (0,-4.5) {Memory};
\node[align=center] (scmem1) at (0,-5.25) {Storage Class Memory SCM1};
\node[align=center] (scmem2) at (0,-6.0) {Storage Class Memory SCM2};
\node (bulk) at (0,-6.75) {Bulk Storage};
\node[rotate=90,anchor=west] at (-4.05,-4.0) {Processor Chip};
\node[rotate=90,anchor=east] at (-4.05,-4.25) {Off Chip};
\node[anchor=west,font=\footnotesize] at (4.2,-2.625) {SRAM, expensive cost/mm\textsuperscript{2}};
\node[anchor=west,font=\footnotesize] at (4.2,-3.75) {SRAM, STT-RAM};
\node[anchor=west,font=\footnotesize] at (4.2,-4.5) {3D-RAM, high bandwidth memory / PCM};
\node[anchor=west,font=\footnotesize] at (4.2,-5.25) {PCM or other NVM, fast read, less fast write};
\node[anchor=west,font=\footnotesize] at (4.2,-6.0) {NAND Flash, low cost/mm\textsuperscript{2}};
\node[anchor=west,font=\footnotesize] at (4.2,-6.75) {Disk, Cloud, low cost/mm\textsuperscript{2}};
}
\end{tikzpicture}
\caption{Usage of NVM in a future complex supercomputer memory hierarchy.}
\end{figure*}
The Machine is a first example of the new Storage-Class Memory (SCM), i.e., a non-volatile memory technology in between memory and storage, which may enable new data access modes and protocols that are neither ``memory'' nor ``storage''. It would particularly increase the efficiency of fault-tolerance checkpointing, which is potentially needed as shrinking CMOS processor logic leads to less reliable chips. There is a major impact from this technology on software and computing. SCM provides orders of magnitude increase in capacity with near-DRAM latency, which would push software towards in-memory computing.
\subsection{New Hardware Accelerators}
Resistive Computing, Neuromorphic Computing and Quantum Computing are promising technologies that may be suitable for new hardware accelerators, but less so for new processor logic. Resistive computing promises a reduction in power consumption and massive parallelism. It could enforce memory-centric and reconfigurable computing, leading away from the Von Neumann architecture. Humans can easily outperform currently available high-performance computers in tasks like vision, auditory perception and sensory motor control. As Neuromorphic Computing would be efficient in energy and space for artificial neural network applications, it would be a good match for these tasks. Further limitations of current computers can be found among the unsolved problems in computer science. Quantum Computing might solve some of these problems, with important implications for public-key cryptography, searching, and a number of specialized computing applications.
\section{Applying Disruptive Technologies More Aggressively}
A valuable way to evaluate potential disruptive technologies is to examine their impact on the fundamental assumptions that are made when building a system using current technology, in order to determine whether future technologies have the potential to change these assumptions, and if yes what the impact of that change is.
\subsection{Power is a First-Class Citizen when Committing to New Technology}
For the last decade, power and thermal management has been of high importance. The entire market focus has moved from achieving better performance through single-thread optimizations, e.g., speculative execution, towards simpler architectures that achieve better performance per watt, provided that vast parallelism exists. The problem with this approach is that it is not always easy to develop parallel programs and moreover, those parallel programs are not always performance portable, meaning that each time the architecture changes, the code may have to be rewritten.
Research on new materials, such as graphene, nanotubes and diamonds as (partial) replacements for silicon can turn the tables and help to produce chips that could run at much higher frequencies and with that may even use massive speculative techniques to significantly increase the performance of single threaded programs. A change in power density vs. cost per area will have an effect on the likelihood of dark silicon.
The reasons why such technologies are not state of the art yet are their immature state of research, which is still far from fabrication, and the unknown production costs of such high-performing chips. But we may assume that in 10 to 20 years these technologies may be mature enough, or other such technologies will have been discovered.
Going back to improved single thread performance may be very useful for many segments of the market. Reinvestment in this field is essential since it may change the way we are developing and optimizing algorithms and code.
Dark Silicon (i.e. large parts of the chip having to stay idle for thermal reasons) may not happen if specific new technologies mature. New software and hardware interfaces will be the key to successfully applying future disruptive technologies.
\subsection{Locality of References}
Locality of references is a central assumption of the way we design systems. The consequence of this assumption is the need for hierarchically arranged memories, 3D stacking and more.
But new technologies, including optical networks on die and Terahertz based connections, may reduce the need for preserving locality, since the differences in access time and energy costs to local memory vs. remote storage or memory may not be as significant in future as it is today.
When such new technologies find their practical use, we can expect a massive change in the way we are building hardware and software systems and are organizing software structures.
The restriction here is purely the technology, but with all the companies and universities that work on this problem, we may consider it as lifted in the future.
\subsection{Digital and Analog Computation}
The way today's computers are built is based on the digital world. This allows the user to get accurate results, but at the cost of time, energy consumption and performance. But accurate results are not always needed. Relaxing this requirement could enable more efficient execution units based on analog, or even mixed analog/digital, technologies. Such an approach could revolutionize the way future systems are programmed and used.
Currently the main problem is that we have no effective way to reason at run time about the amount of inaccuracy we introduce into a system.
\subsection{End of Von Neumann Architecture}
The Von Neumann architecture assumes the use of central execution units that interface with different layers of memory hierarchies. This model has served as the execution model for decades. But it is not effective in terms of performance for a given power budget.
New technologies like memristors may allow an on-chip integration of memory which in turn grants a very tightly coupled communication between memory and processing unit.
Assuming that these technologies will be mature, we could change algorithms and data structures to fit the new design and thus allow memory-heavy ``in-memory'' computing algorithms to achieve significantly better performance.
We may need to replace the notion of general-purpose computing with clusters of specialized compute solutions. Accelerators will be ``application class'' based, e.g. for deep learning (such as Google's TPU and Fujitsu's DLU), molecular dynamics, or other important domains.
It is important to understand the usage model in order to understand future architectures/systems.
\subsection{Open Questions and Research Challenges}
The discussion above leads to the following principal questions and research challenges for future HPC hardware architectures, and implicitly for software and applications as well:
\begin{itemize}
\item Impact, if power and thermal constraints are no longer limiters (frequency increase vs. many-cores)?
\item Impact, if Dark Silicon can be avoided?
\item Impact, if communication becomes so fast that locality will not matter?
\item Impact, if data movement could be eliminated (and so data locality)?
\item Impact, if memory and I/O could be unified and efficiently be managed?
\end{itemize}
Evolution of system complexity: will systems become more complex or less complex in the future?
\section{Summary of Potential Long-Term Impacts of Disruptive Technologies for HPC Software and Applications}
New technologies will lead to new hardware structures with demands on system software and programming environment and also opportunities for new applications.
CMOS scaling will require system software to deal with higher fault rates and lower reliability. The programming environment and algorithms may also be affected, e.g., leading to specifically adapted approximate computing algorithms.
The most obvious change will result from changes in memory technology. NVM will prevail independent of the specific memristor technology that will win. The envisioned Storage-Class Memory (SCM) will influence system software and programming environments in several ways:
\begin{itemize}
\item Memory and storage will be accessed in a uniform way.
\item Computing will be memory-centric.
\item Faster memory accesses through the combination of NVM and photonics could lead either to an even more complex or to a shallower memory hierarchy, possibly even a flat memory where latency no longer matters.
\item Read accesses will be faster than write accesses; software therefore needs to deal with the read/write disparity, e.g., by database algorithms that favour reads over writes.
\item NVM will allow in-memory checkpointing, i.e. checkpoint replication with memory to memory operations.
\item Software and hardware need to deal with the limited endurance of NVM memory.
\end{itemize}
A lot of open research questions arise from these changes for software.
Full 3D stacking may pose further requirements on system software and programming environments:
\begin{itemize}
\item The higher throughput and lower memory latency when stacking memory on top of processing may require changes in programming environments and application algorithms.
\item Stacking specialized (e.g. analog) hardware on top of processing and memory elements leads to new (embedded) high-performance applications.
\item Stacking hardware accelerators together with processing and memory elements requires programming environment and algorithmic changes.
\item 3D multicores require software optimizations able to efficiently utilize the characteristics of the third dimension, e.g., different latencies and throughput for vertical versus horizontal interconnects.
\item 3D stacking may lead to new form factors that allow for new (embedded) high-performance applications.
\end{itemize}
Photonics will be used to speed up all kinds of interconnects – layer to layer, chip to chip, board to board, and compartment to compartment – with impacts on system software, programming environments and applications such that:
\begin{itemize}
\item A flatter memory hierarchy could be reached (combined with 3D stacking and NVM), requiring software changes for efficiency and redefining what is local in the future.
\item Energy-efficient Fourier-based computation is possible, as proposed in the Optalysys project.
\item The intrinsic end-to-end nature of an efficient optical channel will favour broadcast/multicast based communication and algorithms.
\item A full photonic chip will totally change software in a currently rarely investigated manner.
\end{itemize}
A number of new technologies will lead to new accelerators. We envision programming environments that allow defining accelerator parts of an algorithm independent of the accelerator itself. OpenCL and OpenACC are such languages distinguishing ``general purpose'' computing parts and accelerator parts of an algorithm, where the accelerator part can be compiled to GPUs, FPGAs, or many-cores like the Xeon Phi. Such programming environment techniques and compilers have to be enhanced to improve performance portability and to deal with potentially new accelerators as, e.g., neuromorphic chips, quantum computers, in-memory resistive computing devices etc. System software has to deal with these new possibilities and map computing parts to the right accelerator.
Neuromorphic Computing is particularly attractive for applying artificial neural network and deep learning algorithms in those domains where, at present, humans outperform any currently available high-performance computer, e.g., in areas like vision, auditory perception, or sensory motor control. Neural information processing is expected to have wide applicability in areas that require a high degree of flexibility and the ability to operate in uncertain environments where information usually is partial, fuzzy, or even contradictory. The success of the IBM Watson computer is an example of such new application possibilities. It is envisioned that neuromorphic computing could help in understanding the multi-level structure and function of the brain, and even reach an electronic replication of the human brain at least in some areas such as perception and vision.
Quantum Computing potentially solves problems impossible for classical computing, but poses challenges for compiler and runtime support. Moreover, quantum error correction is needed due to high error rates ($10^{-3}$). Applications of quantum computers could be new encryption schemes, quantum search, quantum random walks, etc.
Resistive Computing may lead to massively parallel computing based on data-centric and reconfigurable computing paradigms. In-memory computing algorithms may be executed on specialised resistive computing accelerators.
Quantum Computing, Resistive Computing as well as Graphene and Nanotube-based computing are still highly speculative hardware technologies.
\chapter{Vertical Challenges: Green ICT, Energy and Resiliency}
\label{sec-vertical}
\section{GreenICT}
The term ``Green ICT'' refers to the study and practice of environmentally sustainable computing. The 2010 estimates put ICT at 3\% of the overall carbon footprint, ahead of the airline industry~\cite{Smarr2010}. Modern large-scale data centres already consume multiple tens of MW, on par with estimates for Exascale HPC sites. Therefore, computing is among the heavy consumers of electricity and subject to sustainability considerations with high societal impact.
For the HPC sector the key contributors to electricity consumption are the computing, communication, and storage systems and the infrastructure including the cooling and the electrical subsystems. Power usage effectiveness (PUE) is a common metric characterizing the infrastructure overhead (i.e., electricity consumed in IT equipment as a function of overall electricity). Data centre designs taking into consideration sustainability~\cite{Shuja2016} have reached unprecedented low levels of PUE. Many EU projects have examined CO2 emissions in cloud-based services~\cite{ECO2017} and approaches to optimize air cooling~\cite{CoolEmAll2017}.
It is expected that the \mbox{(pre-)Exascale} IT equipment will use direct liquid cooling without use of air for the heat transfer~\cite{DEEP2017}. Liquid temperatures above 45\,°C open the possibility of ``free cooling'' in all European countries and avoid the energy cost of water refrigeration. Liquid cooling has already been employed in HPC since the earlier Cray machines and continues to play a key role. The CMOSAIC project~\cite{CMOSAIC2017} has demonstrated two-phase liquid cooling, previously shown for rack-, chassis- and board-level cooling, for 3D-stacked ICs as a way to increase thermal envelopes. The latter is of great interest especially towards the end of Moore's era, where stacking is emerging as the only path forward in increasing density. Many vendors are exploring liquid immersion technologies with mineral-based oil and other materials to enable higher power envelopes.
We assert that, to reach Exascale performance, an improvement must be achieved in the Total Power usage effectiveness (TUE) metric~\cite{TUE2017}. This metric highlights the energy conversion costs within the IT equipment to drive the computing elements (processor, memory, and accelerators). As a rule of thumb, in pre-Exascale servers the power conversion circuitry consumes 25\% of all power delivered to a server. A facility targeting a TUE close to one will focus the power dissipation on the computing elements (processor, memory, and accelerators).
The power dissipation (and therefore also the heat generation) of the CMOS computing elements (processor, memory, accelerators) is characterized by the leakage current, which doubles for every 10\,°C increase in temperature~\cite{Wolpert2012}. Therefore the coolant temperature influences the leakage current and may be used to balance the overall energy effectiveness of the data centre for the applications. We expect that the \mbox{(pre-)Exascale} pilot projects, in particular those funded by the EU, will address the creation and usage of management software for global energy optimization in the facility~\cite{Li2012}.
Beyond Exascale we expect to have results from the research related to the CMOS devices cooled to low temperatures~\cite{Ellsworth2001} (down to Liquid Nitrogen scale, 77\,K). The expected effect is the decrease of the leakage current and increased conductivity of the metallic connections at lower temperatures. We suggest that an operating point on this temperature scale can be found with significantly better characteristics of the CMOS devices. Should such operating point exist, a practical way to cool such computational device must be found. This may be one possible way to overcome the CMOS technology challenges beyond the feature size limit of \SI{10}{\nm}~\cite{Hu2017}. We suggest that such research funded in Europe may yield significant advantage to the European HPC position beyond Horizon 2020 projects.
The electrical subsystem also plays a pivotal role in Green ICT. Google has heavily invested in renewables and announced in 2017 that their data centres will be energy neutral. However, as big consumers of electricity, HPC sites will also require a tighter integration of the electrical subsystem with both the local/global grids and the IT equipment. Modern UPS systems are primarily designed to mitigate electrical emergencies. Many researchers are exploring the use of UPS systems as energy storage to regulate load on the electrical grid both for economic reasons, to balance the load on the grid or to tolerate the burst of electricity generated from renewables. The Net-Zero data centre at HP and GreenDataNet~\cite{GreenDataNet2017} are examples of such technologies.
\section{Resiliency}
Preserving data consistency in case of faults is an important topic in HPC. Individual hardware components can fail, causing software running on them to fail as well. To preserve data consistency, system software will take down the system if it experiences an unrecoverable error. At this point the machine (or component) must be restarted to resume the service from a well-defined state.
The traditional failure recovery technique is to restart the whole user application from a user-assisted coordinated checkpoint taken at a synchronization point. The optimal checkpoint period is a function of the time/energy spent writing the checkpoint and the expected failure rate~\cite{Plank2001}. The challenge is to estimate the failure rate, since this parameter is not known in general. If a failure could be predicted, preventive action such as a checkpoint could be taken to mitigate the risk of the pending failure.
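A first-order approximation of this trade-off, often attributed to Young, sets the optimal interval to $\sqrt{2 \cdot C \cdot \mathrm{MTBF}}$ for a checkpoint cost $C$ that is small relative to the mean time between failures. The following minimal Python sketch illustrates the scaling (the numbers are illustrative assumptions, not measurements):
\begin{verbatim}
import math

def optimal_checkpoint_interval(checkpoint_cost_s, mtbf_s):
    """Young's approximation: T_opt ~ sqrt(2 * C * MTBF)."""
    return math.sqrt(2 * checkpoint_cost_s * mtbf_s)

C = 5 * 60               # 5 minutes to write a checkpoint
for mtbf_h in (24, 4):   # larger machines -> shorter MTBF
    T = optimal_checkpoint_interval(C, mtbf_h * 3600)
    print(f"MTBF {mtbf_h:2d} h -> checkpoint every {T / 3600:.2f} h")
# MTBF 24 h -> checkpoint every 2.00 h
# MTBF  4 h -> checkpoint every 0.82 h
\end{verbatim}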
No deterministic failure prediction algorithm is known. However, collecting sensor data and applying Machine Learning (ML) to this sensor data yields good results~\cite{Turnbull2003}. We expect that \mbox{(pre-)Exascale} machine designs, especially those funded by the EU, will incorporate sufficient sensors for failure prediction and monitoring. This may be a significant challenge, as the number of components and the complexity of the architecture will increase. Therefore, the monitoring data stream will also increase, leading to a fundamental Big Data problem just to monitor a large machine.
We see this monitoring problem as an opportunity for the EU funding of fundamental research in ML techniques for real-time monitoring of hardware facilities in general. The problem will not yet be solved in the next round of the \mbox{(pre-)Exascale} machine development. Therefore, we advocate a targeted funding for this research to extend beyond Horizon 2020 projects.
The traditional failure recovery scheme with the coordinated checkpoint may be relaxed if fault-tolerant communication libraries are used~\cite{Fagg2000}. In that case the checkpoints do not need to be coordinated and can be done per node when the computation reaches a well-defined state. When a million threads are running in a single scalable application, the capability to restart only a few communicating threads after a failure is important.
Non-volatile memories may be available for the checkpoints; they are a natural place to dump the HBM contents. We expect these developments to be explored on the time scale of \mbox{(pre-)Exascale} machines. It is clear that the system software will incorporate failure mitigation techniques and may provide feedback to hardware-based resiliency techniques such as ECC and Chipkill. Software-based resiliency has to be designed together with hardware-based resiliency. Such a design is driven by the growing complexity of the machines, with a variety of hardware resources where each resource has its own failure pattern and recovery characteristics.
On that note, compiler-assisted fault tolerance may bridge the separation between hardware-only and software-only recovery techniques~\cite{Herault2015}. This includes automation for checkpoint generation with optimization of the checkpoint size~\cite{Plank1995}. More research is needed to implement these techniques for the Exascale and post-Exascale architectures with their new levels of memory hierarchy and increased complexity of computational resources. We see here an opportunity for EU funding beyond the Horizon 2020 projects.
Stringent requirements on hardware consistency and failure avoidance may be relaxed if an application algorithm incorporates its own fault detection and recovery. Fault detection is an important aspect, too. Currently, applications rely on system software to detect a fault and bring down (parts of) the system to avoid data corruption. There are many application environments that adapt to varying resource availability at the service level; Cloud computing works in this way. Doing the same from within an application is much harder. Recent work on ``fault-tolerant'' message-passing communication moves the fault detection burden to the library, as discussed in the previous section. Still, algorithms must be adapted to react constructively after such fault detection, either by ``rolling back'' to the previous state (i.e. restarting from a checkpoint) or ``going forward'', restoring the state based on algorithmic knowledge. The forward action is the subject of substantial research for the \mbox{(pre-)Exascale} machines and typically requires algorithm redesign. For example, a possible recovery mechanism is based on iterative techniques exploited in Linear Algebra operations~\cite{Langou2007}.
Algorithm-Based Fault Tolerance (ABFT) may also use fault detection and recovery from within the application. This requires appropriate data encoding, an algorithm to operate on the encoded data, and the distribution of the computation steps in the algorithm among (redundant) computational units~\cite{Huang1984}. We expect these aspects to play a role with NMP. The ABFT techniques will be required when running applications on machines where the strong reliability constraint is relaxed due to subthreshold voltage settings. Computation with very low power is possible~\cite{Gupta2015} and opens a range of new ``killer app'' opportunities. We expect that much of this research will be needed for post-Exascale machines and therefore is an opportunity for EU funding beyond the Horizon 2020 projects.
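The checksum idea behind ABFT~\cite{Huang1984} is simple to sketch for matrix multiplication: checksum rows and columns are carried through the computation, so a single faulty result element can be detected and located. The Python sketch below is our own simplified illustration; production ABFT must also tolerate rounding error and faults in the checksums themselves.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 4
A, B = rng.random((n, n)), rng.random((n, n))

# Append a row of column sums to A and a column of row sums to B.
A_c = np.vstack([A, A.sum(axis=0)])
B_c = np.hstack([B, B.sum(axis=1, keepdims=True)])

C_c = A_c @ B_c          # checksums are preserved by the multiplication
C_c[1, 2] += 0.5         # inject a single fault into the result

C = C_c[:n, :n]
bad_row = np.argmax(np.abs(C.sum(axis=1) - C_c[:n, n]))
bad_col = np.argmax(np.abs(C.sum(axis=0) - C_c[n, :n]))
print("fault located at", (bad_row, bad_col))   # (1, 2)
# The faulty element can now be recomputed from a row or column checksum.
\end{verbatim}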
\chapter{System Software and Programming Environment}
\label{sec-system}
\section{Scope}
The system software is the part of the HPC software stack that is optimized by the HPC vendor and managed by the system's operator, and it includes the Operating System (OS), cluster management tools, distributed file systems, and resource management software (job scheduler). It is essential for an operational HPC system to have an efficient system software stack below the end user's application. The programming environment comprises the development tools used to build the end user's application (compilers, IDEs, debuggers, and performance analysis tools) along with the associated abstractions (e.g. programming models), as well as the runtime components: libraries and runtime systems. Workflow management tools and commonly pre-installed application libraries such as BLAS and LAPACK are also in the scope of this section.
\section{Current Research Trends}
\subsection{Sustained Increases in System Complexity, Specialization, and Heterogeneity}
An important role of the system software and programming environment is to provide the application developers with common standardized abstractions. Such abstractions greatly improve programmer productivity and portability across systems. Today's dominant abstractions include Fortran, C, MPI, POSIX-style file systems, threads and locking, which are all relatively low-level. By 2030, disruptive technologies may have forced the introduction of new and currently unknown low-level abstractions that are very different from these, and this topic is addressed below. Nevertheless, today's abstractions will continue to evolve incrementally and probably increase in their level of abstraction, and will continue to be used well beyond 2030, since scientific codebases have very long lifetimes, on the order of decades. Developers are unwilling to adopt a new programming language or API until they are convinced that it will be supported for a long time.
Continuous CMOS scaling and 3D stacking are pointing towards increasingly complex hardware. High-bandwidth (3D integrated) and non-volatile memories (memristors, etc.) will lead to different memory hierarchies. Increasing performance per watt demands accelerators (many-core, GPU, vector, dataflow, and their successors), heterogeneous processors (big and small cores) and potentially reconfigurable logic (FPGA). The choice of processor cores will likely become increasingly heterogeneous (within a system) and varied (across systems). Certain techniques for energy efficiency (near threshold, DVFS, energy-efficient interconnects) increase timing variability among the processes in an HPC application. Virtualization, if adopted, will also increase timing variability. In addition to hardware complexity, execution environments will also increase in complexity, through interactive use (which will require workloads to adjust to dynamically variable numbers of nodes, cores, memory capacities, and so on).
Hiding or mitigating this increasingly complex and varied hardware requires more and more intelligence across the programming environment. Manual optimization of the data layout, placement, and caching will become uneconomic and time consuming, and will, in any case, soon exceed the abilities of the best human programmers. There needs to be a change in mentality from programming ``heroism'' towards trusting the compiler and runtime system (as in the move from assembler to C/Fortran). Automatic optimization requires advanced techniques in the compiler and runtime system. In the compiler, there is opportunity for both fully automated transformations and the replacement of manual refactoring by automated program transformations under the direction of human programmers (e.g. Halide [14]). Advanced runtime and system software techniques, e.g., task scheduling, load balancing, malleability, caching, energy proportionality are needed.
Increasing complexity also requires an evolution of the incumbent standards such as OpenMP, in order to provide the right programming abstractions. There is as yet no standard language for GPU-style accelerators (CUDA is controlled, and only well supported, by a single vendor, while OpenCL provides portability). Domain-specific languages (e.g. for partial differential equations, linear algebra or stencil computations) allow programmers to describe the problem in terms much closer to the original scientific problem, and they provide greater opportunities for automatic optimization. In general, there is a need to raise the level of abstraction. In some domains (e.g. embedded), prototyping is already done in a high-level environment similar to a DSL (Matlab), but the implementation still needs to be ported to a more efficient language.
A different opinion expressed the need to continue to provide a (simple) cost model, analogous to the way the C programming language corresponds to a von Neumann CPU, so that programmers can retain an intuition about the effect of their code on performance. There is scope for ways to express non-functional properties of software, as is commonly done in embedded systems, in order to trade off various metrics, e.g., performance vs. energy or accuracy vs. cost, both of which may become more relevant with near-threshold computing, approximate computing or accelerators (quantum/neuromorphic).
There is a need for global optimization across all levels of the software stack, including OS, runtime system, application libraries, and application. Examples of global problems that span multiple levels of the software stack include a) support for resiliency (system/application-level checkpointing), b) data management transformations, such as data placement in the memory hierarchy, c) minimising energy (sleeping and controlling DVFS), d) constraining peak power consumption or thermal dissipation, and e) load balancing. Different software levels have different levels of information, and must cooperate to achieve a common objective subject to common constraints, rather than competing or becoming unstable.
\subsection{Complex Application Performance Analysis and Debugging}
Performance analysis and debugging are particularly difficult problems beyond Exascale. The problems are twofold. The first is the enormous number of concurrent threads of execution (millions), which poses a scalability challenge (particularly for performance tools, which must not unduly affect the original performance); in any case, there are too many threads to analyse by hand. Secondly, there is an increasing gap between (anomalous) runtime behaviour and the changes in the source code the user needs to make to fix it, owing to libraries, runtime systems, system software, and potentially disaggregated resources that the application programmer would know little or nothing about.
Spotting anomalous behaviour, such as the root cause of a performance problem or bug, will be a \emph{big data} problem, requiring techniques from data mining, clustering and structure detection, as well as high scalability through summarized data, sampling and filtering and special techniques like spectral analysis. As implied above, the tools need to be interoperable with programming abstractions, so that problems in a loop in a library or dynamic scheduling of tasks can be translated into terms that the programmer can understand.
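As a toy illustration of such automated analysis, the Python sketch below clusters per-process timing profiles and flags processes that sit far from their cluster centre; the synthetic data, the number of clusters, and the outlier threshold are all arbitrary placeholders, not a description of any existing tool.
\begin{verbatim}
# Sketch: flag anomalous processes from per-process timing profiles.
# 'profiles' is a synthetic (n_processes x n_code_regions) array of
# times; the cluster count and threshold are illustrative only.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
profiles = rng.normal(1.0, 0.05, size=(10000, 8))
profiles[42] += 0.5                      # inject one slow process

km = KMeans(n_clusters=4, n_init=10).fit(profiles)
dist = np.linalg.norm(profiles - km.cluster_centers_[km.labels_], axis=1)
threshold = dist.mean() + 4 * dist.std()
print("suspect process ranks:", np.flatnonzero(dist > threshold))
\end{verbatim}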
\section{Potential Implications of Disruptive Technologies}
\subsection{Disruptive Hardware Models of Computation}
Many of the fundamental abstractions used in computing in general, and high-performance computing in particular, have evolved steadily since their introduction decades ago:
\begin{itemize}
\item Fortran programming language (introduced in the 1950s)
\item C programming language (1973)
\item Sockets communications (1983)
\item File system in terms of files, directories, POSIX API (1988)
\item POSIX threads, locks, condition variables, etc. (1988)
\item MPI message passing API (1994)
\item OpenMP (1997)
\end{itemize}
An important question is whether and to what degree these fundamental abstractions may be broken by new technologies, especially disruptive technologies. The above abstractions have stood the test of time and will endure in HPC, given the long lifetimes of scientific codebases. Nevertheless, certain disruptive technologies on the horizon have the potential to challenge certain basic assumptions.
\subsection{Convergence Between Storage and Memory}
All existing computing systems make a strong distinction between memory and storage. Random-access memory is fast (in both bandwidth and latency), byte-addressable and randomly accessible by the processor, has a high cost-per-bit, and its contents are volatile. Storage is slow in both bandwidth and latency, its data is accessed through an I/O device in 512-byte (or larger) blocks, it has a lower cost-per-bit, and the data is persistent.
This (hardware) correspondence between persistence on the one hand and speed, addressability and granularity on the other is the basis for the different roles of memory and storage. Temporary data structures are held in memory, and manipulated using random accesses. Data that must be persistent and/or passed among programs is serialized to a file as a byte stream.
Storage-class memory, including HPE's Persistent Memory, has speed, addressability and cost similar to DRAM, combined with the non-volatility of storage. In the context of HPC, such memory can reduce the cost of checkpointing or eliminate it entirely. There is also work on persistent objects, e.g., NV-Heaps, and further work is needed.
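As a rough sketch of the programming consequences, the following Python fragment emulates checkpointing to byte-addressable persistent memory using an ordinary memory-mapped file as a stand-in (a real system would map storage-class memory, e.g. through a DAX filesystem, in much the same way); the file name and array size are arbitrary.
\begin{verbatim}
# Sketch: checkpoint/restart against byte-addressable persistent
# memory, emulated with a memory-mapped file. File name and sizes
# are illustrative.
import numpy as np

state = np.arange(1_000_000, dtype=np.float64)   # in-DRAM working state

ckpt = np.memmap("checkpoint.bin", dtype=np.float64,
                 mode="w+", shape=state.shape)
ckpt[:] = state        # checkpoint = plain stores, no serialization
ckpt.flush()           # make the copy durable

restored = np.memmap("checkpoint.bin", dtype=np.float64,
                     mode="r", shape=state.shape)
assert np.array_equal(restored, state)   # restart without deserialization
\end{verbatim}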
\subsection{Neuromorphic, Resistive and Quantum Computing}
The adoption of neuromorphic computing, resistive computing and/or quantum computing may have a dramatic effect on the system software and programming model. It is currently unclear whether it will be sufficient to offload tasks, as is done with GPUs, or whether more dramatic changes will be needed.
\section{The MALT90 Survey}
The Millimeter Astronomy Legacy Team Survey at 90 GHz (MALT90) will characterize the physical and chemical conditions of dense molecular clumps associated with high-mass star formation over a wide range of evolutionary states. MALT90 uses the Mopra Spectrometer (MOPS) and the fast mapping capability of the Mopra 22-m radio telescope to map 2000+ candidate dense molecular clumps simultaneously in 16 different lines near 90 GHz. The clumps are drawn from the ATLASGAL \citep{Schuller:2009} survey, and then classified based on their Spitzer morphology to cover a broad range of evolutionary states, from pre-stellar clumps to accreting high-mass protostars and on to H II regions.
Over the first three years, MALT90 has mapped 1912 dense molecular clumps, which is an order of magnitude more sources than previous comparable surveys \citep[e.g.,][]{Shirley:2003, Pirogov:2003, Gibson:2009, Wu:2010}. This large number of sources allows us to divide the sample into sub-samples (based on mass, evolutionary phase, etc.) while retaining a sufficient number of sources in each sub-sample for statistical analysis. In addition, the large number of sources means that MALT90 includes short-lived phases in the evolution of clumps forming massive stars as well as objects that are intrinsically rare for any other reason. These rare objects will provide interesting targets for followup at higher resolution, primarily with ALMA (the Atacama Large Millimeter Array).
\begin{figure}[th]
\begin{center}
\includegraphics[width=12cm, angle=0]{JonathanFoster_Figure1.eps}
\caption{Far-infrared luminosity (a proxy for the star formation rate) versus HCN luminosity (a proxy for the surface density of dense gas) for MALT90 sources, calculated in the same way as for the external galaxies of \citet{Gao:2004}. The two quantities obey the same relationship in the MALT90 sources as in the external galaxies. }
\label{fig:sfrelations}
\end{center}
\end{figure}
\section{Early Science Highlights}
\subsection{Relationship Between Galactic and Extragalactic Star Formation Relations}
The relationship between gas surface density and star formation rates is a topic of considerable interest for extragalactic studies. Recent work has focused on connecting the relations determined in external galaxies with the same relations within the Milky Way \citep{Kennicutt:2012}. One of the tightest extragalactic star formation relations was described by \citet{Gao:2004}, and connects the luminosity of a galaxy in HCN (a tracer of dense gas) with the far-infrared luminosity (assumed to be a tracer of star formation rate). \citet{Wu:2010} verified that this same relationship holds in massive clumps within the Milky Way. Our much larger sample confirms the results of \citet{Wu:2010}; the \citet{Gao:2004} relationship holds over roughly 10 orders of magnitude, encompassing the MALT90 clumps (see Figure~\ref{fig:sfrelations}). One important caveat is that the IRAS survey used to estimate the far-infrared luminosity has limited sensitivity, so that a number of MALT90 sources are not detected by IRAS and are thus not plotted. Investigating the low-luminosity portion of this relationship using other far-infrared surveys is left for future work.
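To illustrate the kind of fit involved, the short Python sketch below performs a least-squares fit of the relation in log-log space; the luminosity arrays are synthetic stand-ins (not MALT90 measurements), and a fitted slope near unity would indicate a linear Gao-type relation.
\begin{verbatim}
# Sketch: fit log L_FIR = a * log L_HCN + b. The arrays are synthetic
# placeholders for the measured luminosities, not MALT90 data.
import numpy as np

rng = np.random.default_rng(1)
log_lhcn = rng.uniform(-1, 9, size=500)   # ~10 dex of dense-gas luminosity
log_lfir = 2.9 + log_lhcn + rng.normal(0, 0.3, size=500)

slope, intercept = np.polyfit(log_lhcn, log_lfir, 1)
scatter = np.std(log_lfir - (slope * log_lhcn + intercept))
print(f"slope={slope:.2f} intercept={intercept:.2f} scatter={scatter:.2f} dex")
\end{verbatim}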
\subsection{Chemical Trends}
We classify MALT90 sources into rough evolutionary states based on their Spitzer morphology; the different evolutionary states show variation in the strengths of molecular lines (see Figure~\ref{fig:spectra}). We have completed a study of the chemical trends in MALT90 clumps observed during the first season \citetext{Hoq et al. submitted}. In particular, we study the ratio of N$_2$H$^+$ to HCO$^{+}$ abundances as a function of the evolutionary state of the clumps, and the ratio of HCN to HNC integrated intensities. There is no statistically significant trend with evolutionary state in the N$_2$H$^+$ to HCO$^{+}$ abundance ratio, although models of chemical evolution would predict significant variation as CO (which decreases the abundance of N$_2$H$^+$ relative to HCO$^{+}$) depletes out of the gas phase during pre-stellar collapse and is then released following ignition of the protostar. The HCN to HNC integrated intensity ratio increases as a function of evolutionary state, as expected from models in which this ratio is temperature dependent.
\begin{figure}[th]
\begin{center}
\includegraphics[width=12cm, angle=0]{JonathanFoster_Figure2.eps}
\caption{Typical spectra for each of the evolutionary states classified in the MALT90 Survey, from the mid-IR dark quiescent clumps (top [blue]) to the protostellar clumps associated with 24 \micron\ point sources (second-from-top [green]), to the bright H II regions (second-to-bottom [red]) and photodissociation regions (PDRs; bottom [purple]). Spectra are shown on the same temperature scale (among classes), and the strongest lines (N$_2$H$^+$, HNC, HCO$^+$, and HCN) are shown scaled down by a factor of 3.5 relative to the other lines. }
\label{fig:spectra}
\end{center}
\end{figure}
\subsection{Distribution of Massive Star Formation in the Milky Way}
We are able to estimate the distance to all MALT90 sources by using kinematic distances and HI self-absorption \citep[based on SGPS;][]{McClure-Griffiths:2005} to break the kinematic distance ambiguity. Preliminary results from this analysis \citetext{Whitaker et al. in preparation} confirm that MALT90 sources are clustered in previously identified spiral arms of the Milky Way. \citet{Dame:2011} have recently discovered the continuation of the Scutum-Centaurus arm into the first quadrant. In the MALT90 survey region ($l > 300^{\circ}$) in the fourth quadrant, each line of sight crosses the Scutum-Centaurus arm twice (or is a line of sight down the tangent). This arm is well mapped on the near side of the Galactic center (at a heliocentric distance of 3-4 kpc); a number of MALT90 sources are identified in the far portion of the Scutum-Centaurus arm (heliocentric distances of 15-20 kpc). Future work on MALT90 distances will include using the near-infrared extinction distance methods tested in \citet{Foster:2012} to provide an independent estimate of the distance to each MALT90 source.
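As a schematic illustration of the kinematic method, the following Python sketch computes the near/far distance pair under the simplifying assumption of a flat rotation curve, using the IAU-standard values $R_0 = 8.5$ kpc and $V_0 = 220$ km s$^{-1}$; the example longitude and LSR velocity are arbitrary, and a real distance estimate additionally requires HI self-absorption to select between the two solutions.
\begin{verbatim}
# Sketch: near/far kinematic distances for a flat rotation curve
# V(R) = V0. Rotation parameters are the IAU standard values; the
# example source longitude and LSR velocity below are arbitrary.
import numpy as np

R0, V0 = 8.5, 220.0   # kpc, km/s

def kinematic_distances(l_deg, v_lsr):
    l = np.radians(l_deg)
    # v_lsr = V0 sin(l) (R0/R - 1)  =>  Galactocentric radius R
    R = R0 * V0 * np.sin(l) / (v_lsr + V0 * np.sin(l))
    disc = R**2 - (R0 * np.sin(l))**2
    if disc < 0:
        return None   # |v_lsr| beyond the tangent-point velocity
    root = np.sqrt(disc)
    return R0 * np.cos(l) - root, R0 * np.cos(l) + root  # near, far

print(kinematic_distances(330.0, -60.0))  # -> (~3.9, ~10.8) kpc
\end{verbatim}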
\subsection{The Brick}
G0.253$+$0.016, also known as the Brick, is a dark, dense molecular cloud near the Galactic Center. This cloud is potentially the progenitor of a Young Massive Cluster such as the Arches \citep{Longmore:2012}. This object, which is part of the MALT90 survey, was the target for successful ALMA Cycle 0 and Cycle 1 proposals. Preliminary reduction of these data show a wealth of dense filaments with a very complicated velocity structure and complicated chemical patterns \citetext{Rathborne et al. in preparation}.
\acknowledgements Operation of the Mopra radio telescope is made possible by funding from the National Astronomical Observatory of Japan, the University of New South Wales, the University of Adelaide, and the Commonwealth of Australia through CSIRO.
\bibliographystyle{asp2010}
\section{Introduction}
Transportation today depends predominantly on motor vehicles, which has created a persistent problem in urban areas: traffic congestion. In 2014, congestion cost urban Americans \$160 billion in substantial delays and extra fuel consumption \citep{schrank20152015}, in addition to the detrimental environmental impact of the increased vehicle emissions. To tackle the challenges of maintaining urban sustainability, bicycle use has been encouraged as an emission-free substitute for motor vehicles and has become an increasing trend in cities around the world~\citep{fishman2014bike}. Bicycling can either replace driving for short-to-medium-distance trips, or provide first- and last-mile connections to other transportation modes to facilitate an intermodal transportation system~\citep{demaio2009bike,shaheen2013public,ma2015bicycle}. Bicycle use is hugely rewarding in social, environmental, economic, and health-related terms for cities, communities and bike users. By reducing vehicle miles traveled (VMT), bicycling alleviates congestion and mitigates the associated environmental damage~\citep{hamilton2017bicycle,wang2017bike}. Bicycling also enhances accessibility to neighborhoods, boosting economic opportunities for local businesses~\citep{buehler2015business}. Moreover, as an active transportation mode, bicycling not only plays a unique role in supporting recreational trips, but also provides substantial public health advantages \citep{shaheen2013public,mueller2015health}. Through the increased physical activity, bicycling has been shown to be quite effective at reducing potential health risks \citep{mueller2015health,fishman2016bikeshare}.
In recent years, many cities have improved their cycling infrastructure, and the adoption of the bicycling mode in transportation has experienced significant growth. According to the American Community Survey (ACS) 2008--2012 \citep{mckenzie2014modes}, commuting by bike increased by about 61\% from 2000, a larger increase than for any other commuting mode. More cities around the world have invested substantially in public bicycle programs or bikesharing systems (BSS) \citep{demaio2009bike,shaheen2013public,o2014mining,fishman2016bikeshare}. By incorporating information technology, BSS allows users to immediately reserve, pick up, and drop off public bikes in a network of docking stations at the affordable cost of a usage or membership fee. Compared to private bikes, BSS not only frees users from ownership and regular maintenance of a bike, but also allows them to bike one way and connect with other transportation modes, offering more flexibility on intermodal trips. In 2017, over 1000 cities offered bikeshare programs and over 4.5 million public bicycles were in use \citep{Meddin2017bike}.
As a new form of mobility that has gradually emerged, BSS has attracted much interest and attention in research. Analyses of surveys and data \citep{romanillos2016big} have been performed by operators and analysts who aim to better understand the system states, the key factors influencing user experience and the effectiveness of BSS, and the role of BSS in transforming future urban mobility. Various aspects of bikesharing have been studied. \cite{o2014mining} classified 38 BSSs based on an analysis of variations in occupancy rate. A few studies provided basic insights concerning the impacts of seasonal weather and temporal trends on bicycling in urban environments \citep{gebhart2014impact,el2017effects}. Regarding bikeshare users, several studies summarized data and surveys on significant differences in user behaviors in terms of demographic characteristics~\citep{zhao2015exploring,bikeshare2016capital,bhat2017spatial}.
Some other research analyzed differences in trip attributes to gain insight into trip purposes, especially between round and one-way trips \citep{zhao2015exploring,Noland2017} and between casual and member users \citep{buck2013bikeshare,Wergin2017,Noland2017}, among other factors \citep{fishman2016bikeshare}.
Mobility and safety are two main factors for road users in making their transportation mode choice, and for urban planners and policy makers in improving transportation systems. Concerning bikeshare mobility, some initial studies have been conducted in the literature. As shown in the survey by \cite{moritz1997survey}, the average speed of bicycle commuting was 14.6 mph. \cite{Wergin2017} estimated the average speed using a small sample of 3,596 trips with GPS tracking data. \citet{Perez2017} studied the impact of mobility from the viewpoint of accessibility in the bike lane network. Sharing the road with motor vehicles, cyclists are vulnerable users who are more likely to be injured when involved in traffic collisions. According to \cite{NHTSA2015TSF}, 818 bicyclists were killed and an additional estimated 45,000 were injured in traffic crashes in the USA in 2015. \cite{lowry2016prioritizing} classified bike roads in a network in terms of stress levels \citep{Rixey2017}. Other studies were performed to understand the crash risk of bikeshare users~\citep{martin2016bikesharing,fishman2016global}.
It has been a common interest of researchers and operators to push BSS towards demand-responsive operations. A few studies examined BSS usage and traffic patterns at different levels of spatio-temporal aggregation to recognize the impacts of contributing indicators on BSS demand \citep{fournier2017sinusoidal,jestico2016mapping,faghih2016incorporating}. \citet{vogel2011understanding} applied clustering-based data mining to explore activity patterns, which revealed imbalances in the spatial distribution of bikes in BSS. \citet{o2014mining} documented the redistribution problem of bikes in BSS from the variations in load factor. Some studies \citep{de2016bike,fishman2016bikeshare,faghih2017empirical} proposed bike rebalancing methods (e.g., trucks and corral services) to help solve the imbalance between demand and supply at bike stations, so as to improve the operational efficiency of BSS, meet the service level agreements (SLA), and guarantee the quality of service (QoS) for users. Finding a more cost-effective way toward sustainable operations requires us to harness the spatio-temporal flow patterns of bikesharing.
However, our understanding of the patterns and characteristics of BSS remains incomplete. For example, the impacts of some operational activities of BSS on the patterns and characteristics have not been investigated. Some fundamental questions remain open even for the known patterns and characteristics of BSS, particularly their implications for potential decision supports toward sustainable transportation in complex urban environments.
As a function that moves people in the spatio-temporal dimensions using shared bikes, a BSS produces its patterns and characteristics from integrated inputs combining many critical factors provided by the stakeholders, including infrastructure, policies, operating activities, management agreements, trip information (such as purpose, route, origin, and destination), and so on. Incorporating these inputs from key stakeholders into BSS modeling and analysis is essential for understanding the patterns and characteristics and for the decision making of improving sustainability in multimodal transportation through BSS.
Some corresponding unsolved questions include, for example: How can these inputs be linked with the patterns and characteristics to better understand their impacts on BSS for decision making? Referring to the inputs from different stakeholders, what do the patterns and characteristics imply for key measures of effectiveness (MoE) and supports to BSS? What roles would the patterns and characteristics extracted from data play in decision making for BSS?
Notice that urban transportation is a complex system and, in contrast, our available data and computational resources are rather limited. It is challenging to promptly provide fully automated data-driven decision making that is realistic and efficient for BSS, although some successful efforts on data-driven decision supports (DDDS) have been made in recent non-BSS transportation research \citep{cesme2017data,yi2018data,zhou2017data}, proving the value of DDDS in offering intelligence and performance monitoring for decision making \citep{power2008understanding}. To unleash the potential of BSS in fostering sustainable multimodal urban transportation, we need to take initial steps to bridge the gap between the current comprehension of the patterns and characteristics of BSS and the needs of BSS modeling and applications for practical and effective data-driven decision supports.
In this paper, we perform a comprehensive data analysis to examine the underlying patterns and characteristics of a BSS embedded in a complex urban environment. Aiming to help improve sustainability in multimodal transportation through BSS, we also investigate the implications of the patterns and characteristics for data-driven decision supports (DDDS). We choose the trip history data from \citet{CaBi2017Data} as the main data source of our case study. The Capital Bikeshare (CaBi) system is a public-private venture operating more than 3,500 bicycles for casual and member users at over 400 stations in the Washington metropolitan area \citep{bikeshare2016capital}. The data contains 14 million anonymous individual bike trips between 2012-2016. Beyond CaBi, we also extract related information from auxiliary data sources, including the Google Maps application program interfaces (APIs) from \citet{GMap2017API}, LEHD Origin-Destination Employment Statistics (LODES) from \cite{LEHD2017LODES}, and the crash data from Open Data DC in \citet{DC2017Data}. We use data visualization, data fusion, data analysis, and statistical analysis to systematically investigate the BSS scheme and examine travel patterns and characteristics on seven important aspects, which are respectively trip demand and flow, operating activities, use and idle times, trip purpose, origin-destination (O-D) flows, mobility, and safety. For each aspect, we explore the results to discuss qualitative and quantitative impacts of the inputs from various stakeholders of BSS on key measures of effectiveness (MoE) such as trip costs, mobility, safety, quality of service, and operational efficiency. We are also interested in revealing new patterns and characteristics of BSS to expand our knowledge of travel behaviors. Finally, we briefly discuss the implications of the patterns and characteristics, and some critical roles for data-driven decision supports arising from the relations between BSS and key stakeholders, to summarize the value of our findings for transforming urban transportation to be more sustainable, where key stakeholders include road users, system operators, and city planners and policymakers.
\section{Data Description}
The major data source for this analysis is the CaBi system. CaBi offers bicycle sharing service in the Washington metropolitan area. Like other BSSs, CaBi consists of a network of docking stations and a fleet of bikes. The bikes can be rented from and returned to any station of the system, which gives users the flexibility to use BSS for both round and one-way trips and for different purposes. The program offers single-trip passes and multiple use-time options for casual and member users. With a membership, trips under 30 minutes are free of charge, and incremental charges are added for the extra use time afterwards. Obviously, this pricing strategy encourages short trips rather than extended long trips.
We consider the CaBi trip history data in the recent 5-year fully operational period between 2012-2016. In this period, the data contains 14 million anonymous individual bike trips between docking stations. Any trips lasting less than 60 seconds or taken for system testing or maintenance are excluded by the data provider. Let the set of stations be $S$, the set of bikes be $B$, and the set of trips be $L$. Each station $s\in S$ has an associated geolocation. Each trip $l \in L$ is described by a tuple $l=<t_o, t_d, s_o, s_d, b, u>$, where $t_o$ and $t_d$ are respectively the start and end times, $s_o, s_d \in S$ are respectively the origin and destination stations, $b \in B$ is the bike used for the trip, and $u \in \{Casual, Member\}$ indicates whether the user was a {\em casual user} (i.e., Single Trip, 24-Hour Pass or 3-Day Pass) or a {\em member user} (i.e., Day Key, 30-Day or Annual Member). The system has kept expanding over the years \citep{bikeshare2016capital}. The number of stations grew from 186 in 2012 to 435 in 2016, the number of bikes in service from 1,746 to 4,449, and the number of trips from 2.05 million in 2012 to 3.33 million in 2016.
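For concreteness, the following Python sketch shows one way the trip tuples defined above can be represented and manipulated; the field values are illustrative, and the class is our own construction rather than part of the CaBi data release.
\begin{verbatim}
# Sketch: a record for the trip tuple l = <t_o, t_d, s_o, s_d, b, u>.
# The example values are illustrative, not actual CaBi records.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Trip:
    t_o: datetime   # start time
    t_d: datetime   # end time
    s_o: str        # origin station
    s_d: str        # destination station
    b: str          # bike identifier
    u: str          # "Casual" or "Member"

    @property
    def duration_s(self) -> float:
        return (self.t_d - self.t_o).total_seconds()

trip = Trip(datetime(2016, 7, 1, 8, 5), datetime(2016, 7, 1, 8, 22),
            "31205", "31227", "W20003", "Member")
print(trip.duration_s / 60, "minutes")   # -> 17.0 minutes
\end{verbatim}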
We also consider the following data sources: (1) Google Maps APIs are used to extract additional trip information between locations;
(2) The 2014 LEHD Origin-Destination Employment Statistics (LODES) dataset is used to extract commuting information (where LODES is produced by \cite{LEHD2017LODES} using an extract of the Longitudinal
Employer Household Dynamics (LEHD) data); and (3) The 2016 Crash Data in Open Data DC is used for bike safety information.
\section{Data Analysis and Results} \label{sec:anaysis_results}
In this section, we conduct a comprehensive data analysis to uncover underlying patterns and characteristics of the system dynamics of the bikeshare network in the Washington DC area. We first show the basic spatial and temporal characteristics of the bikeshare network. Fig. \ref{fig:dc_bikelanes} shows the District of Columbia (DC) area and its cycling infrastructure. Between 2007 and 2013, the total length of bike lanes increased from 30.1 to 69 miles, and the percentage of residents regularly biking to work from 1.68\% to 4.54\%, according to \citet{DDOT2014BikeFact}. Fig.~\ref{fig:BikeState_5Y_Geo_grp_osid} gives the spatial distribution of daily averaged trip counts by origin station. We color each station $s$ according to the value of $\operatorname{log}_{10}(C_s/\operatorname{max}(T_{LD}, TH_{LD}))$, where $C_s \in \{C_{O(s)}, C_{D(s)}\}$ is the total number of trips using the station as either trip origin or destination, $T_{LD}$ is the number of operation days of the station, and $TH_{LD}$ is a threshold value of $T_{LD}$. By default, $TH_{LD}=50$ is used to prevent Fig.~\ref{fig:BikeState_5Y_Geo_grp_osid} from showing statistical noise from stations with very small values of $T_{LD}$. The use frequency varies greatly among bike stations. As shown in Fig.~\ref{fig:BikeState_5Y_Geo_grp_osid}, the use counts of different stations can differ by several orders of magnitude in terms of the number of bike trips using them as origin or destination, demonstrating that most of the total flow is concentrated at a few stations. As displayed by the combination of Figs.~\ref{fig:BikeState_5Y_Geo_grp_osid} and \ref{fig:dc_bikelanes}, the demand at bike stations has a strong spatial correlation with the distribution of bike lanes: the higher the spatial density of bike lanes, the higher the user demand at bike stations. This indicates that cycling infrastructure development can encourage and attract more users to bikeshare in the region.
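The station-level quantity above can be computed directly from the trip records, as in the sketch below; the tiny DataFrame is a hypothetical stand-in for the CaBi data, and $T_{LD}$ is approximated by the number of days on which the station records at least one trip.
\begin{verbatim}
# Sketch: per-station value log10(C_s / max(T_LD, TH_LD)). The tiny
# DataFrame is a stand-in; T_LD is approximated by days with trips.
import numpy as np
import pandas as pd

trips = pd.DataFrame({
    "s_o": ["31205", "31205", "31227"],
    "date": pd.to_datetime(["2016-07-01", "2016-07-02", "2016-07-01"]),
})
TH_LD = 50

counts = trips.groupby("s_o").size()              # C_{O(s)}
op_days = trips.groupby("s_o")["date"].nunique()  # ~ T_LD
print(np.log10(counts / np.maximum(op_days, TH_LD)))
\end{verbatim}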
\begin{figure} [htb]
\centering
\begin{subfigure}{.535\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/dc_bikelanes} \caption{D.C. Area and Its Cycling Infrastructure.}
\label{fig:dc_bikelanes}
\end{subfigure}
\begin{subfigure}{.455\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_Geo_grp_osid_DailyRate2_1} \caption{Trip Rates by Origins.}
\label{fig:BikeState_5Y_Geo_grp_osid}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_hod_Week} \caption{Trip Rates by Weekday and Weekend.}
\label{fig:BikeState_5Y_hod_Week}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_hod_Season} \caption{Trip Rates by Seasons.}
\label{fig:BikeState_5Y_hod_Season}
\end{subfigure}
\setlength{\belowcaptionskip}{-5.6pt}
\caption{Cycling Infrastructure and Basic Temporal and Spatial Characteristics of Bike Trips in the D.C. Area.}
\label{fig:BikeState_5Y_hod}
\end{figure}
Figs. \ref{fig:BikeState_5Y_hod_Week} and~\ref{fig:BikeState_5Y_hod_Season} show the temporal distributions of bikeshare trips averaged over the number of associated days, by weekday/weekend and by season, respectively. Two sharp AM and PM peaks emerge in the commuting pattern of weekdays, while only a single gentle peak appears on weekends; see Fig.~\ref{fig:BikeState_5Y_hod_Week}. The bikeshare trip distributions show a primarily utilitarian pattern \citep{miranda2013classification}. This is consistent with the 2016 CaBi survey \citep{bikeshare2016capital}, in which 65\% of respondents confirmed that commuting was a primary trip purpose for their bikeshare usage. In Fig.~\ref{fig:BikeState_5Y_hod_Season}, the twelve months are classified into four seasons, Spring (March, April, May), Summer (June, July, August), Autumn (September, October, November), and Winter (December, January, February), whose monthly average temperatures in the region are respectively 14$^{\circ}$C, 25.67$^{\circ}$C, 16$^{\circ}$C, and 4.67$^{\circ}$C \citep{DC2017Weather}. The number of trips in Summer is about 2.33 times that in Winter. The significantly lower counts of biking trips in Winter most likely result from the unfavorable weather and road conditions associated with cold temperatures \citep{gebhart2014impact,el2017effects}.
\subsection{Trip Demand and Flow} \label{sec:TripDemand}
The bikeshare network is a self-organized network formed by the demand and supply of bike trips between stations. Let $C_{OD(s_o,s_d)}$ be the number of trips from station $s_o$ to station $s_d$. Fig. \ref{fig:BikeState_5Y_grp_OD_DiffOD_Histogram} gives the distribution of trip counts between stations in the bikeshare network. The trip counts follow a scale-free power-law distribution, a pattern commonly seen in other examples of human mobility \citep{Xie2015,gonzalez2008understanding}, indicating a strong heterogeneity of human movements in this area. The formation of a scale-free network is an important feature. In such a network, maintenance or improvement of a small set of important O-D paths would bring benefits to bikeshare users on a large number of trips.
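The power-law exponent can be estimated from the empirical counts, for example with a simple fit in log-log space as sketched below; the Pareto-distributed stand-in data and the frequency cutoff are illustrative only, and more careful estimators (e.g. maximum likelihood) would be preferred in practice.
\begin{verbatim}
# Sketch: estimate a power-law exponent for O-D trip counts from a
# log-log fit. 'od_counts' is synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(2)
od_counts = np.floor(rng.pareto(1.5, size=50000) + 1).astype(int)

values, freq = np.unique(od_counts, return_counts=True)
mask = freq > 5                       # drop noisy high-count bins
slope, _ = np.polyfit(np.log10(values[mask]), np.log10(freq[mask]), 1)
print(f"estimated power-law exponent: {-slope:.2f}")
\end{verbatim}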
For a station, let $C_{O}$ and $C_{D}$ be respectively the numbers of trips taking it as the trip origin and destination; its approximate demand-supply (D-S) ratio $R_{DS}$ is then defined as
\begin{equation} \label{eq:r_ds}
R_{DS}=(C_{O}+C_\epsilon)/(C_{D}+C_\epsilon),
\end{equation}
where $C_\epsilon \ge 0$ is a constant to stabilize the result. By default, $C_\epsilon=1000$.
Based on $R_{DS}$, a station is classified as one in the demand-supply balance if
\begin{equation} \label{eq:r_ds_balance}
R_{DS} \in [1-R_\delta, 1/(1-R_\delta)],
\end{equation}
where $R_\delta \in (0, 1)$ is a constant in the definition of the balance range. By default, $R_\delta=0.2$.
In Eq. \ref{eq:r_ds}, if $C_\epsilon=0$, we have the basic D-S ratio $R_{DS}^{(0)}=C_O/C_D$. If the sample sizes $C_{O}$ and $C_{D}$ are both very small, $R_{DS}^{(0)}$ could be a quite inaccurate estimate of the actual D-S ratio. In this case, however, the actual D-S ratio would be insignificant, since it would take too much time for the station to reach the completely empty or full state. From Eq. \ref{eq:r_ds}, $R_{DS}$ is a value between 1 and $R_{DS}^{(0)}$. If $C_D \ll C_\epsilon$ and $C_O \ll C_\epsilon$, $R_{DS}$ would be close to 1 and likely in the balance range. To have an $R_{DS}$ outside of the balance range, $C_D$ and/or $C_O$ should be sufficiently large, taking $C_\epsilon$ as a reference value. When $C_D$ and $C_O$ are much larger than $C_\epsilon$, $R_{DS} \to R_{DS}^{(0)}$.
The balance range is determined by $R_\delta$ (see Eq. \ref{eq:r_ds_balance}). As $R_\delta$ approaches 0 or 1, almost none or almost all values of $R_{DS}$, respectively, would be in the balance range. To evaluate whether a selected $R_\delta$ value is suitable in practice, we can apply it to $R_{DS(s)}$ for all $s \in S$ and check the proportion of stations outside of the balance range.
Fig. \ref{fig:BikeState_5Y_Geo_grp_odCount} gives $R_{DS(s)}$ for all $s \in S$ in a sorted form. Here the default value $R_\delta=0.2$ is used. It shows that most stations have the $R_{DS}$ in the demand-supply balance range, i.e. with a demand-supply ratio in the range of $[0.8, 1.25]$; and only a small number of stations have the $R_{DS}$ away from the balance, i.e. $R_{DS}<0.8$ or $R_{DS}>1.25$.
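For reference, the following short Python sketch implements Eqs. \ref{eq:r_ds} and \ref{eq:r_ds_balance} with the default values $C_\epsilon=1000$ and $R_\delta=0.2$; the count pairs in the example are illustrative.
\begin{verbatim}
# Sketch: stabilized demand-supply ratio R_DS and the balance test,
# with the paper's defaults. Example counts are illustrative.
def r_ds(c_o, c_d, c_eps=1000):
    return (c_o + c_eps) / (c_d + c_eps)

def in_balance(r, r_delta=0.2):
    return (1 - r_delta) <= r <= 1 / (1 - r_delta)

for c_o, c_d in [(120, 130), (5200, 3100), (40, 900)]:
    r = r_ds(c_o, c_d)
    print(f"C_O={c_o:5d}  C_D={c_d:5d}  R_DS={r:.3f}  "
          f"balanced={in_balance(r)}")
\end{verbatim}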
\begin{figure} [t]
\centering
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_grp_OD_DiffOD_Histogram} \caption{Flow Count Distribution.}
\label{fig:BikeState_5Y_grp_OD_DiffOD_Histogram}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_Geo_grp_odCount} \caption{Demand-Supply (D-S) Ratio.}
\label{fig:BikeState_5Y_Geo_grp_odCount}
\end{subfigure}
\caption{Flow Count Distribution and Demand-Supply Ratio.}
\label{fig:BikeState_5Y_UserType_hod}
\end{figure}
To further analyze the balancing situation of stations at different times of day (ToD), we focus on the stations corresponding to the two main trip-count peaks in Fig. \ref{fig:BikeState_5Y_hod_Week}. We classify the stations into three groups in terms of the range of the demand-supply ratio $R_{DS}$ (with $R_\delta=0.2$), being $R_{DS}<0.8$ (blue), $R_{DS}\in [0.8, 1.25]$ (white), or $R_{DS}>1.25$ (red), and show their spatial locations in these colors in Figs. \ref{fig:BikeState_5Y_Geo_grp_odCounts_WeekDayAM} and \ref{fig:BikeState_5Y_Geo_grp_odCounts_WeekDayPM}, which correspond respectively to the AM and PM trip-count peaks in Fig. \ref{fig:BikeState_5Y_hod_Week}.
During both peak-time ranges (7-10 AM and 17-20 PM), the bikeshare stations are found to be highly spatially clustered in terms of the color representing the level of the demand-supply ratio $R_{DS}$ (see Figs. \ref{fig:BikeState_5Y_Geo_grp_odCounts_WeekDayAM} and \ref{fig:BikeState_5Y_Geo_grp_odCounts_WeekDayPM}). Interestingly, the spatial patterns of the demand-supply imbalance show a clear inverse relationship between the stations used in the AM and PM peak times. This may be due to the commuting of bikeshare users between their homes and workplaces: they ride from home to work around 7-10 AM and back from work to home around 17-20 PM. If $R_{DS}$ is too large or too small for a station, its bike docks will likely become all empty or all full after some service time, i.e., the station hits its service limit, where it can no longer provide renting (in the empty-docks state) or returning service (in the full-docks state)~\citep{fishman2016bikeshare,bikeshare2016capital}. In this case, rebalancing strategies \citep{de2016bike} are needed to balance the demand and supply at critical bikeshare stations.
\begin{figure} [t]
\centering
\begin{subfigure}{.32\textwidth} \centering \includegraphics[width=.95\linewidth]{img/BikeState_5Y_Geo_grp_odCounts_WeekDayAM_3} \caption{$R_{DS}$ in Weekday AM Peak} \label{fig:BikeState_5Y_Geo_grp_odCounts_WeekDayAM} \end{subfigure}
\begin{subfigure}{.32\textwidth} \centering \includegraphics[width=.95\linewidth]{img/Census_LODES2014_DC_grp_w_geocode_ct_Geo_Tract10_2} \caption{LODES Data by Workplace} \label{fig:Census_LODES2014_DC_grp_w_geocode_ct_Geo_Tract10} \end{subfigure}
\begin{subfigure}{.32\textwidth} \centering \includegraphics[width=.95\linewidth]{img/BikeState_5Y_Geo_grp_odCounts_WeekDayAM_Lodes14w_3} \caption{Overlap of (\ref{fig:BikeState_5Y_Geo_grp_odCounts_WeekDayAM}) and (\ref{fig:Census_LODES2014_DC_grp_w_geocode_ct_Geo_Tract10})}
\label{fig:BikeState_5Y_Geo_grp_odCounts_WeekDayAM_Lodes14w} \end{subfigure}
\begin{subfigure}{.32\textwidth} \centering \includegraphics[width=.95\linewidth]{img/BikeState_5Y_Geo_grp_odCounts_WeekDayPM_3} \caption{$R_{DS}$ in Weekday PM Peak} \label{fig:BikeState_5Y_Geo_grp_odCounts_WeekDayPM} \end{subfigure}
\begin{subfigure}{.32\textwidth} \centering \includegraphics[width=.95\linewidth]{img/Census_LODES2014_DC_grp_h_geocode_ct_Geo_Tract10_2} \caption{LODES Data by Residence} \label{fig:Census_LODES2014_DC_grp_h_geocode_ct_Geo_Tract10} \end{subfigure}
\begin{subfigure}{.32\textwidth} \centering \includegraphics[width=.95\linewidth]{img/BikeState_5Y_Geo_grp_odCounts_WeekDayPM_Lodes14h_3} \caption{Overlap of (\ref{fig:BikeState_5Y_Geo_grp_odCounts_WeekDayPM}) and (\ref{fig:Census_LODES2014_DC_grp_h_geocode_ct_Geo_Tract10})}
\label{fig:BikeState_5Y_Geo_grp_odCounts_WeekDayPM_Lodes14h} \end{subfigure}
\setlength{\belowcaptionskip}{-2.5pt}
\caption{Distributions of Stations Classified in Colors by Demand-Supply Ratios ($R_\delta=0.2$) from CaBi and Distributions of Commuters (in $\operatorname{log}_{10}$) by Workplace and Residence from LODES.}
\label{fig:BikeState_5Y_Geo_grp_osid_ToD_DoW}
\end{figure}
Two other methods, which respectively measure the occupancy rate \citep{o2014mining} and the normalized hourly pickup and return rates \citep{vogel2011understanding}, have also been proposed for identifying the demand-supply imbalance of bikeshare. Although the occupancy rate directly shows the status of docks as is, $R_{DS}$ captures the underlying cause of imbalance. Beyond functioning as an alternative measure of the imbalance, $R_{DS}$ is also useful in simulation-based studies and has an advantage for predictive analytics, which is important as potential impacts can be tested and evaluated to obtain better rebalancing strategies. It is worth noting that using the normalized hourly pickup and return rates separately~\citep{vogel2011understanding} actually removes the information corresponding to $R_{DS}$, and is thus complementary to the method using $R_{DS}$ for measuring the imbalance of bikeshare.
To gain more insight into the underlying driving force of the demand-supply patterns, we use LODES data in our analysis. LODES covers workplace and residence information on wage and salary jobs in private sectors and state and local governments. In Figs. \ref{fig:Census_LODES2014_DC_grp_w_geocode_ct_Geo_Tract10} and \ref{fig:Census_LODES2014_DC_grp_h_geocode_ct_Geo_Tract10}, each census tract $ct$ in DC is colored according to the values of $\operatorname{log}_{10}(C_{w(ct)}/A_{ct})$ and $\operatorname{log}_{10}(C_{h(ct)}/A_{ct})$ respectively, where $A_{ct}$ is the land area of $ct$ in square miles, and $C_{w(ct)}$ and $C_{h(ct)}$ are the worker counts in $ct$ grouped respectively by workplace and residence. Geographically, workplaces are mostly clustered in the mixed-use neighborhoods north and south of the National Mall, while residences are mainly clustered in the neighborhoods north and east of the main workplace region. Most of the workers commute between the workplace and residence regions during weekdays.
To reveal the relationship between commuting behavior and the demand-supply ratio,
Figs. \ref{fig:BikeState_5Y_Geo_grp_odCounts_WeekDayAM_Lodes14w} and \ref{fig:BikeState_5Y_Geo_grp_odCounts_WeekDayPM_Lodes14h} respectively show the overlapped views of Figs. \ref{fig:BikeState_5Y_Geo_grp_odCounts_WeekDayAM} and \ref{fig:Census_LODES2014_DC_grp_w_geocode_ct_Geo_Tract10}, and of Figs. \ref{fig:BikeState_5Y_Geo_grp_odCounts_WeekDayPM} and \ref{fig:Census_LODES2014_DC_grp_h_geocode_ct_Geo_Tract10}. Bike stations with $R_{DS}<0.8$ (in blue) are mostly located in high-density workplace and residence regions during the AM and PM peak periods, respectively. Figs. \ref{fig:BikeState_5Y_Geo_grp_odCounts_WeekDayAM_Lodes14w} and \ref{fig:BikeState_5Y_Geo_grp_odCounts_WeekDayPM_Lodes14h} indicate a strong spatial correlation between commuting behavior and the demand-supply ratio, confirming that commuting between workplace and residence is one of the main factors affecting the demand-supply balance of the bikeshare stations.
\subsection{Operating Activities} \label{sec:OperationalActivity}
Operations are important for maintaining the daily functions of BSS. Here, we analyze and measure the efficiency of bikeshare operating activities through data mining. We take the valet and corral service \citep{de2016bike,CaBi2017Corrals} as a case study of operating activities. If a bikeshare station has valet or corral service, operating staff take care of bike returns at the station and keep the station from becoming fully occupied by removing bikes from docks and storing them in a corralled place. Corral service can lift the serving capacity of a bikeshare station from a limited to a rather high level during its operational period. In addition, the service guarantees that users can return their bikes at their expected stations, without wasting time (or paying an extra fee if the bikeshare charging system is usage-time-based) to frustratingly find an empty dock at neighboring stations. It is especially useful for high-demand stations. In practice, corral service can be provided by bikeshare operators regularly in high-demand seasons or specially for some high-attendance events. We analyze both conditions in our case study.
According to social media and CaBi public reports, CaBi has offered seasonally regular corral service on weekdays since 2015. At the beginning, corral service was provided at only two stations, Stations 31205 and 31227, which both started on May 14, 2015, and ended on November 16 and December 18, 2015, respectively. In 2016, the two stations continued to provide the service between April 4 and December 23. In addition, four more stations started to provide corral service in 2016: Station 31233 provided the service between June 6 and December 23, while Stations 31259, 31243 and 31620 all started the service on June 8 and ended on November 9, November 9, and October 14, respectively. The time-of-day servicing periods were [7AM, 11AM] and [8AM, 12PM] for 2015 and 2016, respectively, meaning that the operating cost was at least $20$ hours of staff work per week for each station.
\afterpage{
\begin{figure} [h]
\centering
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_grp_Week_AMCorral4_31205_3Y} \caption{Station 31205.}
\label{fig:BikeState_5Y_grp_Week_AMCorrals_31205}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_grp_Week_AMCorral4_31227_3Y} \caption{Station 31227.}
\label{fig:BikeState_5Y_grp_Week_AMCorrals_31227}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_grp_Week_AMCorral4_31233_3Y} \caption{Station 31233.}
\label{fig:BikeState_5Y_grp_Week_AMCorrals_31233}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_grp_Week_AMCorral4_31259_3Y} \caption{Station 31259.}
\label{fig:BikeState_5Y_grp_Week_AMCorrals_31259}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_grp_Week_AMCorral4_31243_3Y} \caption{Station 31243.}
\label{fig:BikeState_5Y_grp_Week_AMCorrals_31243}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_grp_Week_AMCorral4_31620_3Y} \caption{Station 31620.}
\label{fig:BikeState_5Y_grp_Week_AMCorrals_31620}
\end{subfigure}
\caption{Results of Regular Bike Corral Service from Data: Comparisons of Weekly Drop-off Counts during [8AM, 12PM] of Weekdays among 2014, 2015, and 2016 at Six Stations Respectively. For Each Station, its Start and End Dates of Corral Service are Marked with a Pair of Vertical Lines for Each Service-Available Year (Dashed for 2015 and Solid for 2016), and its Peak of Weekly Drop-off Counts of 2014 is Marked with a Horizontal Line.}
\label{fig:BikeState_5Y_AMCorral_Regular}
\end{figure}
}
Fig. \ref{fig:BikeState_5Y_AMCorral_Regular} shows the comparisons of weekly drop-off counts during [8AM, 12PM] of weekdays among 2014 (green), 2015 (red), and 2016 (blue) at each of the six stations where seasonally regular bike corral service was provided by CaBi in 2015 and 2016. In the figure, the start and end dates of corral service are marked using vertical lines. Notice that no station had corral service in 2014, and the last four stations had none in 2015. These no-service periods provide baselines of bikeshare capacity against which to study the effects of corral service. For each station without corral service, let $\hat{C}_D^{w}$ be its maximum weekly drop-off count and $\bar{C}_D^{w}$ its maximum weekly capacity; then $\hat{C}_D^{w}$ is an approximate lower bound of $\bar{C}_D^{w}$. As shown in Fig. \ref{fig:BikeState_5Y_AMCorral_Regular}, during the periods without corral service, all the weekly drop-off counts are below $\hat{C}_D^{w} + \delta$, where $\hat{C}_D^{w}$ is the peak weekly drop-off count in 2014 (see the horizontal lines in Fig. \ref{fig:BikeState_5Y_AMCorral_Regular}) and $\delta$ is a small value representing the noise in the data. It turns out that the peak demand in 2014 was sufficiently high for $\hat{C}_D^{w}$ to approach $\bar{C}_D^{w}$. More importantly, Fig. \ref{fig:BikeState_5Y_AMCorral_Regular} reveals that at each of the six stations, the value of $\hat{C}_D^{w}$ with corral service (in 2015 or 2016) is higher than that without the service, showing that corral service can raise the capacity of bikeshare stations. Concerning the six CaBi stations in Fig. \ref{fig:BikeState_5Y_AMCorral_Regular}, it is worth noting that corral service led to significant increases in weekly drop-off counts at four of the stations, but induced only a small change at the other two. In Figs. \ref{fig:BikeState_5Y_grp_Week_AMCorrals_31205} through \ref{fig:BikeState_5Y_grp_Week_AMCorrals_31259}, the increase in weekly counts is notable for most of the weeks with corral service. In Figs. \ref{fig:BikeState_5Y_grp_Week_AMCorrals_31243} and \ref{fig:BikeState_5Y_grp_Week_AMCorrals_31620}, by contrast, weekly drop-off counts rose only in a few of the weeks with corral service, and the extent of the increase is rather small in comparison with those shown in Figs. \ref{fig:BikeState_5Y_grp_Week_AMCorrals_31205} through \ref{fig:BikeState_5Y_grp_Week_AMCorrals_31259}. These results provide a straightforward MoE for corral service at different stations. To improve the overall QoS and operational efficiency, operators of BSS would need to optimize their operating activities. As an example, Fig. \ref{fig:BikeState_5Y_AMCorral_Regular} shows that data analysis can provide bikeshare operators with evidence-based support to help them reallocate resources, such as redistributing corral service among stations for better operational efficiency.
Although the system data does not provide the details of redistribution efforts at these stations, we can gain insight by comparing the redistribution efforts between different years. Here, as an example, we compare the redistribution efforts during [8AM, 12PM] from the start to the end dates of corral service in 2016 with those during the same time-of-day and date-of-year period in 2014. The comparison is performed with the following method. At each station $s$, we collect all the trips that arrived at the station during the considered time period in each year as $L_C$. For each $l_c \in L_C$, the next trip $l_n$ using the same bike is extracted. If the origin station of $l_n$ is the same as $s$, the bike is considered to be in normal use; otherwise, in maintenance. In this context, the maintenance activity would most likely be a redistribution effort moving the bike from $s$ to the origin station of $l_n$. Table \ref{tab:carral_redistribution} summarizes the results for the two years of 2014 and 2016, where $|L_C|$, $|L_{C,N}|$, and $|L_{C,M}|$ are respectively the numbers of total trips arriving at the station, trips in normal use departing from the station, and maintenance trips at the station, and $R_{C,M}=|L_{C,M}|/|L_C|$.
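The classification can be implemented as in the sketch below, which orders each bike's trips by start time and checks whether the next trip departs from the same station; the time-of-day and date filters are omitted for brevity, and the sample trips are illustrative.
\begin{verbatim}
# Sketch: classify arrivals at station s as normal use vs. likely
# redistribution by inspecting the same bike's next trip. The sample
# trips (t_o, t_d, s_o, s_d, b) are illustrative.
from collections import defaultdict, namedtuple

T = namedtuple("T", "t_o t_d s_o s_d b")

def classify_arrivals(trips, s):
    by_bike = defaultdict(list)
    for tr in sorted(trips, key=lambda t: t.t_o):
        by_bike[tr.b].append(tr)
    n_arrive = n_normal = n_maint = 0
    for seq in by_bike.values():
        for k, tr in enumerate(seq):
            if tr.s_d != s:
                continue
            n_arrive += 1
            if k + 1 < len(seq):
                if seq[k + 1].s_o == s:
                    n_normal += 1    # bike picked up where it was left
                else:
                    n_maint += 1     # bike reappeared elsewhere
    return n_arrive, n_normal, n_maint

trips = [T(1, 2, "A", "S", "b1"), T(3, 4, "S", "B", "b1"),
         T(1, 2, "C", "S", "b2"), T(5, 6, "D", "E", "b2")]
print(classify_arrivals(trips, "S"))   # -> (2, 1, 1)
\end{verbatim}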
It is clear that corral service has enabled the bike stations to serve users with more trips. As shown in Table \ref{tab:carral_redistribution}, although the number of departing trips, $|L_{C,N}|$, remained similar in the two years, the number of arriving trips, $|L_C|$, increased significantly from 2014 to 2016 due to corral service (which was used in 2016 but not in 2014). Table \ref{tab:carral_redistribution} also shows that the number of bikes in maintenance (mainly redistribution) trips, $|L_{C,M}|$, increased remarkably in 2016 compared to 2014. This is consistent with the fact that the number of arriving trips, $|L_C|$, is mostly much larger than that of departing trips, $|L_{C,N}|$, at the stations. Essentially, corral service at a station could function as a temporary bike depot during the service period, but the corralled bikes should be cleared/redistributed from the station before the end of service.
Although the redistribution operation might be costly, it could increase the availability and usage of bikes. The corralled bikes could be relocated to stations in supply shortage, so total bike ridership could be increased, leading to an operational gain that would offset the redistribution cost. Some of the increased bikeshare ridership could contribute to reducing VMT, which would offset the environmental impact of using rebalancing trucks in the redistribution operation. In addition, with corral service at the stations, high-capacity trucks could be utilized to redistribute a large number of bikes at once, which would reduce the average redistribution cost and the environmental impact per bike. Another cost-effective operational option could be extending the service to the PM peak period. In this case, corralled bikes would be picked up by users (as indicated by the changes of the time-of-day $R_{DS}$ in Fig. \ref{fig:BikeState_5Y_Geo_grp_osid_ToD_DoW}), rather than requiring redistribution by system operators. Notice that, on the one hand, the extension of corral service could support normal bike trips for more revenue and reduce redistribution efforts for lower costs; on the other hand, the extension of service time would require several extra staff work hours and also reduce bicycle availability, as some bikes would stay in corrals. An optimization should be considered by system operators in order to reach the best solution for corral service operations.
\begin{table*}
\centering \caption{Bike Use and Redistribution Efforts during Regular Corral Service in 2016 and Same Time Periods in 2014.}
\label{tab:carral_redistribution}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline
\multirow{2}{*}{Station} & \multicolumn{4}{|c|}{2014} & \multicolumn{4}{|c|}{2016} \tabularnewline \cline{2-9}
& $|L_C|$ & $|L_{C,N}|$ & $|L_{C,M}|$ & $R_{C,M}$ & $|L_C|$ & $|L_{C,N}|$ & $|L_{C,M}|$ & $R_{C,M}$ \\ \hline
31205 & 5684 & 4581 & 1103 & 19.4\% & 9884 & 4196 & 5688 & 57.5\% \\ \hline
31227 & 4488 & 3085 & 1403 & 31.3\% & 12695 & 3372 & 9323 & 73.4\% \\ \hline
31233 & 3776 & 2838 & 938 & 24.8\% & 8151 & 3181 & 4970 & 61.0\% \\ \hline
31259 & 1273 & 826 & 447 & 35.1\% & 2212 & 543 & 1669 & 75.5\% \\ \hline
31243 & 2480 & 2120 & 360 & 14.5\% & 2554 & 1397 & 1157 & 45.3\% \\ \hline
31620 & 2010 & 1717 & 293 & 14.6\% & 2566 & 1835 & 731 & 28.5\% \\ \hline
Average & 3285.2 & 2527.8 & 757.3 & 23.1\% & 6343.7 & 2420.7 & 3923.0 & 61.8\% \\ \hline
\end{tabular}
\end{table*}
\afterpage{
\begin{figure} [H]
\centering
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_grp_Hour_31209_2012-10-10} \caption{2012-10-10.}
\label{fig:BikeState_5Y_grp_Hour_31209_2012-10-10}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_grp_Hour_31209_2012-10-11} \caption{2012-10-11.}
\label{fig:BikeState_5Y_grp_Hour_31209_2012-10-11}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_grp_Hour_31209_2014-10-03} \caption{2014-10-03.}
\label{fig:BikeState_5Y_grp_Hour_31209_2014-10-03}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_grp_Hour_31209_2014-10-04} \caption{2014-10-04.}
\label{fig:BikeState_5Y_grp_Hour_31209_2014-10-04}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_grp_Hour_31209_2016-10-07} \caption{2016-10-07.}
\label{fig:BikeState_5Y_grp_Hour_31209_2016-10-07}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_grp_Hour_31209_2016-10-13} \caption{2016-10-13.}
\label{fig:BikeState_5Y_grp_Hour_31209_2016-10-13}
\end{subfigure}
\caption{Results of Bike Corral Service for Events from Data: Drop-offs and Pickups at the Nearest Station 31209 during the Six Events of the Washington Nationals Playing at Nationals Park in the 2012, 2014 and 2016 National League Division Series (NLDS). For Each Event, Two Vertical Lines are Used to Mark the Official Start and End Times of the Baseball Game.}
\label{fig:BikeState_5Y_AMCorral_Events}
\end{figure}
}
CaBi has provided bike corral service for some high-attendance events \citep{CaBi2017Corrals}. In the DC area, the baseball games of the Washington Nationals held at Nationals Park often attract high attendance. Let us take these events as an example for analyzing corral service. CaBi Station 31209 is chosen for this analysis, as it is the station nearest to Nationals Park. The data shows that the six days with the highest daily drop-off counts at Station 31209 are 2012-10-10, 2012-10-11, 2014-10-03, 2014-10-04, 2016-10-07 and 2016-10-13, coincident with the dates when the Washington Nationals played their games in the National League Division Series (NLDS) (\url{https://en.wikipedia.org/wiki/National_League_Division_Series}) at Nationals Park in 2012, 2014 and 2016. Fig. \ref{fig:BikeState_5Y_AMCorral_Events} gives hourly drop-off and pickup counts at Station 31209 around the times when the six events were held at Nationals Park, with the start and end times of each game marked by two vertical lines. As shown in the figure, both hourly drop-off and pickup counts were very low at Station 31209 during normal times, but suddenly increased to rather high values around the start and end times of the games.
Concerning the corral service for each event, the operational time with respect to staff work is $T_E^O=T_E^B+T_E^D+T_E^A$, where $T_E^B$ is the period to handle drop-off demand before the event, $T_E^D$ is the duration of the event, and $T_E^A$ is the period to handle pickup demand after the event. For the six events, $T_E^D$ is in the range between 2.92 hours (Fig. \ref{fig:BikeState_5Y_grp_Hour_31209_2012-10-11}) and 6.38 hours (Fig. \ref{fig:BikeState_5Y_grp_Hour_31209_2014-10-04}). The longer $T_E^D$ is, the more costly the corral service is.
Fig. \ref{fig:BikeState_5Y_AMCorral_Events} indicates that corral service significantly increased the capacity of Station 31209 for the high-attendance events. As an operating activity, corral service can be very useful and effective for attracting more car users to adopt the biking transportation mode when attending these events, which not only augments the operating gains of bikeshare, but also relieves the transportation difficulties caused by regional traffic congestion and parking limits.
\subsection{Use and Idle Time} \label{sec:UseTime}
Now we analyze the use and idle times of bikes. For all bikes in $B$, the total operational time $T_{L}$ contains three components, i.e.,
\begin{equation}
T_{L}=T_{U}+T_{N}=T_{U}+T_{I}+T_{M},
\end{equation}
\begin{equation}
T_{U}=\sum_{b\in B}T_{U(b)},~~~~T_{I}=\sum_{b\in B}T_{I(b)},~~~~T_{M}=\sum_{b\in B}T_{M(b)},
\end{equation}
where $T_{U}$ is the total {\em use time} of all bikes ridden by users between stations, $T_{I}$ is the total {\em idle time} of all bikes sitting at docking stations, $T_{M}$ is the total {\em maintenance time} of all bikes taken away from the system for maintenance, and $T_{U(b)}$, $T_{I(b)}$ and $T_{M(b)}$ are respectively the use, idle, and maintenance times of bike $b \in B$. The {\em non-use time} $T_{N}$ is $T_{N}=T_{I}+T_{M}$. The {\em use time ratio} $R_U$ is defined as $R_U=T_{U}/T_{L}$. A higher $R_U$ represents better utilization, meaning that the BSS functions more actively as a transportation mode for users and therefore generates more revenue to support its own development.
For bike $b$, its use time can be directly obtained from individual trips, i.e.,
\begin{equation}
T_{U(b)}=\sum_{l \in L_b} t_{U(l)}=\sum_{l \in L_b} (t_{d(l)}-t_{o(l)}),
\end{equation}
where $L_b$ is the set of trips by bike $b$, and $t_{U(l)}=t_{d(l)}-t_{o(l)}$ is the duration of trip $l \in L_b$.
Let the trips in $L_b$ be ordered by $t_{o(l)}$, and let $l$ and $l'$ denote the $k$th and $(k-1)$th trips in this order, for $k \in [2, |L_b|]$. For bike $b$, the non-use time between consecutive trips is
\begin{equation}
T_{N(b)}=\sum_{l \in L_b^{-}} t_{N(l)}=\sum_{l \in L_b^{-}} (t_{o(l)}-t_{d(l')}),
\end{equation}
where $L_b^{-}$ denotes $L_b$ excluding its first trip (which has no preceding trip), and $t_{N(l)}=t_{o(l)}-t_{d(l')}$ is the trip-level non-use time between trips $l'$ and $l$.
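These per-bike quantities can be computed directly from the trip table once trips are sorted by start time within each bike; the following Python sketch is a minimal illustration, with assumed column names:
\begin{verbatim}
import pandas as pd

# Assumed columns: bike_id, start_time (t_o), end_time (t_d).
trips = pd.read_csv("cabi_trips.csv", parse_dates=["start_time", "end_time"])
trips = trips.sort_values(["bike_id", "start_time"])

# Use time per trip: t_U(l) = t_d(l) - t_o(l).
trips["t_use"] = trips["end_time"] - trips["start_time"]

# Non-use time before each trip: t_N(l) = t_o(l) - t_d(l'), where l' is
# the previous trip of the same bike; the first trip of a bike gets NaT.
prev_end = trips.groupby("bike_id")["end_time"].shift(1)
trips["t_nonuse"] = trips["start_time"] - prev_end

# Per-bike totals T_U(b) and T_N(b).
per_bike = trips.groupby("bike_id")[["t_use", "t_nonuse"]].sum()
print(per_bike.head())
\end{verbatim}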
The trips used for or associated with any maintenance have all been removed from the data source by CaBi. Thus it is impossible for us to precisely separate the idle time $t_{I(l)}$ and maintenance time $t_{M(l)}$ from each $t_{N(l)}$ at the trip level. To extract reasonable samples of $t_{I(l)}$, we use the following conditions: (a) $s_{o(l)} \equiv s_{d(l')}$, and (b) $t_{o(l)}-t_{d(l')}<TH_I$, where $TH_I$ is a threshold value. By default, $TH_I=\infty$. Both assumptions are rational and meet practical conditions. Condition (a) excludes most major maintenance activities: {\em rebalancing} maintenance \citep{de2016bike} always moves bikes to different stations, and other maintenance activities are also highly likely to return a bike to a different station in $S$. Condition (b) accounts for the remaining maintenance activities, such as bike repair, that may take a significant amount of time to complete.
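Continuing the sketch above, conditions (a) and (b) translate into two boolean filters; the station columns are again illustrative assumptions:
\begin{verbatim}
# Continues the previous sketch; assumed columns: start_station (s_o)
# and end_station (s_d).
prev_end_station = trips.groupby("bike_id")["end_station"].shift(1)

# Condition (a): the bike starts where it was last dropped off.
same_station = trips["start_station"] == prev_end_station

# Condition (b): the gap is below the threshold TH_I (24 hours here;
# TH_I = infinity simply drops this filter). NaT gaps compare as False.
TH_I = pd.Timedelta(hours=24)
below_threshold = trips["t_nonuse"] < TH_I

idle_samples = trips.loc[same_station & below_threshold, "t_nonuse"]
print(idle_samples.describe())
\end{verbatim}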
Figs. \ref{fig:BikeState_5Y_grp_Time_PDF} and \ref{fig:BikeState_5Y_grp_idletime_PDF} show the empirical probability density functions (PDF) for the use time $t_{U(l)}$ of each trip and for the idle time $t_{I(l)}$ before each trip, respectively, $\forall l \in L$. Correspondingly, Figs. \ref{fig:BikeState_5Y_grp_Time_PDF_CDF} and \ref{fig:BikeState_5Y_grp_idletime_PDF_CDF} give the cumulative distribution functions (CDF). As shown in Figs. \ref{fig:BikeState_5Y_grp_Time_PDF} and \ref{fig:BikeState_5Y_grp_idletime_PDF}, both empirical PDFs can be well fitted with the lognormal distribution, i.e.,
\begin{equation}
f(x \mid \mu, \sigma)=\left(x\sigma\sqrt{2\pi}\right)^{-1}\exp\!\left(-\frac{\left(\ln x-\mu\right)^2}{2\sigma^2}\right),
\end{equation}
where the location and shape parameters $\mu$ and $\sigma$ are estimated using cooperative group optimization (CGO) \citep{xie2014cooperative} to minimize the least-squares error. For the two PDFs, the parameters ($\mu$, $\sigma$) are respectively (2.3911, 0.7641) and (4.2167, 2.2145), and the root-mean-square errors (RMSE) are respectively 1.89E-4 and 4.47E-5. The lognormal model allows us to describe an empirical PDF with two parameters, helps model the stochastic use time in simulations, and robustly estimates the mean value as ${\displaystyle \exp(\mu +\sigma ^{2}/2)}$ even if there are outliers in the data.
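The CGO optimizer itself is beyond the scope of this paper; as a stand-in, the same least-squares fit of the lognormal PDF to an empirical histogram can be sketched with \texttt{scipy.optimize.curve\_fit} (the synthetic data below only makes the example self-contained):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def lognormal_pdf(x, mu, sigma):
    # f(x | mu, sigma) as given in the text.
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) \
           / (x * sigma * np.sqrt(2 * np.pi))

# Stand-in for the per-trip use times (minutes).
rng = np.random.default_rng(0)
use_minutes = rng.lognormal(mean=2.39, sigma=0.76, size=100_000)

# Build the empirical PDF from a histogram, then fit (mu, sigma).
density, edges = np.histogram(use_minutes, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
(mu, sigma), _ = curve_fit(lognormal_pdf, centers, density, p0=(1.0, 1.0))

print(mu, sigma, np.exp(mu + sigma ** 2 / 2))  # robust mean estimate
\end{verbatim}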
\begin{figure} [ht]
\centering
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_grp_Time_PDF+Fitting} \caption{Use Time Distribution: PDF}
\label{fig:BikeState_5Y_grp_Time_PDF}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_grp_Time_PDF_CDF} \caption{Use Time Distribution: CDF}
\label{fig:BikeState_5Y_grp_Time_PDF_CDF}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_grp_idletime_PDF+Fitting} \caption{Idle Time Distribution: PDF}
\label{fig:BikeState_5Y_grp_idletime_PDF}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_grp_idletime_More_PDF_CDF} \caption{Idle Time Distribution: CDF}
\label{fig:BikeState_5Y_grp_idletime_PDF_CDF}
\end{subfigure}
\caption{Empirical Distributions of Use and Idle Times.}
\label{fig:BikeState_5Y_User_Idle_Time_PDF_CDF}
\end{figure}
As shown in Fig. \ref{fig:BikeState_5Y_grp_Time_PDF_CDF}, the median use time is 10.57 minutes, 90.01\% of CaBi bike trips have a use time below the 30-minute limit that avoids additional charges, and only around 1.64\% of trips have a use time over 2 hours. As shown in Fig. \ref{fig:BikeState_5Y_grp_idletime_PDF_CDF}, the median idle time is 1.09 hours, and 2.90\% of idle-time samples exceed 24 hours when $TH_I=\infty$ is used. There is only a negligible difference in the median idle time between $TH_I=\infty$ and $TH_I \in \{24, 48\}$ hours.
From the whole dataset, the total use and non-use times, i.e., $T_{U}$ and $T_{N}$, are respectively 467.02 and 13563.77 bike$\cdot$years. The use time ratio $R_U$ is 3.33\%, meaning that there is considerable room to improve the operational efficiency of the BSS, and the improvement could be significant. Rebalancing maintenance contributes to idle time, but it also increases use time by reducing the imbalance between demand and supply at stations, as described in Section \ref{sec:TripDemand}. Advanced algorithms may be implemented to optimize a practical rebalancing strategy for better tackling the imbalance issue, enabling a BSS to serve more users while reducing maintenance cost. To address the imbalance between demand and supply of bikeshare, the $R_{DS}$ values can be used as fast-and-frugal heuristic information \citep{gigerenzer1999simple} to identify imbalanced regions at different times of day, as shown in Fig.~\ref{fig:BikeState_5Y_Geo_grp_osid_ToD_DoW}.
\subsection{Trip Purpose} \label{sec:UserTripTypes}
In general, bike users take two basic types of trips in terms of trip purpose: utilitarian or recreational~\citep{miranda2013classification}. If the primary purpose of a trip is utilitarian, for example, commuting to work, the bike user is more likely to prefer a shorter travel time to the destination and normally makes no intermediate stop~\citep{conley2016view}. Commuting trips show typical AM/PM volume peaks on weekdays and lower volume on weekends. In contrast, if the primary purpose of a trip is recreational, for example, riding in parkland, the user pays more attention to attributes pertinent to comfort rather than to travel time. Recreational trips often have higher volume on weekends, and most are taken around areas with open or recreational space.
The CaBi trip data contains no direct record identifying the purpose of a trip. We therefore study whether trip purpose can be estimated from typical trip features, in terms of trip and user types. We classify trips into two types based on their origin and destination stations, $s_o$ and $s_d$. One type is the {\em O-O trip} \citep{zhao2015exploring}, or ``loop'' trip \citep{Noland2017}, where $s_o=s_d$. The other type is the {\em O-D trip}, where $s_o \neq s_d$. In the collected data, O-O trips account for 4.03\% of all trips. All commuting trips are O-D trips. In addition, two user types are defined in the CaBi data: {\em casual users} and {\em member users} \citep{buck2013bikeshare,Noland2017,Wergin2017}. Trips taken by casual users account for 20.64\% of all trips.
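The two shares can be computed directly from the trip table; the following sketch assumes illustrative column names and user-type labels:
\begin{verbatim}
import pandas as pd

# Assumed columns: start_station, end_station, user_type.
trips = pd.read_csv("cabi_trips.csv")

oo_share = (trips["start_station"] == trips["end_station"]).mean()
casual_share = (trips["user_type"] == "Casual").mean()
print(f"O-O trips: {oo_share:.2%}, casual-user trips: {casual_share:.2%}")
\end{verbatim}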
\begin{figure} [p]
\centering
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_SameOD_hod_Week} \caption{O-O Trips.}
\label{fig:BikeState_5Y_SameOD_hod_Week}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_DiffOD_hod_Week} \caption{O-D Trips.}
\label{fig:BikeState_5Y_DiffOD_hod_Week}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_CasualUser_hod_Week} \caption{Casual Users.}
\label{fig:BikeState_5Y_CasualUser_hod_Week}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_MemberUser_hod_Week} \caption{Member Users.}
\label{fig:BikeState_5Y_MemberUser_hod_Week}
\end{subfigure}
\caption{Average Hour-of-Day Trip Counts by Trip and User Types.}
\label{fig:BikeState_5Y_Trip_User_Type_hod}
\end{figure}
\begin{figure} [p]
\centering
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_OD_grp_Time_SameOD_UserType_PDF} \caption{Empirical PDF for O-O Trips.}
\label{fig:BikeState_5Y_OD_grp_Time_SameOD_UserType_PDF}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_OD_grp_Time_SameOD_UserType_PDF_CDF} \caption{Empirical CDF for O-O Trips.}
\label{fig:BikeState_5Y_OD_grp_Time_SameOD_UserType_PDF_CDF}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_OD_grp_Time_DiffOD_UserType_PDF} \caption{Empirical PDF for O-D Trips.}
\label{fig:BikeState_5Y_OD_grp_Time_DiffOD_UserType_PDF}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_OD_grp_Time_DiffOD_UserType_PDF_CDF} \caption{Empirical CDF for O-D Trips.}
\label{fig:BikeState_5Y_OD_grp_Time_DiffOD_UserType_PDF_CDF}
\end{subfigure}
\caption{Empirical Distributions of Use Time by Trip and User Types.}
\label{fig:BikeState_5Y_UserTime_Trip_User_Type_PDF_CDF}
\end{figure}
Fig. \ref{fig:BikeState_5Y_Trip_User_Type_hod} compares the temporal distributions of hourly averaged trip counts between weekdays and weekends for the different trip and user types. Both O-O trip users and casual users take more trips on weekends than on weekdays, expressing a typical recreational pattern \citep{miranda2013classification}, as shown in Figs.~\ref{fig:BikeState_5Y_SameOD_hod_Week} and \ref{fig:BikeState_5Y_CasualUser_hod_Week}. Consistent with previous studies, this pattern is fairly general for recreational trips. As indicated by the study on bikesharing data in New York \citep{Noland2017}, casual users are more likely to take recreational trips, including loop trips. O-D trip users and member users both have two peak trip periods (AM and PM) on weekdays but a single peak period on weekends, revealing a typical commuting pattern as shown in Figs.~\ref{fig:BikeState_5Y_DiffOD_hod_Week} and \ref{fig:BikeState_5Y_MemberUser_hod_Week}.
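The weekday/weekend hour-of-day profiles underlying Fig. \ref{fig:BikeState_5Y_Trip_User_Type_hod} can be sketched as follows (column names assumed; one profile per day type, averaged over the number of days of that type):
\begin{verbatim}
import pandas as pd

trips = pd.read_csv("cabi_trips.csv", parse_dates=["start_time"])
trips["hour"] = trips["start_time"].dt.hour
trips["is_weekend"] = trips["start_time"].dt.dayofweek >= 5
trips["date"] = trips["start_time"].dt.date

# Average hourly counts = totals per (day type, hour) / number of days.
counts = trips.groupby(["is_weekend", "hour"]).size()
n_days = trips.groupby("is_weekend")["date"].nunique()
profile = counts.div(n_days, level="is_weekend")
print(profile.unstack(level=0).head())
\end{verbatim}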
Fig. \ref{fig:BikeState_5Y_UserTime_Trip_User_Type_PDF_CDF} compares the distributions (empirical PDFs and CDFs) of bikeshare use time between casual and member users for the two trip types. In Fig. \ref{fig:BikeState_5Y_OD_grp_Time_SameOD_UserType_PDF}, at very small use times ($t_{U} \le 2$ minutes), the distribution of O-O trips has a sharp peak for member users (the blue curve), likely because member users quickly return a rented bike to the docking station when its condition is unsatisfactory. For member users, the use time distribution of O-D trips shows a higher peak value than that of O-O trips (compare the blue curves in Figs. \ref{fig:BikeState_5Y_OD_grp_Time_SameOD_UserType_PDF} and \ref{fig:BikeState_5Y_OD_grp_Time_DiffOD_UserType_PDF}). For O-D trips, member users tend to have a shorter use time than casual users, as indicated by the comparison in Fig.~\ref{fig:BikeState_5Y_OD_grp_Time_DiffOD_UserType_PDF}.
These features are clearly shown in Figs. \ref{fig:BikeState_5Y_OD_grp_Time_SameOD_UserType_PDF_CDF} and \ref{fig:BikeState_5Y_OD_grp_Time_DiffOD_UserType_PDF_CDF}, where the CDF curve of member users lies to the left of that of casual users for both O-O and O-D trips, and the CDF curve of O-D trips lies to the left of that of O-O trips for both member and casual users. The median use times for member and casual users are respectively 10.50 and 59.31 minutes for O-O trips, and respectively 8.90 and 22.15 minutes for O-D trips. Trips below 30 minutes account for 84.84\% (O-O) and 97.64\% (O-D) of trips by member users, versus 28.90\% and 67.63\%, respectively, for casual users. Combining these comparisons with the results in Fig. \ref{fig:BikeState_5Y_Trip_User_Type_hod}, we can conclude that member users prefer to use bikeshare for utilitarian trips, while casual users prefer recreational trips.
\subsection{O-D Flows in Bikesharing Network} \label{sec:odFlows}
To better understand the bikesharing network, we analyze the top O-D pairs in the ranking of the highest O-D flows by casual and member users respectively (see Fig. \ref{fig:BikeState_5Y_CasualUser_TopRoutes}). As shown in Fig. \ref{fig:BikeState_5Y_Geo_grp_osid_DailyRate_Top50_AllUsers_Top6_2}, the top 50 O-D pairs of casual users form one main cluster in and around a famous recreational area --- the National Mall and Memorial Parks, while those of member users form two clusters at the east and north neighborhoods of the central business district. The top 6 origin stations in the ranking of the highest O-D flows fall into three clusters (see the black dots in Fig. \ref{fig:BikeState_5Y_Geo_grp_osid_DailyRate_Top50_AllUsers_Top6_2}). Among the 6 origin stations, one in the east cluster is Union Station, three in the north cluster are near the triangle of Dupont Circle, Logan Circle, and Thomas Circle Park, and two are in the National Mall area. To analyze how the bikeshare network structure grows with the number of O-D links, we show the top 50, 500, and 5000 highest-ranking O-D pairs in O-D flows by casual users (see Fig. \ref{fig:BikeState_5Y_Geo_grp_OD_DiffOD_CasualUser_Sorted_50,500,5000_4}) and by member users (see Fig.~\ref{fig:BikeState_5Y_Geo_grp_OD_DiffOD_MemberUser_Sorted_50,500,5000_4}). Fig.~\ref{fig:BikeState_5Y_Geo_grp_OD_DiffOD_MemberUser_Sorted_50,500,5000_4} indicates that the trip network formed by member users covers the commuter neighborhoods --- areas that feature high densities of workplaces (see Fig.~\ref{fig:Census_LODES2014_DC_grp_w_geocode_ct_Geo_Tract10}) or homes (see Fig.~\ref{fig:Census_LODES2014_DC_grp_h_geocode_ct_Geo_Tract10}).
In contrast, casual users take more long-distance trips (see Fig. \ref{fig:BikeState_5Y_Geo_grp_OD_DiffOD_CasualUser_Sorted_50,500,5000_4}).
From Figs. \ref{fig:BikeState_5Y_Geo_grp_OD_DiffOD_CasualUser_Sorted_50,500,5000_4} and \ref{fig:BikeState_5Y_Geo_grp_OD_DiffOD_MemberUser_Sorted_50,500,5000_4}, a community structure can be clearly identified, with three clusters of densely connected O-D links. The formation of such a community structure is quite common in real-world self-organized networks \citep{girvan2002community}. Polycentricity at the metropolitan scale is an interesting feature of modern urban landscapes \citep{anas1998urban}.
Most users prefer short-time bike trips (as shown in Fig.~\ref{fig:BikeState_5Y_grp_Time_PDF}); thus most regions far from the core areas of existing bikeshare clusters are not reached by users, even though some of these regions have workplace or residence densities high enough to generate large trip demand (see Figs. \ref{fig:Census_LODES2014_DC_grp_w_geocode_ct_Geo_Tract10} and \ref{fig:Census_LODES2014_DC_grp_h_geocode_ct_Geo_Tract10} for the densities of workplace and residence, respectively). To increase bikeshare ridership in these regions, it is important to foster the formation of new clusters with densely connected O-D links in the bikesharing network, which may be considered by system operators when new bikeshare stations are added, or by city planners when the built environment is improved.
\begin{figure} [htb]
\centering
\begin{subfigure}{.32\textwidth} \centering \includegraphics[width=.95\linewidth]{img/BikeState_5Y_Geo_grp_osid_DailyRate_Top50_AllUsers_Top6_2} \caption{Top 50 O-D Pairs for Casual (Blue) and Member (Red) Users} \label{fig:BikeState_5Y_Geo_grp_osid_DailyRate_Top50_AllUsers_Top6_2} \end{subfigure}
\begin{subfigure}{.32\textwidth} \centering \includegraphics[width=.95\linewidth]{img/BikeState_5Y_Geo_grp_OD_DiffOD_CasualUser_Sorted_50,500,5000_4} \caption{Top 50, 500, and 5000 O-D Pairs for Casual Users} \label{fig:BikeState_5Y_Geo_grp_OD_DiffOD_CasualUser_Sorted_50,500,5000_4} \end{subfigure}
\begin{subfigure}{.32\textwidth} \centering \includegraphics[width=.95\linewidth]{img/BikeState_5Y_Geo_grp_OD_DiffOD_MemberUser_Sorted_50,500,5000_4} \caption{Top 50, 500, and 5000 O-D Pairs for Member Users}
\label{fig:BikeState_5Y_Geo_grp_OD_DiffOD_MemberUser_Sorted_50,500,5000_4} \end{subfigure}
\caption{Top O-D Pairs in the Ranking of the Highest O-D Flows by Casual and Member Users.}
\label{fig:BikeState_5Y_CasualUser_TopRoutes}
\end{figure}
\begin{figure} [p]
\centering
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31613_31619_MemberUser_PDF_Fitting}}} \caption{31613 $\rightarrow$ 31619} \label{fig:BikeState_5Y_grp_Time_31613_31619_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31619_31613_MemberUser_PDF_Fitting}}} \caption{31619 $\rightarrow$ 31613} \label{fig:BikeState_5Y_grp_Time_31619_31613_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31623_31631_MemberUser_PDF_Fitting}}} \caption{31623 $\rightarrow$ 31631} \label{fig:BikeState_5Y_grp_Time_31623_31631_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31104_31121_MemberUser_PDF_Fitting}}} \caption{31104 $\rightarrow$ 31121} \label{fig:BikeState_5Y_grp_Time_31104_31121_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31229_31200_MemberUser_PDF_Fitting}}} \caption{31229 $\rightarrow$ 31200} \label{fig:BikeState_5Y_grp_Time_31229_31200_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31121_31104_MemberUser_PDF_Fitting}}} \caption{31121 $\rightarrow$ 31104} \label{fig:BikeState_5Y_grp_Time_31121_31104_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31631_31623_MemberUser_PDF_Fitting}}} \caption{31631 $\rightarrow$ 31623} \label{fig:BikeState_5Y_grp_Time_31631_31623_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31622_31623_MemberUser_PDF_Fitting}}} \caption{31622 $\rightarrow$ 31623} \label{fig:BikeState_5Y_grp_Time_31622_31623_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31623_31614_MemberUser_PDF_Fitting}}} \caption{31623 $\rightarrow$ 31614} \label{fig:BikeState_5Y_grp_Time_31623_31614_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31201_31200_MemberUser_PDF_Fitting}}} \caption{31201 $\rightarrow$ 31200} \label{fig:BikeState_5Y_grp_Time_31201_31200_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31200_31201_MemberUser_PDF_Fitting}}} \caption{31200 $\rightarrow$ 31201} \label{fig:BikeState_5Y_grp_Time_31200_31201_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31614_31623_MemberUser_PDF_Fitting}}} \caption{31614 $\rightarrow$ 31623} \label{fig:BikeState_5Y_grp_Time_31614_31623_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31623_31622_MemberUser_PDF_Fitting}}} \caption{31623 $\rightarrow$ 31622} \label{fig:BikeState_5Y_grp_Time_31623_31622_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31612_31623_MemberUser_PDF_Fitting}}} \caption{31612 $\rightarrow$ 31623} \label{fig:BikeState_5Y_grp_Time_31612_31623_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31200_31229_MemberUser_PDF_Fitting}}} \caption{31200 $\rightarrow$ 31229} \label{fig:BikeState_5Y_grp_Time_31200_31229_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31201_31229_MemberUser_PDF_Fitting}}} \caption{31201 $\rightarrow$ 31229} \label{fig:BikeState_5Y_grp_Time_31201_31229_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31623_31612_MemberUser_PDF_Fitting}}} \caption{31623 $\rightarrow$ 31612} \label{fig:BikeState_5Y_grp_Time_31623_31612_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31611_31623_MemberUser_PDF_Fitting}}} \caption{31611 $\rightarrow$ 31623} \label{fig:BikeState_5Y_grp_Time_31611_31623_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31623_31611_MemberUser_PDF_Fitting}}} \caption{31623 $\rightarrow$ 31611} \label{fig:BikeState_5Y_grp_Time_31623_31611_MemberUser_PDF_Fitting} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_grp_Time_31613_31622_MemberUser_PDF_Fitting}}} \caption{31613 $\rightarrow$ 31622} \label{fig:BikeState_5Y_grp_Time_31613_31622_MemberUser_PDF_Fitting} \end{subfigure}
\caption{Empirical PDFs of Bikeshare Use Time for the Top 20 Highest-Ranking O-D Pairs in O-D Flows by Member Users, where Empirical Data (Blue Solid Line) are Fitted with Lognormal Distributions (Red Dash Line).}
\label{fig:BikeState_5Y_DiffOD_Time_MemberUser_PDF}
\end{figure}
\begin{figure} [p]
\centering
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31613_31619_MemberUser_hod_Week}}} \caption{31613 $\rightarrow$ 31619} \label{fig:BikeState_5Y_31613_31619_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31619_31613_MemberUser_hod_Week}}} \caption{31619 $\rightarrow$ 31613} \label{fig:BikeState_5Y_31619_31613_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31623_31631_MemberUser_hod_Week}}} \caption{31623 $\rightarrow$ 31631} \label{fig:BikeState_5Y_31623_31631_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31104_31121_MemberUser_hod_Week}}} \caption{31104 $\rightarrow$ 31121} \label{fig:BikeState_5Y_31104_31121_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31229_31200_MemberUser_hod_Week}}} \caption{31229 $\rightarrow$ 31200} \label{fig:BikeState_5Y_31229_31200_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31121_31104_MemberUser_hod_Week}}} \caption{31121 $\rightarrow$ 31104} \label{fig:BikeState_5Y_31121_31104_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31631_31623_MemberUser_hod_Week}}} \caption{31631 $\rightarrow$ 31623} \label{fig:BikeState_5Y_31631_31623_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31622_31623_MemberUser_hod_Week}}} \caption{31622 $\rightarrow$ 31623} \label{fig:BikeState_5Y_31622_31623_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31623_31614_MemberUser_hod_Week}}} \caption{31623 $\rightarrow$ 31614} \label{fig:BikeState_5Y_31623_31614_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31201_31200_MemberUser_hod_Week}}} \caption{31201 $\rightarrow$ 31200} \label{fig:BikeState_5Y_31201_31200_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31200_31201_MemberUser_hod_Week}}} \caption{31200 $\rightarrow$ 31201} \label{fig:BikeState_5Y_31200_31201_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31614_31623_MemberUser_hod_Week}}} \caption{31614 $\rightarrow$ 31623} \label{fig:BikeState_5Y_31614_31623_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31623_31622_MemberUser_hod_Week}}} \caption{31623 $\rightarrow$ 31622} \label{fig:BikeState_5Y_31623_31622_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31612_31623_MemberUser_hod_Week}}} \caption{31612 $\rightarrow$ 31623} \label{fig:BikeState_5Y_31612_31623_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31200_31229_MemberUser_hod_Week}}} \caption{31200 $\rightarrow$ 31229} \label{fig:BikeState_5Y_31200_31229_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31201_31229_MemberUser_hod_Week}}} \caption{31201 $\rightarrow$ 31229} \label{fig:BikeState_5Y_31201_31229_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31623_31612_MemberUser_hod_Week}}} \caption{31623 $\rightarrow$ 31612} \label{fig:BikeState_5Y_31623_31612_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31611_31623_MemberUser_hod_Week}}} \caption{31611 $\rightarrow$ 31623} \label{fig:BikeState_5Y_31611_31623_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31623_31611_MemberUser_hod_Week}}} \caption{31623 $\rightarrow$ 31611} \label{fig:BikeState_5Y_31623_31611_MemberUser_hod_Week} \end{subfigure}
\begin{subfigure}{.19\textwidth} \centering \includegraphics[width=.95\linewidth]{img/{{BikeState_5Y_31613_31622_MemberUser_hod_Week}}} \caption{31613 $\rightarrow$ 31622} \label{fig:BikeState_5Y_31613_31622_MemberUser_hod_Week} \end{subfigure}
\caption{Comparison of the Trip Counts between Weekdays (Blue) and Weekends (Red) for the Top 20 Highest-Ranking O-D Pairs in O-D Flows by Member Users. The Stations 31121, 31200, 31613, and 31623 are Respectively Adjacent to Four Transportation Hubs (Metrorail and Railway Stations), i.e., Woodley Park, Dupont Circle, Eastern Market, and Union Station.}
\label{fig:BikeState_5Y_DiffOD_MemberUser_hod_Week}
\end{figure}
Fig.~\ref{fig:BikeState_5Y_DiffOD_Time_MemberUser_PDF} shows the empirical PDFs of use time for the top 20 O-D pairs in the ranking of the highest O-D flows by member users. Fig.~\ref{fig:BikeState_5Y_DiffOD_MemberUser_hod_Week} compares the average daily trip counts between weekdays (blue) and weekends (red) for the same top 20 O-D pairs. Although the use time distributions of member users for all top 20 O-D pairs exhibit a similar pattern consistent with a utilitarian purpose --- a single sharp peak that can be fitted by the lognormal distribution (see Fig.~\ref{fig:BikeState_5Y_DiffOD_Time_MemberUser_PDF}) --- their daily trip counts show diverse patterns (see Fig.~\ref{fig:BikeState_5Y_DiffOD_MemberUser_hod_Week}, especially the blue weekday curves for the same trip purpose). These patterns provide additional information on the land use near origin and destination stations (such as workplace or high-density residence) as well as on the more detailed trip purpose of an O-D pair $(s_o, s_d)$ (such as commuting or non-commuting).
Here we present a few decision rules based on associated knowledge. First, a dominant AM or PM peak of trip counts on weekdays indicates that the trip purpose between the O-D pair is commuting. Let H, W, and T respectively denote a residence (home), workplace, and transportation hub. Basic commuting trip segments include H $\rightarrow$ W, H $\rightarrow$ T, T $\rightarrow$ T, and T $\rightarrow$ W during the AM period, and W $\rightarrow$ H, W $\rightarrow$ T, T $\rightarrow$ T, and T $\rightarrow$ H during the PM period. If one and only one station (either $s_o$ or $s_d$) is a T, we can define the following {\em commuting-related} rule ($R_C$): given a dominant AM peak, $s_o$ is H if $s_d$ is T, and $s_d$ is W if $s_o$ is T; given a dominant PM peak, $s_o$ is W if $s_d$ is T, and $s_d$ is H if $s_o$ is T.
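The $R_C$ rule is mechanical enough to be stated as code; the following Python function is a direct transcription (the boolean inputs encode which endpoint is a hub, and the dominant-peak label is assumed to be detected beforehand):
\begin{verbatim}
def commuting_rule(s_o_is_hub, s_d_is_hub, dominant_peak):
    """R_C: infer the land use (H = residence, W = workplace) of the
    non-hub station of an O-D pair, given that exactly one station is
    a hub (T) and the weekday profile has a dominant 'AM'/'PM' peak."""
    assert s_o_is_hub != s_d_is_hub, "exactly one endpoint must be a hub"
    if dominant_peak == "AM":
        return "s_o is H" if s_d_is_hub else "s_d is W"
    if dominant_peak == "PM":
        return "s_o is W" if s_d_is_hub else "s_d is H"
    raise ValueError("no dominant weekday peak; R_C does not apply")

# Example: origin 31619 -> hub 31613 with a dominant AM peak.
print(commuting_rule(s_o_is_hub=False, s_d_is_hub=True,
                     dominant_peak="AM"))  # -> "s_o is H"
\end{verbatim}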
Let us take the O-D pairs in Fig.~\ref{fig:BikeState_5Y_DiffOD_MemberUser_hod_Week} as an example to illustrate the usage of the $R_C$ rule. Stations 31121, 31200, 31613, and 31623 are known to be adjacent, respectively, to the four transportation hubs (Metrorail and railway stations) of Woodley Park, Dupont Circle, Eastern Market, and Union Station. Based on the $R_C$ rule and the patterns shown in Fig.~\ref{fig:BikeState_5Y_DiffOD_MemberUser_hod_Week}, we can identify that the eight stations 31201, 31229, 31611, 31612, 31614, 31619, 31622, and 31631 are located in primarily residential neighborhoods. This is consistent with the result in Fig. \ref{fig:Census_LODES2014_DC_grp_h_geocode_ct_Geo_Tract10}, where the LODES data by residence confirms that seven of the eight stations are located in high-density residential census tracts with $\operatorname{log}_{10}(C_{h(ct)}/A_{ct})$ in $[4.0,4.5]$, and the remaining one in a tract with the value in $[3.4, 4.0]$. Among the top 20 O-D pairs in the ranking of the highest O-D flows by member users, 17 are links between the four T stations and the eight H stations, i.e., all the top O-D pairs in Fig.~\ref{fig:BikeState_5Y_DiffOD_MemberUser_hod_Week} are T $\rightarrow$ H or H $\rightarrow$ T except for Figs. \ref{fig:BikeState_5Y_31104_31121_MemberUser_hod_Week}, \ref{fig:BikeState_5Y_31121_31104_MemberUser_hod_Week} and \ref{fig:BikeState_5Y_31201_31229_MemberUser_hod_Week}. This indicates that bikesharing plays an important role in providing first- and last-mile connections between residences and transportation hubs.
We can also define decision rules to identify land use related to non-commuting trips. In the {\em nightlife-related} rule ($R_N$), $s_o$ or $s_d$ is likely located near a nightlife area if there is a nontrivial trip rate during 0--4 AM on weekends. For example, in Fig. \ref{fig:BikeState_5Y_31104_31121_MemberUser_hod_Week}, Station 31104 is in Adams Morgan, a major nightlife area with many bars and restaurants.
\subsection{Mobility} \label{sec:Mobility}
For mobility, travel or trip speed is important. For each bicycle trip, its bikeshare use time provides an upper bound on travel time. In particular, for utilitarian O-D trips, the bikeshare use time approaches the travel time if there is no intermediate stop. Therefore, we can estimate a lower bound of trip speed for member users using the bikeshare use time and the shortest path length between each O-D pair, without requiring extensive GPS tracking data \citep{Wergin2017}. Fig. \ref{fig:BikeState_5Y_med_osid_dsid_DiffOD_medSpeed_UserType_T1000_PDF_CDF} gives the distribution of median speed $v_{med}$ for all O-D pairs with more than 1000 trips taken by bikeshare users. For each O-D pair, we compute $v_{med}=L_R/T_{U,med}$, where $L_R$ is the distance of the recommended bicycling route between the O-D pair, retrieved with the Google Maps Distance Matrix API, and $T_{U,med}$ is the median bikeshare use time for the O-D pair. For casual, member, and all users, the median values of $v_{med}$ are respectively 3.48, 8.31, and 8.08 mph. A comparison between member and all users shows a significant difference in the 10th percentile values of $v_{med}$, which are respectively 6.54 and 4.00 mph.
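A sketch of the $v_{med}$ computation follows; the route distances are assumed to have been fetched once from the Distance Matrix API and cached in a small table, and all file and column names are illustrative:
\begin{verbatim}
import pandas as pd

trips = pd.read_csv("cabi_trips.csv", parse_dates=["start_time", "end_time"])
trips["t_use_h"] = (trips["end_time"]
                    - trips["start_time"]).dt.total_seconds() / 3600.0

# Cached recommended-bicycling-route distances per O-D pair, in miles;
# assumed columns: start_station, end_station, miles.
route_miles = pd.read_csv("od_route_miles.csv")

grp = trips.groupby(["start_station", "end_station"])
od = grp["t_use_h"].agg(median_h="median", n="size").reset_index()
od = od[od["n"] > 1000].merge(route_miles,
                              on=["start_station", "end_station"])
od["v_med_mph"] = od["miles"] / od["median_h"]  # v_med = L_R / T_{U,med}
print(od["v_med_mph"].median())
\end{verbatim}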
\begin{figure} [htb]
\centering
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_med_osid_dsid_DiffOD_medSpeed_UserType_T1000_PDF_CDF} \caption{Full System in Terms of User Types.}
\label{fig:BikeState_5Y_med_osid_dsid_DiffOD_medSpeed_UserType_T1000_PDF_CDF}
\end{subfigure}
\begin{subfigure}{.49\textwidth}
\centering \includegraphics[width=0.95\textwidth]{img/BikeState_5Y_med_osid_dsid_Regions_medSpeed_Member_T1000_PDF_CDF} \caption{Member Users in Different Regions.}
\label{fig:BikeState_5Y_med_osid_dsid_Regions_medSpeed_Member_T1000_PDF_CDF}
\end{subfigure}
\caption{CDF Distributions of Median Travel Speed from O-D Trips.} \label{fig:BikeState_5Y_med_osid_dsid_medSpeed_T1000_PDF_CDF}
\end{figure}
Concerning traditional vehicle flow, traffic congestion has been a serious problem \citep{schrank20152015}. Letting the free flow, average, and 95th percentile travel times be $TT_{O}$, $\overline{TT}$, and $TT_{95}$, respectively, the {\em Travel Time Index} (TTI) and {\em Planning Time Index} (PTI) are
\begin{equation}
\text{TTI}=\overline{TT}/TT_{O},~~~~\text{PTI}=TT_{95}/TT_{O},
\end{equation}
as defined in the Urban Congestion Reports (UCR) by \cite{FHWA2017Congestion}. According to the UCR, during the last quarter of 2016, DC had TTI=1.48 and PTI=3.08, and daily congestion lasted nearly 8 hours on average. According to \cite{DDOT2017Mobility}, some road segments suffer severe congestion with TTI $>$ 3.5 during peak time. Notice that the maximum lawful speed is 25 mph on most streets in DC; thus, an extremely high TTI means rather slow traffic.
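Both indices are simple functionals of a travel time sample; a minimal sketch with toy numbers:
\begin{verbatim}
import numpy as np

def congestion_indices(travel_times, free_flow_time):
    """TTI = mean(TT) / TT_O, PTI = 95th percentile(TT) / TT_O."""
    tt = np.asarray(travel_times, dtype=float)
    tti = tt.mean() / free_flow_time
    pti = np.percentile(tt, 95) / free_flow_time
    return tti, pti

# Toy example: free-flow time 10 min, observed times in minutes.
print(congestion_indices([12, 15, 14, 30, 22, 13, 40, 16],
                         free_flow_time=10))
\end{verbatim}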
Congestion affects both travel time and reliability of vehicles~\citep{martin2014evaluating,stinson2004frequency}. In general, congestion does not affect bike users as much as car and bus transit users. With the current median speed of about 8 mph, bicycling is certainly a viable commuting option in urban transportation compared to other surface transportation modes. In particular, if travel time reliability is important for users, bicycling might be a better choice during peak time. Although driving might save travel time in terms of free flow speeds, the variation in vehicle travel time is rather large in rush hours with high TTI and PTI, where a delay can reach several times the travel time savings~\citep{asensio2008commuters}. According to the 2016 member survey, 89\% of respondents cited access and speed as their primary reasons for joining CaBi \citep{bikeshare2016capital}.
The speed of CaBi members is still far below the average cycling speed, which can reach 14.6 mph \citep{moritz1997survey}. This is likely due to the high density of intersections in urban areas, which imposes unnecessary delays on cyclists. Here, smart multimodal intersection control \citep{portilla2016model} can help to improve biking mobility without significant interruption to existing vehicle flows. The improvement in bike mobility might further promote bike usage as a part of the green intermodal transportation system, and accelerate the traffic mode shift towards reducing VMT~\citep{fishman2014barriers} and mitigating traffic congestion as well as vehicle emissions~\citep{schrank20152015,hamilton2017bicycle}.
It is important to understand the dependence of bike mobility on the network infrastructure. For urban planners and policymakers, such knowledge could provide critical support for upgrading the bike infrastructure in multimodal urban transportation networks. To gain some insights, a case study is performed on a dedicated bike lane with bikeshare stations. The bike lane is on 14th Street between Station 31407 at Colorado Ave and Station 31241 at Thomas Circle. There are 12 CaBi stations in total (i.e., 31101, 31105, 31119, 31123, 31124, 31202, 31203, 31241, 31401, 31402, 31406, 31407) on the bike lane, and the O-D trips among these stations are considered. Only member users are considered, so as to evaluate the bike mobility of utilitarian trips. It is reasonable to assume that nearly all these bikeshare users follow the route on 14th Street, since it is the shortest route and a dedicated bike lane. Fig. \ref{fig:BikeState_5Y_med_osid_dsid_Regions_medSpeed_Member_T1000_PDF_CDF} shows the difference in bike mobility between the full system and the selected region on 14th Street. Compared with the full system, a larger fraction of trips reach higher travel speeds on 14th Street: for travel speeds over 10 mph, the percentages are respectively 37.68\% and 14.88\% on 14th Street and in the full system. The result indicates that bike mobility can be significantly improved on dedicated bike lanes.
\subsection{Safety} \label{sec:Safety}
Concerning safety, we focus on the bike crash data in DC. According to the combined traffic safety statistics of 2013--2015 from \cite{DDOT2015SafetyFact} and the crash data from Open Data DC, there were 707.4 bike crashes per year on average between 2012 and 2016, in which an average of 46.8 bicyclists were fatally or severely injured and 436.6 sustained minor injuries. These crashes involved all bike riders, although bikeshare users have been reported to be associated with a lower crash risk than other cyclists \citep{martin2016bikesharing}. For CaBi, there were 132 reported crashes (including 50 hospital injuries) in its 9.16 million trips between 2012 and July 2015 \citep{martin2016bikesharing}.
In DC, a total of 768 bicyclists were involved in traffic crashes in 2016. Fig. \ref{fig:dc_crashes2_grp_Geo_TOTAL_BICYCLES_Y2016} shows the crash locations. The broad spatial distribution of crashes is a warning that bike safety requires significant improvement. This is an important issue, as more people are adopting this active transportation mode. Fig.~\ref{fig:dc_crashes2_grp_Geo_TOTAL_BICYCLES_Y2016} also illustrates that the bike network is well connected across DC, since crashes are rare events relative to total trips yet are distributed widely over the map of DC. The heatmap of crashes is shown in Fig. \ref{fig:dc_crashes2_grp_Geo_TOTAL_BICYCLES_Y2016_w_Heatmap}, which provides a visual summary of the most frequent crash regions.
Fig. \ref{fig:BikeState_5Y_Geo_grp_osid_DailyRate_BikeAccidentHotSpots} gives the statistically significant crash hot spots, obtained using Z scores from the Getis-Ord Gi* statistic at 99\% confidence. By overlaying the trip demand information from Fig. \ref{fig:BikeState_5Y_Geo_grp_osid} onto Fig. \ref{fig:BikeState_5Y_Geo_grp_osid_DailyRate_BikeAccidentHotSpots}, we find that the hot spots of bike crashes have a strong spatial correlation with trip demand.
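For completeness, a minimal self-contained sketch of the Gi* Z scores (without the GIS tooling actually used for the map) is given below; $|z|>2.58$ corresponds to the 99\% confidence level:
\begin{verbatim}
import numpy as np

def getis_ord_gi_star(x, W):
    """Z scores of the Getis-Ord Gi* statistic for counts x under a
    binary spatial weight matrix W that includes self-neighbors."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xbar, s = x.mean(), x.std()      # global mean and (population) std
    wx = W @ x                        # local weighted sums
    wsum = W.sum(axis=1)              # sum of weights per location
    w2sum = (W ** 2).sum(axis=1)
    denom = s * np.sqrt((n * w2sum - wsum ** 2) / (n - 1))
    return (wx - xbar * wsum) / denom

# Toy example: 5 cells on a line; each cell neighbors itself and its
# adjacent cells, and cells 2-3 hold most of the counts.
x = np.array([1, 2, 9, 8, 1])
W = (np.abs(np.subtract.outer(range(5), range(5))) <= 1).astype(float)
print(getis_ord_gi_star(x, W))  # |z| > 2.58 flags a hot spot
\end{verbatim}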
\begin{figure} [h]
\centering
\begin{subfigure}{.328\textwidth} \centering \includegraphics[width=.95\linewidth]{img/dc_crashes2_grp_Geo_TOTAL_BICYCLES_Y2016} \caption{DC Bike Crashes in 2016} \label{fig:dc_crashes2_grp_Geo_TOTAL_BICYCLES_Y2016} \end{subfigure}
\begin{subfigure}{.32\textwidth} \centering \includegraphics[width=.95\linewidth]{img/dc_crashes2_grp_Geo_TOTAL_BICYCLES_Y2016_w_Heatmap2} \caption{Crash Heatmap} \label{fig:dc_crashes2_grp_Geo_TOTAL_BICYCLES_Y2016_w_Heatmap} \end{subfigure}
\begin{subfigure}{.32\textwidth} \centering \includegraphics[width=.95\linewidth]{img/BikeState_5Y_Geo_grp_osid_DailyRate_BikeAccidentHotSpots2_2} \caption{Demands and Crash Hot Spots} \label{fig:BikeState_5Y_Geo_grp_osid_DailyRate_BikeAccidentHotSpots} \end{subfigure}
\caption{Spatial Distribution of Bike Crashes in Washington DC.} \label{fig:Bike_Accident_16}
\end{figure}
Safety has been a major barrier to adopting biking as a new form of mobility; in particular, bike users have concerns about riding in traffic \citep{fishman2016bikeshare}. Besides education and enforcement, upgrading bike infrastructure and using technology can both help to reduce considerable risks or exposures and to lower the stress level \citep{lowry2016prioritizing} for cyclists on urban roads.
\section{Discussions}
\subsection{Implications for Data-Driven Decision Supports} \label{sec:ddds}
Fig. \ref{fig:BSSStakeholders} shows data-driven decision supports (DDDS) built from the relations between the BSS and key stakeholders, taking advantage of the travel patterns and characteristics mined from data, where the key stakeholders consist of road users, system operators, and the city (urban planners and policymakers). The inputs from key stakeholders to the BSS include trip information (such as O-D, purpose, mode, and route choice), bikeshare operating strategies (such as corral service, rebalancing, and pricing) and service level agreements (SLA), and urban landscapes (such as land use and infrastructure) and policies. Data analysis is then performed on data sources integrating the BSS and other related datasets, such as LODES, crash data, and data from Google Maps APIs. From the viewpoint of DDDS, the data analysis provides supporting information that helps key stakeholders achieve their goals. The process can be iterative, with the inputs to the BSS updated over time, letting key stakeholders benefit from the changes and fostering the transportation shift towards more sustainable urban mobility in a complex urban environment.
\begin{figure} [htbp]
\centering \includegraphics[width=0.99\textwidth]{img/BSSStakeholder} \caption{Data-Driven Decision Supports from Relations Between Bikesharing Systems and Stakeholders.}
\label{fig:BSSStakeholders}
\end{figure}
The patterns and characteristics extracted from data can provide support for measures of effectiveness (MoE) to stakeholders by tracking key system metrics on an ongoing basis.
We consider essential MoEs in Fig. \ref{fig:BSSStakeholders} and discuss them mostly in the context of the CaBi system. Note that DDDS is extensible, and new MoEs can be incorporated into DDDS implementations according to specific goals when the relevant data are available.
The use time ratio of bikeshare (Section \ref{sec:UseTime}) can be used by system operators to understand or estimate their operational efficiency. The demand-supply imbalance (Section \ref{sec:TripDemand}) can alert system operators to the need for rebalancing, supporting a better QoS of bikeshare. The use time distribution in Fig. \ref{fig:BikeState_5Y_User_Idle_Time_PDF_CDF} indicates that most road users are sensitive to trip time, as additional trip time incurs higher usage-time-based fees. Concerning QoS, although the SLA is provided by operators and has to be adopted by individual users, it is often negotiated between the city and system operators \citep{de2016bike} when they are in public-private partnerships (which is in fact the usual case). The evaluations of mobility and safety in Sections \ref{sec:Mobility} and \ref{sec:Safety} directly inform various transportation decisions by key stakeholders, albeit with different weights. Notice that the metrics of mobility and safety represent aggregated experiences of road users, which can be improved by city planners or policymakers by upgrading urban landscapes, adopting new technologies to reduce the number of bike crashes, or encouraging a safety culture among road users.
For system operators, higher operational efficiency corresponds to higher revenue and/or lower operating cost. The revenue of a BSS depends on the pricing scheme and ridership. The trip cost of road users depends on the pricing scheme and trip time. If a BSS improves its operational efficiency by reducing its operating cost (e.g., through optimizing rebalancing activities \citep{shui2017dynamic,elhenawy2018heuristic}), some flexibility in adjusting the pricing scheme becomes available to attract more ridership from road users, and the revenue improves in return. According to \citet{DDOT2015CabiPlan}, although the cost recovery rate of CaBi is projected to increase from 75\% (FY2016) to 87\% (FY2021), its operating balance might stay negative in the foreseeable future.
Notice that a BSS can bring public benefits in social, environmental, economic, and health-related aspects that may not show up in the operating balance~\citep{hamilton2017bicycle,DDOT2015CabiPlan,shaheen2013public,fishman2016bikeshare}. Many usable business models are surveyed in \cite{shaheen2013public}.
In the case that the BSS needs a subsidy from the city, the city may also consider operational efficiency as one of its MoEs. Notice that one of the important goals of the city is to achieve sustainable urban transportation, for which reducing private car usage by increasing bike ridership among road users is a highly viable way, one tightly associated with the operational efficiency of the BSS. The indirect yet tight relation between the city and BSS operational efficiency is indicated with a dashed line in Fig. \ref{fig:BSSStakeholders}.
To be concise, we do not show relatively loose relations among the components of DDDS in Fig. \ref{fig:BSSStakeholders}. For example, people's preferences for biking facilities may impact the quality of land use by changing the values of businesses \citep{buehler2015business,Poirier2018Bicycle} and residential properties \citep{Liu2017Impact}. The reduction in trip costs and the increases in mobility and safety would attract more road users to adopt the BSS, which might in turn raise its operational efficiency. System operators can also enhance biking safety by adding protection or safety alert accessories to bikes, such as self-powered LED lights that provide more visibility while riding.
Some MoEs involve conflicting goals. For instance, regarding operational efficiency, system operators might explore options to reduce QoS to save high maintenance and service costs \citep{demaio2009bike}, or pursue higher trip payments from users by increasing membership rates or usage fees \citep{DDOT2015CabiPlan}, which however might reduce total ridership. The total impact on efficiency should be carefully evaluated whenever such conflicts are entailed in the operation. Another example concerns helmet usage. Bicycle helmets have been shown to be effective in protecting bike users and improving road safety \citep{attewell2001bicycle}. Although the law in DC does not enforce helmet usage for people above the age of 16, \cite{DDOT2015CabiPlan} has a target to increase the percentage of CaBi riders wearing helmets year by year. As shown in the references~\citep{shaheen2013public,fishman2016bikeshare}, bikeshare users demonstrate a strong reluctance to wear helmets. Without sufficient improvements in practical conditions, policies aiming to increase road safety through more helmet usage might conflict with those augmenting total ridership and operational efficiency of the BSS. For one more example, electric bicycles, or e-bikes, would help increase the usage of bikesharing \citep{pucher2017cycling}, as they lower the activity burden for seniors and long-distance travelers, make cycling easier in hilly areas, and significantly improve mobility for all bike users; however, regarding road safety, e-bike users have been shown to be more likely to be involved in severe crashes than classical bike users \citep{schepers2014safety}. To achieve overall benefits among different goals, the constraints and relations should be considered, and the total effect can be evaluated as data-driven decision support.
Beyond providing MoE supports, the patterns and characteristics can be used by stakeholders as additional decision supports. The effectiveness of operating activities such as corral service, which is shown to be evaluable from the patterns and characteristics of drop-off/pickup counts at different stations or for different events (Section \ref{sec:OperationalActivity}), can be used by system operators to gauge the impact of their operating activities, to evaluate their operating cost, and to augment operational efficiency and QoS. The empirical PDFs of O-D trip time, which are shown to be well fitted by parametric models in Fig. \ref{fig:BikeState_5Y_DiffOD_Time_MemberUser_PDF} (Section \ref{sec:UseTime}), can be used not only by system operators for operational simulations, but also by road users for trip planning. The trip flow imbalance in Fig. \ref{fig:BikeState_5Y_Geo_grp_osid_ToD_DoW} (Section \ref{sec:TripDemand}) and the flow patterns in Fig. \ref{fig:BikeState_5Y_DiffOD_MemberUser_hod_Week} (Section \ref{sec:odFlows}), both shown to be associated with land use information, can be used by system operators to optimize operating efforts and by the city to improve urban landscapes. Trip purpose, which is shown to be extractable in Section \ref{sec:UserTripTypes}, can greatly influence travel behavior and mode choice. The advantage of biking infrastructure such as dedicated bike lanes in enhancing biking mobility, as shown in Section \ref{sec:Mobility}, can be used by system operators to plan or develop bikeshare networks and by the city to develop or improve biking infrastructure. Significant improvement of biking mobility and safety is critical for road users deciding whether or not to adopt biking as their transportation mode, and it strongly affects whether the city can win a mode shift that reduces VMT and traffic congestion.
Fig. \ref{fig:BikeState_5Y_CasualUser_TopRoutes} shows the community structure that emerged from the O-D flows of bikeshare users. It reveals a self-organized formation of polycentricity in the urban spatial structure \citep{anas1998urban} resulting from the addition of a BSS to the urban landscape. Traditionally, traffic congestion is often considered the most serious negative externality \citep{louf2014congestion} leading to the formation of polycentricity, which helps to keep the urban spatial structure efficient. In the bikesharing network, however, the primary negative externality appears to be trip time, as most people try to complete their biking trips within 30 minutes to retain a low trip cost under the current pricing scheme of CaBi. Fig. \ref{fig:BikeState_5Y_CasualUser_TopRoutes} indicates that the existing CaBi coverage of communities is rather small, limited to only a small portion of the DC area. It is also shown in Fig. \ref{fig:BikeState_5Y_Geo_grp_osid} that the trip rates of CaBi stations in quite a large area of DC are lower than those in the bikeshare-centric areas by a few orders of magnitude. This problem can considerably reduce the overall operational efficiency of the BSS, but simple solutions such as eliminating low-use bikeshare stations \citep{garcia2012optimizing} may increase inequity in the accessibility of bikeshare service. From the viewpoint of polycentricity, there are some potential solutions: 1) foster the formation of new communities in the bikeshare low-use areas, which can be taken into account by system operators when bikeshare stations are relocated or new pricing schemes are implemented, or by the city when urban landscapes are updated; and 2) improve biking mobility to extend the biking distance accessible to road users within the same time limit, which can significantly expand biking communities to cover broader areas. In addition, an adjustment of the current pricing scheme by system operators to encourage longer trips beyond the existing communities would also help solve the problem.
It is worth noting that data itself is incomplete and is only one of the inputs supporting the complex decision making for the public good. To prevent potential risks from data-driven decision making \citep{lepri2017tyranny}, we must adopt a human-centric perspective, since people are the subjects of the decisions. Moreover, stakeholders might place different perspectives and weights on some key MoEs. For example, the pursuit of operational efficiency by system operators might impact transportation equity if disadvantages were induced for some groups of road users \citep{lee2017understanding}. Due to such conflicts and differences in perspective, negotiations among stakeholders should be included in the policy decision-making process to achieve widely acceptable solutions or service terms that can be written into the SLA.
DDDS adopts an iterative improvement process, as in real-world decision making and policy implementation \citep{mclaughlin1987learning}.
As indicated by the blue dashed cycle in Fig. \ref{fig:BSSStakeholders}, DDDS can iteratively incorporate all the changes and inputs from the key stakeholders into the BSS to update its function. The patterns and characteristics output by each updated BSS bring the MoEs and supports up to date and serve as information sources for the key stakeholders, leading to updated decisions by the key stakeholders and new changes in the inputs to the BSS in the next iteration. The iterative process links the inputs from the key stakeholders with the patterns and characteristics in a systematic way for the BSS. This is important for BSS optimization in practice. For example, some changes may be high-cost (for example, implementing bike infrastructure), may significantly impact road users, or both; however, their potential effects are rather difficult to assure in advance, based on incomplete information or in complex urban environments. In this case, the changes can first be deployed at a small scale as a pilot, from which the data capturing the responses of road users can be collected and analyzed to obtain patterns and characteristics that provide evidence-based support for evaluating the effect and efficiency of the performed changes. By gradually expanding the scope of the changes within DDDS, the iterative improvement of the BSS allows city policymakers and BSS operators to progressively evaluate the impacts of changes, learn from experience, and make decisions for optimizing the BSS at a lower risk.
\subsection{Contributions on Travel Patterns and Characteristics}
Besides the MoEs and evidence-based decision supports in DDDS discussed in Section \ref{sec:ddds}, the systematic examination of new patterns and characteristics in BSS offers some additional contributions to the body of knowledge on travel behaviors. Here we provide some brief discussions.
In Section \ref{sec:TripDemand}, we have studied the cause of demand-supply imbalance in BSS by combining the BSS data and the LODES data from \cite{LEHD2017LODES}. As one of the main challenges in BSS operations, the bike redistribution problem has been illustrated in the earlier work of \citet{vogel2011understanding} and \citet{o2014mining}. Departing from the existing studies, we have focused on the underlying cause of the redistribution problem, i.e., the demand-supply imbalance of BSS. We transformed the patterns and characteristics into useful information for relating the imbalance to the spatial distributions of workplace and residence, which in turn reflect the commuting behaviors during peak periods. From the viewpoint of DDDS, although the demand-supply imbalance is output by road users, it is in fact the result of the users' best responses to the inputs from system operators and policymakers. Using this information as decision support, BSS operators can optimize their rebalancing efforts, and policymakers can craft strategic policies addressing the rebalancing problems more effectively (for example, one useful policy is to encourage mixed land use through zoning of workplace and residence).
In Section \ref{sec:OperationalActivity}, we have analyzed the impacts of some real-world operating activities. Various operating strategies \citep{shaheen2013public,de2016bike} have been implemented in different cities. Most previous research on operating activities focused on route optimization of rebalancing vehicles for reducing costs, including emissions \citep{shui2017dynamic,elhenawy2018heuristic} and lost demand \citep{ghosh2017dynamic}. Our work has focused on evaluating the qualitative and quantitative impacts of specific operating activities, including corral services and accompanying rebalancing efforts, at specific BSS stations.
From the viewpoint of DDDS, operating activities are the inputs from BSS operators. Using the results as decision support, BSS operators can re-allocate their services to optimize overall operating efficiency and QoS at BSS stations.
In Section \ref{sec:UseTime}, we have separated use, maintenance (e.g., rebalancing and bike repairing) and idle times based on practical conditions and fitted their distributions with parametric models. In the recent work of \citet{faghih2017empirical}, the separation between user and rebalancing trips was conducted by applying an approximate heuristic to aggregate data in five-minute time intervals, which could lead to a significant underestimation of bike usage by users.
In Section \ref{sec:UserTripTypes}, we have shown that trip purposes are likely associated with user and trip types in the DC trip data. Specifically, member users and O-D trips are likely associated with utilitarian trips, whereas casual users and O-O trips are likely associated with recreational trips. Based on survey data \citep{buck2013bikeshare}, CaBi member and short-term users are likely to cycle for utilitarian and recreational trip purposes, respectively. From the study of GPS trajectory data of 3,596 trips by CaBi users \citep{Wergin2017}, casual trips are longer in trip time and mostly around the National Mall, whereas member trips are faster and mostly in popular mixed-use neighborhoods. Similar geospatial distributions for casual and member trips have been shown in Fig. \ref{fig:BikeState_5Y_CasualUser_TopRoutes}. Our study of empirical use time distributions has shown that 84.84\% of O-O trips and 97.64\% of O-D trips by member users are below 30 minutes, retaining a low trip cost under the current pricing scheme of CaBi. Such user behavior might lead to the self-organized formation of polycentricity in Fig. \ref{fig:BikeState_5Y_CasualUser_TopRoutes} from the perspective of urban spatial structure \citep{anas1998urban}.
Our study has also disclosed some additional information on the broad usage of bikeshare in multimodal transportation environments, based on the analysis of the top O-D pairs with the highest flows by member users in Section \ref{sec:odFlows}. By applying a few simple rules, we have found that most of the top O-D pairs in BSS provide first- and last-mile connections between transportation hubs and residential neighborhoods in DC. Such multimodal connections indicate a complementary relationship \citep{barber2018unraveling} between bikeshare and transit that is potentially beneficial to cyclists and operators \citep{Ravensbergen2018Biking}.
The results in Fig. \ref{fig:BikeState_5Y_DiffOD_Time_MemberUser_PDF} indicate that bike trip time varies little between O-D pairs, and the parametric models of O-D trip time between BSS stations could be adopted for real-time intermodal planning \citep{griffin2016planning} by road users.
On bike mobility, we have derived travel speed information using the BSS data and an open API from \citet{GMap2017API}. Traditionally, mobility information could be estimated from survey data \citep{moritz1997survey} or extracted from GPS trajectory data \citep{Wergin2017}, but the two methods may be considerably constrained by subjective sampling and by challenges in data collection such as privacy concerns \citep{seidl2016privacy} and data volume \citep{romanillos2016big}, respectively. Compared to existing methods, our method provides a relatively objective and economical way to evaluate bike mobility in BSS, especially for utilitarian trips. The underlying assumption of our method is that road users follow the shortest bike route recommended by Google Maps. In practice, most utilitarian users, e.g., commuters, stick to 2-3 near-optimal routes in their trips \citep{Anowar2017}, as travel time matters most to these users \citep{bikeshare2016capital,Anowar2017} and near-optimal routes often have travel times similar to the shortest bike route in urban road networks.
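As an illustrative sketch of this derivation (not the exact pipeline used in this study; the API key, coordinates, and helper names below are assumptions for illustration, while the endpoint and response fields follow the public Directions API documentation), the speed of a recorded trip can be estimated by dividing the routed bicycling distance by the recorded trip duration:
\begin{verbatim}
# Sketch: estimate bike trip speed from a BSS trip record plus the
# shortest-bike-route distance returned by the Google Maps Directions
# API; API_KEY and the coordinates are illustrative placeholders.
import requests

API_KEY = "YOUR_KEY"
URL = "https://maps.googleapis.com/maps/api/directions/json"

def routed_distance_m(origin, destination):
    """Bicycling route distance in meters between (lat, lon) pairs."""
    params = {"origin": "%f,%f" % origin,
              "destination": "%f,%f" % destination,
              "mode": "bicycling",
              "key": API_KEY}
    route = requests.get(URL, params=params).json()["routes"][0]
    return sum(leg["distance"]["value"] for leg in route["legs"])

def trip_speed_mph(origin, destination, duration_s):
    """Speed implied by routed distance and recorded trip time."""
    miles = routed_distance_m(origin, destination) / 1609.34
    return miles / (duration_s / 3600.0)

# e.g., a hypothetical commute recorded as a 14-minute trip:
# trip_speed_mph((38.8977, -77.0365), (38.9072, -77.0369), 14 * 60)
\end{verbatim}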
The patterns and characteristics also show that travel speed could be significantly improved by using dedicated bike lanes, which provides important decision support for policymakers weighing bike-infrastructure options that improve mobility in the city's biking network and foster a mode shift toward reduced private car usage. Notice that safety has been a major barrier to the adoption of biking as a new form of mobility for road users \citep{fishman2016bikeshare}. We should keep in mind that bike safety requires significant improvement, given the broad spatial distribution of bike crashes and the strong correlation between bike crashes and trip demand, as indicated by the results in Section \ref{sec:Safety}.
\subsection{Generalizability to Other Systems}
It would be valuable to know how much the data analysis and results from one city can be generalized to systems in other cities. From the perspective of data availability, the datasets used in this analysis are essential and commonly available for most existing systems. The analysis methods and processes generating the outputs of patterns and characteristics are applicable to other systems. However, in consideration of the potential differences in the inputs of key stakeholders among systems, we should be careful when generalizing interpretations of different parts of the results regarding MoEs and other supports. Let us first take BSS redistribution as an example. The redistribution problem has been illustrated in different cities in the studies of \citet{vogel2011understanding} and \citet{o2014mining}. In Section \ref{sec:TripDemand}, we have found that the demand-supply imbalance in BSS station clusters is caused by the commuting behavior of road users during AM and PM peak periods and is reflected in the underlying nonuniform spatial distributions of workplace and residence clusters, which commonly exist in other cities according to the LODES data \citep{schleith2014commuting}. In Section \ref{sec:OperationalActivity}, we have examined the corral service and related bike rebalancing behaviors to analyze the impacts of operating activities. The analysis is generalizable to other cities only if they have similar valet and corral services \citep{de2016bike}. Next, let us briefly summarize the generality of the other findings presented in Section \ref{sec:anaysis_results}. The results are applicable to other cities as data-driven decision supports, though the extent of support differs across stakeholders. The analysis of use and idle times in Section \ref{sec:UseTime} is quite general across systems, and it would be interesting to examine the differences in the model parameters among systems. In Section \ref{sec:UserTripTypes}, we have focused on understanding the trip purposes of different user and trip types, whose generality holds under the 30-minute free-of-charge pricing scheme that has been adopted by most BSS systems \citep{shaheen2013public}.
Notice that changes in pricing scheme would have substantial impacts on BSS usage patterns \citep{wu2017explore}. Investigating other systems using different pricing schemes would certainly benefit BSS planning and operations, but it is beyond the scope of this paper. In Section \ref{sec:odFlows}, the analysis demonstrates a self-organized formation of urban spatial structure \citep{anas1998urban} in a monocentric or polycentric form depending on the underlying urban landscape of a city. The analysis of O-D trips discloses the broad usage of bikeshare in multimodal transportation systems to connect with major transportation hubs (Metrorail and railway stations), highlighting the important role of bikesharing in providing first- and last-mile connections between residential places and transportation hubs in the DC area. Such multimodal connections can be generalized to other cities possessing rapid transit systems \citep{barber2018unraveling,adnan2018preferences}. The analysis in Section \ref{sec:Mobility} can be applied to other cities to gauge the attractiveness of the biking mode from the perspective of improving mobility, especially in cities where road users suffer from heavy traffic congestion. The analysis in Section \ref{sec:Safety} provides a general understanding of road safety for bike users.
The implementations of DDDS may vary across cities, in consideration of differences in stakeholders, demographics, urban landscapes, goals, data sources, etc. On the aspects shared among cities, such as pricing schemes and trip purposes, integrating data across cities would help identify the hidden factors in similar settings and understand the key impacts of different settings.
\section{Conclusions} \label{sec:conclusions}
Aiming to improve bikeshare and help transform urban transportation systems toward sustainability, we conducted a comprehensive analysis to examine the underlying patterns and characteristics of a bikeshare network and
to acquire implications of these patterns and characteristics for data-driven decision supports (DDDS). As a case study, we used the trip history from the CaBi system in the Washington DC area as our main data source, and other data, including Google Maps APIs, the LODES data and the crash data in Open Data DC, as auxiliary data sources to extract related information. With appropriate statistical methods and geographic techniques, we mined travel patterns and characteristics from data on seven important aspects of BSS: trip flow and demand, operating activities, use and idle times, trip purpose, O-D flows, mobility, and safety. For each aspect, we explored the results to discuss qualitative and quantitative impacts of the inputs from key stakeholders of BSS on main MoEs such as trip costs, mobility, safety, quality of service, and operational efficiency, where the key stakeholders include road users, system operators, and city planners and policymakers. We also disclosed some new patterns and characteristics of BSS to advance the knowledge on travel behaviors.
On trip demand and flow, we showed spatial and temporal patterns in bikeshare trip demand, and found that bikeshare trip flow follows a scale-free power-law distribution---a common pattern in models of human mobility. We computed the demand-supply ratio for each station, and discussed the advantages of using the ratio as a complementary method for identifying bikeshare demand-supply imbalance. Moreover, by combining CaBi and LODES data, we found that the clustered regions reflecting bikeshare demand-supply imbalance result from large-scale human mobility behaviors, such as commuting.
On bikeshare operations, we investigated the effects and implications of operating activities, taking the corral service provided by CaBi as an example. The results on the regular corral service for high-demand seasons provided straightforward MoEs for different stations, which as evidence-based supports can help system operators devise redistribution strategies among stations for better operational efficiency. It turned out that more rebalancing effort would be needed to redistribute the extra bikes. The results on the special corral service for high-attendance events demonstrated that corral service is a useful and effective operating activity for encouraging more people to bike to events.
On use and idle times, our study indicated that the empirical distributions of these times can be well fitted by parametric models, which can be used for simulation studies. For CaBi, the use time ratio is 3.33\%, meaning that its operational efficiency could be improved enormously; we briefly discussed the related concerns and potential solutions.
On trip purpose, although it is known that biking is usually used for two types of purposes --- utilitarian or recreational --- CaBi does not provide any direct corresponding records. We showed that trip purposes can be extracted through data analysis by identifying the differences in the patterns of trip flows and of bikeshare use-time distributions.
On O-D flows, we found that core clusters and community structure can be identified by analyzing the top O-D pairs ranked by the highest O-D flows in a bikeshare network. In addition, we found that although the use-time distributions of the top O-D pairs by member users all follow a typical pattern of utilitarian trips, the O-D trip flows exhibit diverse patterns and can provide more information on trip purpose and land use.
On biking mobility, we performed statistical analysis on the O-D pairs of primary utilitarian trips to compute the distributions of trip speeds. For the primary utilitarian trips in the bikeshare network, the median speed extracted from the data analysis is 8.31 mph. Biking mobility is competitive with driving a car in over-congested urban areas, but still has considerable room for improvement; speeds would be enhanced on dedicated bike lanes.
On biking safety, we performed spatial analysis on biking crash data and found a strong spatial correlation between trip demand and the hot spots of bike crashes. We thereupon briefly considered the means to improve biking safety.
Finally, we discussed and summarized the value of our findings for promoting biking as a frequently used transportation mode and transforming urban mobility into a better and more sustainable transportation system. We discussed some critical roles and implications of the patterns and characteristics for DDDS based on the relations between BSS and key stakeholders. We briefly discussed the importance of adopting a human-centric perspective in the usage of DDDS and of including negotiations among stakeholders in the policy decision-making process. We summarized the new patterns and characteristics of BSS disclosed in this study and the knowledge added to bridge the gap between the current understanding of BSS patterns and characteristics and the needs of modeling and applications that turn such patterns and characteristics into evidence-based decision supports in the context of DDDS. We also discussed how much the analysis and results from the current study can be generalized to BSS in other cities.
Several aspects of the current work warrant further study. First, data-driven solutions might be developed to facilitate the operations, maintenance, and expansion of BSS. Second, for urban planners and policymakers, data-driven recommendations might be provided to support improving bicycle infrastructure, upgrading multimodal urban transportation systems for better safety and mobility, and expanding the coverage of community structure efficiently. Finally, there is substantial benefit in fusing bikeshare data with other data sources such as surveys or participatory sensing data, especially for better understanding the driving forces behind the shift toward more sustainable urban mobility.
\section*{References}
\bibliographystyle{apacite}
\input{bikeDFT.bbl}
\end{document}
\section{INTRODUCTION\label{intro}}
Detections of multiple main sequences and giant branches in globular clusters (GC; \mbox{\it e.g.}\ $\omega$Cen, NGC~2808, and NGC~1851;
Bedin \mbox{et al.}\ 2004\citep{bed04}, Piotto \mbox{et al.}\ 2007\citep{pio07}, Han \mbox{et al.}\ 2009\citep{han09}) have challenged
the notion that these objects are {\it uniformly} mono-metallic stellar systems of unique age.
In addition to the metallicity variations observed in certain clusters, the anomalous abundance behaviors of globulars
include star-to-star scatter of light element [el/Fe] ratios (for C, N, O, Na, Mg, and Al) in both main sequence
and giant stars (this is in contrast to the abundance trends of halo field stars; \mbox{\it e.g.}\ Carretta \mbox{et al.}\ 2009a\citep{car09a} and Gratton, Sneden, \&
Carretta 2004\citep{gra04})\footnote{We adopt the standard spectroscopic notation (Helfer, Wallerstein, \& Greenstein 1959\citep{hel59}) that
for elements A and B, [A/B] $\equiv$ log$_{\rm 10}$(N$_{\rm A}$/N$_{\rm B}$)$_{\star}$ - log$_{\rm 10}$(N$_{\rm A}$/N$_{\rm B}$)$_{\odot}$.
We also employ the definition \eps{A} $\equiv$ log$_{\rm 10}$(N$_{\rm A}$/N$_{\rm H}$) + 12.0.}. These departures in
the relative abundances (found in stars of different evolutionary stages) imply that there are
multiple stellar generations present within the globular cluster and that an initial generation may have contributed to the intracluster medium (ICM). It is possible that three sources are responsible for the aggregate chemical makeup of a globular cluster: a {\it primordial} source that generates the initial composition of the protocluster cloud, a {\it pollution} source that deposits material into the ICM from highly-evolved asymptotic giant branch stars, and a {\it mixing} source that is independent of the other two and is the result of stellar evolution processes. Further discussion of these scenarios may be found in \mbox{\it e.g.}\ Bekki et al. (2007)\citep{bek07} and Carretta et al. (2009b)\citep{car09b}.
On the other hand, in the vast majority of globular clusters, minimal scatter has been observed in the abundance ratios of elements with Z$>$20. The abundances of the neutron(n-) capture elements europium and barium have been measured in several GCs, and only in a few exceptional cases have significantly large intracluster differences in these values been seen (\mbox{\it e.g.}\ M22; Marino et al. 2009\citep{mar09}). The predominant mechanism
of Eu manufacture is the rapid n-capture process (\mbox{$r$-process}) whereas the primary nucleosynthetic channel for Ba is
slow n-capture (\mbox{$s$-process}; additional information pertaining to these production mechanisms may be found in
\mbox{\it e.g.}\ Sneden, Cowan, \& Gallino 2008\citep{sne08}). Consequently, the abundance ratio of [Eu/Ba] is used to
demonstrate the relative prevalence of the $r$- or \mbox{$s$-process}\ in individual stars.
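Explicitly, in terms of the bracket notation defined above,
\[
{\rm [Eu/Ba]} = \left[\log\epsilon({\rm Eu}) - \log\epsilon({\rm Ba})\right]_{\star} - \left[\log\epsilon({\rm Eu}) - \log\epsilon({\rm Ba})\right]_{\odot},
\]
so that positive values signal an n-capture inventory dominated by the \mbox{$r$-process}\ relative to the solar mixture, while values near zero or below indicate a substantial \mbox{$s$-process}\ contribution.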
In globular clusters with a metallicity of [Fe/H]$\lesssim -1$, a general enhancement of [Eu/Ba]~$\sim$~+0.4 to +0.6 dex is detected, which
indicates that n-capture element production has been dominated by the \mbox{$r$-process}\ (Gratton, Sneden, \& Carretta 2004\citep{gra04} and references therein). This in
turn is suggestive of explosive nucleosynthetic input from very massive stars.
The very metal-deficient globular cluster M15 (NGC 7078; [Fe/H]$\sim -2.3$) has been subject
to several abundance investigations including the recent study by Carretta et al. (2009a\citep{car09a}).
They employed both medium-resolution and high-resolution spectra of over 80 red giant stars to precisely determine
the metallicity of this cluster: $<$[Fe/H]$> = -2.314 \pm 0.007$. Additionally, they detected variations in the light
element abundances (Na and O) for stars along the entirety of the Red Giant Branch (RGB). Prior studies
have also observed large scatter in the relative Ba and Eu (intracluster) abundances. With the spectra of 17 RGB stars,
Sneden et al. (1997\citep{sne97}) found a factor of three spread in both ratios: $<$[Ba/Fe]$> = 0.07$; $\sigma= 0.18$ and
$<$[Eu/Fe]$> = 0.49$; $\sigma= 0.20$.\footnote{The anomalously nitrogen-enriched star K969 is omitted; see Appendix A of Sneden \mbox{et al.}\ 1997\citep{sne97}}
They were able to exclude measurement error as the source for the scatter and determined
that the variations were correlated: $<$[Eu/Ba]$>= 0.41$; $\sigma= 0.11$. In a follow-up study of 31 M15 giants by Sneden,
Pilachowski \& Kraft (2000b\citep{sne00b}), the scatter in the relative Ba abundance was confirmed: $<$[Ba/Fe]$>= 0.12$; $\sigma = 0.21$
(limitations in the spectral coverage did not permit a corresponding analysis of Eu).
The majority of M15 high-resolution abundance analyses have employed yellow-red visible spectra to maximize signal-to-noise
(stellar flux levels are relatively high for RGB targets in this region). In order to precisely derive the neutron capture abundance distribution in M15,
Sneden et al. (2000a\citep{sne00a}) re-observed three tip giants in the blue visible wavelength regime
(which contains numerous n-capture spectral transitions). The abundance determinations of 8 n-capture
species (Ba, La, Ce, Nd, Sm, Eu, Gd and Dy) were performed and large star-to-star scatter in all of the [El/Fe] ratios was measured. They
also found that the three stars exhibited a scaled solar system \mbox{$r$-process}\ abundance pattern. Additional verification of these abundance
results was done by Otsuki \mbox{et al.}\ (2006\citep{ots06}) in an analysis of six M15 RGB stars (the two studies had one star in common, K462). Consistent with
Sneden et al. (2000a\citep{sne00a}), they detected significant variation in the [Eu/Fe], [La/Fe] and [Ba/Fe] ratios. Furthermore, Otsuki et al. found
that the ratios of [(Y,Zr)/Eu] show distinct anticorrelations with the Eu abundance. Finally, employing an alternate stellar sample, Preston \mbox{et al.}\ (2006\citep{pre06})
examined six red horizontal branch (RHB) stars of M15.\footnote{The papers from Sneden et al. (1997), Sneden,
Pilachowski, \& Kraft (2000b), Sneden et al. (2000a), and Preston et al. (2006) are from collaborators affiliated with institutions in both California and Texas.
Hereafter, these papers and other associated publications will be referred to as CTG.} For the elements Sr, Y, Zr, Ba and Eu
a large (star-to-star) spread in the abundances was measured. In essence, all of these investigations have observed considerable chemical inhomogeneity in
the n-capture elements of the globular cluster M15.
Two issues are brought to light by the M15 abundance data: the timescale and efficiency of mixing in the protocluster environment, and the
nucleosynthetic mechanism(s) responsible for n-capture element manufacture. In this globular cluster, large abundance variations are seen in the two
stellar evolutionary classes as well as in both the light and heavy neutron capture species. There is a definitive enhancement of \mbox{$r$-process}\ elements
found in some stars of M15 (\mbox{\it e.g.}\, K462), yet not exhibited in others (\mbox{\it e.g.}\, B584). Taking into consideration the entirety of the M15 n-capture results, these data
hint at the existence of a nucleosynthetic mechanism different from the classical {\it r-} and {\it s-}processes.
Evidence of such a scenario (with multiple production pathways) may also be found in halo field stars of similar metallicity such
as CS 22892-052 (Sneden \mbox{et al.}\ 2003\citep{sne03}) and HD 122563 (Honda \mbox{et al.}\ 2006\citep{hon06}), which have displayed similar abundance variations.
Indeed, several models have advanced the notion of more than one \mbox{$r$-process}\ formation scenario (\mbox{\it e.g.}\ Wasserburg \& Qian 2000, 2002\citep{was00,was02},
Thielemann \mbox{et al.}\ 2001\citep{thi01}, and Kratz \mbox{et al.}\ 2007\citep{kra07}).
To further understand the implications of the M15 results, the spectra from the three RGB stars of Sneden et al. (2000a)\citep{sne00a}
and the six RHB stars of Preston \mbox{et al.}\ (2006)\citep{pre06} are re-analyzed. A single consistent methodology for the analysis is employed and an
expansive set of recently-determined oscillator strengths is utilized (\mbox{\it e.g.}\ Lawler \mbox{et al.}\ 2009\citep{law09}, Sneden \mbox{et al.}\
2009\citep{sne09}, and references therein). As the pre- and post-He core flash giants are examined, the relative invariance of abundance
distributions will be ascertained for $r-$ and \mbox{$s$-process}\ species. In consideration of the M15 investigations cited above, a few data anomalies have come to light.
The two main issues to be resolved are: large discrepancies in the log~$\epsilon(El)$ values between the studies of Sneden et al. (2000a)\citep{sne00a} and Otsuki et al. (2006)\citep{ots06}; and the significant disparity in the derived metallicity for the M15 cluster between Preston et al. (2006) ($<$[Fe/H]$>_{RHB} = -2.63$) and the canonically accepted value of $<$[Fe/H]$>= -2.3$ (\mbox{\it e.g.}\ Carretta et al. 2009a\citep{car09a}, Sneden et al. 2000a\citep{sne00a}). It is suggested that these differences are mostly due to the selection of atomic data, the choice of model atmospheres, and the treatment of scattering.
\section{OBSERVATIONAL DATA\label{obs}}
For the three RGB stars of the M15 cluster, two sets of high resolution spectra were re-analyzed: the first from Sneden \mbox{et al.}\ (1997\citep{sne97}) with an approximate wavelength coverage of 5400\AA~$\lesssim$ $\lambda$ $\lesssim$ 6800\AA\ and the second from Sneden \mbox{et al.}\ (2000a\citep{sne00a}) with a wavelength domain of 3600\AA~$\lesssim$ $\lambda$ $\lesssim$ 5200\AA. All spectral observations were
acquired with the High Resolution Echelle Spectrometer (HIRES; Vogt et al. 1994\citep{vog94}) at the Keck I 10.0-m telescope (with
a spectral resolving power of R $\equiv$ $\lambda$/$\Delta\lambda$ $\simeq$ 45000). The signal-to-noise (S/N) range of the data varied from
30 $\lesssim$ S/N $\lesssim$ 150 for the shorter wavelength spectra to 100 $\lesssim$ S/N $\lesssim$ 150 for the longer wavelength
spectra (the S/N value generally increased with wavelength). The three giants, K341, K462, and K583\footnote{The Kustner
(1921)\citep{kus21} designations are employed throughout the text.}, were selected from the larger stellar sample of Sneden \mbox{et al.}\ (1997)\citep{sne97}
due to relative brightness, rough equivalence of model atmospheric parameters, and extreme spread in associated Ba and Eu abundances.
Re-examination of the high resolution spectra of six RHB stars from the study of Preston \mbox{et al.}\ (2006)\citep{pre06} was also done. The observations
were taken at the Magellan Clay 6.5-m telescope of the Las Campanas Observatory with the Magellan Inamori Kyocera Echelle (MIKE)
spectrograph (Bernstein \mbox{et al.}\ 2003)\citep{ber03}. The data had a resolution of R$\simeq$ 40000 and the S/N values ranged from
$S/N\sim$25 at 3600~\AA\ to $S/N\sim$120 at 7200~\AA\ (note that almost complete spectral coverage was
obtained in the region 3600\AA~$\lesssim$ $\lambda$ $\lesssim$ 7200\AA). The six RHB targets
were chosen from the photometric catalog of Buonanno \mbox{et al.}\ (1983)\citep{buo83} and accordingly designated B009, B028, B224, B262, B412 and B584. It should be pointed out that these stars have significantly lower temperatures than other HB members (and thus,
match up favorably with the RGB).
Figure~\ref{vminusk} features the color-magnitude diagram (CMD) for the M15 globular cluster with a plot of the $V$ versus $(V-K)$ magnitudes. The $V$
magnitudes for the RGB stars are taken from the preliminary results of Cudworth (2011) and verified against the data from Cudworth (1976\citep{cud76}).
Alternatively, the RHB $V$ magnitude values are obtained from Buonanno et al. (1983\citep{buo83}). The $K$ magnitudes for all M15 targets are taken from the
Two Micron All Sky Survey (2MASS; Skrutskie \mbox{et al.}\ 2006\citep{skr06}). Cluster members
with both $B$ and $V$ measurements from Buonanno \mbox{et al.}\ are displayed in the plot (denoted by the black dots) and the stars of the current study
are indicated by large, red circles. Note that the identifications of RGB and RHB members are based upon stellar atmospheric parameters as well as
the findings from Sneden \mbox{et al.}\ (1997\citep{sne97}) and Preston \mbox{et al.}\ (2006\citep{pre06}; please consult those references for additional details).
Also in Figure~\ref{vminusk}, two isochrone determinations are overlayed upon the photometric data: Marigo \mbox{et al.}\ (2008\citep{mar08};
with the age parameter set to 12.5 Gyrs and a metallicity of [M/H]=-2.2; shown in green) and Dotter \mbox{et al.}\ (2008\citep{dot08}; with the age
parameter set to 12.5 Gyrs and a metallicity of [M/H]=-2.5; shown in blue). These are the best-fit isochrones to the general characteristics ascribed to M15 and
no preference is given to either source.
Additional observational details of the aforementioned data samples may be found in the original Sneden \mbox{et al.}\ (1997,2000a)\citep{sne97,sne00a}
and Preston \mbox{et al.}\ (2006)\citep{pre06} publications. These papers also contain descriptions of the data reduction procedures,
in which standard IRAF\footnote{\footnotesize IRAF is distributed by NOAO, which is operated by AURA, under cooperative agreement with the
NSF.} tasks were used for extraction of multi-order wavelength-calibrated spectra from the raw data frames, and specialized software
(SPECTRE; Fitzpatrick \& Sneden 1987\citep{fit87}) was employed for continuum normalization and cosmic ray elimination.
Figure~\ref{spec1} features a comparison of the spectra of all M15 targets. Displayed in this plot is a small wavelength interval $4121-4133$~\AA,
which highlights the important n-capture transitions \ion{La}{2} at 4123.22~\AA\ and \ion{Eu}{2} at 4129.72~\AA. The spectra are arranged
in decreasing \mbox{$T_{\rm eff}$}\ from the top to the bottom of the figure. As shown, the combined effects of \mbox{$T_{\rm eff}$}\ and \mbox{log~{\it g}}\ influence the apparent line strength, and
accordingly, transitions which are saturated in the RGB spectra completely disappear in the warmer RHB spectra.
\section{METHODOLOGY AND MODEL DETERMINATION}\label{modparam}
Several measures were implemented in order to improve and extend the efforts of Sneden et al. (1997, 2000a)\citep{sne97,sne00a} and Preston et al. (2006\citep{pre06}). First, the line analysis program MOOG was modified to accurately ascertain the relative contributions to the continuum opacity (especially necessary for the bluer wavelength regions and the cool, metal-poor RGB targets). Second, an alternative grid of model atmospheres was employed to obtain an internally consistent set of stellar atmospheric parameters for the full M15 sample. Third, the most up-to-date experimentally and semi-empirically derived transition probability data were utilized to determine the abundances of multiple species.
\subsection{Atomic Data}\label{atdata}
Special effort was made to employ the most recent laboratory measurements of oscillator strengths. When applicable, hyperfine and isotopic structure was included in the derivation of abundances. Tables 2 and 3 list the various literature sources for
the transition probability data. Some species deserve special comment. The Fe transition probability values are taken from
the critical compilation of Fuhr \& Wiese (2006\citep{fuh06}; note that for neutral Fe, the authors heavily weight the laboratory data from O'Brian \mbox{et al.}\ 1991\citep{obr91}). No {\it up-to-date} laboratory work has been done for Sr, and so the adopted $gf$-values are from the
semi-empirical study by Brage et al. (1998\citep{bra98}; these values are in good agreement with those derived empirically
by Gratton \& Sneden 1994\citep{gra94}). Similarly, the most recent laboratory effort for Y was by Hannaford \mbox{et al.}\ (1982\citep{han82}). Yet
these transition probabilities appear to be robust, yielding small line-to-line scatter.
A particular emphasis of the current work is the n-capture element abundances, for which a wealth of new transition probability data
have become recently available. Correspondingly, the extensive sets of rare earth $gf$-values from the Wisconsin Atomic Physics Group were adopted
(Sneden \mbox{et al.}\ 2009\citep{sne09}, Lawler et al. 2009\citep{law09}, and references therein). These data when applied to the solar spectrum yield photospheric
abundances that are in excellent agreement with meteoritic abundances. For neutron capture elements not studied by
the Wisconsin group (which include Ba, Pr, Yb, Os, Ir, and Th), alternate literature references
were employed (and these are accordingly given in the two aforementioned tables).
\subsection{Consideration of Isotropic, Coherent Scattering}
In the original version of the line transfer code MOOG (Sneden 1973\citep{sne73}), local thermodynamic equilibrium (LTE) was assumed and hence,
scattering was treated as pure absorption. Accordingly, the source function, $S_{\lambda}$, was set equal to the Planck function, $B_{\lambda}(T)$, which is an adequate
assumption for metal-rich stars in all wavelength regions. However for the extremely metal-deficient, cool M15 giants,
the dominant source of opacity switches from H$^{-}_{\rm bf}$ absorption to Rayleigh scattering in the blue
visible and ultraviolet wavelength domain ($\lambda \lesssim 4500$\AA). It was then necessary to modify the MOOG program as the LTE approximation was no
longer sufficient (this has also been remarked upon by other abundance surveys, \mbox{\it e.g.}\ Johnson 2002\citep{joh02} and Cayrel \mbox{et al.}\ 2004\citep{cay04}).
The classical assumptions of one-dimensionality and plane-parallel geometry continue to be employed in the code. Now with the
inclusion of isotropic, coherent scattering, the framework for solution of the radiative transfer equation (RTE) shifts from
an initial value to a boundary value problem. The source function then assumes the form\footnote
{To re-state, the equation terms are defined as follows: $S$ is the source function, $\epsilon$ is the thermal
coupling parameter, $J$ is the mean intensity, and $B$ is the Planck function.} of $S = (1-\epsilon)J + \epsilon B$
and the description of line transfer becomes an integro-differential equation. The chosen methodology for the
solution of the RTE (and the determination of mean intensity) is the approach of {\it short characteristics} that incorporates aspects of an
accelerated convergence scheme. In essence, the short characteristics
technique employs a tensor product grid in which the interpolation of intensity values occurs at selected grid points.
The prescription generally followed was that from Koesterke et al. (2002\citep{koe02} and references therein). The Appendix provides
more detail with regard to the MOOG program alterations.
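To illustrate the character of the modified problem (though not the actual short-characteristics solver implemented in MOOG), the minimal sketch below iterates the source function $S = (1-\epsilon)J + \epsilon B$ on a uniform 1D slab, building the $\Lambda$ operator from the first exponential integral kernel and accelerating convergence with its diagonal as an approximate operator; the grid, thermal structure $B$, and $\epsilon$ value are illustrative assumptions.
\begin{verbatim}
# Minimal accelerated Lambda iteration (ALI) sketch for the two-level
# problem S = (1 - eps)*J + eps*B on a uniform 1D optical-depth grid.
# This is NOT the short-characteristics scheme in MOOG; grid, B, and
# eps below are illustrative assumptions.
import numpy as np
from scipy.special import expn   # expn(2, x) is E2(x); dE2/dx = -E1(x)

def lambda_matrix(tau):
    """Discrete Lambda operator for J(t_i) = 1/2 Int E1(|t - t_i|) S(t) dt,
    with S taken piecewise constant on uniform cells of width h."""
    n, h = len(tau), tau[1] - tau[0]
    L = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                L[i, j] = 1.0 - expn(2, 0.5 * h)
            else:
                d = abs(tau[i] - tau[j])
                L[i, j] = 0.5 * (expn(2, d - 0.5 * h) - expn(2, d + 0.5 * h))
    return L

def solve_ali(tau, B, eps, tol=1e-8, itmax=500):
    """Iterate S = eps*B + (1 - eps)*Lambda[S]; the diagonal approximate
    operator makes convergence fast even for small eps."""
    L = lambda_matrix(tau)
    Lstar = np.diag(L)
    S = B.copy()                            # LTE starting guess
    for _ in range(itmax):
        J = L @ S                           # formal solution
        dS = (eps * B + (1.0 - eps) * J - S) / (1.0 - (1.0 - eps) * Lstar)
        S += dS
        if np.max(np.abs(dS / S)) < tol:
            break
    return S

tau = np.linspace(0.0, 100.0, 501)          # illustrative slab
S = solve_ali(tau, B=np.ones_like(tau), eps=1e-2)
# S sags far below B near the slab surfaces (photon escape) and
# thermalizes (S -> B) at depths beyond roughly 1/sqrt(eps).
\end{verbatim}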
Prior to these modifications, for a low temperature, low metallicity star (\mbox{\it e.g.}\ an RGB target), the ultraviolet and blue visible spectral transitions yielded aberrantly high abundances in comparison with those found from redder lines. With the implementation of the revised code, better line-to-line agreement is found and, accordingly, the majority of the abundance trend with wavelength is eliminated for
these types of stars. Note for the RHB targets, minimal changes are seen in abundances with the employment of the modified MOOG program
(as the dominant source of opacity for these relatively warm stars is always H$^{-}_{\rm bf}$ over the spectral region of interest).
\subsection{Atmospheric Parameter Determination}
To obtain preliminary estimates of \mbox{$T_{\rm eff}$}\ and \mbox{log~{\it g}}\ for the M15 stars, photometric data from the aforementioned sources (Cudworth 2011;
Buonanno et al. 1983\citep{buo83}; 2MASS) were employed as well as those data from Yanny et al. 1994\citep{yan94}. To transform the color,
the color-\mbox{$T_{\rm eff}$}\ relations of Alonso \mbox{et al.}\ (1999)\citep{alo99} were used in conjunction with the distance modulus ($(m-M)_0$ = 15.25) and
reddening ($E(B-V)$~=~0.10) determinations from Kraft \& Ivans (2003\citep{kra03}). Note that an additional intrinsic uncertainty of about 0.1~dex in \mbox{log~{\it g}}\ remains among luminous RGB stars owing to stochastic mass loss. Masses of 0.8~\mbox{$M_{\odot}$}\ and 0.6~\mbox{$M_{\odot}$}\ were assumed for RGB and RHB stars, respectively. The photometric $V$ and $(V-K)$ values as well as the photometrically- and spectroscopically-derived stellar atmospheric parameters
are collected in Table~\ref{m15models}.
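For reference, photometric gravities of this kind follow from the standard relation
\[
\log g = \log g_{\odot} + \log\left(\frac{M}{M_{\odot}}\right) + 4\log\left(\frac{T_{\rm eff}}{T_{{\rm eff},\odot}}\right) + 0.4\left(M_{\rm bol} - M_{{\rm bol},\odot}\right),
\]
where $M_{\rm bol}$ follows from the dereddened $V$ magnitude, a bolometric correction, and the adopted distance modulus, and the solar reference values are $\log g_{\odot} = 4.44$, $T_{{\rm eff},\odot} = 5777$~K, and $M_{{\rm bol},\odot} = 4.74$ (typical choices; the precise constants adopted are not critical at the 0.1~dex level).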
With the use of the spectroscopic data analysis program SPECTRE (Fitzpatrick \& Sneden 1987\citep{fit87}), the equivalent widths (EW) of transitions from the elements Ti I/II, Cr I/II, and Fe I/II were measured in the wavelength range 3800-6850 \AA. The preliminary \mbox{$T_{\rm eff}$}\ values were adjusted to achieve zero slope in plots of Fe abundance (log~$\epsilon_{\rm Fe I}$) as a function of excitation potential ($\chi$) and wavelength ($\lambda$). The initial values of \mbox{log~{\it g}}\ were tuned to minimize the disagreement between the neutral and ionized species abundances of Ti, Cr, and Fe (particular attention was paid to the Fe data). Lastly, the microturbulent velocities \mbox{$v_{\rm t}$}\ were set so as to remove any dependence of abundance on EW. Final values of \mbox{$T_{\rm eff}$}, \mbox{log~{\it g}}, \mbox{$v_{\rm t}$}, and metallicity [Fe/H] are
listed in Table~1, as well as those values previously derived by Sneden \mbox{et al.}\ (2000a\citep{sne00a}) for the RGB stars and
Preston \mbox{et al.}\ (2006)\citep{pre06} for the RHB stars.
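The convergence criteria just described can be summarized compactly; the sketch below is a schematic of the diagnostics only (not of SPECTRE or MOOG themselves), and assumes per-line arrays of excitation potential, EW, wavelength, and derived abundance from a line-analysis run.
\begin{verbatim}
# Sketch of the three convergence diagnostics used to tune Teff, vt,
# and log g; input arrays (one entry per measured Fe line, with EW and
# wavelength in the same length units) are assumed to come from a
# model-atmosphere/line-analysis run.
import numpy as np

def parameter_diagnostics(chi_I, ew_I, wave_I, abund_I, abund_II):
    """Return (slope vs. chi, slope vs. reduced EW, Fe I - Fe II offset);
    the model parameters are adjusted until all three are ~zero."""
    slope_chi = np.polyfit(chi_I, abund_I, 1)[0]        # drives Teff
    red_ew = np.log10(ew_I / wave_I)                    # reduced width
    slope_ew = np.polyfit(red_ew, abund_I, 1)[0]        # drives vt
    ion_offset = np.mean(abund_I) - np.mean(abund_II)   # drives log g
    return slope_chi, slope_ew, ion_offset
\end{verbatim}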
\subsection{Selection of Model Type}
To conduct a standard abundance analysis under the fundamental assumptions of one-dimensionality and local thermodynamic equilibrium (LTE), two
grids of model atmospheres are generally employed: Kurucz-Castelli (Castelli \& Kurucz 2003\citep{cas03}; Kurucz 2005\citep{kur05}) and
MARCS (Gustafsson \mbox{et al.}\ 2008\citep{gus08}).\footnote{Kurucz models are available through the website: http://kurucz.harvard.edu/
and MARCS models can be downloaded via the website: http://marcs.astro.uu.se/} The model selection criteria were as follows: the reconciliation
of the metallicity discrepancy between the RGB and RHB stars of M15, the derivation of (spectroscopically-based) atmospheric parameters in
reasonable agreement with those found via photometry, and the attainment of ionization balance between the Fe I and Fe II transitions.
For the RHB targets, interpolated models from the Kurucz-Castelli and MARCS grids were comparable and yielded extremely similar abundance results.
However, there are a few notable differences between the two model types for the RGB stars with regard to the gas and electron pressures. Though beyond the scope of the current effort, it would be of considerable interest to examine in detail the exact
departures between the Kurucz-Castelli and MARCS grids. To best achieve the aforementioned goals for the M15 data set, MARCS models were accordingly chosen.
\subsection {Persistent Metallicity Disagreement between RGB and RHB stars}
For the RHB stars, the presently-derived metallicities differ slightly from those of
Preston et al. (2006): $<$[Fe$_{I}$/H]$>$ = -2.69 (a change of $\Delta = -0.03$) and $<$[Fe$_{II}$/H]$>$ = -2.64 (a change of $\Delta = -0.04$).
However for the RGB stars, the [Fe/H] results of the current study {\it do} vary significantly from those of Sneden et al. (2000a): $<$[Fe$_{I}$/H]$>$ = -2.56 (a downwards revision of $\Delta = -0.26$) and $<$[Fe$_{II}$/H]$>$ = -2.53 (a downwards revision of $\Delta = -0.28$). The
remaining metallicity discrepancy between the RGB and RHB stars is as follows: $\Delta(RGB-RHB)_{Fe I}$ = 0.13 and
$\Delta(RGB-RHB)_{Fe II}$ = 0.11. Even with the employment of MARCS models and the incorporation of
Rayleigh scattering (not done in previous efforts), the offset persists. Repeated exercises
with variations in the \mbox{$T_{\rm eff}$}, \mbox{log~{\it g}}, and \mbox{$v_{\rm t}$}\ values showed that this metallicity disagreement in all likelihood cannot be attributed
to differences in these atmospheric parameters. As a further check, the derivation of [Fe$_{I,II}$/H] values was performed
with a list of transitions satisfactorily measurable in both RGB and RHB spectra. No reduction in the metallicity disagreement was seen, as the offsets were found to be: $\Delta(RGB-RHB)_{Fe I}$ = 0.11 (with 45 candidate Fe$_{I}$ lines) and $\Delta(RGB-RHB)_{Fe II}$ = 0.14 (with 3 candidate Fe$_{II}$ lines).
The data from the M15 RGB and RHB stars originate from different telescope/instrument set-ups. Additionally, somewhat different data
reduction procedures were employed for the two samples. Possible contributors to the iron abundance offset include the neglect of spherical symmetry in the line transfer computations and the difficulty of generating sufficiently representative stellar atmospheric models for these highly evolved stars
(which exist at the very tip of the giant branch and/or have undergone He-core flash). Indeed, it is difficult to posit a {\it single, clear-cut} explanation for
the disparity in the RGB and RHB [Fe/H] values. In an analysis of the globular cluster M92, King \mbox{et al.}\ (1998\citep{kin98}) derived an average abundance
ratio of $<$[Fe/H]$>= $ -2.52 for six subgiant stars, a factor of two lower than the $<$[Fe/H]$>$ value
measured in the red giant stars. Similarly, Korn \mbox{et al.}\ (2007)\citep{kor07} surveyed turn off (TO) and RGB stars of NGC~6397 and found
a metallicity offset of about 0.15~dex (with the TO stars reporting consistently lower values of [Fe/H]). They
argued that the TO stars were afflicted by gravitational settling and other mixing processes and as a result, the Fe abundances of
giant stars were likely to be nearer to the {\it true} value. While the TO stars do have \mbox{$T_{\rm eff}$}\ values close to that of the M15 RHB stars,
they have surface gravities and lifetimes that are considerably larger. Accordingly, it is not clear if the offset in M15 has a
physical explanation similar to that proposed in the case of NGC 6397.
\section{ABUNDANCE RESULTS}
For the extraction of abundances, candidate lines were filtered on line strength and the presence of contaminants to assemble an effective line list. Abundance derivations
for the majority of elements employed the technique of synthetic spectrum line profile fitting (accomplished with the updated MOOG code as described in \S3.2).
For a small group of elements (those whose associated spectral features lack both hyperfine and isotopic structure), the simplified approach of EW measurement
was used (completed with both the MOOG code and the SPECTRE program; Fitzpatrick \& Sneden 1987\citep{fit87}). Presented in Tables~\ref{m15rgbdata} and~\ref{m15rhbdata}
are the log~$\epsilon$ abundance values for the individual transitions detected in the M15 RGB and RHB stars respectively. These tables also list the
relevant line parameters as well as the associated literature references for the $gf$-values employed.
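For the lines treated via equivalent widths, the measurement reduces to fitting a profile to the continuum-normalized spectrum and integrating the fractional absorption; a minimal sketch (assuming an isolated, approximately Gaussian line, which is a simplification of the interactive SPECTRE task) is:
\begin{verbatim}
# Sketch: equivalent width of an isolated line from a Gaussian profile
# fit to a continuum-normalized spectrum; wavelengths are in Angstroms.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_absorption(wave, depth, center, sigma):
    return 1.0 - depth * np.exp(-0.5 * ((wave - center) / sigma) ** 2)

def equivalent_width_mA(wave, flux, center_guess):
    """EW = Int (1 - F/Fc) dlambda, returned in milli-Angstroms;
    for a Gaussian profile, EW = depth * sigma * sqrt(2*pi)."""
    p0 = [0.5, center_guess, 0.1]
    (depth, center, sigma), _ = curve_fit(gaussian_absorption,
                                          wave, flux, p0=p0)
    return 1000.0 * depth * abs(sigma) * np.sqrt(2.0 * np.pi)
\end{verbatim}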
In addition to the line-to-line scatter, errors in the abundance results may arise due to uncertainties in the model atmospheric
parameters. To quantify these errors in the M15 data set, an RGB target, K462, is first selected. If alterations of $\Delta T_{\rm eff}= \pm 100$ K are applied, then the abundances of neutral species change by approximately $\Delta$[X$_{I}$/H] $\simeq \pm 0.15$ whereas the
abundances of singly-ionized species change by about $\Delta$[X$_{II}$/H] $\simeq \pm 0.04$. Variations of $\Delta$\mbox{log~{\it g}}\ $= \pm 0.20$ yield
$\Delta$[X$_{I}$/H] $\simeq \pm 0.04$ in neutral species abundances and $\Delta$[X$_{II}$/H] $\simeq \pm 0.05$ in
singly-ionized species abundances. Changes in the microturbulent velocity on the order of $\Delta$\mbox{$v_{\rm t}$}\ $= \pm 0.20$
result in abundance variations of $\Delta$[X$_{I}$/H] $\simeq \pm 0.08$ and $\Delta$[X$_{II}$/H] $\simeq \pm 0.04$. The same procedure is then repeated for the RHB star, B009. Modifications of the temperature by $\Delta T_{\rm eff}= \pm 100$ K lead to abundance
changes of $\Delta$[X$_{I}$/H] $\simeq \pm 0.09$ in neutral species and $\Delta$[X$_{II}$/H] $\simeq \pm 0.02$ in singly-ionized species. Alterations of
the surface gravity by $\Delta$\mbox{log~{\it g}}\ $= \pm 0.20$ engender variations of $\Delta$[X$_{I}$/H] $\simeq \pm 0.01$ and
$\Delta$[X$_{II}$/H] $\simeq \pm 0.07$. Finally, variations of $\Delta$\mbox{$v_{\rm t}$}\ $= \pm 0.20$ produce abundance changes of
$\Delta$[X$_{I}$/H] $\simeq \pm 0.08$ and $\Delta$[X$_{II}$/H] $\simeq \pm 0.04$.
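Treating these sensitivities as independent and adding them in quadrature (an approximation, since the parameters are in practice correlated), the model-driven uncertainty for the neutral species in K462 is roughly
\[
\sigma_{\rm [X_{I}/H]} \simeq \left[(0.15)^{2} + (0.04)^{2} + (0.08)^{2}\right]^{1/2} \approx 0.17~{\rm dex},
\]
while the corresponding figure for the singly-ionized species is $\left[(0.04)^{2} + (0.05)^{2} + (0.04)^{2}\right]^{1/2} \approx 0.08$~dex; the uncertainties for B009 follow in the same manner.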
To discuss the abundance results in the following subsections, the elements are divided into four groupings: light ($Z = 8$; $11 \leq Z \leq 21$;
Figure~\ref{light_elem}), iron-peak ($22 \leq Z \leq 28$; Figure~\ref{fe_peak}), light/intermediate n-capture ($29 \leq Z \leq 59$; Figure~\ref{light_ncapture}),
and heavy/other ($60 \leq Z \leq 70$; $Z = 72, 76, 77, 90$; Figure~\ref{heavy_ncapture}). The measurement of abundances was completed for a total of 40 species.
Note that for the elements Sc, Ti, V, and Cr, abundance determinations were possible for both the neutral and first-ionized species. In light of Saha-Boltzmann calculations for these elements, greater weight is given to the singly-ionized abundances (i.e., for the stars of the M15 data set, only a small fraction of these elements resides in the neutral state).
Figures~\ref{light_elem},~\ref{fe_peak},~\ref{light_ncapture}, and~\ref{heavy_ncapture} exhibit the abundance ratios for the M15 sample
in the form of quartile box plots. These plots show the interquartile range, the median, and the minimum/maximum of the data. Outliers, points which
have a value greater than 1.25 times the median, are also indicated. For all of the figures, RGB abundances are signified in red while RHB abundances
are denoted in blue. Note that the plots depict the abundance results in the [Elem/H] form in order to preclude erroneous comparisons of the RGB and RHB data, which
would arise from the iron abundance offset between the two groups.
Table~\ref{m15means} contains the $<$[Elem/Fe]$>$ values for elements analyzed in the M15 sample along with the
line-to-line scatter (given in the form of standard deviations), and the number of lines employed. The subsequent discussion will generally refer
to these table data and as is customary, present the {\it relative} element abundances with associated $\sigma$ values. The reference solar
photospheric abundances ({\it without non-LTE correction}) are largely taken from the three-dimensional analyses of
Asplund et al. (2005, 2009\citep{asp05a, asp09}) and Grevesse et al. (2010\citep{gre10}). However, the photospheric values for some
of the \mbox{$n$-capture}\ elements are obtained from other investigations (\mbox{\it e.g.}\ Lawler et al. 2009\citep{law09}). Table~\ref{solarabund}
lists all of the chosen log~$\epsilon_{\Sun}$ values. Note that in the derivation of the relative element abundance ratios [X/Fe], $<$[Fe$_{I}$/H]$>$ is employed for the neutral species transitions while $<$[Fe$_{II}$/H]$>$ is used for the singly-ionized lines. This is done in order to minimize ionization equilibrium uncertainties, as described in detail by Kraft \& Ivans (2003)\citep{kra03}.
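That is, for a given species X,
\[
{\rm [X_{I}/Fe]} = {\rm [X_{I}/H]} - \langle{\rm [Fe_{I}/H]}\rangle, \qquad
{\rm [X_{II}/Fe]} = {\rm [X_{II}/H]} - \langle{\rm [Fe_{II}/H]}\rangle,
\]
so that uncertainties in the ionization balance largely cancel within each ionization stage.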
\subsection{General Abundance Trends}
Within the RGB sample, the neutron capture element abundances of K462 are consistently the largest whereas those of K583 are the smallest. The two RHB stars,
B009 and B262, exhibit abundance trends similar to those of the RGB objects. The expected anti-correlations in the proton capture elements (\mbox{\it e.g.}\ Na-O and Mg-Al)
are seen. The greatest abundance variation with regard to the entire M15 data set is found for the neutron capture elements. Indeed, the star-to-star
spread for the majority of n-capture abundances is demonstrable for all M15 targets and is not likely due to internal errors.
Inspection of Table~\ref{m15means} data indicates that RHB stars generally have higher \mbox{$r$-process}\ element abundances than RGB stars (on average
$\Delta$[Elem/Fe]$_{RHB-RGB}\approx $ 0.3 dex). A sizeable portion of the discrepancy is attributable to the difference in
the iron abundances as $<$[Fe/H]$>_{RHB}$ is approximately 0.12 dex lower than $<$[Fe/H]$>_{RGB}$. The remaining offset is most likely a consequence of
the small number of targets coupled with a serious selection effect. The original sample of RHB stars
from Preston et al. (2006\citep{pre06}) was chosen as a random set of objects with colors and magnitudes representative of the red end of the HB. These
objects were selected without prior knowledge of the heavy element abundances. On the other hand, the three RGB stars from Sneden \mbox{et al.}\ (2000a\citep{sne00a}) were specifically chosen to represent the highest and lowest \mbox{$r$-process}\ abundances, as predetermined in the 17-star sample of Sneden \mbox{et al.}\ (1997\citep{sne97}).
\subsection{Light Element Abundances}
The finalized set of light element abundances include: C, O, Mg, Al, Si, Ca, and Sc$_{I/II}$. In general, an enhancement of these element abundances
relative to solar is seen in the entire M15 data set.
An underabundance of carbon was found in one RHB and three RGB targets of M15 based on the measurement of CH spectral features. As the forbidden O I
lines were detectable only in RGB stars, the average abundance ratio for M15 is $<$[O/Fe]$>_{RGB}= +0.75$. This value is
substantially larger than that found by Sneden et al. (1997)\citep{sne97}. A portion of the discrepancy is due to the approximate 0.15 dex
difference in the $<$[Fe$_I$/H]$>$ values between the two investigations. The remainder of the departure may be attributed to the adoption of
different solar photospheric oxygen values: the current study employs log~$\epsilon(O)_{\odot}= 8.71$ (Scott et al. 2009\citep{sco09})
while Sneden \mbox{et al.}\ uses log~$\epsilon(O)_{\odot}= 8.93$ (Anders \& Grevesse 1989\citep{and89}).
For the determination of the sodium abundance, the current study relies solely upon the D$_{1}$ resonance transitions. Table~\ref{m15means} lists the spuriously
large spreads in the Na abundance for both the RGB and RHB groups. The Na D$_{1}$ lines are affected by the non-LTE phenomenon of {\it resonance}
scattering (Asplund 2005b\citep{asp05b}; Andrievsky \mbox{et al.}\ 2007\citep{and07}), which MOOG does not take into account. Also, Sneden \mbox{et al.}\
(2000b\citep{sne00b}) made note of the relative strength and line profile distortions associated with these transitions and chose to discard the [Na/Fe] values
for stars with \mbox{$T_{\rm eff}$}\ $>$ 5000 K. Consequently, the sodium results from the current study are given little weight and are not plotted in Figure~\ref{light_elem}.
Aluminum is remarkable in its discordance: $<$[Al$_{I}$/Fe$_I$]$> = 0.37$ for one RHB and three RGB targets whereas $<$[Al$_{I}$/Fe$_I$]$> = -0.43$ for
five RHB stars. The relative aluminum abundances for the RHB stars match well with the values found by Preston \mbox{et al.}\ (2006\citep{pre06}).
Similarly, the Al abundances from the current analysis agree favorably with the RGB data from Sneden \mbox{et al.}\ (1997\citep{sne97}). Though relatively strong transitions
are employed in the abundance derivation, the convergence upon two distinct [Al/Fe] values is nontrivial and could merit further exploration.
A decidedly consistent Ca abundance ratio is found for the RGB sample, $<$[Ca$_{I}$/Fe$_{I}$]$>_{RGB}= 0.29$, and also for the RHB sample, $<$[Ca$_{I}$/Fe$_{I}$]$>_{RHB}= 0.53$. After consideration of the iron abundance offset, the RHB
stars still report slightly higher calcium abundances than the RGB stars. Overall, a distinct overabundance of Ca relative to solar is present in the M15 cluster.
Note that the Sc$_{I}$ abundance determination was done for only one M15 star (and gives a rather aberrant result compared to the Sc$_{II}$ abundance
data from the other M15 targets).
\subsection{Iron Peak Element Abundances}
The list of finalized Fe-peak element abundances consists of Ti$_{I/II}$, V$_{I/II}$, Cr$_{I/II}$, Mn, Co, and Ni. Due to RGB spectral crowding issues,
derivations of [Ni$_{I}$/Fe$_{I}$] ratios are performed only for RHB stars.
Ionization equilibrium was not achieved for any of the respective species: Ti$_{I/II}$, V$_{I/II}$, or Cr$_{I/II}$. In consideration of the
entire M15 data set, the best agreement between neutral and singly-ionized species arises for titanium, with all $<$[Ti/Fe]$>$ ratios being supersolar. The
V$_{II}$ relative abundances compare well with one another for the RGB and RHB targets (comparison for V$_{I}$ is not possible as there are no RHB data for
this species). Both of the RGB and RHB $<$[Cr$_{I}$/Fe$_{I}$]$>$ ratios are underabundant with respect to solar and the neutral
chromium values match almost exactly with one another (after accounting for the [Fe/H] offset). On the other hand, the worst agreement is found for
Cr$_{I/II}$ in RHB stars with $\Delta(II-I) = 0.47$.
Subsolar values with minimal scatter were found for the $<$[Mn$_{I}$/Fe$_{I}$]$>$ ratios in both the RGB and RHB stellar groups. However in comparison
to RGB stars, manganese appears to be substantially more deficient in RHB targets. The discrepancy may be attributed both to the RGB/RHB iron abundance disparity and to the employment of the Mn$_{I}$ resonance transition at 4034.5 \AA\ for the RHB abundance determination. In particular,
Sobeck et al. (2011\citep{sob11}) have demonstrated that the manganese resonance triplet (4030.7, 4033.1, and 4034.5 \AA) fails to be a
reliable indicator of abundance. Consequently, the RHB abundance results for Mn$_{I}$ are given little weight and are not plotted in Figure~\ref{fe_peak}.
\subsection{Light and Intermediate n-Capture Element Abundances}
Finalized abundances for the light and intermediate n-capture elements include Cu, Zn, Sr, Y, Zr, Ba, La, Ce, and Pr. In general, the RGB element abundance ratios
are slightly deficient with respect to the RHB values. Also, enhancement with respect to solar is consistently seen in all M15 targets for the elements Ce and Pr.
An extreme underabundance of copper relative to solar was found in the RGB stars: $<$[Cu$_{I}$/Fe$_{I}$]$>_{RGB} = -0.91$. A similar derivation could not be performed for the RHB targets, as the Cu$_I$ transitions were too weak. A large divergence between the RGB and RHB stellar abundances exists for zinc. Detection of the Zn transitions was possible in only one RHB target, which could perhaps account for some of the discrepancy.
For the entire M15 data set, Y$_{II}$ exhibits lower relative abundance ratios than both Sr$_{II}$ and Zr$_{II}$. With regard to the three
average elemental abundances (of Sr, Y, Zr), moderate departures between the RGB and RHB groups are seen. Also, a large variation in
the $<$[Sr$_{II}$/Fe$_{II}$]$>$ ratio was found for the members of the RGB group.
Though different sets of lines are employed, the RGB and RHB $<$[Ba$_{II}$/Fe$_{II}$]$>$ ratios are consistent with one another. A portion of the RHB
abundance variation is due to the exclusive use of the resonance transitions in the determination (two lowest temperature RHB stars
report quite high $\sigma$ values; these strong lines could not be exploited in the RGB analysis). Notably for this element group, the greatest star-to-star
abundance scatter was found for lanthanum: $\Delta_{RGB} = 0.46$ and $\Delta_{RHB} = 0.61$ (excluding the one RHB outlier). The
relative cerium abundances also exhibit a wide spread in the RGB sample.
\subsection{Heavy n-Capture Element Abundances}
The list of finalized $<$[El/Fe]$>$ ratios for the heavy n-capture elements is as follows: Nd, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Hf,
Os, Ir, Pb and Th. All of these element abundance ratios are enriched with regard to the solar values. As shown in
Figure~\ref{heavy_ncapture}, a larger abundance spread is found for this group in comparison to the other element groups.
Note that as \mbox{$T_{\rm eff}$}\ increases, the strength of the heavy element transitions rapidly decreases and as a consequence, the use of these lines
for abundance determinations in the warmest stars becomes infeasible. Robust abundances for Nd, Sm, Tb, and Tm could be obtained in only a single
RHB target. On the other hand, abundance extractions for the species Os and Ir were done only in RHB stars (measurements of these element transitions
were attainable as less spectral crowding occurs in these stars). Nonetheless, minimal line-to-line scatter is seen for the bulk of RGB and RHB n-capture abundances.
A rigorous determination of the europium relative abundance was performed for all M15 stars: $<$[Eu$_{II}$/Fe$_{II}$]$>_{RGB} = 0.53$ and
$<$[Eu$_{II}$/Fe$_{II}$]$>_{RHB} = 0.88$. Despite the iron abundance offset, the largest departure between the two stellar groups is found for
the element Ho$_{II}$. Further, the greatest star-to-star scatter in the heavy n-capture elements is seen for the [Yb$_{II}$/Fe$_{II}$] ratio: $\Delta_{RGB} = 0.55 $ and
$\Delta_{RHB} = 0.58$.
\subsection{Comparison with Previous CTG efforts and Otsuki \mbox{et al.}\ (2006)}\label{otsuki}
These new abundance results are now compared to those from the four prior CTG publications. For the majority of elements, the current data are in
accord with the findings of Sneden et al. (1997, 2000a\citep{sne97,sne00a}) and Preston et al. (2006\citep{pre06}). In this
effort, abundance derivations are performed for 13 new species: Sc$_{I}$, V$_{II}$, Cu$_{I}$,
Pr$_{II}$, Tb$_{II}$, Ho$_{II}$, Er$_{II}$, Tm$_{II}$, Yb$_{II}$, Hf$_{II}$, Os$_{I}$, Ir$_{I}$, and Pb$_{II}$. For elements
re-analyzed in the current study, the abundance data have been improved with the use of higher quality atomic data, additional transitions, and a revised version of the
MOOG program. A few large discrepancies in the [El/Fe] ratios do occur between the current study and the previous M15 efforts. These departures can be
attributed to the employment of different [Fe/H] and solar photospheric values as well as the updated MOOG code. Accordingly, the results from the
current analysis supersede those from the earlier CTG papers.
As in Sneden \mbox{et al.}\ 1997\citep{sne97}, the abundance behavior of the proton capture elements appears to be {\it decoupled} from
that of the neutron capture elements. Notably for M15, significant spread in the abundances was confirmed for both Ba and Eu. The scatter
of $\Delta$log$\epsilon(Ba)= 0.48$ and $\Delta$log$\epsilon(Eu)= 0.90$ from the current effort is in line with
that of $\Delta$log$\epsilon(Ba)= 0.60$ and $\Delta$log$\epsilon(Eu)= 0.73$ from Sneden \mbox{et al.}\ (1997)\citep{sne97}.
Comparison of the findings from the current study to those from Otsuki \mbox{et al.}\ (2006)\citep{ots06} has also been done and will be
limited to the only star that the two investigations have in common, K462. Due to differences in the $<$[Fe/H]$>$ values, the
log~$(\epsilon)$ data of the two analyses are compared. The model atmospheric parameters for K462 differ somewhat between the
current effort (\mbox{$T_{\rm eff}$}/\mbox{log~{\it g}}/\mbox{$v_{\rm t}$}\ = 4400/0.30/2.00) and Otsuki \mbox{et al.}\ (\mbox{$T_{\rm eff}$}/\mbox{log~{\it g}}/\mbox{$v_{\rm t}$}\ = 4225/0.50/2.25). However, the agreement
in the abundances for the elements Y, Zr, Ba, La and Eu is rather good between the two studies, with the exact differences ranging:
$0.01 \leq |\Delta(\textrm{Otsuki}-\textrm{Current})| \leq 0.16$. The largest disparity occurs for Sr, with both analyses employing the resonance transitions. As
mentioned previously, these lines are not the most rigorous probes of abundance.
\subsection{General Relationship of Ba, La, and Eu Abundances}
Sneden et al. (1997)\citep{sne97} claimed to have found a binary distribution in a plot of [Ba/Fe] versus [Eu/Fe], with 8 stars exhibiting
relative Ba and Eu abundances approximately 0.35 dex smaller than the remainder of the M15 data set. To re-examine their assertion,
Figure~\ref{Ba_La_Eu} is generated, which plots [(Ba, La)/H] as a function of [Eu/H] for the entire data sample of the current study. It also displays
the re-derived/re-scaled Ba, La, and Eu abundances for all of the giants from the Sneden et al. (1997)\citep{sne97} publication. No decisive offset is
evident in either panel of Figure~\ref{Ba_La_Eu}. For completeness, the equivalent width data from Otsuki et al. (2006)\citep{ots06} were
also re-analyzed and the abundances were re-determined. Again, no bifurcation was detected in the Ba and Eu data.\footnote{To avoid duplication,
the stars from Otsuki et al. are not plotted as they are a subset of the original sample from the Sneden et al. (1997\citep{sne97}) study.}
\section{DATA INTERPRETATION AND ANALYSIS}
A significant amount of \mbox{$r$-process}\ enrichment has occurred in the M15 globular cluster. Figure~\ref{abundance_Z} plots
the average log~$\epsilon$ values of the n-capture elements (with $39 \leq Z \leq 70$) for three RGB stars (K341, K462, K583; signified by red symbols)
and three RHB stars (B009, B224, B262; denoted by blue symbols).\footnote{B028, B412, and B584 are not included in
the figure as these stars lack abundances for most of the elements in the specified Z range.} The solid black line in this figure indicates the scaled, solar
\mbox{$r$-process}\ prediction as computed by Sneden, Cowan, \&\ Gallino (2008)\citep{sne08}. All of the element abundances
are normalized to the individual stellar log~$(\epsilon_{Eu})$ values (Eu is assumed to be an indicator of \mbox{$r$-process}\ contribution). For the n-capture
elements with $Z = 64-72$, the RGB stellar abundance values {\it strongly correlate} with the solar \mbox{$r$-process}\ distribution. Similarly for the RHB stars,
these abundances match well to the solar \mbox{$r$-process}\ pattern for most of the elements in the $Z = 64-72$ range.
Figure~\ref{abundance_Z} also displays the scaled, solar \mbox{$s$-process}\ abundance distribution (green, dotted line). The \mbox{$s$-process}\ predictions are also taken from
Sneden, Cowan, \& Gallino and the values are normalized to the solar log~$(\epsilon_{Ba})$ (Ba is considered to be an indicator of \mbox{$s$-process}\ contribution).
As shown for the $Z = 64-72$ elements, there is {\it virtually no} agreement between the solar \mbox{$s$-process}\ pattern and either the RGB or the RHB stellar abundances. The
\mbox{$s$-process}\ predictions compare well to the RGB abundances for only two elements: Ce and La. Thus, it follows that the nucleosynthesis
of the {\it heavy} neutron capture elements in M15 was dominated by the \mbox{$r$-process}. In addition, the abundance pattern for the light n-capture elements (Sr, Y, Zr)
does not adhere to either a solar \mbox{$r$-process}\ or \mbox{$s$-process}\ distribution.
\subsection{Evidence for Additional Nucleosynthetic Mechanisms Beyond the Classical {\it r-} and \mbox{$s$-process}\ }
To further examine the anomalous light n-capture abundances in the M15 cluster, Figures~\ref{SrYZr_Ba} and \ref{SrYZr_Eu} are generated. For the
stars of the current effort and those from Otsuki et al. (2006)\citep{ots06}, these two plots display the abundances of the n-capture elements (Sr, Y, Zr and La)
as a function of the [Ba/H] and [Eu/H] ratios, respectively. Moreover, the abundance results from five select
field stars which represent extremes in \mbox{$r$-process}\ or \mbox{$s$-process}\ enhancement are plotted (CS 22892-052 [Sneden et al. 2003, 2009\citep{sne03,sne09}]; CS 22964-161
[Thompson et al. 2008]\citep{tho08}; HD 115444 [Westin et al. 2000\citep{wes00}, Sneden et al. 2009\citep{sne09}]; HD 122563 [Cowan et al. 2005\citep{cow05},
Lai et al. 2007\citep{lai07}]; HD 221170 [Ivans et al. 2006\citep{iva06}, Sneden et al. 2009\citep{sne09}]).
In Figure~\ref{SrYZr_Ba}, an anti-correlative trend is seen for Sr, Y and Zr with Ba while no
explicit correlative behavior is apparent for La. The correlation coefficient, $r$, is indicated in each panel. Likewise, La and Eu appear
uncorrelated in Figure \ref{SrYZr_Eu}. The [(Sr, Y, Zr)/Eu] ratios all exhibit anti-correlation with [Eu/H] in this figure. As shown, the elements Sr, Y, and Zr
clearly demonstrate an anti-correlative relationship with {\it both} the markers of the \mbox{$s$-process}\ (Ba) and the \mbox{$r$-process}\ (Eu).
Figures~\ref{SrYZr_Ba} and~\ref{SrYZr_Eu} collectively imply that the production of the light neutron capture elements most likely did not transpire
via the classical forms of the \mbox{$s$-process}\ or the \mbox{$r$-process}. This finding is not novel. The abundance survey of halo field stars by Travaglio et al. (2004)\citep{tra04}
previously established the decoupled behavior of the light n-capture species to both Ba and Eu. Further, they postulated that an additional nucleosynthetic
process was necessary for the production of these elements (Sr, Y, Zr) in metal-deficient regimes (coined the Lighter Element Primary Process; LEPP).
The overabundances of Sr and Zr (see Figure~\ref{abundance_Z}) could have been the result of a small \mbox{$s$-process}\ contribution to
the M15 proto-cluster environment. To investigate this possibility, an abundance determination is performed for Pb, a definitive main \mbox{$s$-process}\ product.
The upper panel of Figure~\ref{Pb_sprocess} illustrates the synthetic spectrum fits to the neutral Pb transition at a wavelength of 4057.8 \AA\
in the M15 giant, K462. Only an upper limit of log~$\epsilon(Pb) \lesssim$~-0.35 could be established for this star. For the remaining two RGB targets,
upper limits were also determined and accordingly for all three, the average values of log~$\epsilon(Pb) \lesssim$~-0.4 and $<$[Pb/Eu]$>$ $\lesssim$~-0.15 were found.
The lower panel of Figure~\ref{Pb_sprocess} plots [Pb/Eu] as a function of [Eu/Fe] for the three M15 RGB stars and the five, previously-employed halo
field stars. In a recent paper, Roederer \mbox{et al.}\ (2010\citep{roe10}) suggest that detections of Pb and enhanced [Pb/Eu] ratios should be strong indicators
of main \mbox{$s$-process}\ nucleosynthesis. In turn, they contend that non-detections of Pb and depleted [Pb/Eu] ratios should signify the absence of nucleosynthetic
input from the main component of the \mbox{$s$-process}\ (see their paper for further discussion). With the abundances of 161 low-metallicity stars
([Fe/H] $<$ -1), Roederer \mbox{et al.}\ empirically determined a threshold value of [Pb/Eu] = +0.3 for minimum AGB contribution. As shown in the figure,
the M15 giants lie below this threshold and accordingly, are likely devoid of main \mbox{$s$-process}\ input. Thus in the case of the M15 globular cluster, the light neutron capture
elements presumably originated from an alternate nucleosynthetic process (\mbox{\it e.g.}\ $\nu-p$ process, Fr{\"o}hlich \mbox{et al.}\ 2006\citep{fro06}; high entropy winds,
Farouqi \mbox{et al.}\ 2009\citep{far09}).
\subsection{M15 Abundances in Relation to the Halo Field}
The upper panel of Figure~\ref{Eu_spread} displays the evolution of the [Mg/Fe] abundance ratio with [Fe/H] for all M15 targets
as well as for a sample of hundreds of field stars. For this figure, halo and disk star data have been taken from these surveys: Fulbright (2000)\citep{ful00},
Reddy et al. (2003)\citep{red03}, Cayrel et al. (2004)\citep{cay04}, Cohen et al. (2004)\citep{coh04}, Simmerer et al. (2004)\citep{sim04},
Barklem et al. (2005)\citep{bar05}, Reddy, Lambert \& Allende Prieto (2006)\citep{red06}, Fran{\c c}ois et al. (2007)\citep{fra07},
and Lai et al. (2008)\citep{lai08}. As shown, the scatter in the [Mg/Fe] abundance ratio is fairly small: $\Delta ($[Mg/Fe]$)_{MAX} \approx 0.6$ dex
for all stars under consideration and $\Delta ($[Mg/Fe]$)_{MAX} \approx 0.1$ dex for the M15 data set. In the metallicity regime
below [Fe/H] $\lesssim$ -1.1, the roughly constant trend of the [Mg/Fe] abundance ratio is due in part to the production history for
these elements: magnesium originates from hydrostatic burning in massive stars while iron is manufactured by massive star, core-collapse SNe.
When the short evolutionary lifetimes of these massive stars are considered together with the abundance data,
it would seem that core-collapse SNe are rather ubiquitous events in the Galactic halo. Accordingly, the
products that result from both stellar and explosive nucleosynthesis of massive stars should be well-mixed in the interstellar and
intercluster medium. The apparent downward trend in the [Mg/Fe] ratio, in the metallicity region with [Fe/H]$\gtrsim$ -1.1,
is due to nucleosynthetic input from Type Ia SNe, which produce much more iron in comparison to Type II events.
In a similar vein, the lower panel of Figure~\ref{Eu_spread} plots [Eu/Fe] as a function of [Fe/H] and demonstrates that as the metallicity decreases, the spread
in the [Eu/Fe] abundance ratio increases enormously\footnote{Though the data sample of Figure~\ref{Eu_spread} is a compilation of several sources, the scatter in the [Mg/Fe]
and [Eu/Fe] ratios duplicates that found by such large scale surveys as, \mbox{\it e.g.}\, Barklem et al. (2005)}. By contrast, the scatter in the M15 [Eu/Fe] ratios is large and
comparable to the spread of the halo field at that metallicity. Specifically in the metallicity interval -2.7 $\leq$ [Fe/H] $\leq$ -2.2, the scatter in the
[Eu/Fe] ratio is found to be $\sigma = \pm 0.27$ for the nine stars of the M15 sample and similarly for the 23 halo giants, the
associated scatter is $\sigma = \pm 0.33$. This variation in the relative europium abundance ratio (as first detected by Gilroy \mbox{et al.}\ 1988\citep{gil88}
and later confirmed by others, e.g. Burris et al. 2000\citep{bur00}, Barklem et al. 2005\citep{bar05}) indicates an inhomogeneous production history for Eu and other
corresponding \mbox{$r$-process}\ elements. These elements likely originate from lower mass SNe and their production is not correlated
with that of the alpha elements (Cowan \& Thielemann 2004\citep{cow04}). Furthermore, it seems that nucleosynthetic events
which generated the \mbox{$r$-process}\ elements were rare occurrences in the early Galaxy. As a consequence, these elements
were not well-mixed in the interstellar and intercluster medium (Sneden et al. 2009\citep{sne09}). Note that \mbox{$r$-process}\ enhancement seems to be a common feature of all
globular clusters (e.g. Gratton, Sneden, \& Carretta 2004\citep{gra04}). On the other hand, the scatter in select \mbox{$r$-process}\ element abundances, as found in M15,
is {\it not}.
\section{SUMMARY}
A novel effort was undertaken to perform a homogeneous abundance determination in {\it both} the RGB and RHB members of the M15 globular cluster. The current
investigation employed improved atomic data, stellar model atmospheres, and radiative transfer code. A distinct offset in the
iron abundance between the RGB and RHB stars on the order of 0.1 dex was measured. Notwithstanding, the major findings of the analysis for {\it both}
the RGB and RHB stellar groups include: a definitive \mbox{$r$-process}\ enhancement; a significant spread in the abundances of the neutron capture species (which appears
to be astrophysical in nature); and, an anti-correlation of light n-capture element abundance behavior with both barium ([Ba/H]) and europium ([Eu/H]). Accordingly,
the last set of findings may offer proof of the operation of a LEPP-type mechanism within M15. To determine if these abundance behaviors are generally
indicative of {\it very} metal-poor globular clusters, a comprehensive examination of the chemical composition of the analogous M92 cluster should be undertaken
([Fe/H] $\sim$ -2.3, Harris et al. 1996\citep{har96}; the literature contains relatively little information with regard to the
n-capture abundances for this cluster).
To date, the presence of multiple stellar generations within the globular cluster M15 has not been irrefutably established. In a series of papers,
Carretta et al. (2009a, 2009c, 2010\citep{car09a,car09c,car10}) offered compelling evidence through the detection of light-element anti-correlative behavior (Na-O)
in numerous members of the M15 RGB. Lardo et al. (2011\citep{lar11}) did find a statistically significant spread in the SDSS photometric
color index of $u-g$, yet they were not able to demonstrate a clear and unambiguous correlation of $(u-g)$ with the Na abundances in the RGB
of the M15 cluster (which would have provided further evidence). In addition, recent investigations of M15 have revealed several atypical features
including: probable detection of an intermediate mass black hole (van der Marel et al. 2002\citep{van02}; though the result is under some dispute);
observation of an intracluster medium (Evans et al. 2003\citep{eva03}); detection of mass loss (Meszaros et al. 2008, 2009\citep{mez08,mez09});
identification of extreme horizontal branch and blue hook stars (Haurberg \mbox{et al.}\ 2010\citep{hau10}); and, observation of an
extended tidal tail (Chun et al. 2010\citep{Chun2010}). It would be worthwhile to examine these peculiar aspects of the globular cluster
in relation to the abundance results of M15. Further scrutiny is warranted in order to understand the star formation history and mixing timescale
of the M15 protocluster environment.
\acknowledgments
We are deeply indebted to L. Koesterke for his extensive advice with regard to the modification of the MOOG code. We are grateful to the referee for several
valuable suggestions. We also thank I. Roederer for helpful comments pertaining to drafts of the manuscript. The current effort has made use of the
NASA Astrophysics Data System (ADS), the NIST Atomic Spectra Database (ASD), and the Vienna Atomic Line Database (VALD). Funding for this research has been
generously provided by the National Science Foundation (grants AST 07-07447 to J.C. and AST 09-08978 to C.S.).
\clearpage
\section{Introduction}
We develop a new class of inferential procedures for testing shape constraints on the regression coefficients in function-on-scalar regression models through a linear operator representation, where functional responses are incompletely observed.
{\color{black} We assume that the functional response $Y_i(t)$ is available for $t \in \mathscr{I}_i$, where $\mathscr{I}_i \subset [0,1]$ is an individual-specific random subset independent of the stochastic mechanism that generates the complete functional response $Y_i$ for $i=1, \ldots, n$.
We allow $\mathscr{I}_i$ to be a union of sub-intervals, a discrete subset, or a composition of the two scenarios.
The functional response fully available on the domain is a special case obtained by simply letting $\mathscr{I}_i = [0,1]$.
Recently, \cite{kraus2015}, \cite{liebl2019partially}, \cite{delaigle2020} and \cite{kneip2020optimal} studied functional data analysis of partially observed curves, including principal component analysis, mean and covariance function estimation, and optimal reconstruction of individual curves, but the hypothesis testing problem has been less developed.
}
For statistical analysis, we assume that the unobservable complete functional response $Y$ is associated with vector covariates $\vX = (X_1,\ldots, X_p)^\top$ and $\vZ = (Z_1,\ldots, Z_q)^\top$ by the function-on-scalar regression model
\begin{equation}
\label{fullmodel}
Y(t)= \mathbf{X}^{\top} \boldsymbol\beta(t) + \mathbf{Z}^{\top} \boldsymbol\alpha(t) + \epsilon(t) \quad (t \in [0,1]),
\end{equation}
where $\boldsymbol\beta(t) = (\beta_1(t), \ldots, \beta_p(t))^\top$ and $\boldsymbol\alpha(t)$ $=$ $(\alpha_1(t),$ $\ldots,$ $\alpha_q(t))^\top$ are square-integrable vector coefficient functions, respectively, and $\epsilon(t)$ is a mean-zero error process with the covariance function $\gamma(s,t) = \Cov\big(\epsilon(s), \epsilon(t)\big)$ independent of $(\vX, \vZ)$.
In the contexts of uncorrelated error processes or longitudinal data, the model \eqref{fullmodel} is also known as a varying coefficient regression model \citep{staniswalis1998nonparametric, hastie1993varying, malfait2003historical, eubank2004smoothing}. More recent developments have extended the theory and practice to functional and spatial varying coefficient models \citep{zhu2012multivariate, zhu2014spatially, li2017functional, pietrosanu2021estimation, zhu2021network}.
{\color{black} We consider testing a class of composite null hypotheses on the functional regression model \eqref{fullmodel} of the form
\begin{equation}
\label{shapespace}
H_0: \boldsymbol{\mathcal{C}} \boldsymbol\beta = \mathbf{0},
\end{equation}
equivalently $H_0: \boldsymbol\beta \in \textrm{ker}(\boldsymbol{\mathcal{C}})$,
where $\boldsymbol{\mathcal{C}}$ is a linear operator that maps vector functions to the function space of inferential interest, and $\textrm{ker}(\boldsymbol{\mathcal{C}})$ is the kernel space of $\boldsymbol{\mathcal{C}}$.
In this study we focus on testing a dual formation of the null hypothesis \eqref{shapespace} expressed by
\begin{equation}
\label{gnull}
H_0: \boldsymbol\beta \in \textrm{span}\{ V(r) \},
\end{equation}
where $V(r) = \{v_{l} \in L^2:l=1, \ldots, r \}$ is an orthonormal basis that specifies the parametric family of $\textrm{ker}(\boldsymbol{\mathcal{C}})$.
It is worth mentioning that \eqref{gnull} is a generalization of the classical linear contrast hypothesis.
For example, let $\mathbf{C} \in \mathbb{R}^{d \times p}$ be of full rank $d$.
The null hypothesis $H_0':\mathbf{C} \boldsymbol\beta(t) = \mathbf{0}$ studied in \cite{zhang2007statistical} identifies $\boldsymbol\beta(t) = \mathbf{u}_0(t) + \sum_{l=1}^d b_l \mathbf{u}_l$ for some vector function $\mathbf{u}_0(t) = (u_{0,1}(t), \ldots, u_{0,p}(t))^\top$ satisfying $\mathbf{C} \mathbf{u}_0(t) = \mathbf{0}$ and $\mathbf{b} = (b_1, \ldots, b_d)^\top \in \mathbb{R}^d$, where $U(d) = \{ \mathbf{u}_l \in \mathbb{R}^p:l=1, \ldots, d\}$ is the orthonormal basis of $\textrm{ker}(\mathbf{C})$ in $\mathbb{R}^p$.
{\color{black} Here we extend the theory from the finite dimensional constraints such as $H_0'$ to the potentially infinite dimensional linear operator constraints in \eqref{shapespace}. }
An important class of the null hypothesis \eqref{shapespace} includes testing the shape of regression functions.
For example, \cite{gromenko2017evaluation} evaluated a physical mechanism for the conjectured linear trend in the Northern hemisphere cooling analysis. \cite{hejblum2015time} also tested linear or (piecewise) cubic time-course variations in gene expression experiments.
The functional trends can be evaluated by the coefficient function associated with the constant covariate $X=1$ and its shape constraint $H_0: \beta \in \textrm{span}\{V\}$, where $V$ is a set of $L^2$-functions or a basis that specifies the functional trend of interest.
In this case, the space of shape-constrained regression functions is expressed by \eqref{shapespace}, where the kernel space of $\mathcal{C}$ is spanned by $V$.}
Previously \cite{ramsay2005} studied similar topics by testing individual probes
in the form of $\langle c, \beta_j\rangle = 0$ for a fixed known $L^2$-function $c$ as a special case of \eqref{shapespace}.
Later, \cite{james2006performing} proposed a residual-based permutation test for performing a hypothesis test on the shape of a mean function, although the large sample properties and the power behaviors of the proposed method were not investigated.
Related work also includes \cite{yang2008hypothesis}, \cite{berkes2009detecting}, \cite{horvath2009two}, \cite{zhang2011statistical}, \cite{bugni2012specification}, \cite{hilgert2013minimax}, \cite{lei2014adaptive}, \cite{shang2015nonparametric}, \cite{staicu2015significance}, \cite{su2017hypothesis}, \cite{li2020inference}, \cite{garcia2021goodness}.
Recently, \cite{cuesta2019goodness} and \cite{chen2020model} developed goodness-of-fit tests for functional models evaluated by empirical processes, and the significance of the family of models against general alternatives is tested by wild bootstrap resampling.
But their applications to statistical inference are mainly aligned with validating functional linear models against a general class of non-structured functional models.
Moreover, the extension of the existing methods to incomplete functional data has not been investigated.
{\color{black} The main contributions of our study are as follows. We extend the goodness-of-fit test to the general testing framework \eqref{gnull}, applicable to models with incomplete functional response data. Our framework includes three scenarios that often occur in practice: (i) partially observed functional responses with random missing segments, where we have access to observations only on an individual-specific sub-interval of the domain while no observation is available on its complement; (ii) functional responses observed with measurement errors at randomly spaced discrete evaluation points asynchronous across subjects; and (iii) a more challenging composite case, where individual curves are discretely observable over random sub-intervals of the domain.
We especially investigate the theoretical property of the composition of the sub-interval censoring and discrete sampling of functional responses, where the proposed test procedure is applicable to a wide class of incomplete functional data. The asymptotic null distribution of the test statistic is also derived together with the consistency of the test with local alternatives $H_{1n}: \boldsymbol\beta = \boldsymbol\beta_0 + n^{-\tau/2} \boldsymbol\Delta$, where $\tau \in [0,1]$, for some $\boldsymbol\beta_0 = (\beta_{0,1}, \ldots, \beta_{0,p})^\top$ and $\boldsymbol\Delta = (\Delta_1, \ldots, \Delta_p)^\top$ satisfying $\boldsymbol{\mathcal{C}}\boldsymbol\beta_0 = \mathbf{0}$ and $\boldsymbol{\mathcal{C}} \boldsymbol\Delta \neq \mathbf{0}$, respectively.}
The methodology and basic theory of the proposed test procedures under incomplete functions responses are developed in Section \ref{sec:main}. In Section \ref{sec:sim}, we present numerical simulations, where the finite sample performance of the proposed test is evaluated in several scenarios. We also illustrate two applications from an obesity prevalence study and an automotive ergonomic experiment in Section \ref{sec:real-data}. Our concluding discussion is in Section \ref{sec:discussion}. Technical details, including the numerical implementation steps and theoretical proofs, are relegated to the Appendix.
\section{Main results} \label{sec:main}
\subsection{Partially sampled functional responses} \label{subsec:partial}
We first formulate the partially observed functional data as proposed in \cite{kraus2015}.
Let $\delta_1, \ldots, \delta_n$ be a random sample of a stochastic process, defined on $[0,1]$,
satisfying the following conditions.
\begin{itemize}
\item [C1:] The latent stochastic processes, $(Y_i,\delta_i):=\{(Y_i(t),\delta_i(t)): t\in [0,1]\}$, for $i = 1,\ldots,n$, are independent and identically distributed on $(\Omega, \mathscr{F},\mathbb{P})$ and jointly $\mathscr{F}$-measurable.
\item [C2:] $b(t) = E( \delta_i(t) )$ is bounded away from zero; i.e., $\inf_{t \in [0,1]}b(t) >0$.
\item [C3:] There are i.i.d. random variables $\boldsymbol{W}_i = (W_{i1}, \ldots, W_{iK}) \in {\mathcal{W}}$, and there is a measurable function $h : [0,1] \times {\mathcal{W}} \to \{0, 1\}$ such that $\delta_i(t) = h (t, \boldsymbol{W}_i)$.
\item[C4:] $Y_i$ and $\delta_i$ are independent for $i =1,\ldots,n$.
\end{itemize}
{\color{black} The partially observed functional responses are defined by $\{ Y_i(t): t \in \mathscr{I}_i, \, i=1, \ldots,n \}$, where $\mathscr{I}_i = \{ t \in [0,1]: \delta_i(t) = 1 \}$ is the individual-specific random subset of $[0,1]$ for $i=1, \ldots, n$.}
Various types of incomplete functional data structures satisfy conditions C1--C4, including dense functional snippets \citep{lin2020a}, fragmented functional data \citep{delaigle2020}, or {\color{black} functional data with single or multiple random missing intervals. More examples can be found in \cite{Park2021}. Although C3 does not allow a sparse irregular sampling scheme, we consider the discretized noisy collection of partial data under the unified framework in Section \ref{subsec:composition}.}
\subsubsection{Estimation of functional regression coefficients and asymptotics}
{
Let $\vY^{\delta}(t) = (Y_1^\delta(t), \ldots, Y_n^\delta(t))^\top$ and $\boldsymbol\epsilon^\delta(t) = (\epsilon_1^\delta(t), \ldots, \epsilon_n^\delta(t))^\top$, where $Y_i^\delta(t) = Y_i(t)$ and $\epsilon_i^\delta(t) = \epsilon_i(t)$ if $\delta_i(t)=1$, and $Y_i^\delta(t) = 0$ and $\epsilon_i^\delta(t) = 0$ otherwise for $t \in [0,1]$. That is, functional values over unobserved segments, $[0,1] \backslash \mathscr{I}_i $, are replaced by zeros.
For an $n \times n$ diagonal matrix $\mathbb{W}(t) = \mbox{diag}\{\delta_i(t) \}_{i=1}^n$, we rewrite the model as
\begin{align}
\mathbf{Y}^\delta(t) - \mathbb{W}(t)\mathbb{X} \boldsymbol\beta(t) = \mathbb{W}(t)\mathbb{Z} \boldsymbol\alpha(t) + \boldsymbol\epsilon^\delta(t)
\label{fullmodel-re}
\end{align}
which leads to the weighted least-squares estimator $ \hat{\boldsymbol\alpha}^w(t; \boldsymbol\beta(t)) = (\mathbb{Z}^\top \mathbb{W}(t)\mathbb{Z})^{-1} \mathbb{Z}^\top \mathbb{W}(t)(\mathbf{Y}^\delta(t) -\mathbb{X} \boldsymbol\beta(t))$ of $\boldsymbol\alpha(t)$,
where $\mathbb{X} = (\mathbf{X}_1, \cdots, \mathbf{X}_n)^\top$ and $\mathbb{Z} = (\mathbf{Z}_1, \cdots, \mathbf{Z}_n)^\top$ denote $(n \times p)$- and $(n \times q)$-design matrices of full rank, respectively.
Substituting $\hat{\boldsymbol\alpha}^w(t; \boldsymbol\beta(t))$ for $\boldsymbol\alpha(t)$ in \eqref{fullmodel-re}, we obtain $(\mathbb{I} - \mathbb{P})\mathbf{Y}^\delta(t) = (\mathbb{I} - \mathbb{P})\mathbb{W}(t)\mathbb{X}\boldsymbol\beta(t) + \boldsymbol\epsilon^\delta(t)$, where $\mathbb{I} = \textrm{diag}(\mathbf{1}_n)$ and $\mathbb{P} = \mathbb{Z}(\mathbb{Z}^\top \mathbb{Z})^{-1} \mathbb{Z}^\top$ are the projection matrices that only depend on $\mathbf{1}_n$ and $\mathbb{Z}$.
It follows that
\begin{equation}
\label{beta_hat}
\hat{\boldsymbol\beta}^w(t)= (\tilde{\mathbb{X}}^\top \mathbb{W}(t) \tilde{\mathbb{X}})^{-1} \tilde{\mathbb{X}}^\top \mathbb{W}(t) \mathbf{Y}^\delta(t)
\end{equation}
is the weighted least-squares estimator of $\boldsymbol\beta(t)$, where $\tilde{\mathbb{X}} = (\mathbb{I} - \mathbb{P}) \mathbb{X}$ is the design matrix orthogonal to $\mathbb{Z}$.
This enables testing the hypothesis for $\boldsymbol\beta(t)$ while the nuisance regression coefficients related to $\mathbb{Z}$ are unspecified.
Indeed, $\hat {\boldsymbol\beta}^w(t)$ represents a pointwise least-squares estimator calculated from the subset of samples whose response information is available at a given location $t$.
It also follows from $\hat{\boldsymbol\alpha}^w(t) = (\mathbb{Z}^\top \mathbb{Z})^{-1} \mathbb{Z}^\top \mathbb{W}(t)(\mathbf{Y}^\delta(t) - \mathbb{X} \hat{\boldsymbol\beta}^w(t))$ that
\begin{equation}
\hat{\boldsymbol\mu}(t)
= \tilde{\mathbb{X}} \hat{\boldsymbol\beta}^w(t) + \mathbb{Z}\hat{\boldsymbol\eta}^w(t) \label{mu-estimate}
\end{equation}
fits $\boldsymbol\mu(t) = E(\mathbf{Y}(t) \,|\, \mathbb{X}, \mathbb{Z})$ in a point-wise manner, where $\hat{\boldsymbol\eta}^w(t) = \hat{\boldsymbol\alpha}^w(t; \hat{\boldsymbol\beta}^w(t)) + (\mathbb{Z}^\top \mathbb{Z})^{-1} \mathbb{Z}^\top \mathbb{X} \hat{\boldsymbol\beta}^w(t)$.
The expression \eqref{mu-estimate} will be used in the next subsection to define the model space.
}
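For concreteness, a minimal numerical sketch of the pointwise estimator \eqref{beta_hat} on a finite grid is given below (written in Python with NumPy; the function and variable names are our own illustrative choices and are not part of the formal development).
\begin{verbatim}
import numpy as np

def pointwise_wls(Yd, delta, Xtil):
    """Pointwise weighted least squares over a grid of G points.

    Yd    : (n, G) responses, zero-filled outside observed segments
    delta : (n, G) 0/1 availability indicators delta_i(t)
    Xtil  : (n, p) design matrix orthogonal to the nuisance design Z
    """
    n, G = Yd.shape
    p = Xtil.shape[1]
    beta = np.full((p, G), np.nan)
    for g in range(G):
        obs = delta[:, g].astype(bool)
        if obs.sum() > p:          # enough curves observed at this t
            Xw = Xtil[obs]
            beta[:, g] = np.linalg.solve(Xw.T @ Xw, Xw.T @ Yd[obs, g])
    return beta  # NaN columns may be filled by interpolation/smoothing
\end{verbatim}
Grid points at which too few curves are observed are left undefined and can be recovered by interpolation under the smoothness of $\boldsymbol\beta(t)$, as noted after Theorem \ref{cor:regression} below.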
\begin{thm}
\label{cor:regression}
Under $tr({\gamma}) < \infty$ and conditions C1--C4,
\begin{equation}
\begin{aligned}
\sqrt{n} \big( \hat{\boldsymbol\beta}^w - {\boldsymbol\beta} \big)
\stackrel{d}{\to} \textrm{GP}_p\big(\mathbf{0}_p, \vartheta \Psi^{-1} \big),
\end{aligned} \label{agp-pbeta0}
\end{equation}
where $\Psi= E(\mathrm{Var}(\vX | \vZ))$ and $\vartheta(s,t) = \gamma(s,t) \upsilon(s,t) \big/ b(s) b(t)$ with $\gamma(s,t) = \Cov(Y(s), Y(t))$, $\upsilon(s,t) = E( \delta(s) \delta(t) )$, and $b(t) = E(\delta_i(t))$, for $s,t \in [0,1]$.
\end{thm}
Theorem \ref{cor:regression} implies that the pointwise estimator ${\hat{\boldsymbol\beta}}^w(t)$ converges uniformly to $\vbeta(t)$ over $t \in [0,1]$ and, moreover, follows an asymptotic Gaussian process with the root-$n$ rate of convergence even under the partial sampling structure. We also note that the condition on the covariance function, $tr({\gamma}) = \int_0^1 \gamma(t,t) \,\mathrm{d}t < \infty$, is commonly adopted in developing asymptotic theories for regression coefficient estimators under fully observed functional responses. In practice, if ${\hat{\boldsymbol\beta}}^w(t)$ is undefined over a certain range of the domain under a
finite sample size, it can be estimated using interpolation or smoothing methods when smoothness and continuity of $\vbeta(t)$ are assumed.
\subsubsection{The test statistic} \label{subsec:test-stat}
To test the appropriateness of the shape-constrained null hypothesis \eqref{shapespace} or equivalent \eqref{gnull}, we compare the model estimates from the unrestricted space $\mathcal{M} = \{ \vmu = \tilde{\mathbb{X}} \vbeta + \mathbb{Z} \veta: \beta_j \in L^2[0,1], \, j=1, \ldots, p \}$ and the reduced space $\mathcal{M}_0 = \{\vmu_0 = \tilde{\mathbb{X}} \vbeta_0 + \mathbb{Z} \veta: \beta_{0,j} \in \textrm{span}\{ V(r) \}, \, j=1, \ldots, p \}$.
To this end, we construct a test statistic which is based on the $L^2$-distance between $\hat \vmu $ and $\hat \vmu_0$ defined by
\begin{equation}
\begin{aligned}
\hat \vmu
&= \argmin_{\boldsymbol{h} \in \mathcal{M}} \int_0^1 \| \mathbb{W}(t) \{\vY^\delta(t) - \boldsymbol{h}(t)\} \|^2 \, \mathrm{d}t,\\
\hat \vmu_0
&= \argmin_{\boldsymbol{h} \in \mathcal{M}_0} \int_0^1 \| \mathbb{W}(t)\{\vY^\delta(t) - \boldsymbol{h}(t)\} \|^2 \, \mathrm{d}t,
\end{aligned}
\end{equation}
where $\|\cdot \|$ denotes the standard $\ell^2$-norm in $\mathbb{R}^n$. The objective functions with the weight matrix $\mathbb{W}(t)$ reflect pointwise optimization under the partially sampled responses.
It can be verified that $\hat\vmu(t) = \tilde{\mathbb{X}} \hat{\boldsymbol\beta}^w(t) + \mathbb{Z} \hat{\boldsymbol\eta}^w(t)$ as in \eqref{mu-estimate}.
Next, we define a linear operator $\mathcal{L}: L^2[0,1] \to \textrm{span}\{V(r)\}$ as
\begin{equation}
\mathcal{L} \beta = \sum_{l=1}^{r} \langle \beta, v_{l} \rangle v_{l} ,
\end{equation}
where $\langle f, g \rangle = \int_{0}^1 f(t) g(t) \, \mathrm{d}t $, and let $\boldsymbol{\mathcal{L}}$ denote the multivariate operator that applies $\mathcal{L}$ in an element-wise fashion. We then get $\hat{\vmu}_0(t) = \tilde{\mathbb{X}} \hat{\vbeta}_0^w(t) + \mathbb{Z} \hat{\boldsymbol\eta}^w(t)$, where $\hat\vbeta_0^w = \boldsymbol{\mathcal{L}}\hat\vbeta^w$. Even though we have partial response information for each observation, the uniformly consistent estimator $\hat{\boldsymbol\beta}^w$ provides the consistent model estimates $\hat\vmu(t)$ and $\hat\vmu_0(t)$ over $t \in [0,1]$. We next use them to propose a test statistic
\begin{equation}
\begin{aligned}
T_n
&= \int_0^1 \| \hat{\boldsymbol\mu}(t) - \hat{\boldsymbol\mu}_0(t) \|^2 \, \mathrm{d}t \\
&= \int_0^1 \big( \hat{\boldsymbol\beta}^w(t) - \hat{\boldsymbol\beta}_0^w(t) \big)^\top \big(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}} \big) \big( \hat{\boldsymbol\beta}^w(t) - \hat{\boldsymbol\beta}_0^w(t) \big)\, \mathrm{d}t.
\end{aligned}\label{test-stat}
\end{equation}
Note that $T_n$ is the integrated squared distance between the model fits obtained under $\mathcal{M}$ and $\mathcal{M}_0$, respectively, and we reject the null hypothesis if $T_n$ is large.
Under the orthogonality between $\tilde{\mathbb{X}}$ and $\mathbb{Z}$, the distance between $\hat \vmu$ and $\hat \vmu_0$ translates into the weighted distance between the two coefficient estimates $\hat \vbeta^w$ and $\hat \vbeta_0^w$.
While similar types of $L^2$-norm based test statistics have been employed in \cite{shen2004f}, \cite{zhang2007statistical}, and \cite{zhang2011statistical} for conventional hypothesis testing, such as testing the nullity of functional coefficients, our study considers a more general scope of the null hypothesis, using linear operator constraints, and of response function sampling, allowing for partially observed functional response data.
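To make the computation explicit, the following minimal sketch evaluates \eqref{test-stat} on an equally spaced grid, assuming the orthonormal basis $V(r)$ is supplied as a matrix of grid evaluations; all names are our illustrative choices.
\begin{verbatim}
import numpy as np

def test_statistic(beta_hat, Xtil, V, dt):
    """L2-type statistic T_n: weighted distance between beta^w and
    its projection onto span{V}.

    beta_hat : (p, G) pointwise estimates on an equally spaced grid
    Xtil     : (n, p) design orthogonal to Z
    V        : (r, G) orthonormal basis of ker(C) on the same grid
    dt       : grid spacing on [0, 1]
    """
    coef = beta_hat @ V.T * dt   # (p, r) inner products <beta_j, v_l>
    beta0 = coef @ V             # (p, G) projection L beta
    D = beta_hat - beta0         # (p, G) residual (I - L) beta
    M = Xtil.T @ Xtil            # (p, p) weighting matrix
    quad = np.sum(D * (M @ D), axis=0)  # D(t)' M D(t) on the grid
    return float(np.sum(quad) * dt)     # rectangle-rule integral
\end{verbatim}
The corresponding critical value is taken from the limiting null distribution, as sketched after Corollary \ref{thm-power} below.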
\subsubsection{Asymptotics and power considerations}
In this section, we derive the limit law of the proposed test statistic $T_n$ under the null and local alternative hypotheses using the asymptotic Gaussianity of $\hat\vbeta^w$ shown in Theorem \ref{cor:regression}. Since a Gaussian process is closed under a linear operator and $\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}}/n$ converges to $\Psi$ in probability, under the null hypothesis \eqref{gnull}, we can derive
\begin{equation}
\begin{aligned}
\big(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}} \big)^{1/2} \big( \hat{\boldsymbol\beta}^w - \hat{\boldsymbol\beta}_0^w \big)
&= \Psi^{1/2} \sqrt{n} \, \boldsymbol{\mathcal{C}} (\hat{\boldsymbol\beta}^w - \boldsymbol\beta_0 \big) + o_P(1)\\
&\stackrel{d}{\to} \textrm{GP}_p\big(\mathbf{0}_p, \tilde\vartheta \mathbb{I}_p \big),
\end{aligned} \label{agp-beta0}
\end{equation}
where $\boldsymbol{\mathcal{C}} = \boldsymbol{\mathcal{I}} - \boldsymbol{\mathcal{L}}$ for $\boldsymbol{\mathcal{I}}$ element-wisely operating the identity map $\mathcal{I}$ and
\begin{equation}
\begin{aligned}
\tilde\vartheta(s,t)
&=\vartheta(s,t)- \sum_{k=1}^r \bigg( \int_0^1 \vartheta(s,t) v_k(s) \, \mathrm{d}s \bigg) v_k(t) \\
&\qquad\qquad- \sum_{l=1}^r \bigg( \int_0^1 \vartheta(s,t) v_l(t) \, \mathrm{d}t \bigg) v_l(s) \\
&\qquad\qquad + \sum_{k=1}^r \sum_{l=1}^r \bigg( \int\int_{[0,1]^2} \vartheta(s,t) v_k(s) v_l(t) \, \mathrm{d}s \mathrm{d}t \bigg) v_k(s)v_l(t).
\end{aligned} \label{gp-var}
\end{equation}
{\color{black}
We then consider sequences of local alternatives of the form
\begin{equation}
\label{local-alternative}
H_{1n}: \boldsymbol\beta = \boldsymbol\beta_0 + n^{-\tau/2} \boldsymbol\Delta,
\end{equation}
where $\tau \in [0, 1]$, and $\boldsymbol\Delta(t) = (\Delta_1(t), \ldots, \Delta_p(t))^\top$ represents a normalized functional deviation from the null hypothesis, independent of $n$.
Then the asymptotic distribution of the test statistic is derived as the theorem below.
\begin{thm} \label{thm-alternative-dist}
Suppose that $tr({\gamma}) < \infty$, and let $\{H_{1n}: n \geq 1 \}$ be a sequence of local alternatives with square-integrable functions $\Delta_j(t)$ in \eqref{local-alternative}. Let $\tilde{\boldsymbol\Delta} = \Psi^{1/2} \, \boldsymbol{\mathcal{C}}\boldsymbol\Delta = (\tilde\Delta_1, \ldots, \tilde\Delta_p)^\top$ and define $\pi_m^2= \sum_{j=1}^p |\langle \tilde\Delta_j, \phi_m \rangle |^2 $, where $\phi_m$, $m=1,2, \ldots$, are eigenfunctions of $\tilde\vartheta(s,t)$. Then, the test statistic $T_n$ converges in distribution to $T_\Delta$, defined as
\begin{equation}
T_\Delta \stackrel{d}{=} \sum_{m=1}^\infty \lambda_m B_m, \label{thm-alternative-dist-eq}
\end{equation}
where $\lambda_m$ are decreasing-ordered eigenvalues of $\tilde\vartheta(s,t)$, corresponding to eigenfunctions $\phi_m$, and $B_m \stackrel{i.i.d.}{\sim} \chi^2_{p}(\kappa_m^2)$ denotes the $p$ degrees of freedom non-central $\chi^2$-distribution with $\kappa_m^2 = \pi_m^2/\lambda_m$.
\end{thm}
Based on Theorem \ref{thm-alternative-dist}, we obtain the null distribution of the test statistic $T_n$ and asymptotic power derivations as follows.
\begin{cor} \label{thm-power} Assume the same conditions as in Theorem \ref{thm-alternative-dist}.
\begin{enumerate}
\item[(i)] Under the null hypothesis, i.e., $\boldsymbol{\mathcal{C}} \boldsymbol\Delta = \mathbf{0}$, Theorem \ref{thm-alternative-dist} implies that the null distribution of the test statistic $T_n$ converges to $T_0$ in distribution, where
$T_0 = \sum_{m=1}^\infty \lambda_m A_m$
with $\lambda_m$, decreasing-ordered eigenvalues of $\tilde\vartheta(s,t)$, and $A_m \stackrel{i.i.d.}{\sim} \chi^2_p$.
\item[(ii)] Suppose that $\boldsymbol{\mathcal{C}} \boldsymbol\Delta \neq \mathbf{0}$, that is, the local alternative holds, and that $\sum_{m=1}^\infty \pi_m^2 = \infty$ or $\tau \in [0,1)$. Then, Theorem \ref{thm-alternative-dist} yields the asymptotic power of the test: $\lim_{n \to \infty} P(T_n \geq t_\alpha | H_{1n}) = 1$, where $t_\alpha$ is the upper-$\alpha$ quantile of the null distribution $T_0$ in case (i).
\end{enumerate}
\end{cor}
}
As shown in the proof of Corollary \ref{thm-power} in the Appendix, the asymptotic power of the test goes to 1 under $H_{1n}$ of \eqref{local-alternative} with any $\tau \in [0, 1)$ and non-zero $\boldsymbol\Delta$, which is desirable. When $\tau = 1$, where the local alternative tends to the null at the root-$n$ rate, the asymptotic power still tends to 1 provided $\sum_{m=1}^\infty \pi_m^2 = \infty$. In the simulation studies of Section \ref{sec:sim}, we consider different magnitudes of null-deviated signals $\pi_m^2$ under $\tau=1$ to investigate the power in practical settings.
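In practice, the critical value $t_\alpha$ of $T_0$ can be approximated by Monte Carlo once estimates of the leading eigenvalues of $\tilde\vartheta$ in \eqref{gp-var} are available, e.g., from the eigendecomposition of a plug-in estimate on a grid. The following sketch truncates the mixture at the supplied eigenvalues; the truncation level and all names are our illustrative choices.
\begin{verbatim}
import numpy as np

def null_critical_value(lam, p, alpha=0.05, n_mc=100000, seed=1):
    """Upper-alpha quantile of T_0 = sum_m lam[m] * A_m, where
    A_m are iid chi-square draws with p degrees of freedom (the
    null distribution above), truncated at M = len(lam) supplied
    eigenvalue estimates."""
    rng = np.random.default_rng(seed)
    A = rng.chisquare(df=p, size=(n_mc, len(lam)))
    T0 = A @ np.asarray(lam)   # Monte Carlo draws of T_0
    return float(np.quantile(T0, 1 - alpha))
\end{verbatim}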
\subsection{Discrete observations with measurement errors} \label{sec:kernel}
In this section, we extend the proposed test to the case where functional responses are observed with measurement errors over finitely many discrete points in their domains, possibly sampled asynchronously across subjects. Unlike the partially observed sampling scheme with continuous measurement over a subset of $[0,1]$, we consider the case where functional measurements are collected on a discrete subset of the functional domain with additive measurement errors. Specifically, let $\{ (\mathbf{Y}_i^\ast, \mathbf{T}_i, \mathbf{X}_i, \mathbf{Z}_i): i=1, \ldots, n\}$ be a random sample of $(\mathbf{Y}^\ast, \mathbf{T}, \mathbf{X}, \mathbf{Z})$, where $\mathbf{Y}_i^\ast = (Y_{i,1}^\ast, \ldots, Y_{i,N_i}^\ast)^\top$ denotes the finite observations of the $i$-th subject associated with evaluation points $\mathbf{T}_i = (T_{i,1}, \ldots, T_{i,N_i})^\top$ as
\begin{equation}
\begin{aligned}
Y_{i,m}^\ast
= \mathbf{X}_i^{\top} \boldsymbol\beta(T_{i,m}) + \mathbf{Z}_i^{\top} \boldsymbol\alpha(T_{i,m}) + \epsilon_i(T_{i,m}) + \varepsilon_{i,m}.
\end{aligned} \label{model-longitudinal}
\end{equation}
The sampling design differs from the ones considered in the previous subsections as functional outcomes are prone to measurement errors, denoted by $\varepsilon_{i,m}$, and only finitely many observations are available. We note that $\varepsilon_{i,m} = 0$ is a special case that follows the same model \eqref{fullmodel}.
For statistical analysis, we assume that $\varepsilon_{i,1}, \ldots, \varepsilon_{i,N_i}$ are i.i.d. as $\varepsilon$ such that $E(\varepsilon | \mathbf{X}, \mathbf{Z}) = 0$ and $E |\varepsilon|^k < \infty$ for some $k > 2$. The finite evaluation points $T_{i,1}, \ldots, T_{i,N_i}$ are randomly generated by a probability density function $\lambda(t)$ bounded away from zero and infinity whose derivative also exists and is bounded. We also assume that $N_1, \ldots, N_n$ are i.i.d. as an independent random integer $N \geq 1$ that asymptotically increases as the sample size $n$ becomes large. This sampling framework is similar to those considered by
\cite{yao2005functional, zhang2013time, petersen2016functional}, and \cite{han2020additive}.
We refer to the theorem and remark below for technical details.
However, it is infeasible to apply the same procedure demonstrated in Section \ref{subsec:partial} because functional responses are only available at discrete evaluation points. Unlike the partially sampled functional responses, we lose the functional continuum in the outcome variables on which the pointwise estimator \eqref{beta_hat}, and hence the test statistic \eqref{test-stat}, is based. More importantly, the magnitude of false signals is not ignorable in the presence of measurement errors. This means that, even with infinitely many evaluation points available, the coefficient function estimates will be biased and consistency may not be achieved. As a result, Corollary \ref{thm-power} may not serve as a reference distribution for testing \eqref{gnull}.
To tackle this bottleneck, we employ kernel smoothing to recover the unobserved functional responses, whereby the false signals produced by measurement errors are mitigated, and substitute the estimated curves for the true functional responses to perform the test demonstrated in Section \ref{subsec:partial}. Formally, as a two-step procedure, we first kernel-smooth the discrete observations for each subject via the Nadaraya-Watson estimator of $E(Y_i(T) \,|\, T = t)$
\begin{equation} \label{kernsmooth}
\tilde{Y}_i^\ast(t) = \frac{\sum_{m=1}^{N_i} K_h(T_{i,m} - t) Y_{i,m}^\ast}{\sum_{m'=1}^{N_i} K_h(T_{i,m'} - t) } \quad (t \in [0,1])
\end{equation}
for each $i=1, \ldots, n$, where $h > 0$ is a bandwidth and $K_h(t) = K(t/h)/h$ is the scaled kernel of a symmetric density function $K$.
Then, we define a kernel-smoothed test statistic as
\begin{equation}
T_n^\ast
= \int_0^1 \big( \tilde{\boldsymbol\beta}^\ast(t) - \tilde{\boldsymbol\beta}^\ast_0(t) \big)^\top \big(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}} \big) \big( \tilde{\boldsymbol\beta}^\ast(t) - \tilde{\boldsymbol\beta}^\ast_0(t) \big)\, \mathrm{d}t,
\label{kernel-test-stat}
\end{equation}
where $\tilde{\boldsymbol{\beta}}^\ast(t) = (\tilde{\mathbb{X}}^{\top} \tilde{\mathbb{X}})^{-1} \tilde{\mathbb{X}}^{\top} \tilde{\mathbf{Y}}^\ast(t)$ and $\tilde{\boldsymbol{\beta}}_0^\ast = \boldsymbol{\mathcal{L}} \tilde{\boldsymbol{\beta}}^\ast$. Finally, we reject the null hypothesis \eqref{gnull} if $T_n^{\ast} > t_{\alpha}$, where $t_{\alpha}$ is the level-$\alpha$ critical value for $T_0$ in Corollary \ref{thm-power}.
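A minimal sketch of the pre-smoothing step \eqref{kernsmooth} is given below, using a Gaussian kernel for illustration; the kernel choice and the names are our own assumptions.
\begin{verbatim}
import numpy as np

def nw_smooth(T_obs, Y_obs, grid, h):
    """Nadaraya-Watson estimate of one response curve on a grid.

    T_obs, Y_obs : (N,) evaluation points in [0,1], noisy responses
    grid         : (G,) output grid; h : bandwidth > 0
    """
    # kernel weights K_h(T_m - t) for all grid points, shape (G, N)
    K = np.exp(-0.5 * ((grid[:, None] - T_obs[None, :]) / h) ** 2)
    return (K @ Y_obs) / K.sum(axis=1)
\end{verbatim}
Each subject is smoothed separately with a common bandwidth, e.g., of the order $n^{-\theta/5}$ as in Theorem \ref{thm-kernel-test-consistent} below, and the resulting curves are plugged into the test of Section \ref{subsec:partial} with $\delta_i \equiv 1$.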
In the pre-smoothing approach, it is critical to recover individual curves with a uniform rate of convergence on the entire domain $[0,1]$ since the kernel-smoothed test statistic $T_n^\ast$ is defined as a weighted $L^2$ norm of $\, \boldsymbol{\mathcal{C}}\tilde{\boldsymbol\beta}^\ast$, while $\tilde{\boldsymbol\beta}^\ast(t)$ is given by a point-wise estimate.
{\color{black} The local constant smoothing, also known as Nadaraya-Watson type estimation, is easy to implement, but it is less preferred when the reconstruction of individual curves is of main interest in functional data analysis because the asymptotic bias near the boundary of the domain may vary with individual smoothing.
However, Theorem \ref{thm-kernel-test-consistent} below shows that we can still attain the consistency of the test procedure with local constant smoothing. }
\begin{thm} \label{thm-kernel-test-consistent}
Assume that $E \|Y\|_\infty^k < \infty$ for some $k > 2$ and $ \max_{1 \leq i \leq n} \| Y_i' \|_\infty$ is bounded in probability. If $h \asymp n^{-\theta/5}$ and $P(N < n^\theta) = o(n^{-1})$ for some $\theta > 5/3$, then $T_n^\ast - T_n = o_P(1)$, where we define $T_n$ in \eqref{test-stat} with $\delta_i = 1$ for all $i=1, \ldots, n$.
\end{thm}
\begin{rmk}
For the $i$-th subject, the optimal rate of the univariate bandwidth for kernel estimation is typically given by $h \asymp N_i^{-1/5}$ \citep{hall1983large, hall1991local}. Since $N_i \geq n^\theta$ for all $1 \leq i \leq n$ with probability tending to $1$ (Lemma \ref{thm-kernel-unif-conv} in the Appendix), the use of a common rate $h \asymp n^{-\theta/5}$ in Theorem \ref{thm-kernel-test-consistent} allows us to employ the existing bandwidth selectors \citep{park1990comparison, jones1996brief}.
{\color{black}
However, we note that the classical pre-smoothing approach, such as that of \cite{ramsay2005}, requires densely observed functional responses over the entire domain for all subjects. In practice, this requirement is implausible when the observations are relatively sparse. In the following subsection, we introduce a broader framework for partially observed functional data to ease this limitation.
}
\subsection{Composition of partial filtering and discrete sampling} \label{subsec:composition}
In Section \ref{subsec:partial}, partially observed data were assumed to be evaluated over continuous subsets of the functional domain. If such data are observed discretely rather than continuously, then the observation framework reduces to that of discretely observed functional response data, and the smoothing approach of Section \ref{sec:kernel} may be applied. In this case, we assume that the complete observations for responses are given by $\{Y_i^\delta: i=1, \ldots, n\}$. For random evaluation points $\mathbf{T}_i = (T_{i,1}, \ldots, T_{i, N_i})^\top$ and the indicator process $\delta_i$, we define a random subset $\mathscr{I}_i^\ast = \{ j : \delta_i(T_{i,j}) = 1, \, j=1, \ldots, N_i \}$. We assume that $\mathbf{T}_i$ and $\delta_i$ are independent.
The corresponding discrete functional observations are given by $\{Y_{i,m}^\ast, T_{i,m}^\ast : m \in \mathscr{I}_i^\ast\}$, where $Y_{i,m}^\ast = Y_i^\delta(T_{i,m}^\ast) + \varepsilon_{i,m}$ and $T_{i,m}^\ast = T_{i,j}$ for some $j=j_m \in \mathscr{I}_i^\ast$, which can be viewed as the discrete sampling of $Y_i$ composed with the partial filtering process $\delta_i$. Then, we define
\begin{equation} \label{kernsmooth-composition}
\tilde{Y}_i^{\ast\ast}(t) = \frac{\sum_{m=1}^{N_i} K_h(T_{i,m}^\ast - t) Y_{i,m}^\ast}{\sum_{m'=1}^{N_i} K_h(T_{i,m'}^\ast - t) } \quad (t \in \mathscr{I}_i)
\end{equation}
for each $i=1, \ldots, n$. Also, we define a kernel-smoothed test statistic as
\begin{equation}
T_n^{\ast\ast}
= \int_0^1 \big( \tilde{\boldsymbol\beta}^{\ast\ast}(t) - \tilde{\boldsymbol\beta}^{\ast\ast}_0(t) \big)^\top \big(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}} \big) \big( \tilde{\boldsymbol\beta}^{\ast\ast}(t) - \tilde{\boldsymbol\beta}^{\ast\ast}_0(t) \big)\, \mathrm{d}t,
\label{kernel-test-stat-composition}
\end{equation}
where $\tilde{\boldsymbol{\beta}}^{\ast\ast}(t) = (\tilde{\mathbb{X}}^{\top} \mathbb{W}(t)\tilde{\mathbb{X}})^{-1} \tilde{\mathbb{X}}^{\top} \mathbb{W}(t)\tilde{\mathbf{Y}}^{\ast\ast}(t)$ and $\tilde{\boldsymbol{\beta}}_0^{\ast\ast} = \boldsymbol{\mathcal{L}} \tilde{\boldsymbol{\beta}}^{\ast\ast}$.
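In this composite case, the same smoother can be applied within each observed segment, with the resulting curve zero-filled outside the segment so that the curve and its availability mask feed into the pointwise weighted estimator of Section \ref{subsec:partial}. A sketch follows, assuming for simplicity that the segment endpoints are known (in practice they may be taken as the range of the observed points); all names are our illustrative choices.
\begin{verbatim}
import numpy as np

def smooth_on_segment(T_obs, Y_obs, grid, h, seg_lo, seg_hi):
    """Smooth one subject's discrete observations on its segment
    [seg_lo, seg_hi]; return the zero-filled curve and the 0/1
    availability mask delta on the grid."""
    K = np.exp(-0.5 * ((grid[:, None] - T_obs[None, :]) / h) ** 2)
    curve = (K @ Y_obs) / K.sum(axis=1)
    delta = ((grid >= seg_lo) & (grid <= seg_hi)).astype(float)
    return curve * delta, delta
\end{verbatim}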
To investigate the theoretical property of the proposed method, we assume that
\begin{align}
E\Bigg\vert \frac{1}{\int_0^1 \delta_i(v) \, \mathrm{d}v} \Bigg\vert^p < \infty
\label{delta-condition1}
\end{align}
for some $p > 2$. Also,
suppose that there exists an absolute constant $C > 0$ satisfying
\begin{align}
\begin{split}
P(\delta_i(s) \neq \delta_i(t)) \leq C|s - t|^p \quad \text{for all } s, t \in [0,1].
\end{split} \label{delta-condition3}
\end{align}
{\color{black} The reciprocal moment condition \eqref{delta-condition1} implies that the length of the random subset, $|\mathscr{I}_i| = \int_0^1 \delta_i(v) \, \mathrm{d}v$, is positive (a.s.). Hence, together with \eqref{delta-condition3}, the composition sampling has discrete observations densely available on each sub-interval, but not necessarily over the entire domain. We also refer to Remark \ref{rmk-condition} below for an equivalent expression of \eqref{delta-condition3}.}
\begin{thm} \label{thm:section3}
Assume the same conditions as Theorem \ref{cor:regression} and Theorem \ref{thm-kernel-test-consistent}. If \eqref{delta-condition1} and \eqref{delta-condition3} hold for some $p > 2$, then $T_n^{\ast\ast} - T_n = o_P(1)$.
\end{thm}
\begin{proof}
We note that
\begin{align}
\begin{split}
P(T_{i,m}^\ast \in A)
&= E\big[P(T_{i,j_m} \in A \,|\, \delta_i(T_{i,j_m}) = 1)\big] \\
&= E\bigg[\frac{P(T_{i,j_m} \in A, \delta_i(T_{i,j_m}) = 1 \,|\, \delta_i)}{P(\delta_i(T_{i,j_m}) = 1 \,|\, \delta_i)}\bigg]\\
&= E\bigg[\frac{\int_ A \delta_i(u) \lambda(u) \, \mathrm{d}u}{\int_0^1 \delta_i(v) \lambda(v) \, \mathrm{d}v }\bigg].
\end{split} \nonumber
\end{align}
The above observation implies that, even if discrete observations are sampled from the random segments of functional responses, the proposed method works for this case if we impose additional conditions on the filtering process $\delta_i$ so that the density of $T_{i,m}^\ast$ given by
\[
\lambda^\ast(t) = E\bigg[\frac{\delta_i(t) \lambda(t)}{\int_0^1 \delta_i(v) \lambda(v) \, \mathrm{d}v }\bigg]
\quad (t \in [0,1])
\]
satisfies the design conditions on the sampling density imposed in Section \ref{sec:kernel}, namely that it is bounded away from zero and infinity with a bounded derivative.
For the boundedness of $\lambda^\ast$, we note that conditions (C1)-(C4) and the assumptions on $\lambda$ imply the uniform lower bound,
\[
\lambda^\ast(t) \ge E\bigg[\frac{\delta_i(t)\lambda(t)}{\Vert \lambda \Vert_\infty}\bigg] = \frac{b(t)\lambda(t)}{\Vert \lambda \Vert_\infty}
\ge \frac{b_0 \lambda_0}{\Vert \lambda\Vert_\infty} > 0,
\]
where $b_0 = \inf_t b(t)$ and $\lambda_0 = \inf_t \lambda(t)$.
Also, \eqref{delta-condition1} gives the uniform upper bound,
\begin{align}
\lambda^\ast(t) \le \frac{\Vert \lambda \Vert_\infty}{\lambda_0}
E\bigg[\frac{1}{\int_0^1 \delta_i(v) dv}\bigg]. \label{lambda-star-bounded}
\end{align}
For the smoothness of $\lambda^\ast$, we note that
\begin{align}
\begin{split}
\frac{\lambda^\ast(s) - \lambda^\ast(t)}{s-t}
&= \frac{1}{s-t}E\bigg[\frac{\delta_i(s) \lambda(s) - \delta_i(t) \lambda(t)}{\int_0^1 \delta_i(v) \lambda(v) \, \mathrm{d}v }\bigg]\\
&= \sum_{j=1}^3 E\bigg[\frac{A_{ij}(s,t)}{\int_0^1 \delta_i(v) \lambda(v) \, \mathrm{d}v }\bigg],
\end{split} \label{lambda-star-diff}
\end{align}
where $A_{i1}(s,t) = \frac{\lambda(s) - \lambda(t)}{s-t} \, \mathbb{I}(s, t \in \mathscr{I}_i)$, $A_{i2}(s,t) = \frac{\lambda(s)}{s-t} \, \mathbb{I}(s\in \mathscr{I}_i, \, t \not\in \mathscr{I}_i)$, and $A_{i3}(s,t) = - \frac{\lambda(t)}{s-t} \, \mathbb{I}(s\not\in \mathscr{I}_i, \, t \in \mathscr{I}_i)$. Obviously, $|A_{i1}(s,t)|$ is bounded (a.s.) since $\lambda$ has a bounded derivative. The moment condition \eqref{delta-condition1} and the dominated convergence theorem give
\begin{align}
\lim_{s \to t} E\bigg[ \frac{A_{i1}(s,t)}{\int_0^1 \delta_i(v) \lambda(v) \, \mathrm{d}v } \bigg] = E\bigg[\frac{\delta_i(t)\lambda'(t) }{\int_0^1 \delta_i(v) \lambda(v) \, \mathrm{d}v } \bigg]. \label{lambda-star-derivative}
\end{align}
To analyze $A_{i2}(s,t)$, using H\"older's inequality with $p^{-1}+q^{-1}=1$ for $p, q >1$, we have
\[
E\Bigg\vert \frac{A_{i2}(s,t)}{\int_0^1 \delta_i(v)\lambda(v)dv}\Bigg\vert \le
\frac{\Vert\lambda\Vert_\infty} {\lambda_0} \left\{ E\bigg\vert \frac{1}{\int_0^1 \delta_i(v) dv} \bigg\vert^p\right\}^{1/p}
\left\{E\bigg\vert \frac{\mathbb{I}(s\in \mathscr{I}_i, \, t \not\in \mathscr{I}_i)}{s-t}\bigg\vert^q\right\}^{1/q}.
\]
Doing the same with $A_{i3}$, we claim that
\begin{align}
\limsup_{s\to t} \frac{P(\delta_i(s) \neq \delta_i(t))}{\vert s-t \vert^q}
= 0 \label{delta-condition2}
\end{align}
Indeed, it follows from \eqref{delta-condition3} that
\begin{align}
\begin{split}
\frac{P(\delta_i(s) \neq \delta_i(t))}{\vert s-t \vert^q}
&\leq 2C |s-t|^{p-q},
\end{split} \nonumber
\end{align}
provided that $p > 2 > \frac{p}{p-1}= q$.
Therefore, combining \eqref{lambda-star-diff}, \eqref{lambda-star-derivative}, and \eqref{delta-condition2}, we conclude that the derivative of $\lambda^\ast$ is given by
\[
(\lambda^\ast)'(t) = E\bigg[\frac{\delta_i(t)\lambda'(t) }{\int_0^1 \delta_i(v) \lambda(v) \, \mathrm{d}v } \bigg].
\]
The boundedness of $(\lambda^\ast)'$ can also be shown similarly as \eqref{lambda-star-bounded}.
\end{proof}
\begin{rmk} \label{rmk-condition}
The condition \eqref{delta-condition3} can also be equivalently understood as
{\color{black}
\begin{align}
\begin{split}
\vert \Gamma(s,t) - b(s)(1-b(t))\vert \leq C|s - t|^p
\end{split} \label{delta-condition3-equiv}
\end{align}
as well as $\vert \Gamma(s,t) - b(t)(1-b(s))\vert \leq C|s - t|^p$,
where $\Gamma(s,t) = \mathrm{Cov}\big(\delta_i(s), \delta_i(t)\big)$ and $\Gamma(t,t)=b(t)(1-b(t))$}. To see this, we note that
\begin{align}
\begin{split}
\Gamma(s,t)
&= \mathrm{Cov}\big(\delta_i(s), \delta_i(t)\big)\\
&= E\big[ \delta_i(s) \delta_i(t) \big] - E\big[ \delta_i(s) \big]E\big[ \delta_i(t) \big]\\
&= P\big(\delta_i(s) = 1,\, \delta_i(t) = 1\big) - b(s)b(t).
\end{split} \nonumber
\end{align}
Similarly, we have
\begin{align}
\begin{split}
\Gamma(s,t)
&= \mathrm{Cov}\big(1-\delta_i(s), 1-\delta_i(t)\big)\\
&= P\big(\delta_i(s) = 0,\, \delta_i(t) = 0\big) - (1-b(s))(1-b(t)).
\end{split} \nonumber
\end{align}
It follows that
\begin{align}
\begin{split}
P\big( \delta_i(s) \neq \delta_i(t) \big)
&= 1 - P\big(\delta_i(s)=0,\, \delta_i(t) = 0\big) - P\big(\delta_i(s)=1, \delta_i(t)=1\big)\\
&= \big\{b(s)(1-b(t)) - \Gamma(s,t)\big\} + \big\{b(t)(1-b(s)) - \Gamma(s,t) \big\}.
\end{split} \nonumber
\end{align}
Indeed,
\begin{align}
\begin{split}
b(s)(1-b(t)) - \Gamma(s,t)
&= b(s)(1-b(t)) - E\big[ \delta_i(s)\delta_i(t) \big] + b(s)b(t) \\
&= P\big( \delta_i(s) = 1 \big) - P\big( \delta_i(s) = 1,\, \delta_i(t) = 1 \big)\\
&= P\big( \delta_i(s) = 1,\, \delta_i(t) = 0 \big).
\end{split} \nonumber
\end{align}
Similarly, we have
\begin{align}
\begin{split}
b(t)(1-b(s)) - \Gamma(s,t)
&= P\big( \delta_i(t) = 1 \big) - P\big( \delta_i(s) = 1,\, \delta_i(t) = 1 \big)\\
&= P\big( \delta_i(s) = 0,\, \delta_i(t) = 1 \big).
\end{split} \nonumber
\end{align}
Therefore, \eqref{delta-condition3} and \eqref{delta-condition3-equiv} are equivalent because
\begin{align}
\begin{split}
P(\delta_i(s) \neq \delta_i(t))
&= b(s)(1-b(t))+b(t)(1-b(s))-2\Gamma(s,t).
\end{split} \nonumber
\end{align}
\end{rmk}
\begin{rmk}\label{rmk:delta}
We provide one simple example of $\delta$ that satisfies the conditions \eqref{delta-condition1} and \eqref{delta-condition3}.
Suppose that $U_{(1)} < \cdots < U_{(2p+k+2)}$ are the order statistics of a $\mathrm{Uniform}(0,1)$ random sample of size $2p+k+2$ for some $k \geq p = 3$. Let $\delta(t) = \mathbb{I}(U_{(p+1)} \leq t \leq U_{(p+k+2)})$. Since $S = U_{(p+k+2)} - U_{(p+1)}$ has a $\mathrm{Beta}(k+1, 2p+2)$ distribution, the condition \eqref{delta-condition1} holds, i.e.,
\[
E\Bigg\vert \frac{1}{\int_0^1 \delta(v) dv} \Bigg\vert^p = E|1/S^p| = \frac{(k-p)! \, (k+2p+2)!}{k! \, (k+p+2)!
} < \infty.
\]
To verify the condition \eqref{delta-condition3}, let $s < t$ without loss of generality. We note that
\begin{align}
\begin{split}
P\big( \delta(s) = 1,\, \delta(t) = 0 \big)
&= P(U_{(p+1)} \leq s \leq U_{(p+k+2)} < t)\\
&= C_{p,k} \int_s^t \int_0^s u^3 (v-u)^k (1-v)^3 \, \mathrm{d}u \mathrm{d}v,
\end{split} \nonumber
\end{align}
where $C_{p,k} = \frac{(2p+k+2)!}{p! \, k! \, p!}$.
For $g(s,t) = \int_s^t \int_0^s u^3 (v-u)^k (1-v)^3 \, \mathrm{d}u \mathrm{d}v$ satisfying $g(s,s) = 0$, the Leibniz rule and integration by parts give
\begin{align}
\begin{split}
g^{(0,1)}(s,t)
= \frac{\partial}{\partial t} g(s,t)
&= (1-t)^3 \int_0^s u^3 (t-u)^k \, \mathrm{d}u\\
&= (1-t)^3 \sum_{\ell=0}^3 c_\ell s^{3-\ell} (t-s)^{k+ 1 + \ell}
\end{split}
\end{align}
for some non-zero constants $c_0, \ldots, c_3$. Therefore, it follows from the mean value theorem that
\begin{align}
\begin{split}
P\big( \delta(s) = 1,\, \delta(t) = 0 \big)
&\leq C_{p,k}\big| g(s,t) - g(s,s) \big|\\
&\leq C_{p,k} |s-t| \sup_{u \in [s,t]}\big| g^{(0,1)}(s,u) \big| \\
&\leq C_{p,k}^\ast |s-t|^{k+2},
\end{split}
\end{align}
where $C_{p,k}^\ast = 4\,C_{p,k} \max_{\ell} |c_\ell|$. The case of $P\big( \delta(s) = 0,\, \delta(t) = 1 \big)$ can be verified similarly, which gives the condition \eqref{delta-condition3}.
\end{rmk}
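As an informal numerical check of the moment identity above, the following Python sketch (our own illustration, not part of the derivation) compares a Monte Carlo estimate of $E|1/S^p|$ with the closed-form factorial expression; the choice $k=8$ is an assumption made here so that the Monte Carlo variance is finite.
\begin{verbatim}
# Monte Carlo check of E|1/S^p| for S ~ Beta(k+1, 2p+2) against the
# closed-form factorial expression; illustrative sketch only.
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
p, k = 3, 8  # k >= 2p is assumed so that E|1/S^{2p}| is also finite

closed_form = (factorial(k - p) * factorial(k + 2 * p + 2)) / (
    factorial(k) * factorial(k + p + 2))        # equals 10.0 for p=3, k=8

S = rng.beta(k + 1, 2 * p + 2, size=1_000_000)  # S = U_(p+k+2) - U_(p+1)
print(closed_form, np.mean(1.0 / S ** p))       # the two values agree
\end{verbatim}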
\section{Simulation studies}\label{sec:sim}
In this section, we study the finite sample performance of the proposed testing procedure in terms of size control and powers under various settings. The performances under incomplete functional response models are compared to the benchmark performance, where functional responses are fully observed without measurement errors.
\subsection{Simulation setting}
We first generate the fully observed response $Y_i$ from the model
\begin{equation}\label{simY}
Y_i(t) = \mathbf{X}_i^\top \boldsymbol\beta(t) + \mathbf{Z}_i^\top \boldsymbol\alpha(t) + \epsilon_i(t), \quad (t \in [0,1])
\end{equation}
for $i=1,\ldots, n$, where the covariates $\mathbf{X}_i = (1_{\{U_{i1} > 0\}}, \Phi(U_{i2}), U_{i3})^\top$ and $\mathbf{Z}_i= (1, U_{i4})^\top$ are obtained from $\mathbf{U}_i \stackrel{i.i.d.}{\sim} N_4(\mathbf{0}, \Sigma)$ with $\Sigma = [\sigma_{ij}]_{1\leq i,j \leq 4}$ for $\sigma_{ij}= 0.5^{|i - j|}$, and $\Phi$ denoting the cdf of $N(0,1)$. The functional coefficients $\boldsymbol\alpha(t) $ $=$ $\{\alpha_1(t),$ $\alpha_2(t) \}^\top$ associated with $\mathbf{Z}_i$ are generated by $\alpha_k(t) = \sum_{l=4}^{5} ( k+l)^{-1/2} (-1)^{l}v_l(t) \big/ \{\sum_{l=4}^{5} (k+l)^{-1}\}^{1/2}$ for $k=1, 2$, where $V(5) = \{ v_l(t);~t \in [0,1]\}_{l=1}^5$ is a set of orthonormal polynomial bases derived from the polynomials $P(5) = \{ t^{l-1}; ~t\in [0,1]\}_{l=1}^5$; that is, $\alpha_k \in \textrm{span}\{V(5)\}$ satisfying $\| \alpha_k \|_2 = 1$. Random errors are independently and identically generated from $\epsilon_i(t) = \sum_{m=1}^{100} e_m \phi_m(t)$, where $\phi_m(t) = \sqrt{2}\sin(2m\pi t)$ and $e_m \stackrel{i.i.d.}{\sim} N(0, 4 m^{-4})$, for $m=1, \ldots, 100$. Functional trajectories are generated at a regular grid of 100 points in $[0, 1]$, and the sample size $n$ is chosen to be 100 and 200.
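For concreteness, a minimal Python sketch of this data-generating step is given below; the QR-based Gram-Schmidt construction and all variable names are our own illustrative choices, not code from the study.
\begin{verbatim}
# Sketch of the response generation in the simulation model; illustrative only.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n, n_grid = 100, 100
t = np.linspace(0, 1, n_grid)

# covariates from U ~ N_4(0, Sigma), Sigma_ij = 0.5^{|i-j|}
Sigma = 0.5 ** np.abs(np.subtract.outer(np.arange(4), np.arange(4)))
U = rng.multivariate_normal(np.zeros(4), Sigma, size=n)
X = np.column_stack([(U[:, 0] > 0).astype(float), norm.cdf(U[:, 1]), U[:, 2]])
Z = np.column_stack([np.ones(n), U[:, 3]])

# orthonormal polynomial basis V(5) from {1, t, ..., t^4} via QR;
# scaled so that (1/n_grid) * sum v_l(t_m)^2 = 1, approximating ||v_l||_2 = 1
V = np.linalg.qr(np.vander(t, 5, increasing=True))[0] * np.sqrt(n_grid)

# alpha_k and beta_{0,j} built from V as described in the text
alpha = np.stack([
    sum((k + l) ** -0.5 * (-1) ** l * V[:, l - 1] for l in (4, 5))
    / np.sqrt(sum(1.0 / (k + l) for l in (4, 5))) for k in (1, 2)])
beta0 = np.stack([(V[:, 0] + V[:, j]) / np.sqrt(2) for j in (1, 2, 3)])

# errors eps_i(t) = sum_m e_m phi_m(t), e_m ~ N(0, 4 m^{-4})
m = np.arange(1, 101)
phi = np.sqrt(2) * np.sin(2 * np.pi * np.outer(m, t))   # (100, n_grid)
eps = rng.normal(0.0, 2.0 * m ** -2.0, size=(n, 100)) @ phi

Y = X @ beta0 + Z @ alpha + eps   # fully observed responses under the null
\end{verbatim}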
Let $\vbeta_0(t)=\{\beta_{0,1}(t),\beta_{0,2}(t),\beta_{0,3}(t) \}^\top$, where $\beta_{0,j}(t) = \{v_1(t) + v_{j+1}(t)\}/\sqrt{2}$, for $j=1,2,3$, implying that $\beta_{0,j} \in \textrm{span}\{V(4)\}$ and $\|\beta_{0,j}\|_2=1$. We then consider two scenarios A and B on $\vbeta(t)$. In scenario A, we set $\vbeta(t) = \vbeta_0(t) + n^{-\tau/2} \{d_A \vdelta_A(t)\}$, where $d_A>0$ and $\vdelta_A(t)=\{\delta_{A,1}(t),\delta_{A,2}(t), \delta_{A,3}(t)\}^\top$ with $\delta_{A,j}(t) = \sum_{m=1}^{100} (j+m)^{-1/2} \phi_m(t) \big / \{\sum_{m=1}^{100} (j+m)^{-1}\}^{1/2}$. We then consider testing the null hypothesis
\begin{equation}\label{hyp:sim}
H_0: \beta_j \in \textrm{span}\{V(4)\}, \quad \forall j=1,2,3.
\end{equation}
It aims to find statistical evidence on whether the coefficients $\beta_j(t)$ can be expressed exclusively by polynomials up to order three. We investigate the empirical size and power of the proposed method under different magnitudes of the null-deviated signals by setting $d_A=0,1,3,5,7,9$. For each $d_A$, we further set $\tau=1, 0.8, 0.67$, corresponding to local alternatives approaching the null at rates $n^{-1/2}$, $n^{-1/2.5}$, and $n^{-1/3}$, respectively, to examine the performance under different rates at which the null-deviated model tends to the null model.
In scenario B, we consider a test for the same hypothesis \eqref{hyp:sim} under $\vbeta(t) = \vbeta_0(t) + n^{-\tau/2} \{d_B \vdelta_B(t)\}$, where $\vdelta_B(t) = \{\delta_{B,1}(t),\delta_{B,2}(t), \delta_{B,3}(t)\}^\top$ with $\delta_{B,j}(t) =v_5(t)$. We set $d_B=0, 0.3, 0.6, 0.9, 1.2, 1.5$, and $\tau=1, 0.8, 0.67$. Figure \ref{fig:simalt} illustrates the deviations of $\vbeta(t)$ from $\vbeta_0(t)$ under the two scenarios for $d_A=3$ and $d_B=0.6$, respectively, when $\tau=1$ and $n=100$.
\begin{figure}[t!]
\centering
\includegraphics[width=4.8in]{sim-delta.png}
\caption{Regression coefficients under scenario A, $\beta_j(t) = \beta_{0,j}(t) + n^{-1/2} \{d_A\delta_{A,j}(t)\}$, and under scenario B, $\beta_j(t) = \beta_{0,j}(t) + n^{-1/2} \{d_B\delta_{B,j}(t)\}$, for (a) $j=1$, (b) $j=2$, and (c) $j=3$, under $n=100$, $d_A=3$, and $d_B=0.6$. The straight lines in each plot represent $\beta_{0,j}(t)$, $j=1,2,3$, respectively.}
\label{fig:simalt}
\end{figure}
{\color{black}
For each scenario, we apply three incomplete sampling schemes. First, we consider partially observed functional responses with a random missing period $M_i$, on which functional values of the $i$th trajectory are removed. Following a part of the setting in Remark \ref{rmk:delta}, we generate $M_i = [U_{(p+1)}, U_{(p+k+2)}]$, where $U_{(1)} < \cdots < U_{(2p+k+2)}$ are the order statistics of an independent random sample of size $2p+k+2$ from $\mathrm{Uniform}(0,1)$. We note that $1-\delta(t)$ in Remark \ref{rmk:delta} serves as our indicator process, where employing the reversed indicator process does not affect the remarked conclusion. We here set the constants as $p=k=3$. On average, for each simulation set, 30.4\% of each trajectory is removed by the missing interval $M_i$. Second, we consider functional responses irregularly collected over 80 asynchronous grid points with i.i.d. measurement errors following $N(0, 0.5^2)$ added to each $Y_i(T_{i,m})$, $m=1, \ldots, 80$. The locations of the 80 grid points are uniformly sampled among the 100 grids of each observation. Lastly, we consider partially observed noisy responses collected over irregular grids under the setting in Remark \ref{rmk:delta} with the reversed indicator process specified above. That is, $M_i = [U_{(p+1)}, U_{(p+k+2)}]$, $N_i=60$, and i.i.d. additive measurement errors generated from $N(0, 0.5^2)$. Here, the locations of the 60 grid points are uniformly sampled among the available grids on partially sampled trajectories when more than 60 grids remain on the filtered set. Figure \ref{fig:Y_example} (a) illustrates a randomly selected set of fully observed response trajectories; the three other sets of trajectories in Figure \ref{fig:Y_example} (b), (c), and (d) display partially observed response trajectories filtered by missing random intervals, noisy responses generated over irregular grid points with additive measurement errors, and noisy partially observed responses over irregular grids with additive measurement errors, respectively.
}
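The three incomplete sampling schemes can be mimicked along the following lines; the Python sketch below is self-contained with a placeholder response matrix and is illustrative only.
\begin{verbatim}
# Sketch of the incomplete sampling schemes; illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n, n_grid = 100, 100
t = np.linspace(0, 1, n_grid)
Y = rng.normal(size=(n, n_grid))   # placeholder for simulated responses
p, k = 3, 3

def missing_interval():
    # M_i = [U_(p+1), U_(p+k+2)] from 2p+k+2 uniform order statistics
    u = np.sort(rng.uniform(size=2 * p + k + 2))
    return u[p], u[p + k + 1]      # 0-based indices of U_(p+1), U_(p+k+2)

# (i) partially observed: mask grid values inside the missing interval
delta = np.ones((n, n_grid), dtype=bool)
for i in range(n):
    lo, hi = missing_interval()
    delta[i] = ~((t >= lo) & (t <= hi))

# (ii) irregular noisy design: 80 random grid points + N(0, 0.5^2) errors
idx = [np.sort(rng.choice(n_grid, size=80, replace=False)) for _ in range(n)]
Y_star = [Y[i, j] + rng.normal(0, 0.5, size=80) for i, j in enumerate(idx)]
\end{verbatim}
The third scheme combines the two steps above, applying the additive noise and subsampling to the grids remaining after the missing interval is removed.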
\begin{figure}[t!]
\centering
\includegraphics[width=4.8in]{sim_Y_example_V5.png}
\caption{Randomly selected six simulated trajectories of (a) fully observed response data, (b) partially observed response data filtered by independent missing intervals, (c) irregularly observed data with additive measurement errors, and {\color{black} (d) partially observed data over irregular grids with additive measurement errors.}}
\label{fig:Y_example}
\end{figure}
\subsection{Empirical size and power}
{\color{black}
We examine the empirical sizes and powers of the proposed procedures for fully observed, partially observed, irregularly observed error-prone, and partially observed irregular error-prone functional response data using the corresponding test statistics, denoted as $T_n^{\textrm{Full}}$, $T_n$, $T_n^\ast$, and $T_n^{\ast\ast}$, respectively.} Practical implementation steps for each test statistic are provided in the Appendix. All simulation results below were based on 5,000 simulation replicates, and the critical value of the test was estimated by 5,000 bootstrap samples in each simulation run. To calculate the test statistics $T_n^\ast$ and $T_n^{\ast\ast}$, which involve kernel smoothing, we chose a common bandwidth minimizing the leave-one-out cross-validation criterion \citep{wong1983consistency, hardle1985optimal} across all subjects in each simulation sample.
\begin{sidewaystable}
\caption{Empirical size and power at the $5\%$ nominal level for testing $H_0: \beta_j(t) \in \mbox{span}\{ V(4) \}$ under scenario A from fully observed response data ($T_n^{\textrm{Full}}$), partially observed response data ($T_n$), irregularly observed response data with additive measurement errors ($T_n^\ast$), and irregularly observed partial response data with additive measurement errors ($T_n^{\ast\ast}$).
}
\centering
\vspace{2mm}
\begin{tabular} {cccccccccccccc}
\hline
$d_A$ & $n$ & \multicolumn{4}{c}{$\tau=1$} & \multicolumn{4}{c}{$\tau=0.8$} & \multicolumn{4}{c}{$\tau=0.67$}\\
\cmidrule(lr){3-6} \cmidrule(lr){7-10} \cmidrule(lr){11-14}\\ [-0.1in]
& & $T_n^{\textrm{Full}}$ & $T_n$ & $T_n^\ast$ & $T_n^{\ast\ast}$ & $T_n^{\textrm{Full}}$ & $T_n$ & $T_n^\ast$ & $T_n^{\ast\ast}$ & $T_n^{\textrm{Full}}$ & $T_n$ & $T_n^\ast$ & $T_n^{\ast\ast}$ \\
\hline
\multirow{2}{*}{0}
& 100 & 0.060 & 0.068 & 0.061 & 0.072 & 0.065 & 0.060 & 0.064 & 0.072 & 0.061 & 0.050 & 0.066 & 0.071 \\
& 200 & 0.055 & 0.052 & 0.057 & 0.073 & 0.053 & 0.057 & 0.057 & 0.073 & 0.050 & 0.062 & 0.050 & 0.071 \\
\hline
\multirow{2}{*}{1}
& 100 & 0.075 & 0.067 & 0.067 & 0.070 & 0.082 & 0.081 & 0.076 & 0.074 & 0.107 & 0.085 & 0.096 & 0.078 \\
& 200 & 0.065 & 0.065 & 0.063 & 0.084 & 0.081 & 0.085 & 0.075 & 0.078 & 0.112 & 0.103 & 0.094 & 0.077\\
\hline
\multirow{2}{*}{3}
& 100 & 0.153 & 0.101 & 0.118 & 0.086 & 0.354 & 0.288 & 0.210 & 0.095 & 0.679 & 0.529 & 0.480 & 0.181 \\
& 200 & 0.140 & 0.124 & 0.098 & 0.101 & 0.404 & 0.314 & 0.222 & 0.124 & 0.815 & 0.576 & 0.597 & 0.255\\
\hline
\multirow{2}{*}{5}
& 100 & 0.384 & 0.230 & 0.223 & 0.108 & 0.900 & 0.670 & 0.555 & 0.263 & 1.000 & 0.913 & 0.881 & 0.492\\
& 200 & 0.398 & 0.258 & 0.207 & 0.111 & 0.958 & 0.752 & 0.634 & 0.349 & 1.000 & 0.996 & 0.900 & 0.608 \\
\hline
\multirow{2}{*}{7}
& 100 & 0.789 & 0.474 & 0.443 & 0.152 & 0.999 & 0.951 & 0.902 & 0.510 & 1.000 & 1.000 & 0.999 & 0.916 \\
& 200 & 0.775 & 0.504 & 0.417 & 0.172 & 1.000 & 0.996 & 0.956 & 0.564 & 1.000 & 1.000 & 1.000 & 1.000\\
\hline
\multirow{2}{*}{9}
& 100 & 0.977 & 0.809 & 0.700 & 0.458 & 1.000 & 1.000 & 0.995 & 0.808 & 1.000 & 1.000 & 1.000 & 0.999 \\
& 200 & 0.985 & 0.833 & 0.728 & 0.480 & 1.000 & 1.000 & 0.999 & 0.964 & 1.000 & 1.000 & 1.000 & 1.000\\
\hline
\label{table:simA}
\end{tabular}
\end{sidewaystable}
Table \ref{table:simA} summarizes the results for hypothesis \eqref{hyp:sim} at the 5\% nominal level under scenario A, based on the test statistics for the corresponding functional response data structures, for $\tau= 1, 0.8, 0.67$. It can be seen that the empirical sizes are reasonably controlled around the nominal level 0.05. The sizes under the error-prone partially observed structure, corresponding to the test statistic $T_n^{\ast \ast}$, show slightly larger values around 0.07, which is due to the loss of original information caused by missing intervals and additive noise.
In terms of power, we investigate the results depending on $\tau$, which regulates the rate at which the null-deviated model approaches the null model. As expected, the empirical power increases as $\tau$ decreases or as $d_A$ increases. In addition, the power reasonably approaches 1 under all settings. Especially for $\tau=1$ of $T_n^{\textrm{Full}}$, the power approaches 1 even with moderate magnitudes of the null-deviated signals, indicating that the condition $\sum_{m=1}^\infty \pi_m^2 = \infty$ in Theorem \ref{thm-power} is not restrictive in practical applications. The relatively deflated powers from $T_n^\ast$ might be due to some loss of the null-deviated signal after applying the smoothing process to noisy data. We observe that the power from $T_n$ goes to 1 at a reasonable but slightly slower rate than that of $T_n^{\textrm{Full}}$, which is due to the smaller effective sample sizes at each grid under partial sampling. Although the lowest powers are observed from $T_n^{\ast \ast}$ under all settings, due to the most significant loss of original information with missing periods and noisy discretized measurements, we still see the power gradually increase towards 1. {\color{black}Indeed, our extra simulations considering larger values of $d_A$ show that the powers under $\tau=1$ from $T_n^\ast$ and $T_n^{\ast\ast}$ become 1 when $d_A=13$ and 17, respectively. }
The simulation results from scenario B are illustrated in Figure \ref{fig:simB}. The results under $n=100$ and $n=200$ are represented by solid and dotted lines, respectively. We observe a very similar pattern to that under scenario A, with reasonable size control at the 0.05 nominal level and comparable behavior of the power for $d_B>0$. We again confirm that the power approaches 1 when $\tau=1$ under moderate magnitudes of the null-deviated signals. The power tends to 1 at relatively slower but reasonable rates as $d_B$ increases for $T_n$ and $T_n^{\ast}$, for the same reasons described in the results from scenario A. {\color{black} Again, although the lowest powers are observed from $T_n^{\ast \ast}$ under $d_B >0$, they gradually approach 1. We note that extra simulations considering larger values of $d_B$ show that the powers under $\tau=1$ for $T_n^\ast$ and $T_n^{\ast\ast}$ reach 1 when $d_B=2.1$ and 3, respectively. This empirically supports the consistency of our proposed tests.}
{
Although we have only illustrated the simulation results investigating the finite sample performance of $T_n^\ast$ with $N_i=80$ in Table \ref{table:simA} and Figure \ref{fig:simB}, we observed that the power and size of the proposed test are also well achieved with $N_i=60$, where relatively rich response information is available over the domain. However, under the sparse settings $N_i=10$ or $30$, we observed relatively unsatisfactory results in the finite sample analysis. We note that, in our simulation setting, the null-deviated signals visualized in Figure \ref{fig:simalt} are quite subtle, with a delicate difference in the trend and visually detectable discrepancies only at the boundaries. Hence, we report a limitation of the proposed method for $T_n^\ast$: the estimated regression coefficients calculated from noisy functional responses collected over sparse grids may not be able to effectively detect subtle trend differences near the boundary.
}
\begin{figure}[t!]
\centering
\includegraphics[width=4.8in]{sim_power_res_B_V4.png}
\caption{ Empirical size and power at the $5\%$ nominal level for testing $H_0: \beta_j(t) \in \mbox{span}\{ V(4) \}$ under scenario B for (a) fully observed response data, (b) partially observed response data, (c) irregularly observed functional data with additive measurement errors, {\color{black} and (d) irregularly observed partial functional data with additive measurement errors} ($\blacksquare,\tau=1$; $\CIRCLE,\tau=0.8$; $\blacktriangle,\tau=0.67$; $\rule[0.5ex]{1cm}{0.8pt}$, $n=100$; \hdashrule[0.5ex]{1cm}{1pt}{1pt}, $n=200$).} \label{fig:simB}
\end{figure}
\section{Real data application} \label{sec:real-data}
\subsection{The obesity prevalence trend change} \label{subsec:data1}
We illustrate the practical application of the proposed testing procedure through an analysis of U.S. overweight and obesity prevalence data from 2011 to 2020. It is part of a survey of U.S. residents regarding their health-related risk behaviors and chronic health conditions, collected by the Behavioral Risk Factor Surveillance System (BRFSS) through state-based telephone interviews in cooperation with the Centers for Disease Control and Prevention (CDC). The dataset consists of percentages ($\%$) of adults aged 20 and over with the weight status of obese, overweight, normal weight, and underweight from 50 states. Along with weight status, socioeconomic status is also measured through the educational and income levels of the samples. In terms of income, each survey sample is classified into one of five categories: less than \$15,000, \$15,000-\$24,999, \$25,000-\$34,999, \$35,000-\$49,999, and over \$50,000. The full dataset can be found at: https://chronicdata.cdc.gov/Behavioral-Risk-Factors/Behavioral-Risk-Factor-Surveillance-System-BRFSS-P/dttw-5yxu.
\begin{figure}[t!]
\centering
\includegraphics[width=4.8in]{obese_figure.png}
\caption{Percentages (\%) of (a) obese, (b) overweight, and (c) normal weight adults among U.S. adults aged 20 and over from 2011 to 2020, from 50 states (gray lines), with sample means (solid lines).}
\label{fig:obese}
\end{figure}
Despite growing recognition of the problem, the obesity epidemic continues in the U.S. with steadily rising obesity rates. For example, from 1999-2000 through 2017-2018, U.S. obesity prevalence increased from 30.5\% to 42.4\%. Figure \ref{fig:obese} illustrates such trends over the recent 10 years from 50 states for the obese, overweight, and normal weight groups. The bold lines represent the sample mean trajectories of each group, whose calculation is specified later with the model specification \eqref{intmodel}. With the rising obesity rates, we observe decreasing proportions of the normal weight population along with seemingly constant rates of the overweight population. We apply the proposed methods to identify the shape of the prevalence trend for each weight status group. Furthermore, changes during this period in the obesity prevalence gap between low- and high-income groups are also examined.
We first investigate the shape of the overall prevalence trend for each weight group. Since the data are collected over regular grids for all states with a few missing values, we adopt the test statistic $T_n$ for partially observed functional data. Let $Y_i(t_m)$ denote the observed prevalence rate of a given weight status group from the $i$th state. We formulate the intercept-only model with $\vZ = 0$ in \eqref{fullmodel}, rescaling the discrete time points $2011, \ldots, 2020$ to equally spaced $t_m \in [0,1]$,
\begin{equation} \label{intmodel}
Y_i(t_m) = \beta(t_m) + \epsilon_i(t_m),\quad i=1,\ldots,50, \quad m = 1,\ldots, 10.
\end{equation}
Based on this, we obtain the least squares estimates $\hat\beta(t_m)= \sum_{i=1}^{50} {Y_i(t_m)}/50$ as the sample trajectories of each weight group, illustrated with bold lines in Figure \ref{fig:obese}. To identify the shape, we consider the null hypotheses of constant and linear spaces, corresponding to $H_{0,c}: \beta(t) \in \textrm{span}\{V(1)\}$ and $H_{0,l}: \beta(t) \in \textrm{span}\{V(2)\}$, respectively. Here, $V(r) = \{ v_l(t);~t \in [0,1]\}_{l=1}^r$ is an orthonormal set obtained by applying the Gram-Schmidt process to the polynomial basis $P(r) = \{ t^{l-1}: t\in [0,1]\}_{l=1}^r$, for $r \geq 1$. Table \ref{table:obesity} shows the calculated test statistics $T_n$ and the corresponding $p$-values for each null hypothesis from each weight group. Calculation details and numerical implementation steps are provided in the Appendix. In Table \ref{table:obesity}, we reject the null hypothesis of the constant space $H_{0,c}$ for the obese and normal weight groups, but not $H_{0,l}$. That is, at a significance level less than $0.001$, obesity prevalence has risen linearly over time, while the rate of the normal weight population has decreased linearly. On the other hand, for the overweight group, we could not find any significant trend, as we retain the constant shape hypothesis $H_{0,c}$ at level $0.1$.
\begin{table}[!t]
\centering
\caption{Calculated test statistic $T_n$ and $p$-values (in parentheses) for null hypotheses of constant and linear trends from each group of weight status }
\vspace{2mm}
\begin{tabular}{cccc}
\hline
& Obesity & Overweight & Normal weight \\
\hline
$H_{0,c}: \beta(t) \in \textrm{span}\{V(1)\}$ & 23.13 ($<0.001) $ & 1.16 (0.162) & 14.74 ($<0.001$) \\
$H_{0,l}: \beta(t) \in \textrm{span}\{V(2)\}$ & 0.34 (0.822) & 0.16 (0.974) & 0.17 (0.967)\\
\hline
\label{table:obesity}
\end{tabular}
\end{table}
We next investigate the obesity prevalence over time associated with income levels. In recent literature, statistical analyses of the association between income levels and obesity rates have repeatedly reported that obesity prevalence has increased significantly faster mostly at relatively low income levels \citep{ Cynthia2010, Bently2018, Kim2018}. Figure \ref{fig:obese_income} (a) illustrates obesity prevalence rates for the five income levels and their mean trajectories. While all five income levels present increasing obesity prevalence over time, the group with income less than $\$15,000$ shows the highest rates, and the group with income over $\$50,000$ shows the lowest rates. We first apply functional ANOVA to these data, a special case of our proposed testing procedures corresponding to a part of \cite{Zhang2014}. To do this, we formulate the model based on \eqref{fullmodel-re} by setting the $(250 \times 4)$ matrix $\vX = \text{diag}\{\vone_{50}, \ldots, \vone_{50} \}$ and the $(250 \times 1)$ vector of $1$'s for $\vZ$. The null hypothesis for fANOVA corresponds to \eqref{gnull} with $V=\{ 0\}$; i.e., $H_0: \beta_j(t) = 0$, for $t \in [0,1]$ and $j=1,\ldots,4$. By applying the proposed testing procedure, we obtain a $p$-value $<0.001$ and conclude that significant differences in obesity rates among different income groups exist. We then apply a type of post hoc test, specifically to examine how the gap in prevalence between the lowest and highest income groups changes over time. Figure \ref{fig:obese_income} (b) shows the mean trajectory of the gap in obesity rates between the lowest and highest income groups. It is observed that this gap tends to decrease over time, and fitted linear and quadratic trend lines are illustrated, respectively. We also consider the fit with piecewise linear bases, whose fitting details and testing results are specified later. We apply the proposed procedure based on the test statistic $T_n$ to identify the shape of this gap. Let $\vY(t_m) = \{ \vY_{level1}(t_m)^\top, \vY_{level5}(t_m)^\top \}^\top$, where $\vY_{level1}(t_m)$ is a vector of length $50$ with elements given by the obesity rates of the lowest income group from the 50 states at the $m$th year. Similarly, $\vY_{level5}(t_m)$ denotes the vector for the highest income group. We then specify the model based on \eqref{fullmodel-re} with $\vX=(\vone_{50}, \vzero_{50})^\top$ and $\vZ$ the length-100 vector of $1$'s. Under this model formulation, $\beta(t)$ represents the difference between the two group means. Two null hypotheses of linear and quadratic functional spaces are then considered, $H_{0,l}: \beta(t) \in \textrm{span}\{V(2)\}$ and $H_{0,q}: \beta(t) \in \textrm{span}\{V(3)\}$, respectively.
\begin{figure}[t!]
\centering
\includegraphics[width=4.8in]{data-income.png}
\caption{(a) Percentages (\%) of obesity prevalence by five income levels from 50 states and (b) mean of the differences in obesity rates between the group with income less than $\$15,000$ and the group with income over $\$50,000$, with fitted lines using linear bases ($\hdashrule[0.5ex]{1.5cm}{1pt}{1.5mm 2pt}$), quadratic bases ($\hdashrule[0.5ex]{1cm}{0.5pt}{1.5pt}$), and piecewise linear bases ($\hdashrule[0.5ex]{2cm}{1pt}{3.5mm 2pt})$.}
\label{fig:obese_income}
\end{figure}
Applying the proposed testing procedures, for the test under $H_{0,l}$ we obtain $T_n = 10.75$ with a $p$-value of $0.02$, and for the test under $H_{0,q}$, $T_n = 5.03$ with a $p$-value of $0.23$. At significance level $\alpha = 0.05$, we fail to reject the quadratic null space and conclude that the gap in obesity prevalence between the lowest and highest income groups is significantly decreasing with a quadratic shape. To demonstrate a further application of our method with null hypotheses based on other types of bases, we test the hypothesis $H_{0,pl}: \beta(t) \in \textrm{span}\{U(3)\}$, where $U(3)$ represents a set of three orthonormal B-spline bases derived from piecewise linear functions with knots at $0, 0.5,$ and 1, where the internal knot 0.5 is chosen by the estimated peak from the quadratic fit. Under this null hypothesis, we obtain $T_n = 4.70$ and a $p$-value of 0.28. Comparing the obtained $p$-value 0.28 with the $p$-value $0.23$ derived under the null hypothesis $H_{0,q}$, we observe slightly stronger statistical support for the piecewise linear shape of the gap between the two groups under the given sample sizes. We note that results from smoothed trajectories through the test statistic $T_n^\ast$ lead to the same inferential conclusions for $H_{0,l}$, $H_{0,q}$, and $H_{0,pl}$, although they are not presented here. This empirically demonstrates the performance of our proposed method in detecting significant functional shapes even from non-smoothed raw trajectories.
\subsection{Human motion analysis in ergonomics}
We illustrate another data example in automotive ergonomics, previously analyzed by \cite{faraway1997regression}, \cite{shen2004f}, \cite{zhang2011statistical}, and \cite{chen2020model}, among others. The Center for Ergonomics at the University of Michigan collected data on the body motions of an automobile driver. As part of the project, the right elbow angles of the test driver were captured as time-varying responses from the moment the driver's hand leaves the steering wheel until reaching 20 different locations in the car. There were 3 repeated reaches to each of the different targets located near the glove compartment, headliner, radio panel, and gear shifter.
{\color{black} We associate the observed discrete trajectories of elbow angles $R_{ij}(t_{ij,m})$ with the $(x,y,z)$-coordinates of a reaching target and extra variables as
\begin{equation}
\begin{aligned}
R_{ij}(t_{ij,m})
&= \mu_0(t_{ij,m})
+ \sum_{k=1}^3 \alpha_k(t_{ij,m}) d_{ik}
+ \sum_{l=1}^3 \beta_l(t_{ij,m}) c_{il} \\
&\qquad + \sum_{k=1}^3\sum_{l=k}^3 \gamma_{kl}(t_{ij,m}) c_{ik} c_{il}
+ \varepsilon_{ij}(t_{ij,m}),
\end{aligned} \label{faraway-reg-model}
\end{equation}
for $i=1, \ldots, 20$, $j=1,2,3$, and $m=1, \ldots, N_{ij}$, }
where $(c_{i1},c_{i2},c_{i3})$ represents the $(x,y,z)$-coordinate of a target location with its origin at the initial hand posture on the steering wheel, and the $d_{ik}$'s are $0$-$1$ dummy variables indicating four nominal areas of different targets. Specifically, $d_{i1} = 1$ if the target is located near the headliner, $d_{i2} = 1$ if near the radio, $d_{i3} = 1$ if near the gear shifter, and zero otherwise, so that the glove compartment serves as the baseline location. By adding the nominal target information to the conventional model, we are able to statistically compare the changes in elbow angles under different experimental conditions. Among the $60$ experiments, we drop one trial that has been excluded in the literature, where the researchers revealed that the driver's motion was mistaken while reaching the target. See \cite{faraway1997regression} for more details about the experimental settings.
Since the observed discrete trajectories of elbow angles reveal some noise due to measurement errors, \cite{faraway1997regression} applied smoothing splines to the raw data to respect the smoothness of human motion and obtained pre-smoothed random angle curves. We denote them as $\tilde{R}_{ij}^\ast(t)$, where the tracking time points $t_{ij,1}, \ldots, t_{ij,N_{ij}}$ were re-scaled to $[0,1]$ for each reach. The pre-smoothed random sample $\{ \tilde{R}_{ij}^\ast: i=1, \ldots, 20, \, j=1,2,3\}$ can be analyzed by the standard one-way functional ANOVA.
We note that the model \eqref{faraway-reg-model} turns out to be adequate for the data, as the bootstrap-based test \citep{faraway1997regression} does not detect a lack of fit compared with the functional ANOVA model ($\textrm{p-value}=0.436$). \cite{chaffin2002simulating, chaffin2005improving} also considered similar approaches to the driver's motion prediction in a larger dataset by adding extra variables to statistically control for different experimental conditions.
In this example, we aim to analyze the shape of the time-varying motion changes rather than find a predictive model for an arbitrary target location.
{\color{black}
Based on the asymptotic equivalence between splines and a certain class of kernel estimates \citep{Silverman1984, Lin2004}, we apply the proposed method to the pre-smoothed random sample $\{ \tilde{R}_{ij}^\ast: i=1, \ldots, 20, \, j=1,2,3\}$ to test the null hypothesis $H_{0}^\alpha:$ $\{ \alpha_1, \alpha_2, \alpha_3 \}$ $\in$ $\mathrm{span}\{V(2)\}$ with the same $V(2)$ defined in Section \ref{subsec:data1}. We perform inference using $T_n^\ast$ from Section \ref{sec:kernel} and find that the null hypothesis cannot be rejected ($T_n^\ast=3.16$, \,$\textrm{p-value}=0.660$).} This result, together with Figure \ref{fig1:faraway}, shows that, compared to the glove compartment reaching experiment, the driver stretched their elbow less and moved more slowly at a constant relative angular velocity when reaching the other areas. We also individually test several hypotheses such as $H_{0}^\beta: \{ \beta_1, \beta_2, \beta_3 \} \in \mathrm{span}\{ V(2) \}$ ($T_n^\ast=3.88$, \,$\textrm{p-value}=0.930$), $H_{0}^{\gamma_{kk}}: \{ \gamma_{11}, \gamma_{22}, \gamma_{33} \} \in \mathrm{span}\{ V(2) \}$ ($T_n^\ast=2.86$, \,$\textrm{p-value}=0.710$), and $H_{0}^{\gamma_{kl}}: \{ \gamma_{12}, \gamma_{13}, \gamma_{23} \} \in \mathrm{span}\{ V(2) \}$ ($T_n^\ast=4.68$, \,$\textrm{p-value}=0.409$). We close this section by reporting that all twelve hypotheses we tested were still not rejected after applying multiple comparison adjustments, both the Bonferroni and Benjamini-Hochberg corrections, at the $5\%$ significance level, supporting the adequacy of linear trends for these coefficients.
\begin{figure}[!t]
\centering
\includegraphics[width=1.55in]{faraway-example-coef-headliner.png}
\includegraphics[width=1.55in]{faraway-example-coef-radio.png}
\includegraphics[width=1.55in]{faraway-example-coef-shifter.png}\\
\includegraphics[width=1.55in]{faraway-example-coef-x.png}
\includegraphics[width=1.55in]{faraway-example-coef-y.png}
\includegraphics[width=1.55in]{faraway-example-coef-z.png}\\
\includegraphics[width=1.55in]{faraway-example-coef-xx.png}
\includegraphics[width=1.55in]{faraway-example-coef-yy.png}
\includegraphics[width=1.55in]{faraway-example-coef-zz.png}\\
\includegraphics[width=1.55in]{faraway-example-coef-xy.png}
\includegraphics[width=1.55in]{faraway-example-coef-xz.png}
\includegraphics[width=1.55in]{faraway-example-coef-yz.png}\\
\caption{The regression coefficient estimates of the model \eqref{faraway-reg-model} are depicted. The solid and dot-dashed lines are the coefficient estimates and their $95\%$ confidence bands, respectively. The long-dashed lines show the estimates under the linear-shape null hypotheses.}
\label{fig1:faraway}
\end{figure}
\section{Discussion} \label{sec:discussion}
We have presented a statistical procedure for testing shape-constrained hypotheses on regression coefficients in function-on-scalar regression models, generalizing existing methods such as fANOVA that consider nullity hypotheses only. The approach presented here enables inferences about temporally or spatially varying coefficient effects as well. The large sample properties of the proposed test were investigated by deriving the asymptotic null distribution of the test statistic and the consistency of the test against local alternatives. The methodology was demonstrated under three incomplete sampling situations: (i) partially observed, (ii) irregularly observed error-prone, and (iii) a composition of the former two types of incomplete functional response data. A few studies have recently illustrated goodness-of-fit tests for functional linear models under fully observed responses, but handling incomplete sampling designs has not been studied either in theory or in practice. Furthermore, the critical value in our methodology can be approximated with the spectral decomposition of the covariance function, for which one can easily exploit existing methods from recent developments in functional data analysis. A key aspect of the methodology developed here is the specification of a relevant shape hypothesis of interest. Ideally, the application defines the relevant shape space. Otherwise, one can use standard curve-fitting hypotheses defined by, for example, polynomial basis functions, exponential functions, or periodic functions cycling at different frequencies.
In Section \ref{sec:kernel}, we considered functional data where each sample path is observed on randomly spaced discrete points of size $N_i$. Assuming that the $N_i$'s increase with the sample size, which is often called ``densely observed'' functional data, we adopted the individual smoothing strategy as interpolation. Another interesting and challenging situation is when the functional data are so sparsely observed that the individual smoothing strategy employed here is not effective. In this case, one may consider a functional principal components based approach to reconstruct the individual curves. Recently, \cite{kneip2020optimal} proposed optimal reconstruction of individual curves in which each of the $n$ incomplete functions is observed at a number of discrete points considerably smaller than $n$ in finite sample analysis. They showed that the functional principal components based approach can provide better rates of convergence than conventional smoothing methods when $\min_i \{N_i\} \asymp n^\theta$ as $n \to \infty$ for some $\theta > 0$. However, hypothesis testing under the functional principal component analysis framework remains to be developed.
\section{Technical Details}
\subsection{Numerical Implementation}
We first present the numerical implementation of the proposed test for fully observed response data. In practice, the response $Y_i(t)$ is collected in a discrete manner over a dense grid $t_1, \ldots, t_{N_i}$. For simplicity, we focus on the case where $N_i = N$ and all individual functions are observed at a common grid of design time points. If the design time points differ across individual trajectories, we can apply kernel smoothing to obtain evaluations at a common grid of points, justified by its uniform consistency property demonstrated in Section \ref{sec:kernel}.
Suppose that the two design matrices $\vX$ and $\vZ$ are orthogonalized as in \eqref{fullmodel-re} and that a set of orthonormal bases $\{v_l;\, l=1,\ldots, r\}$ is given for the null hypothesis \eqref{gnull}. We calculate the $(p \times N)$ matrix $\hat {\boldsymbol{\beta}} = (\hat{\boldsymbol{\beta}}^\top_1, \ldots, \hat{\boldsymbol{\beta}}^\top_p)^\top$, where $\hat {\boldsymbol{\beta}}_j = \{\hat\beta_j(t_1), \ldots, \hat\beta_j(t_N)\}^\top$ is the least squares estimator of $\vbeta_j$ at each grid point. The test statistic $T_n^{\text{Full}}$ is obtained based on $D = \hat {\boldsymbol{\beta}} - \mathcal{L} \hat {\boldsymbol{\beta}}$, where the $(p \times N)$ matrix $D=(D_1, \ldots, D_N)$ has columns $D_m \in \mathbb{R}^p$, $m=1,\ldots, N$, and its $j$th row contains the length-$N$ regression residuals from fitting a linear regression of the response $\boldsymbol{\hat\beta}_j$ on the $r$ columns of the matrix $V$. Here, each column of the matrix $V$ is one of the discretized orthonormal bases $v_1, \ldots, v_r$ evaluated at the $N$ grid points. Then, the test statistic $T_n^{\text{Full}}$ is approximated by $N^{-1}\sum_{m=1}^N D_m^\top(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}}) D_m$. We next find the empirical critical value for the level $\alpha$ test. We calculate the $(N \times N)$ covariance matrix of residuals, denoted as $\Gamma = [\gamma_{m m'}]_{1 \leq m, m' \leq N}$, based on $ {\boldsymbol{r}}_i= \{r_i(t_1), \ldots, r_i(t_N)\}^\top$, $i=1,\ldots, n$, where $r_i(t_m) ={Y_i}(t_m) - \tilde{\mathbb{X}} \hat\vbeta(t_m) - \mathbb{Z} \hat\veta(t_m)$ with $\hat\veta(t_m)=(\mathbb{Z}^\top \mathbb{Z})^{-1} \mathbb{Z}^\top {\vY}(t_m)$. We then derive the discretized version of $\tilde \gamma(s,t)$ following the formula in \eqref{gp-var}, denoted as $\tilde \Gamma$, by calculating $\tilde \Gamma = \Gamma -\Gamma_{(c)} - \Gamma_{(r)} + \Gamma_{(c,r)}$, where $\Gamma_{(c)}=(\hat{ \boldsymbol{\gamma}}_{(c)1}, \ldots, \hat{\boldsymbol{\gamma}}_{(c)N})^\top$ is the matrix of fitted multi-response regression values based on $N$ separate regressions, with each column of $\Gamma$ as the response and the $r$ columns of $V$ as covariates. Similarly, $\Gamma_{(r)}$ is the matrix of fitted multi-response regression values with each row of $\Gamma$ as the response and the $r$ columns of $V$ as covariates. Lastly, $\Gamma_{(c,r)}$ is the matrix of fitted values obtained by applying the previous two steps to $\Gamma$ subsequently. We then generate a large number of bootstrap samples of $\hat T_0 = \sum_{k=1}^{\hat K} \hat\lambda_k A_k$, $A_k \stackrel{i.i.d.}{\sim} \chi^2_p$, where $\hat K$ denotes the number of positive eigenvalues of $\tilde \Gamma$, and use its $100(1-\alpha)\%$ quantile as the critical value.
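A compact Python sketch of this full-data procedure is given below; all names are our own, and the eigenvalue scaling by $1/N$ is an assumption made to approximate the integral operator on the grid.
\begin{verbatim}
# Sketch of the test statistic and bootstrap critical value for fully
# observed responses on a common grid; illustrative, not reference code.
import numpy as np

def shape_test_full(Y, Xt, Z, V, alpha=0.05, B=5000, rng=None):
    # Y: (n, N) responses; Xt: (n, p) orthogonalized design; Z: (n, q)
    # nuisance design; V: (N, r) discretized null-space basis
    rng = rng or np.random.default_rng()
    n, N = Y.shape
    p = Xt.shape[1]

    beta_hat = np.linalg.lstsq(Xt, Y, rcond=None)[0]   # (p, N)
    eta_hat = np.linalg.lstsq(Z, Y, rcond=None)[0]

    H = V @ np.linalg.solve(V.T @ V, V.T)              # projection onto span(V)
    D = beta_hat - beta_hat @ H                        # rowwise residuals

    G = Xt.T @ Xt
    T_stat = np.einsum('jm,jk,km->', D, G, D) / N      # sum_m D_m' G D_m / N

    R = Y - Xt @ beta_hat - Z @ eta_hat                # (n, N) residuals
    Gamma = R.T @ R / n
    Gamma_t = Gamma - H @ Gamma - Gamma @ H + H @ Gamma @ H

    lam = np.linalg.eigvalsh(Gamma_t / N)              # grid-spacing scaling
    lam = lam[lam > 1e-12]

    T0 = (rng.chisquare(p, size=(B, lam.size)) * lam).sum(axis=1)
    return T_stat, np.quantile(T0, 1 - alpha)
\end{verbatim}
The test rejects the null hypothesis at level $\alpha$ when the returned statistic exceeds the bootstrap quantile.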
To perform the proposed hypothesis testing under irregularly collected response data with additive measurement errors, we replace $Y_i(t)$ by the kernel smooth estimates $\tilde Y_i^*(t)$, obtained by \eqref{kernsmooth}, and apply the procedures described for the fully observed response data. To find the optimal smoothing parameter in the kernel estimation, one may adopt leave-one-out cross-validation.
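As a hedged sketch, the pre-smoothing step with leave-one-out bandwidth selection could be coded as follows; the Epanechnikov kernel and the candidate bandwidth grid are our own illustrative assumptions.
\begin{verbatim}
# Nadaraya-Watson pre-smoothing with leave-one-out CV bandwidth; sketch only.
import numpy as np

def epanechnikov(u):
    return 0.75 * np.clip(1.0 - u ** 2, 0.0, None)

def nw_smooth(T_obs, Y_obs, t_out, h):
    W = epanechnikov((t_out[:, None] - T_obs[None, :]) / h)
    return (W @ Y_obs) / np.maximum(W.sum(axis=1), 1e-12)

def loocv_bandwidth(T_obs, Y_obs, h_grid=np.linspace(0.02, 0.2, 10)):
    scores = []
    for h in h_grid:
        W = epanechnikov((T_obs[:, None] - T_obs[None, :]) / h)
        np.fill_diagonal(W, 0.0)                 # leave the own point out
        fit = (W @ Y_obs) / np.maximum(W.sum(axis=1), 1e-12)
        scores.append(np.mean((Y_obs - fit) ** 2))
    return h_grid[int(np.argmin(scores))]
\end{verbatim}
In practice, one common bandwidth minimizing the pooled leave-one-out criterion across all subjects would be chosen, as in the simulation studies.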
For the application of the proposed testing procedure to partially observed functional response data, we calculate the $(p \times N)$ matrix $D^w = \hat{\boldsymbol{\beta}}^w - \mathcal{L} \hat{\boldsymbol{\beta}}^w$, where $D^w = (D_1^w, \ldots, D_N^w)$ and $\hat {\boldsymbol{\beta}}^w_j = (\hat\beta_j^w(t_1), \ldots, \hat\beta_j^w(t_N))^\top$, and approximate $T_n =N^{-1}\sum_{m=1}^N {D^w_m}^\top(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}}) D^w_m$. To find the empirical critical value for the level $\alpha$ test, we estimate $\Gamma$ by employing a nonparametric covariance surface estimation method applicable to sparse functional data, available through the function \texttt{GetCovSurface} in the R package `fdapace'. The optimal bandwidth for the surface estimation can be found through cross-validation. Next, we calculate $\hat v(t_m,t_{m'}) = \sum_{i=1}^{n} \delta_i(t_m) \delta_i(t_{m'})/n$ and $\hat b(t_m) = \sum_{i=1}^{n} \delta_i(t_m)/n$. Then the $(N \times N)$ matrix $\Xi$, the discretized version of $\vartheta(s,t)$ in Theorem \ref{cor:regression}, can be derived, where its $(m, m')$-th element is calculated as $\Xi_{mm'}=\Gamma_{mm'} \Pi_{mm'}$. Here, the $(N \times N)$ matrix $\Pi$ has its $(m,m')$-th element given by $\Pi_{mm'}=\hat v(t_m, t_{m'})\hat b(t_m)^{-1} \hat b(t_{m'})^{-1}$. We next calculate $\tilde \Xi = \Xi -\Xi_{(c)} - \Xi_{(r)} + \Xi_{(c,r)}$, where $\Xi_{(c)}$ and $\Xi_{(r)}$ are obtained by following the definitions of each term described in the implementation for the fully observed data. In practical applications, we adopt the standardized test statistic ${\breve{T}}_n$ $=$ $N^{-1}\sum_{m=1}^N$ ${\breve{D}}_m^{w \top}(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}}) {\breve{D}}_m^{w}$, where ${\breve{D}}_m^{w} = D_m^w \hat b(t_m) \hat v(t_m, t_m)^{-1/2}$, and obtain an approximate critical value from $\tilde \Xi^* = \Xi^* -\Xi_{(c)}^* - \Xi_{(r)}^* + \Xi_{(c,r)}^*$, where $ \Xi^*_{mm'} = \Gamma_{mm'} \Pi^*_{mm'}$ with $\Pi^*$ representing the standardized version of $\Pi$ having unit variance on the diagonal. The standardized testing procedure empirically shows improved size control in the simulation studies.
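The weight adjustments for the partially observed case can be assembled along these lines; the covariance surface estimate \texttt{Gamma} is taken as given (for example, from \texttt{GetCovSurface}), and all names are our own.
\begin{verbatim}
# Sketch of the adjustment for partially observed responses; Gamma is an
# (N, N) covariance surface estimate supplied externally; names illustrative.
import numpy as np

def partial_adjustment(delta, Gamma, V):
    # delta: (n, N) binary observation indicators; V: (N, r) null basis
    n, N = delta.shape
    b_hat = delta.mean(axis=0)                   # \hat b(t_m)
    v_hat = delta.T @ delta / n                  # \hat v(t_m, t_m')
    Pi = v_hat / np.outer(b_hat, b_hat)
    Xi = Gamma * Pi                              # elementwise product

    H = V @ np.linalg.solve(V.T @ V, V.T)
    Xi_t = Xi - H @ Xi - Xi @ H + H @ Xi @ H     # double centering
    return b_hat, v_hat, Xi_t
\end{verbatim}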
Lastly, we can calculate $T_n^{\ast \ast}$ by combining the two previous steps: the smoothing process over observed trajectories and the calculation of the test statistic under the partial sampling structure.
\subsection{Technical Details for Section \ref{subsec:partial}}
\subsubsection*{Proof of Theorem \ref{cor:regression}}
Suppose $\E(\tilde{\mathbb{X}}| \mathbb{Z}) = \vzero$ without loss of generality. Under the null hypothesis,
\begin{align}
\begin{split}
\hat \vbeta^w(t)
&= (\tilde{\mathbb{X}}^\top \mathbb{W}(t) \tilde{\mathbb{X}})^{-1} \tilde{\mathbb{X}}^\top \mathbb{W}(t) \{\tilde{\mathbb{X}} \vbeta_0(t) + \mathbb{Z} \boldsymbol{\eta}(t) + \vepsilon(t)\} \\
&= \vbeta_0(t)+ (\tilde{\mathbb{X}}^\top \mathbb{W}(t) \tilde{\mathbb{X}})^{-1} \tilde{\mathbb{X}}^\top \mathbb{W}(t) \vepsilon(t),
\end{split} \nonumber
\end{align}
and let $\vZ_n(t) = \sqrt{n} (\hat\vbeta^w(t) - \vbeta_0(t)) $. Then we can write
\begin{equation}
\begin{aligned}
\vZ_n(t) &= \Big( \frac{\tilde{\mathbb{X}}^\top \mathbb{W}(t) \tilde{\mathbb{X}}}{nb(t)} \Big)^{-1} \frac{\sqrt{n}\tilde{\mathbb{X}}^\top \mathbb{W}(t) \vepsilon(t)}{nb(t)} \\
&= \frac{nb(t)}{\sum_{i=1}^{n} \delta_i(t)}\Big( \frac{\tilde{\mathbb{X}}^\top \mathbb{W}(t) \tilde{\mathbb{X}}}{\sum_{i=1}^{n} \delta_i(t)}\Big)^{-1} \frac{\sqrt{n}\tilde{\mathbb{X}}^\top \mathbb{W}(t) \vepsilon(t)}{nb(t)}.
\end{aligned} \label{beta-exp}
\end{equation}
Let $\tilde{\mathbb{X}}=(\tilde{\mathbb{X}}_1, \ldots,\tilde{\mathbb{X}}_n)^\top$, where $\tilde{\mathbb{X}}_i = (\tilde{x}_{i1}, \ldots, \tilde{x}_{ip})^\top$, and let
\begin{equation}
\vV_n(t) = n^{-1/2} \tilde{\mathbb{X}}^\top \mathbb{W}(t) \vepsilon(t)/b(t)
\end{equation}
be the $p$-variate random function with zero mean, corresponding to the last factor in \eqref{beta-exp}. Its $j$th element is specifically written as $ n^{-1/2} \sum_{i=1}^n \tilde{x}_{ij} \delta_i(t) \epsilon_i(t) / b(t)$. Let ${\mathbb{V}}_n = (\vV_n(t_1), \ldots, \vV_n(t_Q))$, where $\mathcal{T}_Q = \{t_q \in [0,1]: q=1, \ldots, Q\}$ is a finite collection of any $Q$ time points, for $Q \geq 1$. By the multivariate CLT and the mutual independence among $\tilde x_{ij}$, $\delta_i$, and $\epsilon_i$, we have
\begin{equation} \nonumber
\mathrm{vec}({\mathbb{V}}_n) = \big( \vV_n(t_1), \ldots, \vV_n(t_Q) \big)^\top \stackrel{d}{\to} MVN(\boldsymbol{0}_{pQ}, \Xi \otimes \Psi),
\end{equation}
where $\Xi = \big[\vartheta_{q q'}\big]_{1 \leq q,q' \leq Q}$ is the $Q \times Q$ covariance matrix with
\[\vartheta_{qq'} = \gamma(t_q, t_{q'}) v(t_q, t_{q'}) b(t_q)^{-1} b(t_{q'})^{-1},\]
$\Psi = \big[ \Psi_{jj'}\big]_{1 \leq j,j' \leq p}$ is the $p \times p$ matrix with $\Psi = E(\Var(\boldsymbol{X} | \boldsymbol{Z} ))$, and the Kronecker product of $\Xi$ and $\Psi$ is given by
\begin{equation} \nonumber
\Xi \otimes \Psi
=
\left[
\begin{array}{ccc}
\vartheta_{11} \Psi & \cdots & \vartheta_{1Q} \Psi\\
\vdots & \ddots & \vdots\\
\vartheta_{Q1} \Psi & \cdots & \vartheta_{QQ} \Psi
\end{array}
\right]
\in \mathbb{R}^{(pQ) \times (pQ)}.
\end{equation}
We specifically derive $\Xi \otimes \Psi$ as follows. For the $p$-variate random variable $\vV_n(t_q)$, the diagonal of its asymptotic covariance matrix, i.e., the $(j,j)$-th element of the matrix, is derived as $\gamma(t_q, t_q)b(t_q)^{-1} E (\tilde x_{ij}^2) = \vartheta_{qq} \Var(\tilde x_{ij})$, and the $(j,j')$-th element of the covariance matrix, for $j \neq j'$, is $\gamma(t_q, t_q)b(t_q)^{-1} E (\tilde x_{ij} \tilde x_{ij'}) = \vartheta_{qq} \Cov(\tilde x_{ij}, \tilde x_{ij'})$. That is, the block diagonal covariance matrix of $\mathrm{vec}({\mathbb{V}}_n)$ is $\vartheta(t_q, t_q) \Psi$. We then examine the block off-diagonal covariance matrix of $\mathrm{vec}( {\mathbb{V}}_n)$ by calculating the covariance between $\vV_n(t_q)$ and $\vV_n(t_{q'})$, for $q \neq q'$. We can show that the diagonal $(j,j)$-th element of the covariance matrix is $\vartheta_{qq'}\Var(\tilde{x}_{ij})$ and the $(j,j')$-th element of the matrix, for $j \neq j'$, is $\vartheta_{qq'}\Cov(\tilde{x}_{ij}, \tilde{x}_{ij'})$. That is, the $p \times p$ off-diagonal block covariance matrix of $\mathrm{vec}({\mathbb{V}}_n)$ is written as $\vartheta_{qq'} \Psi$.
By \cite{gupta2018matrix} and \cite{chen2020multivariate}, the multivariate process $\{ \vV_n(t) : t \in [0,1] \}$ converges to the multivariate Gaussian process in distribution as
\[
\{ \vV_n(t) : t \in [0,1] \} \stackrel{d}{\to} GP_p (\vzero_{p}, \vartheta \Psi),
\]
where the finite-dimensional restrictions of $\vartheta$ are given by the covariance matrix $\Xi$. Next, we can show that the $(p \times p)$ matrix $\tilde{\mathbb{X}}^\top \mathbb{W}(t) \tilde{\mathbb{X}}/ \sum_{i=1}^{n} \delta_i(t)$ in the second factor of \eqref{beta-exp} converges to $\Psi$ in probability under the conditions C2 and C4. Let $\tilde{\vV}_n(t) = (\tilde{\mathbb{X}}^\top \mathbb{W}(t) \tilde{\mathbb{X}}/ \sum_{i=1}^{n} \delta_i(t))^{-1}$ $\vV_n(t)$; then $\tilde\vV_n(t) \stackrel{d}{\to} GP_p(\vzero_p, \vartheta \Psi^{-1})$, for $t \in [0,1]$, by Slutsky's lemma. Note that
\[
\sup_{t \in [0,1]} \left| \tilde\vV_n(t) - \vZ_n(t) \right| \leq \sup_{t \in [0,1]} |\tilde\vV_n(t) | \cdot \sup_{t \in [0,1]} \left| 1 - \frac{nb(t)}{\sum_{i=1}^{n} \delta_i(t)}\right|,
\]
where $\sup_{t \in [0,1]}|\vZ(t)| \triangleq \sup_{t \in [0,1]} \sup_{j \in \{1,\ldots, p \}} Z_j(t)$, for $p$-variate random functions $\vZ(t)=(Z_1(t),$ $\ldots,$ $Z_p(t))^\top$. Following similar lines to the proof of Theorem 4 and Lemma 2.1 in \cite{Park2021}, we have
\[
\sup_{t \in [0,1]} \left| \tilde\vV_n(t) - \vZ_n(t) \right| = O_p(n^{-1/2}).
\]
Then Theorem \ref{cor:regression} is an immediate consequence of Slutsky's lemma.
\subsubsection*{Proof of Theorem \ref{thm-alternative-dist} and Corollary \ref{thm-power}}
We first present the proof of Theorem \ref{thm-alternative-dist}.
Following similar arguments to those used in Theorem 1 of \cite{zhang2011statistical}, we have
\begin{equation}
\begin{aligned}
T_n
&= \sum_{j=1}^p \int_0^1 W_j(t)^2 \, \mathrm{d}t + o_P(1)\\
&= \sum_{j=1}^p \sum_{m=1}^{\infty} \psi_{jm}^2 + o_P(1),
\end{aligned}
\end{equation}
where $\mathbf{W} = (W_1, \ldots, W_p)^\top \sim \textrm{GP}_p( \tilde{\boldsymbol\Delta}, \tilde\vartheta \mathbb{I}_p)$. The eigen-decomposition of $\tilde\vartheta(s,t)$ leads to $W_j(t) = \sum_{m=1}^{\infty} \psi_{jm} \phi_m(t)$, where the series converges in $L^2$, uniformly for $t \in (0,1)$, and $\psi_{jm} = \langle W_j, \phi_m \rangle \sim N(\langle \tilde\Delta_j, \phi_m \rangle, \lambda_m)$ independently over $j=1, \ldots, p$ and $m \geq 1$.
Since $\| \tilde\Delta_j \|_2^2 = \sum_{m=1}^\infty |\langle \tilde\Delta_j, \phi_m \rangle|^2 < \infty$,
it follows that
\[
\sum_{m=1}^\infty \textrm{Var} \big(\psi_{jm}^2\big) = \sum_{m=1}^\infty 2\lambda_m^2\big(1 + 2 |\langle \tilde\Delta_j, \phi_m \rangle|^2/\lambda_m\big) < \infty
\]
for all $j=1, \ldots, p$. Therefore,
\begin{equation}
T_\Delta
\stackrel{a.s.}{=} \sum_{m=1}^{\infty} \sum_{j=1}^p \psi_{jm}^2
\stackrel{d}{=}
\sum_{m=1}^\infty \lambda_m B_m,
\end{equation}
where $B_m = \sum_{j=1}^p \psi_{jm}^2/\lambda_m$ has the non-central $\chi^2$-distribution with $p$ degrees of freedom and non-centrality parameter $\kappa_m^2 = \pi_m^2/\lambda_m$. Since $W_1, \ldots, W_p$ are independent Gaussian processes, $B_1, B_2, \ldots$ are independent. The proof of Corollary \ref{thm-power} case (i) follows as the special case with zero non-centrality parameters, i.e., with central $\chi^2$ distributions with $p$ degrees of freedom.
\subsubsection*{Proof of Corollary \ref{thm-power} case (ii)}
The proof for $\tau \in [0,1)$ follows from Theorem \ref{thm-alternative-dist}, as the scaled drift $\sqrt{n}\,\Psi^{1/2} (\boldsymbol{\mathcal{I}} - \boldsymbol{\mathcal{L}})(n^{-\tau/2}\boldsymbol\Delta)$ diverges as $n \to \infty$. When $\tau=1$, we assume that $\sum_{m=1}^\infty \pi_m^2 = \infty$. Let $\zeta_{jm}$ denote standard normal random variables independent over $j=1, \ldots, p$ and $m \geq 1$. We note that
\begin{equation}
\begin{aligned}
B_m
&= \sum_{j=1}^p \psi_{jm}^2/\lambda_m\\
&\stackrel{d}{=} \sum_{j=1}^p \zeta_{jm}^2 + 2 \sum_{j=1}^p \zeta_{jm} \langle \tilde\Delta_j, \phi_m \rangle/\sqrt{\lambda_m} + \sum_{j=1}^p |\langle \tilde\Delta_j, \phi_m \rangle|^2/\lambda_m\\
&\stackrel{d}{=} A_m + 2 \tilde\rho_m \zeta_{1m} + \pi_m^2/\lambda_m,
\end{aligned}
\end{equation}
where $\tilde\rho_m = \big( \sum_{j=1}^p |\langle \tilde\Delta_j, \phi_m \rangle|^2/\lambda_m \big)^{1/2}$ for $m \geq 1$, with $A_m$ and $B_m$ defined in the previous theorems; the second distributional equality uses the rotation invariance of $(\zeta_{1m}, \ldots, \zeta_{pm})$. It follows from Corollary \ref{thm-power} case (i) and \eqref{thm-alternative-dist-eq} that
\begin{equation}
\begin{aligned}
\lim_{n \to \infty} P(T_\Delta \geq t_\alpha | H_{1n})
&= P\bigg( \sum_{m=1}^\infty \lambda_m B_m \geq t_\alpha \bigg)\\
&= P\bigg( T_0 + 2 \sum_{m=1}^\infty \lambda_m \tilde\rho_m \zeta_{1m} + \sum_{m=1}^\infty \pi_m^2 \geq t_\alpha \bigg).
\end{aligned}
\end{equation}
Let $\Pi^2 = \sum_{m=1}^\infty \lambda_m \pi_m^2$.
We note that
\begin{equation}
\begin{aligned}
\sum_{m=1}^\infty \textrm{Var}(\lambda_m\tilde\rho_m \zeta_{1m})
&= \sum_{m=1}^\infty \lambda_m^2 \bigg( \sum_{j=1}^p |\langle \tilde\Delta_j, \phi_m \rangle|^2/\lambda_m \bigg)\\
&= \sum_{m=1}^\infty \lambda_m \pi_m^2 = \Pi^2,
\end{aligned}
\end{equation}
where $\Pi^2 \leq \lambda_1 \sum_{m=1}^\infty \pi_m^2 = \lambda_1 \sum_{j=1}^p \| \tilde\Delta_j \|_2^2 < \infty$. Therefore, we can write $\sum_{m=1}^\infty \lambda_m \tilde\rho_m \zeta_{1m} \stackrel{d}{=} \Pi Z_0$, where $Z_0 \sim N(0,1)$ is independent of $T_0$. This completes the proof.
\subsection{Technical Details for Section \ref{sec:kernel}}
\begin{lem} \label{lem-unif-conv}
Let $\eta_1(t), \ldots, \eta_N(t)$ be independent random functions such that there exists $B_N > 0$ satisfying
$\max_{1 \leq j \leq N} E \|\eta_j\|_\infty^k = O(B_N)$ for some $k > 2$ and
$\max_{1 \leq j \leq N} \mathrm{Lip}(\eta_j) = O_P(1)$, where $\mathrm{Lip}(f)$ denotes the Lipschitz constant of $f$.
Suppose that $h \asymp N^{-\alpha}$ for some $\alpha \in \big(0,\frac{k-2}{k} \big)$ and that $B_N=O(1)$. Then,
\begin{equation}
\sup_{t \in [0,1]} \bigg| N^{-1} \sum_{j=1}^{N} \xi_{N,j}(t) \bigg| = O_P\Big( N^{-1/2} h^{-1/2} \sqrt{\log N} \Big)
\label{generic-unif-conv}
\end{equation}
where $\xi_{N,j}(t) = K_h(T_{j} - t) \eta_{j}(t) - E \big( K_h(T_j - t) \eta_j(t) \big)$.
\end{lem}
\begin{proof}
For $0 < c < \frac{k-2 - k\alpha}{2k}$, let $\tilde\eta_{j}(t) = \eta_{j}(t) \mathbb{I}\big(\|\eta_{j}\|_\infty \leq N^{1/2-c} h^{1/2}\big)$ be the truncation of $\eta_{j}(t)$ by the magnitude of $N^{1/2-c} h^{1/2}$. We claim that
\begin{equation}
\begin{aligned}
N^{-1} \sum_{j=1}^{N} \xi_{N,j}(t)
&= N^{-1} \sum_{j=1}^{N} \tilde\xi_{N,j}(t) + o_P\big( N^{-1/2} h^{-1/2} \big)
\end{aligned} \label{lem-truncation}
\end{equation}
uniformly for $t \in [0,1]$, where $\tilde\xi_{N,j}(t) = K_h(T_{j} - t) \tilde\eta_{j}(t) - E \big( K_h(T_j - t) \tilde\eta_j(t) \big)$. Then, it can be verified that
\begin{equation}
\sup_{t \in [0,1]} \bigg| N^{-1} \sum_{j=1}^{N} \tilde\xi_{N,j}(t) \bigg| = O_P \Big( N^{-1/2} h^{-1/2}\sqrt{\log N} \Big)
\label{lem-trunc-unif-conv}
\end{equation}
as \eqref{generic-unif-conv}. To see this, let $\mathcal{T}_\delta(m)$ denote a finite $\delta$-covering of $[0,1]$ such that $1/\delta \leq |\mathcal{T}_\delta(m)| \leq N^m$; i.e., for any $t \in [0,1]$, there exists $t' \in \mathcal{T}_\delta(m)$ such that $|t - t'| \leq N^{-m} \leq \delta$. It follows that
\begin{equation}
\begin{aligned}
\sup_{t \in [0,1]} \bigg| N^{-1} \sum_{j=1}^{N} \tilde\xi_{N,j}(t) \bigg|
&\leq \sup_{t \in \mathcal{T}_\delta(m)} \bigg| N^{-1} \sum_{j=1}^{N} \tilde\xi_{N,j}(t) \bigg| \\
&\qquad + \sup_{t,t' \in [0,1]: \, |t-t'| \leq N^{-m}} \bigg| N^{-1} \sum_{j=1}^{N}\big( \tilde\xi_{N,j}(t) - \tilde\xi_{N,j}(t') \big) \bigg|.
\end{aligned} \label{lem-finite-cover1}
\end{equation}
We note that the second term is negligible as
\begin{equation}
\begin{aligned}
&\sup_{t,t' \in [0,1]: \, |t-t'| \leq N^{-m}} \bigg| N^{-1} \sum_{j=1}^{N}\big( \tilde\xi_{N,j}(t) - \tilde\xi_{N,j}(t') \big) \bigg|\\
&\qquad\qquad \leq 2 N^{-m} \bigg( \mathrm{Lip}(K) N^{1/2-c}h^{-3/2} + \| K \|_\infty h^{-1} \max_{1 \leq j \leq N} \mathrm{Lip}(\eta_j) \bigg) \\
&\qquad\qquad = O_P\Big(N^{-1/2}h^{-1/2} N^{-m} \big( N^{1 + \alpha - c} \vee N^{(1+\alpha)/2}\big) \Big)\\
&\qquad\qquad = o_P\big( N^{-1/2}h^{-1/2} \big)
\end{aligned} \label{lem-finite-cover2}
\end{equation}
for sufficiently large $m > 0$. Also, applying standard techniques for exponential bounds on large deviations, we get
\begin{equation}
\begin{aligned}
&P\bigg( \sup_{t \in \mathcal{T}_\delta(m)} \bigg| N^{-1} \sum_{j=1}^{N} \tilde\xi_{N,j}(t)\bigg| > C \cdot N^{-1/2} h^{-1/2}\sqrt{\log N} \bigg) \\
&\qquad\qquad \leq \sum_{t \in \mathcal{T}_\delta(m)} P\bigg( \bigg|N^{-1/2+c}h^{1/2} \sum_{j=1}^{N} \tilde\xi_{N,j}(t)\bigg| > C \cdot N^c \sqrt{\log N} \bigg)\\
&\qquad\qquad \leq 2N^{m + c_0 - C} \to 0 \quad (N \to \infty)
\end{aligned} \label{lem-lerge-dev-bound}
\end{equation}
for some large $C > 0$, where $c_0 = c_0(K,\alpha, c) > 0$ is a constant that depends on $K$, $\alpha$, and $c$, but not on $\mathcal{T}_\delta(m)$. Therefore, \eqref{lem-finite-cover1} together with \eqref{lem-finite-cover2} and \eqref{lem-lerge-dev-bound} gives \eqref{lem-trunc-unif-conv}.
Now, we prove the claim \eqref{lem-truncation}. Define $\mathcal{E}_j = \big( \|\eta_j\|_\infty \leq N^{1/2-c} h^{1/2} \big)$ for $j=1, \ldots, N$. It follows from Markov's inequality that
\begin{equation}
\begin{aligned}
P\bigg( \bigcap_{j=1}^N \mathcal{E}_j \bigg)
&\geq 1 - \sum_{j=1}^N P\big( \|\eta_j\|_\infty >N^{1/2-c} h^{1/2} \big)\\
&\geq 1 - B_N N^{1 - k\big(\frac{1}{2}- c\big)} h^{-k/2} \\
&= 1 - B_N N^{-k \big( \frac{k-2-k\alpha}{2k} - c \big)}
\to 1 \quad (N \to \infty).
\end{aligned} \label{lem-unif-prob-bound}
\end{equation}
This implies that $\eta_j(t)$ and $\tilde\eta_j(t)$ coincide for all $j$ with probability tending to $1$, uniformly for $t \in [0,1]$. We also note that
\begin{equation}
\begin{aligned}
&\sup_{t \in [0,1]} \Big| E \Big(K_h(T_j - t) \big(\eta_j(t) - \tilde\eta_j(t) \big)\Big) \Big| \\
&\qquad\qquad = \sup_{t \in [0,1]} \Big| E \Big(K_h(T_j - t) \eta_j(t) \mathbb{I}\big( \|\eta_j\|_\infty > N^{1/2-c} h^{1/2} \big)\Big) \Big|\\
&\qquad\qquad \leq \| K \|_\infty B_N N^{-(k-1)\big(\frac{1}{2} - c\big)}h^{-1-(k-1)/2} \\
&\qquad\qquad \leq \| K \|_\infty B_N N^{-c -k\big( \frac{k -2 - k\alpha}{2k} - c\big)} N^{-1/2} h^{-1/2}
= o\big( N^{-1/2} h^{-1/2} \big).
\end{aligned} \label{lem-unif-expectation}
\end{equation}
Finally, \eqref{lem-unif-prob-bound} and \eqref{lem-unif-expectation} imply \eqref{lem-truncation}, which completes the proof of the lemma.
\end{proof}
\begin{lem} \label{thm-kernel-unif-conv}
Let $\mu(t) = E Y(t)$ be continuously twice differentiable in $t \in [0,1]$, where $\| \mu' \|_\infty$ and $\| \mu'' \|_\infty$ exist and are finite. Suppose that $E \|Y\|_\infty^k < \infty$ for some $k > 2$ and that $\max_{1 \leq i \leq n} \| Y_i' \|_\infty$ is bounded in probability. If $P(N < a_n) = o(n^{-1})$, where $a_n \asymp n^\theta$ for some $\theta > 0$, then
\begin{equation}
\tilde{Y}_i^\ast(t) - Y_i(t) = O_P\Big( r_n(t) + n^{-\theta/2} h^{-1/2} \sqrt{\log n} \Big) \quad (i=1, \ldots, n)
\label{thm-unif-rate-eq}
\end{equation}
uniformly for $t \in [0,1]$, where $h \asymp n^{-\theta\alpha}$ for some $\alpha \in \big(0,\frac{k-2}{k}\big)$, $r_n(t) \asymp h^2$ if $t \in [h,1-h]$, and $r_n(t) \asymp h$ otherwise.
\end{lem}
\subsubsection*{Proof of Lemma \ref{thm-kernel-unif-conv}}
Let $\{ S_n: n \geq 1 \}$ be a sequence of events defined as $S_n = (N_i \geq a_n \,\, \textrm{for all} \,\, i=1, \ldots, n)$. Then, $P(S_n) \geq 1 - \sum_{i=1}^n P(N_i < a_n ) \to 1$ as $n \to \infty$. We claim that the stochastic expansion of \eqref{thm-unif-rate-eq} holds conditional on $S_n$. The lemma then follows since the $N_i$ are independent of the $(\mathbf{Y}_i^\ast, \mathbf{T}_i, \mathbf{X}_i, \mathbf{Z}_i)$, where $P(S_n) \to 1$ as $n \to \infty$. For simplicity, we may assume that $N_1, \ldots, N_n$ are deterministic integers bounded below by $a_n$.
To prove the stochastic expansion of \eqref{thm-unif-rate-eq}, let $\hat\lambda_i(t) = N_i^{-1} \sum_{j=1}^{N_i} K_h(T_{i,j} - t)$ denote the kernel density estimator of $\lambda(t)$ and define
\begin{equation}
\begin{aligned}
\hat{g}_i^A(t)
&= \hat\lambda_i(t)^{-1} N_i^{-1} \sum_{j=1}^{N_i} K_h(T_{i,j} - t)\varepsilon_{i,j},\\
\hat{g}_i^B(t)
&= \hat\lambda_i(t)^{-1} N_i^{-1} \sum_{j=1}^{N_i} K_h(T_{i,j} - t)\big( Y_i(T_{i,j}) - Y_i(t) \big),
\end{aligned}
\end{equation}
where $\varepsilon_{i,j} = Y_{i,j}^\ast - Y_i(T_{i,j})$ as in \eqref{model-longitudinal}, so that we can rewrite $\tilde{Y}_i^\ast(t) - Y_i(t) = \hat{g}_i^A(t) + \hat{g}_i^B(t)$. It follows from Lemma \ref{lem-unif-conv} that
\begin{equation}
\begin{aligned}
&N_i^{-1} \sum_{j=1}^{N_i} K_h(T_{i,j} - t) = N_i^{-1} \sum_{j=1}^{N_i} E \big(K_h(T_{i,j} - t) \big) + O_P\Big( n^{-\theta/2} h^{-1/2} \sqrt{\log n} \Big),\\
&N_i^{-1} \sum_{j=1}^{N_i} K_h(T_{i,j} - t)\varepsilon_{i,j}
= O_P\Big( n^{-\theta/2} h^{-1/2} \sqrt{\log n} \Big),\\
&N_i^{-1} \sum_{j=1}^{N_i} K_h(T_{i,j} - t) \eta_{i,j}(t)\\
&\qquad= N_i^{-1} \sum_{j=1}^{N_i} E \big( K_h(T_{i,j} - t) \eta_{i,j}(t) \big) + O_P\Big( n^{-\theta/2} h^{1/2} \sqrt{\log n} \Big)
\end{aligned} \label{prop-gA-gB}
\end{equation}
uniformly for $t \in [0,1]$, where $\eta_{i,j}(t) = Y_i(T_{i,j}) - Y_i(t)$. We note that the stochastic remainders are of the same order for all $i=1, \ldots, n$ since the $(\mathbf{Y}_i^\ast, \mathbf{T}_i, \mathbf{X}_i, \mathbf{Z}_i)$ are iid.
By the standard theory of kernel smoothing, we also get
\begin{equation}
\begin{aligned}
&E \big(K_h(T_{i,j} - t) \big)
= \kappa_0(t) \lambda(t) + o(1), \\
&E \big( K_h(T_{i,j} - t) \eta_{i,j}(t) \big)
=
\left\{
\begin{array}{ll}
h^2 \big\{ \frac{1}{2} \mu''(t) \lambda(t) + \mu'(t) \lambda'(t) \big\} \kappa_2(t) + o(h^2) & \textrm{if} \,\,\, t \in [h,1-h],\\
h \mu'(t) \lambda(t) \kappa_1(t) + o(h) & \textrm{otherwise},
\end{array}
\right.
\end{aligned} \label{eq-prop-mean}
\end{equation}
uniformly for $t \in [0,1]$, where
\begin{equation}
\kappa_r(t)
=
\left\{
\begin{array}{ll}
\int_{-\frac{t}{h}}^1 u^r K(u) \, \mathrm{d}u & \textrm{if} \,\,\, t \in [0,h),\\
\int_{-1}^1 u^r K(u) \, \mathrm{d}u & \textrm{if} \,\,\, t \in [h,1-h],\\
\int_{-1}^{\frac{1-t}{h}} u^r K(u) \, \mathrm{d}u & \textrm{if} \,\,\, t \in (1-h,1]
\end{array}
\right.
\end{equation}
for $r=0,1,2$.
Since $\kappa_0(t)$ does not vanish and $|\kappa_1(t)|$ and $|\kappa_2(t)|$ are bounded, we get \eqref{thm-unif-rate-eq}.
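As a side illustration (ours, not part of the proof), the boundary behavior of the kernel moments $\kappa_r(t)$ driving the two regimes of $r_n(t)$ can be checked numerically; the Python sketch below uses an Epanechnikov kernel purely as an example:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

K = lambda u: 0.75 * (1.0 - u**2)   # Epanechnikov kernel on [-1, 1]

def kappa(r, t, h):
    # Integration limits follow the definition of kappa_r(t) above.
    lo = -t / h if t < h else -1.0
    hi = (1.0 - t) / h if t > 1.0 - h else 1.0
    return quad(lambda u: u**r * K(u), lo, hi)[0]

h = 0.1
for t in (0.0, 0.05, 0.5, 1.0):
    print(t, [round(kappa(r, t, h), 4) for r in (0, 1, 2)])
# Interior t: kappa_0 = 1 and kappa_1 = 0, giving the O(h^2) bias;
# boundary t: kappa_1 != 0, which degrades the bias to O(h).
\end{verbatim}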
\subsubsection*{Proof of Theorem \ref{thm-kernel-test-consistent}}
Recall that
\begin{equation}
\begin{aligned}
T_n^\ast
&= \int_0^1 \big((\boldsymbol{\mathcal{I}} - \boldsymbol{\mathcal{L}}) \tilde{\boldsymbol\beta}^\ast\big)(t)^\top \big(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}} \big) \big((\boldsymbol{\mathcal{I}} - \boldsymbol{\mathcal{L}}) \tilde{\boldsymbol\beta}^\ast\big)(t) \, \mathrm{d}t\\
&= \int_0^1 \Big\| \big(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}} \big)^{1/2} \big((\boldsymbol{\mathcal{I}} - \boldsymbol{\mathcal{L}})\tilde{\boldsymbol\beta}^\ast\big)(t) \Big\|^2 \, \mathrm{d}t\\
&= \int_0^1 \sum_{k=1}^p \Big( \mathbf{e}_k^\top(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}})^{-1} \tilde{\mathbb{X}}^\top \tilde{\mathbf{Y}}^\ast(t) \Big)^2 \, \mathrm{d}t,
\end{aligned}
\end{equation}
where $\mathbf{e}_k \in \mathbb{R}^p$ is the unit vector whose $k$-th component is $1$.
We note that $\tilde\beta_j^\ast(t) = \hat\beta_j(t) + \mathbf{e}_j^\top(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}})^{-1} \tilde{\mathbb{X}}^\top \big( \tilde{\mathbf{Y}}^\ast(t) - \mathbf{Y}(t)\big)$ for each $j=1, \ldots, p$, where $\hat\beta_j(t) = \mathbf{e}_j^\top(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}})^{-1} \tilde{\mathbb{X}}^\top\mathbf{Y}(t)$ is the least-squares estimator of $\beta_j(t)$ with fully observed data. The large sample property of $\hat{\boldsymbol\beta}(t) = (\hat\beta_1(t), \ldots, \hat\beta_p(t))^\top$ follows from Theorem \ref{cor:regression} by letting $\mathscr{I}_i = [0,1]$ for all $i=1, \ldots, n$.
Since $\| \mathcal{I} - \mathcal{L}\|_{\mathrm{op}} \leq 1$, it follows from the Cauchy-Schwarz inequality and Lemma \ref{thm-kernel-unif-conv} that
\begin{equation}
\begin{aligned}
&\int_0^1 \Big\| \mathbf{e}_j^\top \big(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}} \big)^{1/2} \big((\boldsymbol{\mathcal{I}} - \boldsymbol{\mathcal{L}})( \tilde{\boldsymbol\beta}^\ast - \hat{\boldsymbol\beta})\big)(t) \Big\|^2 \, \mathrm{d}t\\
&\quad \leq \,\, \mathbf{e}_j^\top \tilde{\mathbb{X}}^\top \tilde{\mathbb{X}} \mathbf{e}_j \sum_{k=1}^p \int_0^1 \Big( \mathbf{e}_k^\top(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}})^{-1} \tilde{\mathbb{X}}^\top \big( \tilde{\mathbf{Y}}^\ast(t) - \mathbf{Y}(t) \big)\Big)^2 \, \mathrm{d}t \\
&\quad \leq \,\, \bigg\{ \mathbf{e}_j^\top \tilde{\mathbb{X}}^\top \tilde{\mathbb{X}} \mathbf{e}_j \sum_{k=1}^p \big(\mathbf{e}_k^\top(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}})^{-1}\mathbf{e}_k \big) \bigg\} \sum_{i=1}^n \int_0^1 \big( \tilde{Y}_i^\ast(t) - Y_i(t) \big)^2 \, \mathrm{d}t \\
&\quad = \,\, \bigg\{ \mathbf{e}_j^\top \Psi \mathbf{e}_j \, \mathrm{tr}\big(\Psi^{-1}\big) + o_P(1) \bigg\} \cdot O_P\big( nh^3 + n^{1-\theta} h^{-1} \log n \big) \\
&\quad = \,\, O_P\big( n^{1-3\theta/5} \big) \quad (\forall j=1, \ldots, p).
\end{aligned} \label{thm-tn-decomp1}
\end{equation}
The derivation above parallels the proofs of the theorems in \cite{zhang2007statistical}.
On the other hand, \eqref{agp-beta0} gives
\begin{equation}
\begin{aligned}
\big\| \mathbf{e}_j^\top\big(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}} \big)^{1/2} (\boldsymbol{\mathcal{I}} - \boldsymbol{\mathcal{L}})\hat{\boldsymbol\beta} \big\|
= \big\| \mathbf{e}_j^\top\Psi^{1/2} \sqrt{n} (\boldsymbol{\mathcal{I}} - \boldsymbol{\mathcal{L}})(\hat{\boldsymbol\beta} - \boldsymbol\beta_0) \big\| + o_P(1)
= O_P(1)
= O_P(1)
\end{aligned}\label{thm-tn-decomp2}
\end{equation}
for all $j=1, \ldots, p$.
Combining \eqref{thm-tn-decomp1} and \eqref{thm-tn-decomp2}, we get
\begin{equation}
\begin{aligned}
T_n^\ast
&= \int_0^1 \Big\| \big(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}} \big)^{1/2} \big((\boldsymbol{\mathcal{I}} - \boldsymbol{\mathcal{L}}) \tilde{\boldsymbol\beta}^\ast \big) (t) \Big\|^2 \, \mathrm{d}t \\
&= \, T_n + \int_0^1 \Big\| \big(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}} \big)^{1/2} \big((\boldsymbol{\mathcal{I}} - \boldsymbol{\mathcal{L}})( \tilde{\boldsymbol\beta}^\ast - \hat{\boldsymbol\beta})\big)(t) \Big\|^2 \, \mathrm{d}t \\
&\qquad + \, 2 \int_0^1 \Big[\big(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}} \big)^{1/2} \big((\boldsymbol{\mathcal{I}} - \boldsymbol{\mathcal{L}})( \tilde{\boldsymbol\beta}^\ast - \hat{\boldsymbol\beta})\big)(t)\Big]^\top \Big[ \big(\tilde{\mathbb{X}}^\top \tilde{\mathbb{X}} \big)^{1/2} \big((\boldsymbol{\mathcal{I}} - \boldsymbol{\mathcal{L}}) \hat{\boldsymbol\beta}\big)(t) \Big]\, \mathrm{d}t\\
&= T_n + O_P\big( n^{1-3\theta/5} \big).
\end{aligned}
\end{equation}
This completes the proof.
\section{Introduction}
The situation in physical cosmology is currently governed by experiment
(observations), which has made increasing progress in recent years.
However, the theory of formation of {\it Large Scale Structure} in the
Universe leaves something to be desired, although progress is still there:
the simplest versions of the dynamical {\it Dark Matter} are discarded
(e.g. sHDM, sCDM, cosmic strings), and the cosmological model has become
multi-parameter ($\Omega_{{\rm M}}$, $\Omega_{\Lambda}$, $\Omega_b$, $h$,
$n_{{\rm S}}$, T/S, etc.), which hints at a complex nature of the dark matter
in the Universe. Hopefully, the ongoing and upcoming measurements of the CMB
anisotropy (both ground- and space-based) as well as the development of
median- and low-$z$ observations will fix the DM/LSS model of the Universe to
within a few per cent in the near future.
A theory of the very early Universe which meets most predictions
and observational tests is inflation. It predicts small Gaussian
{\it Cosmological Density Perturbations} (the {\it Scalar} adiabatic mode)
responsible for the LSS formation in the observable Universe. The ultimate
goal here would be the reconstruction of the DM parameters and the CDP power
spectrum directly from observational data (LSS {\it vs} $\Delta T/T$).
A drama put at the basis of cosmic inflation is that it also provides a
general ground for the fundamental production of {\it Cosmic Gravitational
Waves} (the {\it Tensor} mode), which should contribute, along with the S-mode,
to the $\Delta T/T$ anisotropy at large angular scale\footnote{
Obviously, all three modes of the perturbations of the gravitational field --
scalar, vector and tensor (see [1]) -- induce the CMB anisotropy through the
SW-effect [2]. However, most of the inflationary models considered by now
are based on scalar inflaton fields which cannot be a source for the vector
mode. A general physical reason for the production of the T and S
perturbations in the expanding Universe is the {\it parametric amplification
effect} [3], [4]: the spontaneous creation of quantum physical fields in a
non-stationary gravitational background.}. Hence, a principal question on
the way to the S-spectrum restoration remains the T/S problem -- the
fraction of the variance of the CMB temperature fluctuations on 10 degrees
of arc generated by the CGWs:
\begin{equation}
\left(\Delta T/T\right)^2 \vert_{10^0} = {\rm S} + {\rm T}.
\end{equation}
Observational separation between the modes is postponed until the time when
polarization measurements of the CMB anisotropy become available (which
require a detector sensitivity ${}_{\sim}^{<} 1\mu$K). Today, we can
investigate the T/S problem theoretically.
A common suggestion created by {\it Chaotic Inflation} [5], that 'T/S
{\it is usually small} (T/S ${}_{\sim}^{<} 0.1$)', actually stems from a
very specific property of the CI model (it inflates only at high values of
the inflaton, $\varphi >1 $). However, in general this is not true: any
inflation inevitably produces {\it both} perturbation modes, and the ratio
between them is not limited by unity but depends on the parameters of a
given model\footnote{There is no fundamental theorem restricting T/S
relative to unity: the inflationary requirement, $\gamma\equiv -
\dot{H}/H^2 < 1$, imposes only a wide constraint, T/S ${}_{\sim}^< 10$,
obviously insufficient to discriminate the T-mode in the cosmological
context (see eqs.(2),(5)).}. Nevertheless, people often relate this
T/S-CI feature to another basic property of the chaotic inflation with
a smooth inflaton potential $V(\varphi)$ -- the {\it Harrison-Zel'dovich}
S-spectrum ($n_S\simeq 1$).
Such a mythological statement, that 'T/S {\it is small when} $n_S\simeq 1$',
has even been strengthened by the power-law inflation [6], [7], which has
shown that T/S may become large only at the expense of departing
from the HZ-spectrum in the S-mode: T/S ${}^{>}_\sim 1 $ when $n_S\leq 0.8$;
obviously, {\it vice versa}, when $n_S\rightarrow 1$, the T/S tends to zero
in total accordance with the previous CI-assertion. The analytic
approximation for T/S found in this model looks eventually universal for any
inflationary dynamics when related to the T-spectrum slope index (estimated
in the appropriate scale $k_{COBE}\sim 10^{-3}h/$Mpc)\footnote{It is just
because the CGW spectrum created in {\it any} inflation is
intrinsically tied to the evolution of the Hubble factor at the horizon
crossing time: $n_T \simeq - 2\gamma < 0$. Notice the T-spectrum always stays
red in the minimally coupled gravity because of a systematic decrease in
time of the Hubble factor, cf.eq.(12).},
\begin{equation}
\frac {{\rm T}}{{\rm S}} \simeq -6n_T \simeq 12\gamma.
\end{equation}
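For orientation: an inflationary stage with $\gamma = 0.05$ at the relevant scale gives $n_T \simeq -0.1$ and, by eq.(2), T/S $\simeq 0.6$ -- already a non-negligible tensor contribution.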
Since the case of the {\it red} S-spectra suggested by power-law inflation
($n_S < 1$) has confirmed the above statement of the T/S smallness for
HZ-CDPs, we now face checking the opposite situation -- a case for models
where the {\it blue} spectra ($n_S > 1$) are allowed, and the T/S there. An
example of the blue S-spectrum is provided by (i) the two-field hybrid
inflation [8], [9], [10], [11] for a certain range of the model parameters,
(ii) a single massive inflaton [12] ($V=V_0 + m^2\varphi^2/2$), and (iii)
one producing a power-law S-spectrum [13], [14], [15]. However, the problem
of blue S-spectra is more generic and requires a full investigation. In
this paper we present such an analysis for the case of a {\it single} inflaton
field.
Below, we start by considering the inflationary requirements for the production
of blue S-spectra. We introduce a simple natural model of such an inflation
with one scalar $\varphi$ field, which we call the {\it $\Lambda$-inflation}.
It proceeds at any values of the inflaton and generates a typical feature in
the S-spectrum: a blue branch at short wavelengths (small $\varphi$ values)
and a red one at large wavelengths (high $\varphi$ values). Between these
two asymptotics the broad-scale transient spectrum region settles down,
where the 'T/S {\it is close to its highest value (generally not more than
10) when the} S {\it spectrum (or the joint} S+T {\it metric fluctuation
spectrum) is essentially the HZ one}'. Further on, we analyse physical reasons
for the latter generic statement (CI is of measure zero in the family of
$\Lambda$-inflation models) and its place in the inflationary paradigm.
Surprisingly, the phenomena of large T/S and blue S-spectra are two totally
disconnected problems: both are realized in $\Lambda$-inflation but at
different scales and field values. The large T/S is produced where inflation
proceeds only marginally ($\gamma$-values just below unity), which occurs near
$\varphi\sim 1$ where the S-spectrum tilt is slightly red, $n_S{}_\sim^< 1$.
On the contrary, the blueness ($n_S > 1$) is gained for $\varphi \ll 1$ and
thus has a different physical origin. We conclude by discussing the
necessary and sufficient conditions for obtaining large T/S from inflation,
and argue for a general estimate of T/S based on eq.(2).
\section{The $\Lambda$-Inflation}
We are looking for the simplest way to get a blue-kind spectrum of density
perturbations generated at inflation driven by one scalar field $\varphi$.
The minimal coupling of $\varphi$ to geometry is given by the action ($c =
\hbar = 8\pi G = 1$):
\begin{equation}
W \left[\varphi, g^{ik}\right] = \int \left(L - \frac 12 R\right) \sqrt{-g}
\; d^4 x
\end{equation}
where $g_{ik}$ and $R_{ik}$ are the metric and Ricci tensors respectively,
with the signature $(+ - - -)$, $g = det (g_{ik})$, and $R \equiv R^{i}_{i}$.
The field Lagrangian is an arbitrary function of two scalars,
\begin{equation}
L = L\left(w, \varphi\right),
\end{equation}
where $w^{2} = \varphi_{,i} \varphi^{,i}$ is the kinetic term of
$\varphi$-field.
Actually, the latter can be simplified at inflation. Indeed, the
inflationary condition (taken in the locally flat Friedmann geometry),
\begin{equation}
\gamma \equiv - \frac{\dot{H}}{H^{2}} = \frac{3(\rho + p)}{2\rho} = \frac{3
w^{2} M^{2}}{2(w^{2} M^{2} - L)} < 1,
\end{equation}
implies generally that
\[
w^2 M^{2} \equiv \frac{\partial L}{\partial (\ln w)} < - 2L,
\]
which just tells us about the validity of the Taylor decomposition of (4) over
small $w^{2}$:
\[
L=L(0,\varphi)+\frac{1}{2}w^{2}M^{2}(0,\varphi)+O(w^{4}),
\]
where $\rho\equiv w^2M^2-L$ and $p\equiv L$ are comoving density and
pressure of $\varphi$-field, $H=\frac{g_{,i}\varphi^{,i}}{6w g}$ is the
local Hubble factor. After redefining the field by a new one,
\[
\varphi \Rightarrow \int M (0, \varphi) d \varphi,
\]
we come to a standard form for the Lagrangian density at inflation which is
assumed further on:
\begin{equation}
L = - V (\varphi) + \frac{w^{2}}{2}.
\end{equation}
Here $V = V(\varphi)$ is the potential energy of $\varphi$-field.
A simple guess on the condition necessary to arrange inflation with a blue
S-spectrum arises when we address an example of the slow-roll approximation.
Under this approach the spectrum of created scalar perturbations $q_{k}$ is
straightforwardly related to the inflaton potential $V(\varphi) \simeq 3
H^{2}$ at the horizon-crossing:
\begin{equation}
q_{k}\simeq \frac{H}{2\pi\sqrt{2\gamma}} = \frac{H^2}{4\pi
H^{\prime}_{\varphi}},\;\;\;\; k=aH=\dot{a},
\end{equation}
where $a$ is the scale factor and dot denotes the time derivative. The wave
number $k$ increases with time as $a$ grows faster than $H^{-1}$ in any
inflationary expansion (see eq.(5)):
\[
\left(\ln\left(aH\right)\right)^{.}=\left(1-\gamma\right)H>0.
\]
Eq.(7) gives an evident prompt: by decreasing $H^\prime_{\varphi}$ for
$k>k_{cr}$, one gains power on short scales and thus realizes a blue
spectral slope.
Without loss of generality, we will assume that $V(\varphi)$ is a function
growing with $\varphi(>0)$ and attaining its local minimum at $\varphi = 0$.
It means that during the process of inflation the $\varphi$-field evolves to
smaller values. Hence, the necessary condition for a blue spectrum could be
any way of flattening the potential shape at smaller $\varphi< \varphi_{cr}$
to provide for a rise of $H^{2}/H^\prime_{\varphi}$ and keeping the
inflation still on ($H^{\prime}_{\varphi} < H/ \sqrt{2}$, cf. eqs.(5), (6)):
\begin{equation}
1-n_S\simeq \frac{\gamma}{H} \left(\frac{H^{2}}{H^{\prime}_{ \varphi}}
\right)^{\prime}_{\varphi} < 0.
\end{equation}
The latter equation leads to a broad-brush requirement of positive
potential energy at the local minimum point of the $\varphi$-field:
\begin{equation}
V_{0}\equiv V(0) > 0,
\end{equation}
which displays the existence of the effective $\Lambda$-term during the
period of inflation dominated by the residual (constant) potential energy:
\begin{equation}
V(\varphi<\varphi_{cr})\simeq V_{0}\equiv\Lambda\equiv 3H_0^2,
\end{equation}
where the characteristic value $\varphi_{cr}$ is determined as follows
\footnote{In most applications $\varphi_{cr}\sim 1$, see eq.(32).}:
\begin{equation}
V (\varphi_{cr}) = 2 V_{0}.
\end{equation}
This appearance of the {\it de Sitter}-type inflation (for $\varphi <
\varphi_{cr}$) results in a drastic difference with CI, which has eventually
assumed that $V_{0}=0$, making inflation at small $\varphi$ impossible in
principle. Obviously, the latter hypothesis of vanishing
potential energy at $\varphi=0$ has reduced the CI-model to a very special
case (from the point of view of eq.(9)), restricting the inflation dynamics
to only high values of the inflaton ($\varphi >1$). So, we may conclude that
the $\Lambda$-inflation based on eq.(9) presents a general class of the
fundamental inflationary models. In this sense they are more natural models
(the CI being of measure zero in the $V_{0}$-parameter), allowing
inflation also at small $\varphi$-values (less than the Planckian one).
Summarising, we see that under condition (9) we have two qualitatively
different stages of the inflationary dynamics separated by
$\varphi\sim\varphi_{cr}$. We will call them:
\begin{itemize}
\item the CI stage ($\varphi {}_{\sim }^{>}\varphi_{cr}$), where the
evolution is not influenced by the $\Lambda $-term and looks essentially
like in standard chaotic inflation, and
\item the dS stage ($\varphi <\varphi _{cr}$) predominated by the
$V_0$-constant.
\end{itemize}
The completion of the full inflation in this model is related to the $V_{0}$
-decay which is supposed to happen at some $\varphi^{\ast} < \varphi_{cr}$
\footnote{We do not discuss here possible mechanisms for such metastability
(it may be the coupling to other physical fields, a way of double- or
plateau-like inflations, etc.) and take the $\varphi^{\ast}$ value as an
arbitrary parameter of our model (allowing to recalculate $k_{cr}$ in Mpc).
Mind that in CI $\varphi^{\ast} \simeq \varphi_{cr}$.}. So, we deal with the
three-parameter model $(V_0, \varphi_{cr}, \varphi^{\ast})$ starting as CI
($\varphi >\varphi_{cr}$) and proceeding by dS-inflation at small $\varphi$
($\varphi^{\ast} < \varphi < \varphi_{cr}$).
As we know from the CI theory, smooth $V$-potentials generally create the
{\it red} $q_{k}$-spectra ($n_S<1$ for $\varphi > \varphi_{cr}$). On the
other hand, eq.(9) provides physical grounds for the {\it blue} spectra
generated in the dS period ($n_S>1$, cf. eq.(8)). Recall for comparison that
the spectrum of gravitational waves produced in any inflationary regime is
given by the universal formula (here both polarizations are taken into
account):
\begin{equation}
h_k=\frac H{\pi\sqrt 2}, \;\;\;\; k = a H,
\end{equation}
which generally ensures the red-like T-spectra as $H$ decreases in time for
$\rho+p>0$: $n_T = -2\gamma <0$ (see eq.(5)).
A trivial way to maintain eq.(9) is the introduction of an additive
$\Lambda$-term in the inflation potential. Keeping in mind only the simplest
dynamical terms, we easily come to a simple and rather general potential
form:
\begin{equation}
V=V_0+\frac 12 m^2\varphi^2+\frac 14\lambda\varphi^4,
\end{equation}
which may also be understood as a decomposition of $V(\varphi)$ over small
$\varphi$. Here, such a decomposition is a reasonable approach since the
inflation proceeds to small $\varphi \rightarrow 0$. Obviously, eq.(11) can
be explicitly inverted in this case:
\begin{equation}
\varphi^2_{cr}=\frac{4 V_0}{m^2+\sqrt{m^4+4\lambda V_0}}.
\end{equation}
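As a quick numerical illustration of eq.(14) (ours; the parameter values below are purely hypothetical, in Planck units $8\pi G=1$):
\begin{verbatim}
import math

def phi_cr(V0, m, lam):
    # Eq.(14): phi_cr^2 = 4 V0 / (m^2 + sqrt(m^4 + 4 lam V0))
    return math.sqrt(4.0 * V0 / (m**2 + math.sqrt(m**4 + 4.0 * lam * V0)))

V0, m, lam = 1e-12, 1e-6, 1e-13     # illustrative values only
print(phi_cr(V0, m, lam))           # ~1.35, an O(1) value, cf. eq.(32)
\end{verbatim}
In the limits $\lambda\rightarrow 0$ and $m\rightarrow 0$, eq.(14) reduces to $\varphi_{cr}^2 = 2V_0/m^2$ and $\varphi_{cr}^2 = 2\sqrt{V_0/\lambda}$, respectively.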
Also, we will use later the power-law potential
\begin{equation}
V=V_{0}+\frac{\lambda_{\kappa}}{\kappa}\varphi^{\kappa}=
V_{0}\left(1+y^{\kappa}\right),
\end{equation}
where $\kappa$ and $\lambda_{\kappa}$ are positive numbers ($\kappa\ge 2$,
$\lambda_2\equiv m$, $\lambda_4\equiv\lambda$), $\varphi_{cr}^{\kappa}=\kappa
V_0/\lambda_{\kappa}$, and $y=\varphi/\varphi_{cr}$.
Let us turn to the evolution and spectral properties of $\Lambda$-inflation
models.
\section{The background model}
Below, we consider dynamics under the condition (9).
The background geometry is classical, employing the 6-parameter Friedmann
group:
\begin{equation}
ds^{2}=dt^{2}-a^{2}d\vec{x}^{2}=a^{2}(d\eta^{2}-d\vec{x}^{2}),
\end{equation}
The functions of time $a$ and $\varphi$ are found either from the Einstein
equations:
\begin{equation}
H^{2} = \frac{1}{3} V + \frac{1}{6} \dot{\varphi}^{2},
\end{equation}
\begin{equation}
\dot{H} = - \frac{1}{2} \dot{\varphi}^{2},
\end{equation}
or equivalently, from the $\varphi$-field equation (with $H$ taken from
eq.(17)):
\begin{equation}
\ddot{\varphi} + 3 H \dot{\varphi} + V^{\prime}_{\varphi} = 0.
\end{equation}
Coming to the dimensionless quantities,
\[
h \equiv \frac{H}{H_{0}}, \;\;\;\; v = v(y) \equiv \left(\frac{V}{V_{0}}
\right)^{1/2},
\]
\begin{equation}
y \equiv \frac{\varphi}{\varphi_{cr}}, \;\;\;\; x \equiv H_0 \left(t -
t_{cr}\right), \;\;\;\; \epsilon \equiv \frac {2}{\varphi_{cr}},
\end{equation}
we can derive the first-order equation for the function $h=h(y)$\footnote{
Hereafter, the prime/dot will denote the derivative over $y/x$, i.e. the
normalized $\varphi/t$, respectively.}:
\begin{equation}
h=\frac{v}{\sqrt{1-\gamma/3}},\;\;\;\;
\sqrt{2\gamma}=\epsilon \frac{h^{\prime}}{h},
\end{equation}
and/or the second-order equation for $y = y(x)$:
\begin{equation}
\ddot{y}+3h\dot{y}+\frac{3}{2}\epsilon^{2}vv^{\prime}=0.
\end{equation}
Eq.(18) yields the relationship between two functions:
\begin{equation}
2\dot{y} = - \epsilon^{2} h^{\prime}.
\end{equation}
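For a numerical feel of the background dynamics, eq.(22) is readily integrated; the following Python sketch (ours; the initial data and parameters are illustrative) traces $\gamma$ along the trajectory and confirms that inflation holds at all field values for $\epsilon<\epsilon_0$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

kappa, eps = 4, 0.3        # V = V0 (1 + y^kappa), eps = 2/phi_cr < eps_0

def rhs(x, s):             # eq.(22), with h taken from eqs.(17), (20)
    y, ydot = s
    h = np.sqrt(1.0 + y**kappa + 2.0 * ydot**2 / (3.0 * eps**2))
    vvp = 0.5 * kappa * y**(kappa - 1)   # v v' = (kappa/2) y^(kappa-1)
    return [ydot, -3.0 * h * ydot - 1.5 * eps**2 * vvp]

sol = solve_ivp(rhs, [0.0, 200.0], [3.0, 0.0], max_step=0.1)
y, ydot = sol.y
h = np.sqrt(1.0 + y**kappa + 2.0 * ydot**2 / (3.0 * eps**2))
gamma = 2.0 * (ydot / (eps * h))**2      # gamma = -Hdot/H^2, cf. eq.(5)
print(gamma.max())         # stays well below 1: inflation never breaks
\end{verbatim}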
The inflation condition (5) allows one to find the inflationary solution of
eq.(21) via the decomposition over small $\gamma$:
\begin{equation}
h=v\left(1+\frac 16{\gamma}+o(\gamma)\right),
\end{equation}
\begin{equation}
a=-\frac{1}{H\eta}\left(1+\gamma+o(\gamma)\right),
\end{equation}
where
\begin{equation}
\sqrt{2\gamma}=\frac{\epsilon v^{\prime}/v}{1-\vartheta/3},\;\;\;\;
\vartheta\equiv\frac{\epsilon\left(\sqrt{\gamma/2} \right)^{\prime}}{1-
\gamma/3}=\frac{\left(\sqrt{2\gamma}\right)^{ \prime}_{\varphi}}{1-\gamma/3}.
\end{equation}
Making use of eqs.(23), (25) we may also present the derivatives of
the $y$-function with respect to the conformal time,
\begin{equation}
\frac{dy}{d\ln\vert\eta\vert} = \epsilon\sqrt{\frac{\gamma}{2}}
\left(1+\gamma+o(\gamma)\right),\;\;\;\; \vartheta=\frac{d\ln\sqrt{\gamma}}{
d\ln\vert\eta\vert} \left(1-\frac {2}{3}\gamma+o(\gamma )\right).
\end{equation}
We will also need for further analysis the $\varphi$-derivatives at the
horizon-crossing\footnote{%
Eq.(18) yields
\[
\frac{d\varphi}{d\ln a}=-\sqrt{2\gamma},\;\;\; \frac{d^{2}\varphi}{d(\ln
a)^{2}}=\frac{d\gamma}{d\varphi},
\]
the (-) sign implies that $\varphi$ decreases with time.},
\begin{equation}
\frac{d\varphi}{d\ln k}=-\sqrt{2\gamma}\left(1+\gamma+o(\gamma)
\right),\;\;\;\; \frac{d\ln\gamma}{d\ln k}=-2\vartheta\left(1+\frac{2}{3}
\gamma+ o\left(\gamma\right)\right),
\end{equation}
and the scattering potentials (cf.eqs.(52)),
\[
U\equiv\frac{d^{2}\left(a\sqrt{\gamma}\right)}{a\sqrt{\gamma}d \eta^{2}}%
=a^{2}H^{2}\left(2-\gamma-3\vartheta\left(1-\frac{ \gamma}{3}\right)^{2} +
\frac{1}{4}\epsilon^{2}\gamma^{\prime\prime}\right),
\]
\begin{equation}
U^{\lambda}\equiv\frac{d^{2}a}{ad\eta^{2}}=a^{2}H^{2}\left(2- \gamma\right)=
\frac{2}{\eta^{2}}\left(1+\frac {3}{2}\gamma+ o(\gamma)\right).
\end{equation}
Actually, eqs.(24)-(29) are true during the whole period of inflation based
on inequality (5); they describe the evolution along the attractor
inflationary separatrix towards which any solution of eqs.(17)-(19) tends
during the Universe expansion.
However, there is an assumption additional to the inflation condition (5),
known as the slow-roll approximation,
\begin{equation}
\vert\vartheta\vert < 1,
\end{equation}
that, when it works, simplifies the situation, allowing one to relate $\gamma$ and $y$
algebraically (see eqs.(26)) and thus to solve eqs.(21), (26) explicitly.
Both inequalities (5) and (30) can be rewritten, respectively, as
\begin{equation}
\epsilon\frac{v^{\prime}}{v}<1,\;\;\;\;{\rm and}\;\;\;\;
\epsilon^{2}\frac{\vert v^{\prime\prime}\vert}v<1.
\end{equation}
$\Lambda$-inflation proceeds with most difficulty near $y\sim 1$. Indeed, for the
power-law potential (15), $v=\sqrt{1+y^{\kappa} }$, the first inequality
(31) is met at the worst point $y\sim y_1 = (\kappa -1)^{\frac{1}{\kappa}}
\simeq 1$ only for small $\epsilon$,
\begin{equation}
\epsilon<\epsilon_{0}=\frac {2}{\kappa-1},\;\;\;\;{\rm or}\;\;\;\;
\varphi_{cr}\;{}^>_\sim\;(\kappa-1)\ge 1,
\end{equation}
that we assume hereafter. The second inequality (31) holds at any $y$ unless
$\kappa<3$. For the latter case the slow-roll approximation is broken in the
field interval
\begin{equation}
\exp\left(-\frac{1}{\kappa-2}\right)<y<1,
\end{equation}
where the left-hand side stays constant: $\frac{\epsilon^{2}v^{ \prime
\prime}}{v}\sim\epsilon^{2}$ (hence, the slow-roll approximation is
restored in the limit $\epsilon\rightarrow 0$).
So, for $\kappa=2$, the whole evolution for $y<1$ deviates strongly from the
slow-roll approximation. Before coming to it, we write down the evolution
for $\kappa\ge 3$.
\subsection{The $\Lambda\lambda$-Inflation}
The slow-roll approximation is met for $\kappa\ge 3$; then, under conditions
(5) and (30), eq.(23) is integrated explicitly:
\begin{equation}
a = \exp\left(-\int\frac{d\varphi}{\sqrt {2\gamma}}\right) \simeq
\gamma^{\frac{1}{6}}\exp\left(-\frac{2}{\epsilon^{2}} \int\frac{vdy}{v^{
\prime}}\right).
\end{equation}
Substituting here $v=\sqrt{1+y^\kappa}$, we have at the horizon-crossing:
\begin{equation}
\kappa\ge 3:\;\;\;\; y^{2}\left(1-\left(\frac{y_{2}}{y}\right)^{\kappa}
\right)=\Theta,
\end{equation}
where
$y_{2} = \left(\frac{2}{\kappa-2}\right)^{\frac{1}{\kappa}} \simeq 1$,
$\Theta = -\frac{\kappa\epsilon^{2}}{2} \ln K = \frac{\kappa-4 }{\kappa-2} -
\frac{\kappa\epsilon^{2}}{2} \ln K_{c}$,
$K = \frac{a}{\gamma^{\frac{1}{6}}} = \left(\frac {k}{k_{2}} \right) \left(
\frac{y_2}{y}\right)^{\frac{\kappa-1}{3}} \left( \frac{\kappa/(\kappa-2)}{1+
y^{\kappa}}\right)^{\frac {1}{6}} \sim \frac{k}{k_{2}}$,
$K_{c} = \left(\frac{k}{k_{cr}}\right) \left(\frac{1}{y}\right)^{\frac{
\kappa-1}{3}} \left(\frac{2}{1+y^{\kappa}} \right)^{\frac {1}{6}} \sim
\frac{k}{k_{cr}}$.
Evidently,
\[
\frac{d\ln K_{(c)}}{d\ln k} = 1+\gamma + \vartheta /3+o(\gamma)
+o(\vartheta)\simeq 1,
\]
\[
y\simeq\Biggl\{
\begin{array}{lcl}
\Theta^{\frac {1}{2}}, & \; & y>y_{2}, \\
\left(\frac{2}{\left(\kappa-2\right)\vert\Theta\vert}\right)^{\frac{1}{
\kappa-2}}, & \; & y<y_{2}.
\end{array}
\]
The transition period between these two asymptotics, $\vert\Theta
\vert^{<}_{\sim}1$, is pretty small in $y$-space,
\[
\vert y-y_{2}\vert<\frac {1}{\kappa}:\;\;\;\; y\simeq y_{2} + \frac{1}{
\kappa y_{2}}\Theta\simeq 1 - \frac{\epsilon^{2}}{2}\ln K,
\]
however, it is big in the corresponding frequency band (cf.eq.(32)):
\begin{equation}
\vert\ln K\vert < \frac{2}{\kappa\epsilon^2} \left({}^{>}_{\sim}\frac{1}{
\epsilon}\right).
\end{equation}
An interesting physical case here is that of a self-interacting field,
which we call $\Lambda\lambda$-inflation:
\begin{equation}
\kappa=4:\;\;\;\;\;y^{2}\simeq\sqrt{1+(\epsilon^{2}\ln K)^{2}} -
\epsilon^{2}\ln K,
\end{equation}
where $K=K_{c} = \frac{k}{k_{cr} y}(\frac {2}{1+y^{4}})^{\frac{1}{6}}$.
Recall that the $\epsilon$-parameter should not exceed unity if we want to
keep inflation everywhere.
\subsection{The $\Lambda m$-Inflation}
The case of a massive field ($\kappa=2$, $v=\sqrt{1+y^2}$) violates the
slow-roll condition and requires more careful investigation.
The slow-roll approximation works well for $y^{>}_{\sim} 1$, but is broken
at small $y$ as $\frac{v^{\prime\prime}}{v}=v^{-4}\sim 1$ for $y < 1$ (see
eq.(33)). In the latter case $h\simeq 1$ and eq.(22) turns into a linear one,
presenting the $y$-function as a linear superposition of the {\it fast} (+)
and {\it slow} (-) exponents ($\sim e^{-1.5(1\pm p)x}\sim
\vert\eta\vert^{1.5(1\pm p)}$). This allows for a straightforward, i.e.
independent of the exponent amplitudes, derivation of the $U$-potential at
the dS stage (see eqs.(27),(29)):
\begin{equation}
y<1:\;\;\;\; U\equiv\frac{d^{2}(a\sqrt{\gamma})}{a\sqrt\gamma d \eta^{2}}
\simeq \frac{d^{3}y}{d\eta^{3}}\left(\frac{dy}{d\eta} \right)^{-1}\simeq
\frac{9p^{2}-1}{4\eta^{2}},
\end{equation}
where $p=\sqrt{1-\frac{2\epsilon^{2}}{3}}$.
The inflationary evolution proceeds in a non-oscillatory way for $\varphi <
\varphi_{cr}$ if
\begin{equation}
0<p<1,\;\;\;\;\varphi_{cr}\;{}^{>}_{\sim}1.6,
\end{equation}
that we will assume further on. With such a requirement the inflation is
guaranteed for any $\varphi$ (cf.eqs.(32)).
To find the exponent amplitudes for $y(<1)$ we have to match the full
inflationary separatrix at $y\sim 1$. To this end, let us exclude the
first-derivative term in eq.(22) by introducing a new variable $z = z(\eta)
\equiv ya$:
\begin{equation}
\frac{d^{2}z}{d^{2}\eta^{2}}-\tilde {U} z=0,
\end{equation}
and then approximate the $\tilde {U}$-function by a simple step-function:
\[
\tilde {U}\equiv\left(aH\right)^{2}\left(2-\gamma- \frac{3\epsilon^{2}}{
2h^{2}}\right) \simeq \frac{2}{\eta^{2}} \left(1-\frac{3\epsilon^{2}}{4v^{2}}
\right)\simeq\frac{1}{\eta^{ 2}}\Biggl\{
\begin{array}{lcl}
2, & \; & \eta<\eta_{3}, \\
\frac{9p^{2}-1}{4}, & \; & \eta>\eta_{3},
\end{array}
\]
where $\eta_{3}\simeq\eta_{cr}$. The solution of eq.(40) is then obtained
explicitly; matching the $z$-function and its first derivative at $%
\eta=\eta_{3} $ and taking into account that $H_{0}z\eta \rightarrow -1$ for
large $y$, we obtain at the dS stage (cf.eqs.(27)):
\begin{equation}
\omega>1:\;\;\;
y\simeq\omega^{-\frac{3}{2}}\left({\rm ch}\mu +\frac {1}{p}{\rm sh}\mu
\right),\;\;\;
\sqrt{\frac{\gamma}{2}}\simeq\frac{\epsilon}{p}\omega^{-\frac{3}{ 2}}
{\rm sh}\mu,\;\;\;
\vartheta \simeq\frac {3}{2}\left(1-p{\rm cth}\mu\right),
\end{equation}
where $\mu=\frac {3}{2} p\ln\omega\;{}^{>}_{\sim}\;p$, $\omega = \frac{\eta_{3}}{
\eta}\simeq\frac{\eta_{cr}}{\eta}\simeq \frac{k}{k_{cr}}$.
The fitting coefficients in eq.(41) describe a part ($y<1$) of the full
inflationary separatrix extending from large to small values of the
$\varphi $-field\footnote{ The fitting accuracy is quite satisfactory.
Say, in the slow-roll approximation $p \rightarrow 1$: $\frac{\eta_{3}}{
\eta_{cr}}=2^{-\frac {1}{6}}\simeq 1$ and $y\omega^{1.5(1-p)} = \sqrt {e}
\sim 1$. See the Appendix for more detail.}. We see that at the de Sitter
stage the function $\vartheta = \vartheta(\omega)>0$ varies slowly,
\begin{equation}
y<1:\;\;\;\;\;\;\;\vartheta\simeq\Biggl\{
\begin{array}{lcl}
\frac {3}{2}-\frac{1}{\ln\omega}, &\; &1^<_\sim \ln\omega<\frac{2}{3p} \\
\frac{\epsilon^{2}}{1+p}, & \; & \ln\omega^>_\sim\frac{2}{3p}
\end{array},
\end{equation}
and
\begin{equation}
\sqrt{2\gamma}=\frac{\epsilon y}{1-\frac{\vartheta}{3}}\;,\;\;\;\;\;
y\simeq\Biggl\{
\begin{array}{lcl}
\frac {3}{2}\omega^{-\frac {3}{2}}\ln\omega, & \; & 1_{\sim}^{<}\ln
\omega< \frac{2}{3p} \\
\frac{1+p}{2p}\omega^{\frac {3}{2}(p-1)}, & \; & \ln\omega\;^{>}_{ \sim}
\frac{2}{3p}
\end{array}.
\end{equation}
The field evolution approaches the slow exponent only for $\ln \omega>
\frac{2}{3p}\;\left(y<\frac{\exp\left(-\frac{1}{p}\right) }{p}\right)$:
\begin{equation}
y\ll 1:\;\;\;\;
y\simeq \frac{1+p}{2p}\omega^{\frac {3}{2}(p-1)}, \;\;\;\;
\sqrt{2\gamma}\simeq \frac{\epsilon}{p}\omega^{\frac{3}{2}(p-1)}.
\end{equation}
For $p\in \left(\frac 23, 1\right)$ the true evolution at the dS stage is
presented only by the bottom lines in eqs.(42), (43); this fact is used in
the Appendix to restore the whole inflation dynamics for
$\epsilon^2<\frac 56$.
Comparing eqs.(35) and (44) we see that at the dS stage $y$ decays as $\ln k$
for $\kappa\ge 3$, whereas it is a power law for $\kappa=2$. For the
intermediate case $2<\kappa <3$ the slow-roll approximation is violated only
within the limited interval (33), where the solution can be matched by
eq.(41) with $p=\sqrt{1 - \frac{\kappa\epsilon^{2}}{3}}$.
\section{The generation of primordial perturbations}
Below, we introduce the S and T metric perturbation spectra and find them
for $\Lambda$-inflation.
The linear perturbations over the geometry (16) can be irreducibly
represented in terms of the uncoupled Scalar, Vector and Tensor parts [1].
The vector perturbations are not induced in our case as scalar fields are
not their sources. Under the action (3) we are left with only the S and T
modes, and the new geometry looks as follows:
\begin{equation}
ds^{2} = (1+h_{00})\;dt^{2}+2ah_{0\alpha}\;dtdx^{\alpha}-a^{2}
(\delta_{\alpha\beta} + h_{\alpha\beta})\;dx^{\alpha}dx^{\beta},
\end{equation}
\[
\frac {1}{2}h_{\alpha\beta}=A\delta_{\alpha\beta}+B_{,\alpha
\beta}+G_{\alpha\beta},\;\;\;\; h_{0\alpha}=C_{,\alpha},
\]
where $G^{\alpha}_{\alpha}=G^{\beta}_{\alpha,\beta}=0$. The gravitational
potentials $h_{00}$, $A$, $B$, $C$ are coupled to the perturbation of scalar
field $\delta\varphi$, whereas $G_{ \alpha \beta}$ is the free tensor field.
The Lagrangian $L^{(2)}$ of the perturbation sector of the geometry (45) is
given by decomposing the integrand (3) up to the second order in the
perturbation amplitudes. Our further analysis of the S-sector follows a
general theory of the $q$-field ([4], [16]), the gravitational waves are
totally described by the gauge-independent 3D-tensor $G_{\alpha\beta}$ ([3],
[17], [18]).
Instead of considering gauge-dependent potentials ($h_{00}$, $A$, $B$, $C$,
$\delta\varphi$) we introduce the gauge-invariant canonical 4D-scalar $q$
uniquely fixed by the appearance of the S-part of the perturbative
Lagrangian $L^{(2)}$ similar to a massless field:
\begin{equation}
L^{(2)} = L(q,G_{\alpha\beta}) = \frac {1}{2}\alpha^{2}q_{,i} q^{,i}+
\frac{1}{2}G_{\alpha\beta,\gamma}G^{\alpha\beta,\gamma},
\end{equation}
where
$\alpha^{2}\equiv 2\gamma = \frac{\rho+p}{H^{2}} = \left(\frac{\dot{\varphi
}}{H}\right)^{2}$,
$\alpha = \frac{\dot{\varphi}}{H}$ (mind the choice of the sign for
$\alpha$ that we take coinciding with the sign of $\dot{\varphi}$). The
relation of $q$ to the original potentials takes the following form:
\[
\delta\varphi=\alpha\left(q+A\right),\;\;\;\;
a^{2}\dot{B}+C= \frac{\Phi+A}{H},
\]
\begin{equation}
\frac{1}{2}h_{00} = \gamma q + \left(\frac{A}{H}\right)^{.}, \;\;\;\;
\Phi=\frac{H}{a}\int a\gamma q dt,
\end{equation}
\[
\frac{\delta\rho}{\rho+p} = \frac{\dot{q}}{H} - 3(q+A),\;\;\;\; 4\pi
G\delta\rho_{c} \equiv\gamma H\dot{q}=a^{-2}\triangle\Phi,
\]
where $a$, $\varphi$, $H$, $\alpha$, $\gamma$, $\rho=\frac{1}{2} w^{2}+V$
and $p=\frac{1}{2}w^{2}-V$ are the background functions of time, $\Phi$ is
the "Newtonian" gauge-invariant gravitational potential related non-locally
to $q$, $\triangle\equiv\partial^{2}/\partial\vec{x}^{2}$ is the spatial
Laplacian ($\triangle=- k^{2}$ in the Fourier representation,
$\delta\rho_{c}$ is the comoving density perturbation). Any two potentials
taken from the triple $A$, $B$, $C$ are arbitrary functions of all
coordinates, which determines the gauge choice.
physical scalar perturbations is contained in the $q=q(t,\vec{x}) $ field,
the dynamical 4D-scalar propagating in the unperturbed Friedmann geometry
(i.e. independently of any gauge in eq.(45)).
The equations of motion of the $q$ and $G_{\alpha\beta}$ fields are two
harmonic oscillators:
\begin{equation}
\ddot{q}+\left(3H+\frac{\dot{\gamma}}{\gamma}\right)\dot{q}-a^{- 2}\triangle
q=0,
\end{equation}
\begin{equation}
\ddot{G_{\alpha\beta}}+3H\dot{G_{\alpha\beta}}-a^{-2}\triangle
G_{\alpha\beta}=0.
\end{equation}
A standard procedure to find the amplitudes generated is to perform the
secondary quantization of the field operators,
\begin{equation}
q =\int^{\infty}_{-\infty}d^{3}\vec {k}\left(a_{\vec k}q_{\vec k}+
a^{+}_{\vec{k}}q^{\ast}_{\vec k}\right),
\end{equation}
\[
G_{\alpha\beta}=\sum_{\lambda}\int^{\infty}_{-\infty}d^{3}\vec{k}
\left(a^{\lambda}_{\vec k}h^{\lambda}_{\vec{k}\alpha\beta}+
a^{ \lambda +}_{\vec{k}}h^{\lambda \ast}_{\vec {k}\alpha\beta}\right
),
\]
where $+/\ast$ denotes Hermit/complex conjugation, index $\lambda =+,\times$
runs two polarizations of gravitational waves with the polarization tensors
$c_{\alpha \beta}^{\lambda}(\vec{k})$, and
\begin{equation}
q_{\vec{k}}=\frac{\nu_{k}}{\left(2\pi\right)^{\frac{3}{2}}\alpha a}\;
e^{i\vec {k}\vec {x}},
\end{equation}
\[
h^{\lambda}_{\vec{k}\alpha\beta} =
\frac{\nu_{k}^{\lambda}}{\left(2\pi\right)^{\frac{3}{2}}a}\;
e^{i\vec{k}\vec{x}}c^\lambda_{\alpha\beta}\left(\vec{k}\right),
\]
\[
\delta^{\alpha\beta}c^\lambda_{\alpha\beta}\left(\vec k\right)= k^\alpha
c^\lambda_{\alpha\beta}\left(\vec k\right)=0, \;\;\;\;
c_{\alpha\beta}^{\lambda}(\vec{k})c^{\alpha\beta\lambda^{\prime}}
\left(\vec{k}\right)^\ast = \delta_{\lambda\lambda^{\prime}}.
\]
The time-dependent $\nu$-functions satisfy the respective Klein-Gordon
equations,
\begin{equation}
\frac{d^{2}\nu_{k}^{(\lambda)}}{d\eta^{2}}+\left(k^{2}-U^{(
\lambda)}\right)\nu^{(\lambda)}_{k}=0,
\end{equation}
with $U=U(\eta)\equiv\frac{d^{2}\left(\alpha a\right)}{\alpha ad\eta^{2}}$
for the $q$-field and $U^{\lambda} = U^{\lambda} (\eta)\equiv\frac{d^{2}
a}{ad\eta^{2}}$ for each polarization of the gravitational waves
$\nu^{\lambda}_{k}$. The standard commutation relations between the
annihilation and creation operators,
\[
\left[ a_{\vec{k}}a^{+}_{\vec{k}^{\prime}}\right]=\delta\left( \vec{k}-
\vec{k}^{\prime}\right),\;\;\;\; \left[ a^{\lambda}_{\vec {k}}
a^{\lambda^{\prime}+}_{\vec {k}^{ \prime}}\right] = \delta\left(\vec{k} -
\vec{k}^{\prime}\right) \delta_{\lambda \lambda^{\prime}},
\]
require the following normalization condition for each of the
$\nu$-functions:
\[
\nu_{k}^{(\lambda)}\frac{d\nu^{(\lambda)\ast}_{k}}{d\eta}-
\nu_{k}^{(\lambda)\ast}\frac{d\nu_{k}^{(\lambda)}}{d\eta}= i.
\]
Eqs.(46)-(52) specify the {\it parametric amplification effect}: the
production of the perturbations -- the phonons for the $S$-mode [4] and the
gravitons for the $T$-mode [3] -- in the process of the Universe expansion (the
latter is imprinted in the non-zero scattering potentials $U^{(\lambda)}$ in
eqs.(52)).
From the inflationary condition (5) one always finds $k\eta \rightarrow
-\infty$ for the early inflation (scales inside the horizon); therefore, the
microscopic vacuum states of the $q$ and $G_{\alpha\beta}$ fields mean the
positive-frequency choice for the initial $\nu$-functions:
\begin{equation}
k\vert\eta\vert\gg 1:\;\;\;\;
\nu_{k}^{(\lambda)} = \frac{\exp(-ik\eta)}{\sqrt{2k}}.
\end{equation}
So, the problem of the spontaneous creation of density perturbations and
gravitational waves is finally reduced to solving eqs.(52), (53) with
the effective potentials $U^{(\lambda)}$ taken from the inflationary
background regimes considered above.
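As a practical illustration (ours, not part of the original treatment), eqs.(52), (53) are easy to integrate numerically; the sketch below takes the dS-stage potential $U=(9p^2-1)/4\eta^2$ of eq.(38) and checks the super-horizon freeze-out of the mode amplitude against the Hankel-function asymptotics quoted before eq.(60):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import gamma as G

p, k = 0.8, 1.0            # p = (1 - 2 eps^2/3)^(1/2); k in units of 1/|eta|

def rhs(x, s):             # x = k|eta|; eq.(52) with U = (9p^2-1)/(4 eta^2)
    re, im, dre, dim = s
    u = (9.0 * p**2 - 1.0) / (4.0 * x**2)
    return [dre, dim, -(1.0 - u) * re, -(1.0 - u) * im]

x_i, x_f = 200.0, 1e-3     # deep inside / far outside the horizon
a0 = 1.0 / np.sqrt(2.0 * k)     # vacuum normalization, eq.(53)
s0 = [a0*np.cos(x_i), a0*np.sin(x_i), -a0*np.sin(x_i), a0*np.cos(x_i)]
sol = solve_ivp(rhs, [x_i, x_f], s0, rtol=1e-10, atol=1e-12)

nu = np.hypot(sol.y[0, -1], sol.y[1, -1])
numeric = nu * x_f**(1.5*p - 0.5)                     # frozen amplitude
analytic = G(1.5*p) * 2**(1.5*p - 1.0) / np.sqrt(np.pi * k)
print(numeric, analytic)   # the two agree to the integration accuracy
\end{verbatim}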
For the late inflation $k\eta\rightarrow 0$ (scales outside the horizon),
the perturbations become semiclassical since the fields get frozen
in time and thus acquire fixed phases (only the {\it growing} solutions of
eqs.(48),(49) survive in time)\footnote{ Here, the transfer from the
quantum (squeezed) to classical case occurs when one neglects the
{\it decaying} solutions of eqs.(48),(49) for $\eta\rightarrow 0$:
\[
k\vert\eta\vert < 1:\;\;\;\; q_{d}\sim\int^{0}\frac{d\eta}{a^{2}\gamma} =
\frac{1}{3\gamma} H^{2}\eta^{3}\left(1 + O(\gamma)\right),\;\;\;\;
G_d\sim\int^{0}\frac{d\eta}{a^{2}} = \frac {1}{3}H^{2}\eta^{3}
\left(1+O(\gamma)\right),
\]
and thus is left only with the growing ones (see eq.(54)). This procedure
turns the annihilation and creation operators into the $C$-numbers (where
the commutators vanish).},
\begin{equation}
k\vert\eta\vert\ll 1:\;\;\;\; q=q(\vec {x}),\;\;\;\; G_{\alpha\beta} =
G_{\alpha\beta}(\vec {x}).
\end{equation}
One can, therefore, treat these time-independent perturbation fields as
realizations of the classical random Gaussian fields with the following
power spectra:
\[
\langle q^{2}\rangle = \int^{\infty}_{0}q^{2}_{k}\frac{dk}{k}, \;\;\;\;
\langle G_{\alpha\beta}G^{\alpha\beta}\rangle=\int^{\infty}_{0} h^{2}_{k}
\frac{dk}{k},
\]
\begin{equation}
q_{k}=\frac{k^{\frac{3}{2}}\vert\nu_{k}\vert}{2\pi a\sqrt{\gamma }},\;\;\;\;
h_{k}=\frac{k^{\frac{3}{2}}\sqrt{\vert\nu^{+}_{k}\vert^{2}+\vert
\nu_{k}^{\times}\vert^{2}}}{\pi a\sqrt{2}}=\frac{k^{\frac{3}{2}}
\vert\nu_{k}^{\lambda}\vert}{\pi a}
\end{equation}
Here the $\nu$-functions are taken in the limit $\vert\eta\vert\ll k^{-1}$,
and the gravitational wave spectra in both polarizations are identical. The
local slopes and the ratio of the power spectra are found as follows:
\begin{equation}
n_{S}-1 \equiv 2\frac{d\ln q_{k}}{d\ln k},\;\;\;\; n_{T} \equiv 2\frac{d\ln
h_{k}}{d\ln k}, \;\;\;\; r \equiv \left(\frac{h_{k}}{q_{k}}\right)^{2} =
4\left(\gamma\vert \frac{\nu_{k}^{\lambda}}{\nu_{k}}\vert^{2}\right)_{k\vert
\eta\vert \ll 1}.
\end{equation}
Note that the quantities $q_k$, $h_k$, $n_S$, $n_T$, $r$ are functions
of the wavenumber only. For reference, we recall also the density
perturbation and Newtonian potential linked to the $q$-field in the
Friedmann Universe (cf. eqs.(47), (54)),
\[
k<aH:\;\;\;\;\;\;\;\; \Delta_k=\frac{2}{3}\left(\frac{k}{aH}%
\right)^2\Phi_k,\;\;\;\; \Phi_k=\Gamma q_k,
\]
where $\Delta_k$, $\Phi_k$ are the dimensionless spectra, respectively,
\[
\left\langle\left(\frac{\delta\rho_c}{\rho}\right)^2\right\rangle =
\int_0^\infty\Delta_k^2\frac{dk}{k},\;\;\;\; \langle \Phi^2\rangle=
\int_0^\infty\Phi_k^2\frac{dk}{k},
\]
and $\Gamma=\frac{H}{a}\int a\gamma dt = 1-\frac{H}{a}\int a\,dt$ is the
function of time ($\Gamma=(1+\beta)^{-1}=$ const for the power-law
expansion, $a\sim t^\beta$).
\section{The power spectra}
When it works, the slow-roll approximation allows for simple derivation of
the S-spectrum ($U^{(\lambda)}\simeq 2/\eta^2$, cf.eqs.(29)):
\begin{equation}
q_{k}\simeq\frac{H}{2\pi\sqrt{2\gamma}},\;\;\;\;
h_{k}=\frac{H}{\pi\sqrt{2}},\;\;\;\;k=aH,
\end{equation}
where $H=H_{0}v$, $\sqrt{2\gamma}\simeq\epsilon\frac{v^{\prime}}{ v}$,
$\vartheta\simeq\frac{1}{2}\epsilon^{2}\left(\frac{v^{ \prime}}{v}\right)^{
\prime}$. The spectra ratio and the local slopes are then the
following (see eqs.(28), (32), (56)):
\begin{equation}
r\simeq -2n_{T} = 4\gamma\simeq \frac{1}{2}\left(\frac{\epsilon \kappa
y^{\kappa-1}}{1+y^{\kappa}}\right)^{2} \le r_{max}=
\frac{1}{2}\left(\frac{\epsilon\left(\kappa-1\right)}{y_{1}} \right)^2
\simeq 2\left(\frac{\epsilon}{\epsilon_0}\right)^2,
\end{equation}
\[
n_S-1\simeq 2\left(\vartheta-\gamma\right)=f\left(y\right), \;\;\;\;
f_{-}\le f\left(y\right)\le f_{+},
\]
where
$f\left(y\right) = \frac{\kappa}{2}y^{\kappa-2}\left(\frac{\epsilon }{1+
y^{\kappa}}\right)^2\left(\kappa-1-\frac{\kappa+2}{2}y^\kappa \right)$,
$y_{\pm}=\left(\kappa-1\mp\kappa\sqrt{\frac{\kappa-1}{\kappa+2}}
\right)^{\frac{1}{\kappa}}$,
$f_{\pm}=f\left(y_{\pm}\right)=\frac{\left(\kappa-1\right)\left( \kappa+2
\right)}{12}\left(\frac{\epsilon}{y_{\pm}}\right)^2\left( \pm 2\sqrt{\frac{
\kappa-1}{\kappa+2}}-1\right)\simeq\pm\left( \frac{\epsilon}{\epsilon_0}
\right)^2$.
Eqs.(58) are true for $v=\sqrt{1+y^{\kappa}}$; the T-spectrum deviates
maximally from HZ (and the spectrum ratio reaches its maximum) at
$y_{1}\simeq (\kappa-1)^{\frac{1}{\kappa}}\simeq 1$; the S-spectrum achieves
its minimum and becomes exactly HZ one at
$y_{4} = \left(\frac{\kappa-1}{1+\frac{\kappa}{2}}\right)^{\frac{ 1}{
\kappa}}= y_{1}(\frac{2}{\kappa+2})^{\frac {1}{\kappa}}\simeq 1$,
it is the most red (blue) at $y_{-}$ ($y_{+}$); the points $y_1$ and
$y_4$ always lie inside the interval $\left(y_{+},y_{-}\right)$, while
the region (36) resides there only if $\kappa\le 8$. The equation
$f\left(y\right)=$ const $\in\left[f_{-},f_{+}\right]$ has two solutions:
one is located within the interval $\left[y_{+},y_{-}\right]$ where $r$ is
large, $\frac{r}{r_{max}}^>_\sim\left(\frac{\kappa+1}{3\kappa}\right)^2$ and
$r(n_S=1)\simeq r_{max}$; another is outside this interval where $r$ is
small, $\frac{r}{r_{max}}<1$ and $r(n_S=1)=0$.
So, for $\kappa\ge 3$ we have from eq.(35) the following asymptotics for the
power spectra:
\[
q_{k}^{2}\simeq\left( \frac{H_{0}}{\epsilon\pi\kappa}\right)^{2} \frac{
\left(1+y^{\kappa}\right)^{3}}{y^{2\kappa-2}}\simeq\frac{ \lambda_{\kappa}}{
12\pi^{2}}\Biggl\{
\begin{array}{lcl}
\kappa^{\frac{\kappa-4}{2}}\vert 2\ln K\vert^{\frac{\kappa+2}{2}}, & \; &
K<\exp \left(-\frac{2}{\kappa\epsilon^{2}}\right) \\
\left(\frac{V_{0}}{\lambda_{\kappa}}\right)^{\frac{\kappa-4}{ \kappa-2}
}\left(\left(\kappa-2\right)\ln K\right)^{\frac{2\kappa -2}{\kappa-2}}, & \;
& K>\exp\left(\frac{2}{\kappa\epsilon^{2}} \right)
\end{array}
,
\]
\[
h_{k}^{2}=\frac{H_{0}^{2}}{2\pi^{2}}\left(1+y^{\kappa}\right) =
\frac{1}{6\pi^{2}}\Biggl\{
\begin{array}{lcl}
\frac{\lambda_{\kappa}}{\kappa}\vert 2\kappa\ln K\vert^{\frac{ \kappa}{2}},
& \; & K<\exp\left(-\frac{2}{\kappa\epsilon^{2}} \right) \\
V_{0}, & \; & K>\exp\left(\frac{2}{\kappa\epsilon^{2}}\right)
\end{array}
.
\]
In the transition region (36) the ratio of the spectra is approximately
constant independent of the $\kappa$-index: $r\simeq 2\epsilon^{2}$ (it is a
factor $\epsilon^{-2}_0$ less than $r_{max}$).
For $\Lambda\lambda$-inflation the spectra are resolved explicitely (see
eq.(37)):
\begin{equation}
\kappa=4:\;\;\;\;
\begin{array}{l}
q_{k}\simeq\frac{1}{\pi}\sqrt{\frac{2\lambda}{3}}\left(
\epsilon^{-4}+\ln^{2}K\right)^{\frac{3}{4}}, \\
h_{k}=\frac{H_{0}}{\pi}\left(1+\frac{\ln K}{\sqrt{\epsilon^{-4} +\ln^{2}K}}
\right)^{-\frac{1}{2}},
\end{array}
\end{equation}
and
$y_{1}=3^{\frac 14}$,
$K_1=\exp\left(-\frac{1}{\sqrt{3} \epsilon^2}\right)$,
$y_{2}=y_{4}=1$,
$y_{-}=\frac{1}{y_{+}}= \left(\sqrt{2}+1\right)^{\frac{1}{2}}$,
$K_{\pm}=\exp\left(\mp \frac{1}{\epsilon^2}\right)$.
An example of the power spectra for $\epsilon = 0.3$ is shown in Fig.1.
Fig.2 clarifies the relation between $r$ and $n_S-1$ for any $\epsilon<1$.
We see there is no correlation between the blueness and large $r$: the
region of large $r$-values is located in the red and HZ sectors of
the S-spectrum.
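The curves of Fig.1 follow directly from eq.(59); a short Python sketch (our illustration, with arbitrary normalization $H_0=1$) reproduces them and the $r$-maximum quoted in the caption of Fig.2:
\begin{verbatim}
import numpy as np

eps, H0 = 0.3, 1.0
lam = 0.75 * eps**4 * H0**2   # lambda = 3 H0^2 eps^4/4, from V0 = 3 H0^2
lnK = np.linspace(-40.0, 40.0, 2001)     # lnK ~ ln(k/k_cr)

q = (1.0/np.pi)*np.sqrt(2.0*lam/3.0)*(eps**-4 + lnK**2)**0.75   # eq.(59)
h = (H0/np.pi)*(1.0 + lnK/np.sqrt(eps**-4 + lnK**2))**-0.5      # eq.(59)
r = (h/q)**2

print(r.max(), 1.5*np.sqrt(3.0)*eps**2)  # both ~0.234 = (3 sqrt(3)/2) eps^2
\end{verbatim}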
\begin{figure}
\epsfxsize=13cm
\centerline{\epsfbox{fig1.eps}}
\caption{The spectra of scalar perturbations $q_{k}$ (dotted
curve), tensor perturbations $h_{k}$ (dashed curve), and the
ratio between them $r^{\frac {1}{2}}\equiv h_{k}/q_{k}$ (dot-dashed
curve), in the $\Lambda\lambda$-inflation model with $\epsilon=0.3$.
The normalization is arbitrary; however, the ratio does not depend
on normalization and holds for the model parameter used.}
\end{figure}
\begin{figure}
\epsfxsize=13cm
\centerline{\epsfbox{fig2.eps}}
\caption{The relationship between $r$ and $n_S$ for $\Lambda$-inflation
($r_{max}=\frac{3\sqrt 3}2 \epsilon^2$, $r=-2n_T$).}
\end{figure}
Let us now turn to the case where the slow-roll approximation is broken.
For $\Lambda m$-inflation eqs.(57) are true except for the blue part of the
S-spectrum ($k>k_{cr}$), where they must be corrected. Here eqs.(52), (53) are
solved explicitly,
\[
y<1:\;\;\;\;
k^{\frac {3}{2}}\nu_{k}\simeq\frac{ik\sqrt{\pi x} }{2} H^{(1)}_{\frac{3}{2}p}
(x)\;\;
{}^{\longrightarrow}_{x\ll 1} \;\;
\frac{caH_{0}}{\sqrt{2}p}x^{\frac {3}{2}(1-p)},
\]
where $H^{(1)}_p(x)$ is the Hankel function, $x=k\vert\eta\vert$,
$c=\frac{p}{\sqrt{2\pi}}\Gamma\left(\frac {3}{2}p\right)2^{\frac{3 }{2}p}
=\frac{2^{3p/2}}{3\sqrt{\pi/2}}\Gamma(1+\frac{3}{2}p)$. Taking into account
the field asymptotics for $y\ll 1$ (see eq.(44)) we obtain the following
S-spectrum in the blue range:
\begin{equation}
k>k_{cr}:\;\;\;\; q_{k}\simeq\frac{cH_{0}}{2\pi\epsilon}\left(\frac{k}{k_{cr}
} \right)^{\frac 32(1-p)},\;\;\;\;n_S^{blue}=4-3p>1.
\end{equation}
As we see, the spectrum amplitude remains finite for $p \rightarrow
0$ ($n_S\rightarrow 4$).\footnote{This corrects the erroneous statement on the
divergence of $q_k$ at $p\rightarrow 0$ made in some previous publications.}
In most applications we usually have $n_S <3\;\;(p>\frac 13)$; in this case
the whole spectrum approximation for the $\Lambda m$-inflation looks as
follows:
\begin{equation}
q_k=\frac{H_0(1+y^2)^{\frac 12}(\tilde c+y^2)}{2\pi\epsilon y},
\end{equation}
where $\tilde c = \frac{c(1+p)}{p} = \frac{1+p}{\sqrt\pi}\Gamma \left(
\frac {3}{2}p\right)2^{\frac 32(p-1)}$ and $y$ is taken at horizon crossing
(see eq.(41)).
\section{The T/S effect in $\Lambda$-inflation}
A large T/S $\sim 1$ (when $k_{cr}\in (10^{-4}, 10^{-3})\,h\,$Mpc${}^{-1}$) is an intrinsic
property of $\Lambda$-inflation. Below, we demonstrate it straightforwardly
for the COBE angular scale (see [19], [20], eq.(1)).
Then the S and T are written as follows:
\begin{equation}
{\rm S} = \sum_{\ell=2}^{\infty} S_{\ell} \exp\left[- \left(\frac{2\ell +
1}{27}\right)^2\right],\;\;\; {\rm T} = \sum_{\ell=2}^{\infty} T_{\ell}
\exp\left[- \left(\frac{2\ell + 1}{27}\right)^2\right],
\end{equation}
where $S_{\ell}$, $T_{\ell}$ are the corresponding variances in the $\ell$th
harmonic component of $\Delta T/T$ on the celestial sphere,
\begin{equation}
S_{\ell} = \sum_{m=-\ell}^{\ell} \vert a_{\ell m}^{(S)} \vert^2, \;\;\;\;
T_{\ell}=\sum_{m=-\ell}^{\ell}\vert a_{\ell m}^{(T)}\vert^2, \;\;\;\;
\frac{\Delta T}{T}\left(\vec e\right)=
\sum_{\ell,m,S,T}a_{\ell m}^{(S,T)}Y_{\ell m}\left(\vec e\right).
\end{equation}
The calculations can be done for the instantaneous recombination,
$\eta=\eta_E$ [2],
\[
\frac{\Delta T}{T}\left(\vec e\right) = \left(\frac 14\delta_\gamma -
\vec {e}\vec {v}_b+\frac {1}{2} h_{00} \right)_E +\frac12 \int^0_E
\frac{\partial h_{ik}}{\partial\eta}e^ie^kdx, \;\;
e^i =(1, -\vec e),\; x\equiv \vert\vec{x}\vert=\eta_0-\eta,
\]
where the SW-integral makes the dominant contribution on large scales (see
eq.(45)), $\delta_\gamma$ and $\vec v_b$ are the photon density contrast and
baryon peculiar velocity, respectively. The mean $S_\ell$ and $T_\ell$
values seen by an arbitrary observer in the matter dominated Universe (e.g.
[21]) are explicitly related to the respective power spectra (see
eqs.(55)):
\begin{equation}
S_\ell = \frac{2\ell+1}{25}\int^{\infty}_0q_k^2 j^2_\ell\left(\frac{k}{k_0}
\right)\frac{dk}{k},
\end{equation}
\begin{equation}
T_\ell =\frac{9\pi^2}{16}\left(2\ell+1\right) \frac{\left(\ell+ 2\right)!}{
\left(\ell-2\right)!}\int^{\infty}_0h_k^2 I^2_\ell\left(\frac{k}{k_0}\right)
\frac{dk}{k},
\end{equation}
where $k_0=\eta_0^{-1}=\frac{H_0}{2}\simeq 1.6\times 10^{-4}h$ Mpc${}^{-1}$,
\[
j_\ell\left(x\right)=\sqrt{\frac{\pi}{2x}}J_{\ell+1/2}\left(x
\right),\;\;\;\; I_\ell (x) = \int_0^x\frac{J_{\ell+1/2}\left(x-y\right)}{
\left( x-y\right)^{5/2}}\frac{J_{5/2}\left(y\right)}{y^{3/2}}dy.
\]
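As a cross-check of eq.(64) (ours, not part of the original computation), note that for a flat spectrum $q_k=q_0$ the identity $\int_0^\infty j_\ell^2(x)\,dx/x = [2\ell(\ell+1)]^{-1}$ gives $S_\ell = \frac{(2\ell+1)\,q_0^2}{50\,\ell(\ell+1)}$; the Python sketch below reproduces this numerically:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import spherical_jn

def S_ell(ell, q0=1.0):
    # Eq.(64) with a flat spectrum q_k = q0, x = k/k0:
    integrand = lambda x: spherical_jn(ell, x)**2 / x
    val, _ = quad(integrand, 1e-8, 500.0, limit=2000)
    return (2*ell + 1) / 25.0 * q0**2 * val

for ell in (2, 5, 10):
    exact = (2*ell + 1) / (50.0 * ell * (ell + 1))
    print(ell, S_ell(ell), exact)   # numerical and analytic values agree
\end{verbatim}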
We have derived T/S for $\Lambda m$-inflation using the approximation (61).
The result is presented in Fig.3 as a function of two parameters of the
model: the spectrum index in the blue asymptotics $n_S^{blue}$ (see eq.(60))
and the critical scale $k_{cr}$ (in units $k_0$). A similar behaviour of T/S
is met for $\Lambda\lambda$-inflation.
\begin{figure}
\epsfxsize=13cm
\centerline{\epsfbox{fig3.eps}}
\caption{T/S as a function of $k_{cr}$ and $n_S^{blue}$ ($n_S$ in the blue
asymptotic) in the case of $\Lambda m$-inflation.}
\end{figure}
Actually, the two-arm structure of T/S is typical for any $\Lambda$
-inflation model: T/S reaches its maximum at $k_{cr}\sim k_{ COBE}\sim 10^{-3}h$
Mpc${}^{-1}$ and gradually decays in both the blue ($k_{cr}<k_{COBE}$) and far
red ($k_{cr}\gg k_{COBE}$) sectors of the S-mode. To be precise, the
T/S-maximum is achieved at the location of the $r$-maximum (where $\gamma$ is
largest and thus $\vartheta=0$). There (and in its vicinity) the S-spectrum slope
is pretty close to HZ for $\epsilon\ll \epsilon_0$ (cf.eqs.(32), (58)):
\begin{equation}
1-n_S^{(r_{max})}\simeq -n_T^{(r_{max})}=2\gamma_{max}\simeq \frac {1}{2}
r_{max}\simeq \left(\frac{\epsilon}{\epsilon_0}\right)^2 \ll 1.
\end{equation}
It is important that T/S remains large in a broad $k$-region including the
point where the S-spectrum is exactly HZ: $\frac{r_{n_S = 1}}{r_{max}}\simeq
\frac{4}{9}\left(1+ \frac{\kappa}{2}\right)^{\frac{2}{\kappa}} >\frac 49$.
\section{Discussion}
It may seem paradoxical that T/S can be as large as 1 for such a simple model
as $\Lambda$-inflation. However, it can be easily understood. In fact, the
model recalls a case of double inflation where the large T/S is generated on
the intermediate scales between the first and second stages. So, we can
assume that it is sufficient to evaluate T/S by the end of the first stage
($\varphi \sim\varphi_{cr}\sim 1$), where the slow-roll condition is marginally
applicable. Here (cf.eqs.(2), (32))
\begin{equation}
\frac TS \sim \varphi^{-2}\sim 1.
\end{equation}
Often T/S is presented as a function of the gravitational-wave-spectrum
index $n_T$ or the inflationary $\gamma$-parameter estimated at the given
scale, see eq.(2) (e.g. [22], [23], [24], [25], and others). We think this
formula is universal for most types of cosmic inflation. We can argue this by
plainly noting a clear physical equation,
\begin{equation}
\frac TS\simeq 3r,
\end{equation}
where $r$ is taken at the scale where the T/S is determined ($k_{COBE} \sim
10^{-3}h$ Mpc${}^{-1}$). The factor 3\footnote{%
Or a value close to 3, to be found more accurately by a special investigation
elsewhere. Eventually, it is proportional to the ratio of the effective
numbers of T and S spin projections on given spherical harmonics, see
eqs.(64), (65).} takes into account a higher ability of the T-mode to contribute
to $\Delta T/T$. We now see from eqs.(52)-(56) that $r$ is a number found in
the limit $k\vert\eta \vert\ll 1$:
\begin{equation}
k\vert\eta\vert\ll 1:\;\;\;\;
r=4\gamma \vert\frac{\nu_k^\lambda}{\nu_{k}}\vert^{2}.
\end{equation}
Assuming that the r.h.s. stays frozen outside the horizon ($k\vert \eta\vert
< 1$), we can estimate $r$ as the r.h.s. of eq.(69) at the inflationary
horizon-crossing time. Thus, we may conclude that
\begin{equation}
r\simeq 4\left(\gamma \vert\frac{\nu_k^\lambda}{\nu_k}\vert^2
\right)_{k\vert\eta\vert=1}\simeq 4\gamma{}_{_{k\vert\eta\vert =1}}.
\end{equation}
The latter is due to the fact that the functions $\nu_k^\lambda$ and $\nu_k$
are close to each other at the horizon crossing: they both start from the
same initial conditions (53) and obey the same equations inside the horizon
(see eqs.(52))\footnote{
The difference in their evolutions originates only because of various
effective potentials $U^{(\lambda)}$ entering eqs.(52); however both
potentials vanish for $k\vert\eta\vert>1$.}. Notice this argument is more
general than the slow-roll-condition validity: actually, according to
eqs.(51), (70) the $r$-number counts just the difference between the phase
space volumes of phonons and gravitons.
So, we see that large T/S is created whenever the $\gamma$-factor
approaches near-unity values. This may happen either at the end of inflation
(note that inflation stops for $\gamma=1$) or in numerous intermediate periods
during the inflationary regime where one type of inflation gives way to
another. Such transition periods can be caused by many factors: e.g.
a functional change of the dynamical potential in the course of
inflation (e.g. $\Lambda$-inflation), a peculiarity in the potential
energy shape (e.g. a non-analyticity, a step, a plateau, or a break of the first
or second derivative of $V(\varphi)$; see e.g. [26]), a change of the
inflaton field (e.g. double inflation), or any type of phase transition or
other evolutionary restructuring of the field Lagrangian that may slow
down, terminate, or interrupt the process of inflation.
Obviously, each particular realization of inflation leaves its own imprints in the
power spectra and requires special investigation. However, the issue of T/S
is a matter of a very generic argument: it is set by the inflationary ($\gamma$, $H$)
and/or spectral ($r$, $n_T$) parameters estimated in the appropriate
energy/scale region. The T/S value is totally independent of the local
$n_S$ and, thus, has nothing to do with the particular S-spectrum shape
produced in a given model.
The principal quantity for estimating T/S is thus the energy of the inflaton:
the Hubble parameter at the inflationary horizon-crossing time, $H$ $[GeV]$.
The motivation is the following: as the CGW amplitude is always about $H$
(cf.eq.(57)) and $q_k\sim10^{-5}$ (from the LSS originating from the
adiabatic S-mode), we have
\begin{equation}
\frac{{\rm T}}{{\rm S}}\simeq\frac 16\left(\frac{H}{q_{COBE}}\right)^2
=\left(\frac{H}{6\times 10^{13}GeV}\right)^2\left(\frac{10^{-5}}{ q_{COBE}}
\right)^2,
\end{equation}
where $q_{COBE}\equiv q_{k_{COBE}}$. So, measuring T/S provides vital
direct information on the physical energy scale at which the cosmic
perturbations have been created; a cosmologically noticeable T/S could be
achieved only if inflation occurred at sub-Planckian (GUT) energies, $H{}>
10^{13}GeV$. If the CDPs were generated at smaller energies (e.g. during the
electroweak transition) then T/S would vanish.
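As an illustrative check of the latter relation, $H=10^{14}GeV$ with $q_{COBE}=10^{-5}$ gives T/S$\,\simeq 2.8$, whereas $H=10^{13}GeV$ yields only T/S$\,\simeq 0.03$.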
The point we emphasize in this paper concerns $\Lambda $-inflation. It
brings about two distinctive signatures -- a wing-like S-spectrum and the
possibility of large T/S -- under quite a simple and natural assumption about
the potential energy of the inflaton: the existence in $V(\varphi )$ of a
{\it metastable dynamical constant} in addition to an {\it arbitrary
functional} $\varphi $-dependent term. It is obviously three independent
parameters that determine the degrees of freedom of any $\Lambda $-inflation
model. They can be, for instance, T/S and the local $n_S$ (at the COBE scale)
as well as $k_{cr}$ (the scale where the S-spectrum is at its minimum) or,
alternatively, the $r$-maximum and its position (the $k_1$ scale) as well as
$V_0$. In case T/S is large, we find quite a definite prediction locating
the $\Lambda $-inflation parameters near GUT energies
(see eq.(15)):
\[
\frac{{\rm T}}{{\rm S}}>0.1:\;\;\;\;
\begin{array}{rcl}
\sqrt{V_0} & \in & \left( \frac{\zeta ^{-\frac \kappa 2}}{\sqrt{\kappa/2}},
\zeta \right) \left( \frac{q_{COBE}}{10^{-5}}\right) \left( 7\times
10^{15}GeV\right) ^2 \\
\frac{\sqrt{\lambda _\kappa /3}}{q_{COBE}} & \in & \left( 10^{-\frac \kappa
2},\frac{\kappa}{2}\zeta \right) \left(\kappa-1\right)^{\frac{1-\kappa }2}
\left( 2\times 10^{18}GeV\right)^{2-\frac \kappa 2}
\end{array}
,
\]
where $\zeta \equiv 4\epsilon \left( \kappa -1\right) ^{\frac{\kappa -1}
\kappa }\simeq \frac{2(\kappa -1)10^{19}GeV}{\varphi _{cr}}\in (1,10)$;
recall these estimates assume only the condition T/S$>0.1$ (cf.eqs.(57),
(58), (68)).
\section{Conclusions}
Our conclusions are the following:
\begin{itemize}
\item[$\ast$] We introduce a broad class of elementary inflaton models
called $\Lambda $-inflation. The inflaton at the local-minimum point has
a {\it positive residual potential energy}, $V_0>0$. The hybrid inflation
model (at the intermediate evolutionary stage) is a particular case of $\Lambda
$-inflation; chaotic inflation is a measure-zero model in the family of
$\Lambda $-inflation models.
\item[$\ast$] The S-perturbation spectrum generated in $\Lambda $-inflation
has a non-power-law {\it wing-like shape with a broad minimum} where the
slope is locally HZ ($n_S=1$); it is blue, $n_S>1$, (red, $n_S<1$) on short
(large) scales. The T-perturbation spectrum remains always red with the
maximum deviation from HZ at the scale near the S-spectrum-minimum.
\item[$\ast$] The cosmic gravitational waves generated in
$\Lambda$-inflation contribute {\it maximally} to the SW
$\Delta T/T$-anisotropy, (T/S)${}_{max}\lesssim 10$, on scales where
the S-spectrum is slightly red or nearly HZ ($k\lesssim k_{cr}$). The
T/S remains small ($\ll 1$) in both the blue ($k>k_{cr}$) and far red
($k\ll k_{cr}$) S-spectrum asymptotics.
\item[$\ast$] {\it Three} independent arbitrary parameters determine the
fundamental $\Lambda $-inflation; they can be the T/S, $k_{cr}$ (the scale
where $n_S=1$), and $\sqrt{V_0}$ (the CDP amplitude at the $k_{cr}$ scale; a
large value of T/S is expected if $V_0^{\frac 14}\sim 10^{16}GeV$). This
provides considerable flexibility in fitting various observational tests within
dark matter cosmologies based on $\Lambda $-inflation.
\end{itemize}
{\it Acknowledgements} The work was partly supported by the INTAS grant
97-1192 and Swiss National Science Foundation grant 7IP 050163.96/1.
\newpage
\section*{APPENDIX: $\Lambda m$-inflation with $\epsilon ^2<0.9$}
Here, we consider the inflation model with $\kappa =2$ and
$p>\frac{2}{3}\;(\epsilon ^2<\frac 56)$.
Under the latter restriction, $\vartheta \simeq {\rm const}=\frac 32(1-p)$
during the whole dS stage ($y<1$, cf.eq.(42)) and decays as $\vartheta
\simeq \frac{3f}4\simeq -\frac{\epsilon ^2}{2y^2}$ for $y>1$. Making use of
eqs.(26) we find the following best fit for the whole $y$-evolution
(analytically exact in the limit $\epsilon \rightarrow 0$):
$$
\vartheta \simeq \frac 32\left( 1-\sqrt{1-f}\right) =
\frac{1.5f}{1+\sqrt{1-f}},\eqno(A1)
$$
$$
\sqrt{\frac{\gamma}{2}}\simeq \frac{\epsilon y}{\left( 1+y^2\right)
\left( 1+\sqrt{1-f}\right) }.\eqno(A2)
$$
where $f\equiv \frac{2\epsilon ^2}3\frac{1-y^2}{(1+y^2)^2}$. The
substitution of (A2) into eq.(27) brings about the explicit integration:
$$
\epsilon ^2\ln \left( \frac {v\eta}{\sqrt{2}\eta_{cr}}\right) \simeq J(\xi ),
\eqno(A3)
$$
where $\xi \equiv v^2\left( 1+\sqrt{1-f}\right) =v^2+\sqrt{v^4+\left(
1-p^2\right) \left( v^2-2\right) }$, $v^2=1+y^2$,
\[
J\left( \xi \right) \equiv \int_1\frac{\xi dy}y=\frac \xi 2-2+\frac 12\ln
\left[ \left( \frac{\xi -1-p}{3-p}\right) ^{1+p}\left( \frac{\xi -1+p}{3+p}
\right) ^{1-p}\left( \frac{2\xi +1-p^2}{9-p^2}\right) ^{\frac{1-p^2}2
}\right] .
\]
Obviously, the evolution goes from large $\xi =2y^2\left( 1+\frac{1+\epsilon
^2/6}{y^2}+O\left( \frac 1{y^4}\right) \right) >4$ to small $\xi =\left(
1+p\right) \left( 1+\frac{3-p}{2p}y^2+O\left( y^4\right) \right) <4$, and
$\xi _{cr}=4$. Accordingly, we have the following $y$-asymptotics from
eq.(A3):
$$
y^2=\Bigg \{
\begin{array}{lcl}
\epsilon ^2\ln \left( \frac{\omega _5\eta}{\eta_{cr}} \right) +\left( 1+p^2
\right)\ln \left( \frac{y_5}y\right) +1, & \; & y>1, \\
y_6^2\left( \frac{\omega _6\eta}{\eta_{cr}} \right) ^{3(1-p)}, & \; & y<1,
\end{array}
\eqno(A4)
$$
where
$\omega _5^{-1}=\sqrt{2}\exp \left( \frac 16\right)$,
$y_5=\left(\frac{3-p}{2}\right)^{\frac{(1+p)(3-p)}{4(1+p^2)}}
\left(\frac{3+p}2\right)^{\frac{(1-p)(3+p)}{4(1+p^2)}}$,
$\omega _6=\frac{\eta_{cr}}{\eta_3}=\frac 1{\sqrt{2}}(3+p)^{\frac{3+p}{6(1+p)
}}$,
$y_6=(2p)^{\frac p{1+p}}(1+p)^{\frac{p-3}4}\exp\left[\frac{3-p}{2(1+p)}
\right] $.
In the allowed region $p>\frac 23$, the coefficients $\omega _6$ and $y_6$
remain close to unity. In the slow-roll limit $(p\rightarrow 1)$, $\omega
_6=2^{\frac 16}$ and $y_6=\sqrt{e}$.
\newpage
\section*{References}
1. E.M. Lifshitz, Zh. Eksp. Teor. Fiz. {\bf 16}, 587 (1946).\\
2. R.K. Sachs, A.M. Wolfe, ApJ {\bf 147}, 73 (1967).\\
3. L.P. Grishchuk, Zh. Eksp. Teor. Fiz. {\bf 67}, 825 (1974). \\
4. V.N. Lukash, Zh. Eksp. Teor. Fiz. {\bf 79}, 1601 (1980).\\
5. A.D. Linde, Phys. Lett. B {\bf 129}, 177 (1983).\\
6. F. Lucchin, S. Matarrese, Phys. Rev. D {\bf 32}, 1316 (1985).\\
7. R.L. Davis, H.M. Hodges, G.F. Smoot, et al., Phys. Rev. Lett.
{\bf 69}, 1856 (1992).\\
8. A.D. Linde, Phys. Rev. D {\bf 49}, 748 (1994).\\
9. J. Garcia-Bellido, D. Wands, Phys. Rev. D {\bf 54}, 6040 (1996).\\
10. J. Garcia-Bellido, A. Linde, D. Wands, Phys. Rev. D {\bf 54}, 7181
(1996).\\
11. E.J. Copeland, A.R. Liddle, D.H. Lyth, et al., Phys. Rev. D
{\bf 49}, 6410 (1994).\\
12. V.N. Lukash, E.V. Mikheeva, Gravitation and Cosmology {\bf 2},
247 (1996).\\
13. B.J. Carr, J.H. Gilbert, Phys. Rev. D {\bf 50}, 4853 (1994).\\
14. J. Gilbert, Phys. Rev. D {\bf 52}, 5486 (1995). \\
15. A. Melchiorri, M.S. Sazhin, V.V. Shulga, N. Vittorio, to appear
in ApJ, preprint astro-ph/9901220 (1999).\\
16. V.N. Lukash, in: {\it Cosmology: The physics of
the Universe}, ed. by B.A. Robson et al., World Scientific, Singapore
(1996), p.213.\\
17. V.A. Rubakov, M.V. Sazhin, A.M. Veryaskin, Phys. Lett. B {\bf
115}, 189 (1982).\\
18. A.A. Starobinskii, Zh. Eksp. Teor. Fiz. {\bf 30}, 719 (1979).\\
19. G.F. Smoot, C.L. Bennett, A. Kogut et al., ApJ {\bf 396}, L1 (1992).\\
20. C.L. Bennett, A.J. Banday, K.M. Gorski et al., ApJ {\bf 464}, L1
(1996).\\
21. F. Lucchin, S. Matarrese, S. Mollerach, ApJ {\bf 401}, L49 (1992).\\
22. M.S. Turner, Phys. Rev. D {\bf 48}, 5539 (1993).\\
23. E.W. Kolb, S.L. Vadas, Phys. Rev. D {\bf 50}, 2479 (1994).\\
24. J.E. Lidsey, A.R. Liddle, E.W. Kolb et al., Rev. Mod. Phys. {\bf 69}, 373
(1997).\\
25. A.A. Starobinsky, Pis'ma Astron. Zh. {\bf 11}, 323 (1985).\\
26. G. Lesgourgues, D. Polarski, A.A. Starobinsky, to appear in
MNRAS, preprint astro-ph/9807019 (1999).
\end{document}
\section{Introduction}\label{Sec:Introduction}
Persistent homology, a method for studying topological features over changing scales, has received tremendous attention in the past decade \cite{Edelsbrunner:2002,Zomorodian:2005}. The basic idea is to measure the life cycle of topological features within a filtration, i.e., a nested family of abstract simplicial complexes, such as Vietoris-Rips complexes, \v{C}ech complexes, or alpha complexes \cite{Edelsbrunner:1994}. Thus, long-lived topological characteristics, which are often the intrinsic invariants of the underlying system, can be extracted; while short-lived features are filtered out. The essential topological characteristics of three-dimensional (3D) objects typically include connected components, tunnels or rings, and cavities or voids, which are invariant under the non-degenerate deformation of the structure. Homology characterizes such structures as groups, whose generators can be considered independent components, tunnels, cavities, etc. Their times of ``birth'' and ``death'' can be measured by a function associated with the filtration, calculated with ever more efficient computational procedures \cite{edelsbrunner:2010,Dey:2008,Dey:2013,Mischaikow:2013}, and further visualized through barcodes \cite{Ghrist:2008}, a series of horizontal line segments with the $x$-axis representing the changing scale and the $y$-axis representing the index of the homology generators. Numerous software packages, such as Perseus, Dionysus, and Javaplex \cite{javaPlex}, based on various algorithms have been developed and made available in the public domain.
As an efficient tool to {unveil topological invariants}, persistent homology has been applied to various fields, such as image analysis \cite{Carlsson:2008,Pachauri:2011,Singh:2008}, chaotic dynamics verification \cite{Mischaikow:1999,Kaczynski:2004}, sensor network \cite{Silva:2005}, complex network \cite{LeeH:2012,Horak:2009}, data analysis \cite{Carlsson:2009}, {geometric processing}\cite{Feng:2013}, and computational biology \cite{Kasson:2007,Gameiro:2013,Dabaghian:2012,YaoY:2009}. Based on persistent homology analysis, we have proposed molecular topological fingerprints and utilized them to reveal the topology-function relationship of biomolecules \cite{KLXia:2014c}. In general, persistent homology is devised as a robust but \emph{qualitative} topological tool and has been hardly employed as a precise \emph{quantitative} predictive {tool \cite{Adcock:2013,Bendich:2014}.}
To the best of our knowledge, persistent homology has not been applied to the study of fullerenes, special molecules comprised of only carbon atoms. The fullerene family shares the same closed carbon-cage structure, which contains only pentagonal and hexagonal rings. In 1985, Kroto et al.~\cite{Kroto:1985} proposed the first structure of C$_{60}$, which {was} then confirmed in 1990 by Kr$\ddot{a}$tschmer et al. \cite{Kratschmer:1990} by synthesizing macroscopic quantities of C$_{60}$. Enormous interest has been aroused by these discoveries. However, many challenges remain. Among them, finding the ground-state structure has been a primary target.
In general, two types of approaches are commonly used \cite{Fowler:1988,Manolopoulos:1991,Fowler:1995,Ballone:1990,Chelikowsky:1991,ZhangBL:1992a}. The first method is based on the geometric and topological symmetries of fullerene \cite{Fowler:1988,Manolopoulos:1991,Fowler:1995}. In this approach, one first constructs all possible isomers, and then chooses the best possible candidate based on the analysis of the highest-occupied molecular orbital (HOMO) energy and the lowest-unoccupied molecular orbital (LUMO) energy~\cite{Manolopoulos:1991}. In real applications, generating all possible isomers for a fullerene with a given atom count was nontrivial until the introduction of Coxeter's construction method~\cite{Coxeter:1971,Fowler:1988} and the ring spiral method~\cite{Manolopoulos:1991}. In Coxeter's method, the icosahedral triangulations of the sphere are analyzed to evaluate the possible isomer structures. This method is mathematically rigorous. However, practical applications run into issues with low-symmetry structures. On the other hand, based on the {spiral conjecture\cite{Fowler:1995}}, the ring spiral method simply lists all possible spiral sequences of pentagons and hexagons, and then winds them up into fullerenes. When a consistent structure is found, an isomer is generated; otherwise, the sequence is skipped. Although the conjecture breaks down for fullerenes with 380 or more atoms, the spiral method proves to be quite efficient \cite{Fowler:1995}.
For each isomer, its electronic structure can be modeled simply by the H\"{u}ckel molecular orbital theory \cite{Streitwieser:1961}, which is known to work well for planar aromatic hydrocarbons using standard C-C and C-H $\sigma$ bond energies. Similarly, the bonding connectivities in fullerene structures are used to evaluate orbital energies. The stability of the isomers, according to Manolopoulos \cite{Manolopoulos:1991}, can then be directly related to the calculated HOMO-LUMO energy gap. However, this model falls short for large fullerene molecules. Even for small structures, its prediction tends to be inaccurate. One possible reason is fullerene's special cage structure. Instead of {a planar shape}, the structure usually has local curvature, which jeopardizes the $\sigma$-$\pi$ orbital separation \cite{Fowler:1994,Fowler:1995}. To account for curvature contributions, a strain energy is considered. It is found that the strain energy reaches {its minimum when pentagonal faces} are as far away as possible from each other. This is highly consistent with the isolated pentagon rule (IPR) --- the most stable fullerenes are those in which all the pentagons are isolated \cite{Fowler:1995}.
Another approach to obtain ground-state structures for fullerene molecules is through simulated annealing \cite{Ballone:1990,Chelikowsky:1991,ZhangBL:1992a}. This global optimization method works well for some structures. However, if large energy barriers exist in the potential, the whole system is prone to being trapped in a metastable high-energy state. This happens because breaking the carbon bonds and rearranging the structure requires a huge amount of energy. A revised method is to start the system from a special face-dual network and then employ the tight-binding potential model \cite{ZhangBL:1992a,ZhangBL:1992b}. This modified algorithm manages to generate the C$_{60}$ structure of $I_h$ symmetry that has the HOMO-LUMO energy gap of 1.61 eV, in contrast to 1.71 eV obtained by using the ab initio local-density approximation.
In this paper, persistent homology is, for the first time, employed to {quantitatively} predict the stability of the fullerene molecules. The ground-state structures of a few small fullerene molecules are first studied using a distance based filtration process. Essentially, we associate each carbon atom of a fullerene with an ever-increasing radius and thus define a Vietoris-Rips complex. The calculated Betti numbers (i.e., ranks of homology groups), including $\beta_0$, $\beta_1$ and $\beta_2$, are provided in the barcode representation. To further exploit the persistent homology, we carefully discriminate between the local short-lived and global long-lived bars in the barcodes. We define an average accumulated bar length as the negative arithmetic mean of $\beta_2$ bars. As the local $\beta_2$ bars represent the number of cavities of the structure, when $\beta_2$ becomes larger, interconnectedness (and thus stability) tends to increase, and relative energy tends to drop. Therefore, the average accumulated bar length indicates the level of a relative energy. We validate this hypothesis with a series of ground-state structures of small fullerenes. It is found that our average accumulated bar length can capture the energy behavior remarkably well, including an anomaly in fullerene C$_{60}$ energy. Additionally, we explore the relative stability of fullerene isomers. The persistence of the Betti numbers is calculated and analyzed. Our results are validated with the total curvature energies of two fullerene families. It is observed that the total curvature energies of fullerene isomers can be well represented with their lengths of the long-lived Betti-2 bars, which indicates the sphericity of fullerene isomers. For fullerenes C$_{40}$ and C$_{44}$, correlation coefficients up to 0.956 and 0.948 are attained in the distance based filtration. Based on the flexibility-rigidity index (FRI) \cite{KLXia:2013d,KLXia:2013f,KLXia:2014b}, {a correlation matrix based filtration process is proposed to validate our findings}.
The rest of this paper is organized as follows. In Section \ref{Sec:theory}, we discuss the basic persistent homology concepts, including simplices and simplicial complexes, chains, homology, and filtration. Section \ref{Sec:algorithm} is devoted to the description of algorithms. The alpha complex and Vietoris-Rips complex are discussed in some detail, including filtration construction, metric space design, and persistence evaluation. In Section \ref{sec:application}, persistent homology is employed in the analysis of fullerene structure and stability. After a brief discussion of fullerene structural properties, we elaborate on their barcode representation. The average accumulated bar length is introduced and applied to the energy estimate of the small fullerene series. By validating with total curvature energies, our persistent homology based quantitative predictions are shown to be accurate. Fullerene isomer stability is also analyzed by using the new correlation matrix based filtration. This paper ends with a conclusion.
\section{Rudiments of Persistent Homology} \label{Sec:theory}
As representations of topological features, the homology groups are abstract abelian groups, which may not be robust or {able to provide} continuous measurements. Thus, practical treatments of noisy data require the theory of persistent homology,
which provides {continuous measurements} for the persistence of topological structures, allowing both quantitative comparison and {noise removal} in topological analyses.
{The concept was introduced by Frosini and Landi~\cite{Frosini:1999} and Robins~\cite{Robins:1999}, and in the general form by Zomorodian and Carlsson~\cite{Zomorodian:2005}. Computationally, the first efficient algorithm for Z/2 coefficient situation was proposed by Edelsbrunner et al.~\cite{Edelsbrunner:2002} in 2002.}
\subsection{Simplex and Simplicial Complex}
For discrete surfaces, i.e., meshes, the commonly used homology is {called simplicial homology}. To describe this notion, we first present a formal description of the meshes, the common discrete representation of surfaces and volumes. Essentially, meshing is a process in which a geometric shape is decomposed into elementary pieces called cells, the simplest of which are called \emph{simplices}.
\paragraph{Simplex}
{\emph{Simplices}} are the simplest polytopes in a given dimension, as described below. {Let $v_0,v_1,\ldots,v_p$ be $p\!+\!1$ affinely independent points in a linear space. A $p$-simplex $\sigma_p$ is the convex hull of those $p\!+\!1$ vertices}, denoted as {$\sigma_p={\rm convex}<v_0,v_1,...,v_p>$} or shortened as {$\sigma_p=<v_0,v_1,...,v_p>$}.
{A formal definition can be given as,}
\begin{eqnarray}
\sigma_p=\left\{v \mid v=\sum\limits_{i=0}^p\lambda_iv_i, \sum\limits_{i=0}^p\lambda_i=1, 0\leq\lambda_i\leq1, \forall i \right\}.
\end{eqnarray}
\begin{figure
\vspace{-.1in}
\centering
\includegraphics[keepaspectratio,width=3.0in]{figure1.pdf}
\caption{Illustration of a 0-simplex, 1-simplex, and 2-simplex in the first row. The second row shows a simple 0-cycle, 1-cycle and 2-cycle.}
\label{fig:simplex}
\vspace{-.1in}
\end{figure}
{The most commonly used simplices in $\mathbb{R}^3$ are 0-simplex (vertex), 1-simplex (edge), 2-simplex (triangle) and 3-simplex (tetrahedron) as illustrated in Fig. \ref{fig:simplex}.}
An $m$-face of $\sigma_p$ is the $m$-simplex spanned by a subset of {$m\!+\!1$} of its vertices, where $0\leq m\leq p$. For example, an edge has two vertices as its 0-faces and one edge as its 1-face. Since the number of subsets of a set with $p\!+\!1$ vertices is $2^{p\!+\!1}$, there are a total of $2^{p\!+\!1}-1$ (non-empty) faces in $\sigma_p$. {All the faces are proper except for $\sigma_p$ itself.}
Note that polytope shapes can be decomposed into cells other than simplices, such as hexahedra and pyramids. {However, as non-simplicial cells can be further decomposed, we can, without loss of generality,} restrict our discussion to shapes decomposed into simplices, as we describe next.
\paragraph{Simplicial Complex}
With simplices as the basic building blocks, we define a \emph{simplicial complex} $K$ as a finite collection of simplices that meet the following two requirements,
\begin{itemize}
\item Containment: Any face of a simplex from $K$ also belongs to $K$.
\item Proper intersection: The intersection of any two simplices $\sigma_i$ and $\sigma_j$ from $K$ is either empty or a face of both $\sigma_i$ and $\sigma_j$.
\end{itemize}
Two $p$-simplices $\sigma^{i}$ and $\sigma^{j}$ are \emph{adjacent} to each other if they share a common face. The boundary of $\sigma_p$, denoted as $\partial{\sigma_p}$, is the union (which can be written as a formal sum) of its $(p\!-\!1)$-faces. Its interior is defined as the set containing {all non-boundary points}, denoted as $\sigma_p-\partial{\sigma_p}$. We define a boundary operator for each $p$-simplex spanned by vertices $v_0$ through $v_p$ as
\begin{eqnarray}
\partial_p<v_0,...,v_p>=\sum_{i=0}^p<v_0,...,\hat{v_i},...,v_p>,
\end{eqnarray}
where $\hat{v_i}$ indicates that $v_i$ is omitted {and the $Z/2$ coefficient set is employed}. It is the boundary operator that creates the nested topological structures and the
homomorphism among them as described in the next section.
If the vertex positions in the ambient linear space can be ignored or do not exist, the containment relation among the simplices (as finite point sets) defines an \emph{abstract simplicial complex}.
\subsection{Homology}\label{Homology}
A powerful tool in topological analysis is {homology}, {which represents certain structures in the meshes by algebraic groups to describe their topology}. For regular objects in 3D space, essential topological features are connected components, tunnels and handles, and cavities, which are exactly described by the 0th, 1st, and 2nd {homology groups}, respectively.
\paragraph{Chains}
The shapes to be mapped to homology groups are constructed from \emph{chains} defined below.
Given a simplicial complex (e.g., a tetrahedral mesh) $K$, which, roughly
speaking, {is a concatenation of $p$-simplices}
, we define a $p$-chain $c=\sum_i a_i \sigma_i$ as a formal linear
combination of all $p$-simplices in $K$, {where $a_i\in Z/2$ is}
$0$ or $1$ and $\sigma_i$ is a $p$-simplex. {Under such a definition, a
0-chain is a set of vertices, a 1-chain is a set of line segments joining vertices, a 2-chain is a set of triangles bounded by line segments, and a 3-chain is a set of tetrahedra bounded by triangle faces.}
We extend the boundary operator $\partial_p$ for each $p$-simplex to a linear operator applied to chains, i.e., the extended operator meets the following two linearity conditions,
\begin{eqnarray}
\begin{aligned}
\partial_p(\lambda c)&=\lambda\partial_p(c),\\
\partial_p(c_i+c_j)&=\partial_p(c_i)+\partial_p(c_j),
\end{aligned}
\end{eqnarray}
where $c_i$ and $c_j$ are both chains and $\lambda$ is a constant, and all arithmetic is for modulo-2 integers, in which $1+1=0$.
An important property of the boundary operator is {that the following composite operation is the zero map},
\begin{eqnarray}\label{BoundaryOperator}
\partial_p\circ\partial_{p+1}=0,
\end{eqnarray}
which immediately follows from the definition.
{Take the 2-chain} $c=f_1+f_2$ as an example, which represents a membrane formed by two triangles, $f_1=<v_1, v_2, v_3>$ and $f_2=<v_3, v_2, v_4>$. The boundary of $c$ is a 1-chain, which turns out to be a loop,
\begin{eqnarray}
\begin{aligned}
\partial_2(c)&=<v_1,v_2>+ <v_2,v_3> + <v_3, v_1> + <v_3, v_2> + <v_2, v_4> + <v_4, v_3>\\
&=<v_1,v_2>+ <v_3, v_1> + <v_2, v_4> + <v_4, v_3>.
\end{aligned}
\end{eqnarray}
The boundary of this loop is thus
\begin{eqnarray}
\begin{aligned}
\partial_1\circ\partial_2(c)&= \partial_1 (<v_1,v_2>+ <v_3, v_1> + <v_2, v_4> + <v_4, v_3>)\\
&= v_1 + v_2 +v_2 + v_4 + v_4 + v_3 + v_3 + v_1 = 0.
\end{aligned}
\end{eqnarray}
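The mod-2 arithmetic above is straightforward to mechanize. A minimal illustrative sketch in Python (ours; each simplex is assumed to be stored as a sorted tuple of vertex labels, which suffices since orientation is irrelevant over $Z/2$) reproduces both computations:
\begin{verbatim}
from itertools import combinations

def boundary(simplex):
    # Mod-2 boundary: the set of codimension-1 faces of a simplex.
    return set(combinations(simplex, len(simplex) - 1))

def boundary_chain(chain):
    # Boundary of a chain (a set of simplices); symmetric difference
    # implements chain addition with Z/2 coefficients (1 + 1 = 0).
    result = set()
    for s in chain:
        result ^= boundary(s)
    return result

c = {(1, 2, 3), (2, 3, 4)}                 # the 2-chain f1 + f2
print(boundary_chain(c))                   # four edges; (2, 3) cancels
print(boundary_chain(boundary_chain(c)))   # empty set: boundary of boundary
\end{verbatim}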
\paragraph{Simplicial homology}
Simplicial homology is built on the \emph{chain complex} {associated with} the simplicial complex. A chain complex is a sequence of abelian groups $(C_0, C_1, \dots, C_n)$ connected by homomorphisms (linear operators) $\partial_p$, such that $\partial_p \circ \partial_{p+1} = 0$ as in Eq.(\ref{BoundaryOperator}).
\begin{eqnarray}
\cdots\xlongrightarrow{\partial_{p+1}}C_{p}\xlongrightarrow{\partial_{p}}C_{p-1}
\xlongrightarrow{\partial_{p-1}}\cdots\xlongrightarrow{\partial_{2}}C_{1}\xlongrightarrow{\partial_{1}}C_{0}
\xlongrightarrow{\partial_{0}}\emptyset.
\end{eqnarray}
The chain complex in the definition of simplicial homology is formed by $C_p$, the space of all $p$-chains, and $\partial_p$, the boundary operator on $p$-chains. Since $\partial_p\circ \partial_{p+1} = 0$, the image of the boundary operator on $(p\!+\!1)$-chains is a subset of the kernel of the boundary operator on $p$-chains.
The $p$-chains in the kernel of the boundary homomorphisms $\partial_p$ are called \emph{$p$-cycles} ($p$-chains without boundary) and the $p$-chains in the image of the boundary homomorphisms $\partial_{p+1}$ are called $p$-boundaries.
The $p$-cycles form an abelian group (with group {operation} being the addition of chains) called cycle group, denoted as $Z_p={\rm Ker}\ \partial_p$. The $p$-boundaries form another abelian group called boundary group, denoted as $B_p={\rm Im}\ \partial_{p+1}$.
Thus, $p$-boundaries are also $p$-cycles as shown in Fig. \ref{fig:Homology}. As the $p$-boundaries form a subgroup of the cycle group, the quotient group can be constructed through cosets of $p$-cycles, i.e., by equivalence classes of cycles. The $p$-th homology, denoted as $H_p$, is defined as the quotient group,
\begin{eqnarray}
\begin{aligned}
H_p&={\rm Ker}\ \partial_p/{\rm Im}\ \partial_{p+1}\\
&=Z_p/B_p,
\end{aligned}
\end{eqnarray}
where ${\rm Ker}\ \partial_p$ is the collection of $p$-chains with empty boundary and ${\rm Im}\ \partial_{p+1}$
is the collection of $p$-chains that are boundaries of $p+1$-chains.
\begin{figure
\vspace{-.1in}
\centering
\includegraphics[keepaspectratio,width=5.0in]{figure2.pdf}
\caption{Illustration of boundary operators, and chain, cycle and boundary groups in $\mathbb{R}^3$. Red dots stand for empty sets.}
\label{fig:Homology}
\vspace{-.1in}
\end{figure}
{Noting that all homology groups with $p > 3$ are trivial for meshes in $\mathbb{R}^3$}, we only need chains, cycles and boundaries of dimension $p$ with $0\leq p \leq 3$. See Fig. \ref{fig:Homology} for an illustration.
{We illustrate simplices and cycles, including the 0-cycle, 1-cycle, and 2-cycle, in Fig. \ref{fig:simplex}. Basically,} an element in the $p$-th homology group is an equivalence class of $p$-cycles. One of these cycles $c$ can represent any other $p$-cycle that can be ``deformed'' through the mesh to $c$, because any other $p$-cycle in the same equivalence class differs from $c$ by a $p$-boundary $b=\partial (\sigma_1+\sigma_2+\dots)$, where each $\sigma_i$ is a $p\!+\!1$-simplex. Adding the boundary of $\sigma_i$ has the effect of deforming $c$ to $c+\partial\sigma_i$ by sweeping through $\sigma_i$. For instance, a $0$-cycle $v_i$ is equivalent to $v_j$ if there is a path $<v_i, v_{k1}> + <v_{k1},v_{k2}> + \dots + <v_{kn}, v_j>$. Thus each generator of the $0$th homology (like a basis vector in a basis of the linear space of the $0$th homology) represents one connected component. Similarly, $1$-cycles are loops, and
$1$st-homology generators represent independent nontrivial loops, i.e., separate tunnels;
$2$-homology generators are independent membranes, each
enclosing one cavity of the 3D object.
Define $\beta_p={\rm rank}(H_p)$ to be the $p$-th Betti number. For a simplicial complex in 3D, $\beta_0$ is the number of connected components;
$\beta_1$ is the number of tunnels; and
$\beta_2$ is the number of cavities.
As $H_p$ is the quotient group between $Z_p$ and $B_p$, we can also compute Betti numbers through,
\begin{equation}
{\rm rank}(H_p) = {\rm rank}(Z_p) - {\rm rank}(B_p).
\end{equation}
Note, however, $H_p$ is usually of much lower rank than either $Z_p$ or $B_p$.
\subsection{Persistent Homology}
Homology generators identify the tunnels, cavities, etc., in the shape, but as topological invariants, they omit metric measurements by definition. However, in practice, one often needs to compare the sizes of tunnels, for instance, to find the narrowest tunnel, or to {remove} tiny tunnels as topological noise. Persistent homology is a method of reintroducing metric measurements to the topological structures~\cite{Edelsbrunner:2002,Zomorodian:2005}.
The measurement is introduced as an index $i$ to a sequence of nested topological spaces $\{\mathbb{X}_i\}$. Such a sequence is called a \emph{filtration},
\begin{equation}
\emptyset = \mathbb{X}_0\subseteq \mathbb{X}_1\subseteq \mathbb{X}_2\subseteq \cdots \subseteq \mathbb{X}_m = \mathbb{X}.
\end{equation}
Since each inclusion induces a mapping of chains, it induces a linear map for homology,
\begin{equation}
\emptyset= H(\mathbb{X}_0)\rightarrow H(\mathbb{X}_1)\rightarrow H(\mathbb{X}_2)\rightarrow \cdots \rightarrow H(\mathbb{X}_m) = H(\mathbb{X}).
\end{equation}
\begin{figure
\begin{center}
\vspace{-0.in}
\includegraphics[keepaspectratio,width=5.0in]{figure3.pdf}
\vspace{-0.in}
\end{center}
\caption{Illustration of the birth and death of a homology generator $c$}
\label{fig:Filtration}
\end{figure}
The above sequence describes the evolution of the homology generators. We follow the exposition in Ref. \cite{Munch:2013}
and denote the composition mapping from $H(\mathbb{X}_i)$ to $H(\mathbb{X}_j)$ by
$\xi_i^j: H(\mathbb{X}_i)\rightarrow H(\mathbb{X}_j)$.
A new homology class $c$ is created (born) in $\mathbb{X}_i$ if it is not in the image of $\xi_{i-1}^i$.
It dies in $\mathbb{X}_j$ if it becomes trivial or merges with an ``older'' (born before $i$) homology class, i.e., its image in $H(\mathbb{X}_j)$ lies in the image of $\xi_{i-1}^j$,
whereas its image in $H(\mathbb{X}_{j-1})$ does not lie in the image of $\xi_{i-1}^{j-1}$.
As shown in Fig.~\ref{fig:Filtration}, if we associate with each space $\mathbb{X}_i$ a value $h_i$ denoting ``time'' or ``length'', we can define the duration, or persistence length, of each homology generator $c$ as
\begin{equation}
{\rm persist}(c) = h_j-h_i.
\end{equation}
This measurement $h_i$ is usually readily available when analyzing topological feature changes, for instance, when the filtration arises from the level sets of a height function.
\section{Algorithms for persistent homology}\label{Sec:algorithm}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=1\textwidth]{figure4.pdf}
\end{tabular}
\end{center}
\caption{Illustration of filtrations built on fullerene C$_{60}$. Each point or atom in the point cloud data (i.e., coordinates) of the C$_{60}$ is associated with a common radius $r$ which increases gradually. As the value of $r$ increases, the solid balls centered at given coordinates grow. These balls eventually overlap with their neighbors at certain $r$ values. Simplices indicating such neighborhood information can be defined through abstract $r$-dependent simplicial complexes, e.g., alpha complexes and Rips complexes. {Note that in the last chart, we have removed some atoms to reveal the central void.}}
\label{fig:carbon60}
\end{figure}
In computational topology, intrinsic features of point cloud data, i.e., a point set $S\subset \mathbb{R}^n$ without additional structure, are common subjects of investigation. For such data, a standard way to construct the filtration is to grow a solid ball centered at each point with an ever-increasing radius. If the differences between points can generally be ignored, as is the case for fullerenes, a common radius $r$ can be used for all points. In this setting, the radius $r$ is used as the parameter for the family of spaces in the filtration. As the value of $r$ increases, the solid balls will grow and simplices can be defined through the overlaps among the set of balls.
In Figure \ref{fig:carbon60}, fullerene C$_{60}$ is used to demonstrate this process. There are various ways of constructing abstract simplicial complexes from the intersection patterns of the set of expanding balls, such as \v{C}ech complex, Vietoris-Rips complex and alpha complex. The corresponding topological invariants, e.g., the Betti numbers, are in general different due to different definitions of simplicial complexes. In this section, we discuss computational algorithms for the design of filtrations, the construction of abstract simplicial complexes, and the calculation of Betti numbers.
\paragraph{Alpha complex}
One possible filtration that can be derived from the unions of the balls with a given radius around the data points (as shown in Figure \ref{fig:carbon60}) is the family of $d$-dependent \v{C}ech complexes, each of which is defined to be a simplicial complex whose $k$-simplices are determined by $(k+1)$-tuples of points such that the corresponding $d/2$-balls have a non-empty intersection. However, it may contain many simplices for a large $d$. A variant called the alpha complex can be defined by replacing the $d/2$-ball in the above definition by the intersection of the $d/2$-ball with the Voronoi cell of each data point. In both cases, the complexes are homotopy equivalent to the simple unions of balls, and thus produce the same persistent homology. Interested readers are referred to the nerve theorem for details \cite{Zomorodian:2009book}.
\paragraph{Vietoris-Rips complex}
The Vietoris-Rips complex, which is also known as the Vietoris complex or Rips complex, is another type of abstract simplicial complex derived from the union of balls. In this case, for a $k$-simplex to be included, instead of requiring the $(k+1)$ $d/2$-balls to have a common intersection, one only needs them to intersect pairwise. The \v{C}ech complex is a subcomplex of the Rips complex for any given $d$; the latter, however, is much easier to compute and is itself a subcomplex of the \v{C}ech complex at the filtration parameter $\sqrt{2}d$.
\paragraph{Euclidean-distance based filtration}
It is straightforward to use the metric defined by the Euclidean space in which the data points are embedded. The pairwise distance can be stored in a symmetric distance matrix $\left(d_{ij}\right)$, with each entry $d_{ij}$ denoting the distance between point $i$ and point $j$. Each diagonal term of the matrix is the distance from a certain point to itself, and thus is always 0. The family of Rips complexes is parameterized by $d$, a threshold on the distance. For a certain value of $d$, the Vietoris-Rips complex can be calculated. In 3D, more specifically, for a pair of points whose distance is below the threshold $d$, they form {a 1-simplex} in the Rips complex; for a triplet of points, {if the distance between every pair is smaller than $d$}, the 2-simplex formed by the triplet is in the Rips complex; whether a 3-simplex is in the Rips complex can be similarly determined. The Euclidean-distance based Vietoris-Rips complexes are widely used in persistent homology due to their simplicity and efficiency.
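A minimal illustrative sketch of this procedure in Python (ours, not from any package; the point cloud is assumed to be an $N\times 3$ {\tt numpy} array):
\begin{verbatim}
import numpy as np
from itertools import combinations

def pairwise_distances(points):
    # Euclidean distance matrix of an (N, 3) coordinate array.
    diff = points[:, None, :] - points[None, :, :]
    return np.linalg.norm(diff, axis=-1)

def rips_simplices(dist, d):
    # Edges and triangles of the Vietoris-Rips complex at threshold d;
    # a k-simplex enters once all of its edges are present (pairwise test).
    n = dist.shape[0]
    adj = dist <= d
    edges = [e for e in combinations(range(n), 2) if adj[e]]
    triangles = [t for t in combinations(range(n), 3)
                 if adj[t[0], t[1]] and adj[t[0], t[2]] and adj[t[1], t[2]]]
    return edges, triangles
\end{verbatim}
Since a Rips simplex appears as soon as its longest edge does, the birth value of any simplex is simply the largest pairwise distance among its vertices; running the same test over an increasing sequence of $d$ values therefore traces out the whole filtration.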
\paragraph{Correlation matrix based filtration}
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.8\textwidth]{figure5.pdf}
\end{tabular}
\end{center}
\caption{Correlation matrix based filtration of fullerene C$_{60}$ (labels on both axes are atom indices). A correlation matrix is constructed from the FRI theory. As the filtration parameter increases, the Rips complex based on this matrix expands accordingly. ({\bf a}) The correlation based matrix for fullerene C$_{60}$; ({\bf b}), ({\bf c}) and ({\bf d}) demonstrate the connectivity between atoms at the filtration thresholds $d=0.1$, $0.3$, and $0.5$, respectively (the matrix entries are dimensionless). The blue entries represent the pairs already forming simplices. }
\label{fig:FiltrationMatrix}
\end{figure}
Another way to construct the metric space is through a certain correlation matrix, which can be built, e.g., from theoretical predictions or experimental observations. In a previous study of protein stability, the flexibility-rigidity index (FRI) theory has been proven accurate and efficient\cite{KLXia:2013d}. The reason for its success is that the geometric information is harnessed properly through a special transformation to a correlation matrix. The key to this transformation is the geometry-to-topology mapping. Instead of the direct geometric information of the embedding in the Euclidean space, a mapping through certain kernel functions is able to interpret the spatial locations of atoms in a particular way that reveals atomic stability quantitatively. We believe that this functional characterization is of importance to the study of not only proteins, but also other molecules.
Here, we present a special correlation {matrix based} Vietoris complex built on the FRI method. In order to define the metric used, we briefly review the concepts of the FRI theory. First, we introduce the geometry-to-topology mapping \cite{KLXia:2013d,KLXia:2013f,KLXia:2014b}. We denote the coordinates of the atoms in the molecule under study as ${\bf r}_{1}, {\bf r}_{2}, \cdots, {\bf r}_{j}, \cdots, {\bf r}_{N}$, where ${\bf r}_{j}\in \mathbb{R}^{3}$ is the position vector of the $j$th atom. The Euclidean distance between the $i$th and $j$th atoms, $r_{ij}$, can then be calculated. Based on these distances, a topological connectivity matrix can be constructed with monotonically decreasing radial basis functions. A general form for a connectivity matrix is,
\begin{eqnarray}\label{eq:couple_matrix0}
{C}_{ij} = w_{j} \Phi( r_{ij},\eta_{j}),
\end{eqnarray}
where $w_{j}$ is associated with atomic types, parameter $\eta_{j}>0$ is the atom-type related characteristic distance, and $\Phi( r_{ij};\eta_{j}) $ is a radial basis correlation kernel.
The choice of kernel is of significance to the FRI model. It has been shown that highly predictive results can be obtained with
the exponential and Lorentz types of kernels \cite{KLXia:2013d,KLXia:2013f,KLXia:2014b}. The exponential type of kernel is
\begin{eqnarray}\label{eq:ExpKernel}
\Phi(r,\eta) = e^{-\left(r/\eta\right)^\kappa}, \quad \eta >0, \kappa >0
\end{eqnarray}
and the Lorentz type of kernel is
\begin{eqnarray}\label{eq:PowerKernel}
\Phi(r, \eta) =
\frac{1}{1+ (r/\eta)^{\upsilon}}, \quad \eta >0, \upsilon > 0.
\end{eqnarray}
The parameters $\kappa$ and $\upsilon$ are adjustable.
We define the atomic rigidity index $\mu_i$ for $i$th atom as
\begin{eqnarray}\label{eq:rigidity1}
\mu_i = \sum_{j=1}^N w_{j} \Phi( r_{ij} ,\eta_{j} ), \quad \forall i =1,2,\cdots,N.
\end{eqnarray}
A related atomic flexibility index can be defined as the inverse of the atomic rigidity index.
\begin{eqnarray}\label{eq:flexibility1}
f_i= \frac{1}{\mu_i}, \quad \forall i =1,2,\cdots,N.
\end{eqnarray}
The FRI theory has been intensively validated by comparison with experimental data, especially the Debye-Waller factor (commonly known as the B-factor) \cite{KLXia:2013d}. While simple to evaluate, FRI yields highly accurate B-factor predictions while the procedure remains efficient. FRI has also been used to analyze protein folding behavior \cite{KLXia:2014b}.
To construct an FRI-based metric space, we need to design a special distance matrix in which the functional correlation is measured. If we directly employed the correlation matrix in Eq. (\ref{eq:couple_matrix0}) for the filtration, atoms with weaker functional correlation would form simplices earlier, resulting in a counter-intuitive persistent homology. However, this problem can be easily remedied by defining a new correlation matrix as $M_{ij}=1-{C}_{ij}$, i.e.,
\begin{eqnarray}\label{eq:FiltrationMatrix}
{M}_{ij} = 1-w_{j} \Phi( r_{ij}, \eta_{j}).
\end{eqnarray}
Thus a kernel function induces a metric space under this definition. Figure \ref{fig:FiltrationMatrix}({\bf a}) demonstrates such a metric space based filtration of fullerene C$_{60}$, in which we assume $w_{j}=1$ since only one type of atom exists in this system. The generalized exponential kernel in Eq. (\ref{eq:ExpKernel}) is used with parameters $\kappa=2.0$ and $\eta=6.0$\AA.
With the correlation matrix based filtration, the corresponding Vietoris-Rips complexes can be straightforwardly constructed. Specifically, given a certain filtration parameter $h_0$, an edge is formed between the $i$th and $j$th atoms if the matrix entry satisfies $M_{ij}\leq h_0$, and a simplex is formed if all of its edges are present. The complexes are built incrementally as the filtration parameter grows. Figures \ref{fig:FiltrationMatrix}({\bf b}), ({\bf c}) and ({\bf d}) illustrate this process with three filtration threshold values $h=0.1$, $0.3$, and $0.5$ (note that the entries of $M$ are dimensionless), respectively. We use the blue color to indicate formed edges. It can be seen that the simplicial complexes keep growing with the increase of the filtration parameter $h$. The diagonal terms are always equal to zero, which means that the $N$ atom centers (0-simplices) form the first complex in the filtration.
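A minimal sketch of the matrix construction (ours; the result can be fed to the same Rips routine sketched earlier, with the distance threshold $d$ replaced by the filtration parameter $h$):
\begin{verbatim}
import numpy as np

def fri_filtration_matrix(points, eta=6.0, kappa=2.0):
    # M_ij = 1 - Phi(r_ij; eta) with the exponential kernel and w_j = 1
    # for a homogeneous all-carbon system. Entries are dimensionless and
    # lie in [0, 1); the diagonal is zero since r_ii = 0.
    diff = points[:, None, :] - points[None, :, :]
    r = np.linalg.norm(diff, axis=-1)
    return 1.0 - np.exp(-(r / eta) ** kappa)
\end{verbatim}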
\section{Application to fullerene structure analysis and stability prediction}\label{sec:application}
In this section, the theory and algorithms of persistent homology are employed to study the structure and stability of fullerene molecules. The ground-state structural data of fullerene molecules used in our tests are downloaded from the \href{http://www.ccl.net/cca/data/fullerenes}{CCL webpage} and fullerene isomer data and corresponding total curvature energies \cite{Guan:2014} are adopted from David Tomanek's \href{http://www.nanotube.msu.edu/fullerene}{carbon fullerene webpage}. In these structural data, coordinates of fullerene carbon atoms are given. The collection of atom center locations of each molecule forms a point cloud in $\mathbb{R}^{3}$. With only one type of atom, the minor heterogeneity of atoms due to their chemical environments in these point clouds can be ignored in general. We examined both distance based and correlation matrix based metric spaces in our study. The filtration based on the FRI theory is shown to predict the stability very well.
Before we discuss the more informative persistent homology of fullerenes, we discuss the basic structural properties simply based on their Euler characteristics {(vertex number minus edge number plus polygon number)}. The Euler characteristic, as a topological property, is invariant under non-degenerate shape deformation. For a fullerene cage composed of only pentagons and hexagons, the exact numbers of these two types of polygons can be derived from the Euler characteristic. For instance, if we have $n_p$ pentagon and $n_h$ hexagons in a C$_N$ fullerene cage, the corresponding numbers of vertices, edges and faces are
$(5n_p+6n_h)/3$, $(5n_p+6n_h)/2$ and $n_p+n_h$, respectively, since each vertex is shared by three faces, and each edge is shared by two faces. As the fullerene cage is treated as a two dimensional surface, we have the Euler characteristic $(5n_p+6n_h)/3-(5n_p+6n_h)/2+(n_p+n_h)=2$, according to Euler's polyhedron formula, since it is a topological sphere. Thus, we have $n_p=12$, which means a fullerene cage structure must have 12 pentagons and correspondingly $N/2-10$ hexagons. Therefore, for a C$_N$ fullerene cage, we have $N$ vertices, $3N/2$ edges and $N/2+2$ faces.
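These counting relations are easily checked programmatically; a small illustrative sketch (ours):
\begin{verbatim}
def fullerene_counts(n_atoms):
    # Face/edge counts of a C_N pentagon-hexagon cage forced by
    # Euler's polyhedron formula.
    assert n_atoms >= 20 and n_atoms % 2 == 0
    pentagons = 12
    hexagons = n_atoms // 2 - 10
    edges = 3 * n_atoms // 2
    faces = pentagons + hexagons
    assert n_atoms - edges + faces == 2   # Euler characteristic of a sphere
    return pentagons, hexagons, edges, faces

print(fullerene_counts(60))   # (12, 20, 90, 32)
\end{verbatim}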
\subsection{Barcode representation of fullerene structures {and nanotube}}\label{Sec:FullereneBarcodes}
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.48\textwidth]{figure6a.pdf}&
\includegraphics[width=0.48\textwidth]{figure6b.pdf}
\end{tabular}
\end{center}
\caption{Illustration of the barcodes for fullerene C$_{20}$(left chart) and C$_{60}$ (right chart) filtration on associated Rips complexes. Each chart contains three panels corresponding to the Betti number sequences $\beta_0,\beta_1$ and $\beta_2$, from top to bottom.
}
\label{fig:C20C60Barcode}
\end{figure}
\paragraph{Barcodes for fullerene molecule}
{In Fig. \ref{fig:C20C60Barcode}, we demonstrate the persistent homology analysis of fullerene C$_{20}$ and C$_{60}$ using the barcode representation generated by {Javaplex \cite{javaPlex}.} }
{The $x$-axis represents the filtration parameter $h$. If the distance between two vertices is below or equal to a certain $h_0$, they will form an edge (1-simplex) at $h_0$. Stated differently, the simplicial complex generated is equivalent to the radius filtration with radius parameter $h/2$}. In the barcode, the persistence of a certain Betti number is represented by an interval (also known as a bar), denoted as $L^{\beta_j}_{i}, j=0,1,2; i=1,2,\cdots$. Here $j\in \{0,1,2\}$ as we only consider the first three Betti numbers in this work. From top to bottom, the behaviors of $\beta_0$, $\beta_1$, and $\beta_2$ are depicted in three individual panels. It is seen that as $h$ grows, isolated atoms initialized as points will gradually grow into solid spheres with an ever-increasing radius. This phenomenon is represented by the bars in the $\beta_0$ panel. Once two spheres overlap with each other, one $\beta_0$ bar is terminated. Therefore, the bar length for the independent 0-th homology generator (connected component) $c^0_i$, denoted as $L^{\beta_0}_{i}={\rm persist}(c^0_i)$, indicates the bond length information of the molecule. As can be seen from Fig. \ref{fig:C20C60Barcode}, for fullerene C$_{20}$, all $\beta_0$ bar lengths are around $1.45$\AA~ and the total number of components equals exactly 20. On the other hand, fullerene C$_{60}$ has two different kinds of bars with lengths around $1.37$\AA~ and $1.45$\AA, respectively, indicating its two types of bond lengths.
More structural information is revealed as the $\beta_1$ bars, which represent independent noncontractible $1$-cycles (loops), emerge. It is seen in the fullerene C$_{20}$ chart that there are 11 equal-length $\beta_1$ bars persisting from $1.45$\AA~ to $2.34$\AA. As fullerene C$_{20}$ has 12 {pentagonal rings}, {the Euler characteristic of a 1D simplicial subcomplex ($1$-skeleton) can be evaluated from the Betti numbers},
\begin{equation}\label{Euler_1simplicial}
n_{\rm vertice}-n_{\rm edge}=\beta_0-\beta_1.
\end{equation}
Here $\beta_0$, $n_{\rm vertice}$, and $n_{\rm edge}$ are 1, 20, and 30, respectively. Therefore, it is easy to obtain $\beta_1=11$ for fullerene C$_{20}$, as demonstrated in Fig. \ref{fig:C20C60Barcode}. It should be noticed that all $\beta_1$ bars end at the filtration value $h=2.34$\AA, when the five balls in each pentagon, with their ever-increasing radii, begin to overlap so that the pentagonal face is filled in.
Even more structural information can be derived from fullerene C$_{60}$'s $\beta_1$ barcodes. First, there are $31$ bars for $\beta_1$. This is consistent with the Euler characteristics in Eq. (\ref{Euler_1simplicial}), as we have 12 pentagons and 20 hexagons. Secondly, two kinds of bars correspond to the coexistence of {pentagonal rings} and {hexagonal rings}. They persist from $1.45$\AA~ to $2.35$\AA~ and from $1.45$\AA~ to $2.44$\AA~, respectively.
As the filtration progresses, $\beta_2$ bars (membranes enclosing cavities) tend to appear. In fullerene C$_{20}$, there is only one $\beta_2$ bar, which corresponds to the void structure in the center of the cage. For fullerene C$_{60}$, we have 20 $\beta_2$ {bars persisting} from $2.44$\AA~ to $2.82$\AA, {which correspond to hexagonal cavities as indicated in the last chart of Fig.~\ref{fig:simplex}. Basically, as the filtration goes on, each node in a hexagonal ring joins its four nearest neighbors and fills in the relevant 2-simplices, yielding a simplicial complex whose geometric realization is exactly the octahedron.} There is another $\beta_2$ bar due to the central void {as indicated in the last chart of Fig.~\ref{fig:C20C60Barcode}}, which persists until the complex forms a solid block. Note that the two kinds of $\beta_2$ bars represent entirely different physical properties. The short-lived bars are related to local behaviors and fine structural details, while the long-lived bar is associated with the global feature, namely, the large cavity.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.7\textwidth]{figure7.pdf}
\end{tabular}
\end{center}
\caption{Illustration of persistent homology analysis for a nanotube. ({\bf a}) The generated nanotube structure with 10 unit layers. ({\bf b}) and ({\bf c}) A 3 unit layer segment extracted from the nanotube molecule in {\bf a}. ({\bf d}) Barcodes representation of the topology of the nanotube segment.}
\label{fig:nanotube}
\end{figure}
\paragraph{Barcodes for nanotube}
{Another example, a nanotube, is demonstrated in Fig. \ref{fig:nanotube}. The nanotube structure is constructed using the \href{https://www.ccs.uky.edu/~ernst/carbontubes/TubeApplet.html}{TubeApplet webpage}. We set the tube indices to (6,6), the number of unit cells to 10, the tube radius to 4.05888\AA, and the lattice constant to 2.454\AA. We extract a segment of 3 unit cells from the nanotube and employ the persistent homology analysis to generate its barcodes. Our results are demonstrated in Fig. \ref{fig:nanotube}. Different from {fullerene} molecules, the nanotube has a long $\beta_1$ bar representing the tube circle.} {It should also be noticed that the $\beta_2$ barcodes are concentrated in two different regions. The first region is where $x$ is around 2.5 to 2.7. The $\beta_2$ barcodes in this domain are generated by hexagonal rings on the nanotube. The other region appears where $x$ is slightly larger than 7.0. The corresponding $\beta_2$ barcodes are representations of the voids formed between different layers of carbon atoms.}
{Unlike commonly used topological methods \cite{Fowler:1995}}, persistent homology is able to provide a multiscale representation of the topological features. Usually, global behavior is of {major concern}; therefore, the importance of topological features is typically measured by their persistence length. In our analysis, we have observed that, discretization errors aside, topological invariants of all scales can be equally important in revealing various structural features of the system of interest. In this work, we demonstrate that both local and global topological invariants play important roles in quantitative physical modeling.
\subsection{Stability analysis of small fullerene molecules}
From the above analysis, it can be seen that detailed structural information has been incorporated into the corresponding barcodes. On the other hand, molecular structures determine molecular functions \cite{KLXia:2013d,KLXia:2013f,KLXia:2014b}. Therefore, persistent homology can be used to predict molecular functions of fullerenes. To this end, we analyze the barcode information. For each Betti number $\beta_j$, we define an accumulated bar length $A_j$ as the summation of barcode lengths,
\begin{equation}\label{AccumulationIndex}
A_j=\sum_{i=1} L^{j}_{i}, j=0,1,2,
\end{equation}
where $L^j_{i}$ is the length of the $i$th bar in the $j$th-homology barcode. Sometimes, we may only sum over certain types of barcodes.
We define an average accumulated bar length as $B_j=-\sum_{i=1} L^{j}_{i}/N$, where $N$ is the total number of atoms in the molecule.
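Given the bars of a barcode as (birth, death) pairs (e.g., as exported from Javaplex), these quantities reduce to a few lines; a minimal Python sketch (ours, with the short- and long-lived bars assumed to be separated beforehand):
\begin{verbatim}
def accumulated_bar_length(bars):
    # A_j: sum of the lengths of the selected beta_j bars.
    return sum(death - birth for birth, death in bars)

def average_accumulated_bar_length(bars, n_atoms):
    # B_j: negative of the accumulated length per atom of the molecule.
    return -accumulated_bar_length(bars) / n_atoms
\end{verbatim}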
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{figure8a.pdf}&
\includegraphics[width=0.45\textwidth]{figure8b.pdf}
\end{tabular}
\end{center}
\caption{Comparison between the heat of formation energies computed using a quantum theory \cite{ZhangBL:1992a} (left chart) and the average accumulated bar length (right chart) for fullerenes. The units for the heat of formation energy and the average accumulated bar length are eV/atom and \AA/atom, respectively. Although the profile of the average accumulated bar length of fullerenes does not perfectly match the fullerene energy profile, they bear a close resemblance in their basic characteristics.}
\label{fig:fullerenFitting}
\end{figure}
Zhang et al. \cite{ZhangBL:1992a,ZhangBL:1992b} found that for the small fullerene series C$_{20}$ to C$_{70}$, the ground-state heat of formation energies gradually decrease with the increase of the number of atoms, except for C$_{60}$ and C$_{70}$. The decreasing rate, however, slows down as the number of atoms increases. With data adopted from Ref. \cite{ZhangBL:1992a}, Fig. \ref{fig:fullerenFitting} demonstrates this phenomenon. This type of behavior is also found in the total energy (STO-3G/SCF at MM3) per atom \cite{Murry:1994}, and in the average binding energy of fullerene C$_{2n}$, which can be broken down into $n$ dimers (C$_2$) \cite{ChangYF:2005}.
To understand this behavior, many theories {have been} proposed. Zhang et al.~\cite{ZhangBL:1992b} postulate that fullerene stability is related to the ratio between the number of pentagons and the number of atoms of a fullerene molecule. A higher percentage of pentagon structures results in a relatively higher heat of formation. On the other hand, the rather straightforward isolated pentagon rule (IPR) states that the most stable fullerenes are those in which all the pentagons are isolated. The IPR explains why C$_{60}$ and C$_{70}$ are relatively stable, as both have only isolated pentagons. Raghavachari's neighbour index \cite{Raghavachari:1992} provides another approach to quantitatively characterize the relative stability. For example, in C$_{60}$ of $I_h$ symmetry, all 12 pentagons have neighbour index 0; thus the $I_h$ C$_{60}$ structure is very stable.
In this work, we hypothesize that fullerene stability depends on the average number of hexagons per atom. The larger the number of hexagons per atom in a given fullerene structure, the more stable it is. We utilize persistent homology to verify our hypothesis. As stated in Section \ref{Sec:FullereneBarcodes}, there are two types of $\beta_2$ bars, namely, those due to hexagon-structure-related holes and the one due to the central void. Their contributions to the heat of formation energy are dramatically different. Based on our hypothesis, we only need to include those $\beta_2$ bars that are due to hexagon-structure-related holes in our calculation of the average accumulated bar length $B_2$. As depicted in the right chart of Fig. \ref{fig:fullerenFitting}, the profile of the average accumulated bar length closely resembles that of the heat of formation energy. Instead of a linear decrease, both profiles exhibit a quick drop at first, after which the decreasing rate slows down gradually. Although our predictions for the C$_{30}$ and C$_{32}$ fullerenes do not match the corresponding energy profile precisely, which may be due to the fact that the data used in our calculation may not be exactly the same ground-state data as those in the literature \cite{ZhangBL:1992b}, the basic characteristics and the relative relations in the energy profile are still well preserved. In fact, the jump at the C$_{60}$ fullerene is captured and stands out even more obviously than in the energy profile. This may be due to the fact that our method distinguishes not only pentagon and hexagon structures, but also the size differences within each of them.
We are not able to present the full set of energy data in Ref. \cite{ZhangBL:1992a} because we are limited by the availability of the ground-state structure data.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.6\textwidth]{figure9.pdf}
\end{tabular}
\end{center}
\caption{The comparison between {quantum mechanical simulation results}\cite{ZhangBL:1992a} and persistent homology prediction of the heat of formation energy (eV/atom). Only local $\beta_2$ bars that are due to hexagon structures are included in our average accumulated bar length $B_2$. The correlation coefficient from the least-squares fitting is near perfect ($C_c=0.985$).}
\label{fig:FitHeatFormation}
\end{figure}
To quantitatively validate our prediction, the least squares method is employed to fit our prediction with the heat of formation energy, and a correlation coefficient is defined \cite{KLXia:2013d},
\begin{eqnarray}\label{correlation}
C_c=\frac{\sum^N_{i=1}\left(B^e_i-\bar{B}^e \right)\left( B^t_i-\bar{B}^t \right)}
{ \left[\sum^N_{i=1}(B^e_i- \bar{B}^e)^2\sum^N_{i=1}(B^t_i-\bar{B}^t)^2\right]^{1/2}},
\end{eqnarray}
where $B^e_i$ represents the heat of formation energy of the $i$th fullerene molecule, and $B^t_i$ is our theoretical prediction. The parameters $\bar{B}^e$ and $\bar{B}^t$ are the corresponding mean values. The fitting result is demonstrated in Fig. \ref{fig:FitHeatFormation}. The correlation coefficient is close to unity (0.985), which indicates the soundness of our model and the power of persistent homology for quantitative predictions.
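The correlation coefficient in \eqref{correlation} is simply the Pearson correlation between the two data series. A minimal Python sketch (ours, shown with hypothetical toy data rather than the actual data sets) reads:
\begin{verbatim}
import math

def correlation_coefficient(energies, predictions):
    """Pearson correlation C_c between the energies B^e_i and
    the theoretical predictions B^t_i, as in Eq. (correlation)."""
    n = len(energies)
    mean_e = sum(energies) / n
    mean_t = sum(predictions) / n
    num = sum((e - mean_e) * (t - mean_t)
              for e, t in zip(energies, predictions))
    den = math.sqrt(sum((e - mean_e) ** 2 for e in energies)
                    * sum((t - mean_t) ** 2 for t in predictions))
    return num / den

heats = [7.4, 7.2, 7.0, 6.9, 6.6]   # hypothetical values
bars  = [3.1, 3.0, 2.8, 2.7, 2.5]   # hypothetical values
print(correlation_coefficient(heats, bars))
\end{verbatim}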
\subsection{Total curvature energy analysis of fullerene isomers}
Having demonstrated the ability of persistent homology to predict the relative stability of fullerene molecules, we further illustrate its effectiveness for analyzing the total curvature energies of fullerene isomers. Fullerene molecules C$_N$ are well known to admit various isomers \cite{Fowler:1996}, especially when the number ($N$) of atoms is large. In order to identify all of the possible isomers for a given $N$, many elegant mathematical algorithms have been proposed. Coxeter's construction method \cite{Coxeter:1971,Fowler:1988} and the ring spiral method \cite{Manolopoulos:1991} are two popular choices. Before discussing the details of these two methods, we need to introduce the concept of the fullerene dual. Mathematically, a dual means a dimension-reversing dual. From Euler's polyhedron theorem, if a spherical polyhedron is composed of $n_{\rm vertex}$ vertices, $n_{\rm edge}$ edges and $n_{\rm face}$ faces, we have the relation $n_{\rm vertex}-n_{\rm edge}+n_{\rm face}=2$. Keeping $n_{\rm edge}$ unchanged while swapping the other two counts, we obtain its dual, which has $n_{\rm vertex}$ faces and $n_{\rm face}$ vertices. For example, the cube and the octahedron form a dual pair, the dodecahedron and the icosahedron form another dual pair, and the tetrahedron is self-dual. This duality is akin to the duality between the Delaunay triangulation and the corresponding Voronoi diagram in computational geometry.
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{figure10a.pdf}&
\includegraphics[width=0.45\textwidth]{figure10b.pdf}
\end{tabular}
\end{center}
\caption{Comparison between the distance filtration (left chart) and the correlation matrix filtration (right chart) in fullerene C$_{40}$ stability analysis. Fullerene C$_{40}$ has 40 isomers, each with an associated total curvature energy (eV). We calculate our average accumulated bar lengths from both the distance filtration and the correlation matrix based filtration, and further fit them with the total curvature energies. The correlation coefficients for our fittings are 0.956 and 0.959, respectively. It should be noticed that only the central void related $\beta_2$ bars (i.e., the long-lived bars) are considered. The exponential kernel is used in the matrix filtration with parameters $\eta=4$ and $\kappa=2$.}
\label{fig:IsomerC40}
\end{figure}
In fullerenes, each vertex is shared by three faces (each of which is either a pentagon or a hexagon). Therefore, the fullerene dual can be represented as a triangulation of the topological sphere. Based on this fact, Coxeter was able to analyze the icosahedral triangulations of the sphere and predict the associated isomers. This method, although mathematically rigorous, is difficult to implement for structures with low symmetry, and is thus inefficient in practical applications \cite{Fowler:1995}.
On the other hand, in the Schlegel diagram \cite{Schlegel:1883}, each fullerene structure can be projected onto a planar graph made of pentagons and hexagons. The ring spiral method is developed based on the spiral conjecture \cite{Fowler:1995}, which states: ``The surface of a fullerene polyhedron may be unwound in a continuous spiral strip of edge-sharing pentagons and hexagons such that each new face in the spiral after the second shares an edge with both (a) its immediate predecessor in the spiral and (b) the first face in the preceding spiral that still has an open edge.'' Basically, for fullerenes of $N$ atoms, one lists all possible spiral sequences of pentagons and hexagons, and then winds them up into fullerenes. If no conflict happens during the process, an isomer is generated; otherwise, the spiral sequence is discarded. Table \ref{tab:Isomer} lists the numbers of isomers for different fullerenes \cite{Fowler:1995}, when enantiomers are regarded as equivalent. It is seen that the number of isomers increases dramatically as $N$ increases.
Total curvature energies of many fullerene isomers are available at the \href{http://www.nanotube.msu.edu/fullerene}{carbon fullerene webpage}.
\begin{table}[htbp]
\centering
\caption{Numbers of isomers for small fullerenes. }
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|c|}
\hline
$N_{\rm atom}$ & 20 & 24 & 26 & 28 & 30 & 32 &34 & 36&38 &40 &50 &60\\
\hline
$N_{\rm isomer}$ & 1 & 1 & 1 & 2 & 3 & 6 &6 & 15 &17 &40 &271 &1812 \\
\hline
\end{tabular}
\label{tab:Isomer}
\end{center}
\end{table}
In 1935, Hakon defined sphericity as a measure of how spherical (round) an object is \cite{Hakon:1935}. By considering particles having the same volume but differing in surface area, Hakon came up with a sphericity function \cite{Hakon:1935},
\begin{eqnarray}
\Psi=\frac{\pi^{1/3} (6V_p)^{2/3}}{A_p},
\end{eqnarray}
where $V_p$ and $A_p$ are the volume and the surface area of the particle. Obviously, a sphere has sphericity 1, while the sphericity of a non-spherical particle is less than 1. Assuming that fullerene isomers have the same surface area as a perfect sphere, $A_p=4\pi R^2$, we define a sphericity measure as
\begin{eqnarray}
\Psi_c=\frac{V_p}{V_s}
= \frac{6\pi^{1/2} V_p}{A_p^{3/2}},
\end{eqnarray}
where $V_s$ is the volume of a sphere with radius $R$.
By the isoperimetric inequality, among all simple closed surfaces with given surface area $A_p$, the sphere encloses a region of maximal volume.
Thus, the sphericity of non-spherical fullerene isomers is less than 1. Consequently, in a distance based filtration process, the smaller the sphericity of a fullerene isomer, the shorter its global $\beta_2$ bar will be.
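The two sphericity measures above are straightforward to evaluate; a minimal Python sketch (ours, for illustration) computes both and checks that a sphere attains the value 1:
\begin{verbatim}
import math

def sphericity(V_p, A_p):
    """Hakon's sphericity Psi = pi^(1/3) (6 V_p)^(2/3) / A_p."""
    return math.pi ** (1.0 / 3.0) * (6.0 * V_p) ** (2.0 / 3.0) / A_p

def sphericity_c(V_p, A_p):
    """Volume-ratio sphericity Psi_c = 6 sqrt(pi) V_p / A_p^(3/2)."""
    return 6.0 * math.sqrt(math.pi) * V_p / A_p ** 1.5

# sanity check: a unit sphere gives 1 for both measures
R = 1.0
V = 4.0 / 3.0 * math.pi * R ** 3
A = 4.0 * math.pi * R ** 2
print(sphericity(V, A), sphericity_c(V, A))
\end{verbatim}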
On a fullerene surface, the local curvature characterizes the bending of bonds away from the planar structure required by the sp$^2$ hybrid orbitals \cite{Holec:2010}. Therefore, the relation between fullerene curvature and stability can be established and confirmed by using {\it ab initio} density functional calculations \cite{Guan:2014}. However, such an analysis favors fullerenes with infinitely many atoms. Let us keep the assumption that, for a given fullerene C$_N$, all its isomers have the same surface area. We also assume that the most stable fullerene isomer C$_N$ is the one that has a nearly perfect spherical shape. Therefore, each fullerene isomer is subject to a (relative) total curvature energy $E_c$ due to its accumulated deviations from a perfect sphere,
\begin{eqnarray}\label{CuravtureE}
E_c&=&\int_\Gamma \mu\left[(\kappa_1-\kappa_0)^2 + (\kappa_2-\kappa_0)^2 \right] dS\\
&=& \int_\Gamma 2\mu \left[\frac{1}{2}(2{\bf H}-\kappa_0)^2 +\frac{1}{2}\kappa_0^2 - {\bf K} \right] dS,
\end{eqnarray}
where $\Gamma$ is the surface, $\mu$ is the bending rigidity, $\kappa_1$ and $\kappa_2$ are the two principal curvatures, and $\kappa_0=1/R$ is the constant curvature of the sphere with radius $R$. Here, ${\bf H}$ and ${\bf K}$ are the mean and Gaussian curvatures of the fullerene surface, respectively, and the second line follows from the identity $\kappa_1^2+\kappa_2^2=4{\bf H}^2-2{\bf K}$. Therefore, a fullerene isomer with a smaller sphericity will have a higher total curvature energy. Based on the above discussion, we establish the inverse correlation between the global $\beta_2$ bar lengths of fullerene isomers and their total curvature energies.
Obviously, the present fullerene curvature energy (\ref{CuravtureE}) is, up to a constant, a special case of the Helfrich energy functional for the elasticity of cell membranes \cite{Helfrich:1973}
\begin{eqnarray}
E_c=\int_\Gamma \left[ \frac{1 }{2}{\cal K}_C(2{\bf H}-C_0)^2 + {\cal K}_G {\bf K} \right] dS,
\end{eqnarray}
where $C_0$ is the spontaneous curvature, and ${\cal K}_C$ and ${\cal K}_G$ are the bending modulus and Gaussian saddle-splay modulus, respectively. The Gauss-Bonnet theorem states that for a compact two-dimensional Riemannian manifold without boundary, the surface integral of the Gaussian curvature is $2\pi \chi$, where $\chi$ is the Euler characteristic. Therefore, the curvature energy admits a jump whenever there is a change in topology, which leads to a change in the Euler characteristic. A problem with this discontinuity in the curvature energy is that the topological change may be induced by an infinitesimal change in the geometry associated with just an infinitesimal physical energy, which implies that the Gaussian curvature energy functional is unphysical. Similarly, Hadwiger-type energy functionals, which make use of a linear combination of the surface area, surface-enclosed volume, surface integral of the mean curvature and surface integral of the Gaussian curvature \cite{Hadwiger:1975}, may be unphysical as well for systems involving topological changes. However, this is not a problem for differential geometry based multiscale models which utilize only surface area and surface-enclosed volume terms \cite{Wei:2009,Wei:2012,Wei:2013,ZhanChen:2010a}, as we employ the Eulerian representation and the proposed generalized mean curvature terms but not Gaussian curvature terms. Moreover, in the present model for fullerene isomers, there is no topological change.
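As a quick sanity check of the Gauss-Bonnet statement, consider a sphere of radius $R$: its Gaussian curvature is ${\bf K}=1/R^2$ and its surface area is $4\pi R^2$, so
\begin{equation*}
\int_\Gamma {\bf K}\, dS = \frac{1}{R^2}\cdot 4\pi R^2 = 4\pi = 2\pi\chi,
\end{equation*}
consistent with the Euler characteristic $\chi=2$ of the sphere.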
\begin{figure}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.45\textwidth]{figure11a.pdf}&
\includegraphics[width=0.45\textwidth]{figure11b.pdf}
\end{tabular}
\end{center}
\caption{Further validation of our method with the 89 isomers of fullerene C$_{44}$. The correlation coefficients for the distance filtration (left chart) and the correlation matrix based filtration (right chart) are 0.948 and 0.952, respectively. In the latter method, the exponential kernel is used with parameters $\eta=4$ and $\kappa=2$.
}
\label{fig:IsomerC44}
\end{figure}
To verify our assumptions, we consider the family of isomers of fullerene C$_{40}$, which has a total of 40 isomers. We compute the global $\beta_2$ bar lengths of all isomers by the Euclidean distance filtration and fit their values, with a negative sign, against the total curvature energies. Figure \ref{fig:IsomerC40} (left chart) shows an excellent correlation between the fullerene total curvature energies and our persistent homology based predictions. The correlation coefficient is 0.956, which indicates that the proposed persistent homology analysis of non-sphericity and our assumption of a constant surface area for all fullerene isomers are sound. In reality, fullerene isomers may not have an exactly constant surface area because some distorted bonds may have a longer bond length. However, the high correlation coefficient found in our persistent homology analysis implies that either the average bond lengths for all isomers are similar or the error due to the non-constant surface area is offset by other errors.
To further validate our persistent homology based method for the prediction of fullerene total curvature energies, we consider a case with significantly more isomers, namely, fullerene C$_{44}$, which has 89 isomers. In this study, we have again found an excellent correlation between the fullerene total curvature energies and our persistent homology based predictions, as depicted in the left chart of Fig. \ref{fig:IsomerC44}. The correlation coefficient for this case is 0.948. In fact, we have checked more fullerene isomer systems and obtained similar predictions.
\begin{figure}
\begin{center}
\begin{tabular}{c}
\includegraphics[width=0.7\textwidth]{figure12.pdf}
\end{tabular}
\end{center}
\caption{Illustration of the persistent barcodes generated by using correlation matrix based filtrations with different characteristic distances. The exponential kernel model with power $\kappa=2$ is used. The characteristic distances in the left and right charts are respectively $\eta=2$ and $\eta=20$. }
\label{fig:Sigma_barcodes}
\end{figure}
Finally, we explore the utility of our correlation matrix based filtration process for the analysis of fullerene total curvature energies. In place of the Euclidean distance based filtration, the correlation matrix based filtration is employed. To demonstrate the basic principle, Eq.~(\ref{eq:FiltrationMatrix}) with the generalized exponential kernel in Eq. (\ref{eq:ExpKernel}) is used in the filtration. We assume $w_{ij}=1$ as fullerene molecules contain only carbon atoms. To understand the correlation matrix based filtration method, the fullerene C$_{60}$ is employed again. We fix the power $\kappa=2$ and adjust the value of the characteristic distance $\eta$. Figure \ref{fig:Sigma_barcodes} gives the calculated barcodes with $\eta=2$ and $\eta=20$. It can be seen that these barcodes share a great similarity with the Euclidean distance based filtration results depicted in the right chart of Figure \ref{fig:C20C60Barcode}. All of the topological features, namely, the two kinds of bonds in $\beta_0$, the pentagonal rings and the hexagonal rings in $\beta_1$, and the hexagonal cavities and the central void in $\beta_2$, are clearly demonstrated. However, it should be noticed that, unlike the distance based filtration, the matrix filtration does not preserve linear Euclidean distance relations; only relative correspondences within the structure are kept. For instance, in the $\beta_2$ bars, the bar length ratio between the central void part and the hexagonal hole part in Fig. \ref{fig:Sigma_barcodes} is drastically different from its counterpart in Fig. \ref{fig:C20C60Barcode}. From our previous experience in flexibility and rigidity analysis \cite{KLXia:2013d,KLXia:2013f,KLXia:2014b}, these rescaled distance relations have a great potential for capturing essential physical properties, such as flexibility, rigidity, stability, and compressibility of the underlying system.
Similarly, the global $\beta_2$ bar lengths obtained from the correlation matrix based filtration are used to fit the total curvature energies of fullerene isomers. The correlation coefficients for the correlation matrix filtration are 0.959 and 0.952, respectively, for the C$_{40}$ and C$_{44}$ fullerene isomers. The corresponding results are demonstrated in the right charts of Figs. \ref{fig:IsomerC40} and \ref{fig:IsomerC44}, respectively. It can be seen that the correlation matrix filtration is able to capture the essential stability behavior of fullerene isomers. In fact, results from correlation matrix based filtrations are slightly better than those of Euclidean distance based filtrations. In the correlation matrix based filtrations, the generalized exponential kernel is used with parameters $\eta=4$ and $\kappa=2$. These parameters are chosen based on our previous flexibility and rigidity analysis of protein molecules. Overall, the best prediction is obtained when the characteristic distance is about 2 to 3 times the bond length and the power index $\kappa$ is around 2 to 3. Fine-tuning the parameters for each individual case may yield even better results. However, this aspect is beyond the scope of the present work.
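To make the filtration matrix concrete, the following minimal Python sketch (ours) builds the correlation values from atomic coordinates. We assume here the common generalized exponential kernel form $M_{ij}=1-\exp[-(d_{ij}/\eta)^{\kappa}]$ with unit weights $w_{ij}=1$; the precise form used in the text is Eq.~(\ref{eq:FiltrationMatrix}) with Eq.~(\ref{eq:ExpKernel}).
\begin{verbatim}
import numpy as np

def filtration_matrix(coords, eta=4.0, kappa=2.0):
    """Correlation-matrix filtration values, assuming the kernel
    M_ij = 1 - exp(-(d_ij / eta)**kappa) with unit weights."""
    diff = coords[:, None, :] - coords[None, :, :]
    d = np.linalg.norm(diff, axis=-1)
    return 1.0 - np.exp(-(d / eta) ** kappa)   # diagonal is zero

# hypothetical toy input: four atoms, bond length about 1.4 Angstrom
xyz = np.array([[0, 0, 0], [1.4, 0, 0],
                [0, 1.4, 0], [0, 0, 1.4]], dtype=float)
print(filtration_matrix(xyz))
\end{verbatim}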
\section{Conclusion}\label{Sec:Conclusion}
Persistent homology is an efficient tool for the qualitative analysis of topological features that last over scales. In the present work, for the first time, persistent homology is introduced for the quantitative prediction of fullerene energy and stability. We briefly review the principal concepts and algorithms in persistent homology, including simplices, simplicial complexes, chains, filtrations, persistence, and pairing algorithms. Euler characteristic analysis is employed to decipher the barcode representations of fullerenes C$_{20}$ and C$_{60}$. A thorough understanding of the origins of fullerene barcodes enables us to construct physical models based on local and/or global topological invariants and their accumulated persistent lengths. By means of an average accumulated bar length of the second Betti number that corresponds to fullerene hexagons, we are able to accurately predict the relative energies of a series of small fullerenes. To analyze the total curvature energies of fullerene isomers, we propose to use sphericity to quantify non-spherical fullerene isomers and correlate the sphericity with fullerene isomer total curvature energies, which are defined as a special case of the Helfrich energy functional for elasticity. Topologically, the sphericity of a fullerene isomer is measured by its global second-homology bar length in the barcode, which in turn gives rise to the prediction of fullerene isomer total curvature energies. We demonstrate an excellent agreement between total curvature energies and our persistent homology predictions for the isomers of fullerenes C$_{40}$ and C$_{44}$. Finally, a new filtration based on the correlation matrix of the flexibility and rigidity index is proposed and found to provide even more accurate predictions of fullerene isomer total curvature energies.
\vspace{1cm}
\noindent \textbf{Acknowledgments}\\
\noindent This work was supported in part by NSF grants IIS-0953096, IIS-1302285 and DMS-1160352, NIH grant R01GM-090208 and MSU Center for Mathematical Molecular Biosciences initiative.
GWW acknowledges the Mathematical Biosciences Institute for hosting valuable workshops. KLX thanks Bao Wang for useful discussions.
\section{Introduction}\label{intro}
For any $n\geq 0$, let
\begin{align*}
W_n(q)=\sum_{k=0}^{n}\binom{n}{k}^2 q^k
\end{align*}
denote the $n$-th Narayana polynomial of type $B$. Wang and Zhu \cite{WZ16}, and Sokal \cite{Sok15} independently proved that
the Hankel matrix \begin{equation}\label{eq-Hankel-def}
H =(W_{i+j}(q))_{i,j \ge 0}
\end{equation}
is $q$-totally positive, namely, any minor of $H$ is a polynomial in $q$ with nonnegative coefficients.
The main objective of this paper is to give a combinatorial proof of the $q$-total positivity of $H$, which solves a problem of Pan and Zeng \cite{PZ16}.
The $q$-total positivity of the Hankel matrix $H$ arose in the study of the $q$-log-convexity of the polynomial sequence $(W_n(q))_{n\geq 0}$. For the convenience of introducing related definitions and results, we make use of the notion of $q$-nonnegativity and the symbol $\ge_q$. A polynomial $f(q)$ with real coefficients is called $q$-nonnegative, written $f(q) \ge_q 0$, if all its coefficients are nonnegative. Accordingly, for two polynomials $f(q)$ and $g(q)$ we write $f(q) \ge _q g(q)$ if $f(q)-g(q) \ge_q 0$. Recall that a sequence
$\alpha = (a_n(q))_{n\geq 0}$ of polynomials in $q$ is said to be $q$-log-convex if for any $n\geq 1$ there holds
$a_{n+1}(q)a_{n-1}(q)\geq_q a^2_n(q)$. Furthermore, if
$a_{m+1}(q)a_{n-1}(q)\geq_q a_m(q)a_n(q)$ holds for any $m\geq n\geq 1$, then we call $\alpha$ a strongly $q$-log-convex sequence. Conversely,
we say that $\alpha$ is a $q$-log-concave sequence if for any $n\geq 1$ we have
$a^2_n(q)\geq_q a_{n+1}(q)a_{n-1}(q)$, and it is a strongly $q$-log-concave sequence
if $a_m(q)a_n(q)\geq_q a_{m+1}(q)a_{n-1}(q)$ holds
for any $m\geq n\geq 1$. The concept of $q$-log-concavity was introduced by Stanley, and the notion of strong $q$-log-concavity was due to Sagan \cite{Sag92DM}. Many polynomial sequences have been proved to be $q$-log-concave, or even strongly $q$-log-concave, see Butler \cite{But90}, Krattenthaler \cite{Kra89}, Leroux \cite{Ler90}, Sagan \cite{Sag92DM, Sag92TAMS}, and Chen, Wang and Yang \cite{CWY11}. However, $q$-log-convex sequences received very little attention until the work of Liu and Wang \cite{LW07}, who first introduced the notion of $q$-log-convexity. Liu and Wang established the $q$-log-convexity of many combinatorial polynomials, such as the Eulerian polynomials. For further progress on $q$-log-convexity, see \cite{CWY10, Zhu14} for instance.
The $q$-log-convexity of $(W_n(q))_{n\geq 0}$ was conjectured by Liu and Wang \cite{LW07}, and was proved later by Chen, Tang, Wang and Yang \cite{CTWY10} by using the theory of symmetric functions. Zhu \cite{Zhu13} further established the strong $q$-log-convexity of $(W_n(q))_{n\geq 0}$ by identifying this polynomial sequence as the first column of the triangular array
$B=(b_{n,k}(q))_{n,k \ge 0}$, which is generated by
\begin{equation}\label{eq-narab-recurrence}
\begin{split}
b_{n,0}(q)&= (1+q)\cdot b_{n-1,0}(q) + 2q \cdot b_{n-1,1}(q);\\
b_{n,k}(q)&= b_{n-1,k-1}(q) + (1+q)\cdot b_{n-1,k}(q) + q \cdot b_{n-1,k+1}(q) \quad (k\ge 1, \, n\ge 1)
\end{split}
\end{equation}
with $b_{0,0}(q) = 1$ and $b_{n,k}(q) = 0$ for $k > n$. The triangular array $B$ belongs to a wide class of matrices, called $q$-recursive matrices in \cite{WZ16}, or Catalan-Stieltjes matrices in \cite{PZ16, LLYZ21}, which we will recall below.
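The recurrence \eqref{eq-narab-recurrence} is easy to iterate symbolically. The following minimal Python sketch (ours, using sympy) generates the first rows of $B$ and checks that its first column recovers $W_n(q)$:
\begin{verbatim}
import sympy as sp

q = sp.symbols('q')

def narayana_B_triangle(N):
    """Rows 0..N of the triangle b_{n,k}(q) generated by the
    recurrence (eq-narab-recurrence); b_{n,k}(q) = 0 for k > n."""
    rows = [[sp.Integer(1)]]
    for n in range(1, N + 1):
        p = rows[-1] + [sp.Integer(0)] * 3        # pad with zeros
        row = [sp.expand((1 + q) * p[0] + 2 * q * p[1])]
        for k in range(1, n + 1):
            row.append(sp.expand(p[k-1] + (1 + q)*p[k] + q*p[k+1]))
        rows.append(row)
    return rows

for n, row in enumerate(narayana_B_triangle(6)):
    W_n = sum(sp.binomial(n, k)**2 * q**k for k in range(n + 1))
    assert sp.expand(row[0] - W_n) == 0   # first column is W_n(q)
\end{verbatim}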
Let $\gamma=(r_k(q))_{k\ge 0}$, $\sigma=(s_k(q))_{k\ge 0}$ and $\tau=(t_k(q))_{k\ge 1}$ be three sequences of polynomials in $q$. The Catalan-Stieltjes
matrix with respect to $\gamma,\sigma,\tau$, denoted by $C^{\gamma,\sigma,\tau}=(c_{n,k}(q))_{n,k \ge 0}$, is generated by the following recursive relations:
\begin{equation*}
\begin{split}
c_{n,0}(q) &=s_0(q) c_{n-1,0}(q) + t_1(q)c_{n-1,1}(q);\\
c_{n,k}(q) &= r_{k-1}(q)c_{n-1,k-1}(q) + s_k(q)c_{n-1,k}(q) + t_{k+1}(q)c_{n-1,k+1}(q) \quad (k\ge 1, \, n\ge 1),
\end{split}
\end{equation*}
where $c_{0,0}(q)=1$ and $c_{n,k}(q)=0$ unless $n\geq k\geq 0$. Actually, Zhu \cite{Zhu13} gave a general criterion for the strong $q$-log-convexity of $(c_{n,0}(q))_{n\geq 0}$ of
$C^{\gamma,\sigma,\tau}$. Further, Wang and Zhu \cite{WZ16} proved that the Hankel matrix $(c_{i+j,0}(q))_{i,j\geq 0}$ is $q$-totally positive provided that the matrix
$$
L^{\gamma,\sigma,\tau}=\begin{pmatrix}
1 &\quad &\quad &\quad&\quad\\
s_0(q) & r_0(q) & \quad &\quad &\quad\\
t_1(q) & s_1(q) & r_1(q) &\quad &\quad \\
\quad& t_2(q) & s_2(q) & r_2(q) &\quad\\
\quad&\quad&\ddots & \ddots &\ddots
\end{pmatrix},
$$
called the coefficient matrix of $C^{\gamma,\sigma,\tau}$,
is $q$-totally positive. As a result, Wang and Zhu
obtained the $q$-total positivity of the Hankel matrix $H=(W_{i+j}(q))_{i,j\geq 0}$, which was also independently proved by Sokal \cite{Sok15} based on the continued fraction expression of the generating function $\sum_{n\geq 0}W_n(q)x^n$.
We would like to note that the $q$-total positivity of the Hankel matrix $(c_{i+j,0}(q))_{i,j \ge 0}$ is also closely related to that of $C^{\gamma,\sigma,\tau}$; for details see \cite{LMW16} and \cite{WZ16}. Chen, Liang and Wang \cite{CLW15} raised the problem of giving a combinatorial interpretation for the $q$-total positivity of $C^{\gamma,\sigma,\tau}$. An ideal tool for combinatorially proving the positivity of a matrix is the famous Lindstr\"om-Gessel-Viennot lemma, see \cite{Lin73, GV85, GV89}. A natural strategy is to construct a planar network with nonnegative weights for the target matrix, and then to apply the Lindstr\"om-Gessel-Viennot lemma to interpret each minor of this matrix as the generating function of nonintersecting families of directed paths, which is obviously nonnegative. In the spirit of this method, Pan and Zeng \cite{PZ16} provided a general planar network construction for the Catalan-Stieltjes matrices and their associated Hankel matrices, which enabled them to give combinatorial proofs of the $q$-total positivity for many such matrices, such as those related to the Eulerian polynomials, Schr\"oder polynomials, and Narayana polynomials of type $A$. However,
their approach did not work for Narayana polynomials of type $B$, and they proposed it as an open problem to find a planar network proof of the $q$-total positivity of $H=(W_{i+j}(q))_{i,j\geq 0}$.
It seems impossible to find a planar network with only nonnegative weights for $H$.
In this paper, inspired by our recent work \cite{LLYZ21}, we construct for $H$ a suitable planar network allowing negative weights and solve Pan and Zeng's problem.
In our construction, the planar network for $H$ can be naturally divided into serial segments which are essentially subnetworks of the planar network for the coefficient matrix $L^B$ of $B$.
By applying the Lindstr\"om-Gessel-Viennot lemma and establishing a sign-reversing involution on the nonintersecting families of each segment, we combinatorially prove the $q$-total positivity of $L^B$ and $H$.
This paper is organized as follows.
In Section \ref{sect-lgv}, we will introduce the Lindstr\"{o}m-Gessel-Viennot lemma. In Section \ref{sect-network}, we will present our planar network construction for the coefficient matrix $L^B$, as well as a combinatorial proof of its $q$-total positivity. In Section \ref{sect-pf} we will make use of
the results in Section \ref{sect-network}
to obtain a planar network for $H$ and a combinatorial proof of its $q$-total positivity.
We conclude this paper in Section \ref{sect-conj} with a conjecture on the immanant positivity for
$H$.
\section{The Lindstr\"om-Gessel-Viennot lemma}\label{sect-lgv}
The Lindstr\"om-Gessel-Viennot lemma was originally proved by Lindstr\"om \cite{Lin73} and further developed by Gessel and Viennot \cite{GV85, GV89}. It has a broad range of applications, see \cite{HG95, Kra96, MW00, Ste90} for instance. In this section, we will give an overview of the Lindstr\"om-Gessel-Viennot lemma, which plays a key role in our combinatorial proof of the $q$-total positivity of the Hankel matrix of type $B$ Narayana polynomials.
To state the Lindstr\"om-Gessel-Viennot lemma, we need some notations. Let $D$ be a directed graph, or digraph for short, with vertex set $V(D)$ and arc set $A(D)$. A digraph $D$ is said to be acyclic if it contains no directed cycles. Throughout this paper we may assume that $D$ is locally finite, namely, for any two vertices $u,v \in V(D)$ the number of directed paths from $u$ to $v$ is finite. We say two directed paths intersect if they have a vertex in common. A sequence $(p_1,\ldots,p_n)$ of directed paths is called a nonintersecting family if $p_i$ and $p_j$ do not intersect for any $i \neq j$. Let $\mathbf{U} = (u_1,\ldots,u_n)$ and $\mathbf{V} = (v_1,\ldots,v_n)$ be two sequences of vertices in $D$, and let $\mathcal{N}_D(\mathbf{U},\mathbf{V})$ denote the set of nonintersecting families $(p_1,\ldots,p_n)$ such that $p_i$ is a directed path from $u_i$ to $v_i$ for each $1 \le i \le n$. If for any permutation $\sigma$ of $\{1,2,\ldots,n\}$, the set $\mathcal{N}_D(\mathbf{U},\sigma(\mathbf{V})) = \mathcal{N}_D((u_1,\ldots,u_n),(v_{\sigma(1)},\ldots,v_{\sigma(n)}))$ is empty unless $\sigma$ is the identity permutation, then $\mathbf{U}$ and $\mathbf{V}$ are said to be compatible. A weight function $\mathrm{wt}$ of $D$ is a map from $A(D)$ to $R$, where $R$ is a commutative ring with identity. The weight of a directed path in $D$ is the product of the weights of all its arcs, and the weight of a nonintersecting family is defined to be the product of the weights of all its components. Given two vertices $u$ and $v$ of $D$, let $GF_D(u,v)$ denote the sum of the weights of all directed paths from $u$ to $v$.
For two sequences $\mathbf{U}$ and $\mathbf{V}$ of vertices in $D$, let $GF(\mathcal{N}_D(\mathbf{U},\mathbf{V}))$ denote the sum of the weights of all elements in $\mathcal{N}_D(\mathbf{U},\mathbf{V})$. For a matrix $M$, we denote by $\det [M]$ the determinant of $M$. The celebrated Lindstr\"om-Gessel-Viennot lemma is stated as follows.
\begin{lem}[{\cite[Corollary 2]{GV89}}]\label{lem-lgv}
Let $D$ be a locally finite and acyclic digraph with a weight function, and let $\mathbf{U} = (u_1,\ldots,u_n)$, $\mathbf{V} = (v_1,\ldots,v_n)$ be two sequences of vertices in $D$. Then
\[
\det \left[\left(GF_D(u_i,v_j)\right)_{1 \le i,j \le n}\right] = GF(\mathcal{N}_D(\mathbf{U},\mathbf{V})).
\]
\end{lem}
In this paper, we mainly apply the above lemma to a special class of digraphs, called planar networks. Recall that a digraph $D$ is said to be planar if it can be embedded in the plane with edges meeting only at endpoints. We call $\mathcal{D} = (D,\mathrm{wt}_D)$ a planar network if $D$ is a locally finite, acyclic, and planar digraph, and $\mathrm{wt}_D$ is a weight function of $D$. Given an $n \times n$ matrix $M$, then $\mathcal{D}$ is called a planar network for $M$ if there exist two sequences $(u_1,\ldots,u_n)$ and $(v_1,\ldots,v_n)$ of vertices in $D$ such that
\begin{equation*}
M = \left(GF_D(u_i,v_j)\right)_{1 \le i,j \le n}.
\end{equation*}
In the remaining part of this paper, we usually specify the vertices and say that $\mathcal{D} = (D,\mathrm{wt}_D,(u_1,\ldots,u_n),(v_1,\ldots,v_n))$ is a planar network for $M$.
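For finite digraphs, the quantities $GF_D(u,v)$ can be computed by dynamic programming in topological order. The following minimal Python sketch (ours, with a hypothetical toy arc list) illustrates this; multiple arcs are allowed as repeated triples:
\begin{verbatim}
import sympy as sp
from collections import defaultdict

def path_generating_functions(arcs, source):
    """GF_D(source, v) for all v in a finite acyclic digraph,
    where arcs is a list of (u, v, weight) triples."""
    out, indeg, verts = defaultdict(list), defaultdict(int), set()
    for u, v, w in arcs:
        out[u].append((v, w)); indeg[v] += 1; verts.update((u, v))
    gf = {v: sp.Integer(0) for v in verts}
    gf[source] = sp.Integer(1)
    order = [v for v in verts if indeg[v] == 0]  # Kahn's algorithm
    i = 0
    while i < len(order):
        u = order[i]; i += 1
        for v, w in out[u]:
            gf[v] += gf[u] * w
            indeg[v] -= 1
            if indeg[v] == 0:
                order.append(v)
    return {v: sp.expand(g) for v, g in gf.items()}

q = sp.symbols('q')
arcs = [('u', 'a', q), ('a', 'v', 1),       # a q-weighted route
        ('u', 'b', 1), ('b', 'v', 1), ('b', 'v', 1),  # double arc
        ('u', 'v', -1)]                     # a negative direct arc
print(path_generating_functions(arcs, 'u')['v'])   # prints q + 1
\end{verbatim}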
\section{The coefficient matrix \texorpdfstring{$L^B$}{}}\label{sect-network}
In this section we will establish the planar network for the coefficient matrix $L^B$ and prove its $q$-total positivity. By \eqref{eq-narab-recurrence}, we have
\begin{equation*}
L^B=(l_{i,j})_{i,j \ge 0} =
\begin{pmatrix}
1 & & & & &\\
1+q & 1 & & & &\\
2q & 1+q & 1 & & & \\
& q & 1+q & 1 & &\\
& & q & 1+q & 1 & \\
& & &\ddots & \ddots &\ddots\\
\end{pmatrix}.
\end{equation*}
Now we give the construction of the planar network for $L^B$. Let $D^{L^B}$ be the infinite planar digraph with vertex set
\[
V(D^{L^B})=\{P_i \mid i \ge 0\} \cup \{Q_i \mid i \ge 0\} \cup \{P'_i \mid i \ge 0\}
\]
and arc set
\begin{align*}
A(D^{L^B})=&
\{P_i \to Q_i \mid i \ge 0\} \cup \{P_i \to Q_{i-1} \mid i \ge 1\} \\
& \cup \{Q_i \to P'_i \mid i \ge 0\} \cup \{Q_i \to P'_{i-1} \mid i \ge 2\} \\
& \cup \{P_1 \to P'_0, Q_1 \overset{l}{\to} P'_0, Q_1 \overset{r}{\to} P'_0\},
\end{align*}
where the coordinates of vertices are given by $P_i = (0,-i)$, $Q_i = (1,-i)$ and $P'_i = (2,-i)$, and $Q_1 \overset{l}{\to} P'_0$, $Q_1 \overset{r}{\to} P'_0$ are multiple arcs from $Q_1$ to $P'_0$ with one drawn on the left and the other on the right, respectively, as shown in Figure \ref{fig-network-lb}. The weight function $\mathrm{wt}_{D^{L^B}}$ is defined by
\begin{align*}
\mathrm{wt}_{D^{L^B}} (P_1 \to P'_0)=-1, \quad \mathrm{wt}_{D^{L^B}} (P_i \to Q_{i-1})= q \quad \mathrm{for} \quad i \ge 1,
\end{align*}
and $\mathrm{wt}_{D^{L^B}}(a) = 1$ for the other arcs $a$ in $A(D^{L^B})$. Then we have the following result.
\begin{lem}\label{lem-network-lb}
Let $D^{L^B}$ and $\mathrm{wt}_{D^{L^B}}$ be defined as above. Then
\[
\mathcal{L^B} = (D^{L^B},\mathrm{wt}_{D^{L^B}},(P_0,P_1,\ldots),(P'_0,P'_1,\ldots))
\]
is a planar network for $L^B$, or equivalently,
\[
L^B = \left(GF_{D^{L^B}}(P_i,P'_j)\right)_{i,j \ge 0}.
\]
\end{lem}
\pf By the definitions in Section \ref{sect-lgv} and the above construction, it is straightforward to verify that
\[
l_{i,j} = GF_{D^{L^B}}(P_i,P'_j)
\]
for $i,j \ge 0$. Then the proof follows. \qed
Figure \ref{fig-network-lb} provides an illustration of the planar network $\mathcal{L^B}$, where we only label the weights not equal to 1.
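As a concrete check of Lemma \ref{lem-network-lb}, one can recompute the entries $GF_{D^{L^B}}(P_i,P'_j)$ directly from the arc list. A minimal Python sketch (ours, using sympy, truncated to the first few rows) reads:
\begin{verbatim}
import sympy as sp

q = sp.symbols('q')

def LB_from_network(n):
    """(GF(P_i, P'_j))_{0<=i,j<=n}, from the arcs of D^{L^B}."""
    PQ = {(i, i): sp.Integer(1) for i in range(n + 1)}
    for i in range(1, n + 1):
        PQ[(i, i - 1)] = q                 # arcs P_i -> Q_{i-1}
    QP = {(i, i): sp.Integer(1) for i in range(n + 1)}
    for i in range(2, n + 1):
        QP[(i, i - 1)] = sp.Integer(1)     # arcs Q_i -> P'_{i-1}
    QP[(1, 0)] = sp.Integer(2)             # double arc Q_1 -> P'_0
    M = sp.zeros(n + 1, n + 1)
    for (i, k), w1 in PQ.items():
        for (l, j), w2 in QP.items():
            if k == l:
                M[i, j] += w1 * w2
    M[1, 0] += -1        # the direct arc P_1 -> P'_0 of weight -1
    return M.applyfunc(sp.expand)

print(LB_from_network(3))
# band matrix: diagonal 1, subdiagonal 1+q, then 2q, q, q, ...
\end{verbatim}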
\begin{figure}[htb]
\centering
\begin{tikzpicture}
[place/.style={thick,fill=black!100,circle,inner sep=0pt,minimum size=1mm,draw=black!100},scale=1.5]
\draw [thick] [->] (2.5,2.5) -- (3,2.5);
\draw [thick] [->] (3,2.5) -- (3.5,2.5) -- (4,2.5);
\draw [thick] (4,2.5) -- (4.5,2.5);
\node [place,label=above:{\footnotesize$P_0$}] at (2.5,2.5) {};
\node [place,label=above:{\footnotesize$Q_0$}] at (3.5,2.5) {};
\node [place,label=above:{\footnotesize$P'_0$}] at (4.5,2.5) {};
\draw [thick] [->](2.5,1.5) -- (3,1.5);
\draw [thick] [->] (3,1.5) -- (4,1.5);
\draw [thick] (4,1.5) -- (4.5,1.5);
\node [place,label=below:{\footnotesize$P_1$}] at (2.5,1.5) {};
\node [place,label=below:{\footnotesize$Q_1$}] at (3.5,1.5) {};
\node [place,label=below:{\footnotesize$P'_1$}] at (4.5,1.5) {};
\draw [thick] [->] (2.5,1.5) --(3,2);
\draw [thick] (3,2) -- (3.5,2.5) ;
\draw [thick] [->] (2.5,1.5) -- (3.5,2);
\draw [thick] (3.5,2) -- (4.5,2.5);
\draw [thick] [->] (2.5,0.5) -- (3,0.5);
\draw [thick] [->] (3,0.5) -- (4,0.5);
\draw [thick] (4,0.5) -- (4.5,0.5);
\node [place,label=below:{\footnotesize$P_2$}] at (2.5,0.5) {};
\node [place,label=below:{\footnotesize$Q_2$}] at (3.5,0.5) {};
\node [place,label=below:{\footnotesize$P'_2$}] at (4.5,0.5) {};
\draw [thick] [->] (2.5,0.5) -- (3,1);
\draw [thick] (3,1) -- (3.5,1.5);
\draw [thick] [->] (3.5,0.5) -- (4,1);
\draw [thick] (4,1) -- (4.5,1.5);
\draw [thick] [->] (2.5,-0.5) -- (3,-0.5);
\draw [thick] [->] (3,-0.5) -- (4,-0.5);
\draw [thick] (4,-0.5) -- (4.5,-0.5);
\node [place,label=below:{\footnotesize$P_3$}] at (2.5,-0.5) {};
\node [place,label=below:{\footnotesize$Q_3$}] at (3.5,-0.5) {};
\node [place,label=below:{\footnotesize$P'_3$}] at (4.5,-0.5) {};
\draw [thick] [->] (2.5,-0.5) -- (3,0);
\draw [thick] (3,0) -- (3.5,0.5);
\draw [thick] [->] (3.5,-0.5) -- (4,0);
\draw [thick] (4,0) -- (4.5,0.5);
\node [blue] at (2.75,2) {$q$};
\node [blue] at (3.375,2.15) {$-1$};
\node [blue] at (2.75,1) {$q$};
\node [blue] at (2.75,0) {$q$};
\draw [thick]plot[smooth, tension=.7] coordinates {(3.5,1.5) (4.1,1.9) (4.5,2.5)};
\draw [thick]plot[smooth, tension=.7] coordinates {(3.5,1.5) (3.9,2.1) (4.5,2.5)};
\draw [thick][->] (4.1,1.9) -- (4.11,1.91);
\draw [thick][->] (3.9,2.1) -- (3.91,2.11);
\node at (3.5,-1) {$\textbf{\vdots}$};
\node at (2.5,-1) {$\textbf{\vdots}$};
\node at (4.5,-1) {$\textbf{\vdots}$};
\end{tikzpicture}
\caption{The planar network $\mathcal{L^B}$}\label{fig-network-lb}
\end{figure}
The remaining part of this section is devoted to giving a combinatorial proof of the $q$-total positivity of $L^B$ by using the planar network $\mathcal{L^B}$. To this end, we define the following three properties of nonintersecting families $\mathbf{p}=(p_1,\ldots,p_k)$:
\begin{itemize}
\item[($\mathcal{P}_1$)] $p_1 = P_1\to P'_0$;
\item[($\mathcal{P}_2$)] $p_1 = P_1 \to Q_1 \overset{l}{\to} P'_0$;
\item[($\mathcal{P}_3$)] there exists $l$ ($l \ge 2$) such that $p_m = P_m \to Q_{m-1} \to P'_{m-1}$ for $1 \le m \le l-1$ and $p_l = P_l \to Q_l \to P'_{l-1}$.
\end{itemize}
Given a positive integer $k$ and two sequences
\begin{equation}\label{eq-ij}
\begin{split}
I& = (i_1,\ldots,i_k),\, \mbox{ where } 0 \le i_1 < \cdots < i_k,\\
J& = (j_1,\ldots,j_k),\, \mbox{ where } 0 \le j_1 < \cdots < j_k,
\end{split}
\end{equation}
let
\begin{align}\label{eq-pipjp}
\mathbf{P}_I=(P_{i_1},\ldots,P_{i_k}),\quad
\mathbf{P}'_J=(P'_{j_1},\ldots,P'_{j_k}),
\end{align}
and let
\begin{align}\label{eq-sij}
S_{I,J}& = \{\mathbf{p} \in \mathcal{N}_{D^{L^B}}(\mathbf{P}_I,\mathbf{P}'_J) \mid \mathbf{p} \mbox{ satisfies none of ($\mathcal{P}_1$), ($\mathcal{P}_2$), ($\mathcal{P}_3$)}\}.
\end{align}
It is clear that each nonintersecting family in $S_{I,J}$ has a $q$-nonnegative weight. By virtue of this, the following result provides a combinatorial interpretation for the $q$-total positivity of $L^B$.
\begin{thm}\label{thm-qtp-lb}
Let $I,J, \mathbf{P}_I,\mathbf{P}'_J, S_{I,J}$ be as given in \eqref{eq-ij}, \eqref{eq-pipjp} and \eqref{eq-sij} respectively, and let $L^B_{I,J}$ denote the submatrix of $L^B$ whose rows are indexed by $I$ and columns indexed by $J$. Then we have
\begin{align}\label{eq-lbij-det}
\det \left[L^B_{I,J}\right] = GF(S_{I,J}),
\end{align}
where $GF(S_{I,J})$ denotes the sum of weights of all elements in $S_{I,J}$. In particular, $\det \left[L^B_{I,J}\right]$ is $q$-nonnegative.
\end{thm}
\pf By Lemma \ref{lem-network-lb} and Lemma \ref{lem-lgv}, we have
\[
\det \left[L^B_{I,J}\right] = GF(\mathcal{N}(\mathbf{P}_I,\mathbf{P}'_J)),
\]
where we use
$\mathcal{N}(\mathbf{P}_I,\mathbf{P}'_J)$ to stand for $\mathcal{N}_{D^{L^B}}(\mathbf{P}_I,\mathbf{P}'_J)$ for convenience.
In the following we may assume that $\mathcal{N}(\mathbf{P}_I,\mathbf{P}'_J)\neq \emptyset$, otherwise, $S_{I,J}=\emptyset$, $\det \left[L^B_{I,J}\right] = 0$, and \eqref{eq-lbij-det} holds trivially.
For $i=1,2,3$, let
\begin{align*}
\mathcal{N}_i&=\{\mathbf{p} \in \mathcal{N}(\mathbf{P}_I,\mathbf{P}'_J) \mid \mathbf{p} \mbox{ satisfies } (\mathcal{P}_i)\}.
\end{align*}
Clearly, $\mathcal{N}(\mathbf{P}_I,\mathbf{P}'_J)$ is the disjoint union of
$\mathcal{N}_1,\mathcal{N}_2,\mathcal{N}_3$ and $S_{I,J}$.
Now it suffices to give an involution $\phi$ on $\mathcal{N}(\mathbf{P}_I,\mathbf{P}'_J)$ such that
any nonintersecting family $\mathbf{p}\in \mathcal{N}_1\cup \mathcal{N}_2\cup\mathcal{N}_3$ and its image $\phi(\mathbf{p})$
have opposite weights, and the restriction of $\phi$ to $S_{I,J}$ is the identity map. Thus, we only need to define the action of $\phi$ on $\mathcal{N}_1\cup \mathcal{N}_2\cup\mathcal{N}_3$.
Let us first consider the case $k=1$, for which we have $\mathcal{N}_3=\emptyset$. If $i_1=1,j_1=0$, then we have
$$\mathcal{N}_1=\{P_1\to P_0'\},\quad \mathcal{N}_2=\{P_1\to Q_1 \overset{l}{\to} P'_0\}.$$ Define $\phi$ to be the map which sends $P_1\to P_0'$ and $P_1\to Q_1 \overset{l}{\to} P'_0$ to each other. Note that the weight of $P_1\to P_0'$ is $-1$, while the weight of $P_1\to Q_1 \overset{l}{\to} P'_0$ is $1$. This establishes the desired involution. If $i_1\neq 1$ or $j_1\neq 0$, then
$\mathcal{N}_1=\mathcal{N}_2=\emptyset$ and hence $\mathcal{N}(\mathbf{P}_I,\mathbf{P}'_J)=S_{I,J}$; for this subcase, we simply take $\phi$ to be the identity map.
We proceed to define $\phi$ for $k\geq 2$. In this case we divide $\mathcal{N}_1$ into the following two subsets:
\begin{align*}
\mathcal{N}_{1,1}=\{\mathbf{p} \in \mathcal{N}_1 \mid p_2 = P_2 \to Q_1 \to P'_{1}\}, \quad \mathcal{N}_{1,2}= \mathcal{N}_1 \setminus \mathcal{N}_{1,1}.
\end{align*}
In the following, we will define a sign-reversing involution $\phi$ on $\mathcal{N}_1\cup \mathcal{N}_2\cup\mathcal{N}_3$ such that
\begin{align*}
\phi(\mathcal{N}_{1,1})=\mathcal{N}_3, \quad \phi(\mathcal{N}_{1,2})=\mathcal{N}_2.
\end{align*}
There are several subcases to consider.
\begin{itemize}
\item[(i)] If $i_1 = 0$, $i_1 \ge 2$, or $i_1=j_1 = 1$, then $\mathcal{N}_{1,1} = \mathcal{N}_{1,2} = \mathcal{N}_2 = \mathcal{N}_3 = \emptyset$. For these three situations, take $\phi$ to be the identity map.
\item[(ii)] If $i_1 = 1$, $j_1 = 0$, and moreover $i_2 \ge 3$ or $j_2 \ge 2$, then $\mathcal{N}_{1,1} = \mathcal{N}_3 = \emptyset$ and $\mathcal{N}_{1,2} = \mathcal{N}_1$. For these situations, we take $\phi$
to be the map which sends
\[
(P_1 \to P'_{0},p_2,\ldots,p_k)\in \mathcal{N}_{1,2} \quad \mbox{and} \quad
(P_1 \to Q_1 \overset{l}{\to} P'_{0},p_2,\ldots,p_k)\in \mathcal{N}_{2}
\]
to each other. It is clear that $(P_1 \to P'_{0},p_2,\ldots,p_k)$ and $(P_1 \to Q_1 \overset{l}{\to} P'_{0},p_2,\ldots,p_k)$ have opposite weights.
\item[(iii)] If $i_1 = 1$, $j_1 = 0$ and $i_2 = 2$, $j_2 = 1$, then
\begin{align*}
\mathcal{N}_{1,1}&=\{\mathbf{p}\in \mathcal{N}(P_I,P'_J) \mid p_1=P_1 \to P'_{0}, p_2 = P_2 \to Q_1 \to P'_{1}\}, \\
\mathcal{N}_{1,2}&=\{\mathbf{p} \in \mathcal{N}(P_I,P'_J) \mid p_1=P_1 \to P'_{0}, p_2 = P_2 \to Q_2 \to P'_{1}\}, \\
\mathcal{N}_{2}&=\{\mathbf{p} \in \mathcal{N}(P_I,P'_J) \mid p_1 = P_1 \to Q_1 \overset{l}{\to} P'_0, p_2 = P_2 \to Q_2 \to P'_{1}\},\\
\mathcal{N}_{3}&=\left\{\mathbf{p} \in \mathcal{N}(P_I,P'_J)\quad
\left | \begin{array}{ll}
&p_1 = P_1 \to Q_0 \to P'_0,\\
& \exists\, l \ge 2 \mbox{ such that } p_l = P_l \to Q_l \to P'_{l-1},
\\
&p_m = P_m \to Q_{m-1} \to P'_{m-1}, \forall\, 2 \le m \le l-1
\end{array}\right.
\right\}.
\end{align*}
Now we are going to define the map $\phi$ on $\mathcal{N}_1\cup \mathcal{N}_2\cup\mathcal{N}_3$.
If $\mathbf{p}\in \mathcal{N}_{1,1}$, say
$\mathbf{p}=(P_1 \to P'_{0},p_2,\ldots,p_k)$, then we take $l$ to be the largest number such that $p_m = P_m \to Q_{m-1} \to P'_{m-1}$ for $2 \le m \le l$, and let
\begin{align*}
\phi(\mathbf{p})&=(P_1 \to Q_0 \to P'_0,p_2,\ldots,p_{l-1},P_l \to Q_l \to P'_{l-1},p_{l+1},\ldots,p_k).
\end{align*}
Thus, $\phi(\mathbf{p})\in \mathcal{N}_3$.
If $\mathbf{p}\in \mathcal{N}_{3}$, namely, there exists $l \ge 2$ such that $\mathbf{p} = (P_1 \to Q_0 \to P'_0,p_2,\ldots,p_k)$ with $p_l = P_l \to Q_l \to P'_{l-1}$ and $p_m = P_m \to Q_{m-1} \to P'_{m-1}$ for any $2 \le m \le l-1 $, then let
\begin{align*}
\phi(\mathbf{p})&=(P_1 \to P'_0,p_2,\ldots,p_{l-1},P_l \to Q_{l-1} \to P'_{l-1},p_{l+1},\ldots,p_k).
\end{align*}
Here the map $\phi$ is well-defined since, if the number $l$ exists, then it must be unique by the definition of $(\mathcal{P}_3)$. It is also clear that $\phi(\mathbf{p})\in \mathcal{N}_{1,1}$.
If $\mathbf{p}\in \mathcal{N}_{1,2}$, say
$\mathbf{p}=(P_1 \to P'_{0},p_2,\ldots,p_k)$, then let
\begin{align*}
\phi(\mathbf{p})&=(P_1 \to Q_1 \overset{l}{\to} P'_0,p_2,\ldots,p_k).
\end{align*}
Hence, we have $\phi(\mathbf{p})\in \mathcal{N}_{2}$.
If $\mathbf{p}\in \mathcal{N}_{2}$, say
$\mathbf{p}=(P_1 \to Q_1 \overset{l}{\to} P'_0,p_2,\ldots,p_k)$, then let
\begin{align*}
\phi(\mathbf{p})&=(P_1 \to P'_{0},p_2,\ldots,p_k).
\end{align*}
It is obvious that $\phi(\mathbf{p})\in \mathcal{N}_{1,2}$.
Figure \ref{fig-phi-iii} gives an illustration of $\phi$ for this subcase.
\begin{figure}[ht]
\centering
\begin{tikzpicture}
[place/.style={thick,fill=black!100,circle,inner sep=0pt,minimum size=1mm,draw=black!100},scale=0.9]
\node [place,label=below:{\footnotesize$P_1$}] (v1) at (-2,0.5) {};
\node [place,label=below:{\footnotesize$P'_0$}] (v3) at (0,1) {};
\node [place,label=below:{\footnotesize$P_2$}] (v4) at (-2,-0.5) {};
\node [place,label=above:{\footnotesize$Q_1$}] (v5) at (-1,0) {};
\node [place,label=below:{\footnotesize$P'_1$}] (v6) at (0,0) {};
\node at (-1,-0.5) {$\vdots$};
\node [place,label=below:{\footnotesize$P_{l-1}$}] (v7) at (-2,-2) {};
\node [place,label=above:{\footnotesize$Q_{l-2}$}] (v8) at (-1,-1.5) {};
\node [place,label=below:{\footnotesize$P'_{l-2}$}] (v9) at (0,-1.5) {};
\node [place,label=below:{\footnotesize\footnotesize{$P_l$}}] (v12) at (-2,-3) {};
\node [place,label=above:{\footnotesize$Q_{l-1}$}] (v10) at (-1,-2.5) {};
\node [place,label=below:{\footnotesize$P'_{l-1}$}] (v11) at (0,-2.5) {};
\node at (-1,-4) {$\mathcal{N}_{1,1}$};
\draw [thick] (v1) -- (v3);
\draw [thick] (v4) -- (v5) -- (v6);
\draw [thick] (v7) -- (v8) -- (v9);
\draw [thick] (v12) -- (v10) -- (v11);
\node at (1.5,-0.5) {$\overset{\phi}{\leftrightarrow}$};
\node [place,label=below:{\footnotesize$P_1$}] (v13) at (3,0.5) {};
\node [place,label=above:{\footnotesize$Q_0$}] (v2) at (4,1) {};
\node [place,label=below:{\footnotesize$P'_0$}] (v14) at (5,1) {};
\node [place,label=below:{\footnotesize$P_2$}] (v15) at (3,-0.5) {};
\node [place,label=above:{\footnotesize$Q_1$}] (v16) at (4,0) {};
\node [place,label=below:{\footnotesize$P'_1$}] (v17) at (5,0) {};
\node at (4,-0.5) {$\vdots$};
\node [place,label=below:{\footnotesize$P_{l-1}$}] (v18) at (3,-2) {};
\node [place,label=above:{\footnotesize$Q_{l-2}$}] (v19) at (4,-1.5) {};
\node [place,label=below:{\footnotesize$P'_{l-2}$}] (v20) at (5,-1.5) {};
\node [place,label=below:{\footnotesize$P_l$}] (v21) at (3,-3) {};
\node [place,label=below:{\footnotesize$Q_{l}$}] (v22) at (4,-3) {};
\node [place,label=below:{\footnotesize$P'_{l-1}$}] (v23) at (5,-2.5) {};
\node at (4,-4) {$\mathcal{N}_{3}$};
\draw [thick] (v13) -- (v2) -- (v14);
\draw [thick] (v15) -- (v16) -- (v17);
\draw [thick] (v18) -- (v19) -- (v20);
\draw [thick] (v21) -- (v22) -- (v23);
\node [place,label=below:{\footnotesize$P_1$}] (v24) at (7.5,0.5) {};
\node [place,label=below:{\footnotesize$P'_0$}] (v25) at (9.5,1) {};
\node [place,label=below:{\footnotesize$P_2$}] (v26) at (7.5,-0.5) {};
\node[place,label=below:{\footnotesize$Q_2$}] (v27) at (8.5,-0.5) {};
\node[place,label=below:{\footnotesize$P'_1$}] (v28) at (9.5,0) {};
\node at (7.5,-1.75) {$\vdots$};
\node at (8.5,-1.75) {$\vdots$};
\node at (9.5,-1.75) {$\vdots$};
\node at (8.5,-4) {$\mathcal{N}_{1,2}$};
\draw [thick](v24) -- (v25);
\draw[thick] (v26) -- (v27) -- (v28);
\node at (10.75,-0.5) {$\overset{\phi}{\leftrightarrow}$};
\node [place,label=below:{\footnotesize$P_1$}] (v35) at (12,0.5) {};
\node[place,label=below:{\footnotesize$Q_1$}] (v36) at (13,0.5) {};
\node [place,label=below:{\footnotesize$P'_0$}]at (14,1) {};
\node[place,label=below:{\footnotesize$P_2$}] (v37) at (12,-0.5) {};
\node[place,label=below:{\footnotesize$Q_2$}] (v38) at (13,-0.5) {};
\node[place,label=below:{\footnotesize$P'_1$}] (v39) at (14,0) {};
\node at (12,-1.75) {$\vdots$};
\node at (13,-1.75) {$\vdots$};
\node at (14,-1.75) {$\vdots$};
\node at (13,-4) {$\mathcal{N}_{2}$};
\draw[thick] (v35) -- (v36);
\draw[thick] (v37) -- (v38) -- (v39);
\draw [thick]plot[smooth, tension=.7] coordinates {(13,0.5) (13.4,0.85) (14,1)};
\draw [thick][->](13.4,0.85)-- (13.41,0.86);
\end{tikzpicture}\caption{An illustration of $\phi$ for subcase (iii)}\label{fig-phi-iii}
\end{figure}
\end{itemize}
With the above definition of $\phi$, it is straightforward to verify that
$\phi$ is a sign-reversing involution on $\mathcal{N}_{1}\cup\mathcal{N}_{2}\cup\mathcal{N}_3$, as desired. \qed
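Theorem \ref{thm-qtp-lb} can also be checked numerically on small minors. A minimal Python sketch (ours) verifies, for a truncation of $L^B$, that all minors of order at most 3 have nonnegative coefficients:
\begin{verbatim}
import sympy as sp
from itertools import combinations

q = sp.symbols('q')

def LB_matrix(n):
    """The (n+1) x (n+1) leading block of L^B."""
    M = sp.zeros(n + 1, n + 1)
    for i in range(n + 1):
        M[i, i] = 1
        if i >= 1:
            M[i, i - 1] = 1 + q
        if i >= 2:
            M[i, i - 2] = 2 * q if i == 2 else q
    return M

M = LB_matrix(5)
for k in (1, 2, 3):
    for rows in combinations(range(6), k):
        for cols in combinations(range(6), k):
            minor = sp.expand(M[list(rows), list(cols)].det())
            coeffs = sp.Poly(minor, q).all_coeffs()
            assert all(c >= 0 for c in coeffs)   # q-nonnegative
\end{verbatim}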
\section{The Hankel matrix \texorpdfstring{$H$}{}}\label{sect-pf}
The aim of this section is to give a combinatorial proof of the $q$-total positivity of the Hankel matrix $H$ of Narayana polynomials of type $B$.
It suffices to combinatorially prove the $q$-total positivity of each
leading principal submatrix of $H$. To this end, let us first establish the planar networks for its leading principal submatrices.
Let $L^B_n=(l_{i,j}(q))_{0\le i,j \le n+1}$, $B_n=(b_{i,j}(q))_{0\le i,j \le n}$ and $H_n = (b_{i+j,0}(q))_{0 \le i,j \le n}$. By \eqref{eq-narab-recurrence} it is evident that
\begin{align}\label{eq-recurrence}
B_{n+1}=\bar{B}_nL^B_{n}, \mbox{ where } \bar{B}_n=
\begin{pmatrix}
1 & O\\
O & B_n
\end{pmatrix}.
\end{align}
Aigner \cite{Aig01} proved that
\begin{equation}\label{eq-Hankel-recurrence}
H_n=B_nT_nB_n^{t},
\end{equation}
where
\begin{equation*}
T_n = \begin{pmatrix}
1& & & & \\
&2q& & & \\
& &2q^2& & \\
& & &\ddots& \\
& & & &2q^n
\end{pmatrix}
_{(n+1) \times (n+1)},
\end{equation*}
and $B_n^t$ denotes the transpose of $B_n$.
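Both \eqref{eq-recurrence} and \eqref{eq-Hankel-recurrence} are easy to verify symbolically for small $n$. A minimal Python sketch (ours, using sympy) builds $B_n$ from \eqref{eq-recurrence} and checks that $B_nT_nB_n^t$ has entries $W_{i+j}(q)$:
\begin{verbatim}
import sympy as sp

q = sp.symbols('q')

def LB(n):
    """The (n+2) x (n+2) matrix L^B_n."""
    M = sp.zeros(n + 2, n + 2)
    for i in range(n + 2):
        M[i, i] = 1
        if i >= 1:
            M[i, i - 1] = 1 + q
        if i >= 2:
            M[i, i - 2] = 2 * q if i == 2 else q
    return M

def B(n):
    """B_n from the recursion B_{m+1} = bar(B_m) L^B_m."""
    M = sp.Matrix([[1]])
    for m in range(n):
        M = (sp.diag(1, M) * LB(m)).applyfunc(sp.expand)
    return M

n = 4
Bn = B(n)
Tn = sp.diag(*([1] + [2 * q**i for i in range(1, n + 1)]))
Hn = (Bn * Tn * Bn.T).applyfunc(sp.expand)
for i in range(n + 1):
    for j in range(n + 1):
        W = sum(sp.binomial(i+j, k)**2 * q**k for k in range(i+j+1))
        assert sp.expand(Hn[i, j] - W) == 0   # H_n = (W_{i+j}(q))
\end{verbatim}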
These two formulas allow us to recursively construct the planar network for $H_n$. Precisely, we mainly make use of the following lemma, which provides a way to build a network for a product of matrices. Recall that in a digraph, a vertex $v$ is called a source (resp. sink) if no arc points into (resp. out of) it. The following result could be considered as a corollary of the transfer-matrix method (see \cite[Theorem
4.7.1]{StaEC1} for instance), but for self-containedness we will give a detailed proof.
\begin{lem}\label{lem-transfer-matrix}
Given $k$ square matrices $M_1$, $M_2$, \ldots, $M_k$ of order $n$, for each $1 \le i \le k$ assume that $\mathcal{M}_i = (D^{M_i},\mathrm{wt}_{D^{M_i}},(u_{i,1},\ldots,u_{i,n}),
(v_{i,1},\ldots,v_{i,n}))$ is a planar network for $M_i$ with $u_{i,j}$ being a source and $v_{i,j}$ being a sink for all $1 \le j \le n$. Let $D^M$ be the digraph obtained by placing $D^{M_1}$, $D^{M_2}$, \ldots, $D^{M_k}$ in succession and identifying $v_{i,j}$ with $u_{i+1,j}$ for each $1 \le i \le k-1$ and $1 \le j \le n$, and let $\mathrm{wt}_{D^M}$ be the weight function inherited from $\mathrm{wt}_{D^{M_1}},\ldots,\mathrm{wt}_{D^{M_k}}$ in an obvious way. Then
\[
\mathcal{M} = (D^M, \mathrm{wt}_{D^M}, (u_{1,1},\ldots,u_{1,n}), (v_{k,1},\ldots,v_{k,n}))
\]
is a planar network for the product $M_1\cdots M_k$.
\end{lem}
\pf The proof is by induction on $k$. Let us first prove the base case $k = 2$. The construction shows that for any $1 \le i,j \le n$, each directed path from $u_{1,i}$ to $v_{2,j}$ must pass through exactly one vertex $v_{1,l}$ ($=u_{2,l}$) for some $1 \le l \le n$, and hence
\begin{equation*}
GF_{D^M}(u_{1,i},v_{2,j}) = \sum_{l=1}^{n} GF_{D^{M_1}}(u_{1,i},v_{1,l}) GF_{D^{M_2}}(u_{2,l},v_{2,j}),
\end{equation*}
as desired. Assume the assertion holds for some $k \ge 2$. By applying the preceding argument to $M_1\cdots M_k$ and $M_{k+1}$, we find that the assertion also holds for $k+1$. This completes the proof. \qed
Now we present the construction for the planar network $\mathcal{H}_n$ for $H_n$, which is essentially based on $\mathcal{L^B}$. At first, we give the planar network for $L^B_n$, which is actually obtained by cutting off the part of $\mathcal{L^B}$ below $y = -n-1$. Precisely, let $D^{L^B_n}$ be the subgraph of $D^{L^B}$ induced by the vertices $P_0,\ldots,P_{n+1},Q_0,\ldots,Q_{n+1},P'_0,\ldots,P'_{n+1}$, and let $\mathrm{wt}_{D^{L^B_n}}$ be the restriction of $\mathrm{wt}_{D^{L^B}}$ to $D^{L^B_n}$. Then
$(D^{L^B_n},\mathrm{wt}_{D^{L^B_n}},(P_0,\ldots,P_{n+1}),(P'_0,\ldots,P'_{n+1}))$ is a planar network for $L^B_n$. Unfortunately, this labeling is not convenient for introducing the recursive construction of $\mathcal{H}_n$. In the rest of this paper, we will label the vertex $P_i$ by $P^{(n)}_{n+1-i}$, $Q_i$ by $Q^{(n)}_{n+1-i}$, $P'_i$ by $P^{(n+1)}_{n+1-i}$ for $0 \le i \le n+1$ in the digraph $D^{L^B_n}$. Moreover, we may shift the digraphs $D^{L^B_n}$ in the plane such that $P^{(i)}_j = (2i,j)$ and $Q^{(i)}_j = (2i+1,j)$ for all $i,j \ge 0$.
Then
\[
\mathcal{L}^{\mathcal{B}}_n = (D^{L^B_n},\mathrm{wt}_{D^{L^B_n}},(P^{(n)}_{n+1},\ldots,P^{(n)}_0),(P^{(n+1)}_{n+1},\ldots,P^{(n+1)}_0))
\]
is a planar network for $L^B_n$, and
\[
L^B_n = \left(GF_{D^{L^B_n}}(P^{(n)}_{n+1-i},P^{(n+1)}_{n+1-j})\right)_{0 \le i,j \le n+1}.
\]
Figure \ref{fig-network-lb2} shows the planar network $\mathcal{L}^{\mathcal{B}}_2$.
\begin{figure}[htb]
\centering
\begin{tikzpicture}
[place/.style={thick,fill=black!100,circle,inner sep=0pt,minimum size=1mm,draw=black!100},scale=1.5]
\draw [thick] [->] (2.5,2.5) -- (3,2.5);
\draw [thick] [->] (3,2.5) -- (3.5,2.5) -- (4,2.5);
\draw [thick] (4,2.5) -- (4.5,2.5);
\node [place,label=above:{\footnotesize$P_3^{(2)}$}] at (2.5,2.5) {};
\node [place,label=above:{\footnotesize$Q_3^{(2)}$}] at (3.5,2.5) {};
\node [place,label=above:{\footnotesize$P_3^{(3)}$}] at (4.5,2.5) {};
\draw [thick] [->](2.5,1.5) -- (3,1.5);
\draw [thick] [->] (3,1.5) -- (4,1.5);
\draw [thick] (4,1.5) -- (4.5,1.5);
\node [place,label=below:{\footnotesize$P_2^{(2)}$}] at (2.5,1.5) {};
\node [place,label=below:{\footnotesize$Q_2^{(2)}$}] at (3.5,1.5) {};
\node [place,label=below:{\footnotesize$P_2^{(3)}$}] at (4.5,1.5) {};
\draw [thick] [->] (2.5,1.5) --(3,2);
\draw [thick] (3,2) -- (3.5,2.5) ;
\draw [thick] [->] (2.5,1.5) -- (3.5,2);
\draw [thick] (3.5,2) -- (4.5,2.5);
\draw [thick]plot[smooth, tension=.7] coordinates {(3.5,1.5) (4.1,1.9) (4.5,2.5)};
\draw [thick]plot[smooth, tension=.7] coordinates {(3.5,1.5) (3.9,2.1) (4.5,2.5)};
\draw [thick][->] (4.1,1.9) -- (4.11,1.91);
\draw [thick][->] (3.9,2.1) -- (3.91,2.11);
\draw [thick] [->] (2.5,0.5) -- (3,0.5);
\draw [thick] [->] (3,0.5) -- (4,0.5);
\draw [thick] (4,0.5) -- (4.5,0.5);
\node [place,label=below:{\footnotesize$P_1^{(2)}$}] at (2.5,0.5) {};
\node [place,label=below:{\footnotesize$Q_1^{(2)}$}] at (3.5,0.5) {};
\node [place,label=below:{\footnotesize$P_1^{(3)}$}] at (4.5,0.5) {};
\draw [thick] [->] (2.5,0.5) -- (3,1);
\draw [thick] (3,1) -- (3.5,1.5);
\draw [thick] [->] (3.5,0.5) -- (4,1);
\draw [thick] (4,1) -- (4.5,1.5);
\draw [thick] [->] (2.5,-0.5) -- (3,-0.5);
\draw [thick] [->] (3,-0.5) -- (4,-0.5);
\draw [thick] (4,-0.5) -- (4.5,-0.5);
\node [place,label=below:{\footnotesize$P_0^{(2)}$}] at (2.5,-0.5) {};
\node [place,label=below:{\footnotesize$Q_0^{(2)}$}] at (3.5,-0.5) {};
\node [place,label=below:{\footnotesize$P_0^{(3)}$}] at (4.5,-0.5) {};
\draw [thick] [->] (2.5,-0.5) -- (3,0);
\draw [thick] (3,0) -- (3.5,0.5);
\draw [thick] [->] (3.5,-0.5) -- (4,0);
\draw [thick] (4,0) -- (4.5,0.5);
\node [blue] at (2.75,2) {$q$};
\node [blue] at (3.375,1.75) {$-1$};
\node [blue] at (2.75,1) {$q$};
\node [blue] at (2.75,0) {$q$};
\end{tikzpicture}
\caption{Planar network $\mathcal{L}^{\mathcal{B}}_2$}\label{fig-network-lb2}
\end{figure}
Using the planar networks $\mathcal{L}^{\mathcal{B}}_0,\mathcal{L}^{\mathcal{B}}_1,\ldots,\mathcal{L}^{\mathcal{B}}_{n}$, we can construct the planar network $\mathcal{B}_{n+1}$ for $B_{n+1}$.
\begin{itemize}
\item For $n=0$, we take $\mathcal{B}_{1}$ to be the planar network $\mathcal{L}^{\mathcal{B}}_{0}$ since $B_1 = L^B_0$.
\item Assuming that $\mathcal{B}_{n}$
has been constructed for some $n \geq 1$, we continue to construct $\mathcal{B}_{n+1}$. Let $D^{\bar{B}_n}$ be the digraph with $V(D^{\bar{B}_n}) = V(D^{B_n}) \cup \{P_{n+1}^{(0)},P_{n+1}^{(1)},\ldots,P_{n+1}^{(n)}\}$ and $A(D^{\bar{B}_n}) = A(D^{B_n}) \cup \{P_{n+1}^{(i)} \to P_{n+1}^{(i+1)} \mid 0\leq i\leq n-1\}$, and let $\mathrm{wt}_{D^{\bar{B}_n}}(a)$ be equal to $\mathrm{wt}_{D^{B_n}}(a)$ for $a \in A(D^{B_n})$ and equal to 1 for the other arcs. Then
\[
\mathcal{\bar{B}}_n = (D^{\bar{B}_n},\mathrm{wt}_{D^{\bar{B}_n}}, (P_{n+1}^{(0)},P_n^{(0)},\ldots,P_0^{(0)}), (P_{n+1}^{(n)},P_n^{(n)},\ldots,P_0^{(n)}))
\]
is a planar network for $\bar{B}_n$.
By \eqref{eq-recurrence} and Lemma \ref{lem-transfer-matrix}, we obtain that
\[
\mathcal{B}_{n+1} = (D^{B_{n+1}},\mathrm{wt}_{D^{B_{n+1}}}, (P_{n+1}^{(0)},P_{n}^{(0)},\ldots,P_{0}^{(0)}), (P_{n+1}^{(n+1)},P_{n}^{(n+1)},\ldots,P_{0}^{(n+1)}))
\]
is a planar network for $B_{n+1}$, where $D^{B_{n+1}}$ and $\mathrm{wt}_{D^{B_{n+1}}}$ are defined in the way as described in Lemma \ref{lem-transfer-matrix}. See Figure \ref{fig-db3} for an illustration of $D^{B_3}$.
\begin{figure}[htb]
\centering
\begin{tikzpicture}
[place/.style={thick,fill=black!100,circle,inner sep=0pt,minimum size=1mm,draw=black!100},scale=1.5]
\draw [thick] [->] (-1.5,2.5) -- (-0.5,2.5);
\draw [thick] [->] (-0.5,2.5) -- (1.5,2.5);
\draw [thick] (1.5,2.5) -- (2.5,2.5);
\draw [thick] [->] (2.5,2.5) -- (3,2.5);
\draw [thick] [->] (3,2.5) -- (3.5,2.5) -- (4,2.5);
\draw [thick] (4,2.5) -- (4.5,2.5);
\node [place,label=below:{\footnotesize$P_3^{(0)}$}] at (-1.5,2.5) {};
\node [place,label=below:{\footnotesize$P_3^{(1)}$}] at (0.5,2.5) {};
\node [place,label=below:{\footnotesize$P_3^{(2)}$}] at (2.5,2.5) {};
\node [place,label=below:{\footnotesize$Q_3^{(2)}$}] at (3.5,2.5) {};
\node [place,label=below:{\footnotesize$P_3^{(3)}$}] at (4.5,2.5) {};
\draw [thick] [->] (-1.5,1.5) -- (-0.5,1.5);
\draw [thick] [->] (-0.5,1.5) -- (1,1.5);
\draw [thick] [->] (1,1.5) -- (2,1.5);
\draw [thick] (2,1.5) --(2.5,1.5);
\draw [thick][->] (2.5,1.5) -- (3,1.5);
\draw [thick] (3,1.5) --(3.5,1.5);
\draw [thick][->] (3.5,1.5) -- (4,1.5);
\draw [thick] (4,1.5) -- (4.5,1.5);
\node [place,label=below:{\footnotesize$P_2^{(0)}$}] at (-1.5,1.5) {};
\node [place,label=below:{\footnotesize$P_2^{(1)}$}] at (0.5,1.5) {};
\node [place,label=below:{\footnotesize$Q_2^{(1)}$}] at (1.5,1.5) {};
\node [place,label=below:{\footnotesize$P_2^{(2)}$}] at (2.5,1.5) {};
\node [place,label=below:{\footnotesize$Q_2^{(2)}$}] at (3.5,1.5) {};
\node [place,label=below:{\footnotesize$P_2^{(3)}$}] at (4.5,1.5) {};
\draw [thick] [->] (2.5,1.5) --(3,2);
\draw [thick] (3,2) -- (3.5,2.5) ;
\draw [thick] [->] (2.5,1.5) -- (3.5,2);
\draw [thick] (3.5,2) -- (4.5,2.5);
\draw [thick] [->] (-1.5,0.5) -- (-1,0.5);
\draw [thick] [->] (-1,0.5) -- (0,0.5);
\draw [thick] (0,0.5) -- (0.5,0.5);
\draw [thick] [->] (0.5,0.5) -- (1,0.5);
\draw [thick] [->] (1,0.5) -- (2,0.5);
\draw [thick] (2,0.5) -- (2.5,0.5);
\draw [thick] [->] (2.5,0.5) -- (3,0.5);
\draw [thick] [->] (3,0.5) -- (4,0.5);
\draw [thick] (4,0.5) -- (4.5,0.5);
\node [place,label=below:{\footnotesize$P_1^{(0)}$}] at (-1.5,0.5) {};
\node [place,label=below:{\footnotesize$Q_1^{(0)}$}] at (-0.5,0.5) {};
\node [place,label=below:{\footnotesize$P_1^{(1)}$}] at (0.5,0.5) {};
\node [place,label=below:{\footnotesize$Q_1^{(1)}$}] at (1.5,0.5) {};
\node [place,label=below:{\footnotesize$P_1^{(2)}$}] at (2.5,0.5) {};
\node [place,label=below:{\footnotesize$Q_1^{(2)}$}] at (3.5,0.5) {};
\node [place,label=below:{\footnotesize$P_1^{(3)}$}] at (4.5,0.5) {};
\draw [thick] [->] (0.5,0.5) --(1,1);
\draw [thick] (1,1) -- (1.5,1.5);
\draw [thick] (1.5,1) -- (2.5,1.5);
\draw [thick] [->] (2.5,0.5) -- (3,1);
\draw [thick] (3,1) -- (3.5,1.5);
\draw [thick] [->] (3.5,0.5) -- (4,1);
\draw [thick] (4,1) -- (4.5,1.5);
\draw [thick] [->] (-1.5,-0.5) -- (-1,-0.5);
\draw [thick] [->] (-1,-0.5) -- (-0.5,-0.5) -- (0,-0.5);
\draw [thick] [->] (0,-0.5) -- (1,-0.5);
\draw [thick] [->] (1,-0.5) -- (1.5,-0.5) -- (2,-0.5);
\draw [thick] [->] (2,-0.5) -- (2.5,-0.5) -- (3,-0.5);
\draw [thick] [->] (3,-0.5) -- (3.5,-0.5) -- (4,-0.5);
\draw [thick] (4,-0.5) -- (4.5,-0.5);
\node [place,label=below:{\footnotesize$P_0^{(0)}$}] at (-1.5,-0.5) {};
\node [place,label=below:{\footnotesize$Q_0^{(0)}$}] at (-0.5,-0.5) {};
\node [place,label=below:{\footnotesize$P_0^{(1)}$}] at (0.5,-0.5) {};
\node [place,label=below:{\footnotesize$Q_0^{(1)}$}] at (1.5,-0.5) {};
\node [place,label=below:{\footnotesize$P_0^{(2)}$}] at (2.5,-0.5) {};
\node [place,label=below:{\footnotesize$Q_0^{(2)}$}] at (3.5,-0.5) {};
\node [place,label=below:{\footnotesize$P_0^{(3)}$}] at (4.5,-0.5) {};
\draw [thick] [->] (0.5,-0.5) -- (1,0);
\draw [thick] (1,0) --(1.5,0.5);
\draw [thick] [->] (1.5,-0.5) -- (2,0);
\draw [thick] (2,0) -- (2.5,0.5);
\draw [thick] [->] (2.5,-0.5) -- (3,0);
\draw [thick] (3,0) -- (3.5,0.5);
\draw [thick] [->] (3.5,-0.5) -- (4,0);
\draw [thick] (4,0) -- (4.5,0.5);
\draw [thick] [->] (-1.5,-0.5) -- (-1,0);
\draw [thick] (-1,0) -- (-0.5,0.5);
\draw [thick] [->] (-1.5,-0.5) -- (-0.5,0);
\draw [thick] (-0.5,0) -- (0.5,0.5);
\draw [thick][->] (0.5,0.5) -- (1.5,1);
\draw [thick]plot[smooth, tension=.7] coordinates {(-0.5,-0.5) (0.1,-0.1) (0.5,0.5)};
\draw [thick]plot[smooth, tension=.7] coordinates {(-0.5,-0.5) (-0.1,0.1) (0.5,0.5)};
\draw [thick]plot[smooth, tension=.7] coordinates {(1.5,0.5) (2.1,0.9) (2.5,1.5)};
\draw [thick]plot[smooth, tension=.7] coordinates {(1.5,0.5) (1.9,1.1) (2.5,1.5)};
\draw [thick]plot[smooth, tension=.7] coordinates {(3.5,1.5) (4.1,1.9) (4.5,2.5)};
\draw [thick]plot[smooth, tension=.7] coordinates {(3.5,1.5) (3.9,2.1) (4.5,2.5)};
\draw [thick][->] (0.1,-0.1) -- (0.11,-0.09);
\draw [thick][->] (-0.1,0.1) -- (-0.09,0.11);
\draw [thick][->] (2.1,0.9) -- (2.11,0.91);
\draw [thick][->] (1.9,1.1) -- (1.91,1.11);
\draw [thick][->] (4.1,1.9) -- (4.11,1.91);
\draw [thick][->] (3.9,2.1) -- (3.91,2.11);
\end{tikzpicture}
\caption{Digraph $D^{B_3}$}\label{fig-db3}
\end{figure}
\end{itemize}
Based on \eqref{eq-Hankel-recurrence} and Lemma \ref{lem-transfer-matrix}, we proceed to build the planar network for $H_n$ from $\mathcal{B}_n$. First, we construct a planar network for $B_n^t$. We take $D^{B_n^t}$ to be the digraph obtained by reflecting $D^{B_n}$ about the vertical line $x = 2n + 1/2$ and reversing the direction of all arcs. We label the image of $P_j^{(i)}$ (resp. $Q_j^{(i)}$) by $\bar{P}_j^{(i)}$ (resp. $\bar{Q}_j^{(i)}$), and we let $\mathrm{wt}_{D^{B_n^t}}$ be the function which assigns to each arc of $D^{B_n^t}$ the weight of its preimage. Then it is easy to verify that
\[
\mathcal{B}_n^t = (D^{B_n^t}, \mathrm{wt}_{D^{B_n^t}}, (\bar{P}_{n}^{(n)},\ldots,\bar{P}_{0}^{(n)}), (\bar{P}_{n}^{(0)},\ldots,\bar{P}_{0}^{(0)}))
\]
is a planar network for $B_n^t$. Next, we define $D^{T_n}$ to be the digraph whose vertex set is $\{P_{i}^{(n)} \mid 0 \le i \le n\} \cup \{\bar{P}_{j}^{(n)} \mid 0 \le j \le n\}$ and arc set is $\{P_{i}^{(n)} \to \bar{P}_{i}^{(n)} \mid 0 \le i \le n\}$, and let $\mathrm{wt}_{D^{T_n}}(P_{i}^{(n)} \to \bar{P}_{i}^{(n)}) = (T_n)_{i,i}$ for $0 \le i \le n$. Then
\[
\mathcal{T}_n = (D^{T_n},\mathrm{wt}_{D^{T_n}},(P_{n}^{(n)},\ldots,P_{0}^{(n)}),(\bar{P}_{n}^{(n)},\ldots,\bar{P}_{0}^{(n)}))
\]
is a planar network for $T_n$. Finally, we combine $\mathcal{B}_n$, $\mathcal{T}_n$ and $\mathcal{B}_n^t$ to get the following planar network for $H_n$:
\begin{align}\label{eq-network-hn}
\mathcal{H}_n = (D^{H_{n}}, \mathrm{wt}_{D^{H_{n}}}, (P_{n}^{(0)},\ldots,P_{0}^{(0)}),(\bar{P}_{n}^{(0)},\ldots,\bar{P}_{0}^{(0)})),
\end{align}
where $D^{H_{n}}$ and $\mathrm{wt}_{D^{H_{n}}}$ are defined as described in Lemma \ref{lem-transfer-matrix}. Figure \ref{fig-dh3} shows the digraph $D^{H_3}$.
\begin{figure}[htb]
\begin{tikzpicture}
[place/.style={thick,fill=black!100,circle,inner sep=0pt,minimum size=1mm,draw=black!100},scale=1.15]
\draw [thick] [->] (-1.5,2.5) -- (-0.5,2.5);
\draw [thick] (-0.5,2.5) -- (0.5,2.5);
\draw [thick] [->] (0.5,2.5) -- (1.5,2.5);
\draw [thick] (1.5,2.5) -- (2.5,2.5);
\draw [thick] [->] (2.5,2.5) -- (3,2.5);
\draw [thick] [->] (3,2.5) -- (3.5,2.5) -- (4,2.5);
\draw [thick] [->] (4,2.5) -- (4.5,2.5) -- (5,2.5);
\draw [thick] [->] (5,2.5) -- (5.5,2.5) -- (6,2.5);
\draw [thick] [->] (6,2.5) -- (6.5,2.5) -- (7,2.5);
\draw [thick] [->] (7,2.5) -- (8.5,2.5);
\draw [thick] (8.5,2.5) -- (9.5,2.5);
\draw [thick] [->] (9.5,2.5) -- (10.5,2.5);
\draw [thick] (10.5,2.5) -- (11.5,2.5);
\node [place,label=below:{\tiny$P_3^{(0)}$}] at (-1.5,2.5) {};
\node [place,label=below:{\tiny$P_3^{(1)}$}] at (0.5,2.5) {};
\node [place,label=below:{\tiny$P_3^{(2)}$}] at (2.5,2.5) {};
\node [place,label=below:{\tiny$Q_3^{(2)}$}] at (3.5,2.5) {};
\node [place,label=below:{\tiny$P_3^{(3)}$}] at (4.5,2.5) {};
\node [place,label=below:{\tiny$\bar{P}_3^{(3)}$}] at (5.5,2.5) {};
\node [place,label=below:{\tiny$\bar{Q}_3^{(2)}$}] at (6.5,2.55) {};
\node [place,label=below:{\tiny$\bar{P}_3^{(2)}$}] at (7.5,2.5) {};
\node [place,label=below:{\tiny$\bar{P}_3^{(1)}$}] at (9.5,2.5) {};
\node [place,label=below:{\tiny$\bar{P}_3^{(0)}$}] at (11.5,2.5) {};
\draw [thick]plot[smooth, tension=.7] coordinates {(5.5,2.5) (6.1,2.1) (6.5,1.5)};
\draw [thick]plot[smooth, tension=.7] coordinates {(5.5,2.5) (5.9,1.9) (6.5,1.5)};
\draw [thick] [->] (6.09,2.11) -- (6.1,2.1);
\draw [thick] [->] (5.89,1.91) -- (5.9,1.9);
\draw [thick] [->] (5.5,2.5) -- (6.5,2);
\draw [thick] (6.5,2) -- (7.5,1.5);
\draw [thick] [->] (6.5,2.5) -- (7,2);
\draw [thick] (7,2) -- (7.5,1.5);
\draw [thick] [->] (-1.5,1.5) -- (-0.5,1.5);
\draw [thick] (-0.5,1.5) -- (0.5,1.5);
\draw [thick] [->] (0.5,1.5) -- (1,1.5);
\draw [thick] [->] (1,1.5) -- (1.5,1.5) -- (2,1.5);
\draw [thick] [->] (2,1.5) -- (2.5,1.5) -- (3,1.5);
\draw [thick] [->] (3,1.5) -- (3.5,1.5) -- (4,1.5);
\draw [thick] [->] (4,1.5) -- (4.5,1.5) -- (5,1.5);
\draw [thick] [->] (5,1.5) -- (5.5,1.5) -- (6,1.5);
\draw [thick] [->] (6,1.5) -- (6.5,1.5) -- (7,1.5);
\draw [thick] [->] (7,1.5) -- (7.5,1.5) -- (8,1.5);
\draw [thick] [->] (8,1.5) -- (8.5,1.5) -- (9,1.5);
\draw [thick] [->] (9,1.5) -- (10.5,1.5);
\draw [thick] (10.5,1.5) -- (11.5,1.5);
\node [place,label=below:{\tiny$P_2^{(0)}$}] at (-1.5,1.5) {};
\node [place,label=below:{\tiny$P_2^{(1)}$}] at (0.5,1.5) {};
\node [place,label=below:{\tiny$Q_2^{(1)}$}] at (1.5,1.5) {};
\node [place,label=below:{\tiny$P_2^{(2)}$}] at (2.5,1.5) {};
\node [place,label=below:{\tiny$Q_2^{(2)}$}] at (3.5,1.5) {};
\node [place,label=below:{\tiny$P_2^{(3)}$}] at (4.5,1.5) {};
\node [place,label=below:{\tiny$\bar{P}_2^{(3)}$}] at (5.5,1.5) {};
\node [place,label=below:{\tiny$\bar{Q}_2^{(2)}$}] at (6.5,1.5) {};
\node [place,label=below:{\tiny$\bar{P}_2^{(2)}$}] at (7.5,1.5) {};
\node [place,label=below:{\tiny$\bar{Q}_2^{(1)}$}] at (8.5,1.55) {};
\node [place,label=below:{\tiny$\bar{P}_2^{(1)}$}] at (9.5,1.5) {};
\node [place,label=below:{\tiny$\bar{P}_2^{(0)}$}] at (11.5,1.5) {};
\draw [thick] [->] (2.5,1.5) --(3,2);
\draw [thick] (3,2) -- (3.5,2.5) ;
\draw [thick] [->] (2.5,1.5) -- (3.5,2);
\draw [thick] (3.5,2) -- (4.5,2.5);
\draw [thick]plot[smooth, tension=.7] coordinates {(3.5,1.5) (4.1,1.9) (4.5,2.5)};
\draw [thick]plot[smooth, tension=.7] coordinates {(3.5,1.5) (3.9,2.1) (4.5,2.5)};
\draw [thick][->] (4.1,1.9) -- (4.11,1.91);
\draw [thick][->] (3.9,2.1) -- (3.91,2.11);
\draw [thick] [->] (5.5,1.5) --(6,1);
\draw [thick] (6,1) -- (6.5,0.5);
\draw [thick] [->] (6.5,1.5) -- (7,1);
\draw [thick] (7,1) -- (7.5,0.5);
\draw [thick]plot[smooth, tension=.7] coordinates {(7.5,1.5) (8.1,1.1) (8.5,0.5)};
\draw [thick]plot[smooth, tension=.7] coordinates {(7.5,1.5) (7.9,0.9) (8.5,0.5)};
\draw [thick] [->] (8.09,1.11) -- (8.1,1.1);
\draw [thick] [->] (7.89,0.91) -- (7.9,0.9);
\draw [thick] [->] (7.5,1.5) -- (8.5,1);
\draw [thick] (8.5,1) -- (9.5,0.5);
\draw [thick] [->] (8.5,1.5) -- (9,1);
\draw [thick] (9,1) -- (9.5,0.5);
\draw [thick] [->] (-1.5,0.5) -- (-1,0.5);
\draw [thick] [->] (-1,0.5) -- (-0.5,0.5) -- (0,0.5);
\draw [thick] [->] (0,0.5) -- (0.5,0.5) -- (1,0.5);
\draw [thick] [->] (1,0.5) -- (1.5,0.5) -- (2,0.5);
\draw [thick] [->] (2,0.5) -- (2.5,0.5) -- (3,0.5);
\draw [thick] [->] (3,0.5) -- (3.5,0.5) -- (4,0.5);
\draw [thick] [->] (4,0.5) -- (4.5,0.5) -- (5,0.5);
\draw [thick] [->] (5,0.5) -- (5.5,0.5) -- (6,0.5);
\draw [thick] [->] (6,0.5) -- (6.5,0.5) -- (7,0.5);
\draw [thick] [->] (7,0.5) -- (7.5,0.5) -- (8,0.5);
\draw [thick] [->] (8,0.5) -- (8.5,0.5) -- (9,0.5);
\draw [thick] [->] (9,0.5) -- (9.5,0.5) -- (10,0.5);
\draw [thick] (10,0.5) -- (10.5,0.5);
\draw [thick] [->] (10.5,0.5) -- (11,0.5);
\draw [thick] (11,0.5) -- (11.5,0.5);
\node [place,label=below:{\tiny$P_1^{(0)}$}] at (-1.5,0.5) {};
\node [place,label=below:{\tiny$Q_1^{(0)}$}] at (-0.5,0.5) {};
\node [place,label=below:{\tiny$P_1^{(1)}$}] at (0.5,0.5) {};
\node [place,label=below:{\tiny$Q_1^{(1)}$}] at (1.5,0.5) {};
\node [place,label=below:{\tiny$P_1^{(2)}$}] at (2.5,0.5) {};
\node [place,label=below:{\tiny$Q_1^{(2)}$}] at (3.5,0.5) {};
\node [place,label=below:{\tiny$P_1^{(3)}$}] at (4.5,0.5) {};
\node [place,label=below:{\tiny$\bar{P}_1^{(3)}$}] at (5.5,0.5) {};
\node [place,label=below:{\tiny$\bar{Q}_1^{(2)}$}] at (6.5,0.5) {};
\node [place,label=below:{\tiny$\bar{P}_1^{(2)}$}] at (7.5,0.5) {};
\node [place,label=below:{\tiny$\bar{Q}_1^{(1)}$}] at (8.5,0.5) {};
\node [place,label=below:{\tiny$\bar{P}_1^{(1)}$}] at (9.5,0.5) {};
\node [place,label=below:{\tiny$\bar{Q}_1^{(0)}$}] at (10.5,0.55) {};
\node [place,label=below:{\tiny$\bar{P}_1^{(0)}$}] at (11.5,0.5) {};
\draw [thick] [->] (0.5,0.5) --(1,1);
\draw [thick] (1,1) -- (1.5,1.5);
\draw [thick] [->] (0.5,0.5) -- (1.5,1);
\draw [thick] (1.5,1) -- (2.5,1.5);
\draw [thick]plot[smooth, tension=.7] coordinates {(1.5,0.5) (2.1,0.9) (2.5,1.5)};
\draw [thick]plot[smooth, tension=.7] coordinates {(1.5,0.5) (1.9,1.1) (2.5,1.5)};
\draw [thick][->] (2.1,0.9) -- (2.11,0.91);
\draw [thick][->] (1.9,1.1) -- (1.91,1.11);
\draw [thick] [->] (2.5,0.5) -- (3,1);
\draw [thick] (3,1) -- (3.5,1.5);
\draw [thick] [->] (3.5,0.5) -- (4,1);
\draw [thick] (4,1) -- (4.5,1.5);
\draw [thick] [->] (-1.5,-0.5) -- (-1,-0.5);
\draw [thick] [->] (-1,-0.5) -- (0,-0.5);
\draw [thick] [->] (0,-0.5) -- (1,-0.5);
\draw [thick] [->] (1,-0.5) -- (2,-0.5);
\draw [thick] [->] (2,-0.5) -- (3,-0.5);
\draw [thick] [->] (3,-0.5) -- (4,-0.5);
\draw [thick] [->] (4,-0.5) -- (5,-0.5);
\draw [thick] [->] (5,-0.5) -- (6,-0.5);
\draw [thick] [->] (6,-0.5) -- (7,-0.5);
\draw [thick] [->] (7,-0.5) -- (8,-0.5);
\draw [thick] [->] (8,-0.5) -- (9,-0.5);
\draw [thick] [->] (9,-0.5) -- (10,-0.5);
\draw [thick] (10,-0.5) -- (10.5,-0.5);
\draw [thick] [->] (10.5,-0.5) -- (11,-0.5);
\draw [thick] (11,-0.5) -- (11.5,-0.5);
\node [place,label=below:{\tiny$Q_0^{(0)}$}] at (-0.5,-0.5) {};
\node [place,label=below:{\tiny$P_0^{(0)}$}] at (-1.5,-0.5) {};
\node [place,label=below:{\tiny$P_0^{(1)}$}] at (0.5,-0.5) {};
\node [place,label=below:{\tiny$Q_0^{(1)}$}] at (1.5,-0.5) {};
\node [place,label=below:{\tiny$P_0^{(2)}$}] at (2.5,-0.5) {};
\node [place,label=below:{\tiny$Q_0^{(2)}$}] at (3.5,-0.5) {};
\node [place,label=below:{\tiny$P_0^{(3)}$}] at (4.5,-0.5) {};
\node [place,label=below:{\tiny$\bar{P}_0^{(3)}$}] at (5.5,-0.5) {};
\node [place,label=below:{\tiny$\bar{Q}_0^{(2)}$}] at (6.5,-0.5) {};
\node [place,label=below:{\tiny$\bar{P}_0^{(2)}$}] at (7.5,-0.5) {};
\node [place,label=below:{\tiny$\bar{Q}_0^{(1)}$}] at (8.5,-0.5) {};
\node [place,label=below:{\tiny$\bar{P}_0^{(1)}$}] at (9.5,-0.5) {};
\node [place,label=below:{\tiny$\bar{Q}_0^{(0)}$}] at (10.5,-0.5) {};
\node [place,label=below:{\tiny$\bar{P}_0^{(0)}$}] at (11.5,-0.5) {};
\draw [thick] [->] (-1.5,-0.5) -- (-1,0);
\draw [thick] (-1,0) -- (-0.5,0.5);
\draw [thick] [->] (-1.5,-0.5) -- (-0.5,0);
\draw [thick] (-0.5,0) -- (0.5,0.5);
\draw [thick]plot[smooth, tension=.7] coordinates {(-0.5,-0.5) (0.1,-0.1) (0.5,0.5)};
\draw [thick]plot[smooth, tension=.7] coordinates {(-0.5,-0.5) (-0.1,0.1) (0.5,0.5)};
\draw [thick][->] (0.1,-0.1) -- (0.11,-0.09);
\draw [thick][->] (-0.1,0.1) -- (-0.09,0.11);
\draw [thick] [->] (0.5,-0.5) -- (1,0);
\draw [thick] (1,0) --(1.5,0.5);
\draw [thick] [->] (1.5,-0.5) -- (2,0);
\draw [thick] (2,0) -- (2.5,0.5);
\draw [thick] [->] (2.5,-0.5) -- (3,0);
\draw [thick] (3,0) -- (3.5,0.5);
\draw [thick] [->] (3.5,-0.5) -- (4,0);
\draw [thick] (4,0) -- (4.5,0.5);
\draw [thick] [->] (5.5,0.5) -- (6,0);
\draw [thick] (6,0) -- (6.5,-0.5);
\draw [thick] [->] (6.5,0.5) -- (7,0);
\draw [thick] (7,0) -- (7.5,-0.5);
\draw [thick] [->] (7.5,0.5) -- (8,0);
\draw [thick] (8,0) -- (8.5,-0.5);
\draw [thick] [->] (8.5,0.5) -- (9,0);
\draw [thick] (9,0) -- (9.5,-0.5);
\draw [thick]plot[smooth, tension=.7] coordinates {(9.5,0.5) (10.1,0.1) (10.5,-0.5)};
\draw [thick]plot[smooth, tension=.7] coordinates {(9.5,0.5) (9.9,-0.1) (10.5,-0.5)};
\draw [thick] [->] (10.09,0.11) -- (10.1,0.1);
\draw [thick] [->] (9.89,-0.09) -- (9.9,-0.1);
\draw [thick] [->] (9.5,0.5) -- (10.5,0);
\draw [thick] (10.5,0) -- (11.5,-0.5);
\draw [thick] [->] (10.5,0.5) -- (11,0);
\draw [thick] (11,0) -- (11.5,-0.5);
\end{tikzpicture}
\caption{Digraph $D^{H_3}$}\label{fig-dh3}
\end{figure}
We are now in a position to give a combinatorial proof of the $q$-total positivity of $H_n$ for any nonnegative integer $n$.
Given a positive integer $k$ and two sequences $I = (i_1,\ldots,i_k),\,J = (j_1,\ldots,j_k)$ of indices such that $0 \le i_1 < \cdots < i_k\leq n$ and $0 \le j_1 < \cdots < j_k\leq n$, let
\begin{align}\label{eq-pipjbar}
\mathbf{P}_I=(P_{n-i_1}^{(0)},\ldots,P_{n-i_k}^{(0)}),\quad
\mathbf{\bar{P}}_J=(\bar{P}_{n-j_1}^{(0)},\ldots,\bar{P}_{n-j_k}^{(0)}).
\end{align}
Let $H_{I,J}$ denote the submatrix of $H_n$ whose rows are indexed by $I$ and columns indexed by $J$.
By Lemma \ref{lem-lgv} and \eqref{eq-network-hn}, we have
\begin{align}\label{eq-hankel-lgv}
\det \left[H_{I,J}\right] = GF(\mathcal{N}_{D^{H_n}}(\mathbf{P}_I,\mathbf{\bar{P}}_J)).
\end{align}
We further need to find a subset of $\mathcal{N}_{D^{H_n}}(\mathbf{P}_I,\mathbf{\bar{P}}_J)$, say
$\mathcal{S}^{{H_n}}_{I,J}$, which will play the same role as $S_{I,J}$ in Theorem \ref{thm-qtp-lb}.
Observe that by the recursive construction of $D^{H_n}$, it can be naturally divided into $2n+1$ parts: $D^{H_n}_1$, \ldots, $D^{H_n}_n$, $D^{T_n}$, $\bar{D}^{H_n}_n$, \ldots, $\bar{D}^{H_n}_1$, where for each $1 \le i \le n$ the graph $D^{H_n}_i$ is the subgraph of $D^{H_n}$ induced by the vertices $P_{n}^{(i-1)},\ldots,P_{0}^{(i-1)}$, $Q_{n}^{(i-1)},\ldots,Q_{0}^{(i-1)}$, $P_{n}^{(i)},\ldots,P_{0}^{(i)}$, and $\bar{D}^{H_n}_i$ is the subgraph of $D^{H_n}$ induced by the vertices $\bar{P}_{n}^{(i)},\ldots,\bar{P}_{0}^{(i)}$, $\bar{Q}_{n}^{(i-1)},\ldots,\bar{Q}_{0}^{(i-1)}$, $\bar{P}_{n}^{(i-1)},\ldots,\bar{P}_{0}^{(i-1)}$.
Graphically, $D^{H_n}$ is divided into $2n+1$ parts by $2n$ lines parallel to the $y$-axis. Thus, each member $\mathbf{p}=(p_1,\ldots,p_k)\in \mathcal{N}_{D^{H_n}}(\mathbf{P}_I,\mathbf{\bar{P}}_J)$ is also
divided into $2n+1$ nonintersecting families
$\mathbf{p}_1,\ldots,\mathbf{p}_n, \mathbf{p}_T, \mathbf{\bar{p}}_n,\ldots,\mathbf{\bar{p}}_1$ by these lines,
where $\mathbf{p}_i$ (resp. $\mathbf{\bar{p}}_i$) is the restriction of $\mathbf{p}$ to ${D}^{H_n}_i$ (resp. $\bar{D}^{H_n}_i$) for each
$1 \le i \le n$, and $\mathbf{p}_T$ is the restriction of $\mathbf{p}$ to $D^{T_n}$. For this reason, we may adopt the notation
$$\mathbf{p}=(\mathbf{p}_1,\ldots,\mathbf{p}_n, \mathbf{p}_T, \mathbf{\bar{p}}_n,\ldots,\mathbf{\bar{p}}_1)$$
to represent a nonintersecting family of $\mathcal{N}_{D^{H_n}}(\mathbf{P}_I,\mathbf{\bar{P}}_J)$.
Note that $D^{H_n}_i$ ($1 \le i \le n$) can be regarded as the digraph obtained by adding $n-i$ parallel arcs (namely, $P_{n}^{(i-1)} \to P_{n}^{(i)},\ldots,P_{i+1}^{(i-1)} \to P_{i+1}^{(i)}$) to $D^{L^{B}_{i-1}}$. By mimicking the definitions of ($\mathcal{P}_1$), ($\mathcal{P}_2$) and ($\mathcal{P}_3$) as given immediately before \eqref{eq-pipjp}, we may define the following properties on nonintersecting families $\mathbf{p}_i$ in $D^{H_n}_i$:
\begin{list}{}{\setlength{\leftmargin}{1.1cm}}
\item[($\mathcal{P}_1^{(i)}$)] There exists $1 \le j \le k$ such that the $j$-th component of $\mathbf{p}_i$ is the directed path $P_{i-1}^{(i-1)} \to P_i^{(i)}$;
\item[($\mathcal{P}_2^{(i)}$)] There exists $1 \le j \le k$ such that the $j$-th component of $\mathbf{p}_i$ is the directed path $P_{i-1}^{(i-1)} \to Q_{i-1}^{(i-1)} \overset{l}{\to} P_i^{(i)}$;
\item[($\mathcal{P}^{(i)}_3$)] There exist $1 \le j \le k$ and $l\ge 2$ such that the $(j+l-1)$-th component of $\mathbf{p}_i$ is $P_{i-l}^{(i-1)} \to Q_{i-l}^{(i-1)} \to P_{i-l+1}^{(i)}$ and the $m$-th component is $P_{i-1-(m-j)}^{(i-1)} \to Q_{i-(m-j)}^{(i-1)} \to P_{i-(m-j)}^{(i)}$ for each $j \le m \le j+l-2$.
\end{list}
For $\mathbf{\bar{p}}_i$ in $\bar{D}_i^{H_n}$, if its preimage with respect to the reflection satisfies ($\mathcal{P}^{(i)}_1$), ($\mathcal{P}^{(i)}_2$), or ($\mathcal{P}^{(i)}_3$), we say that $\mathbf{\bar{p}}_i$ satisfies Property ($\mathcal{\bar{P}}^{(i)}_1$), ($\mathcal{\bar{P}}^{(i)}_2$), or ($\mathcal{\bar{P}}^{(i)}_3$), respectively. Then we take
\begin{align}\label{eq-sdhij}
\mathcal{S}^{{H_n}}_{I,J}=\left\{
\mathbf{p}\in \mathcal{N}_{D^{H_n}}(\mathbf{P}_I,\mathbf{\bar{P}}_J) \left |
\begin{array}{l}
\mbox{$\mathbf{p}_i$ satisfies none of ($\mathcal{P}^{(i)}_1$), ($\mathcal{P}^{(i)}_2$), ($\mathcal{P}^{(i)}_3$) and }\\
\mbox{$\mathbf{\bar{p}}_i$ satisfies none of ($\mathcal{\bar{P}}^{(i)}_1$), ($\mathcal{\bar{P}}^{(i)}_2$), ($\mathcal{\bar{P}}^{(i)}_3$)}\\
\mbox{for each $1\leq i\leq n$}
\end{array}\right.
\right\}.
\end{align}
It is clear that each $\mathbf{p}\in \mathcal{S}^{{H_n}}_{I,J}$
has a $q$-nonnegative weight.
We would like to point out that the involution $\phi$ defined in the proof of Theorem \ref{thm-qtp-lb} can be adapted to define a sign-reversing involution $\phi_i$ on nonintersecting families $\mathbf{p}_i$ in $D^{H_n}_i$. Suppose that $\mathbf{p}_i = (p_{i,1},\ldots,p_{i,k})$, where $p_{i,1},\ldots,p_{i,m}$ are those parallel arcs not belonging to $D^{L^{B}_{i-1}}$.
Then $(p_{i,m+1},\ldots,p_{i,k})$ is a nonintersecting family in $D^{L^{B}_{i-1}}$.
If $\phi((p_{i,m+1},\ldots,p_{i,k}))=(p'_{i,m+1},\ldots,p'_{i,k})$, then define
\begin{align}\label{eq-phi-i}
\phi_i(\mathbf{p}_i) = (p_{i,1},\ldots,p_{i,m},p'_{i,m+1},\ldots,p'_{i,k}).
\end{align}
Similarly, we can define a sign-reversing involution $\bar{\phi}_i$ on nonintersecting families $\mathbf{\bar{p}}_i$ in $\bar{D}^{H_n}_i$. Note that if $\mathbf{p}_i$ satisfies Property ($\mathcal{P}^{(i)}_1$), ($\mathcal{P}^{(i)}_2$), or ($\mathcal{P}^{(i)}_3$), then $(p'_{i,m+1},\ldots,p'_{i,k})$ satisfies Property ($\mathcal{P}_1$), ($\mathcal{P}_2$), or ($\mathcal{P}_3$) (with a change of labeling), respectively, and hence $\phi((p_{i,m+1},\ldots,p_{i,k})) \neq (p_{i,m+1},\ldots,p_{i,k})$ and $\phi_i(\mathbf{p}_i) \neq \mathbf{p}_i$. An analogous result holds for $\mathbf{\bar{p}}_i$ and $\bar{\phi}_i$.
The main result of this section, which provides a combinatorial proof of the $q$-total positivity of $H$, is as follows.
\begin{thm}\label{thm-main}
Given a nonnegative integer $n$ and two sequences $I = (i_1,\ldots,i_k),\,J = (j_1,\ldots,j_k)$ of indices such that $0 \le i_1 < \cdots < i_k\leq n$ and $0 \le j_1 < \cdots < j_k\leq n$, let $H_{I,J}$ denote the submatrix of $H_n$ whose rows are indexed by $I$ and columns indexed by $J$, let
$\mathbf{P}_I,\mathbf{\bar{P}}_J$ be as given by \eqref{eq-pipjbar}, and let
$\mathcal{S}^{{H_n}}_{I,J}$ be as given by \eqref{eq-sdhij}. Then we have
\begin{align}\label{eq-hankel-lgv-s}
\det \left[H_{I,J}\right] = GF(\mathcal{S}^{{H_n}}_{I,J}),
\end{align}
where $GF(\mathcal{S}^{{H_n}}_{I,J})$ denotes the sum of weights of all elements in $\mathcal{S}^{{H_n}}_{I,J}$.
In particular, $H$ is $q$-totally positive.
\end{thm}
\pf By \eqref{eq-hankel-lgv}, it suffices to give a sign-reversing involution, say $\phi^{H_n}$, on $\mathcal{N}_{D^{H_n}}(\mathbf{P}_I,\mathbf{\bar{P}}_J)$
with $\mathcal{S}^{{H_n}}_{I,J}$ being the set of all fixed points.
We proceed to define $\phi^{H_n}$ by using the aforementioned involutions $\phi_i$ and $\bar{\phi}_i$, see \eqref{eq-phi-i}.
Given $\mathbf{p}=(\mathbf{p}_1,\ldots,\mathbf{p}_n, \mathbf{p}_T, \mathbf{\bar{p}}_n,\ldots,\mathbf{\bar{p}}_1)\in \mathcal{N}_{D^{H_n}}(\mathbf{P}_I,\mathbf{\bar{P}}_J)$, if $\mathbf{p}\in \mathcal{S}^{{H_n}}_{I,J}$, then let $\phi^{H_n}(\mathbf{p})=\mathbf{p}$.
Next, we consider the case $\mathbf{p} \in \mathcal{N}_{D^{H_n}}(\mathbf{P}_I,\mathbf{\bar{P}}_J) \setminus \mathcal{S}^{{H_n}}_{I,J}$. If there exists some $i$ such that $\mathbf{p}_i$ satisfies Property $(\mathcal{P}^{(i)}_1)$, $(\mathcal{P}^{(i)}_2)$ or $(\mathcal{P}^{(i)}_3)$, or equivalently, $\phi_i(\mathbf{p}_i)\neq \mathbf{p}_i$, then let $l$ be the smallest such index and
\[
\phi^{H_n}((\mathbf{p}_1,\ldots,\mathbf{p}_n,\mathbf{p}_T,\bar{\mathbf{p}}_n,\ldots,\bar{\mathbf{p}}_1)) = (\mathbf{p}_1,\ldots,\phi_l(\mathbf{p}_l),\ldots,\mathbf{p}_n,\mathbf{p}_T,\bar{\mathbf{p}}_n,\ldots,\bar{\mathbf{p}}_1).
\]
If no such index exists, then there must exist some $i$ such that $\mathbf{\bar{p}}_i$ satisfies Property $(\mathcal{\bar{P}}^{(i)}_1)$, $(\mathcal{\bar{P}}^{(i)}_2)$ or $(\mathcal{\bar{P}}^{(i)}_3)$, or equivalently, $\bar{\phi}_i(\mathbf{\bar{p}}_i)\neq \mathbf{\bar{p}}_i$. In this subcase, we let $l$ be the largest such index and define
\[
\phi^{H_n}((\mathbf{p}_1,\ldots,\mathbf{p}_n,
\mathbf{p}_T,\bar{\mathbf{p}}_n,\ldots,\bar{\mathbf{p}}_1)) = (\mathbf{p}_1,\ldots,\mathbf{p}_n,\mathbf{p}_T,
\bar{\mathbf{p}}_n,\ldots,\bar{{\phi}}_l(\bar{\mathbf{p}}_l),\ldots,\bar{\mathbf{p}}_1).
\]
By the construction of the involutions $\phi_i$ and $\bar{\phi}_i$ for $1 \le i \le n$, it is easy to verify that $\phi^{H_n}$ is a sign-reversing involution on nonintersecting families in $D^{H_n}$. Hence $\phi^{H_n}$ induces the $q$-total positivity of $H_n$ in the same way that $\phi$ induces the $q$-total positivity of $L^B_n$. Further, the $q$-total positivity of $H_n$ implies the $q$-total positivity of $H$ since each minor of $H$ is a minor of $H_n$ for some $n$. The proof is complete. \qed
By applying the same kind of reasoning as in the proof of Theorem \ref{thm-main}, we can give a combinatorial proof of the $q$-total positivity of the triangular array
$B=(b_{n,k}(q))_{n,k \ge 0}$ as defined by \eqref{eq-narab-recurrence}. We leave the details to the reader. It is interesting to note that similar reasoning can be used to establish the following result.
\begin{thm}
Let $C = (c_{n,k}(q))_{n,k \ge 0}$ be the Catalan-Stieltjes matrix generated by one of the following two recurrences:
\begin{itemize}
\item[(1)] We have
\begin{align*}
c_{n,0}(q)&= [(f-e)+eq]\cdot c_{n-1,0}(q) + fq \cdot c_{n-1,1}(q);\\
c_{n,k}(q)&= c_{n-1,k-1}(q) + (1+q)\cdot c_{n-1,k}(q) + q \cdot c_{n-1,k+1}(q), \quad (k\ge 1, \, n\ge 1)
\end{align*}
for some $f \ge e \ge 0$.
\item[(2)] We have
\begin{align*}
c_{n,0}(q)&= [(f-1)+eq]\cdot c_{n-1,0}(q) + efq \cdot c_{n-1,1}(q);\\
c_{n,k}(q)&= c_{n-1,k-1}(q) + (1+eq)\cdot c_{n-1,k}(q) + eq \cdot c_{n-1,k+1}(q), \quad (k\ge 1, \, n\ge 1)
\end{align*}
for some $e,f \ge 1$.
\end{itemize}
Then the Hankel matrix $(c_{i+j,0}(q))_{i,j \ge 0}$ is $q$-totally positive.
\end{thm}
\pf Note that, in either case, we can construct a planar network for the leading principal submatrix $(c_{i+j,0}(q))_{0 \le i,j \le n}$ by using the
same underlying digraph $D^{H_n}$ as in the planar network for $H_n$, but with a different weight function. Since the weight function of $\mathcal{H}_n$ is naturally inherited from that of $\mathcal{L^B}$, it suffices to assign a new weight function to $D^{L^B}$.
For the first case, we let
\begin{align*}
&\mathrm{wt}_{D^{L^B}}(P_1 \to Q_0) = eq, \quad \mathrm{wt}_{D^{L^B}}(P_i \to Q_{i-1}) = q \text{ for } i \ge 2, \\
&\mathrm{wt}_{D^{L^B}}(P_1 \to P'_0) = -e, \quad \mathrm{wt}_{D^{L^B}}(Q_1 \overset{l}{\to} P'_0) = e, \\
&\mathrm{wt}_{D^{L^B}}(Q_1 \overset{r}{\to} P'_0) = f-e,
\end{align*}
and $\mathrm{wt}_{D^{L^B}}(a) = 1$ for the other arcs $a$ in $D^{L^B}$.
For the second case, we let
\begin{align*}
&\mathrm{wt}_{D^{L^B}}(P_i \to Q_{i-1}) = eq \text{ for } i \ge 1, \quad \mathrm{wt}_{D^{L^B}}(P_1 \to P'_0) = -1,\\
&\mathrm{wt}_{D^{L^B}}(Q_1 \overset{l}{\to} P'_0) = 1, \quad \mathrm{wt}_{D^{L^B}}(Q_1 \overset{r}{\to} P'_0) = f-1,
\end{align*}
and $\mathrm{wt}_{D^{L^B}}(a) = 1$ for the other arcs $a$ in $D^{L^B}$.
With these new weights, it is straightforward to verify that the involution $\phi^{H_n}$ constructed in the proof of
Theorem \ref{thm-main} is still a sign-reversing involution on nonintersecting families of $D^{H_n}$.
As a result, we obtain the $q$-total positivity of $(c_{i+j,0}(q))_{i,j \ge 0}$. \qed
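To illustrate the first case, the $q$-nonnegativity of small minors can be checked symbolically. The following Sage sketch is purely illustrative and not part of the proof; it assumes the standard initial condition $c_{0,0}(q)=1$ and $c_{0,k}(q)=0$ for $k\ge 1$ of a Catalan-Stieltjes matrix, and the parameter choices and variable names are ours:
\begin{verbatim}
# Sage sketch: q-nonnegativity of small Hankel minors in case (1)
R = PolynomialRing(ZZ, 'q'); q = R.gen()
e, f, N = 1, 2, 8                  # any choice with f >= e >= 0

c = [[R(1)] + [R(0)] * N]          # c[n][k] holds c_{n,k}(q)
for n in range(1, N + 1):
    prev = c[-1] + [R(0), R(0)]    # pad with zeros so prev[k+1] exists
    row = [((f - e) + e * q) * prev[0] + f * q * prev[1]]
    for k in range(1, N + 1):
        row.append(prev[k - 1] + (1 + q) * prev[k] + q * prev[k + 1])
    c.append(row)

m = N // 2                         # Hankel entries use c[i+j][0], i+j <= N
H = matrix(R, m + 1, m + 1, lambda i, j: c[i + j][0])
for k in range(1, m + 2):          # check every k x k minor of H
    for I in Subsets(list(range(m + 1)), k):
        for J in Subsets(list(range(m + 1)), k):
            d = H.matrix_from_rows_and_columns(sorted(I), sorted(J)).det()
            assert all(cf >= 0 for cf in d.coefficients())
\end{verbatim}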
\section{A conjecture on immanant positivity}
\label{sect-conj}
Let $M = (m_{i,j})$ be a square matrix of order $n$, $\mathfrak{S}_n$ be the symmetric group of order $n$, $\lambda$ be a partition of $n$, and $\chi^{\lambda}$ be the irreducible character of $\mathfrak{S}_n$ associated with $\lambda$. Recall that the immanant of $M$ with respect to $\lambda$ is defined by
\begin{align*}
\mathrm{Imm}_{\,\lambda}\,M = \sum_{\sigma \in \mathfrak{S}_n} \chi^{\lambda}(\sigma) m_{1,\sigma(1)}\cdots m_{n,\sigma(n)}.
\end{align*}
When $\lambda = (1^n)$, the immanant $\mathrm{Imm}_{\,\lambda}\,M$ specializes to $\det [M]$. In \cite{LLYZ21} we proved the immanant positivity for a large family of Catalan-Stieltjes matrices and their associated Hankel matrices. This motivates us to study the immanant positivity for the Hankel matrix $H$ defined as in \eqref{eq-Hankel-def}. We have the following conjecture.
\begin{conj}
Let $k \ge 1$ and $I = (i_1,\ldots,i_k)$, $J = (j_1,\ldots,j_k)$ be two sequences of indices with $0 \le i_1 < \cdots < i_k$ and $0 \le j_1 < \cdots < j_k$. Let $H_{I,J}$ be the submatrix of $H$ whose rows are indexed by $I$ and columns are indexed by $J$. Then
\[
\mathrm{Imm}_{\,\lambda}\,H_{I,J} \ge_q 0
\]
for any partition $\lambda$ of $k$.
\end{conj}
We have verified the immanant positivity of all square submatrices of $H_n$ for $n\leq 6$ using Sage \cite{Sage}.
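For concreteness, a verification of this kind can be organized along the following lines. The Sage sketch below is our illustrative reconstruction rather than the script actually used; the helper \texttt{hankel\_entry}, which should return $h_m(q)\in\mathbb{Z}[q]$, is a hypothetical stand-in for the entries of $H$, and the character values are evaluated via the standard identity $\chi^{\lambda}(\mu)=\langle s_\lambda,p_\mu\rangle$:
\begin{verbatim}
# Sage sketch: check Imm_lambda H_{I,J} >=_q 0 for all submatrices of H_n
Sym = SymmetricFunctions(QQ)
s, p = Sym.s(), Sym.p()

def immanant(M, la):
    # sum over sigma of chi^lambda(sigma) * M[0,sigma(1)-1] * ...
    k = M.nrows()
    return sum(ZZ(s[la].scalar(p[sigma.cycle_type()]))
               * prod(M[i, sigma(i + 1) - 1] for i in range(k))
               for sigma in Permutations(k))

def check_immanants(n):
    # hankel_entry(m) is a hypothetical helper returning h_m(q)
    H = matrix(n + 1, n + 1, lambda i, j: hankel_entry(i + j))
    for k in range(1, n + 2):
        for I in Subsets(list(range(n + 1)), k):
            for J in Subsets(list(range(n + 1)), k):
                M = H.matrix_from_rows_and_columns(sorted(I), sorted(J))
                for la in Partitions(k):
                    val = immanant(M, la)
                    assert all(cf >= 0 for cf in val.coefficients())
\end{verbatim}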
Note that our method in \cite{LLYZ21} does not apply to $H$ directly, since some arcs in our planar network for $H$ are weighted by $-1$.
\vskip 0.5cm
\noindent \textbf{Acknowledgments.} This work is supported in part by the Fundamental Research Funds for the Central Universities and the National Natural Science Foundation of China (Nos. 11522110, 11971249).
\section{Introduction}
C*-algebra theory is a blend of algebra and analysis which turns out to be much more than the sum of its parts, as illustrated by its fundamental results of Gelfand duality and the GNS representation theorem. Nevertheless, the C*-algebra axioms seem somewhat mysterious, and it may not be very clear what they mean or where they actually `come from'. To see the point, consider the axioms of groups for comparison: these have a clear meaning in terms of symmetries and the composition of symmetries, and this provides adequate motivation for these axioms. Do C*-algebras also have an interpretation which motivates their axioms in a similar manner?
A plausible answer to this question would be in terms of applications of C*-algebras to areas outside of pure mathematics. The most evident application of C*-algebras is to quantum mechanics and quantum field theory~\cite{strocchi,landsman,haag}. However, also in this context, the C*-algebra axioms do not seem well-motivated. In fact, not even the multiplication, which results in the algebra structure, has a clear physical meaning. This is in stark contrast to other physical theories, such as relativity, where the mathematical structures that come up are derived from physical considerations and principles, often via the use of thought experiments. A similar derivation of C*-algebraic quantum mechanics does not seem to be known.
For these reasons, it seems pertinent to try and reformulate the C*-algebra axioms in a more satisfactory manner that would allow for a clear interpretation. This was our motivation for developing the notions of this paper.
For technical convenience, our C*-algebras are assumed unital throughout.
\subsection*{Summary and structure of this paper}
We start the technical development in Section~\ref{Calgsasfunctors} by assigning to every C*-algebra $A\in\Calg$ the functor $\mathsf{CHaus}\to\mathsf{Set}$ induced via the Yoneda embedding and Gelfand duality. It takes a compact Hausdorff space $X$ and maps it to the set of $*$-homomorphisms $C(X)\to A$. In terms of quantum mechanics, this is the set of projective measurements with outcomes in $X$, while the functoriality corresponds to post-processings or coarse-grainings of these measurements. We explain how this functor captures functional calculus for (commuting tuples of) normal elements of $A$, and how this encodes the `commutative aspect' of the structure of $A$. The physical interpretation is in terms of measurements with values in $X$ on the level of objects and post-processings between these on the level of morphisms.
Section~\ref{Calgsheaf} investigates which properties distinguish these functors from arbitrary functors $\mathsf{CHaus}\to\mathsf{Set}$. These properties take the form of sheaf conditions. Starting with the commutative case, we consider sheaf conditions satisfied by all hom-functors $\mathsf{CHaus}(W,-):\mathsf{CHaus}\to\mathsf{Set}$. These can be interpreted in a manner similar to a conventional sheaf condition: while we think of the latter as identifying functions on a space with consistent assignments of values to all points, we now identify points with consistent assignments of values to all functions (Lemma~\ref{squarecover}). We then move on to consider sheaf conditions satisfied by the functors $\mathsf{CHaus}\to\mathsf{Set}$ associated to arbitrary $A\in\Calg$. Roughly, the question is how to `guarantee commutativity': under what conditions is a colimit of commutative C*-algebras itself commutative? We introduce \emph{directed cones} as a class of colimits that satisfy this, so that every C*-algebra becomes a functor $\mathsf{CHaus}\to\mathsf{Set}$ that satisfies the sheaf condition on all directed cones. The resulting category of sheaves $\Sh(\mathsf{CHaus})$ does not seem to be a category of sheaves on a (large) site since the directed cones do not form a coverage (Proposition~\ref{nocoverage}). Nevertheless, we show that $\Sh(\mathsf{CHaus})$ is at least locally small (Corollary~\ref{locsmall}) and well-powered (Proposition~\ref{wellpowered}). Furthermore, Lemma~\ref{replem} is a key technical result on the representability of our sheaves, which can be understood as a new characterization of commutative C*-algebras.
In Section~\ref{piecewisesec}, we relate our sheaves $\mathsf{CHaus}\to\mathsf{Set}$ to the \emph{piecewise C*-algebras} of van den Berg and Heunen~\cite{piecewise} (originally called \emph{partial C*-algebras}). The main result is Theorem~\ref{pCalgthm}, which identifies piecewise C*-algebras with a full subcategory of $\Sh(\mathsf{CHaus})$, the objects of which are characterized in terms of a simple additional condition. Since we do not know of any sheaf that would not satisfy this condition, $\Sh(\mathsf{CHaus})$ may even be equivalent to the category of piecewise C*-algebras (Problem~\ref{pCalgprob}).
Section~\ref{secsaC} asks which additional structure a piecewise C*-algebra (or suitable sheaf $\mathsf{CHaus}\to\mathsf{Set}$) could be equipped with such as to recover the noncommutative structure of a C*-algebra as well, i.e.~to obtain an equivalence with the category of C*-algebras. Our proposal is to consider the additional structure of a \emph{self-action}, in the sense of a notion of inner automorphisms: every unitary element should give rise to an automorphism, and these automorphisms should satisfy suitable conditions on commuting unitaries. Introducing such a self-action is motivated by the physical interpretation: it is one of the essential features of quantum mechanics that real-valued observables generate one-parameter families of inner automorphisms, by first exponentiating to a unitary (functional calculus) and then conjugating by that unitary (self-action). In this way, we obtain the category of \emph{almost C*-algebras} $\aCalg$, and we ask whether the forgetful functor $\Calg\to\aCalg$ is an equivalence. While it is clearly faithful, Theorem~\ref{Wfull} shows that it is also full on morphisms out of W*-algebras.
In order to understand better whether the forgetful functor $\Calg\to\aCalg$ could indeed be an equivalence, Section~\ref{secgrps} investigates an analogous question for groups instead of C*-algebras. We ask whether the forgetful functor $\mathsf{Grp}\to\aGrp$ from the category of groups to the category of \emph{almost groups} is an equivalence. Theorem~\ref{aGnotfull} shows that this is not the case, since the functor is not full on morphisms out of a free group.
\subsection*{How about other kinds of operator algebras?}
We expect that many of the ideas developed in this paper apply \emph{mutatis mutandis} to other kinds of operator algebras as well, and in particular to W*-algebras. In this sense, focusing on C*-algebras has been a somewhat arbitrary choice made in the present work. In fact, as indicated by Lemma~\ref{weakvalulem} and especially by Theorem~\ref{Wfull}, the W*-algebra case allows for the derivation of more powerful results than we have been able to prove in the C*-algebra setting. The main reason for us to treat the C*-algebra case in this paper is the greater technical simplicity of topology over measure theory. For example, a W*-algebra version of Gelfand duality in terms of an equivalence of the category of commutative W*-algebras with a suitable category of measurable spaces is not readily available in the literature.
\subsection*{Relation to topos quantum theory}
The present ideas have commonalities with and were partly inspired by the topos-theoretic approach to quantum physics~\cite{thing,cecilia,hls}. Nevertheless, there are important differences, which may also provide a new direction for topos quantum theory. Crucially, topos quantum theory is formulated in terms of a topos that depends on the particular physical system under consideration, namely the category of presheaves on the poset of commutative subalgebras of the algebra of observables $A$. Instead of working with commutative subalgebras only, we consider \emph{all} $*$-homomorphisms $C(X)\to A$ for \emph{all} commutative C*-algebras $C(X)$. Doing so means that $A$ becomes a functor $\mathsf{CHaus}\to\mathsf{Set}$. In this way, we can consider all physical systems as described on the same footing as objects in the functor category $\mathsf{Set}^\mathsf{CHaus}$ or the category of sheaves $\Sh(\mathsf{CHaus})$.
\subsection*{Notation and terminology}
For us, `C*-algebra' always means `unital C*-algebra'. Likewise, our $*$-homomorphisms are always assumed to be unital, unless noted otherwise (as in the proof of Theorem~\ref{Wfull}). This already applies to the following index of our notation, which lists the conventions for our most commonly used mathematical symbols:
\newcommand{\notation}[2]{#1: & \textrm{\: #2}. \\}
\allowdisplaybreaks
\begin{align*}
\notation{W,X,Y,Z}{compact Hausdorff spaces}
\notation{\mathbf{1},\ldots,\mathbf{4}}{A compact Hausdorff space on the corresponding number of points, where we write e.g.~$\mathbf{4}=\{0,1,2,3\}$}
\notation{w,x,y,z}{points in a compact Hausdorff space}
\notation{f,g,h,k}{continuous functions between compact Hausdorff spaces}
\notation{\Box,\bigcirc,\Tl}{unit square, unit disk and unit circle, considered as compact subsets of $\mathbb{C}$}
\notation{A,B}{C*-algebras or piecewise C*-algebras (Definition~\ref{piecewisedef})}
\notation{M_n}{The C*-algebra of $n\times n$ matrices with entries in $\mathbb{C}$}
\notation{\alpha,\beta,\gamma,\nu,\tau}{normal elements in a C*-algebra, or (more generally) $*$-homomorphisms of the type $C(X)\to A$}
\notation{\zeta}{a $*$-homomorphism or piecewise $*$-homomorphism of the type $A\to B$}
\notation{\mathfrak{a},\mathfrak{b}}{self-action of a piecewise C*-algebra (Definition~\ref{almostdef}) or a piecewise group (Definition~\ref{almostgroupdef})}
\end{align*}
The normal part of a C*-algebra $A$ is
\[
\mathbb{C}(A) := \{\: \alpha\in A \:|\: \alpha\alpha^* = \alpha^*\alpha \:\}.
\]
We also think of it as the set of `$A$-points' of $\mathbb{C}$. More generally, for $A\in\Calg$ and a closed subset $S\subset\mathbb{C}$, we also write
\[
S(A) := \{\: \alpha\in \mathbb{C}(A) \:|\: \spc(\alpha)\subseteq S \:\}
\]
for the set of normal elements with spectrum in $S$, and similarly $S(\zeta):S(A)\to S(B)$ for the resulting action of a $*$-homomorphism $\zeta:A\to B$ on these elements. For example, $\mathbb{R}(A)$ denotes the self-adjoint part of a C*-algebra, and similarly $\Tl(A)$ is the unitary group. This sort of notation may be familiar from algebraic geometry, where for a ring $A$, the set of $A$-points of a scheme $S$ is denoted $S(A)$. We also use the standard notation $C(X)$ for the $\mathbb{C}$-valued continuous functions on a space $X$. Unfortunately, this is very similar notation despite being different in nature.
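As a further illustration of the notation $S(A)$, note that $\{0,1\}(A)$ is precisely the set of projections in $A$: a normal element $\alpha$ with $\spc(\alpha)\subseteq\{0,1\}$ satisfies $\alpha=\alpha^*$ and $\alpha^2=\alpha$ by functional calculus, and conversely every projection has spectrum contained in $\{0,1\}$.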
We work with the following categories:
\begin{align*}
\notation{\mathsf{CHaus}}{compact Hausdorff spaces with continuous maps}
\notation{\CGHs}{compactly generated Hausdorff spaces with continuous maps}
\notation{\Calg}{C*-algebras with $*$-homomorphisms}
\notation{\cCalg}{commutative C*-algebras with $*$-homomorphisms}
\notation{\pCalg}{piecewise C*-algebras (Definition~\ref{piecewisedef}) with piecewise $*$-homomorphisms (Definition~\ref{phomdef})}
\notation{\aCalg}{almost C*-algebras (Definition~\ref{almostdef}) with almost $*$-homomorphisms (Definition~\ref{ahomdef})}
\notation{\mathsf{Grp}}{groups with group homomorphisms}
\notation{\pGrp}{piecewise groups (Definition~\ref{piecewisegroupdef}) with piecewise group homomorphisms (Definition~\ref{pghomdef})}
\notation{\aGrp}{almost groups (Definition~\ref{almostgroupdef}) with almost group homomorphisms (Definition~\ref{aghomdef})}
\end{align*}
Throughout, all diagrams are commutative diagrams, unless explicitly stated otherwise.
\newpage
\section{C*-algebras as functors $\mathsf{CHaus}\to\mathsf{Set}$}
\label{Calgsasfunctors}
In this section, we explain how to regard a C*-algebra as a functor $\mathsf{CHaus}\to\mathsf{Set}$, and how this encodes the usual functional calculus for normal elements in a C*-algebra, as well as its multivariate generalization.
The Yoneda embedding realizes a C*-algebra $A$ as the hom-functor
\[
\Calg(-,A) \: :\: \Calg^\op \to \mathsf{Set}.
\]
We are interested in studying this hom-functor on the commutative C*-algebras, meaning that we consider its restriction to a functor $\cCalg^\op\to\mathsf{Set}$. Applying Gelfand duality, we can equivalently consider it as a functor
\[
-(A) \: :\: \mathsf{CHaus} \to \mathsf{Set},
\]
assigning to every compact Hausdorff space $X\in\mathsf{CHaus}$ a set $X(A)$, which is the set of all $*$-homomorphisms $C(X)\to A$. Our notation $X(A)$ suggests thinking of it as the set of \emph{generalized $A$-points} of $X$.
\begin{example}
If $X$ is finite, a $*$-homomorphism $C(X)\to A$ or generalized $A$-point in $X$ corresponds to a \emph{partition of unity} in $A$ indexed by $X$, i.e.~a family of pairwise orthogonal projections summing up to $1$.
\end{example}
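Explicitly, writing $\delta_x\in C(X)$ for the indicator function of a point $x\in X$, a $*$-homomorphism $\alpha : C(X)\to A$ determines the projections $p_x := \alpha(\delta_x)$; these satisfy $p_x p_y = \alpha(\delta_x \delta_y) = 0$ for $x\neq y$ and $\sum_{x\in X} p_x = \alpha(1) = 1$. Conversely, every such family of projections arises in this way from the $*$-homomorphism $g\longmapsto \sum_{x\in X} g(x)\, p_x$.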
\begin{example}
If $A$ is a W*-algebra, the spectral theorem~\cite[Theorem~1.44]{folland} implies that $X(A)$ is precisely the collection of all regular projection-valued measures on $X$ with values in $A$.
\end{example}
\begin{remark}
\label{measurements}
In terms of algebraic quantum mechanics, where a physical system is described by a C*-algebra $A$ of observables~\cite{strocchi,landsman}, we interpret a $*$-homomorphism $\alpha : C(X)\to A$ as a projective measurement with values in $X$, described in the Heisenberg picture. So the physical meaning of our $X(A)$ is as the collection of all measurements with outcomes in the space $X$.
\end{remark}
\begin{remark}
Those $*$-homomorphisms $C(X)\to A$ whose image is in the center of $A$ are called \emph{$C(X)$-algebras}, and they correspond exactly to upper semicontinuous C*-bundles over $X$~\cite{nilsen}\footnote{We thank Klaas Landsman for pointing this out to us.}.
\end{remark}
At the level of morphisms, every $f:X\to Y$ acts by composing a $*$-homomorphism $\alpha : C(X)\to A$ with $C(f)$ to $\alpha\circ C(f) : C(Y)\to A$, so that
\begin{align}
\begin{split}
\label{faction}
f(A) : X(A) & \longrightarrow Y(A) \\
\alpha & \longmapsto \alpha\circ C(f)
\end{split}
\end{align}
is the action of $f$ on generalized $A$-points.
\begin{remark}
\label{postproc}
The physical interpretation of $f(A)$ is as a \emph{post-processing} or \emph{coarse-graining} of measurements. Under $f(A)$, a measurement $\alpha : C(X)\to A$ with values in $X$ becomes a measurement $\alpha\circ C(f):C(Y)\to A$ with values in $Y$, implemented by first conducting the original measurement $\alpha$ and then processing the outcome via application of the function $f$. Since we work in the Heisenberg picture, the order of composition is reversed, so that $C(f)$ happens first.
\end{remark}
This construction is also functorial in $A$: for any $*$-homomorphism $\zeta:A\to B$ and $X\in\mathsf{CHaus}$, we have $X(\zeta) : X(A) \to X(B)$. Furthermore, for any $f:X\to Y$ there is the evident naturality diagram
\[
\xymatrix{ X(A) \ar[d]_{X(\zeta)} \ar[r]^{f(A)} & Y(A) \ar[d]^{Y(\zeta)} \\
X(B) \ar[r]_{f(B)} & Y(B) }
\]
which expresses the bifunctoriality of the hom-functor $\Calg(-,-)$ in our setup.
Before proceeding with technical developments in the next section, it is worthwhile pondering how these considerations relate to functional calculus.
\subsection*{Functoriality captures the `commutative part' of the C*-algebra structure}
\label{funcalc}
In a somewhat informal sense, the functor $-(A)$ captures the entire `commutative part' of the structure of a C*-algebra $A$. We will obtain a precise result along these lines as Theorem~\ref{pCalgthm}. Here, we perform some simple preparations.
\begin{lemma}
For any compact set $S\subseteq\mathbb{C}$, evaluating an $\alpha : C(S)\to A$ on $\id_S : S\to\mathbb{C}$,
\begin{equation}
\label{evalid}
\alpha\longmapsto\alpha(\id_S),
\end{equation}
is a bijection between $S(A)$ and the normal elements of $A$ with spectrum in $S$.
\label{normalcorrespond}
\end{lemma}
\begin{proof}
If $\alpha,\beta:C(S)\to A$ coincide on $\id_S$, then they must coincide on the *-algebra generated by $\id_S$. Since $\id_S$ separates points, this *-algebra is dense in $C(S)$ by the Stone-Weierstrass theorem, so that $\alpha=\beta$ by continuity. This establishes injectivity of~\eqref{evalid}.
Concerning surjectivity, applying functional calculus to a given normal element with spectrum in $S$ results in a $*$-homomorphism $C(S)\to A$ which realizes the given element via~\eqref{evalid}.
\end{proof}
Due to this correspondence, we will not distinguish notationally between a $*$-homomorphism $\alpha : C(S)\to A$ and its associated normal element, i.e.~we also denote the latter simply by $\alpha\in A$. Moreover, we can also think of a $*$-homomorphism $C(X)\to A$ for arbitrary $X\in\mathsf{CHaus}$ as a sort of `generalized normal element' of $A$.
For any two compact $S,T\subseteq\mathbb{C}$ and $f:S\to T$, functional calculus---in the sense of applying $f$ to normal elements with spectrum in $S$---is encoded in two ways:
\begin{itemize}
\item in evaluating an $\alpha : C(S)\to A$ on $f:S\to\mathbb{C}$, as in the proof of Lemma~\ref{normalcorrespond};
\item in the functoriality $f(A) : S(A)\to T(A)$, since applying this functorial action to $\alpha$ results in the same normal element of $A$,
\begin{equation}
\label{fundrel}
f(A)(\alpha)(\id_T) \stackrel{\eqref{faction}}{=} (\alpha\circ C(f))(\id_T) = \alpha(C(f)(\id_T)) = \alpha(\id_T\circ f) = \alpha(f).
\end{equation}
\end{itemize}
From now on, what we mean by `functional calculus' is the functoriality, i.e.~the second formulation.
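To make this concrete in a finite-dimensional example, consider $A = M_2$. The following Python sketch (using NumPy) is purely illustrative, and the function names are ours; for simplicity, it restricts to a self-adjoint element, where the spectral decomposition is immediate:
\begin{verbatim}
import numpy as np

def functional_calculus(alpha, f):
    # realize the functorial action f(M_n) on a self-adjoint matrix:
    # diagonalize alpha = U diag(l) U^* and return U diag(f(l)) U^*
    l, U = np.linalg.eigh(alpha)
    return U @ np.diag(f(l)) @ U.conj().T

# a self-adjoint element of M_2 with spectrum {-1, 1}
alpha = np.array([[0.0, 1.0], [1.0, 0.0]])

# f(t) = t^2 maps the spectrum {-1, 1} to {1}, so the resulting
# normal element f(A)(alpha) = alpha(f) is the identity matrix
assert np.allclose(functional_calculus(alpha, lambda t: t**2), np.eye(2))
\end{verbatim}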
Writing $\bigcirc\subseteq\mathbb{C}$ for the unit disk, the normal elements of norm $\leq 1$ are identified with the $*$-homomorphisms $\alpha : C(\bigcirc)\to A$. For every $r\in [0,1]$, we have the multiplication map $r\cdot : \bigcirc\to\bigcirc$, so that $(r\cdot)(A) : \bigcirc(A) \to \bigcirc(A)$ represents scalar multiplication of normal elements by $r$. Since the image of $(r\cdot)(A)$ consists of the normal elements of norm $\leq r$, we can recover the norm of a normal element $\alpha\in\bigcirc(A)$ as the smallest $r$ for which $\alpha$ lies in this image,
\[
\|\alpha\| = \min\: \{\: r\in[0,1] \:|\: \alpha\in\im((r\cdot)(A)) \:\}\:.
\]
As we will see next, the functoriality also captures part of the binary operations of a C*-algebra.
\begin{lemma}
\label{pairscorrespond}
For $S,T\subseteq\mathbb{C}$, applying functoriality to the product projections
\begin{equation}
\label{prodprojs}
p_S \: : \: S\times T\longrightarrow S,\qquad p_T \: : \: S\times T\longrightarrow T
\end{equation}
establishes a bijection between $(S\times T)(A)$ and pairs of \emph{commuting} normal elements $(\alpha,\beta)\in A\times A$ with $\spc(\alpha)\subseteq S$ and $\spc(\beta)\subseteq T$.
\end{lemma}
This generalizes Lemma~\ref{normalcorrespond} to commuting pairs of normal elements. Of course, there are analogous statements for tuples of any size (finite or even infinite), and this encodes multivariate functional calculus.
\begin{proof}
We need to show that the map
\[
(p_S(A),p_T(A)) \: : \: (S\times T)(A) \longrightarrow S(A)\times T(A)
\]
is injective, and that its image consists of precisely the pairs $(\alpha,\beta)$ with $\alpha:C(S)\to A$ and $\beta:C(T)\to A$ that have commuting ranges. Injectivity holds because $p_S : S\times T\to\mathbb{C}$ and $p_T:S\times T\to\mathbb{C}$ separate points, so that the same argument as in the proof of Lemma~\ref{normalcorrespond} applies. For surjectivity, let $\alpha$ and $\beta$ be given. Since their ranges commute, we can find a commutative subalgebra $C(X)\subseteq A$ that contains both, so that the pair $(\alpha,\beta)$ has a preimage in the upper right corner of the diagram
\[
\xymatrix{ (S\times T)(C(X)) \ar[r] \ar[d] & S(C(X))\times T(C(X)) \ar[d] \\
(S\times T)(A) \ar[r] & S(A)\times T(A) }
\]
Now the upper row is equal to the canonical map $\mathsf{CHaus}(X,S\times T)\to \mathsf{CHaus}(X,S)\times\mathsf{CHaus}(X,T)$, which is a bijection due to the universal property of $S\times T$. Hence we can find a preimage of $(\alpha,\beta)$ also in the upper left corner, and then also in the lower left corner by commutativity of the diagram.
\end{proof}
\begin{remark}
In the physical interpretation, the elements of $(S\times T)(A)$ are measurements that have outcomes in $S\times T$ (Remark~\ref{measurements}). Lemma~\ref{pairscorrespond} now shows that such a measurement corresponds to a pair of \emph{compatible} measurements taking values in $S$ and $T$, respectively, and one obtains these measurements by coarse-graining along the product projections~\eqref{prodprojs}, i.e.~by forgetting the other outcome.
\end{remark}
As part of bivariate functional calculus, we can now consider the addition map
\begin{equation}
\label{addition}
S\times T \longrightarrow S + T,\qquad (x,y)\longmapsto x + y,
\end{equation}
where $S+T$ is the Minkowski sum
\[
S + T = \{\: x + y \:|\: x\in S,\: y\in T\:\},
\]
again considered as a compact subset of $\mathbb{C}$. Under the identifications of Lemmas~\ref{normalcorrespond} and~\ref{pairscorrespond}, the addition map
\begin{equation}
\label{additionA}
+(A) \: :\: (S\times T)(A) \longrightarrow (S + T)(A).
\end{equation}
takes a pair of commuting normal elements with spectra in $S$ and $T$ and takes it to a normal element with spectrum in $S+T$.
\begin{lemma}
On commuting normal elements, this recovers the usual addition in $A$.
\end{lemma}
\begin{proof}
By Lemma~\ref{pairscorrespond}, it is enough to take a $\gamma\in (S\times T)(A)$ and to compute the resulting normal element that one obtains by applying $+(A)$ in a manner analogous to~\eqref{fundrel},
\begin{align*}
(+(A))(\gamma)(\id_{S+T}) & \stackrel{\eqref{faction}}{=} (\gamma \circ C(+))(\id_{S+T}) = \gamma(\id_{S+T}\circ +) \\
& = \gamma(\id_S \circ p_S + \id_T\circ p_T) \\
& = \gamma(\id_S\circ p_S) + \gamma(\id_T\circ p_T) \\
& = (\gamma\circ C(p_S))(\id_S) + (\gamma\circ C(p_T))(\id_T) \\
& \stackrel{\eqref{faction}}{=} (p_S(A))(\gamma)(\id_S) + (p_T(A))(\gamma)(\id_T),
\end{align*}
where the crucial assumption of additivity of $\gamma$ has been used to obtain the expression in the third line.
\end{proof}
In the analogous manner, one can show that the multiplication map
\begin{equation}
\label{multiplication}
S\times T \longrightarrow ST,\qquad (x,y)\longmapsto xy.
\end{equation}
lets us recover the product of two commuting normal elements in $A$. More generally, we can recover any polynomial or continuous function of any number of commuting normal elements.
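To spell out the multiplicative case, the computation runs parallel to the additive one: for $\gamma\in(S\times T)(A)$,
\begin{align*}
(\cdot(A))(\gamma)(\id_{ST}) \stackrel{\eqref{faction}}{=} \gamma(\id_{ST}\circ \cdot) = \gamma\big((\id_S\circ p_S)\cdot(\id_T\circ p_T)\big) = \gamma(\id_S\circ p_S)\cdot\gamma(\id_T\circ p_T),
\end{align*}
where now the multiplicativity of $\gamma$ enters in place of its additivity.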
In summary, we think of the functor $-(A):\mathsf{CHaus}\to\mathsf{Set}$ associated to $A\in\Calg$ as a generalization of functional calculus, which remembers the entire `commutative structure' of $A$. The generalization is from applying functions to individual normal elements---as in the conventional picture of functional calculus---to applying functions to `generalized' normal elements in the guise of $*$-homomorphisms of the form $C(X)\to A$. In particular, the C*-algebra operations acting on commuting normal elements are encoded in the functoriality. In the remainder of this paper, we will always have this point of view in mind, together with its physical interpretation:
\[
\text{functoriality = generalized functional calculus = post-processing of measurements.}
\]
\begin{remark}
\label{reconstruct}
In Section~\ref{guarcommsec}, we will also consider functors $F:\mathsf{CHaus}\to\mathsf{Set}$ that do not necessarily arise from a C*-algebra in this way. In terms of the physical interpretation, this means that we attempt to model physical systems not in terms of their algebras of observables as the primary structure, but in terms of a functor $F$ as the most fundamental structure that describes physics. This is motivated by the fact that the C*-algebra structure of the observables is (a priori) not physically well-motivated, as discussed in the introduction. Thanks to Remarks~\ref{measurements} and~\ref{postproc}, our functors $F:\mathsf{CHaus}\to\mathsf{Set}$ do have a meaningful operational interpretation in terms of measurements: $F(X)$ is the set of (projective) measurements with outcomes in $X$, and the action of $F$ on morphisms is the post-processing. This bare-bones structure turns out to carry a surprising amount of information about the algebra of observables. We will try to equip $F$ with additional properties and structure such as to uniquely specify the algebra of observables.
In spirit, this approach is similar to the existing reconstructions of quantum mechanics from operational axioms~\cite{grinbaum}. In recent years, a wide range of reconstruction theorems with a large variety of choices for the axioms have been derived, as pioneered by Hardy~\cite{hardy1,hardy2}. In these theorems, `quantum mechanics' refers to the Hilbert space formulation in finite dimensions, and the reconstruction theorems recover the Hilbert space structure within the framework of general probabilistic theories. In contrast to this, our work focuses on the C*-algebraic formulation of quantum mechanics and is not limited to a finite-dimensional setting. Also, we do not make use of the possibility of taking stochastic mixtures: since we are (currently) only dealing with projective measurements, taking stochastic mixtures is not even possible in our setup.
\end{remark}
\newpage
\section{C*-algebras as sheaves $\mathsf{CHaus}\to\mathsf{Set}$}
\label{Calgsheaf}
Functional calculus lets us apply functions to operators, or more generally to $*$-homomorphisms $C(X)\to A$ as in the previous section. In some situations, one can also go the other way: for certain families of functions $\{f_i : X\to Y_i\}_{i\in I}$ with common domain, a collection of $*$-homomorphisms $\{\beta_i : C(Y_i)\to A\}_{i\in I}$ arises from a unique $*$-homomorphism $\alpha : C(X)\to A$ by functoriality along the $f_i$ if and only if the $\beta_i$ satisfy a simple compatibility requirement. This property is a \emph{sheaf condition}, and it turns our functors $-(A)$ into sheaves on the category $\mathsf{CHaus}$.
\begin{remark}
We emphasize already at this point that the sheaf conditions that we consider do not arise from a Grothendieck topology (on $\mathsf{CHaus}^\op$), since the axiom of stability under pullback fails to hold. Also, while sheaf conditions are typically formulated for contravariant functors (i.e.~presheaves), our sheaves live in a covariant setting. While we could speak of `cosheaves' to emphasize this distinction, this term usually refers to dualizing the standard notion of sheaf on the codomain category, while we dualize on the domain category.
\end{remark}
A good way of talking about sheaf conditions on large categories is not in terms of sieves or cosieves---which would usually have to be large---but in terms of cocones or cones~\cite{shulman}:
\begin{definition}
A \emph{cone} in $\mathsf{CHaus}$ is any small family of morphisms $\{f_i :X\to Y_i\}_{i\in I}$ with common domain.
\end{definition}
\begin{definition}
A functor $F:\mathsf{CHaus}\to\mathsf{Set}$ satisfies the \emph{sheaf condition} on a cone $\{f_i : X\to Y_i\}_{i\in I}$ if the $F(f_i)$ implement a bijection between the sections $\alpha\in F(X)$ and the families of sections $\{\beta_i\}_{i\in I}$ with $\beta_i\in F(Y_i)$ that are \emph{compatible} in the following sense: for any $i,j\in I$ and any diagram
\begin{equation}
\begin{split}
\label{compdiag}
\xymatrix{ X \ar[r]^{f_i} \ar[d]_{f_j} & Y_i \ar[d]^g \\
Y_j \ar[r]_h & Z }
\end{split}
\end{equation}
we have $F(g)(\beta_i) = F(h)(\beta_j)$.
\end{definition}
Since $\mathsf{CHaus}$ has pushouts, the compatibility condition holds if and only if it holds on every pushout diagram
\[
\xymatrix{ X \ar[r]^{f_i} \ar[d]_{f_j} & Y_i \ar[d] \\
Y_j \ar[r] & Y_i\pushout{f_i}{f_j} Y_j }
\]
Hence the sheaf condition holds on $\{f_i\}$ if and only if the diagram
\[
\xymatrix{ F(X) \ar[r] & \mathlarger{\mathlarger{\prod}}_{i\in I} F(Y_i) \ar@<1ex>[r] \ar@<-1ex>[r] & \mathlarger{\mathlarger{\prod}}_{i,j\in I} F(Y_i\pushout{f_i}{f_j} Y_j). }
\]
is an equalizer in $\mathsf{Set}$, where the arrows are the canonical ones~\cite[p.~123]{MM}. At times it is convenient to apply the compatibility condition as in~\eqref{compdiag} instead of considering the pushout, while at other times it is necessary to work with the pushout explicitly.
\subsection*{Effective-monic cones in $\mathsf{CHaus}$}
Since we are interested in sheaf conditions satisfied by a functor of the form $-(A):\mathsf{CHaus}\to\mathsf{Set}$ for $A\in\Calg$, it makes sense to consider the commutative case first. Then our functor takes the form $-(C(W))$, which is isomorphic to the hom-functor $\mathsf{CHaus}(W,-)$.
\begin{definition}[{e.g.~\cite[Definition~2.22]{shulman}}]
\label{effmondef}
A cone $\{f_i:X\to Y_i\}_{i\in I}$ is \emph{effective-monic} if every representable functor $\mathsf{CHaus}(W,-)$ satisfies the sheaf condition on it.
\end{definition}
Hence $\{f_i\}$ is effective-monic if and only if $X$ is the equalizer in the diagram
\[
\xymatrix{ X \ar[r] & \mathop{\mathlarger{\mathlarger{\prod}}_{i\in I}} Y_i \ar@<1ex>[r] \ar@<-1ex>[r] & \mathlarger{\mathlarger{\prod}}_{i,j\in I} (Y_i\pushout{f_i}{f_j}Y_j), }
\]
or equivalently the limit in the diagram
\begin{equation}
\begin{split}
\label{pointssheaf}
\xymatrix{ && Y_i \ar[dr] & \vdots \\
X \ar[drr]_{f_j} \ar[urr]^{f_i} && \vdots & Y_i\pushout{f_i}{f_j}Y_j \\
&& Y_j \ar[ur] & \vdots }
\end{split}
\end{equation}
\begin{example}
\label{limit}
Let $\Lambda$ be a small category and $L:\Lambda\to\mathsf{CHaus}$ a functor of which we consider the limit $\lim_\Lambda L\in\mathsf{CHaus}$. The limit projections $p_\lambda : \lim_\Lambda L \to L(\lambda)$ assemble into a cone $\{p_\lambda\}_{\lambda\in\Lambda}$, which is effective-monic.
\end{example}
Fortunately, it is not necessary to consider arbitrary $W$ in Definition~\ref{effmondef}:
\begin{lemma}
A cone $\{f_i\}$ is effective-monic if and only if $\mathsf{CHaus}(\mathbf{1},-)$ satisfies the sheaf condition on it.
\label{pointsonly}
\end{lemma}
\begin{proof}
$\mathsf{CHaus}$ is well-known to be monadic over $\mathsf{Set}$, with the forgetful functor being precisely the functor of points $\mathsf{CHaus}(\mathbf{1},-) : \mathsf{CHaus}\to\mathsf{Set}$. In particular, this functor creates limits, and hence preserves and reflects them. Since the sheaf condition for $\mathsf{CHaus}(\mathbf{1},-)$ on $\{f_i\}$ states precisely that the functor of points takes the cone~\eqref{pointssheaf} to a limit cone in $\mathsf{Set}$, this is equivalent to~\eqref{pointssheaf} being a limit in $\mathsf{CHaus}$, i.e.~to $\{f_i\}$ being effective-monic.
\end{proof}
So in words, $X$ must be the subspace of the product space $\prod_{i\in I} Y_i$ consisting of all those families of points $\{y_i\}_{i\in I}$ such that the image of $y_i\in Y_i$ coincides with the image of $y_j\in Y_j$ in the pushout space $Y_i\pushout{f_i}{f_j}Y_j$. This condition also applies for $j=i$, in which case it is equivalent to $y_i\in\im(f_i)$.
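To spell out the diagonal case $j=i$: the pushout $Y_i\pushout{f_i}{f_i}Y_i$ is the quotient of $Y_i\amalg Y_i$ by the closed equivalence relation generated by identifying the two copies of $f_i(x)$ for every $x\in X$, and this relation identifies the two copies of a point $y\in Y_i$ precisely when $y\in\im(f_i)$. Hence the compatibility condition at $(i,i)$ indeed amounts to $y_i\in\im(f_i)$.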
\begin{remark}
For a given $Y$, the cone of all functions $\{f:X\to Y\}_{f:X\to Y}$ is effective-monic for every $X$ if and only if $Y$ is codense.
\label{codense}
\end{remark}
While these categorical considerations have been extremely general, we now get into the specifics of $\mathsf{CHaus}$. We write $\Box:=[0,1]\times[0,1]$ for the unit square, considered as a subspace $\Box\subseteq\mathbb{R}^2=\mathbb{C}$, so that the unit interval $[0,1]\subseteq\mathbb{R}$ is an edge of $\Box$.
\begin{lemma}
\label{squarecover}
For every $X\in\mathsf{CHaus}$, the cone $\{f:X\to\Box\}_{f:X\to\Box}$ consisting of all functions $f:X\to\Box$ is effective-monic.
\end{lemma}
By Remark~\ref{codense}, this is a restatement of the known fact that $\Box$ is codense in $\mathsf{CHaus}$~\cite{isbell}.
While one thinks of a conventional sheaf condition as saying that a function is uniquely determined by a compatible assignment of values to all (local neighbourhoods of) points, this sheaf condition says that a point is uniquely determined by a compatible assignment of values to all functions.
\begin{proof}
We need to show that the diagram
\[
\xymatrix{ X \ar[r] & \mathlarger{\mathlarger{\prod}}_{f:X\to \Box} \Box \ar@<1ex>[r] \ar@<-1ex>[r] & \mathlarger{\mathlarger{\prod}}_{g,h:X\to\Box} (\Box \pushout{g}{h}\Box) }
\]
is an equalizer. Since functions $X\to\Box$ separate points in $X$, it is clear that the map $X\to\prod_f\Box$ is injective.
Surjectivity is more difficult. Suppose that $v \in \prod_{f:X\to\Box} \Box$ is a compatible family of sections. Then in particular, we have
\begin{equation}
v(hf) = h(v(f)) \quad\textrm{ for all }\quad h:\Box\to \Box
\label{noncontextuality}
\end{equation}
as an instance of the compatibility condition, since the square
\begin{equation}
\begin{split}
\label{weakcomp}
\xymatrix{ X \ar[r]^{hf} \ar[d]_f & \Box \ar@{=}[d] \\
\Box \ar[r]_h & \Box }
\end{split}
\end{equation}
commutes.
We have to show that there exists a point $x\in X$ with $v(f) = f(x)$ for all $f:X\to\Box$. This set of equations is equivalent to $x\in \bigcap_f f^{-1}(v(f))$. Hence it is enough to show that $\bigcap_f f^{-1}(v(f))$ is nonempty. By compactness, it is sufficient to prove that any finite intersection
\[
f_1^{-1}(v(f_1)) \cap \ldots \cap f_n^{-1}(v(f_n))
\]
for a finite set of functions $f_1,\ldots,f_n:X\to\Box$ is nonempty. Using induction on $n$, the induction step is obvious if for given $f_1,f_2$ we can exhibit $g:X\to\Box$ such that
\[
g^{-1}(v(g)) = f_1^{-1}(v(f_1)) \cap f_2^{-1}(v(f_2)).
\]
First, by~\eqref{noncontextuality}, we can assume that both $f_1$ and $f_2$ actually take values in $[0,1]$, e.g.~by considering
\[
h_1 \: : \: \Box\longrightarrow[0,1], \qquad t\longmapsto \tfrac{1}{\sqrt{2}}\,|t - v(f_1)|
\]
and replacing $f_1$ by $h_1 f_1$, which results in
\[
(h_1 f_1)^{-1}(v(h_1 f_1)) \stackrel{\eqref{noncontextuality}}{=} f_1^{-1}(h_1^{-1}(h_1(v(f_1)))) = f_1^{-1}(h_1^{-1}(0)) = f_1^{-1}(v(f_1)),
\]
and similarly for $f_2$. After this replacement, we can take $g(x):= (f_1(x),f_2(x))$, and the induction step is complete upon applying~\eqref{noncontextuality} to the two coordinate projections.
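In more detail, applying~\eqref{noncontextuality} to the coordinate projections $p_1,p_2:\Box\to[0,1]$ gives $v(f_1) = v(p_1 g) = p_1(v(g))$ and $v(f_2) = p_2(v(g))$, so that $v(g) = (v(f_1),v(f_2))$ and therefore
\[
g^{-1}(v(g)) = f_1^{-1}(v(f_1)) \cap f_2^{-1}(v(f_2)).
\]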
Finally, we need to show that any individual set $f^{-1}(v(f))$ is nonempty as the base of the induction, where we may again assume that $f$ takes values in $[0,1]$. For given $s\in[0,1]\setminus\im(f)$, choose $h:[0,1]\to[0,1]$ such that $h(\im(f))=\{0\}$ and $h(s)=1$ by the Tietze extension theorem. Then
\[
0 = v(0) = v(hf) = h(v(f)),
\]
where $v(0)=0$ follows from~\eqref{noncontextuality} applied with a constant function in place of $h$. Hence $v(f)\neq s$, and therefore $v(f)\in\im(f)$, as was to be shown.
\end{proof}
For us, this effective-monic cone is the most important one. We now consider some other examples of effective-monic cones in $\mathsf{CHaus}$, which shed some light on their general behaviour. This is relevant for our main line of thought only as a source of examples.
As the counterexample given in the proof of~\cite[Theorem 2.6]{isbell} shows, this generally does not hold with $[0,1]$ in place of $\Box$. However, at least if $X$ is extremally disconnected, then it is still true, as an immediate consequence of the following result:
\begin{lemma}
\label{weakvalulem}
If $X$ is extremally disconnected, then $\{f:X\to\mathbf{4}\}$ is effective-monic.
\end{lemma}
Here, we write $\mathbf{4}:=\{0,1,2,3\}$, and the proof uses indicator functions $\chi_Y : X\to\mathbf{4}$ of clopen sets $Y\subseteq X$.
\begin{proof}
Since the clopen sets separate points, the injectivity is again clear and the burden of the proof is in the surjectivity. So let $v$ be a compatible family of sections, assigning to every function $f:X\to\mathbf{4}$ a value $v(f)\in\mathbf{4}$.
As in the proof of Lemma~\ref{squarecover}, we show that the intersection
\[
\bigcap_{Y \text{ clopen, } v(\chi_Y) = 1} Y
\]
is nonempty. Again by compactness and an induction argument as in the proof of Lemma~\ref{squarecover}, it is enough to show that for any clopen $Y_1,Y_2\subseteq X$ with $v(\chi_{Y_1}) = 1$ and $v(\chi_{Y_2}) = 1$, we also have $v(\chi_{Y_1\cap Y_2}) = 1$. To see this, we consider the function
\[
f:= \chi_{Y_1} + 2\chi_{Y_2},
\]
and apply the compatibility condition in the form~\eqref{noncontextuality} for various $h$. Choosing $h$ such that $0,2\mapsto 0$ and $1,3\mapsto 1$ results in $hf=\chi_{Y_1}$, and hence $v(f)\in \{1,3\}$. Similarly, mapping $0,1\mapsto 0$ and $2,3\mapsto 1$ yields $hf=\chi_{Y_2}$, and therefore $v(f)\in\{2,3\}$. Overall, we obtain $v(f)=3$, and apply $h$ with $0,1,2\mapsto 0$ and $3\mapsto 1$ to conclude $v(\chi_{Y_1\cap Y_2})=1$ from $hf=\chi_{Y_1\cap Y_2}$.
So there is at least one point $x_0\in X$ such that $v(\chi_Y)=1$ implies $x_0\in Y$ for all clopen $Y\subseteq X$. We then claim that $v(f) = f(x_0)$ for all $f:X\to\mathbf{4}$. This follows from writing
\[
f = 0\chi_{Y_0} + 1\chi_{Y_1} + 2\chi_{Y_2} + 3\chi_{Y_3}
\]
for a partition of $X$ by clopens $Y_0,Y_1,Y_2,Y_3\subseteq X$, and applying~\eqref{noncontextuality} with $h$ such that $v(f)\mapsto 1$, while the other three integers map to $0$: then $hf = \chi_{Y_{v(f)}}$ and $v(\chi_{Y_{v(f)}}) = h(v(f)) = 1$, so that $x_0\in Y_{v(f)}$ and hence $f(x_0) = v(f)$.
\end{proof}
A singleton cone $\{f:X\to Y\}$ is effective-monic if and only if $f$ is injective. For cones consisting of exactly two functions, the necessary and sufficient criterion is as follows:
\begin{lemma}
\label{malcev}
A cone $\{f:X\to Y,g:X\to Z\}$ consisting of exactly two functions is effective-monic if and only if the pairing $(f,g) : X\to Y\times Z$ is a Mal'cev relation, meaning that $f$ and $g$ are jointly injective and their joint image
\[
R:=\im((f,g))\subseteq Y\times Z
\]
satisfies the implication
\begin{equation}
\label{eqmalcev}
(y,z) \in R,\qquad (y',z) \in R,\qquad (y,z') \in R \qquad\Longrightarrow\qquad (y',z') \in R.
\end{equation}
\end{lemma}
For the notion of Mal'cev relation, see~\cite{garner}.
\begin{proof}
We use the criterion of Lemma~\ref{pointsonly}. The injectivity part of the sheaf condition is equivalent to injectivity of $(f,g) : X\to Y\times Z$. Assuming that this holds, we identify $X$ with the joint image $R\subseteq Y\times Z$.
Now if $\{f,g\}$ is effective-monic and we have $y,y'\in Y$ and $z,z'\in Z$ as in~\eqref{eqmalcev}, then each of the three pairs $(y,z)$, $(y',z)$ and $(y,z')$ represents a point of $X$. So since $(y,z)$ is in particular a compatible pair of sections, in $Y\pushout{f}{g}Z$ the image of $y$ coincides with the image of $z$. By the same reasoning applied to $(y',z)$, also $y'$ maps to the same point in $Y\pushout{f}{g}Z$, and by $(y,z')$ so does $z'$. Hence also $(y',z')$ is a compatible pair of sections, which must correspond to a point of $X$ due to the sheaf condition.
Conversely, suppose that~\eqref{eqmalcev} holds. The pushout $Y\pushout{f}{g}Z$ is the quotient of the coproduct $Y\amalg Z$ by the closed equivalence relation generated by $f(x)\sim g(x)$ for all $x\in X$, i.e.~by $y\sim z$ for all $(y,z)\in R$. In terms of relational composition, it is straightforward to check that
\[
\id_{Y\amalg Z} \cup R \cup R^\op \cup (R\circ R^\op) \cup (R^\op\circ R)
\]
is already an equivalence relation thanks to~\eqref{eqmalcev}. As a finite union of closed sets, it is also closed, and hence two points in $Y\amalg Z$ get identified in $Y\pushout{f}{g} Z$ if and only if they satisfy this relation. In particular, $y\in Y$ and $z\in Z$ map to the same point in $Y\pushout{f}{g}Z$ if and only if $(y,z)\in R$.
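To illustrate the verification of transitivity: if $y,y'\in Y$ are related through a common partner $z\in Z$, i.e.~$(y,z)\in R$ and $(y',z)\in R$, and moreover $(y',z')\in R$, then~\eqref{eqmalcev} applied with $y$ and $y'$ interchanged yields $(y,z')\in R$, as required. The remaining transitivity cases are analogous.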
\end{proof}
In general, the pushout of an effective-monic cone along an arbitrary function is not effective-monic again. The following example even shows that the effective-monic cones on $\mathsf{CHaus}$ do not form a coverage; an even more drastic example can be found in the proof of Proposition~\ref{nocoverage}.
\begin{example}
\label{ex4to3}
Take $X:=\mathbf{4}=\{0,1,2,3\}$, and consider two maps to spaces with 3 points,
\[
f\: : \: \{0,1,2,3\} \longrightarrow \{01,2,3\},\qquad g\: : \: \{0,1,2,3\} \longrightarrow \{0,1,23\},
\]
as illustrated by the projection maps in Figure~\ref{fig4to3}. By Lemma~\ref{malcev}, this cone is effective-monic. However, taking the pushout along the identification map
\[
h \: : \: \{0,1,2,3\} \longrightarrow \{0,12,3\}
\]
results in a cone consisting of $f' : \{0,12,3\}\to \{012,3\}$ and $g':\{0,12,3\}\to \{0,123\}$. Since the criterion of Lemma~\ref{malcev} fails, the cone $\{f',g'\}$ is not effective-monic. In particular, the pushout of an effective-monic cone is not necessarily effective-monic again. Worse, the collection of all effective-monic cones is not a coverage: for our original $\{f,g\}$, there does not exist any effective-monic cone $\{k_i : \{0,12,3\}\to Y_i\}_{i\in I}$ such that every $k_i h$ would factor through $f$ or $g$,
\[
\xymatrix{ && & \{01,2,3\} \ar@{-->}[dd]^? \\
\{0,1,2,3\} \ar[urrr]^f \ar[rr]_g \ar[d]_h && \{0,1,23\} \ar@{-->}[dr]_? \\
\{0,12,3\} \ar[rrr]_{k_i} && & Y_i }
\]
The reason is as follows: for every $i\in I$, we would need to have $k_i(0) = k_i(12)$ or $k_i(12) = k_i(3)$. If the former happens, consider the point $y_i:= k_i(3)\in Y_i$, while if the latter happens take $y_i:= k_i(0)$. (If both cases apply, these two prescriptions result in the same point $y_i = k_i(0) = k_i(3)$.) It is easy to check that the resulting family of points $\{y_i\}_{i\in I}$ is compatible. However, it does not arise from a point of $\{0,12,3\}$: since the $k_i$ must separate points, there must be $i$ with $k_i(0) = k_i(12) \neq k_i(3)$, and another $i$ with $k_i(0)\neq k_i(12) = k_i(3)$. Hence no point $x\in\{0,12,3\}$ gives rise to the given compatible family, and the cone $\{k_i\}$ is not effective-monic.
\end{example}
\begin{figure}
\[
\xymatrix@=.2cm{ & & \bullet \ar@{}[l]|3 & & & & & \bullet \ar@{}[l]|3 \\
& & \bullet \ar@{}[l]|2 & \ar[rrr]^f & & & & \bullet \ar@{}[l]|2 \\
\bullet \ar@{}[u]|0 & \bullet \ar@{}[u]|1 & & & & & & \bullet \ar@{}[l]|{01} \\
\\
& \ar[ddd]_g \\
\\
& \\
& & & \\
\bullet \ar@{}[u]|0 & \bullet \ar@{}[u]|1 & \bullet \ar@{}[u]|{23} }
\]
\caption{Illustration of the cone $\{f,g\}$ of Example~\ref{ex4to3}.}
\label{fig4to3}
\end{figure}
Incidentally, the cone $\{f',g'\}$ from above is arguably the simplest example of a cone that separates points (is jointly injective) without being effective-monic.
\begin{remark}
The previous example can also be understood in terms of effectus theory~\cite[Assumption~1]{jacobs}: the relevant pushout square is of the form
\[
\xymatrix{ W + Y \ar[r]^{\id + f} \ar[d]_{g + \id} & W + Z \ar[d]^{g + \id} \\
X + Y \ar[r]_{\id + f} & X + Z }
\]
where `$+$' is the coproduct in $\mathsf{CHaus}$ and both $f$ and $g$ are the unique map $\mathbf{2}\to\mathbf{1}$. In general, any cone consisting of $\id+f:W+Y\to W+Z$ and $g+\id:W+Y\to X+Y$ is effective-monic by Lemma~\ref{malcev}.
It is conceivable that there are deeper connections with effectus theory than just at the level of examples, but so far we have not explored this theme any further.
\end{remark}
Starting to get back to C*-algebras, we record one more statement about cones for later use.
\begin{lemma}
A cone $\{f_i : X\to Y_i\}$ separates points if and only if the ranges of the $C(f_i) : C(Y_i)\to C(X)$ generate $C(X)$ as a C*-algebra.
\label{seppoints}
\end{lemma}
\begin{proof}
By the Stone-Weierstrass theorem, the C*-subalgebra generated by the ranges of the $C(f_i)$ equals $C(X)$ if and only if it separates points (as a subalgebra). This C*-subalgebra is generated by the elements $g_i\circ f_i\in C(X)$, where $g_i : Y_i\to [0,1]$ ranges over all functions, and hence the subalgebra separates points if and only if these functions separate points. This in turn is equivalent to the $f_i$ separating points, since the $g_i : Y_i\to [0,1]$ also separate points.
\end{proof}
\subsection*{How to guarantee commutativity?}
\label{guarcommsec}
The previous subsection was concerned with sheaf conditions satisfied by the functors $-(A)$ for commutative $A$. Now, we want to investigate which of these sheaf conditions hold for general $A$.
\begin{definition}
\label{guarcommdef}
An effective-monic cone $\{f_i:X\to Y_i\}_{i\in I}$ in $\mathsf{CHaus}$ is \emph{guaranteed commutative} if every functor $-(A)$ satisfies the sheaf condition on it.
\end{definition}
In detail, $-(A)$ satisfies the sheaf condition on $\{f_i\}$ if and only if restricting a $*$-homomorphism $\alpha:C(X)\to A$ along all $C(f_i):C(Y_i)\to C(X)$ to families $\beta_i : C(Y_i)\to A$ that are compatible in the sense that $\beta_i\circ C(g) = \beta_j\circ C(h)$ for every diagram of the form~\eqref{compdiag},
\[
\xymatrix{ X \ar[r]^{f_i} \ar[d]_{f_j} & Y_i \ar[d]^g \\
Y_j \ar[r]_h & Z }
\]
results in a bijection. In terms of the functor $C:\mathsf{CHaus}^\op\to\Calg$, this holds if and only if the diagram
\[
\xymatrix{ \vdots & C(Y_i) \ar[drr] \\
C(Y_i)\pullback{C(f_i)}{C(f_j)}C(Y_j) \ar[ur]^{C(f_i)} \ar[dr]_{C(f_j)} & \vdots && C(X) \\
\vdots & C(Y_j) \ar[urr] }
\]
which is the image of~\eqref{pointssheaf} under $C$, is a colimit in $\Calg$. Here, we have used the canonical isomorphism $C(Y_i\pushout{f_i}{f_j} Y_j)\cong C(Y_i)\pullback{C(f_i)}{C(f_j)} C(Y_j)$, which holds because $C$ is a right adjoint. So we are dealing with an instance of the question of which limits $C$ turns into colimits.
\begin{remark}
\label{gluemeas}
In terms of the physical interpretation of Remarks~\ref{measurements} and~\ref{reconstruct}, the sheaf condition on a cone $\{f_i:X\to Y_i\}$ states that every compatible family of measurements with outcomes in the $Y_i$ corresponds to a unique measurement with values in $X$ which coarse-grains to the given measurements via the $f_i$.
\end{remark}
The terminology of Definition~\ref{guarcommdef} is motivated by the following observation:
\begin{lemma}
An effective-monic cone $\{f_i:X\to Y_i\}_{i\in I}$ is guaranteed commutative if and only if for every $A\in\Calg$ and compatible family $\beta_i : C(Y_i)\to A$, the ranges of the $\beta_i$ commute.
\label{gclem}
\end{lemma}
\begin{proof}
Suppose that the criterion holds. For $A\in\Calg$, we show that restricting a $*$-homomorphism $C(X)\to A$ to a compatible family of $*$-homomorphisms $C(Y_i)\to A$ is a bijection. We first show injectivity, so let $\alpha,\alpha' : C(X)\to A$ be such that the resulting families coincide, $\beta_i = \beta'_i$. In particular, this means that the range of each $\beta_i$ coincides with the range of $\beta'_i$, and hence $\im(\alpha) = \im(\alpha')$ by Lemma~\ref{seppoints}. Hence we are back in the commutative case, where Gelfand duality and the effective-monic assumption apply.
For surjectivity, let a compatible family $\beta_i : C(Y_i)\to A$ be given. By assumption, there is some commutative subalgebra $B\subseteq A$ which contains the ranges of all $\beta_i$, and it is sufficient to prove the sheaf condition with $B$ in place of $A$. The claim then follows from Gelfand duality together with the assumption that $\{f_i\}$ is effective-monic.
Conversely, if the sheaf condition holds on a functor $-(A)$, then the $\beta_i : C(Y_i)\to A$ all arise from restricting some $\alpha : C(X)\to A$ along $C(f_i):C(Y_i)\to C(X)$. In particular, the range of every $\beta_i$ is contained in the range of $\alpha$, which is a commutative C*-subalgebra.
\end{proof}
The crucial ingredient here is the fact that commutativity is a pairwise property, in the sense that if a family of normal elements in a C*-algebra commutes pairwise, then it generates a commutative C*-subalgebra; by Fuglede's theorem, pairwise commuting normal elements automatically commute with each other's adjoints as well. We will meet this property again in Definition~\ref{piecewisedef}.
In the sense of Lemma~\ref{gclem}, the question is under what conditions an effective-monic cone `guarantees commutativity' of the ranges of a compatible family.
\begin{example}
\label{exgc}
The effective-monic cone of Example~\ref{ex4to3} is guaranteed commutative: in terms of indicator functions of individual points, the compatibility assumption on a pair of $*$-homomorphisms $\beta_f:C(\{01,2,3\})\to A$ and $\beta_g:C(\{0,1,23\}) \to A$ is that
\[
\beta_f(\chi_{01}) = \beta_g(\chi_0) + \beta_g(\chi_1),\qquad \beta_g(\chi_{23}) = \beta_f(\chi_2) + \beta_f(\chi_3).
\]
So $\beta_g(\chi_0)$ is a projection below $\beta_f(\chi_{01})$, and in particular orthogonal to $\beta_f(\chi_2)$ and $\beta_f(\chi_3)$, so that it commutes with every element in the range of $\beta_f$. Proceeding like this proves that the ranges of $\beta_f$ and $\beta_g$ commute entirely.
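Explicitly, being a projection below $\beta_f(\chi_{01})$ means $\beta_g(\chi_0)\,\beta_f(\chi_{01}) = \beta_g(\chi_0) = \beta_f(\chi_{01})\,\beta_g(\chi_0)$, while the orthogonality gives $\beta_g(\chi_0)\,\beta_f(\chi_2) = 0 = \beta_f(\chi_2)\,\beta_g(\chi_0)$; since $\chi_{01},\chi_2,\chi_3$ span $C(\{01,2,3\})$, this already covers all of $\im(\beta_f)$. The same argument applies to $\beta_g(\chi_1)$ and, symmetrically, to $\beta_f(\chi_2)$ and $\beta_f(\chi_3)$.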
\end{example}
\begin{example}
Let $\Tl\subseteq\mathbb{C}$ be the unit circle, and $p_\Re,p_\Im : \Tl\to [-1,+1]$ the two coordinate projections. Then the cone $\{p_\Re,p_\Im\}$ is effective-monic by Lemma~\ref{malcev}, or alternatively since applying $p_\Re$ and $p_\Im$ establishes a bijection between points of $\Tl$ and pairs of numbers $y_\Re,y_\Im\in[-1,+1]$ with $y_\Re^2 + y_\Im^2 = 1$. Hence compatible families $\{\beta_\Re,\beta_\Im\}$ are $*$-homomorphisms $\beta_\Re:C([-1,+1])\to A$ and $\beta_\Im:C([-1,+1])\to A$ that correspond to self-adjoint elements $\beta_\Re(\id),\beta_\Im(\id)\in [-1,+1](A)$ with $\beta_\Re(\id)^2 + \beta_\Im(\id)^2 = 1$. Such a pair of self-adjoints arises from a unitary by functional calculus if and only if they commute. For example, choosing any $A$ with non-commuting symmetries $s_\Re$ and $s_\Im$ provides a compatible family that does not arise in this way upon putting $\beta_\Re(\id):= s_\Re/\sqrt{2}$ and $\beta_\Im(\id):= s_\Im/\sqrt{2}$: since symmetries square to $1$, we indeed have $(s_\Re/\sqrt{2})^2 + (s_\Im/\sqrt{2})^2 = \tfrac{1}{2} + \tfrac{1}{2} = 1$. Therefore $\{p_\Re,p_\Im\}$ is not guaranteed commutative.
\label{circleproj}
\end{example}
So far, we know of one powerful sufficient condition for guaranteeing commutativity:
\begin{definition}
An effective-monic cone $\{f_i:X\to Y_i\}_{i\in I}$ in $\mathsf{CHaus}$ is \emph{directed} if for every $i\in I$ there is a cone $\{g_i^j : Y_i\to Z_i^j\}_{j\in J_i}$ which separates points, and such that for every $i,i'\in I$ and $j\in J_i$, $j'\in J_{i'}$ there are $k\in I$ and morphisms $Y_k\to Z_i^j$ and $Y_k\to Z_{i'}^{j'}$ making the following diagram commute:
\begin{equation}
\begin{split}
\label{directeddiag}
\xymatrix{ & X \ar[dl]_{f_i} \ar[d]|{f_k} \ar[dr]^{f_{i'}} \\
Y_i \ar[d]_{g_i^j} & Y_k \ar[dl] \ar[dr] & Y_{i'} \ar[d]^{g_{i'}^{j'}} \\
Z_i^j & & Z_{i'}^{j'} }
\end{split}
\end{equation}
\label{directeddef}
\end{definition}
Note that this definition can be considered in principle in any category.
\begin{proposition}
If $\{f_i\}$ is effective-monic and directed, then it is also guaranteed commutative.
\label{guarcommcrit}
\end{proposition}
\begin{proof}
By Lemma~\ref{gclem}, it is enough to show that the ranges of a compatible family $\{\beta_i:C(Y_i)\to A\}$ commute. By Lemma~\ref{seppoints}, it is enough to prove that the range of $\beta_i\circ C(g_i^j) : C(Z_i^j)\to A$ commutes with the range of $\beta_{i'}\circ C(g_{i'}^{j'}): C(Z_{i'}^{j'})\to A$ for any $i,i'\in I$ and $j\in J_i$, $j'\in J_{i'}$. Thanks to~\eqref{directeddiag} and the compatibility, both of these ranges are contained in the range of $\beta_k : C(Y_k)\to A$, which is commutative.
\end{proof}
\begin{example}
\label{cofiltered}
Let $\mathbf{2}^{\mathbb{N}}$ be the Cantor space, with projections $p_n:\mathbf{2}^{\mathbb{N}}\to\mathbf{2}^n$ for every $n\in\mathbb{N}$. Then the cone $\{p_n\}_{n\in\mathbb{N}}$ is effective-monic and directed. Therefore it is also guaranteed commutative.
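Concretely, the directedness can be witnessed by the trivial cones $\{\id\}$ on the codomains $\mathbf{2}^n$: for given $m,n\in\mathbb{N}$, take $k:=\max(m,n)$ in Definition~\ref{directeddef}, so that the two truncation maps $\mathbf{2}^k\to\mathbf{2}^m$ and $\mathbf{2}^k\to\mathbf{2}^n$ commute with the projections from $\mathbf{2}^{\mathbb{N}}$.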
More generally, let $\Lambda$ be a small cofiltered category and $L:\Lambda\to\mathsf{CHaus}$ a functor of which we consider the limit $\lim_\Lambda L\in\mathsf{CHaus}$. The cone of limit projections $\{p_\lambda : \lim_\Lambda L \to L(\lambda)\}$ is effective-monic (Example~\ref{limit}). With the trivial cones $\{\id\}$ on the codomains $L(\lambda)$, the cofilteredness implies that the cone is also directed, and therefore guaranteed commutative. What we have shown hereby in a roundabout manner is that a filtered colimit of commutative C*-algebras is again commutative.
\end{example}
Unfortunately, the converse to Proposition~\ref{guarcommcrit} is not true:
\begin{example}
The effective-monic cone $\{f,g\}$ of Examples~\ref{ex4to3} and~\ref{exgc} is not directed, despite being guaranteed commutative. The reason is that the additional cones as in Definition~\ref{directeddef} would have to contain some $h:\{01,2,3\}\to Z_{01}$ with $h(2)\neq h(3)$, and similarly some $k:\{0,1,23\}\to Z_{23}$ with $k(0)\neq k(1)$. By~\eqref{directeddiag}, this would mean that the cone $\{f,g\}$ would have to contain a function that separates both $0$ from $1$ and $2$ from $3$, which is not the case.
\end{example}
So while Proposition~\ref{guarcommcrit} is sufficiently powerful for the remainder of this paper, it remains open to find a necessary and sufficient condition for guaranteeing commutativity.
\begin{lemma}
For any $X\in\mathsf{CHaus}$, the cone $\{f:X\to\Box\}$ of all functions $f:X\to\Box$ is directed.
\label{guarcommsquare}
\end{lemma}
By Lemma~\ref{squarecover}, we already know that this cone is effective-monic. By Proposition~\ref{guarcommcrit}, we can now conclude that it also is guaranteed commutative.
\begin{proof}
In Definition~\ref{directeddef}, take every $\{g_i^j\}_{j\in J_i}$ to be the cone consisting of all functions $\Box\to[0,1]$. Since the pairing of any two functions $X\to[0,1]$ is a function $X\to\Box$, the cone $\{f:X\to\Box\}$ is directed.
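In detail, given cone members $f,f':X\to\Box$ and functions $g,g':\Box\to[0,1]$, the pairing $k:=(gf,g'f'):X\to\Box$ satisfies $p_1\circ k = gf$ and $p_2\circ k = g'f'$ for the two coordinate projections $p_1,p_2:\Box\to[0,1]$, so that~\eqref{directeddiag} commutes with $k$ as the middle vertical arrow.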
\end{proof}
\begin{remark}
In terms of Remark~\ref{gluemeas}, this lemma `explains' why physical measurements are numerical: for every conceivable measurement with values in some arbitrary space $X$, conducting that measurement and recording the outcome in $X$ is equivalent to conducting a sufficient number of measurements with values in $\Box$ and recording their outcomes, which are now plain (complex) numbers.
\end{remark}
\begin{lemma}
If two cones $\{f_i:W\to Y_i\}_{i\in I}$ and $\{g_j:X\to Z_j\}_{j\in J}$ are effective-monic and directed, then so is the product cone
\[
\{ f_i\times g_j : W\times X\to Y_i\times Z_j \}_{(i,j)\in I\times J}.
\]
\label{productcovers}
\end{lemma}
\begin{proof}
Let $\{h^k_i:Y_i\to U_i^k\}_{k\in K_i}$ and $\{k^l_j:Z_j\to V_j^l\}_{l\in L_j}$ be the families of additional cones that witness the directedness. Then for $(i,j)\in I\times J$, consider the cone at $Y_i\times Z_j$ given by
\begin{equation}
\label{productsep}
\{ h^k_i p_{Y_i} : Y_i\times Z_j\to U_i^k \} \cup \{ k^l_j p_{Z_j} : Y_i\times Z_j\to V_j^l \}
\end{equation}
with index set $K_i\amalg L_j$. This cone separates the points of $Y_i\times Z_j$, since any two different points differ in at least one coordinate. The condition of Definition~\ref{directeddef} is easy to check by distinguishing the cases of the left and the right morphism in~\eqref{directeddiag} belonging to either part of~\eqref{productsep}. The only interesting case that comes up is when one considers an $h^k_i p_{Y_i} : Y_i\times Z_{j'}\to U_i^k$ together with a $k^l_j p_{Z_j} : Y_{i'}\times Z_j\to V_j^l$, resulting in a diagram of the form
\[
\xymatrix{ & W\times X \ar[dl]_{f_i\times g_{j'}} \ar[d]|{} \ar[dr]^{f_{i'}\times g_j} \\
Y_i\times Z_{j'} \ar[d]_{h_i^k p_{Y_i}} & Y_i\times Z_j \ar[dl] \ar[dr] & Y_{i'}\times Z_j \ar[d]^{k_j^l p_{Z_j}} \\
U_i^k & & V_j^l }
\]
where indeed the central vertical arrow can be taken to be $f_i\times g_j$.
\end{proof}
In combination with Lemma~\ref{guarcommsquare}, we therefore obtain:
\begin{corollary}
For any $X,Y\in\mathsf{CHaus}$, the cone $\{f\times g:X\times Y\to\Box\times\Box\}$ indexed by all functions $f:X\to\Box$ and $g:Y\to\Box$ is effective-monic and directed.
\label{doublesquare}
\end{corollary}
Another simple class of examples is as follows:
\begin{lemma}
Let $\{f_i : X\to Y_i\}_{i\in I}$ be an effective-monic cone on $X\in\mathsf{CHaus}$. Then the cone
\[
\left\{ (f_{i_1},\ldots,f_{i_n}) : X\to Y_{i_1}\times\ldots\times Y_{i_n} \right\}
\]
consisting of all finite tuplings of the $f_i$ is effective-monic and directed.
\label{Sinfty}
\end{lemma}
Alternatively, we could phrase this as saying that if an effective-monic cone is closed under pairing, then it is directed.
\begin{proof}
Mapping points of $X$ to compatible families of points in all finite products $\prod_{m=1}^n Y_{i_m}$ is trivially injective, since it already is so on single-factor products due to the effective-monic assumption. Concerning surjectivity, the compatibility assumption guarantees that the component $(y_1,\ldots,y_n)\in Y_{i_1}\times\ldots\times Y_{i_n}$ of a compatible family is uniquely determined by the single-factor components in the individual $Y_{i_m}$, since this is precisely the compatibility condition on diagrams of the form
\[
\xymatrix{ X \ar[d]_{f_{i_m}} \ar[rr]^(.4){(f_{i_1},\ldots,f_{i_n})} && \mathlarger{\mathlarger{\prod}}_{m=1}^n Y_{i_m} \ar[d]^{p_m} \\
Y_{i_m} \ar@{=}[rr] && Y_{i_m} }
\]
Hence the new cone is also effective-monic.
The condition of Definition~\ref{directeddef} holds by construction, with the trivial cone $\{\id\}$ on the codomains.
\end{proof}
Next, we briefly investigate the collection of directed effective-monic cones in its entirety.
\begin{proposition}
\label{nocoverage}
The collection of all directed effective-monic cones on $\mathsf{CHaus}$ is not a coverage.
\end{proposition}
Results along the lines of~\cite[Theorem~1.1]{reyes} indicate that this is not due to the potential inadequacy of our definitions, but rather due to fundamental obstructions related to the noncommutativity.
\begin{proof}
Consider $X:=\{0,1\}^3$ with the three product projections $p_1,p_2,p_3:\{0,1\}^3\to\{0,1\}$. By reasoning analogous to the proof of Lemma~\ref{squarecover}, their three pairings
\begin{equation}
\label{facecone}
\left\{\:(p_1,p_2),\: (p_1,p_3),\: (p_2,p_3) \: : \: \{0,1\}^3\longrightarrow\{0,1\}^2 \:\right\}
\end{equation}
form an effective-monic cone. By reasoning analogous to the proof of Lemma~\ref{guarcommsquare}, this cone is directed.
Now consider the function $f:\{0,1\}^3\to\mathbf{4}$ defined by mapping every element of $\{0,1\}^3$ to the sum of its digits. In any square of the form
\[
\xymatrix{ \{0,1\}^3 \ar[r]^{(p_1,p_2)} \ar[d]_f & \{0,1\}^2 \ar[d]^g \\
\mathbf{4} \ar[r]_h & Z }
\]
we necessarily have
\[
\underbrace{h(f(000))}_{=h(0)} = g(00) = \underbrace{h(f(001))}_{=h(1)} = h(f(010)) = g(01) = \underbrace{h(f(011))}_{=h(2)} = h(f(110)) = g(11) = \underbrace{h(f(111))}_{=h(3)},
\]
and therefore $h$ must be constant. By symmetry, the same must hold with $(p_1,p_3)$ or $(p_2,p_3)$ in place of $(p_1,p_2)$. Hence any cone on $\mathbf{4}$ that factors through~\eqref{facecone} must identify \emph{all} points of $\mathbf{4}$. In particular, no such cone can even be effective-monic, let alone directed.
\end{proof}
We close this subsection with another potential criterion for guaranteeing commutativity. This is not relevant for the remainder of the paper.
\begin{lemma}
The following conditions on a cone $\{f_i:X\to Y_i\}$ in $\mathsf{CHaus}$ are equivalent:
\begin{enumerate}
\item\label{lfpoint} For every $x\in X$ and neighbourhood $U\ni x$ there exists $i\in I$ with
\[
f_i^{-1}(f_i(x)) \subseteq U.
\]
\item\label{lfngbhd} For every $x\in X$ and neighbourhood $U\ni x$ there exist $i\in I$ and a neighbourhood $V\ni f_i(x)$ with
\[
f_i^{-1}(V) \subseteq U.
\]
\item\label{lfbasis} The sets of the form $f_i^{-1}(V)$ for open $V\subseteq Y_i$ form a basis for the topology on $X$.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item[\implproof{lfpoint}{lfngbhd}] Since $X\setminus U$ is compact, $f_i(X\setminus U)$ is a closed set, and disjoint from $\{f_i(x)\}$ by assumption. Now take $V$ to be any open neighbourhood of $f_i(x)$ disjoint from $f_i(X\setminus U)$.
\item[\implproof{lfngbhd}{lfbasis}] Suppose $x\in f_i^{-1}(V_i)\cap f_j^{-1}(V_j)$. Then by assumption, there is $k$ and an open $V_k\subseteq Y_k$ with $f_k(x)\in V_k$ such that
\[
f_k^{-1}(V_k) \subseteq f_i^{-1}(V_i)\cap f_j^{-1}(V_j).
\]
\item[\implproof{lfbasis}{lfpoint}] There must be a basic open $f_i^{-1}(V_i)$ with $x\in f_i^{-1}(V_i)\subseteq U$.\qedhere
\end{enumerate}
\end{proof}
\begin{definition}
If the above conditions hold, we say that the cone $\{f_i\}$ is \emph{locally injective}.
\end{definition}
Clearly, a locally injective cone separates points. However, it is not necessarily effective-monic:
\begin{example}
The cone consisting of three surjective functions $\mathbf{3}\to \mathbf{2}$, one for each of the three ways of partitioning $\mathbf{3}$ into two blocks, is locally injective. However, it is not effective-monic: the pushout of any two different maps $\mathbf{3}\to\mathbf{2}$ is trivial, and hence there are $2^3$ compatible families of points in the cone, but only $3$ points in $X$.
\end{example}
\begin{example}
The cone $\{p_\Re,p_\Im\}$ from Example~\ref{circleproj} is not locally injective: for any angle $0<\varphi<\pi/2$, the point $(\cos\varphi,\sin\varphi)\in \Tl$ cannot be distinguished from $(\cos\varphi,-\sin\varphi)\in \Tl$ under $p_\Re$, nor from $(-\cos\varphi,\sin\varphi)$ under $p_\Im$.
\end{example}
\begin{conjecture}
\label{linjconj}
An effective-monic cone $\{f_i:X\to Y_i\}$ that is locally injective is also guaranteed commutative.
\end{conjecture}
Since the cone of all functions $X\to\Box$ is an effective-monic and locally injective cone, proving this conjecture would again show that $\{f:X\to\Box\}$ is guaranteed commutative. Furthermore, this would detect some cones as guaranteed commutative that are not detected as such by Proposition~\ref{guarcommcrit}: the effective-monic cone of Examples~\ref{ex4to3} and~\ref{exgc} is one of these.
\begin{example}
In the setting of Example~\ref{cofiltered}, the topology of $\lim_\Lambda L$ is generated by the preimages of opens in all the $L(\lambda)$. The cofilteredness assumption implies that these opens form a basis: for $U_\lambda \subseteq L(\lambda)$ and $U_{\lambda'}\subseteq L(\lambda')$, we have $\hat{\lambda}$ and morphisms $f:\hat{\lambda}\to\lambda$ and $f':\hat{\lambda}\to\lambda'$ such that
\[
\xymatrix{ & \lim_\Lambda L \ar[dl]_{p_\lambda} \ar[d]|{p_{\hat{\lambda}}} \ar[dr]^{p_{\lambda'}} \\
L(\lambda) & L(\hat{\lambda}) \ar[l]^{L(f)} \ar[r]_{L(f')} & L(\lambda') }
\]
commutes. In particular, $L(f)^{-1}(U_\lambda)\cap L(f')^{-1}(U_{\lambda'})$ is an open in $L(\hat{\lambda})$ whose preimage in $\lim_\Lambda L$ is exactly the intersection of the preimages of $U_\lambda$ and $U_{\lambda'}$. Hence the limit cone $\{p_\lambda\}$ is also locally injective. By Example~\ref{cofiltered}, this is in accordance with Conjecture~\ref{linjconj}.
\end{example}
Similar to the situation with Proposition~\ref{guarcommcrit}, being locally injective is also not a necessary condition for guaranteeing commutativity:
\begin{example}
There are effective-monic cones that are directed and hence guaranteed commutative, but not locally injective. For example with $\text{\mancube}:=[0,1]^3$ the unit cube, the three face projections $p_1,p_2,p_3 : \text{\mancube}\to\Box$ form a cone $\{p_1,p_2,p_3\}$ that is effective-monic but not locally injective. Nevertheless, considering copies of the cone consisting of the two coordinate projections $\Box\to[0,1]$ in Definition~\ref{directeddef} shows that the cone is directed, and hence guaranteed commutative. In particular, the converse to Conjecture~\ref{linjconj} is false.
\end{example}
\subsection*{The category of sheaves and its smallness properties}
Now that we have some idea of which sheaf conditions are satisfied by C*-algebras, we investigate completely general functors $\mathsf{CHaus}\to\mathsf{Set}$ satisfying (some of) these sheaf conditions.
\begin{definition}
A functor $F:\mathsf{CHaus}\to\mathsf{Set}$ is a \emph{sheaf} if it satisfies the sheaf condition on all effective-monic cones that are directed.
\label{sheafdef}
\end{definition}
We write $\Sh(\mathsf{CHaus})$ for the resulting category of sheaves, which is a full subcategory of $\mathsf{Set}^\mathsf{CHaus}$. Due to Proposition~\ref{nocoverage}, the sheaf conditions are \emph{not} those of a (large) site. Nevertheless, we expect that $\Sh(\mathsf{CHaus})$ is an instance of a category of sheaves on a \emph{quasi-pretopology} or on a \emph{Q-category}, whose categories of sheaves were investigated by Kontsevich and Rosenberg in the context of noncommutative algebraic geometry~\cite{KR1,KR2}\footnote{It is natural to suspect that the reason for why Grothendieck topologies do not apply is in both cases due to the noncommutativity, as has been formally proven in~\cite{reyes}. However, so far we have not explored the relation to the work of Kontsevich and Rosenberg any further.}.
A priori, $\Sh(\mathsf{CHaus})$ may seem rather unwieldy, and it is not even clear whether it is locally small.
\begin{lemma}
\label{evalbox}
Let $F,G\in\Sh(\mathsf{CHaus})$. Evaluating natural transformations at $\Box$ is injective,
\[
\xymatrix{ \Sh(\mathsf{CHaus})(F,G) \: \ar@{^{(}->}[r] & \: \mathsf{Set}(F(\Box),G(\Box)). }
\]
\end{lemma}
\begin{proof}
Since $F$ and $G$ satisfy the sheaf condition on $\{f:X\to\Box\}$ by Lemmas~\ref{squarecover} and~\ref{guarcommsquare}, the canonical map
\[
\xymatrix{ F(X) \ar[r] & \mathlarger{\mathlarger{\prod}}_{f:X\to\Box} F(\Box) }
\]
is injective. Hence for any $\eta:F\to G$, the naturality diagram
\[
\xymatrix{ F(X) \ar[r] \ar[d]_{\eta_X} & \mathlarger{\mathlarger{\prod}}_{f:X\to\Box} F(\Box) \ar[d]^{\prod_f \eta_\Box} \\
G(X) \ar[r] & \mathlarger{\mathlarger{\prod}}_{f:X\to\Box} G(\Box) }
\]
shows that every component $\eta_X$ is uniquely determined by $\eta_\Box$.
\end{proof}
\begin{corollary}
$\Sh(\mathsf{CHaus})$ is locally small.
\end{corollary}
\begin{proof}
Lemma~\ref{evalbox} provides an upper bound on the size of each hom-set.
\end{proof}
With functors $-(A)$ and $-(B)$ for $A,B\in\Calg$ in place of $F$ and $G$, Lemma~\ref{evalbox} also follows from the Yoneda lemma and the fact that $C(\Box)$ is a separator in $\Calg$. The latter is true more generally:
\begin{corollary}
$-(C(\Box))$ is a separator in $\Sh(\mathsf{CHaus})$.
\label{locsmall}
\end{corollary}
Recall that as functors $\mathsf{CHaus}\to\mathsf{Set}$, we have $-(C(\Box)) \cong \mathsf{CHaus}(\Box,-)$.
\begin{proof}
By the Yoneda lemma,
\begin{equation}
\label{yonedaeq}
\Sh(\mathsf{CHaus})(-(C(\Box)),F) = \mathsf{Set}^\mathsf{CHaus}(\mathsf{CHaus}(\Box,-),F) = F(\Box),
\end{equation}
and hence the claim follows from Lemma~\ref{evalbox}.
\end{proof}
The following stronger injectivity property will play a role in the next section:
\begin{lemma}
For $F\in\Sh(\mathsf{CHaus})$, the following are equivalent:
\begin{enumerate}
\item\label{boxinj} The canonical map
\begin{equation}
\label{doublebox}
\xymatrix{ (F(p_1),F(p_2)) \: :\: F(\Box\times\Box) \ar[r] & F(\Box)\times F(\Box) }
\end{equation}
is injective.
\item\label{allinj} For every $X\in\mathsf{CHaus}$ and effective-monic $\{f_i:X\to Y_i\}$, the canonical map
\[
\xymatrix{ F(X) \ar[r] & \mathlarger{\mathlarger{\prod}}_{i\in I} F(Y_i) }
\]
is injective.
\end{enumerate}
\label{injlem}
\end{lemma}
In~\ref{allinj}, the point is that the cone may not be directed, so generically $F$ does not satisfy the sheaf condition on it. The intuition behind the lemma is that these (equivalent) conditions hold in the C*-algebra case, and then the image of~\eqref{doublebox} consists of precisely the pairs of commuting normal elements. In terms of the interpretation as measurements on a physical system, this image consists of the pairs of measurements (with values in $\Box$) that are jointly measurable.
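To illustrate the C*-algebraic case concretely, take $F = -(M_2)$: the two projections
\[
\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
\qquad\textrm{and}\qquad
\frac{1}{2}\begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}
\]
are elements of $\Box(M_2)$ that do not commute, so the pair they form lies in $F(\Box)\times F(\Box)$ but not in the image of~\eqref{doublebox}.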
In the proof, we can start to put the seemingly haphazard lemmas of the previous subsection to some use.
\begin{proof}
Since the cone $\{p_1,p_2:\Box\times\Box\to\Box\}$ is effective-monic, condition~\ref{boxinj} is a special case of~\ref{allinj}.
In the other direction, we first show that for every $X,Y\in\mathsf{CHaus}$, the canonical map $F(X\times Y)\to F(X)\times F(Y)$ is injective. By Corollary~\ref{doublesquare}, the left vertical arrow in
\[
\xymatrix{ F(X\times Y) \ar[r] \ar[d] & F(X) \times F(Y) \ar[d] \\
\mathlarger{\mathlarger{\prod}}_{f:X\to\Box,g:Y\to\Box} F(\Box\times\Box) \ar[r] & \left(\mathlarger{\mathlarger{\prod}}_{f:X\to\Box} F(\Box)\right)\times\left(\mathlarger{\mathlarger{\prod}}_{g:Y\to\Box} F(\Box)\right) }
\]
is injective. Since the lower horizontal arrow is injective by assumption, it follows that the upper horizontal arrow is also injective. By induction, we then obtain that $F(\prod_{j=1}^n X_j)\to \prod_{j=1}^n F(X_j)$ is injective for any finite product.
Now let $\{f_i\}$ be an arbitrary effective-monic cone on $X$. By Lemma~\ref{Sinfty}, $F$ satisfies the sheaf condition on the cone consisting of all the finite tuplings $(f_{i_1},\ldots,f_{i_n})$. Hence we have the diagram
\[
\xymatrix{ F(X) \ar[r] \ar[d] & \mathlarger{\mathlarger{\prod}}_{i\in I} F(Y_i) \ar[d] \\
\mathlarger{\mathlarger{\prod}}_{n\in\mathbb{N}} \: \mathlarger{\mathlarger{\prod}}_{i_1,\ldots,i_n\in I} F\left(\mathlarger{\mathlarger{\prod}}_{m=1}^n Y_{i_m}\right) \ar[r] & \mathlarger{\mathlarger{\prod}}_{n\in\mathbb{N}} \: \mathlarger{\mathlarger{\prod}}_{i_1,\ldots,i_n\in I} \: \mathlarger{\mathlarger{\prod}}_{m=1}^n F(Y_{i_m}) }
\]
where the left vertical arrow is injective due to the sheaf condition, and the lower horizontal one due to the first part of the proof. Hence also the upper horizontal arrow is injective.
\end{proof}
So far, we do not know of any sheaf $\mathsf{CHaus}\to\mathsf{Set}$ that would \emph{not} have the property characterized by the lemma.
By Gelfand duality, the commutative C*-algebras are precisely the representable functors $\mathsf{CHaus}(W,-):\mathsf{CHaus}\to\mathsf{Set}$. These are characterized in terms of a condition similar to the previous lemma:
\begin{lemma}
\label{replem}
For $F\in\Sh(\mathsf{CHaus})$, the following are equivalent:
\begin{enumerate}
\item\label{boxbij} The canonical map
\[
\xymatrix{ (F(p_1),F(p_2)) \: :\: F(\Box\times\Box) \ar[r] & F(\Box)\times F(\Box) }
\]
is bijective.
\item\label{allbij} $F$ satisfies the sheaf condition on every effective-monic cone $\{f_i:X\to Y_i\}$ in $\mathsf{CHaus}$.
\item\label{Frep} $F$ is representable.
\end{enumerate}
\label{bijlem}
\end{lemma}
\begin{proof}
By the definition of effective-monic,~\ref{Frep} trivially implies~\ref{allbij}. Also if~\ref{allbij} holds, then it is easy to show~\ref{boxbij}: the empty cone is effective-monic on $1\in\mathsf{CHaus}$, which implies $F(1)\cong 1$. With this in mind,~\ref{boxbij} is the sheaf condition on the effective-monic cone $\{p_1,p_2:\Box\times\Box\to\Box\}$.
The burden of the proof is the implication from~\ref{boxbij} to~\ref{Frep}. By the representable functor theorem~\cite[p.~130]{ML} and the generation of limits by products and equalizers, it is enough to show that $F$ preserves products and equalizers, which we do in several steps. First, the functor $-\times Y$ preserves pushouts for any $Y\in\mathsf{CHaus}$: the functor $-\times Y:\CGHs\to\CGHs$ preserves colimits as a left adjoint~\cite[Theorem~VII.8.3]{ML}, and the inclusion functor $\mathsf{CHaus}\to\CGHs$ also preserves finite colimits, since it preserves finite coproducts and coequalizers (the latter by the automatic compactness of quotients of compact spaces).
Second, we prove that the canonical map $F(X\times\Box)\longrightarrow F(X)\times F(\Box)$ is a bijection for every $X\in\mathsf{CHaus}$. To this end, we consider the effective-monic cone $\{f\times\id_\Box : X\times\Box\to\Box\times\Box\}$ indexed by $f:X\to\Box$. We know that this cone is directed by Lemmas~\ref{guarcommsquare} and~\ref{productcovers}. This entails that $F(X\times\Box)$ is equal to the set of compatible families $\{\beta_f\}_{f:X\to\Box}$ of elements of $\prod_{f:X\to\Box} F(\Box\times\Box)$. Since $-\times\Box$ preserves pushouts as per the first observation, the compatibility condition is the one associated to the squares of the form
\[
\xymatrix{ X\times\Box \ar[r]^{f\times\id_\Box} \ar[d]_{g\times\id_\Box} & \Box\times\Box \ar[d] \\
\Box\times\Box \ar[r] & (\Box\pushout{f\times\id}{g\times\id}\Box)\times\Box }
\]
By using the fact that the maps $\Box\pushout{f\times\id}{g\times\id}\Box\longrightarrow\Box$ separate points, it is sufficient to postulate the compatibility on all commuting squares of the form
\[
\xymatrix{ X\times\Box \ar[r]^{f\times\id_\Box} \ar[d]_{g\times\id_\Box} & \Box\times\Box \ar[d]^{h\times\id_\Box} \\
\Box\times\Box \ar[r]_{k\times\id_\Box} & \Box\times\Box }
\]
So upon decomposing $\beta_f = (\beta_f^1,\beta_f^2)$ via $F(\Box\times\Box) = F(\Box)\times F(\Box)$, the compatibility condition is precisely that $F(h)(\beta^1_f) = F(k)(\beta^1_g)$ and that $\beta^2_f = \beta^2_g$ for all $h,k:\Box\to\Box$ with $hf=kg$. Since the family of first components therefore corresponds precisely to an element of $F(X)$, we conclude that the canonical map $F(X\times\Box)\longrightarrow F(X)\times F(\Box)$ is an isomorphism.
Third, we use this result to show that $F(X\times Y)\longrightarrow F(X)\times F(Y)$ is an isomorphism for all $X,Y\in\mathsf{CHaus}$; the proof is the same as above, just with $-\times\Box$ replaced by $-\times Y$. The case of finite products $F(\prod_{i=1}^n X_i)\cong \prod_{i=1}^n F(X_i)$ then follows by induction, and the case of infinite products by the sheaf condition.
The preservation of equalizers also takes a bit of work. Since every monomorphism $f:X\to Y$ in $\mathsf{CHaus}$ is regular, the singleton cone $\{f\}$ is effective-monic. Since this cone is trivially directed, $F$ satisfies the sheaf condition on it, which entails that $F(f):F(X)\to F(Y)$ must be injective.
Second, a diagram
\[
\xymatrix{ E \ar[r]|e & X \ar@<1ex>[r]^f \ar@<-1ex>[r]_g & Y }
\]
is an equalizer if and only if
\[
\xymatrix{ E \ar[r]^e \ar[d]_e & X \ar[d]^{(\id_X,f)} \\
X \ar[r]_(.4){(\id_X,g)} & X\times Y }
\]
is a pullback. By constructing the pushout $X\pushout{e}{e}X$ as a quotient of $X\amalg X$ and doing a case analysis on pairs of points in $X\pushout{e}{e}X$, the induced arrow $k$ in
\[
\xymatrix{ E \ar[r]^e \ar[d]_e & X \ar[d]^(.45)i \ar@/^3ex/[ddr]^{(\id_X,f)} \\
X \ar[r]_(.4)j \ar@/_3ex/[rrd]_{(\id_X,g)} & X\pushout{e}{e}X \ar@{-->}[dr]|k \\
& & X\times Y }
\]
is seen to be a monomorphism, and therefore so is $F(k)$. So if $\beta\in F(X)$ is such that $F(f)(\beta) = F(g)(\beta)$, then also $F(i)(\beta) = F(j)(\beta)$. But by the sheaf condition on the singleton cone $\{e\}$, this means that $\beta$ is in the image of $F(e)$, as was to be shown.
\end{proof}
For $F\in\Sh(\mathsf{CHaus})$, any $W\in\mathsf{CHaus}$ and any $\alpha\in F(W)$, let $F_\alpha : \mathsf{CHaus}\to\mathsf{Set}$ be the subfunctor of $F$ generated by $\alpha$. Concretely, over every $X\in\mathsf{CHaus}$, the set $F_\alpha(X)$ consists of all the images $F(f)(\alpha)$ for $f:W\to X$.
\begin{proposition}
If the canonical map
\begin{equation}
\label{timesinj}
\xymatrix{ F(\Box\times\Box) \ar[r] & F(\Box)\times F(\Box) }
\end{equation}
is injective, then every such subfunctor $F_\alpha$ is representable.
\label{singlygenerated}
\end{proposition}
\begin{proof}
It is straightforward to verify that $F_\alpha$ is also a sheaf. Lemma~\ref{replem} and the injectivity assumption on $F$ then complete the proof if we can show that every pair of elements $(\beta_1,\beta_2)\in F_\alpha(\Box)\times F_\alpha(\Box)$ actually comes from an element of $F_\alpha(\Box\times\Box)$. To this end, we write $\beta_1 = F(f_1)(\alpha)$ and $\beta_2 = F(f_2)(\alpha)$ for certain $f_1,f_2:W\to\Box$. Now considering $\alpha$ transported along the pairing $(f_1,f_2) : W\to \Box\times\Box$ results in an element of $F(\Box\times\Box)$ that reproduces $(\beta_1,\beta_2)$.
\end{proof}
Here is another smallness result:
\begin{proposition}
$\Sh(\mathsf{CHaus})$ is well-powered.
\label{wellpowered}
\end{proposition}
\begin{proof}
Let $\eta:F\to G$ be a monomorphism in $\Sh(\mathsf{CHaus})$. Then upon composing morphisms of the form $-(C(\Box))\longrightarrow F$ with $\eta$, the Yoneda lemma~\eqref{yonedaeq} shows that the component $\eta_\Box:F(\Box)\to G(\Box)$ is injective, since the diagram
\[
\xymatrix{ \Sh(\mathsf{CHaus})(-(C(\Box)),F) \ar[r]_(.7){\cong}^(.7){\eqref{yonedaeq}} \ar[d]_{\eta\circ -} & F(\Box) \ar[d]^{\eta_\Box} \\
\Sh(\mathsf{CHaus})(-(C(\Box)),G) \ar[r]_(.7){\cong}^(.7){\eqref{yonedaeq}} & G(\Box) }
\]
commutes.
Again using the sheaf condition on all functions $X\to\Box$ and the fact that $\Box$ is a coseparator in $\mathsf{CHaus}$, we can identify the $\alpha\in F(X)$ with the families $\{\beta_f\}$ with $\beta_f\in F(\Box)$ that are indexed by $f:X\to\Box$ and satisfy the compatibility condition that $F(h)(\beta_f) = \beta_{hf}$ for all $f$ and $h:\Box\to\Box$. Hence we have the diagram
\[
\xymatrix{ F(X) \ar[r] \ar[d]^{\eta_X} & \mathlarger{\mathlarger{\prod}}_{f:X\to\Box} F(\Box) \ar[d]^{\prod_f \eta_{\Box}} \ar@<1ex>[r] \ar@<-1ex>[r] & \mathlarger{\mathlarger{\prod}}_{f:X\to\Box,h:\Box\to\Box} F(\Box) \ar[d]^{\prod_{f,h} \eta_\Box} \\
G(X) \ar[r] & \mathlarger{\mathlarger{\prod}}_{f:X\to\Box} G(\Box) \ar@<1ex>[r] \ar@<-1ex>[r] & \mathlarger{\mathlarger{\prod}}_{f:X\to\Box,h:\Box\to\Box} G(\Box) }
\]
in which both rows are equalizers. So for fixed $G$, the set $F(X)$ is determined by the inclusion map $\eta_\Box:F(\Box)\to G(\Box)$. Hence the number of subobjects of $G$ is bounded by $2^{|G(\Box)|}$.
\end{proof}
\begin{corollary}
Every sheaf $F:\mathsf{CHaus}\to\mathsf{Set}$ for which~\eqref{timesinj} is injective is a (small) colimit in $\Sh(\mathsf{CHaus})$ of representable functors.
\end{corollary}
\begin{proof}
We show that $F$ is the colimit in $\Sh(\mathsf{CHaus})$ of the subfunctors of the form $F_\alpha$ from Proposition~\ref{singlygenerated}, as ordered by inclusion; thanks to Proposition~\ref{wellpowered}, this colimit is equivalent to a small colimit.
To show the required universal property, suppose first that $\eta,\eta'\in\Sh(\mathsf{CHaus})(F,G)$ coincide upon restriction to all $F_\alpha$. Then in particular, $\eta_\Box(\alpha) = \eta'_\Box(\alpha)$ for all $\alpha\in F(\Box)$, and hence $\eta=\eta'$ by the previous results. Conversely, let $\{\phi^\alpha\}_\alpha$ be a family of natural transformations $\phi^\alpha: F_\alpha\to G$ that are compatible in the sense that if $F_\beta\subseteq F_\alpha$, then $\phi^\alpha |_{F_\beta} = \phi^\beta$. Then define the component $\eta_X : F(X)\to G(X)$ on every $\alpha\in F(X)$ as
\[
\eta_X(\alpha):= \phi^\alpha_X(\alpha).
\]
The commutativity of the naturality square
\[
\xymatrix{ F(X) \ar[r]^{\eta_X} \ar[d]_{F(f)} & G(X) \ar[d]^{G(f)} \\
F(Y) \ar[r]_{\eta_Y} & G(Y) }
\]
on some $\alpha\in F(X)$ follows from
\[
G(f)(\phi^\alpha_X(\alpha)) = \phi^\alpha_Y(F(f)(\alpha)) = \phi^{F(f)(\alpha)}_{Y}(F(f)(\alpha)) ,
\]
where the first equation is naturality of $\phi^\alpha$ and the second one is the assumed compatibility.
To see that $\eta$ restricts to $\phi^\alpha$ on every $F_\alpha$, we show that the components coincide, i.e.~that $\eta_Y(\beta) = \phi^\alpha_Y(\beta)$ for all $Y\in\mathsf{CHaus}$ and $\beta\in F_\alpha(Y)$. Now we must have $\beta = F(f)(\alpha)$ for suitable $f:X\to Y$, and consider the diagram
\[
\xymatrix{ F_\alpha(X) \ar[r] \ar[d] & F(X) \ar[r]^{\eta_X} \ar[d]|{F(f)} & G(X) \ar[d]^{G(f)} \\
F_\alpha(Y) \ar[r] & F(Y) \ar[r]_{\eta_Y} & G(Y) }
\]
Starting with $\alpha$ in the upper left, we have $\phi^\alpha_X(\alpha)$ in the upper right, and hence
\[
G(f)(\phi^\alpha_X(\alpha)) = \phi^\alpha_Y(F(f)(\alpha)) = \phi^\alpha_Y(\beta)
\]
in the lower right, where the first equation is as above. Since we also have $\beta$ in the lower left, we obtain the desired $\eta_Y(\beta) = \phi^\alpha_Y(\beta)$.
\end{proof}
In light of the upcoming Theorem~\ref{pCalgthm}, this result is closely related to~\cite[Theorem~5]{piecewise}. The only potential difference is that our colimit is taken in $\Sh(\mathsf{CHaus})$, while van den Berg and Heunen consider it in $\pCalg$, and it is not clear whether these are equivalent.
\bigskip
Since it is currently unclear whether Definition~\ref{sheafdef} is the most adequate collection of sheaf conditions that one can postulate, we do not investigate the categorical properties of $\Sh(\mathsf{CHaus})$ any further in this paper.
\newpage
\section{Piecewise C*-algebras as sheaves $\mathsf{CHaus}\to\mathsf{Set}$}
\label{piecewisesec}
In this section, we will establish that $\Sh(\mathsf{CHaus})$ contains the category of \emph{piecewise C*-algebras} introduced by van den Berg and Heunen~\cite{piecewise} as a full subcategory. The following definition was inspired by Kochen and Specker's consideration of partial algebras~\cite{KS}.\footnote{For this reason van den Berg and Heunen introduced their definition as \emph{partial C*-algebras}, but the term was subsequently changed to \emph{piecewise C*-algebra}~\cite{HR}.}
\newcommand{\mathop{\Perp}}{\mathop{\Perp}}
\begin{definition}[\cite{piecewise}]
\label{piecewisedef}
A \emph{piecewise C*-algebra} is a set $A$ equipped with the following pieces of structure:
\begin{enumerate}
\item a reflexive and symmetric relation $\mathop{\Perp}\subseteq A\times A$. If $\alpha\mathop{\Perp}\beta$, we say that $\alpha$ and $\beta$ \emph{commute};
\item binary operations $+,\cdot:\mathop{\Perp}\rightarrow A$;
\item a scalar multiplication $\cdot:\mathbb{C}\times A\rightarrow A$;
\item distinguished elements $0, 1\in A$;
\item an involution $*:A\rightarrow A$;
\item a norm $||-||:A\rightarrow \mathbb{R}$;
\end{enumerate}
such that every subset $C\subseteq A$ of pairwise commuting elements is contained in some subset $\bar{C}\subseteq A$ of pairwise commuting elements which is a commutative C*-algebra with respect to the data above.
\end{definition}
The piecewise C*-algebras in which the relation $\mathop{\Perp}$ is total are precisely the commutative C*-algebras $C(X)$. Our choice of the symbol ``$\mathop{\Perp}$'' is explained by the special case of rank one projections, which commute if and only if they are either orthogonal ($\perp$) or parallel ($\parallel$).
\begin{definition}[\cite{piecewise}]
\label{phomdef}
Given piecewise $C^*$-algebras $A$ and $B$, a \emph{piecewise $*$-homomorphism} is a function $\zeta:A\rightarrow B$ such that
\begin{enumerate}
\item If $\alpha\mathop{\Perp}\beta$ in $A$, then
\begin{equation}
\label{phom}
\zeta(\alpha) \mathop{\Perp} \zeta(\beta),\qquad \zeta(\alpha\beta)=\zeta(\alpha)\zeta(\beta),\qquad \zeta(\alpha+\beta)=\zeta(\alpha)+\zeta(\beta).
\end{equation}
\item $\zeta(z\alpha) = z \zeta(\alpha)$ for all $\alpha\in A$ and $z\in \mathbb{C}$,
\item $\zeta(\alpha^*) = \zeta(\alpha)^*$ for all $\alpha\in A$.
\item $\zeta(1)=1$.
\end{enumerate}
\end{definition}
\begin{example}
It is well-known that there is no $*$-homomorphism $M_n\to\mathbb{C}$ for $n\geq 2$. The Kochen-Specker theorem~\cite{KS} states that for $n\geq 3$, there does not even exist a piecewise $*$-homomorphism $M_n\to\mathbb{C}$.
\end{example}
So piecewise $C^*$-algebras and piecewise $*$-homomorphisms form a category $\pCalg$. Still following~\cite{piecewise}, there is a forgetful functor $\mathbb{C}(-) : \Calg\to\pCalg$ sending every C*-algebra $A$ to its normal part,
\begin{equation}
\label{normalpart}
\mathbb{C}(A)=\{\: \alpha\in A \:|\: \alpha\alpha^*=\alpha^*\alpha \:\}.
\end{equation}
This set forms a piecewise C*-algebra by postulating that $\alpha \mathop{\Perp} \beta$ holds whenever $\alpha$ and $\beta$ commute. $\mathbb{C}(-)$ is easily seen to be a faithful functor that reflects isomorphisms. In the language of \emph{property, structure and stuff}~\cite{propertystructurestuff}, this means that it forgets at most structure. So we may think of a C*-algebra as a piecewise C*-algebra together with additional structure, namely the specifications of sums and products of noncommuting elements.
\begin{example}
For $A,B\in\Calg$, any Jordan homomorphism $\mathbb{R}(A)\to \mathbb{R}(B)$ extends linearly to a piecewise $*$-homomorphism $\mathbb{C}(A)\to\mathbb{C}(B)$. For example, the transposition map $-^T:M_n\to M_n$ yields a piecewise $*$-homomorphism $\mathbb{C}(M_n)\to\mathbb{C}(M_n)$.
\end{example}
The discussion of Section~\ref{Calgsasfunctors} extends canonically to piecewise C*-algebras. To wit, Gelfand duality still implements an equivalence of $\mathsf{CHaus}^\op$ with a full subcategory of $\pCalg$, so that for every $A\in\pCalg$ we can restrict the hom-functor
\[
\pCalg(-,A) \: : \: \pCalg^\op\to\mathsf{Set}
\]
to a functor $\mathsf{CHaus}\to\mathsf{Set}$, which maps $X\in\mathsf{CHaus}$ to the set of piecewise $*$-homomorphisms $C(X)\to A$. For $A\in\Calg$, this results precisely in the functor $\mathsf{CHaus}\to\mathsf{Set}$ that we already know from Section~\ref{Calgsasfunctors}, since then $\pCalg(C(X),A)=\Calg(C(X),A)$. In other words, we have a diagram of functors
\[
\xymatrix{ \Calg \ar[dr]_{\mathbb{C}(-)} \ar[rr] & & \mathsf{Set}^{\mathsf{CHaus}} \\
& \pCalg\ar[ur] & }
\]
In fact, the proof of Proposition~\ref{guarcommcrit} still goes through for piecewise C*-algebras. Hence the functor $\pCalg\to\mathsf{Set}^\mathsf{CHaus}$ actually lands in the full subcategory $\Sh(\mathsf{CHaus})$ as well, and the commutative triangle of functors can be taken to be
\[
\xymatrix{ \Calg \ar[dr]_{\mathbb{C}(-)} \ar[rr] & & \Sh(\mathsf{CHaus}) \\
& \pCalg\ar[ur] & }
\]
We now investigate the functor on the right a bit further, finding that it is close to being an equivalence. In the following, we use the unit disk $\bigcirc\subseteq\mathbb{C}$. Since it is homeomorphic to the unit square $\Box$ that we have been working with until now, all previous statements apply likewise with $\Box$ replaced by $\bigcirc$.
\begin{theorem}
The functor $\pCalg\longrightarrow\Sh(\mathsf{CHaus})$ is fully faithful, with essential image given by all those $F\in\Sh(\mathsf{CHaus})$ for which the canonical map
\begin{equation}
\label{circtimes}
\xymatrix{ F(\bigcirc\times\bigcirc) \ar[r] & F(\bigcirc)\times F(\bigcirc) }
\end{equation}
is injective.
\label{pCalgthm}
\end{theorem}
So this functor forgets at most property, namely the property of injectivity of~\eqref{circtimes} as investigated in Lemma~\ref{injlem}. This property is equivalent to $F$ being separated (in the presheaf sense) on the effective-monic cones. It seems natural to suspect that not every sheaf on $\mathsf{CHaus}$ is separated in this sense, but this remains open. So it is also conceivable that $\pCalg\to\Sh(\mathsf{CHaus})$ actually is an equivalence of categories.
In particular, this shows that $\cCalg$ is dense in $\pCalg$, i.e.~that the canonical functor $\pCalg\to\mathsf{Set}^{\cCalg^{\op}}$ is fully faithful. For a potentially related result of a similar flavour, see~\cite[Corollary~8]{SU}.
\begin{proof}
A piecewise $*$-homomorphism $\zeta:A\to B$ is determined by its action on the unit ball, which is the set of elements with spectrum in $\bigcirc$. In particular, $\zeta$ is uniquely determined by the associated transformation $-(\zeta) : -(A)\to -(B)$, so that the functor under consideration is faithful.
Concerning fullness, let $\eta : -(A)\to -(B)$ be a natural transformation. Its component at $\bigcirc$ is a map $\eta_\bigcirc : \bigcirc(A)\to \bigcirc(B)$. The pairs of commuting elements $\alpha,\beta\in \bigcirc(A)$ are precisely those that are in the image of the canonical map
\[
(\bigcirc\times\bigcirc)(A)\longrightarrow \bigcirc(A)\times \bigcirc(A),
\]
and hence the requirements~\eqref{phom} follow from naturality and the consideration of functions like~\eqref{addition} and~\eqref{multiplication}. The other axioms are likewise simple consequences of naturality. This exhibits a piecewise $*$-homomorphism $\zeta:A\to B$ such that $\eta_\bigcirc$ coincides with $\bigcirc(\zeta)$. Then by Lemma~\ref{evalbox}, we have $\eta = -(\zeta)$.
Finally, we show that every $F\in\Sh(\mathsf{CHaus})$ for which~\eqref{circtimes} is injective is isomorphic to $-(A)$ for some $A\in\pCalg$. Concretely, we construct a piecewise C*-algebra $A$ by first defining its unit ball to be
\[
\bigcirc(A) := F(\bigcirc).
\]
This set comes equipped with a commutation relation: $\alpha\mathop{\Perp}\beta$ is declared to hold for $\alpha,\beta\in A$ precisely when $(\alpha,\beta)$ is in the image of~\eqref{circtimes}. In this case, we can define the sum $\alpha+\beta$ and the product $\alpha\beta$ using the functoriality on maps such as~\eqref{addition} and~\eqref{multiplication}. Likewise there is a scalar multiplication by numbers $z\in \bigcirc$ and an involution arising from functoriality on the complex conjugation map $\bigcirc\to\bigcirc$.
Now defining $A$ to consist of pairs $(\alpha,z)\in F(\bigcirc)\times \mathbb{R}_{>0}$, modulo the equivalence $(\alpha,z)\sim(s\alpha,sz)$ for all $s\in(0,1)$, results in a piecewise C*-algebra: the relevant structure of Definition~\ref{piecewisedef} extends canonically from $\bigcirc(A)$ to all of $A$, and we also claim that any set $\{\gamma_i\}_{i\in I}\subseteq \bigcirc(A)$ of pairwise commuting elements is contained in a commutative C*-subalgebra. We write this family as a single element of the $I$-fold product,
\[
\gamma \in F(\bigcirc)^I.
\]
The cone $\{(p_i,p_j) : \bigcirc^I\to\bigcirc\times\bigcirc\}_{i,j\in I}$ consisting of all pairings of projections $p_i : \bigcirc^I\to \bigcirc$ is effective-monic and directed. By the commutativity assumption on $\gamma$, the pair $(\gamma_i,\gamma_j)\in F(\bigcirc)\times F(\bigcirc)$ comes from an element of $F(\bigcirc\times\bigcirc)$. Hence by the sheaf condition, $\gamma$ is actually the image of an element $\gamma'\in F\left(\bigcirc^I\right)$ under the canonical map. The subfunctor $F_{\gamma'}\subseteq F$, as in Proposition~\ref{singlygenerated}, is representable. It corresponds to the commutative C*-subalgebra generated by the $\gamma_i$.
\end{proof}
The following criterion---due to Heunen and Reyes---describes the image of the functor $\mathbb{C}(-):\Calg\to\pCalg$ at the level of morphisms.
\begin{lemma}[{\cite[Proposition~4.13]{HR}}]
\label{fextend}
For $A,B\in\Calg$, a piecewise $*$-homomorphism $\zeta:\mathbb{C}(A)\to \mathbb{C}(B)$ extends to a $*$-homomorphism $A\to B$ if and only if it is additive on self-adjoints and multiplicative on unitaries.
\end{lemma}
By faithfulness of $\Calg\to\pCalg$, we already know such an extension to be unique if it exists.
\begin{proof}
The `only if' part is clear, so we focus on the `if' direction. Every element of $A$ is of the form $a+ib$ for $a,b\in \mathbb{R}(A)$, and linearity forces us to define the candidate extension of $\zeta$ by
\[
\hat{\zeta}(a+ib) := \zeta(a) + i\zeta(b).
\]
In this way, $\hat{\zeta}$ becomes linear due to the first assumption, and is evidently involutive and unital. On a unitary $\nu$, we have $\hat{\zeta}(\nu)=\zeta(\nu)$, since
\begin{align*}
\hat{\zeta}(\nu) & = \hat{\zeta}\left( \frac{\nu+\nu^*}{2} + i\frac{\nu-\nu^*}{2}\right) = \frac{1}{2}\zeta(\nu+\nu^*) + \frac{1}{2}\zeta(\nu-\nu^*) \\
& = \frac{1}{2}\zeta(\nu) + \frac{1}{2}\zeta(\nu^*) + \frac{1}{2}\zeta(\nu) - \frac{1}{2}\zeta(\nu^*) = \zeta(\nu),
\end{align*}
where the third step uses $\nu\mathop{\Perp} \nu^*$.
We finish the proof by arguing that $\hat{\zeta}$ is multiplicative on two arbitrary elements $\alpha,\beta\in [-1,+1](A)$, which is enough to prove multiplicativity generally, and hence to show that $\hat{\zeta}$ is indeed a $*$-homomorphism. By functional calculus, we can find unitaries $\nu,\tau\in \Tl(A)$ such that $\alpha=\nu+\nu^*$ and $\beta=\tau+\tau^*$. Then
\begin{align*}
\hat{\zeta}(\alpha\beta) = \hat{\zeta}\left( (\nu+\nu^*)(\tau+\tau^*) \right) & = \hat{\zeta}\left( \nu\tau + \nu\tau^* + \nu^* \tau + \nu^* \tau^* \right) \\
& = \hat{\zeta}(\nu\tau) + \hat{\zeta}(\nu\tau^*) + \hat{\zeta}(\nu^*\tau) + \hat{\zeta}(\nu^* \tau^*) \\
& = \zeta(\nu\tau) + \zeta(\nu\tau^*) + \zeta(\nu^*\tau) + \zeta(\nu^*\tau^*) \\
& = \zeta(\nu)\zeta(\tau) + \zeta(\nu)\zeta(\tau^*) + \zeta(\nu^*)\zeta(\tau) + \zeta(\nu^*)\zeta(\tau^*) \\
& = (\zeta(\nu) + \zeta(\nu^*))(\zeta(\tau) + \zeta(\tau^*)) \\
& = (\hat{\zeta}(\nu) + \hat{\zeta}(\nu^*))(\hat{\zeta}(\tau) + \hat{\zeta}(\tau^*)) \\
& = \hat{\zeta}(\nu + \nu^*) \hat{\zeta}(\tau + \tau^*) = \hat{\zeta}(\alpha)\hat{\zeta}(\beta),
\end{align*}
where we have used that $\hat{\zeta}$ coincides with $\zeta$ on unitaries (third and sixth line) and the assumption of multiplicativity on unitaries (fourth line).
\end{proof}
In fact, this result can be improved upon:
\begin{proposition}
A piecewise $*$-homomorphism $\zeta:\mathbb{C}(A)\to\mathbb{C}(B)$ extends to a $*$-homomorphism $A\to B$ if and only if it is multiplicative on unitaries.
\label{fextend2}
\end{proposition}
\begin{proof}
By the lemma, it is enough to prove that such a $\zeta$ is additive on self-adjoints.
We use the following fact, which follows from the exponential series: for every $\alpha,\beta\in\mathbb{R}(A)$ and real parameter $t\in\mathbb{R}$, the unitary
\[
e^{it(\alpha + \beta)} e^{-it\alpha} e^{-it\beta}
\]
differs from $1$ by at most $O(t^2)$ as $t\to 0$. Since $\zeta$ preserves the spectrum of unitaries, we conclude that also
\[
\zeta\left(e^{it(\alpha + \beta)} e^{-it\alpha} e^{-it\beta}\right) = e^{it\zeta(\alpha + \beta)} e^{-it\zeta(\alpha)} e^{-it\zeta(\beta)}
\]
is a unitary that differs from $1$ by at most $O(t^2)$. Expanding the exponential series once more, this unitary equals $1 + it\left(\zeta(\alpha+\beta) - \zeta(\alpha) - \zeta(\beta)\right) + O(t^2)$, so the first-order coefficient must vanish; this gives $\zeta(\alpha + \beta) = \zeta(\alpha) + \zeta(\beta)$, as was to be shown.
\end{proof}
As the proof shows, we actually only need multiplicativity on products of exponentials, i.e.~on the connected component of the identity $1\in\Tl(A)$. Also, the method of proof suggests a relation to the Baker-Campbell-Hausdorff formula, which may be worth exploring further.
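For concreteness, the relevant second-order term can be made explicit. A routine expansion of the three exponential series, which we record here for the reader's convenience, gives
\[
e^{it(\alpha + \beta)} e^{-it\alpha} e^{-it\beta} = 1 - \frac{t^2}{2}\,[\alpha,\beta] + O(t^3),
\]
so the deviation from $1$ at order $t^2$ is governed precisely by the commutator $[\alpha,\beta] = \alpha\beta - \beta\alpha$, in line with the suspected relation to the Baker-Campbell-Hausdorff formula.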
Finally, it is worth noting that a piecewise $*$-homomorphism $\zeta:\mathbb{C}(A)\to\mathbb{C}(B)$ is additive on self-adjoints if and only if it is a Jordan homomorphism: the condition $\zeta(\alpha^2) = \zeta(\alpha)^2$ for $\alpha\in\mathbb{R}(A)$ is automatic since $\zeta$ preserves functional calculus.
Let us end this section by stating its main open problem:
\begin{problem}
\label{pCalgprob}
Is the functor $\pCalg\to\Sh(\mathsf{CHaus})$ an equivalence of categories, i.e.~does \emph{every} sheaf on $\mathsf{CHaus}$ satisfy the injectivity condition of Lemma~\ref{injlem}?
\end{problem}
\newpage
\section{Almost C*-algebras as piecewise C*-algebras with self-action}
\label{secsaC}
What we have learnt so far is that considering a C*-algebra $A$ as a sheaf $-(A):\mathsf{CHaus}\to\mathsf{Set}$, or equivalently as a piecewise C*-algebra, recovers the entire `commutative part' of the C*-algebra structure of $A$. Nevertheless, the functor $\Calg\to\pCalg$ is not full, which indicates that part of the relevant structure is lost: for example, a C*-algebra $A$ is in general not isomorphic to $A^\op$~\cite{phillips}, although the two are canonically isomorphic as piecewise C*-algebras. This raises the question: which natural piece of additional structure on a sheaf $\mathsf{CHaus}\to\mathsf{Set}$ or piecewise C*-algebra would let us recover the missing information?
Of course, what kind of additional structure counts as `natural' is a subjective matter. But again, we can take inspiration from quantum physics: which additional structure would have a clear physical interpretation? Our following proposal is based on a central feature of quantum mechanics: observables generate dynamics, in the sense that to every observable (self-adjoint operator) $\alpha\in\mathbb{R}(A)$, one associates the one-parameter group of inner automorphisms given by
\begin{equation}
\label{autos}
\mathbb{R}\times A \longrightarrow A,\qquad (t,\beta) \longmapsto e^{i\alpha t} \beta e^{-i\alpha t}.
\end{equation}
For example, if $\alpha$ is energy, then the resulting one-parameter family of automorphisms is given precisely by time translations, i.e.~by the inherent dynamics of the system under consideration. If $\alpha$ is a component of angular momentum, then the resulting family of automorphisms are the rotations around that axis. As is obvious from~\eqref{autos}, this natural way in which $A$ acts on itself by inner automorphisms is a purely noncommutative feature, in that it becomes trivial in the commutative case.
More formally, the construction of~\eqref{autos} really consists of two parts: first, for every $t\in\mathbb{R}$, one forms the unitary $\nu:= e^{-i\alpha t}$; since this is functional calculus, it is captured by the functoriality $\mathsf{CHaus}\to\mathsf{Set}$. Second, one lets $\nu$ act on $A$ via conjugation, as $\beta\mapsto \nu^* \beta\nu$. This part is not captured by what we have discussed so far, and hence we axiomatize it as an additional piece of structure. Our definition is similar in spirit to the `active lattices' of Heunen and Reyes~\cite{HR} and also seems related to~\cite[Section~VI]{BMU}.
\begin{definition}
\label{almostdef}
An \emph{almost C*-algebra} is a pair $(A,\mathfrak{a})$ consisting of a piecewise C*-algebra $A\in\pCalg$ and a \emph{self-action} of $A$, which is a map
\[
\mathfrak{a} : \Tl(A) \longrightarrow \pCalg(A,A)
\]
assigning to every unitary $\nu\in \Tl(A)$ a piecewise automorphism $\mathfrak{a}(\nu) : A \to A$ such that
\begin{itemize}
\item $\nu$ commutes with $\tau\in\Tl(A)$ if and only if $\mathfrak{a}(\nu)(\tau) = \tau$;
\item in this case, $\mathfrak{a}(\nu\tau) = \mathfrak{a}(\nu)\mathfrak{a}(\tau)$.
\end{itemize}
\end{definition}
So $\mathfrak{a}$ must satisfy two conditions on commuting unitaries. The first condition implies that a commutative C*-algebra, considered as a piecewise C*-algebra, can act on itself only trivially; and conversely, if the self-action is trivial in the sense that every $\mathfrak{a}(\nu)$ is the identity, then $A$ must be commutative. The second condition implies that if $\nu$ and $\tau$ commute, then also their actions commute:
\[
\mathfrak{a}(\nu)\mathfrak{a}(\tau) = \mathfrak{a}(\nu\tau) = \mathfrak{a}(\tau\nu) = \mathfrak{a}(\tau)\mathfrak{a}(\nu).
\]
Introducing a self-action $\mathfrak{a} : \Tl(A)\longrightarrow\pCalg(A,A)$ can be physically motivated by the above discussion; in addition, we expect the appearance of $\Tl$ to be related to Pontryagin duality. The physical interpretation of the first axiom could be related to Noether's theorem.
Almost C*-algebras form a category denoted $\aCalg$ as follows:
\begin{definition}
\label{ahomdef}
An \emph{almost $*$-homomorphism} $\zeta:(A,\mathfrak{a})\to (B,\mathfrak{b})$ is a piecewise $*$-homomorphism $\zeta:A\to B$ which preserves the self-actions in the sense that
\begin{equation}
\label{ahom}
\mathfrak{b}(\zeta(\nu)) (\zeta(\alpha)) = \zeta(\mathfrak{a}(\nu)(\alpha)).
\end{equation}
\end{definition}
The forgetful functor $\Calg\to\pCalg$ factors through $\aCalg$ by associating to every C*-algebra $A$ and unitary $\nu\in \Tl(A)$ its conjugation action,
\[
\mathfrak{a}(\nu)(\alpha) := \nu^* \alpha \nu.
\]
Every $*$-homomorphism $\zeta:A\to B$ is compatible with the resulting self-actions: the condition~\eqref{ahom} becomes simply
\begin{equation}
\label{presconj}
\zeta(\nu)^* \zeta(\alpha) \zeta(\nu) = \zeta(\nu^* \alpha \nu).
\end{equation}
Our main question is whether the additional structure of a self-action that is present in an almost C*-algebra is sufficient to recover the entire C*-algebra structure:
\begin{problem}
\label{aCalgprob}
Is the forgetful functor $\Calg\to\aCalg$ an equivalence of categories?
\end{problem}
In order for this to be the case, one would have to show that the functor is both fully faithful and essentially surjective. While the latter question is wide open, it is clear that the functor is faithful, since already the forgetful functor $\Calg\to\pCalg$ is. We can also prove fullness in a W*-algebra setting:
\begin{theorem}
\label{Wfull}
$\Calg\to\aCalg$ is fully faithful on morphisms out of any W*-algebra.
\end{theorem}
This result is similar to~\cite[Theorem~4.11]{HR}, but does not directly follow from it\footnote{This is because the notion of `active lattice' of~\cite{HR} includes a group that acts on the lattice, and a morphism of active lattices in particular is \emph{assumed} to be a homomorphism of the corresponding groups. If we assumed something analogous in our definition of almost C*-algebra, the fullness of the forgetful functor would simply follow from Proposition~\ref{fextend2}.}.
\begin{proof}
We need to show surjectivity, i.e.~if $\zeta:\mathbb{C}(A)\to \mathbb{C}(B)$ for a W*-algebra $A$ is a piecewise $*$-homomorphism which satisfies~\eqref{presconj}, then $\zeta$ extends to a $*$-homomorphism $A\to B$. Let us first consider the case that $A$ contains no direct summand of type $I_2$. Then for every state $\phi:B\to\mathbb{C}$, the map
\begin{equation}
\label{quasifun}
\alpha + i\beta \longmapsto \phi(\zeta(\alpha) + i\zeta(\beta))
\end{equation}
for $\alpha,\beta\in \mathbb{R}(A)$ is a quasi-linear functional on $A$ in the sense of~\cite[Definition~5.2.5]{hamhalter}, and therefore is uniquely determined by its values on the projections $\mathbf{2}(A)$~\cite[Proposition~5.2.6]{hamhalter}. On the other hand, by the generalized Gleason theorem~\cite[Theorem~5.2.4]{hamhalter}, this map $\mathbf{2}(A)\to\mathbb{R}$ uniquely extends to a state on $A$. In conclusion, composition with $\zeta$ takes states on $B$ to states on $A$, and hence $\mathbb{R}(\zeta):\mathbb{R}(A)\to\mathbb{R}(B)$ is linear.
On $\mathbb{R}(A)$, we furthermore have $\zeta(\alpha^2) = \zeta(\alpha)^2$, which makes $\zeta$ into a Jordan homomorphism. By a deep result of St{\o}rmer~\cite[Theorem~3.3]{stormer}, this means that there exists a projection $\pi\in \mathbf{2}(B)$, commuting with the range of $\zeta$, such that $\alpha\mapsto \pi\zeta(\alpha)$ uniquely extends to a (generally nonunital) $*$-homomorphism, and similarly $\alpha\mapsto (1-\pi)\zeta(\alpha)$ uniquely extends to a (generally nonunital) $*$-anti-homomorphism. In other words, $\zeta$ decomposes into the sum of the restriction (to normal elements) of a $*$-homomorphism and a $*$-anti-homomorphism. So far, we have only made use of the assumption that $\zeta$ is a piecewise $*$-homomorphism.
Hence in order to complete the proof in the case of $A$ without type $I_2$ summand, working with the corner $(1-\pi)B(1-\pi)$ in place of $B$ itself shows that it is enough to consider the case $\pi=0$, i.e.~that $\zeta$ is the restriction of a $*$-anti-homomorphism. In particular,
\[
\zeta(\nu)^* \zeta(\alpha) \zeta(\nu) \stackrel{\eqref{presconj}}{=} \zeta(\nu^*\alpha\nu) = \zeta(\nu) \zeta(\alpha) \zeta(\nu)^*,
\]
and therefore $\zeta(\alpha) \zeta(\nu^2) = \zeta(\nu^2) \zeta(\alpha)$ for all $\nu\in\Tl(A)$ and $\alpha\in\mathbb{C}(A)$. Since every exponential unitary $e^{i\beta}$ is the square of another unitary, we know that $\zeta(\alpha)$ commutes with every exponential unitary. Since every element of $A$ is a linear combination of exponential unitaries, we conclude that $\zeta(\alpha)$ commutes with $\zeta(\beta)$ for every $\beta\in\mathbb{C}(A)$. Hence the range of $\zeta$ is commutative. In particular, $\zeta$ is also the restriction of a $*$-homomorphism, which completes the proof in the present case.
Now consider the case of an almost $*$-homomorphism $\zeta:\mathbb{C}(M_2)\to \mathbb{C}(B)$. Due to the isomorphism $M_2\cong \mathrm{Cl}(\mathbb{R}^2)\otimes\mathbb{C}$ with a complexified Clifford algebra, $M_2$ is freely generated as a C*-algebra by two self-adjoints $\sigma_x$ and $\sigma_y$ subject to the relations
\[
\sigma_x^2 = \sigma_y^2 = 1,\qquad \sigma_x \sigma_y + \sigma_y \sigma_x = 0.
\]
Since $\zeta$ commutes with functional calculus, the first two equations are clearly preserved by $\zeta$ in the sense that $\zeta(\sigma_x)^2 = \zeta(\sigma_y)^2 = 1$. Concerning the third equation, we know
\[
-\zeta(\sigma_x) = \zeta(-\sigma_x) = \zeta(\sigma_y \sigma_x \sigma_y) \stackrel{\eqref{presconj}}{=} \zeta(\sigma_y) \zeta(\sigma_x) \zeta(\sigma_y) .
\]
Hence $\zeta(\sigma_x) \zeta(\sigma_y) + \zeta(\sigma_y) \zeta(\sigma_x) = 0$ due to $\zeta(\sigma_y)^2 = 1$. Therefore the values $\zeta(\sigma_x)$ and $\zeta(\sigma_y)$ extend uniquely to a $*$-homomorphism $\hat{\zeta} : M_2\to B$; the problem is to show that this coincides with the original $\zeta$ on normal elements. Since any symmetry $\nu\in \{-1,+1\}(M_2)$ is conjugate to $\sigma_x$, we certainly have $\hat{\zeta}(\nu) = \zeta(\nu)$ by~\eqref{presconj} and the assumption $\hat{\zeta}(\sigma_x) = \zeta(\sigma_x)$. But because in the special case of $M_2$, every normal element can be obtained from a symmetry by functional calculus, and both $\zeta$ and $\hat{\zeta}$ preserve functional calculus, this is sufficient to show that $\hat{\zeta} = \zeta$ on normal elements. This finishes off the case $A=M_2$.
A general W*-algebra of type $I_2$ is of the form $A\cong L^\infty(\Omega,\mu,M_2)$ for a suitable measure space $(\Omega,\mu)$. Let $\zeta:\mathbb{C}(A)\to\mathbb{C}(B)$ be an almost $*$-homomorphism. We first show that $\zeta$ uniquely extends to a bounded $*$-homomorphism on the *-subalgebra of simple functions. For a measurable set $\Gamma\subseteq\Omega$, let $\chi_\Gamma : \Omega\to\{0,1\}$ be the associated indicator function. For nonempty $\Gamma$, the algebra elements of the form $\alpha\chi_\Gamma$ for $\alpha\in M_2$ form a C*-subalgebra isomorphic to $M_2$ itself (with different unit). By the previous case, we know that $\zeta$ uniquely extends to a $*$-homomorphism on this subalgebra. Furthermore, $\zeta$ behaves as expected on a simple function $\sum_{i=1}^n \alpha_i \chi_{\Gamma_i}$: assuming that the $\Gamma_i$'s form a partition of $\Omega$, we have $\alpha_i \chi_{\Gamma_i} \cdot \alpha_j \chi_{\Gamma_j} = 0$ for $i\neq j$, and hence $\zeta$ is additive on the sum, which implies
\begin{equation}
\label{simpleadd}
\zeta\left( \sum_{i=1}^n \alpha_i \chi_{\Gamma_i} \right) = \sum_{i=1}^n \zeta(\alpha_i) \zeta(\chi_{\Gamma_i}).
\end{equation}
We next show that $\zeta$ is additive on sums of two self-adjoint simple functions. By choosing a common refinement, it is enough to consider the case that the two partitions are the same. But then additivity follows from~\eqref{simpleadd} and additivity on $M_2$. Multiplicativity on unitary simple functions is analogous. Since the proof of Lemma~\ref{fextend} still goes through in the present situation (where the *-algebra of simple functions is generally not a C*-algebra), we conclude that $\zeta$ extends uniquely to a $*$-homomorphism on the simple functions. By construction, this $*$-homomorphism is bounded. Therefore it uniquely extends to a $*$-homomorphism $\hat{\zeta}:A\to B$ which coincides with $\zeta$ on the normal simple functions. It remains to be shown that $\hat{\zeta}(\alpha) = \zeta(\alpha)$ for all $\alpha\in \mathbb{C}(A)$.
To obtain this for a given $\alpha\in\mathbb{C}(A)$, we distinguish those points $x\in\Omega$ for which $\alpha(x)$ is degenerate from those for which it is not. Since degeneracy is detected by the vanishing of the discriminant $\tr^2 - 4\det$, the relevant set is
\[
\Delta := \{\: x\in\Omega \:|\: \tr(\alpha(x))^2 - 4 \det(\alpha(x)) = 0 \:\}.
\]
This set is measurable since both trace and determinant are measurable functions $M_2\to\mathbb{C}$. For every $x\in\Omega\setminus\Delta$, there is a unique unitary $\nu(x)\in\Tl(M_2)$ such that $\nu(x)^* \alpha(x) \nu(x)$ is diagonal. Since the eigenbasis of a nondegenerate self-adjoint matrix depends continuously on the matrix, it follows that the function $x\mapsto \nu(x)$ is also measurable. By arbitrarily choosing $\nu(x):= 1$ on $x\in\Delta$, we have constructed a unitary $\nu\in\Tl(L^\infty(\Omega,\mu,M_2))$ such that $\nu^* \alpha \nu$ is pointwise diagonal. Thanks to~\eqref{presconj}, it is therefore sufficient to prove the desired identity $\hat{\zeta}(\alpha) = \zeta(\alpha)$ on diagonal $\alpha$ only. But since these diagonal elements generate a commutative C*-subalgebra, which contains a dense *-subalgebra of simple functions on which $\hat{\zeta}$ and $\zeta$ are known to coincide, we are done because both $\hat{\zeta}$ and $\zeta$ are $*$-homomorphisms on this commutative subalgebra.
Now a general W*-algebra $A$ is a direct sum of a W*-algebra without $I_2$ summand and one that is of type $I_2$~\cite[Theorems~1.19~\&~1.31]{takesaki}. Again by considering corners, it is straightforward to check that if the fullness property holds on almost $*$-homomorphisms out of $A,B\in\Calg$, then it also holds on almost $*$-homomorphisms out of $A\oplus B$.
\end{proof}
In general, the problem of fullness is related to the cohomology of the unitary group $\Tl(A)$ as follows. Let $\zeta:\mathbb{C}(A)\to\mathbb{C}(B)$ be an almost $*$-homomorphism between C*-algebras. We can assume without loss of generality that $\im(\zeta)$ generates $B$ as a C*-algebra. For unitaries $\nu,\tau\in\Tl(A)$ and any $\alpha\in\bigcirc(A)$, we have
\begin{align*}
\zeta(\alpha) & = \zeta\left(\tau^*\nu^*(\nu\tau)\alpha(\nu\tau)^*\nu\tau\right) \\
& \stackrel{\eqref{presconj}}{=} \zeta(\tau)^* \zeta(\nu)^* \zeta(\nu\tau) \zeta(\alpha) \zeta(\nu\tau)^* \zeta(\nu) \zeta(\tau) \\
& = \: \left( \zeta(\nu\tau)^* \zeta(\nu) \zeta(\tau)\right)^*\: \zeta(\alpha) \:\left( \zeta(\nu\tau)^* \zeta(\nu) \zeta(\tau)\right)
\end{align*}
Hence the unitary $\zeta(\nu\tau)^*\zeta(\nu)\zeta(\tau)$ commutes with $\zeta(\alpha)$. By the assumption that $\im(\zeta)$ generates $B$, this means that there exists $c(\nu,\tau)$ in the centre of $\Tl(B)$ such that
\[
\zeta(\nu\tau) = c(\nu,\tau) \zeta(\nu)\zeta(\tau).
\]
As in the theory of projective representations of groups, we can use this relation to evaluate $\zeta$ on a product of three unitaries $\nu,\tau,\chi\in\Tl(A)$, resulting in
\[
c(\nu\tau,\chi) c(\nu,\tau) \zeta(\nu)\zeta(\tau)\zeta(\chi) = \zeta(\nu\tau\chi) = c(\nu,\tau\chi)c(\tau,\chi) \zeta(\nu)\zeta(\tau)\zeta(\chi).
\]
This establishes the cocycle equation
\[
c(\tau,\chi) c(\nu\tau,\chi)^* c(\nu,\tau\chi) c(\nu,\tau)^* = 1,
\]
showing that $c$ is a $2$-cocycle on $\Tl(A)$ with values in the centre of $\Tl(B)$, which is equal to the unitary group of the centre of $B$. Unfortunately, we do not know whether this can be used to show that $\Tl(\zeta):\Tl(A)\to\Tl(B)$ is a group homomorphism, which would be enough to prove fullness in general by Proposition~\ref{fextend2}.
Let us now restate the remaining part of Problem~\ref{aCalgprob}:
\begin{problem}
Is the functor $\Calg\to\aCalg$ full in general? If so, could it even be essentially surjective?
\end{problem}
\newpage
\section{Groups as piecewise groups with self-action}
\label{secgrps}
In order to get a better intuition for the relation between C*-algebras and almost C*-algebras, it is instructive to perform analogous considerations for other mathematical structures. In this section, we investigate the case of groups, which may also be of interest in its own right.
By analogy with piecewise C*-algebras, we have:
\begin{definition}[\cite{HR}]
\label{piecewisegroupdef}
A \emph{piecewise group} is a set $G$ equipped with the following pieces of structure:
\begin{enumerate}
\item a reflexive and symmetric relation $\mathop{\Perp}\subseteq G\times G$. If $x\mathop{\Perp} y$, we say that $x$ and $y$ \emph{commute};
\item a binary operation $\cdot:\mathop{\Perp}\rightarrow G$;
\item a distinguished element $1\in G$;
\end{enumerate}
such that every subset $C\subseteq G$ of pairwise commuting elements is contained in some subset $\bar{C}\subseteq G$ of pairwise commuting elements which is an abelian group with respect to the data above.
\end{definition}
Abelian groups are precisely those piecewise groups for which the commutativity relation $\mathop{\Perp}$ is total. Piecewise groups form a category $\pGrp$ in the obvious way:
\begin{definition}
\label{pghomdef}
Given piecewise groups $G$ and $H$, a \emph{piecewise group homomorphism} is a function $\zeta:G\rightarrow H$ such that if $g\mathop{\Perp} h$ in $G$, then
\begin{equation}
\label{pghom}
\zeta(g) \mathop{\Perp} \zeta(h),\qquad \zeta(gh)=\zeta(g)\zeta(h).
\end{equation}
\end{definition}
It is straightforward to show that a piecewise group homomorphism satisfies $\zeta(1)=1$.
Considering every group as a piecewise group results in a forgetful functor $\mathsf{Grp}\to\pGrp$, which is faithful and reflects isomorphisms. Since it is not full (taking inverses $g\mapsto g^{-1}$ is a piecewise group homomorphism for every $G$, but a group homomorphism only if $G$ is abelian), this functor forgets some of the structure that groups have. By analogy with Definition~\ref{almostdef}, we try to recover this structure by equipping a piecewise group with a notion of inner automorphisms:
\begin{definition}
\label{almostgroupdef}
An \emph{almost group} is a pair $(G,\mathfrak{a})$ consisting of $G\in\pGrp$ and a \emph{self-action} on $G$, which is a map
\[
\mathfrak{a} : G \longrightarrow \pGrp(G,G)
\]
assigning to every element $g\in G$ a piecewise automorphism $\mathfrak{a}(g) : G \to G$ such that
\begin{itemize}
\item $g$ commutes with $h$ if and only if $\mathfrak{a}(g)(h) = h$;
\item in this case, $\mathfrak{a}(gh) = \mathfrak{a}(g)\mathfrak{a}(h)$.
\end{itemize}
\end{definition}
Almost groups form a category denoted $\aGrp$ as follows:
\begin{definition}
\label{aghomdef}
An \emph{almost group homomorphism} $\zeta:(G,\mathfrak{a})\to (H,\mathfrak{b})$ is a piecewise group homomorphism $\zeta:G\to H$ such that
\begin{equation}
\label{aghom}
\mathfrak{b}(\zeta(g)) (\zeta(h)) = \zeta(\mathfrak{a}(g)(h)).
\end{equation}
\end{definition}
The forgetful functor $\mathsf{Grp}\to\pGrp$ factors through $\aGrp$ by associating to every group $G$ and element $g\in G$ the conjugation action,
\[
\mathfrak{a}(g)(h) := g^{-1} h g.
\]
Every group homomorphism $\zeta:G\to H$ respects the resulting self-actions: the condition~\eqref{aghom} becomes simply
\begin{equation}
\label{presconjgrp}
\zeta(g)^{-1} \zeta(h) \zeta(g) = \zeta(g^{-1} h g).
\end{equation}
One can ask whether this forgetful functor $\mathsf{Grp}\to\aGrp$ is an equivalence of categories. In contrast to the discussion of Section~\ref{secsaC}, and in particular Theorem~\ref{Wfull}, here we know the answer to be negative:
\begin{theorem}
The forgetful functor $\mathsf{Grp}\to\aGrp$ is not full.
\label{aGnotfull}
\end{theorem}
So in general, going from a group to an almost group still constitutes a loss of structure.
\begin{proof}
We provide an explicit example of an almost group homomorphism between groups that is not a group homomorphism.
Let $\Fl_2$ be the free group on two generators $a$ and $b$. For any word $w\in\Fl_2$, let $\hat{w}$ be the cyclically reduced word associated to $w$. Then consider the map $\zeta:\Fl_2\to\mathbb{Z}$ defined as $\zeta(w)$ being the number of times that the generator $a$ directly precedes the generator $b$ in $\hat{w}$, minus the number of times that the generator $b^{-1}$ directly precedes the generator $a^{-1}$ in $\hat{w}$. By construction, this is invariant under conjugation and therefore satisfies~\eqref{presconjgrp}. If $v,w\in\Fl_2$ commute, then they must be of the form $v=u^m$ and $w=u^n$ for some $u\in\Fl_2$ and $m,n\in\mathbb{Z}$~\cite[Proposition~2.17]{ls}. Hence to verify that $\zeta$ is a piecewise group homomorphism, it is enough to show that $\zeta(u^k)=\zeta(u)^k$ for all $k\in\mathbb{Z}$. This is the case because we have $\hat{u^k} = \hat{u}^k$ at the level of reduced cyclic words.
On the other hand, $\zeta$ is not a group homomorphism since $\zeta(a) = \zeta(b) = 0$, while $\zeta(ab) = 1$.
\end{proof}
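For readers who would like to check the counterexample mechanically, the following is a minimal Python sketch (not part of the argument). It counts adjacencies cyclically in the reduced cyclic word, which is the reading of the definition under which conjugation invariance is manifest; words are strings over $\{a,b,A,B\}$, with capital letters denoting inverses.
\begin{verbatim}
def free_reduce(w):
    # cancel adjacent inverse pairs such as "aA" or "Bb"
    out = []
    for x in w:
        if out and out[-1] == x.swapcase():
            out.pop()
        else:
            out.append(x)
    return out

def cyclic_reduce(w):
    # strip cancelling first/last letters until the word is cyclically reduced
    w = free_reduce(w)
    while len(w) >= 2 and w[0] == w[-1].swapcase():
        w = w[1:-1]
    return w

def zeta(word):
    # (# times a directly precedes b) - (# times B directly precedes A),
    # counted cyclically in the reduced cyclic word
    w = cyclic_reduce(list(word))
    n = len(w)
    pairs = [(w[i], w[(i + 1) % n]) for i in range(n)]
    return pairs.count(('a', 'b')) - pairs.count(('B', 'A'))

assert zeta("a") == zeta("b") == 0 and zeta("ab") == 1  # not a homomorphism
assert zeta("Aaba") == zeta("ab")                       # conjugation invariance
assert zeta("ababab") == 3 * zeta("ab")                 # zeta(u^k) = k*zeta(u)
\end{verbatim}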
As the second half of the proof indicates, part of the problem is that a free group has very few commuting elements. One can hope that the situation will be better for finite groups:
\begin{problem}
Is the restriction of the functor $\mathsf{Grp}\to\aGrp$ from finite groups to finite almost groups an equivalence of categories?
\end{problem}
\newpage
\newgeometry{top=2cm}
\bibliographystyle{unsrt}
\section{Introduction}
\label{sec:Introduction}
Future ground-based and space-borne observatories, equipped with large aperture telescopes and sensitive large format detectors will provide broad-band imaging data for more than a billion galaxies. These data are pivotal to better understanding of dark sectors of the Universe (i.e., dark matter and dark energy) as well as the evolution of galaxies and large-scale structures over cosmic time. The challenge, however, is to obtain wide waveband coverage to constrain the spectral energy distributions (SEDs) of millions of galaxies and estimate their redshifts and physical parameters such as stellar masses and star formation rates.
Template fitting is widely used to infer photometric redshifts of galaxies and their physical properties \cite[e.g.,][]{Arnouts99,Bolzonella2000,Ilbert06}. However, theoretical synthetic templates may not be representative of the real parameter space of galaxies. For example, templates can include SEDs which do not have an observational analog. This will cause degeneracy in parameter measurement, especially when we reconstruct SEDs with few bands. Many of these degeneracies are mitigated by obtaining data with wide spectral coverage (e.g., with a larger number of wavebands). An example of such a data set is the Cosmic Evolution Survey \cite[COSMOS;][]{Scoville07} that has been observed in more than 40 bands from X-ray to radio wavelengths. The wealth of information in this field provides very well-constrained SEDs for galaxies. However, not all surveys have as many photometric bands as the COSMOS field. For instance, \textit{Euclid}{} \citep{Laureijs11} will rely on near-infrared $Y$, $J$,
and $H$ bands ($960–2000\ \rm nm$), complemented by optical ground-based observations in $u$, $g$, $r$, $i$ and $z$ to measure photometric redshifts \citep{Euclid_photz20}. It is therefore instructive to use the extensive dataset in the COSMOS field to identify essential bands which carry most of the information regarding the physical properties of galaxies.
The aim of this study is to transfer the information gained in the COSMOS field to fields such as the \textit{Euclid}{} deep fields where such extensive photometry does not exist. Using the concepts of information theory, we can find if there is any information shared between the bands and use these measurements to identify the most important bands (those that reveal most of the information about the physical properties of galaxies). Based on the machine learning techniques, we can then predict fluxes in the wavebands that are not observed in a survey but share information with other available (observed) bands. This allows us to carefully design future surveys and only observe in selected wavebands that include most of the information to significantly save in the observing time.
Machine learning has become popular in recent years to build models based on spectroscopic redshifts \citep[e.g.,][]{Carrasco14,Masters17} and train models based on synthetic templates \citep[e.g.,][]{Hemmati19} or mock catalogs generated from galaxy simulations \citep[e.g.,][]{Davidzon19,Simet21}. These methods are particularly useful as machine learning algorithms can learn more complicated relations given a large and comprehensive training data set \citep{Mucesh21}. Moreover, these models speed up parameter measurement, which is an important characteristic with the flood of data imminent from upcoming surveys \citep{Hemmati19}.
In this paper, we develop a new technique based on information theory to quantify the importance of each waveband and identify essential bands to measure the physical properties of galaxies. We also develop a machine learning model to predict fluxes in missing bands and thereby improve the wavelength resolution of existing photometric data. To demonstrate the application of these techniques, we apply our methods to a sample of galaxies drawn from the latest version of the COSMOS survey \citep[COSMOS2020;][]{Weaver21}, analogous to the photometric data planned for the \textit{Euclid}{} deep fields. A new ground-based survey, Hawaii Two-0 (H20; McPartland et al. in prep), has been designed to provide complementary photometric data for the \textit{Euclid}{} mission. H20 will provide $u-$band observations from the MegaCam instrument on the Canada-France-Hawaii telescope (CFHT) and $g-,r-,i-, z-$band imaging from the Hyper Suprime-Cam (HSC) instrument on the Subaru telescope over 20 square degrees of the \textit{Euclid}{} deep fields. Spitzer/IRAC observations from the Spitzer Legacy Survey (SLS) are also available in the same fields \citep{Moneti21}. Specifically, we identify the importance of wavebands for an H20+UVISTA-like survey with wavelength coverage similar to that expected in the \textit{Euclid}{} deep fields, incorporating the near-IR $YJH$ bands from UltraVista \citep{McCracken12} in addition to the H20 and SLS wavebands. We then predict fluxes in near-IR wavebands using the existing ground-based and mid-IR Spitzer/IRAC observations (H20-like) of the deep fields.
In Section \ref{sec:Data}, we briefly introduce the COSMOS2020 catalog, and use that to build a sample of H20+UVISTA-like galaxies. Section \ref{information} describes the concepts of information gain and quantifies the importance of each waveband based on that. In Section \ref{sec:Visualize}, we use dimensionality reduction techniques to visualize photometric data in 2-dimensional space to explore the feasibility of predicting fluxes in near-IR fluxes based on $ugriz$ and Spitzer/IRAC data. This is followed by Section \ref{sec:band_prediction} where we train a machine learning algorithm, Random Forest model, to predict fluxes in UVISTA/$YJH$ wavebands using data in wavebands similar to the existing H20. In Section \ref{sec:Photz-M}, we investigate the accuracy of the photometric redshifts and stellar masses given the limited number of bands available in H20-like and H20+UVISTA-like data. We discuss and summarize our results in Section \ref{sec:Discussion_Summary}.
Throughout this work, we assume flat $\Lambda$CDM cosmology with $H_0=70 \rm \ kms^{-1} Mpc^{-1}$, $\Omega_{m_{0}}=0.3$ and $\Omega_{\Lambda_{0}}=0.7$. All magnitudes are expressed in the AB system, and the physical parameters are measured assuming a \cite{Chabrier03} IMF.
\begin{figure}
\centering
\includegraphics[width=1\linewidth,clip=True, trim=1.4cm 0cm 2cm 0cm]{z_PDF.pdf}
\caption{Redshift distribution for the subset of COSMOS2020 galaxies brighter than $i$= 25 AB magnitude (3$\sigma$). The entropy of the redshift calculated based on the distribution shown in this figure is less than the entropy of a uniformly distributed redshift. In other words, we are less surprised when we observe the redshift of a galaxy given this distribution (prior information). }
\label{fig:z_PDF}
\end{figure}
\section{Data}
\label{sec:Data}
Here we use the updated version of the COSMOS catalog, COSMOS2020, to build a sample of galaxies analogous to those that will be observed in the \textit{Euclid}{} deep fields. Compared to the COSMOS2015 catalog \citep{Laigle16}, COSMOS2020 provides much deeper near-IR and mid-IR (Spitzer) photometric data as well as two independent methods for photometric extraction: conventional aperture photometry and profile fitting (\texttt{The Farmer}{}; J. Weaver et al., in prep.). We use \texttt{The Farmer}{} photometry that contains consistent photometric data in 39 bands from FUV to mid-IR including broad, medium and narrow filters. All the data are reduced to the same scale with appropriate PSFs. Photometric redshifts are calculated using LePhare \citep{Arnouts99,Ilbert06} with a configuration similar to that described in \cite{Ilbert13}. Given the large number of bands with deep observations, photometric redshift solutions are accurate, reaching a normalized median absolute deviation \citep[$\sigma_{\rm NMAD}$;][]{Hoaglin83} of $0.02$ for galaxies as faint as $i\sim25$ AB mag \citep{Weaver21}. The redshifts of galaxies are then fixed at their estimated photometric values, and the stellar masses are estimated. In this paper, we consider COSMOS2020 photometric redshifts and stellar masses as the \enquote{ground truth} since spectroscopic redshifts are only available for a limited number of galaxies and using a mixture of photometric and spectroscopic redshifts can bias our sample towards specific populations of galaxies.
We use two sets of wavebands: 1) H20-like bands: ${\rm \mathbf{A}}\coloneqq\{u,g,r,i,z,ch1,ch2\}$, 2) H20+UVISTA-like bands: ${\rm \mathbf{B}}\coloneqq\{u,g,r,i,z,Y,J,H,ch1,ch2\}$. $u-$band observations are conducted by the MegaCam instrument at CFHT, and other optical bands ($g,r,i$ and $z$) are available from Subaru's Hyper Suprime-Cam (HSC) imaging. Spitzer/IRAC channel 1,2 ($ch1,ch2$) data are compiled from all the IRAC observations of the COSMOS field \citep{Moneti21}. Near-IR photometry in the $Y$, $J$ and $H$ bands is obtained from the UltraVista survey \citep{McCracken12}. We select a subset of the COSMOS2020 galaxies that are observed, but not necessarily detected, in all the aforementioned bands and have $i-$band AB magnitude $\leq 25$ with $3\sigma$ detection. These selection criteria result in 165,807 galaxies out to $z\sim 5.5$. \new{Photometric measurements in the COSMOS2020 catalog are not corrected for Galactic extinction; we corrected them using the \cite{Schlafly11} dust map. Moreover,} some sources have negative fluxes in the desired bands, which is due to the variation of background flux across the image. We set these fluxes to zero.
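For reference, a minimal NumPy sketch of this correction is given below; the reddening values and per-filter extinction coefficients shown are placeholders (in practice, the per-object $E(B-V)$ comes from the \cite{Schlafly11} map).
\begin{verbatim}
import numpy as np

# Hypothetical inputs: observed fluxes (uJy) for two objects in two bands,
# per-object E(B-V) along the line of sight, and illustrative extinction
# coefficients R_lambda = A_lambda / E(B-V) for each filter.
fluxes = {"u": np.array([0.8, -0.1]), "g": np.array([1.5, 2.1])}
R_lambda = {"u": 4.8, "g": 3.6}
ebv = np.array([0.017, 0.021])

corrected = {}
for band, flux in fluxes.items():
    A = R_lambda[band] * ebv          # extinction in magnitudes
    corr = flux * 10 ** (0.4 * A)     # de-redden (brighten) the fluxes
    corr[flux <= 0] = 0.0             # negative fluxes are set to zero
    corrected[band] = corr
\end{verbatim}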
\section{Information Gain}
\label{information}
Let us suppose that we do not have any prior information about the redshift distribution of galaxies selected from the criteria mentioned in Section \ref{sec:Data}. We, therefore, assume a uniform distribution for the redshift. As an example, if we define four bins of redshifts (\{$z_1$=(0,1], $z_2$=(1,2], $z_3$=(2,3], $z_4$=(3,4]\}) and want to identify which bin a galaxy belongs to, we can encode it in two bits, as below, \\
\begin{center}
\includegraphics[]{output-figure0.pdf}
\end{center}
\begin{figure*}
\centering
\includegraphics[width=1\linewidth,clip=True]{info.pdf}
\caption{Mutual information of redshift and wavebands in bits per galaxy. Larger mutual information means that the entropy of the redshift will decrease more if we include the band in photometric redshift measurements, so the band is more important. Here, $u$ is the most important followed by $z$-band.
}
\label{fig:mi}
\end{figure*}
Here, we need to ask two YES/NO questions to identify the bin a galaxy belongs to. However, based on the available observations of COSMOS2020, we know the redshift distribution of galaxies with $i\leq 25$ AB mag as background information. We, therefore, update the decision tree above, considering our prior information about the redshift distribution, to reduce the average number of questions we need to ask to identify the redshift bin of a galaxy. Based on the redshift distribution shown in Figure \ref{fig:z_PDF}, the probability of a galaxy being in each redshift bin is: $P(z_1)=0.56, P(z_2)=0.32, P(z_3)=0.09, P(z_4)=0.03$. Thus, one possible decision tree to identify the redshift bin of a galaxy can be built as follows,
\begin{center}
\includegraphics[]{output-figure1.pdf}
\end{center}
\noindent On average, $0.56\times 1+0.32\times 2+(0.09+0.03)\times 3=1.56$ questions (bits) are required to identify the redshift bin of a galaxy. We find that the number of bits (questions) is reduced from 2 to 1.56 when we add information regarding the redshift distribution of galaxies. This decrease shows that we are less surprised when we observe the redshift of a galaxy, given that we know what the redshift distribution looks like.
Given the above example, the optimal number of bits required to store a variable, called Shannon’s entropy ($H$), is defined as \citep{Shannon48},
\begin{equation}
H(X)=-\sum_i P(x_i)\log_2 P(x_i),
\label{entropy}
\end{equation}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\textwidth]{info_cond.pdf}
\caption{Conditional mutual information of redshift and wavebands in bits per galaxy. The most relevant bands can be selected based on their conditional mutual information. The sample is selected based on the magnitude of the $i-$band, which implies that the first selected waveband is the $i-$band. The top left panel shows the mutual information of redshift and wavebands given that $i-$band data are available. Therefore we select $r-$band as the second most relevant band since it provides the most information. In the top right, we assume that $i-$ and $r-$band data are available and find that $u-$band would be the third choice. We follow a similar procedure to find relevant bands in order of their importance. We note that these results depend on the selection criteria. For any new sample of galaxies with a different selection, these results should be remeasured. }
\label{Fig:cmi}
\end{figure*}
\noindent where $x_i$ is a possible outcome of a variable ($X$) which occurs with probability $P(x_i)$. In this formulation, $\log_2 P(x_i)$ represents the number of bits required to identify the outcome. Using equation \ref{entropy}, Shannon’s entropy of redshift based on the probabilities in the four bins is 1.46 bits. This means that we can still make our tree more optimal to encode the redshift values in 1.46 bits instead of 1.56. One possible way would be to build the tree to identify the redshift of two galaxies simultaneously, which makes the average number of questions per galaxy even less than 1.56. However, we do not aim to find the optimal compression algorithm to encode the redshift information. We just use Shannon’s entropy to find the maximal compression rate.
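The numbers above are straightforward to reproduce; a minimal sketch in Python:
\begin{verbatim}
import numpy as np

p = np.array([0.56, 0.32, 0.09, 0.03])    # P(z_i) for the four redshift bins

# Shannon entropy: the maximal average compression rate, in bits per galaxy
H = -np.sum(p * np.log2(p))               # ~1.46 bits

# average number of yes/no questions asked by the decision tree in the text
code_lengths = np.array([1, 2, 3, 3])
avg_questions = np.sum(p * code_lengths)  # 1.56 bits

# a uniform prior over the four bins would instead cost log2(4) = 2 bits
print(H, avg_questions, np.log2(p.size))
\end{verbatim}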
\begin{figure*}
\centering
\includegraphics[width=1\linewidth,clip=True]{info_mass.pdf}
\caption{Similar to Figure \ref{fig:mi} but for the stellar mass. Mutual information of stellar mass and wavebands in bits per galaxy is shown. With more mutual information, the entropy of stellar mass will decrease more if we include the band in the photometric stellar mass measurements, so the band is more important.
}
\label{fig:mi_mass}
\end{figure*}
In the presence of other information, such as observed fluxes in different bands, the entropy of the redshift decreases even more. The amount of uncertainty (entropy) remaining in $X$ after we have seen $Y$ is called conditional entropy and defined as,
\begin{equation}
H(X|Y)=-\sum_{x\in X,y \in Y} P(x,y)\log_2 \frac{P(x,y)}{P(y)},
\end{equation}where $P(x,y)$ is the joint probability distribution at $(x,y)$. Moreover, the mutual information between X and Y (i.e., the amount of uncertainty in X that is removed by knowing Y) is defined as,
\begin{equation} \label{eq:mi}
\begin{split}
I(X,Y)&=H(X)-H(X|Y) \\
& = H(X) + H(Y) - H(X,Y),
\end{split}
\end{equation}where $H(X,Y)$ is the joint entropy of a pair of variables $(X,Y)$. In other words, I$(X,Y)$ is a measure of the amount of information (in bits) one can acquire about $X$ by observing $Y$. This parameter can be used to identify the waveband that will be most useful for measuring galaxy properties (e.g., redshifts). For instance, the waveband with the highest I$(redshift,waveband)$ carries the most information and decreases the entropy of the redshift the most.
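In practice, such estimates are readily available; for instance, scikit-learn provides a k-nearest-neighbor mutual information estimator in the spirit of the KSG method discussed below. A minimal sketch, where the flux array and redshifts are assumed to be loaded beforehand:
\begin{verbatim}
import numpy as np
from sklearn.feature_selection import mutual_info_regression

# fluxes: (n_galaxies, n_bands) array; z: photometric redshifts
bands = ["u", "g", "r", "i", "z", "Y", "J", "H", "ch1", "ch2"]

# k-NN based estimator; values are returned in nats, so convert to bits
mi_bits = mutual_info_regression(fluxes, z, n_neighbors=100,
                                 random_state=0) / np.log(2)

for band, mi in sorted(zip(bands, mi_bits), key=lambda t: -t[1]):
    print(f"I(redshift, {band}) = {mi:.3f} bits")
\end{verbatim}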
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{mi_mass_z.pdf}
\caption{Mutual information of stellar mass and wavebands in bits per galaxy in bins of redshift. The map is colored based on the value of mutual information, with red representing the most important band and blue representing the least important band. The role of short-wavelength bands decreases as we move to higher redshifts, as we would expect.}
\label{fig:mi_bands_z}
\end{figure}
The mutual information as in equation \ref{eq:mi} is defined for discrete variables. In the case of continuous variables (e.g., redshift, flux, stellar mass), we need to properly discretize the data. \cite{Kraskov04} (hereafter KSG) introduced a k-nearest neighbor estimator to compute the mutual information of continuous variables. This method detects the underlying probability distribution of the data by measuring distances to the $k^{th}$ nearest neighbors of points in the data set. There is nonzero mutual information when some points are clustered in the X-Y space, which allows us to predict $y\in Y$ given an $x\in X$ coordinate. We refer readers to the original KSG paper for details of the method. Figure \ref{fig:mi} shows the mutual information between redshift and each waveband based on the KSG algorithm with $k=100$ nearest neighbors. It suggests that given the sample of $i<25$ AB mag galaxies, the $u-$band provides the most information about the redshift compared to the rest of the H20+UVISTA-like bands. However, our sample is selected based on $i-$band magnitudes, so we assume that $i-$band data are already available. Suppose that for our sample $u-$band fluxes are highly correlated with $i-$band data. In this case, the $u-$band carries no additional information in the presence of $i-$band data. To take into account such an effect, we need to compute conditional mutual information, defined as,
\begin{equation}
I(X,Y|Z)=H(X|Z)- H(X|Y,Z),
\label{eq:cmi}
\end{equation} where I$(X,Y|Z)$ is the mutual information of $X$ and $Y$ given that $Z$ is observed. Following the KSG algorithm, we estimate the conditional mutual information to sort wavebands based on their importance. We compute I($redshift,waveband|i-band$) and choose the waveband with the highest conditional mutual information as the most important band. The conditional mutual information estimations reveal that the $r-$band is the most important waveband given that $i-$band data are available. We continue computing conditional mutual information, I($redshift,waveband|swavebands$), where $swavebands$ denotes the set of previously selected wavebands.
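Since off-the-shelf estimators typically cover only the unconditional case, the conditional quantity can be reduced to it via the chain rule $I(X,Y|Z)=I\big(X,(Y,Z)\big)-I(X,Z)$. The sketch below is one possible implementation of this greedy selection, based on the first KSG estimator; we make no claim that it reproduces our exact setup, and the array names are placeholders.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def ksg_mi(x, y, k=100):
    # KSG estimator #1 of I(X,Y) in bits; x, y are (N, dx) and (N, dy) arrays
    n = len(x)
    z = np.hstack([x, y])
    # distance to the k-th neighbor in the joint space (Chebyshev metric)
    eps = cKDTree(z).query(z, k=k + 1, p=np.inf)[0][:, -1]
    # marginal neighbor counts strictly within eps; counts include the point
    # itself, which matches the psi(n+1) convention of Kraskov et al. (2004)
    nx = cKDTree(x).query_ball_point(x, eps - 1e-12, p=np.inf,
                                     return_length=True)
    ny = cKDTree(y).query_ball_point(y, eps - 1e-12, p=np.inf,
                                     return_length=True)
    mi = digamma(k) + digamma(n) - np.mean(digamma(nx) + digamma(ny))
    return max(mi, 0.0) / np.log(2)

def cond_mi(x, y, z, k=100):
    # chain rule: I(X,Y|Z) = I(X,(Y,Z)) - I(X,Z)
    return ksg_mi(x, np.hstack([y, z]), k) - ksg_mi(x, z, k)

# Greedy selection: start from the i band (the selection band) and repeatedly
# add the band with the largest conditional mutual information with redshift.
# flux maps band names to (N, 1) arrays; z_phot is (N, 1).
selected = ["i"]
remaining = [b for b in flux if b != "i"]
while remaining:                       # in practice, stop once the gain ~ 0
    Z = np.hstack([flux[b] for b in selected])
    best = max(remaining, key=lambda b: cond_mi(z_phot, flux[b], Z))
    selected.append(best)
    remaining.remove(best)
\end{verbatim}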
Figure \ref{Fig:cmi} shows the non-zero conditional mutual information as we select relevant wavebands. We find that for $i<25$ AB mag galaxies, $r, u, ch2$ and $z$ bands are the bands that provide most of the information about the redshift with decreasing importance from $r-$band to $z-$band. We repeat these analyses for stellar mass measurements. As shown in Figure \ref{fig:mi_mass}, we measure the mutual information between stellar mass and each waveband for the whole sample, and in Figure \ref{fig:mi_bands_z}, we measure the same quantity, I($\log(M_*/M_\odot),waveband|i-band$), in the bins of redshifts. As we expect, the role of short-wavelength bands decreases as we approach higher redshifts. We further compute the important wavebands given the availability of $i-$band data in Figure \ref{Fig:cmi_mass}. We find that $ch2$, $Y$, $r$ and $u$ bands are the most relevant bands in the stellar mass measurements with decreasing order of importance. One can constrain the redshift and repeat the analysis to find the optimal bands for stellar mass measurements in the desired redshift range given the availability of $i-$band data.
One should note that these conclusions depend on the selection criteria of the sample. This method provides a powerful tool for designing future surveys and quantifying the importance of each waveband. Efficient observations can be conducted by prioritizing the important wavebands identified by the information gain-based method.
Moreover, different waveband fluxes can be inter-correlated for a specific sample of galaxies. For instance, the top left panel in Figure \ref{Fig:cmi_mass} shows that IRAC/$ch1$ and $ch2$ provide a comparable amount of information for stellar mass measurements, which suggests that these bands are inter-correlated for our sample with $i<25$ AB mag. Figure \ref{fig:mi_bands} visualizes the mutual information between different bands. A greater value of mutual information indicates that wavebands are more correlated. Inter-correlation between wavebands allows us to predict/simulate fluxes of galaxies in missing bands. In the following, we investigate the possibility of predicting/simulating near-IR UVISTA/$YJH$ fluxes based on H20-like data for a sample of galaxies with $i<25$ AB mag.
\begin{figure*}[ht]
\centering
\includegraphics[width=0.95\textwidth]{info_cond_mass.pdf}
\caption{Similar to Figure \ref{Fig:cmi} but for the stellar mass. Each panel shows the conditional mutual information of stellar mass and wavebands given that all the previously selected bands are available. We find that for the $i-$band selected sample, the $ch2$, $Y$, $r$ and $u$ bands are the four most relevant bands with decreasing order of importance. The top left panel shows that IRAC data are essential for stellar mass measurements.}
\label{Fig:cmi_mass}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{mi_bands.pdf}
\caption{Visual representation of the mutual information between different wavebands for a sample of $i<25$ AB mag galaxies. The map is colored based on the value of mutual information, with purple representing the most correlated bands and yellow representing the least correlated bands (mostly independent). For instance, the mutual information of $ch1$ and $ch2$ quantifies the bits of information about IRAC/$ch1$ flux obtained by observing IRAC/$ch2$ flux. It is similar to the correlation coefficient, but it is able to capture non-linear relationships.}
\label{fig:mi_bands}
\end{figure}
\vspace{1cm}
\section{Data Visualization}
\label{sec:Visualize}
Fluxes of galaxies in $N$ wavebands are used to measure the photometric redshifts and physical parameters of galaxies. For example, the H20-like data with $N=7$ bands occupy a 7-dimensional space, where the position of each galaxy is determined by its fluxes in 7 bands. Therefore, galaxies with similar positions in $N$-dimensional space are expected to have similar redshifts and physical parameters if $N$ is large enough to fully sample the observed SED of galaxies. Similarly, it is expected that they will have similar fluxes in the $(N+1)^{th}$ waveband. However, showing galaxy fluxes in a high-dimensional space (e.g., a 7-dimensional space) directly is impossible, and thus we use dimensionality reduction techniques to present them in 2D space such that the information of the higher-dimensional space is maximally preserved. In this work, we use the Uniform Manifold Approximation and Projection \cite[UMAP;][]{McInnes18} technique to visualize our sample in a 2-dimensional space. UMAP is a non-linear dimensionality reduction technique that estimates the topology of the high-dimensional data and uses this information to construct a low-dimensional representation of the data that preserves structure information on local scales. It also outperforms other dimensionality reduction algorithms such as t-SNE \citep[t-Distributed Stochastic Neighbor Embedding;][]{vanDerMaaten2008} used in the literature \citep{Steinhardt20} since it preserves structures on global scales as well. In a simple sense, UMAP constructs a high-dimensional weighted graph by extending a radius around each data point and connecting points when their radii overlap. This radius varies locally based on the distance to the $n^{th}$ nearest neighbor of each point. The number of nearest neighbors ($n$) is a hyper-parameter in UMAP that should be fixed to construct the high-dimensional graph. Small (large) values of $n$ will preserve more local (global) structures. Once the high-dimensional weighted graph is constructed, UMAP optimizes the layout of a low-dimensional map to be as similar as possible to the high-dimensional graph.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{umap_flux.pdf}
\caption{2-D visualization of the sample with H20-like bands using the UMAP technique. The mapped data are color-coded by the $H-$band fluxes. The smooth gradient of $H-$band fluxes in the 2-D representation reassures us that galaxies with similar fluxes in H20-like bands have similar $H-$band fluxes as well.}
\label{fig:H_umap}
\end{figure}
We use the UMAP Python library\footnote{https://github.com/lmcinnes/umap} to map the 7-dimensional flux space of H20-like data to 2 dimensions, considering 50 nearest neighbors to provide a balance between preserving local and global structures. We do not map magnitudes or colors since non-detected values cannot be handled properly when using them. Multi-waveband fluxes contain all the information regarding colors, but using colors misses information regarding fluxes or magnitudes. Therefore, mapping the fluxes of galaxies to two dimensions retains more information than using colors. Since fluxes in different bands have fairly similar distributions, no normalization is needed before applying UMAP. In the case of significantly distinct distributions, normalization is needed to avoid the dominance of a waveband with a larger dynamic range. Figure \ref{fig:H_umap} shows a 2-D visualization of the sample with H20-like bands using the UMAP algorithm. As an example, the mapped data are color-coded by the $H-$band fluxes (not present in H20 photometry) in $\mu{\rm Jy}$. The smooth transition of the $H-$band fluxes in the 2D representation in Figure \ref{fig:H_umap} reassures us that galaxies with similar fluxes in H20-like bands also have similar $H-$band fluxes. We note that the H20-like data set does not include $H-$band data.
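A minimal sketch of this mapping with the umap-learn package (array names are placeholders):
\begin{verbatim}
import matplotlib.pyplot as plt
import umap

# fluxes_h20: (n_galaxies, 7) array of u, g, r, i, z, ch1, ch2 fluxes in uJy
reducer = umap.UMAP(n_neighbors=50, n_components=2, random_state=0)
embedding = reducer.fit_transform(fluxes_h20)   # (n_galaxies, 2)

# color the 2-D map by the H-band flux, which is not among the inputs
plt.scatter(embedding[:, 0], embedding[:, 1], c=h_flux, s=1)
plt.colorbar(label="H-band flux [uJy]")
plt.show()
\end{verbatim}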
Visualized data in Figure \ref{fig:H_umap} show qualitatively that the $H-$band fluxes are predictable to some extent using H20-like data. To perform a quantitative assessment of how accurately one can predict fluxes in UVISTA $YJH$ bands given the H20-like observations, we train a Random Forest \cite[][]{Breiman01} model with half of our sample and evaluate the model's performance with the other half. A Random Forest consists of an ensemble of regression trees. The algorithm picks a subsample of the dataset, builds a regression tree based on the subsample and repeats this procedure numerous times. The final value is the average of all the values predicted by all the trees in the forest. Having numerous decision trees based on subsampled data makes this algorithm unbiased and robust against overfitting. Another advantage of this method is that the inputs do not need to be scaled before being fed into the model. In the following section, we train a Random Forest model and evaluate its accuracy.
\section{Flux predictions }
\label{sec:band_prediction}
\begin{figure*}[]
\centering
\subfloat{{\includegraphics[width=8.55cm]{umap_train.pdf} }}%
\qquad
\subfloat{{\includegraphics[width=8.55cm]{umap_test.pdf} }}%
\caption{\new{Similar to Figure \ref{fig:H_umap}, but for training (two left panels) and test (two right panels) samples. Maps are color-coded with photometric redshifts and stellar masses. We find that the training and test samples share the same properties, so the randomly selected training sample is representative of the galaxies in the COSMOS field.} }%
\label{fig:umap_train_test}%
\end{figure*}
\begin{figure}
\centering
\subfloat{{\includegraphics[width=0.9\columnwidth, clip=True, trim=0cm 0.25cm 0cm 0.25cm]{Y_pre.pdf} }}%
\subfloat{{\includegraphics[width=0.9\columnwidth, clip=True, trim=0cm 0.25cm 0cm 0.25cm]{J_pre.pdf} }}%
\subfloat{{\includegraphics[width=0.9\columnwidth, clip=True, trim=0cm 0.25cm 0cm 0.25cm]{H_pre.pdf} }}%
\caption{The performance of the Random Forest model on the 82,904 test galaxies not used for the training of the model. The model is trained based on H20-like bands ($u,g,r,i,z,ch1,ch2$) and predicts UVISTA $YJH$ bands. Bottom panels show the scatter of $\rm Mag_{Predicted}-Mag_{True}$ as a function of true magnitudes and $\Delta$ is the median offset in these scatter plots.}%
\label{fig:Euclid_RF}%
\end{figure}
We split the sample (described in Section \ref{sec:Data}) \new{randomly} into a training and a test sample. \new{To evaluate if the training sample is representative, we construct a 2-D projection of H20-like fluxes similar to Figure \ref{fig:H_umap} for both training and test samples. Figure \ref{fig:umap_train_test} shows the 2-D visualizations color-coded by the properties of galaxies (photometric redshift and stellar mass). We find that the training and test samples share the same properties, so the training sample is representative of the galaxies in the COSMOS field.} With 82,903 galaxies as a training sample, we build a Random Forest model with 100 regression trees to predict UVISTA $YJH$ bands from the H20-like band fluxes. We use the Python implementation of the algorithm \cite[Scikit-learn;][]{scikit-learn} \footnote{https://scikit-learn.org/stable} with its default parameters to build the model. The true (observed) fluxes in the $YJH$ bands are available in the COSMOS2020 catalog. Using the trained Random Forest model, we then predict the expected fluxes for galaxies not included in the training set, with the results compared in Figure \ref{fig:Euclid_RF}. For each band, we compare the predicted magnitudes ($\rm Mag_{Predicted}$) with the true observed magnitudes ($\rm Mag_{True}$). We find that the Random Forest model predicts unbiased $YJH$ fluxes with high accuracy. The bottom panel in each figure shows the scatter of the $\rm Mag_{Predicted}-Mag_{True}$ as a function of true magnitudes. With a median magnitude discrepancy ($\Delta$) of $\sim 0.01$, we find that the offset is comparable with discrepancies that arise from different methods of photometric data reduction. \cite{Weaver21} found that the median tension between the magnitudes derived from aperture photometry and profile-fitting extraction is $\Delta\sim 0.002$ in $YJ$ bands and $\Delta\sim 0.02$ in $H-$band for sources brighter than the 3$\sigma$ depth of each band. Thus, such small offsets in the Random Forest regressor are within the intrinsic uncertainties of the data reduction techniques. Green solid and dashed lines in the sub-panels of Figure \ref{fig:Euclid_RF} show the median of $\Delta$ and 1$\sigma$ (68\%) scatter, respectively. The scatter in the prediction is $<0.17$ mag for galaxies brighter than 24 AB mag. This shows that $YJH$ near-IR observations of UVISTA can be simulated with acceptable accuracy from the available observations of H20 for a sample of galaxies with $i<25$ AB mag. \new{Our results remain consistent when we retrain the Random Forest with different randomly selected training samples.} While our focus in this paper is on the UVISTA/$YJH$ and H20 bands, the method we present is general and directly applicable to other surveys.
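A sketch of the setup described above, with default scikit-learn parameters and placeholder array names:
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# X: (n_galaxies, 7) H20-like fluxes; Y: (n_galaxies, 3) UVISTA Y, J, H fluxes
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.5,
                                                    random_state=0)

rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X_train, Y_train)        # multi-output regression over the three bands
Y_pred = rf.predict(X_test)

# median magnitude offset per band (fluxes in uJy -> AB magnitudes)
mag = lambda f: -2.5 * np.log10(f) + 23.9
for j, band in enumerate(["Y", "J", "H"]):
    ok = (Y_test[:, j] > 0) & (Y_pred[:, j] > 0)
    delta = np.median(mag(Y_pred[ok, j]) - mag(Y_test[ok, j]))
    print(band, delta)
\end{verbatim}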
\section{Photometric redshift and stellar mass}
\label{sec:Photz-M}
\begin{figure*}[]
\centering
\subfloat{{\includegraphics[width=8.5cm]{z_RF_A.pdf} }}%
\qquad
\subfloat{{\includegraphics[width=8.5cm]{z_RF_B.pdf} }}%
\qquad
\subfloat{{\includegraphics[width=8.5cm]{m_RF_A.pdf} }}%
\qquad
\subfloat{{\includegraphics[width=8.5cm]{m_RF_B.pdf} }}%
\caption{Performance of the Random Forest model in predicting photometric redshifts and stellar masses when the model is trained on H20-like bands (left panels) and on H20+UVISTA-like bands (right panels). Both trained models recover photometric redshifts and stellar masses with high accuracy. The similar performance of the model with and without the $YJH$ bands stems from the fact that the H20-like bands capture most of the information available in the $YJH$ bands, as shown in Figure \ref{fig:Euclid_RF}. The black dashed lines show the one-to-one relation, and the gray dashed lines correspond to $\pm0.15(1+z)$ (the outlier definition boundaries).}%
\label{fig:z_RF}%
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{RF_bands.pdf}
\caption{The normalized median absolute deviation of $\Delta z/(1+z)$ (left) and $\log(M_*/M_\odot)$ (right) as a function of the bands used to measure each parameter. As the sample is selected based on the $i-$band magnitude of galaxies, we start by training a Random Forest model on $i-$band data alone and then add other bands following the same order of importance found in Figures \ref{Fig:cmi} and \ref{Fig:cmi_mass}. \new{Red horizontal lines show the scatter of the data relative to their mean value.}}
\label{fig:RF_comp}
\end{figure*}
In the previous section, we showed that given the observations of the H20 survey, the near-IR observations of UVISTA can be constrained to some extent. In other words, observations of the COSMOS field provide valuable information about the distribution of galaxies in flux space, even for surveys whose spectral coverage is not as extensive as that of the COSMOS field. When we use a template-fitting code with synthetic templates, we usually do not take this constraint into account. There are two approaches to incorporating this information into measurements of photometric redshifts or physical parameters. The first is to add a prior on the fluxes in the bands that are not observed in the survey. For instance, when we perform SED fitting using H20-like bands, we can add priors on the $YJH$ bands based on a Random Forest model trained on the population of galaxies from the COSMOS observations. The second is to train a model on SED-fitting results calculated with a large number of bands. In this case, when we feed our model with H20-like data, it decides the best value of a parameter based on both the existence of similar observations in the COSMOS field (information from the galaxy population) and the SED-fitting solution for that galaxy.
In this section, we employ the latter approach and train a model to predict the photometric redshifts and stellar masses of galaxies from H20-like and H20+UVISTA-like bands. We train a Random Forest model on a training sample of observed galaxies. The inputs of the model are H20-like fluxes, and the output is either the photometric redshift or the stellar mass computed from SED fitting over the 29 bands available in the COSMOS2020 catalog. We also train a second, similar model whose inputs are the H20+UVISTA-like bands. Figure \ref{fig:z_RF} shows the performance of the trained models on the test sample of 82,904 galaxies. We find that both models recover photometric redshifts and stellar masses with comparable accuracy, with the H20+UVISTA-like inputs being slightly more accurate. The normalized median absolute deviation ($\sigma_{\rm NMAD}$) of $\Delta z/(1+z)$ is $\sim 0.03$ for both models, with a $\sim 4\%$ outlier fraction, where outliers are defined as galaxies with $|\Delta z|/(1+z)>0.15$. The median absolute deviation of $\log (M_*/M_\odot)$ is $\sim 0.1$ dex for both models. We explain this similar performance using the results of Sections \ref{information} and \ref{sec:band_prediction}: the Random Forest model with H20-like bands captures most of the information contained in the UVISTA bands, because it is trained on the population of observed COSMOS galaxies. Therefore, it recovers photometric redshifts and stellar masses as accurately as the model that includes the near-IR ($YJH$) observations.
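As a minimal sketch of the metrics quoted above (with \texttt{z\_true} and \texttt{z\_pred} as hypothetical arrays of COSMOS2020 reference redshifts and model predictions), the statistics can be computed as follows; the $1.48$ prefactor reflects one common convention for $\sigma_{\rm NMAD}$.
\begin{verbatim}
import numpy as np

dz = (z_pred - z_true) / (1 + z_true)
# normalized median absolute deviation and outlier fraction
sigma_nmad = 1.48 * np.median(np.abs(dz - np.median(dz)))
outlier_fraction = np.mean(np.abs(dz) > 0.15)
\end{verbatim}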
We repeat a similar analysis starting with only $i-$band data and adding the other important bands in the same order as identified in Section \ref{information}. Figure \ref{fig:RF_comp} shows the normalized median absolute deviation of $\Delta z/(1+z)$ and $\log(M_*/M_\odot)$ as a function of the bands used to measure each parameter. We find that the $i$, $r$, $u$, $ch2$, and $z$ bands form the minimal set needed to reach an acceptable accuracy of $\sigma^{\rm NMAD}_{\Delta z/(1+z)}=0.03$ for the photometric redshifts of galaxies with $i<25$ AB mag. For the same sample, the $i$, $ch2$, $Y$, $r$, and $u$ bands are the optimal bands for stellar mass measurements, reaching an accuracy of $\sigma^{\rm NMAD}_{\log(M_*/M_\odot)}=0.15$ dex.
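The band-by-band experiment of Figure \ref{fig:RF_comp} can be sketched as a simple loop; the ordering list, tables, and redshift arrays below are placeholders, not our exact implementation.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor

order = ["i", "r", "u", "ch2", "z", "g", "ch1", "Y", "J", "H"]  # illustrative
nmad = []
for k in range(1, len(order) + 1):
    X_tr = np.column_stack([cat_train[b] for b in order[:k]])
    X_te = np.column_stack([cat_test[b] for b in order[:k]])
    rf = RandomForestRegressor(n_estimators=100).fit(X_tr, z_train)
    dz = (rf.predict(X_te) - z_test) / (1 + z_test)
    nmad.append(1.48 * np.median(np.abs(dz - np.median(dz))))
\end{verbatim}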
\subsection{Synthetic templates}
In the following, we use UMAP to visualize the photometry of synthetic SED models commonly used in template-fitting procedures. We build a set of theoretical templates using the 2016 version of the \cite{Bruzual03} library, with a \cite{Chabrier03} initial mass function. Star formation histories are modeled with an exponentially declining function (${\rm SFR} \propto e^{-t/\tau}$), where $\tau$ is the star formation timescale. Dust attenuation is applied using the \cite{Calzetti} law, and solar stellar metallicity is assumed for all templates. We build $\sim750,000$ theoretical templates assuming $\tau\in(0.1,10)\ \rm Gyr$, $t\in(0.1,13.7)\ \rm Gyr$, $A_V\in(0,2)\ \rm mag$, and $z\in(0,5.5)$, where $t$ and $A_V$ are the stellar age and the extinction in the visual band, respectively. We then calculate the synthetic photometry in both the H20-like and H20+UVISTA-like bands by applying the corresponding filter response functions.
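A minimal sketch of the synthetic-photometry step, assuming a template $f_\lambda$ and a filter response $T(\lambda)$ tabulated on a common wavelength grid (a photon-counting detector is assumed here; this is illustrative, not our exact implementation):
\begin{verbatim}
import numpy as np

def synthetic_flux(lam, f_lambda, T):
    # filter-averaged flux density of the template through T(lam)
    return np.trapz(f_lambda * T * lam, lam) / np.trapz(T * lam, lam)
\end{verbatim}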
Having learned the topology of the H20-like fluxes of the real, observed galaxies in the COSMOS2020 catalog (Figure \ref{fig:H_umap}), we can transform the H20-like fluxes of the synthetic photometry into that learned space. Figure \ref{fig:H_umap_synthetic} shows the 2-D visualization of the theoretical templates with H20-like bands in the learned space. As an example, data points in the reduced dimension are color-coded by their synthetic $H-$band fluxes in $\mu{\rm Jy}$. Comparing the theoretical templates with the observed data shown in Figure \ref{fig:H_umap} reveals degeneracies among the model galaxies. In this specific example, templates with similar H20-like fluxes have more diverse $H-$band fluxes than the real observations, which can produce degenerate results when template fitting is performed with H20-like bands only. Adding the information of the COSMOS2020 observations as a prior imposes a strong correlation between the observed and missing bands and makes the theoretical templates less degenerate, as shown in Figure \ref{fig:H_umap}. For example, the dark blue arc on the left side of Figure \ref{fig:H_umap_synthetic} does not match its observational counterpart: synthetic templates predict an $H-$band flux of $\sim 0.1$ $\mu{\rm Jy}$ for galaxies in that vicinity (i.e., the dark blue arc), but the real observations show that they have, in fact, an $H-$band flux of $\sim 10$ $\mu{\rm Jy}$. This shows that the information contained in previous observations can add real value to template-fitting analyses.
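This projection can be sketched with the \texttt{umap-learn} package, whose \texttt{transform} method maps new data into an already learned embedding; the flux arrays below are hypothetical placeholders.
\begin{verbatim}
import umap

reducer = umap.UMAP(n_components=2)
# learn the embedding on the observed H20-like fluxes ...
embedding_obs = reducer.fit_transform(obs_h20_fluxes)
# ... then map the synthetic H20-like fluxes into the same space
embedding_syn = reducer.transform(synthetic_h20_fluxes)
\end{verbatim}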
If one adds a predicted band to the template-fitting procedure, its errors should be assigned based on the $1\sigma$ scatter of the predicted flux (dashed green lines in Figure \ref{fig:Euclid_RF}). It is particularly important to properly account for the systematic scatter of the predicted bands in template fitting and to ensure that the predicted bands are not over-weighted in best-template selection. \new{In the following section, we perform a simple template fitting to evaluate the value added by the predicted fluxes.} However, it is worth highlighting that a better approach would be to use a machine learning model trained on the template-fitting results of a galaxy population with well-constrained SEDs, such as COSMOS2020 (Figure \ref{fig:z_RF}).
\begin{figure}
\centering
\includegraphics[width=\linewidth]{umap_flux_synthetic.pdf}
\caption{Similar to Figure \ref{fig:H_umap}, but for synthetic photometric data. The high-dimensional synthetic H20-like data are transformed into the space learned in Figure \ref{fig:H_umap}. The map is color-coded by the synthetic $H-$band fluxes. The dissimilarities between this figure and Figure \ref{fig:H_umap} show that the synthetic models lack information present in the observations.}
\label{fig:H_umap_synthetic}
\end{figure}
\subsection{Template-fitting}
\label{sec:Template-fitting}
\begin{figure*}[]
\centering
\subfloat{{\includegraphics[width=\linewidth]{z_SED.pdf} }}%
\qquad
\subfloat{{\includegraphics[width=\linewidth]{m_SED.pdf} }}%
\caption{\new{Template-fitting results compared against the photometric redshifts and stellar masses of the COSMOS2020 catalog (derived from 29 bands) for three cases: 1) observed $ugrizch1ch2$ bands (left panels), 2) observed $ugrizch1ch2$ + predicted $YJH$ bands (middle panels), and 3) observed $ugrizYJHch1ch2$ bands (right panels).}}%
\label{fig:z_SED}%
\end{figure*}
\new{We perform template fitting for three cases, using 1) H20-like bands, 2) H20-like + predicted $YJH$ bands, and 3) H20+UVISTA-like bands. For this purpose, we split the test sample used in Section \ref{sec:band_prediction} in half to obtain a validation set as well as a new test sample. The validation sample is used to measure the $1\sigma$ scatter of the predicted fluxes (similar to the dashed green lines in Figure \ref{fig:Euclid_RF}). We assign errors to the predicted fluxes of the new test sample based on the $1\sigma$ scatter of the validation sample at a given magnitude. We use the template-fitting code \texttt{LePhare}{} with the same configuration as \cite{Ilbert15}. This configuration differs from the templates used for the COSMOS2020 redshift measurements: in the COSMOS2020 catalog, the photometric redshifts are measured with the templates employed by \cite{Ilbert13}, and the stellar masses are then measured in the same manner as \cite{Ilbert15} at fixed photometric redshifts, whereas here we fit both photometric redshifts and stellar masses simultaneously. Figure \ref{fig:z_SED} presents the template-fitting results for these three cases. We find that the lack of observed near-IR fluxes in template fitting increases the $\sigma_{\rm NMAD}$ and the outlier fraction by 50\% and 80\%, respectively. We also find that adding the predicted fluxes improves the $\sigma_{\rm NMAD}$ and the outlier fraction by 10\% and 25\%, respectively. The predicted fluxes also improve the scatter of the stellar mass measurements by 7\%.}
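\new{A minimal sketch of the error-assignment step, assuming per-band arrays of validation-set true and predicted magnitudes and test-set predictions (all names are hypothetical):}
\begin{verbatim}
import numpy as np

bins = np.arange(18.0, 27.0, 0.5)   # illustrative magnitude binning
idx_val = np.digitize(mag_true_val, bins)
scatter = {i: np.std(mag_pred_val[idx_val == i] -
                     mag_true_val[idx_val == i])
           for i in np.unique(idx_val)}
# error assigned to each predicted test magnitude from its bin's scatter
mag_err_test = np.array([scatter.get(i, np.nan)
                         for i in np.digitize(mag_pred_test, bins)])
\end{verbatim}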
\new{The improvement in template-fitting results from adding predicted fluxes suggests that observationally driven priors on near-IR fluxes can help reduce both the scatter and the outlier fraction of SED-derived properties. Moreover, we find that adding observed near-IR data significantly ($\sim50\%$) improves the template-fitting results, whereas this is not the case for the Random Forest model shown in Figure \ref{fig:RF_comp} ($\sim10\%$ improvement). This suggests that machine learning models are able to fully incorporate the information gathered from extensive surveys and avoid the degeneracies in template-fitting parameters that are inevitable when only a few bands are available.}
\section{Discussion and Summary}
\label{sec:Discussion_Summary}
In this paper, we present an information gain-based method to quantify the importance of wavebands and to find the optimal set of bands needed to constrain the photometric redshifts and physical properties of galaxies. To demonstrate the application of this method, we build a subsample of galaxies from the COSMOS2020 catalog with waveband coverage ($ugrizYJH$ and IRAC/$ch1,ch2$) similar to what will be available in the \textit{Euclid}{} deep fields. For a sample of galaxies with $i<25$ AB mag, we find that, given the availability of $i$-band fluxes, the $r$, $u$, IRAC/$ch2$, and $z$ bands provide most of the information for measuring the photometric redshifts, with importance decreasing from the $r$ band to the $z$ band. We also find that, for the same sample, the IRAC/$ch2$, $Y$, $r$, and $u$ bands are the most relevant bands for stellar mass measurements, in decreasing order of importance. We note that these results should be remeasured for any new sample with different selection criteria. Moreover, we present the relative importance of the wavebands for stellar mass measurements in bins of redshift, since their importance depends on redshift. We also investigate the inter-correlation between the fluxes in different wavebands and use a machine learning technique to predict/simulate the fluxes missing from a survey. As a proof of concept, we apply the method, trained on the COSMOS2020 data, to predict the UVISTA near-IR observations based on H20-like survey data, which include $ugriz$ and Spitzer/IRAC observations. We find that the near-IR bands ($YJH$) can be predicted/simulated from the ground-based ($ugriz$) and mid-IR Spitzer (IRAC/$ch1,ch2$) observations with a $1\sigma$ scatter of $\lesssim 0.2$ mag for galaxies brighter than $24$ AB mag in the near-IR bands. We demonstrate that theoretical templates lack this valuable information, which is already captured by the numerous bands observed in the COSMOS field. We conclude that degeneracies in template fitting can be alleviated if one trains a model on template-fitting solutions for observed galaxies with extensive observations instead of using conventional SED fitting: we show that a model trained on H20-like bands has accuracy comparable to a model trained on H20+UVISTA-like bands, given that the model is trained on the observed galaxy population with a vast number of wavebands.
\cite{Masters15} mapped the high-dimensional color space of COSMOS galaxies in the UVISTA bands using the self-organizing map (SOM) technique \citep{Kohonen1982} and proposed a spectroscopic survey to fully cover the regions of the reduced color space with no spectroscopic redshifts. This survey, C3R2, was awarded 44.5 nights on the Keck telescope to map the color-redshift relation necessary for weak lensing cosmology \citep{Masters17,Masters19}. Later, \cite{Hemmati19} used a SOM to map the color space of theoretical models and used the reduced map as a fast template-fitting technique. \new{In the present work, we use a new technique, UMAP, to create a 2-dimensional representation of a high-dimensional flux distribution. This technique can also be utilized to map the color space of galaxies and study their physical properties (similar to Figure \ref{fig:umap_train_test}), providing an opportunity for further analyses in the future.}
Acquiring data for galaxy surveys over wide areas and a range of wavelengths, with a large number of wavebands, is costly. In this paper, we present a new method based on machine learning algorithms to supplement present and future surveys in their missing bands with information from previous extensive surveys (e.g., COSMOS). It can be used to optimize the observations of future surveys, as well as to predict the photometry of observatories that have ceased operation \citep{Dobbels20}.
\section*{Acknowledgments}
\new{We thank the anonymous referee for providing insightful comments and suggestions that improved the quality of this work.} NC and AC acknowledge support from NASA ADAP 80NSSC20K0437. ID has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Skłodowska-Curie grant agreement No. 896225.
In this section, we focus on the problem of local broadcast in the classical model and present lower bounds for both the progress and acknowledgment times. We emphasize that all of these lower bounds hold even for centralized algorithms and even in the model where processes are provided with a collision detection mechanism; both points only strengthen the results. These lower bounds prove that the optimized decay protocol, as presented in the previous section, is optimal with respect to progress and acknowledgment times in the classical model. They also show that the existing constructions of Ad Hoc Selective Families are optimal. Moreover, in later sections, we use the lower bound on the acknowledgment time in the classical model presented here as a basis for deriving lower bounds on the progress and acknowledgment times in the dual graph model.
\subsection{Progress Time Lower Bound}
In this section, we remark that, following the proof of the $\Omega(\log^2 n)$ lower bound of Alon et al.~\cite{ABLP89} on the time needed to globally broadcast one message in radio networks, with slight modifications one can obtain a lower bound of $\Omega(\log\Delta \log n)$ on the progress bound in the classical model.
\begin{lemma}
For any $n$ and any $\Delta \leq n$, there exists a one-shot setting with a bipartite network of size $n$ and maximum contention at most $\Delta$ such that, for any transmission schedule, it takes at least $\Omega(\log\Delta \log n)$ rounds until each receiver receives at least one message.
\end{lemma}\fullOnly{
\begin{proof}[Proof Outline]
The proof is an easy extension of \cite{ABLP91} to networks with maximum contention $\Delta$. The only change is that instead of choosing the receiver degrees to vary between $n^{0.4}$ and $n^{0.6}$, we choose the degrees between $14$ and $\Theta(\sqrt{\Delta})$. This leads to $\Theta(\log\Delta)$ (instead of $\Theta(\log n)$) different degree classes and, in turn, to the stated bound. The rest of the proof is essentially unaffected.
\end{proof}}
\subsection{Acknowledgment Time Lower Bound}
In this section, we present our lower bound on the acknowledgment time in the classical radio broadcast model.
\begin{theorem} \label{thm:Ack_LB}
In the classical radio broadcast model, for any large enough $n$ and any $\Delta \in [20 \log n, n^{0.1}]$, there exists a one-shot setting with a bipartite network
of size $n$ and maximum receiver degree at most $\Delta$ such that it takes at least $\Omega(\Delta \log n)$ rounds until all receivers have received all messages of their sender neighbors.
\end{theorem}
To prove this theorem, instead of showing that randomized algorithms have low success probability, we prove a stronger, impossibility-style statement: there exists a one-shot setting with the above properties in which, even for a centralized algorithm, it is \emph{not possible} to schedule the transmissions of the nodes within fewer than $\frac{\Delta\log n}{100}$ rounds so that each receiver successfully receives the message of each of its neighboring senders. In particular, this result shows that in this one-shot setting, for any randomized local broadcast algorithm, the probability that an execution shorter than $\frac{\Delta\log n}{100}$ rounds delivers the message of every sender to all of its receiver neighbors is zero.
Let us first present some definitions. A transmission schedule $\sigma$ of length $L(\sigma)$ for a bipartite network is a sequence $\sigma_1, \ldots, \sigma_{L(\sigma)} \subseteq S$ of sets of senders; $u \in \sigma_r$ indicates that sender $u$ transmits its message in round $r$. For a network $G$, we say that a transmission schedule $\sigma$ \emph{covers} $G$ if for every $v \in S$ and $u \in \mathcal{N}_{G}(v)$, there exists a round $r$ such that $\sigma_r \cap \mathcal{N}_{G}(u) = \{v\}$; that is, under transmission schedule $\sigma$, every receiver node receives the messages of all of its sender neighbors. Now we are ready to state the main lemma, which proves our bound.
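For intuition, the covering condition can be checked mechanically. The following sketch is illustrative only: \texttt{sigma} is a list of sets of senders (one per round) and \texttt{N[u]} is the set of sender neighbors of receiver $u$.
\begin{verbatim}
def covers(sigma, N):
    # every sender v must be heard alone by each of its receivers u
    return all(any(round_set & N[u] == {v} for round_set in sigma)
               for u in N for v in N[u])
\end{verbatim}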
\begin{lemma}\label{lem:Ack_Schedules_LB}
For any large enough $n$ and any $\Delta \in [20 \log n, n^{0.1}]$, there exists a bipartite network $G$ of size $n$ with maximum receiver degree at most $\Delta$ such that no transmission schedule $\sigma$ with $L(\sigma) < \frac{\Delta\log n}{100}$ covers $G$.
\end{lemma}
The rest of this subsection is devoted to proving this lemma. As in the previous subsection, our proof uses techniques similar to those of \cite{ABLP91, ABLP89, ABLP92} and utilizes the probabilistic method~\cite{AS00} to show the existence of the network $G$ of \Cref{lem:Ack_Schedules_LB}.
First, we fix an arbitrary $n$ and a $\Delta \in [20 \log n, n^{0.1}]$, and let $\eta= n^{0.12}$ and $m = \eta^8 = n^{0.96}$. Next, we present a probability distribution over a particular family $\mathcal{G}$ of bipartite networks. The common structure of this graph family is as follows. All networks of $\mathcal{G}$ have a fixed set of nodes $V$, partitioned into two nonempty disjoint sets $S$ and $R$, which are respectively the set of senders and the set of receivers, with $|S|=\eta$ and $|R|=m$. The total number of nodes in these two sets is $\eta+m = n^{0.12}+n^{0.96}$; we adjust the number of nodes to exactly $n$ by adding enough isolated senders to the graph. Instead of defining the probability mass function of this distribution explicitly, we describe the process that samples networks from $\mathcal{G}$: a random sample network is created by independently putting an edge between each $s \in S$ and $r \in R$ with probability $\frac{\Delta}{2\eta}$. Given a random network from this distribution, we first show that, with high probability, the maximum receiver degree is at most $\Delta$, as desired.
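The sampling process can be summarized by the following illustrative sketch (senders are $0,\ldots,\eta-1$, and the map returns the sender neighborhood of each receiver):
\begin{verbatim}
import random

def sample_network(eta, m, Delta):
    p = Delta / (2 * eta)   # edge probability
    # N[u] = set of sender neighbors of receiver u
    return {u: {v for v in range(eta) if random.random() < p}
            for u in range(m)}
\end{verbatim}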
\begin{lemma} \label{lem:degrees}For a random sample graph $G \in \mathcal{G}$, with probability at least $1-\frac{1}{n^2}$, every receiver node $r \in R$ has degree at most $\Delta$.
\end{lemma}
\begin{proof} For each $r \in R$, let $X_G(r)$ denote the degree of node $r$ in the random sample graph $G$. Then, $\mathbb{E}[X_G(r)] = \eta \cdot \frac{\Delta}{2\eta} = \frac{\Delta}{2}$. Moreover, since the edges are added independently, a Chernoff bound gives $\Pr[X_G(r) \geq \Delta] \leq e^{-\frac{\Delta}{6}}$. Using a union bound over all choices of the receiver node $r$, and noting that $\Delta \geq 20\log n$, we get that
\begin{eqnarray}
\Pr [\exists r \in R \; s.t.\; X_G(r) \geq \Delta] &\leq& \eta^{8} \cdot e^{-\frac{\Delta}{6}} = e^{8\log \eta -\frac{\Delta}{6}} = e^{0.96 \log n -\frac{\Delta}{6}} \nonumber \\
&<& e^{0.96 \log n -3 \log n} \leq e^{-2\log n} =\frac{1}{n^2}. \nonumber
\end{eqnarray}
\end{proof}
\medskip
Now, we study the behavior of transmission schedules over random graphs drawn from $\mathcal{G}$. Call a transmission schedule $\sigma$ \emph{short} if $L(\sigma) < \frac{\Delta\log n}{100}$. Moreover, for any fixed short transmission schedule $\sigma$, let $P(\sigma)$ be the probability that $\sigma$ covers a random graph $G \in \mathcal{G}$. By a union bound, the probability that some short transmission schedule covers a random graph $G \in \mathcal{G}$ is at most the sum of $P(\sigma)$ over all short transmission schedules $\sigma$; we call this probability \emph{the total coverage probability}. In order to prove the lower bound, we show \Cref{lem:probs} about the total coverage probability. Note that, given Lemmas \ref{lem:degrees} and \ref{lem:probs}, the probabilistic method~\cite{AS00} yields a network $G \in \mathcal{G}$ that has maximum receiver degree at most $\Delta$ and is covered by no short transmission schedule. This completes the proof of \Cref{lem:Ack_Schedules_LB}.
\begin{lemma} \label{lem:probs} $\sum_{\sigma \; s.t. L(\sigma) <\frac{\Delta\log n}{100}} P(\sigma) \leq e^{-\sqrt{n}} \ll e^{-2\log n}=\frac{1}{n^2}$.
\end{lemma}
\begin{proof}
Note that the total number of distinct short transmission schedules is less than $2^{\eta^3}$: in each round, there are $2^\eta$ options for the subset of senders that transmit, and each short transmission schedule has at most $\frac{\Delta\log n}{100} < \eta^2$ rounds, so the number of short transmission schedules is less than $(2^{\eta})^{\eta^2} = 2^{\eta^3}$. Hence, since the total number of short transmission schedules is less than $2^{\eta^3}=2^{n^{0.36}}$, in order to prove that the total coverage probability is at most $e^{-\sqrt{n}}$, it is enough to show that for each short transmission schedule $\sigma$, $P(\sigma) \leq e^{-n^{0.72}}$; the summation would then be at most $2^{n^{0.36}} \cdot e^{-n^{0.72}} \leq e^{n^{0.36}-n^{0.72}} \leq e^{-n^{0.5}} = e^{-\sqrt{n}}$. Thus, it remains to prove that for each short transmission schedule $\sigma$, $P(\sigma)\leq e^{-n^{0.72}}$.
Fix an arbitrary short transmission schedule $\sigma$. For each round $t$ of $\sigma$, let $N(t)$ denote the number of senders that transmit in round $t$, and call round $t$ an \emph{isolator} if $N(t)=1$. For each sender $s\in S$, if there exists an isolator round of $\sigma$ in which only $s$ transmits, call $s$ \emph{lost}. Since each isolator round makes at most one sender lost, and $L(\sigma) \leq \frac{\Delta \log n}{100} \leq \frac{n^{0.1} \log n}{100} < \frac{n^{0.12}}{2} = \frac{\eta}{2}$, there are at least $\frac{\eta}{2}$ senders that \emph{are not lost}.
For each not-lost sender $s$, we define a potential function $\Phi(s) = \sum_{t\in T_s} \frac{1}{N(t)}$, where $T_s$ is the set of rounds in which $s$ transmits. Note that in each round $t$, the total potential given to the not-lost senders is at most $N(t) \cdot \frac{1}{N(t)} = 1$. Hence, the total potential, summed up over all rounds, is at most $L(\sigma) \leq \frac{\Delta \log n}{100} = \frac{\Delta \log \eta}{12}$. Therefore, since there are at least $\frac{\eta}{2}$ not-lost senders, there exists a not-lost sender $s^*$ for which $\Phi(s^*) \leq \frac{\Delta \log \eta}{12} \cdot \frac{2}{\eta} = \frac{\Delta \log \eta}{6\eta}$.
Now we focus on the sender $s^*$ and the rounds $T_{s^*}$. We show that, for each receiver $r \in R$, with probability at least $\frac{1}{\eta^2}$, node $r$ is a neighbor of $s^*$ and does not receive the message of $s^{*}$. First, note that the probability that $r$ is a neighbor of $s^*$ is $\frac{\Delta}{2\eta} > \frac{1}{\eta}$. Next, for each $t \in T_{s^*}$, the probability that $r$ is connected to some sender other than $s^*$ that transmits in round $t$ is $1 - (1-\frac{\Delta}{2\eta})^{N(t)-1} \geq 1 - e^{-\frac{\Delta}{2\eta} \cdot (N(t)-1)} \geq 1 - e^{-\frac{\Delta}{4\eta} \cdot N(t)} \geq e^{-\frac{4\eta}{\Delta} \cdot \frac{1}{N(t)}}$, where the second inequality uses $N(t)\geq 2$ (which holds because $s^*$ is not lost) and the last inequality uses $1-e^{-x} \geq e^{-1/x}$ for $x>0$. Thus, by the FKG inequality~\cite[Chapter 6]{AS00}, the probability that this happens in every round $t \in T_{s^{*}}$ is at least $e^{-\sum_{t\in T_{s^*}} \frac{4\eta}{\Delta} \cdot \frac{1}{N(t)}} = e^{-\frac{4\eta}{\Delta} \cdot \Phi(s^*)}$. By the choice of $s^*$, this probability is greater than $e^{-\log \eta} = \frac{1}{\eta}$.
Hence, for each receiver $r$, the probability that $r$ is a neighbor of $s^{*}$ but never receives the message of $s^*$ is greater than $\frac{1}{\eta} \cdot \frac{1}{\eta} = \frac{1}{\eta^2}$. Since the edges of different receivers are chosen independently, the probability that no receiver $r$ satisfies the above conditions is at most $(1-\frac{1}{\eta^2})^{\eta^8} \leq e^{-\eta^6}$. This shows that $P(\sigma)\leq e^{-\eta^6} = e^{-n^{0.72}}$ and thus completes the proof.
\end{proof}
\iffalse
\begin{theorem} \label{thm:Ack_LB}
In the classical radio broadcast model, for any large enough $n$ and any $\Delta \in [20 \log n, n^{0.1}]$, there exists a one-shot setting with a bipartite network
of size $n$ and maximum receiver degree at most $\Delta$ such that it takes at least $ \Omega(\frac{\Delta \log n})$ rounds until all receivers have received all messages of their sender neighbors.
\end{theorem}
In other words, in this one-shot setting, any algorithm that solves the local broadcast problem has a acknowledgment bound of $\Omega(\frac{\Delta \log n}{\log^2 \log n})$. To prove this theorem, instead of showing that randomized algorithms have low success probability, we show a stronger variant by proving an impossibility result: we prove that there exists a one-shot setting with the above properties such that, even with a centralized algorithm, it is \emph{not possible} to schedule transmissions of nodes in $o(\frac{\Delta \log n}{\log^2 \log n})$ rounds such that each receiver receives the message of each of its neighboring senders successfully. In particular, this result shows that in this one-shot setting, for any randomized local broadcast algorithm, the probability that an execution shorter than $o(\frac{\Delta \log n}{\log^2 \log n})$ rounds successfully delivers message of each sender to all of its receiver neighbors is zero.
In order to make this formal, let us define a transmission schedule $\sigma$ of length $L(\sigma)$ for a bipartite network to be a sequence $\sigma_1, \ldots, \sigma_{L(\sigma)} \subseteq S$ of senders. Having a sender $u \in \sigma_r$ indicates that at round $r$ the sender $u$ is transmitting its message. For a network $G$, we say that transmission schedule $\sigma$ \emph{covers} $G$ if for every $v \in S$ and $u \in \mathcal{N}_{G}(v)$, there exists a round $r$ such that $\sigma_r \cap \mathcal{N}_{G}(v) = \{u\}$, that is using transmission schedule $\sigma$, every receiver node receives all the messages of all of its sender neighbors. Also, we say that a transmission schedule $\sigma$ is \emph{short} if $L(\sigma)= o(\frac{\Delta \log n}{\log^2 \log n})$. With these notations, we are ready to state the main result of this section.
\begin{lemma}\label{lem:Ack_Schedules_LB}
For any large enough $n$ and $\Delta \in [20 \log n, n^{0.1}]$, there exists a bipartite network $G$ with size $n$ and maximum receiver degree at most $\Delta$ such that no short transmission schedule covers $G$.
\end{lemma}
\fullOnly{
Before getting to the details of proof of Lemma \ref{lem:Ack_Schedules_LB}, let us get the proof of \Cref{thm:Ack_LB} out of way, by finishing it assuming that Lemma \ref{lem:Ack_Schedules_LB} is correct.
\begin{proof}[Proof of \Cref{thm:Ack_LB}]
Suppose that $G$ is the network implied by \Cref{lem:Ack_Schedules_LB}. For the sake of contradiction, suppose that there exists an algorithm $A$ with acknowledgment bound $o(\frac{\Delta \log n}{\log^2 \log n})$ in this network. Then, there exists an execution $\alpha$ of $A$ with length $o(\frac{\Delta \log n}{\log^2 \log n})$ rounds such that during this execution, each receiver receives all the messages of its neighbors. This implies that the transmission schedule $\sigma$ of the execution $\alpha$ covers $G$. This is in contradiction to \Cref{lem:Ack_Schedules_LB}.
\end{proof}}
\begin{proof}[Proof Sketch for \Cref{lem:Ack_Schedules_LB}]
Fix an arbitrary $n$ and a $\Delta \in [20 \log n, n^{0.1}]$. Also let \fullOnly{$\rho_1=\frac{1}{10}$, }$\eta= n^{0.1}$, $m = \eta^9$. We use the probabilistic method~\cite{AS00} to show the existence of the network $G$ with the aforementioned properties.
First, we present a probability distribution over a particular family of bipartite networks with maximum receiver degree $\Delta$. To present this probability distribution, we show how to draw a random sample from it. Before getting to the details of this sampling, let us present the structure of this family. All networks of this family have a fixed set of nodes $V$. Moreover, $V$ is partitioned into two nonempty disjoint sets $S$ and $R$, which are respectively the set of senders and the set of receivers. We have $|S|=\eta$ and $|R|=m$. The total number of nodes in these two sets is $\eta+m = n^{0.1}+n^{0.9}$. We adjust the number of nodes to exactly $n$ by adding enough isolated senders to the graph. To draw a random sample from this family, each receiver node $u \in R$ chooses $\Delta$ random senders from $S$ uniformly (with replacement) as its neighbors. Also, choices of different receivers are independent of each other.
Having this probability distribution, we study the behavior of short transmission schedules over random graphs drawn from this distribution. For each fixed transmission schedule $\sigma$, let $P(\sigma)$ be the probability that $\sigma$ covers a random graph $G$. Using a union bound, we can infer that for a random graph $G$, the probability that there exists a short transmission schedule that covers $G$ is at most sum of the $P(\sigma)$-s, when $\sigma$ ranges over all the short transmission schedules. Let us call this probability \emph{the total coverage probability}.
In order to prove the lower bound, we show that ``the total coverage probability is in $e^{-\Omega(\eta)}$" and therefore, less than $1$. Proving this claim completes the proof as with this claim, using the probabilistic method~\cite{AS00}, we can conclude that there exists a bipartite network with maximum receiver degree of at most $\Delta$ such that no short transmission schedule covers it. To prove that the total coverage probability is $e^{-\Omega(\eta)}$, since the total number of short transmission schedules is less than $2^{\eta^3}$, it is enough to show that for each short transmission schedule $\sigma$, $P(\sigma)= e^{-\Omega(\eta^5)}$.
Proving that for any fixed short schedule $\sigma$, $P(\sigma)= e^{-\Omega(\eta^5)}$ is the core part of the proof and also the hardest one. For this part, we use techniques similar to those that we are using in \cite{GHK12} for getting a lower bound for multicast in known radio networks. Let us first present some definitions. Fix a short transmission schedule $\sigma$. For each round $r$ of $\sigma$, we say that this round is \emph{lightweight} if $|\sigma(r)| <\frac{\eta}{2\Delt \log \eta}$. Since $\sigma$ is a short transmission schedule, i.e., $L(\sigma)< \Delt \log \eta$, the total number of senders that transmit in at least one lightweight round of $\sigma$ is less than $\frac{\eta}{2}$. Therefore, there are at least $\frac{\eta}{2}$ senders that never transmit in lightweight rounds of $\sigma$. We call these senders the \emph{principal} senders of $\sigma$.
Throughout the rest of the proof, we focus on the principal senders of $\sigma$. For this, we divide the short transmission schedules into two disjoint categories, \emph{adequate} and \emph{inadequate}. We say that $\sigma$ is an \emph{adequate} transmission schedule if throughout $\sigma$, each principal node transmits in at least $\frac{\log \eta}{\log \log \eta}$ rounds. Otherwise we say that $\sigma$ is an \emph{inadequate} transmission schedule. We study inadequate and adequate transmission schedules in two separate lemmas (Lemmas \ref{lem:inad} and \ref{lem:ad}), and prove that in each case $P(\sigma)= e^{-\Omega(\eta^5)}$.
\end{proof}
\fullOnly{\begin{remark} The fact that the total coverage probability is in $e^{-\Omega(\eta)}$ actually means that for most of the bipartite networks drawn from the aforementioned distribution, there does not exist a short transmission schedule to cover them.
\end{remark}}
\begin{lemma}\label{lem:inad}
For each inadequate short transmission schedule $\sigma$, the probability that $\sigma$ covers a random graph is $e^{-\Omega(\eta^5)}$, i.e., $P(\sigma)= e^{-\Omega(\eta^5)}$.
\end{lemma}
\shortOnly{
\begin{proof}[Proof Sketch] Let $\sigma$ be an arbitrary inadequate short transmission schedule. Since $\sigma$ is inadequate, there exists a principal sender node $v$ that transmits in less than $\frac{\log \eta}{\log \log \eta}$ rounds of $\sigma$. Also, since $v$ is a principal sender, it does not transmit in any lightweight round. That is, in each round that $v$ transmits, the number of sender nodes that transmit is at least $\frac{\eta}{2\Delt \log \eta}$. We show that in a random graph, $v$ is unlikely to deliver its message to all its neighbors, i.e., that in a random graph, with probability $1-e^{-\Omega(\eta^5)}$, there exists a receiver neighbor of $v$ that does not receive the message of $v$.
A formal proof for this claim requires rather careful probability arguments but the main intuition is as follows. In each round that $v$ transmits, there is a high contention, i.e., at least $\frac{\eta}{2\Delt \log \eta}$ senders transmit. Thus, in a random graph, in most of those rounds, neighbors of $v$ receive collisions. On the other hand, the number of rounds that $v$ transmits in them is at most $\frac{\log \eta}{\log \log \eta}$. These two observations suggest that it is unlikely for all neighbors of $v$ to receive its message.
\end{proof}
}
\fullOnly{
\begin{proof}
Let $\sigma$ be an arbitrary inadequate short transmission schedule. By definition of inadequate transmission schedules, there exists a principal sender node $v$ that transmits in less than $\frac{\log \eta}{\log \log \eta}$ rounds of $\sigma$. Also, since $v$ is a principal sender, it does not transmit in any lightweight round. Now we focus on sender $v$. To show that the probability of $\sigma$ covering a random graph is $e^{-\Omega(\eta^5)}$, we show that in a random graph, with probability $1- e^{-\Omega(\eta^5)}$, there exists a receiver neighbor of $v$ that does not receive the message of $v$.
Let us say that the set of rounds of $\sigma$ that sender $v$ transmits in them is $R_{\sigma}(v)$ and we have $\ell_\sigma(v) = |R_{\sigma}(v)|$. Note, that by the choice of node $v$, we have $\ell_\sigma(v) < \frac{\log n}{\log \log \eta}$. Also, let $T_{\sigma}^{i}(v)$ to be the set of sender nodes other than $v$ that transmit in the $i^{th}$ round that $v$ transmits. Note, that since $v$ is a principal sender, for each $i$, we have $|T_{\sigma}^{i}(v)| > \frac{\eta}{2\Delt \log \eta} -1$.
Now consider an arbitrary receiver node $u$. We first argue that the probability that $u$ is a neighbor of $v$ and does not receive message of $v$ is at least $\frac{1}{\eta^4}$. For this purpose, recall that in the construction of random graphs, $u$ picks $\Delt$ sender nodes uniformly at random (with replacement) as its neighbors. Therefore, the probability that $u$ chooses its first neighbor to be sender $v$ is $\frac{1}{\eta}$. So, because of the independence in the choices that $u$ makes for selection of its different neighbors, in order to prove the claim, it is enough to show that the probability that $u$ does not receive message of $v$, conditioned on that the first neighbor of $u$ is sender $v$, is at least $\frac{1}{\eta^3}$. Therefore, for the rest of this part, suppose that the first neighbor of $u$ is indeed sender $v$. Now note that other than its first choice which led to being adjacent with $v$, $u$ has $\Delt-1$ other choices to make about its neighbors. Divide this $\Delt-1$ choices into $\ell_\sigma(v)$ sets of size (roughly) $\frac{\Delt-1}{\ell_\sigma(v)}$ and let $X^i$ represent the $i^{th}$ set. In order to show that $u$ does not receive message of $v$ with probability at least $\frac{1}{\eta^3}$, it is enough to show that with probability at least $\frac{1}{\eta^3}$, for each $i$, we have $X^i \cap T_{\sigma}^{i}(v) \neq \emptyset$. This is because in that case, $u$ receives collision in all the rounds that $v$ transmits and therefore, it does not receive the message of $v$. Now, for each $i$, the probability that $X^i \cap T_{\sigma}^{i}(v) = \emptyset$ is (modulo plus or minus ones)
\[ (1 -\frac{|T_{\sigma}^{i}(v)|}{\eta})^{|X^i|} \geq (1 -\frac{1}{2\Delt \log \eta})^{\frac{\Delt \log \log \eta}{\log \eta}} \approx e^{-\frac{\log \log \eta}{2 \log^2 \eta}} \approx 1- \frac{\log \log \eta}{2 \log^2 \eta}
\]
Therefore, since choices of different $X^i$ sets are independent, events of form $X^i \cap T_{\sigma}^{i}(v) \neq \emptyset$ are independent for different $i$-s and therefore, the probability that this event happens for all values of $i$ at least
\begin{eqnarray}
\bigg(1 - (1- \frac{\log \log \eta}{2 \log^2 \eta})\bigg)^{\frac{2\log \eta}{\log \log \eta}} = (\frac{\log \log \eta}{2 \log^2 \eta})^{\frac{\log \eta}{\log \log \eta}} = \nonumber \\
e^{-\frac{\log \eta}{\log \log \eta} \; \cdot \; ( 2 \log \log \eta - \log \log \log \eta + \log 2)} > e^{-2\log \eta} > \frac{1}{\eta^3} \nonumber
\end{eqnarray}
So, to conclude, so far we showed that for each receiver node $u$, the probability that $u$ is a neighbor of $v$ and does not receive message of $v$ is at least $\frac{1}{\eta^4}$. \\
Now, we have $m = \eta^9$ receiver nodes. Therefore, the probability that there exists at least one of them that is a neighbor of $v$ and does not receive message of $v$ is at least
\[
1-(1-\frac{1}{\eta^4})^{\eta^9} \approx 1- e^{-\eta^5}
\]
Note, that if in a graph, there exists a receiver that is a neighbor of $v$ and does not receive message of $v$, then $\sigma$ does not cover that graph. Therefore, the above equation completes the proof by showing that the probability that $\sigma$ covers a random graph is at most $e^{- \eta^5}$.
\end{proof}
}
\fullOnly{
Having this result about inadequate short transmission schedules, it is time to study adequate short transmission schedules. However before that, let us just state an optimization lemma that would be used throughout the proof of the result about adequate short transmission schedules.
\begin{lemma} \label{lem:lagrange} The minimum value of function $f(\mathbf{x})=\sum_{i=1}^{p} \frac{1}{x_i}$ constrained to condition $\sum_{i=1}^{p} e^{-\alpha x_i} = C$ is achieved when all $x_i$-s are equal. Also, if $p > e \, C$, the minimum value of function $f$ is monotonically increasing in both $p$ and $C$.
\end{lemma}
\begin{proof}
First, suppose that $p$ is fixed. We prove the first part of lemma using Lagrange multiplier method. Define $\Lambda(\mathbf{x}, \lambda) = \sum_{i=1}^{p} \frac{1}{x_i} + \lambda (\sum_{i=1}^{p} e^{-\alpha x_i} - C) $ to be our Lagrange function. We know that if vector $\mathbf{x^\prime}$ is optimal, there exists a $\lambda^\prime$ such that $(\mathbf{x^\prime,\lambda^\prime})$ is a stationary point for the Lagrange function. This means that $1\leq i \leq p $, $\frac{-1}{x_i^{\prime 2}}=\lambda^\prime \alpha (e^{-\alpha x_i^\prime})$. Then, using the $\sum_{i=1}^{p} e^{-\alpha x_i} = C$ constraint, we get the following new constraint:
$$\sum_{i=1}^{p} \frac{1}{x_i^{\prime 2}}=-\lambda^\prime \alpha C$$
With another use of Lagrange multiplier method with same objective function and this new constraint, we have that
$$\forall i \; 1\leq i \leq p \textit{, we have } \; -\frac{1}{x^{\prime 2}} = \frac{\gamma}{x^{\prime 3}} $$
which means that for every $i, 1\leq i \leq p$, we have $x^\prime = -\gamma$. This shows that in the optimum solution, all $x_i$-s are equal and therefore, it completes the proof of first part.
Using the first part, one can easily see that for every $p \in \mathbb{N}$, the minimum value of $f(\mathbf{x})$ is equal to $\frac{p\alpha}{\log(p/C)}.$ It is clear that if $p > e\,C$, this function is monotonically increasing in both $p$ and $C$.
\end{proof}
Now, we show the counterpart of the above lemma for adequate short transmission schedules.
}
\begin{lemma}\label{lem:ad}
For each adequate short transmission schedule $\sigma$, the probability that $\sigma$ covers a random graph is $e^{-\Omega(\eta^5)}$, i.e., $P(\sigma)= e^{-\Omega(\eta^5)}$.
\end{lemma}
\shortOnly{
\begin{proof}[Proof Sketch] Let $\sigma$ be an arbitrary adequate short transmission schedule. Recall that principal senders of $\sigma$ are defined as senders that do not transmit in lightweight rounds of $\sigma$. Let us say that a message is a principal message if its sender is a principal sender. Note, that in a random graph, in expectation, each receiver is adjacent to at least $\frac{\Delt}{2}$ principal senders. Therefore, if $\sigma$ covers a random graph, each receiver should receive, in expectation, at least $\frac{\Delt}{2}$ principal messages. Hence, since there are $m$ different receivers, if $\sigma$ covers a random graph, there are, in expectation, at least $\frac{m\Delt}{2}$ successful deliveries. Then, using a Chernoff bound, we can infer that if $\sigma$ covers a random graph, with probability $1-e^{-\Omega(\eta^9)}$, there are at least $\frac{m\Delt}{4}$ successful deliveries. To prove the lemma, we show that the probability that for a random graph, $\sigma$ has $\frac{m\Delt}{4}$ successful deliveries is $e^{-\Omega(\eta^5)}$. Then, a union bound completes the proof of lemma.
Hence, the remaining part of the proof is to show that on a random graph, with probability $e^{-\Omega(\eta^5)}$, $\sigma$ has less than $\frac{m\Delt}{4}$ successful deliveries. This part is the core part of the proof of this lemma. The formal reasoning for this part requires a careful potential argument but the intuition is based on the following simple observations. Suppose that $\sigma$ has at least $\frac{m\Delt}{4}$ successful deliveries with probability $e^{-\Omega(\eta^5)}$. Since $\sigma$ is an adequate transmission schedule, each principal sender transmits in at least $\frac{\log \eta}{\log \log \eta}$ rounds and because there are at least $\frac{\eta}{2}$ principal senders, there has to be at least $\frac{\eta \log \eta}{2\log \log \eta}$ transmissions by principal senders. Now in each round $\sigma$, the number of transmitting senders should be at most $\Theta(\frac{\eta}{\Delta})$, or otherwise, the number of successful deliveries drops down exponentially as a function of the multiplicative distance from $\frac{\eta}{\Delta}$, and hence the total sum of them over all the rounds would not accumulate to $\frac{m \Delta}{4}$. If we assume that in each round roughly at most $\Theta(\frac{\eta}{\Delta})$ senders transmit, we directly get a lower bound of $\frac{\frac{\eta \log \eta}{2\log \log \eta}}{\frac{\eta}{\Delta}} = \Theta(\frac{\Delta\log\eta}{\log \log \eta})$ on the number of rounds of $\sigma$ which is in contradiction with the fact that $\sigma$ is short. The formal proof of this part replaces this simplistic assumption by a more careful argument that, essentially, takes all the possibilities of the number of transmitters in each of the rounds into consideration, using a potential argument. This formal argument is omitted due to the space considerations.
\end{proof}
}
\fullOnly{\begin{proof}
Let $\sigma$ be an arbitrary adequate short transmission schedule. Recall that principal senders of $\sigma$ are defined as senders that do not transmit in lightweight rounds of $\sigma$. Let us say that a message is a principal message if its sender is a principal sender. Now note that in a random graph, in expectation, each receiver is adjacent to at least $\frac{\Delt}{2}$ principal senders. Therefore, if $\sigma$ covers a random graph, each receiver should receive, in expectation, at least $\frac{\Delt}{2}$ principal messages. Hence, since there are $m$ different receivers, if $\sigma$ covers a random graph, there are, in expectation, at least $\frac{m\Delt}{2}$ successful deliveries
. Then, using a Chernoff bound, we can infer that if $\sigma$ covers a random graph, with probability $1-e^{-\Omega(\eta^9)}$, there are at least $\frac{m\Delt}{4}$ successful deliveries.
To prove the lemma, we show that the probability that for a random graph, $\sigma$ has $\frac{m\Delt}{4}$ successful deliverie
is $e^{-\Omega(\eta^5)}$. Then, using a union bound, and noting the aforementioned Chernoff bound, this would complete the proof of lemma.
For the sake of analysis, we define an artificial form of transmission schedules, namely \emph{Solitude Transmission Schedules} (STS), and transform $\sigma$ into a STS $\wp$ with certain properties. In these artificial schedules, we use dummy `collision' messages and if a receiver node $u$ receives a collision message, $u$ treats this message as a real collision. Roughly speaking, a STS is the same as a normal transmission schedule with the exception of only one additional constraint. In a STS, in each round, only one sender node transmits its actual message and the rest of the transmitting senders transmit the dummy `collision' message. More precisely, each STS $\phi$ determines two things for each round $r$: (1) The set $S_{\sigma}(r)$ of senders that transmit in round $r$, (2) the single sender $w_{\sigma}(r)$ that transmits its own message in round $r$. As before, length of each STS $\phi$ is just the number of rounds that it has and is denoted by $L(\phi)$. Also, for each STS $\phi$, we define a potential $\Psi(\phi)$ for $\phi$ and set $\Psi(\phi) = \sum_{r=1}^{L(\phi)} 1/|S_{\phi}(r)|$.
\noindent We transform $\sigma$ into a STS $\wp$ such that
\begin{itemize}
\item[(a)] The successful deliveries in $\sigma$ are exactly the same as those in $\wp$.
\item[(b)] The potential of $\wp$ is equal to the length of $\sigma$, \emph{i.e.}, $\Psi(\wp)= L(\sigma)$.
\item[(c)] The length of $\wp$ is equal to the total number of transmissions throughout $\sigma$.
\end{itemize}
This transformation is done as following. Consider each round $r$ of transmission schedule $\sigma$ and let $T_{\sigma}(r)$ be the set of senders that transmit in round $r$ of $\sigma$. Then, in $\wp$, we add $|T_{\sigma}(r)|$ rounds and these rounds will imitate round $r$ of $\sigma$. We call these rounds the imitators of round $r$ of $\sigma$ in $\wp$. For each round $r^\prime$ of $\wp$ that imitates round $r$ of $\sigma$, the set of nodes that transmit is exactly the same as that of round $r$ of $\sigma$, \emph{i.e.}, $S_{\wp}(r^\prime) = S_{\sigma}(r)$. Also, we assign each imitator round $r^\prime$ exactly one sender of $T_{\sigma}(r)$ (and vice versa). Then, for each such round $r^\prime$, exactly that assigned sender transmits its own message, while the rest of $S_{\wp}$ transmit the dummy `collision' message.
Now, we study why this transformation has the desired properties. First, it is clear that in each round $r^\prime$ of $\wp$ that imitates round $r$ of $\sigma$, if the single node that transmits its own message in $r^\prime$ is $v$, the successful delivery are to those receivers that would receive message of $v$ in round $r$ of $\sigma$. Therefore, going over different rounds $r^\prime$ of $\wp$ that imitates round $r$ of $\sigma$, in union, the successful deliveries are to those receivers that had a successful delivery in round $r$ of $\sigma$. Hence, the total set of successful deliveries in $\wp$ is exactly the same as that in $\sigma$ and therefore, we have property (a). To see the property (b), note that for each round $r$ of $\sigma$, the sum of terms $1/|S_{\wp}(r^\prime)|$ when $r^\prime$ ranges over rounds in $\wp$ that imitate round $r$ of $\sigma$ is one. Therefore, the total potential of $\phi$ is equal to the number of rounds of $\sigma$ which proves property (b). Property (c) is clear as for each transmission in $\sigma$, we have exactly one round in $\wp$.
Having this transformation at hand, from now on, we work with transformed version of $\sigma$ which is solitude transmission schedule $\wp$. Therefore, to complete the proof of lemma, we show that the probability that for a random graph, STS $\wp$ has $\frac{m\Delt}{2} = \frac{\eta^9\Delt}{4}$ successful deliveries is in $e^{-\Omega(\eta^5)}$.
First note that by property (c) and the fact that there at most $\eta$ senders, we get that the new schedule $\wp$ is at most a factor $\eta$ longer than the original schedule $\sigma$. Since $\sigma$ is a short schedule, we get that the total number of rounds of $\wp$ is less than $\eta^2 \log \eta$.
Now, we study the number of successful deliveries in $\wp$. For each round $r$ of $\wp$, for each receiver $u$, the probability that $u$ receives a message successfully in round $r$ is equal to
\[ \frac{\Delt}{\eta}\; (1- \frac{|S_{\wp(r)}|}{\eta})^{\Delt-1} \approx \frac{\Delt}{\eta}\; e^{- \, \frac{\Delt|S_{\wp(r)}|}{\eta}}
\]
Therefore, since there are $m = \eta^9$ receivers, in expectation, the number of successful deliveries in round $r$ is
\[ \mu_{\wp}(r) = \frac{\eta^9 \Delt}{\eta} \;e^{- \, \frac{\Delt|S_{\wp(r)}|}{\eta}}
\]
Now, let $\mu_{\wp}^{*}(r)= \max(\mu_{\wp}(r), \eta^5)$. Using a Chernoff bound, we can infer that for each round $r$, the probability that the number of successful deliveries in round $r$ is greater than $2\mu_{\wp}^{*}(r)$ is in $e^{-\Omega(\eta^5)}$. Therefore, using a Union bound and noting the fact that the number of rounds of $\wp$ is at most $\eta^2 \log \eta$, we get that with probability at least $1-e^{-\Omega(\eta^5)}$, the total number of successful deliveries throughout $\wp$ is at most $\sum_{r= 1}^{L(\wp)} 2\mu_{\wp}^{*}(r)$. Now, since for each $r$, we have $\mu_{\wp}^{*}(r)= \max(\mu_{\wp}(r), \eta^5)$, we have that for each $r$, $\mu_{\wp}^{*}(r)\leq \mu_{\wp}(r)+ \eta^5$. Hence, we can conclude that with probability at least $1-O(e^{-\eta^5})$, the total number of successful deliveries throughout $\wp$ is at most
\[\sum_{r= 1}^{L(\wp)} 2(\mu_{\wp}(r)+ \eta^5) = \sum_{r= 1}^{L(\wp)} 2\mu_{\wp}(r) + L(\wp)\, \eta^5 \leq \sum_{r= 1}^{L(\wp)} 2\mu_{\wp}(r) + \eta^7 \log \eta \]
The last inequality is because we know that $ L(\wp) < \eta^2 \log \eta$. Now since we only needed to show that the probability that $\wp$ has $\frac{\eta^9\Delt}{4}$ successful deliveries is $e^{-\Omega(\eta^5)}$, it is enough to show that
\[ \sum_{r= 1}^{L(\wp)} 2\mu_{\wp}(r) < \frac{\eta^9\Delt}{16}
\]
which, using definition of $\mu_{\wp}(r)$, is equivalent to showing
\begin{eqnarray}\label{eq:goal} \sum_{r= 1}^{L(\wp)} \;e^{- \, \frac{\Delt\, |S_{\wp(r)}|}{\eta}} \; <\; \frac{\eta}{16}
\end{eqnarray}
In order to show this, let us first see two simple facts about $\wp$. First note that since $\sigma$ was a short transmission schedule, we had $L(\sigma)< \frac{\Delt \log \eta}{\log^2 \log \eta}$. Therefore, by property (b) of the transformation, we have $\Psi(\wp)< \frac{\Delt \log \eta}{\log^2 \log \eta}$. Also, since $\sigma$ was an adequate transmission schedule, each of its principal senders transmits at least $\frac{\log \eta}{\log \log \eta}$. Therefore, since number of principal senders in each transmission schedule is at least $\frac{\eta}{2}$, and noting that by property (c) of the transformation, $L(\wp)$ is equal to the total number of transmissions in $\sigma$, we get that $L(\wp) \geq \frac{\eta \log \eta}{2 \log \log \eta}$. Having these two facts about $\wp$, namely that (i) $\Psi(\wp)< \frac{\Delt \log \eta}{\log^2 \log \eta}$ and (ii) $L(\wp) \geq \frac{\eta \log \eta}{2 \log \log \eta}$, we prove Inequality (\ref{eq:goal}).
For this purpose, we use Lemma \ref{lem:lagrange} which is simply an optimization lemma. If in this lemma, we set $\alpha = \frac{\Delt}{\eta}$, $p = L(\wp)$ and let $C$ vary in range $[\frac{\eta}{16}, \infty]$ (as contours of possible values for the LHS of Inequality (\ref{eq:goal})), we get that the minimum value for $\Psi(\wp) = \sum_{r=1}^{L(\wp)} 1/|S_{\wp}(r)|$ is achieved when all $|S_{\wp}(r)|$-s are equal and $p=L(\wp)$ and $C$ are as small as possible, \emph{i.e.}, $L(\wp) \geq \frac{\eta \log \eta}{2 \log \log \eta}$ and $C= \frac{\eta}{16}$. Therefore, if we let $s_{\wp}$ be the common value of $|S_{\wp}(r)|$-s, we have
\[
\frac{\eta \log \eta}{2 \log \log \eta} \;e^{- \, \frac{\Delt \,\cdot \,s_{\wp}}{\eta}} = \frac{\eta}{16}
\]
which means that
\[
s_{\wp} = \frac{\eta}{\Delt} \; (\log \log \eta - \log \log \log \eta +3 \log 2) < \frac{\eta\log \log \eta}{2\Delt}
\]
and hence, if inequality \ref{eq:goal} is not satisfied, fact (ii) would mean that minimum value for $\Psi(\wp)$ is
\[\Psi(\wp) = \sum_{r=1}^{\frac{\eta \log \eta}{2 \log \log \eta}} \frac{1}{s_{\wp}} > \frac{2 \Delt \log \eta}{\log^2 \log \eta}
\]
However, this contradicts with fact (i). Therefore, having both facts (i) and (ii), inequality (\ref{eq:goal}) has to be satisfied which completes the proof of lemma.
\end{proof}
Now that we have established both of our desired lemmas, it is time to get back to the main lower bound lemma and complete the proof.
\begin{proof}[Proof of \Cref{lem:Ack_Schedules_LB}] Using \Cref{lem:inad} and \Cref{lem:ad}, we know that for each random graph $G$, and for each short transmission schedule $\sigma$, the probability that $\sigma$ covers $G$ is $e^{-\Omega(\eta^5)}$. Now, note that the total number of short transmission schedules less than $2^{\eta^3}$. That is because in each round, there are $2^\eta$ possibilities for the set of senders that transmit and by definition of shortness of transmission schedules, there are less than $\eta^2$ total rounds. Therefore, using a Union bound, we get that for each random graph, probability that there exists a short transmission schedule that covers it is in $e^{-\Omega(\eta^2)}$. Hence, for each random graph, with probability $1-e^{-\Omega(\eta^2)}$, there is no short transmission schedule to cover it. This completes our proof.
\end{proof}
\begin{corollary} \label{crl:Ack-LB}
In the classical radio broadcast model, for any large enough $n$ and each $\Delta$ with $20 \log n \leq \Delta \leq n^{\frac{1}{11}}$, there exists a one-shot setting with a bipartite network of $n$ nodes and maximum receiver degree at most $\Delta$ such that for any algorithm that solves the local broadcast problem, the following holds:
for every $k \in [20 \log n, \Delta]$, there exists a sender $u$ with $c'(u,1) \leq k$ that has not acknowledged its message by round $\Omega(\frac{k \log n}{\log^2 \log n})$. That is, in this setting, any algorithm that solves the local broadcast problem has an acknowledgment bound of at least $f(k) = \Omega(\frac{k \log n}{\log^2 \log n})$ for all $k \in [20 \log n, \Delta]$.
\end{corollary}
\begin{proof}[Proof]
We use \Cref{thm:Ack_LB} with $n' = n^{1/1.1}$ and $\Delta$ values between $20 \log n'$ and $(n')^{0.1}$, and simply take the union of the resulting networks (and one-shot environments) as different components. From \Cref{thm:Ack_LB}, it is clear that for every $k \in [20 \log n, \Delta]$, there is a component and a sender $u$ with $c'(u,1) \leq k$ that does not acknowledge its message before round $\Omega(\frac{k \log n}{\log^2 \log n})$. Note that the total number of nodes is $(n')^{0.1} \cdot n' = n$.
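Indeed, the node count works out as
\[ (n')^{0.1} \cdot n' \;=\; (n')^{1.1} \;=\; \bigl(n^{1/1.1}\bigr)^{1.1} \;=\; n.
\]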
\end{proof}
}
\fi
\section{Lower Bounds in the Dual Graph Model}
In this section,\fullOnly{ we present two lower bounds for the dual graph model. We}\shortOnly{ we} show a lower bound of $\Omega(\Delta' \log n)$ on the progress time of centralized algorithms in the dual graph model with collision detection. This lower bound directly yields a lower bound of the same value on the acknowledgment time in the same model. Together, these two bounds show that the optimized Decay protocol presented in Section~\ref{sec:upper} achieves almost optimal acknowledgment and progress bounds in the dual graph model. On the other hand, this result demonstrates a large gap between the progress bounds in the two models, proving that progress is unavoidably harder (slower) in the dual graph model.\fullOnly{ Also, we show an unavoidable large gap in the dual graph model between the receive bound, the time by which all neighbors of an active process have received its message, and the acknowledgment bound, the time by which this process believes that those neighbors have received its message.}
\fullOnly{\subsection{Lower Bound on the Progress Time}\label{subsec:Prog_Dual}
In the previous section, we proved a lower bound of $\Omega(\Delta \log n)$ for the acknowledgment time in the classical radio broadcast model.
Now, we use that result to show a lower bound of $\Omega(\Delta' \log n)$ on the progress time in the dual graph model.}
\fullOnly{
To get there, we first need some definitions. Again, we work with bipartite networks and in a one-shot setting. However, this time these networks are in the dual graph radio broadcast model, and for each such network we have two graphs $G$ and $G^\prime$. For each algorithm $A$ and each bipartite network in the dual graph model, we say that an execution $\alpha$ of $A$ is \emph{progressive} if throughout this execution, every receiver of that network receives at least one message. Note that an execution includes the choices of the adversary about activating the unreliable links in each round. Now we are ready to see the main result of this section.
}
\shortOnly{
\begin{theorem} \label{thm:worst-prog-dual} In the dual graph model, for each $n$ and each $\Delta' \in [20 \log n, n^{\frac{1}{11}}]$, there exists a bipartite network $H^*(n, \Delta')$ with $n$ nodes and maximum receiver $G^\prime$-degree at most $\Delta'$ such that no algorithm can have a progress bound of $o(\Delta' \log n)$ rounds. In the same network, no algorithm can have an acknowledgment bound of $o(\Delta' \log n)$ rounds.
\end{theorem}
}
\fullOnly{
\begin{theorem} \label{thm:worst-prog-dual} In the dual graph model, for each $n_1$ and each $\Delta'_1 \in [20 \log n_1, n_1^{\frac{1}{11}}]$, there exists a bipartite network $H^*(n_1, \Delta'_1)$ with $n_1$ nodes and maximum receiver $G^\prime$-degree at most $\Delta'_1$ such that no algorithm can have a progress bound of $o(\Delta'_1 \log n_1)$ rounds.
\end{theorem}
}
\begin{proof}[Proof Outline] In order to prove this lower bound, in Lemma \ref{lem:trans} we show a reduction from acknowledgment in the bipartite networks of the classical model to progress in the bipartite networks of the dual graph model. In particular, this means that if there exists an algorithm with a progress bound of $o(\Delta^\prime \log n)$ in the dual graph model, then for any bipartite network $H$ in the classical broadcast model, there is a transmission schedule $\sigma(H)$ of length $o(\Delta \log n)$ that covers $H$. We then use \Cref{thm:Ack_LB} to complete the lower bound.
\end{proof}
\begin{lemma}\label{lem:trans} Consider arbitrary $n_2$ and $\Delta_2$, and let $n_1 = n_2 \Delta_2$ and $\Delta'_1 = \Delta_2$. Suppose that in the dual graph model, for each bipartite network with $n_1$ nodes and maximum receiver $G'$-degree $\Delta'_1$, there exists a local broadcast algorithm $A$ with a progress bound of at most $f(n_1, \Delta_1^\prime)$. Then, for each bipartite network $H$ with $n_2$ nodes and maximum receiver degree $\Delta_2$ in the classical radio broadcast model, there exists a transmission schedule $\sigma(H)$ of length at most $f({n_2}{\Delta_2}, \Delta_2)$ that covers $H$.
\end{lemma}
\shortOnly{
\begin{proof}[Proof Sketch] Let $H$ be a network in the classical radio broadcast model with $n_2$ nodes and maximum receiver degree at most $\Delta_2$. We use algorithm $A$ to construct a transmission schedule $\sigma_H$ of length at most $f({n_2}{\Delta_2},\Delta_2)$ that covers $H$. We first construct a new bipartite network, \emph{Dual($H$)} = $(G, G')$, in the dual graph model with at most $n_1$ nodes and maximum receiver $G^\prime$-degree $\Delta'_1$. The set of sender nodes in the Dual($H$) is equal to that in $H$. For each receiver $u$ of $H$, let $d_{H}(u)$ be the degree of node $u$ in graph $H$. Let us call the senders that are adjacent to $u$ `the \emph{associates} of $u$'. In the network Dual($H$), we replace receiver $u$ with $d_{H}(u)$ receivers and we call these new receivers `the \emph{proxies} of $u$'. In graph $G$ of Dual($H$), we match proxies of $u$ with associates of $u$, i.e., we connect each proxy to exactly one associate and vice versa. In graph $G^\prime$ of Dual($H$), we connect all proxies of $u$ to all associates of $u$. It is easy to check that Dual($H$) has the desired size and maximum receiver degree.
Now we present a special adversary for the dual graph model; later, we construct the transmission schedule $\sigma_H$ based on the behavior of algorithm $A$ on the network Dual($H$) against this adversary. This special adversary activates the unreliable links using the following procedure. Consider round $r$ and receiver node $w$: (1) if exactly one $G^\prime$-neighbor of $w$ is transmitting, then the adversary activates only the links from $w$ to its $G$-neighbors; (2) otherwise, the adversary activates all the links from $w$ to its $G^\prime$-neighbors.
We focus on the executions of algorithm $A$ on the network Dual($H$) against the above adversary. By assumption, there exists an execution $\alpha$ of $A$ of length at most $f(n_2 \Delta_2, \Delta_2)$ rounds such that in $\alpha$, every receiver receives at least one message. Let $\sigma_H$ be the transmission schedule of execution $\alpha$. Note that, because of the above choice of adversary, in the execution $\alpha$ each receiver can receive messages only from its $G$-neighbors. Suppose that $w$ is a proxy of receiver $u$ of $H$. By the construction of Dual($H$), each receiver node has exactly one $G$-neighbor, and that neighbor is one of the associates of $u$ (the one matched to $w$). Therefore, in execution $\alpha$, for each receiver $u$ of $H$, the proxies of $u$ together receive all the messages of the associates of $u$. On the other hand, because of the choice of adversary, if in round $r$ of $\alpha$ a proxy $w$ of $u$ receives a message, then using transmission schedule $\sigma_H$ in the classical radio broadcast model, $u$ receives the message of the same sender in round $r$. Therefore, using $\sigma_H$ in the classical broadcast model on network $H$, every receiver receives the messages of all of its associates. Hence, $\sigma_H$ covers $H$ and we are done with the proof of the lemma.
\end{proof}
}
\fullOnly{
\begin{proof}
Consider an arbitrary $n_2$ and $\Delta_2$, and let $n_1 = n_2 \Delta_2$ and $\Delta'_1 = \Delta_2$. Suppose that in the dual graph model, for each bipartite network with $n_1$ nodes and maximum receiver $G'$-degree $\Delta'_1$, there exists a local broadcast algorithm $A$ for this network with a progress bound of at most $f(n_1, \Delta_1^\prime)$. Let $H$ be a network in the classical radio broadcast model with $n_2$ nodes and maximum receiver degree at most $\Delta_2$. We exhibit a transmission schedule $\sigma_H$ of length at most $f({n_2}{\Delta_2},\Delta_2)$ that covers $H$.
For this, using network $H$, we first construct a special bipartite network in the dual graph model, \emph{Dual($H$)} = $(G, G')$ that has $n_1$ nodes and maximum receiver $G^\prime$-degree $\Delta'_1$. Then, by the above assumption, we know that there exists a local broadcast algorithm $A$ for this network with progress bound of at most $f(n_1, \Delta'_1) = f(n_2 \Delta_2, \Delta_2)$ rounds. We define transmission schedule $\sigma_H$ by emulating what this algorithm does on the network Dual($H$) and under certain choices of the adversary. Then, we argue why $\sigma_H$ covers $H$.
The network Dual($H$) in the dual graph model is constructed as follows. The set of sender nodes in Dual($H$) is exactly the same as that in $H$. For each receiver $u$ of $H$, let $d_{H}(u)$ be the degree of node $u$ in graph $H$, and let us call the senders adjacent to $u$ the \emph{associates} of $u$. In the network Dual($H$), we replace receiver $u$ with $d_{H}(u)$ receivers, which we call the \emph{proxies} of $u$. In graph $G$ of Dual($H$), we match the proxies of $u$ with the associates of $u$, i.e., we connect each proxy to exactly one associate and vice versa. In graph $G^\prime$ of Dual($H$), we connect all proxies of $u$ to all associates of $u$. Note that because of this construction, the maximum degree of the receivers in $G^\prime$ is $\Delta_2$. Also, since each receiver is substituted by at most $\Delta_2$ receiver nodes, the total number of nodes mentioned so far in Dual($H$) is at most $n_2 \Delta_2$. Without loss of generality, we can assume that the number of nodes in Dual($H$) is exactly $n_2 \Delta_2$, since we can simply adjust it by adding enough isolated senders.
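In symbols (our notation, not used elsewhere in the paper): if $u$ is a receiver of $H$ with associates $a_1, \dots, a_{d_H(u)}$ and proxies $u^{(1)}, \dots, u^{(d_H(u))}$, then Dual($H$) contains the edges
\[ \bigl\{ \{u^{(i)}, a_i\} : i \in [d_H(u)] \bigr\} \subseteq E(G), \qquad \bigl\{ \{u^{(i)}, a_j\} : i,j \in [d_H(u)] \bigr\} \subseteq E(G'),
\]
where the matching of proxies to associates is fixed arbitrarily.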
Now, we present a particular way of resolving the nondeterminism in the choices of the adversary in activating the unreliable links in each round over Dual($H$). Later, we will study and emulate the algorithm $A$ under the assumption that the unreliable links are activated in this way. This method of resolving the nondeterminism is, in principle, trying to make the number of successful message deliveries as small as possible. More precisely, the adversary activates the links using the following procedure. For each round $r$ and each receiver node $w$: (1) if exactly one $G^\prime$-neighbor of $w$ is transmitting, then the adversary activates only the links from $w$ to its $G$-neighbors; (2) otherwise, the adversary activates all the links from $w$ to its $G^\prime$-neighbors.
Now, we focus on the executions of algorithm $A$ on the network Dual($H$) under the above method of resolving the nondeterminism. By the assumption that $A$ has a progress time bound of $f(n_2 \Delta_2, \Delta_2)$ for network Dual($H$), there exists a progressive execution $\alpha$ of $A$ of length at most $f(n_2 \Delta_2, \Delta_2)$ rounds. Let $\sigma_H$ be the transmission schedule of execution $\alpha$. Note that in the execution $\alpha$, because of the way that we resolve collisions, each receiver can receive messages only from its $G$-neighbors. Suppose that $w$ is a proxy of receiver $u$ of $H$. By the construction of Dual($H$), each receiver node has exactly one $G$-neighbor, and that neighbor is one of the associates of $u$ (the one matched to $w$). Therefore, in execution $\alpha$, for each receiver $u$ of $H$, the proxies of $u$ together receive all the messages of the associates of $u$. Moreover, because of the presented method of resolving the nondeterminism, if in round $r$ of $\alpha$ a proxy $w$ of $u$ receives a message, then using transmission schedule $\sigma_H$ in the classical radio broadcast model, $u$ receives the message of the same sender in round $r$ of $\sigma_H$. Therefore, using transmission schedule $\sigma_H$ in the classical broadcast model on network $H$, every receiver receives the messages of all of its associates. Hence, $\sigma_H$ covers $H$ and we are done with the proof of the lemma.
\end{proof}
\begin{proof}[Proof of \Cref{thm:worst-prog-dual}] The proof follows from Theorem \ref{thm:Ack_LB} and Lemma \ref{lem:trans}. Fix an arbitrary $n_1$ and $\Delta'_1 \in [20 \log n_1, n_1^{\frac{1}{11}}]$. Let $n_2 = \frac{n_1}{\Delta'_1}$ and $\Delta_2 = \Delta'_1$. By Theorem \ref{thm:Ack_LB}, we know that in the classical radio broadcast model, there exists a bipartite network $H(n_2, \Delta_2)$ with $n_2$ nodes and maximum receiver degree at most $\Delta_2$ such that no transmission schedule of length $o(\Delta_2 \log n_2)$ rounds can cover it. Then, by setting $f(n_1, \Delta'_1) = \Theta (\Delta'_1 \log n_1)$ in Lemma \ref{lem:trans}, we conclude that there exists a bipartite network with $n_1$ nodes and maximum receiver $G'$-degree $\Delta'_1$ such that no local broadcast algorithm for this network has a progress bound of at most $f(n_1, \Delta_1^\prime)$. Calling this network $H^*(n_1, \Delta'_1)$ finishes the proof of the theorem.
\end{proof}
}
\fullOnly{\begin{corollary} In the dual graph model, for each $n$ and each $\Delta' \in [20 \log n, \frac{n^{\frac{1}{11}}}{2}]$, there exists a bipartite network with $n$ nodes and maximum receiver $G^\prime$-degree at most $\Delta'$ such that for every $k \in [20 \log n, \Delta']$, no algorithm can have a progress bound of $f_{prog}(k) = o(k \log n)$ rounds.
\end{corollary}
\begin{proof} The corollary follows from \Cref{thm:worst-prog-dual} by considering the dual graph network derived from the union of the networks $H^*(\frac{n}{2}, k)$ as $k$ ranges from $20 \log n$ to $\Delta'$.
\end{proof}
\begin{corollary}In the dual graph model, for each $n$ and each $\Delta' \in [20 \log n, \frac{n^{\frac{1}{11}}}{2}]$, there exists a bipartite network $H^*(n, \Delta')$ with $n$ nodes and maximum receiver $G^\prime$-degree at most $\Delta'$ such that no algorithm can have acknowledgment bound of $o(\Delta' \log n)$ rounds.
\end{corollary}
\begin{proof} The proof follows immediately from \Cref{thm:worst-prog-dual} and the fact that the acknowledgment time is greater than or equal to the progress time.
\end{proof}
}
\fullOnly{\subsection{The Intrinsic Gap Between the Receive and Acknowledgment Time Bounds}\label{subsec:rcv_ack_gap}
In \Cref{sec:upper}, we saw that the SPP protocol has a receive bound of $f_{rcv}(k) = O(k \log(\Delta') \log n)$. In this section, we show that in the distributed setting, there is a relatively large gap between the time in which messages can be delivered and the time needed to acknowledge them. More formally, we show the following.
\begin{lemma}In the dual graph model, for each $n_1$ and each $\Delta'_1 \in [20 \log n_1, \frac{n_1^{\frac{1}{11}}}{2}]$, there exists a bipartite network $\mathcal{H}_{rcv}(n_1, \Delta'_1)$ with $n_1$ nodes and maximum receiver $G^\prime$-degree at most $\Delta'_1$ such that for any distributed algorithm, many senders have $c'(v, r) \leq 1$ but cannot acknowledge their packets in $o(\Delta'_1 \log n_1)$ rounds, i.e., $ f_{ack}(1) = \Omega (\Delta'_1 \log n_1)$.
\end{lemma}
\begin{proof} Let $n_2= \lfloor \frac{n_1^{\frac{10}{11}}}{2}\rfloor$ and $\Delta_2 = \Delta'_1$. Then, let $H( n_2, \Delta_2)$, with $n_2$ nodes and maximum degree $\Delta_2$, be the bipartite network in the classical model whose existence we established in \Cref{thm:Ack_LB}. Recall that in $H(n_2, \Delta_2)$, we have $\eta = (n_2)^{\,0.1}$ sender processes. We first introduce two simple graphs using $H(n_2, \Delta_2)$. Add $\eta$ receivers to the receiver side of $H(n_2, \Delta_2)$, call them \emph{new receivers}, and match these new receivers to the senders; let us call the matching graph itself $M$. Then, define $G' = H(n_2, \Delta_2)+M$, $G_1 = M$, and $G_2 = H(n_2, \Delta_2)+M$. Also, let $\mathcal{H}_{rcv}(n_1, \Delta'_1)$ be the dual graph network that is composed of two components, one being the pair $(G_1, G')$ and the other being $(G_2,G')$. In each pair, the first element is the reliable part of the component and the second is the whole component. Note that the total number of nodes in $\mathcal{H}_{rcv}(n_1, \Delta'_1)$ is at most $n_1^{\frac{10}{11}} + n_1^{\frac{1}{11}}$, which is at most $n_1$ for large enough $n_1$. Without loss of generality, we can assume the number of nodes in $\mathcal{H}_{rcv}(n_1, \Delta'_1)$ is exactly $n_1$ by adding enough isolated nodes.
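In compact notation (ours), the construction is
\[ \mathcal{H}_{rcv}(n_1, \Delta'_1) \;=\; (G_1, G') \;\dot{\cup}\; (G_2, G'), \qquad G_1 = M, \quad G_2 = G' = H(n_2, \Delta_2) + M,
\]
where in each pair the first graph is the reliable part and the second is the whole component.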
Now note that in the second component of $\mathcal{H}_{rcv}(n_1, \Delta'_1)$, i.e., the pair $(G_2,G')$, the graph $G'$ is a supergraph of $H(n_2, \Delta_2)$. Hence, by Lemma \ref{lem:Ack_Schedules_LB}, for any algorithm, acknowledgment in the second component needs at least $\Omega(\Delta_2 \log n_2) = \Omega(\Delta'_1 \log n_1)$ rounds. On the other hand, since every new receiver $u$ in the first component has $|\mathcal{N}_{G_1}(u)| = 1$, every sender $v$ in the first component has $c'(v, r) \leq 1$ for every round $r$ of any algorithm. Now consider an arbitrary subset $P$ of all processes with $|P|=\eta$. As an adversary, we can map these processes either to the senders in the first component or to the senders in the second component. Since the processes do not know this mapping, if we resolve the nondeterminism by always activating all the edges, the processes cannot distinguish between these two mappings. Hence, since acknowledgment in the second component takes at least $\Omega(\Delta'_1 \log n_1)$ rounds, it takes at least as long in the first component as well. Thus, this dual graph network satisfies all the properties of $\mathcal{H}_{rcv}(n_1, \Delta'_1)$ stated above, and we are done with the proof.
\end{proof}
}
\section{Centralized vs. Distributed Algorithms in the Dual Graph Model}
In this section, we show that there is a gap in power between
distributed and centralized algorithms in the dual graph model,
but not in the classical model---therefore highlighting
another difference between these two settings.
Specifically, we produce dual graph network graphs where centralized
algorithms achieve $O(1)$ progress while
distributed algorithms have unavoidably slow progress.
In more detail,
our first result shows that distributed algorithms
will have {\em at least one process}
experience
$\Omega(\Delta'\log{n})$
progress,
while the second result shows
the {\em average} progress is
$\Omega(\Delta')$.
Notice, such gaps do not exist in the classical model,
where our distributed algorithms from Section~\ref{sec:upper}
can guarantee fast progress in all networks.
\begin{theorem}\label{thm:worst-prog-gap}
For any $k$ and $\Delta'\in [20\log{k},k^{1/10}]$,
there exists a dual graph network of size $n$, $k < n \leq k^4$,
with maximum receiver degree $\Delta'$,
such that the optimal centralized local broadcast
algorithm achieves a progress bound of $O(1)$ in this network
while every distributed
local broadcast algorithm has a progress bound of
$\Omega(\Delta'\log{n})$.
\end{theorem}
\shortOnly{Our proof argument leverages
the bipartite network proven to exist in Lemma~\ref{lem:trans} to show that
all algorithms have slow progress in the dual graph model.
Here, we construct a network consisting of many copies
of this counter-example graph. In each copy, we leave
one of the reliable edges as reliable, but {\em downgrade}
the others to unreliable edges that act reliable.
A centralized algorithm can achieve fast progress in each
of these copies as it only needs the processes connected
to the single reliable edge to broadcast.
A distributed algorithm, however, does not know
which edge is actually reliable, so it still has slow
progress. We prove that in one of these copies, the last
message to be delivered comes across the only reliable edge,
w.h.p. This is the copy that provides the slow progress needed
by the theorem.}
\fullOnly{\begin{proof}
Let $G_1 = H(n, \Delta')$ be the classical network,
with size $n$ and maximum receiver degree $\Delta'$,
proved to exist by Theorem~\ref{thm:Ack_LB}.
(Notice the bounds on $\Delta'$ from the theorem
statement match the requirement by Theorem~\ref{thm:Ack_LB}.)
As also proved in this previous theorem,
every centralized
algorithm has an acknowledgment bound
of $\Omega(\Delta'\log{n} )$
in $G_1$.
Next, let $G_2 = Dual(G_1)$ be the dual graph
network, with maximum receiver degree $\Delta'$
and network size $n_2 = n\Delta'$,
that results from applying the $Dual$ transformation,
defined in the proof of Lemma~\ref{lem:trans}, to $G_1$.
This lemma proves that every centralized
algorithm has a progress bound
of $\Omega(\Delta'\log{n_2})$
rounds in $G_2$.
We can restate this bound as follows:
for every algorithm, there is an assignment
of messages to senders such
that in every execution
some process has a reliable
edge to at least one sender,
and yet does not receive its first message from
a sender for $\Omega(\Delta'\log{n_2})$
rounds. Call the reliable edge on which this slow process
receives its first message the {\em slow edge} in the execution.\footnote{We
are assuming w.l.o.g. that in these worst case executions
identified by the lower bound, the last receiver to receive
a message does not receive this message on an unreliable edge
(as, in this case, we could always drop that message, contradicting
the assumption that we are considering the worst case execution).}
We now use $G_2$ to construct a larger dual graph network, $G^*$.
To do so, label the $m$ reliable edges in $G_2$ as $e_1,...,e_m$.
We construct $G^*$ to consist of $n_2m^2$ modified copies of
$G_2$.
In more detail, $G^*$ has $n_2m^2$ components,
which we label $C_{i,j}$, $i\in [m], j\in [n_2m]$.
Each $C_{i,j}$ has the same structure
as $G_2$ but with the following exception:
we keep only $e_i$ as a reliable edge; all other reliable edges $e_{i'}$, $i'\neq i$, are {\em downgraded} to unreliable edges.
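Schematically (in our notation), writing $E'$ for the unreliable edges of $G_2$, each component is
\[ C_{i,j}: \quad E_{\mathrm{reliable}} = \{e_i\}, \qquad E_{\mathrm{unreliable}} = E' \cup \{e_{i'} : i' \neq i\}, \qquad i \in [m],\; j \in [n_2m],
\]
so the index $j$ merely distinguishes the $n_2m$ identical copies made for each choice of $i$.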
We are now ready to prove a lower bound on progress
on $G^*$.
Fix some distributed local broadcast algorithm ${\cal A}$.
We assign the $n_2^2m^2$ processes to nodes in $G^*$
as follows.
Partition these processes into sets
$S_1,...,S_{n_2m^2}$, each consisting of $n_2$ processes.
For each $S_i$, $i\in [n_2m]$,
we make an independent random choice of a value $j$ from $[m]$,
and assign $S_i$ to component $C_{j,i}$ in $G^{*}$.
Notice that no two such sets can be assigned to the same component, so the choice of each assignment can be independent
of the choice of other assignments. We also emphasize that these choices are made independently of the algorithm ${\cal A}$ and its processes' randomness.
Finally, we assign the remaining $S$ sets to the
remaining $G^*$ components in an arbitrary fashion.
For each $C_{j,i}$,
we fix the behavior of each downgraded edge to behave as if it were a reliable edge.
With this restriction in place, $C_{j,i}$ now
behaves indistinguishably from $G_2$.
It follows from Lemma~\ref{lem:trans}
that no algorithm can guarantee fast
progress in $C_{j,i}$.
Leveraging this insight, we assume
the worst case behavior,
in terms of the non-downgraded unreliable
edge behavior and message assignments,
in each component.
In every $C_{j,i}$, therefore,
some process does not receive a message for the first
time on a reliable or downgraded edge
for $\Omega(\Delta'\log{n_2} )$ rounds.
With this in mind, let us focus
on our sets of processes $S_1$ to $S_{n_2m^2}$.
Consider some $S_i$ from among
these sets. Let $C_{j,i}$ be
the component to which we randomly assigned $S_i$.
As we just established,
some process in $S_i$ does
not receive a message for the first
time until many rounds have passed.
This message either comes across
the single reliable edge in $C_{j,i}$
or a downgraded edge.
If it comes across the reliable edge, then this process yields the slow progress we need.
The crucial observation here is that
for any fixed randomness for the processes
in $S_i$,
the choice of this edge is the same regardless
of the component where $S_i$ is assigned.
Therefore we can treat the determination
of this slow edge as independent of
the assignment of $S_i$ to a component.
Because we assigned $S_i$ at random
to a component, the probability
that we assigned it to a component
where the single reliable edge matches
the fixed slow edge is $1/m$.
Therefore, the probability that this match occurs for at least one of our $n_2m$ randomly assigned $S$ sets is at least $1 - (1 - 1/m)^{n_2m} \geq 1 - e^{-n_2}$.
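Explicitly,
\[ \Bigl(1-\frac{1}{m}\Bigr)^{n_2 m} \;=\; \Bigl(\Bigl(1-\frac{1}{m}\Bigr)^{m}\Bigr)^{n_2} \;\leq\; e^{-n_2}.
\]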
In other words,
some receiver in our network does not
receive a message over a reliable edge
for a long time, w.h.p.
Because a progress bound must hold w.h.p.,
the progress bound of ${\cal A}$ is slow.
Finally, to establish our gap, we must also describe a centralized algorithm that achieves $O(1)$ progress in this same network, $G^*$.
To do so, notice each component $C_{i,j}$
has exactly one reliable edge.
With this in mind, we define our fixed centralized algorithm to divide rounds into pairs and do the following: in the first round of a pair, if the first endpoint of a component's single reliable edge (by some arbitrary ordering of endpoints) has a message, then it broadcasts; in the second round, the second endpoint does the same. After a process has been active for a full round pair, it acknowledges the message.
This centralized algorithm satisfies the following property:
if some process $u$ receives a message as input in round $r$, every reliable neighbor of $u$ receives the message by round $r+O(1)$.
It follows that this centralized algorithm
has a progress bound of $O(1)$.
\end{proof}
}
Notice that in some settings, practitioners might tolerate slow worst-case progress (e.g., as established in Theorem~\ref{thm:worst-prog-gap}), so long as {\em most} processes have fast progress.
In our next theorem, we show that this ambition is also impossible
to achieve.
To do so, we first need a definition that captures
the intuitive notion of many processes having slow progress.
In more detail, given an execution of the one-shot local broadcast
problem (see Section~\ref{sec:model}), with
processes in {\em sender set} $S$ being passed messages,
label each receiver that neighbors
$S$ in $G$ with the round when it
first received a message. The {\em average progress}
of this execution is the average of these values.
We say an algorithm has an {\em average progress of $f(n)$},
with respect to a network of size $n$ and sender set $S$,
if executing
that algorithm in that network with those senders
generates an
average progress value of no more than $f(n)$, w.h.p.
We now bound this metric in the same style as above.
\begin{theorem}\label{thm:average-prog-gap}
For any $n$, there exists a dual graph network of size $n$
and a sender set,
such that the optimal centralized local broadcast
algorithm has an average progress of $O(1)$
while every distributed local broadcast algorithm
has an average progress of $\Omega(\Delta')$.
\end{theorem}
\shortOnly{Our proof uses a reduction
argument. We show how a distributed algorithm that achieves
fast average progress in a specific type of dual graph network
can be transformed to a distributed algorithm that
solves global broadcast fast in a different type
of dual graph network.
We then apply a lower bound from~\cite{KLN09BA}
that proves no fast solution exists for the latter---providing
our needed bound on progress.}
\fullOnly{
\paragraph{Lollipop Network.}
We begin our argument
by recalling a result proved in a previous study of
the dual graph model.
This result concerns the {\em broadcast
problem}, in which a single source process is provided
a message at the beginning of the execution which it must
subsequently
propagate to all processes in the network.
The result in question
concerned a specific dual graph construction
we call a {\em lollipop network}, which can be defined
with respect to any network size $n>2$.
For a given $n$,
the $G$ edges in this network
define a clique of $n-1$ nodes,
$c_1$ to $c_{n-1}$.
There is an additional node
$r$ that is connected
to one of the clique nodes.
By contrast, $G'$ is complete.
In~\cite{KLN09BA} we proved the following:
\begin{lemma}[From~\cite{KLN09BA}]
Fix some $n>2$ and randomized broadcast algorithm ${\cal A_B}$.
With probability at least $1/2$,
${\cal A_B}$ requires at least $\lfloor (n-1)/2 \rfloor$
rounds to solve broadcast in the lollipop network of size $n$.
\label{lem:podc2009}
\end{lemma}
\paragraph{Spread Network.}
Our strategy in proving Theorem~\ref{thm:average-prog-gap}
is to build a dual graph network
in which achieving fast average progress would
yield a fast solution to the broadcast problem in the lollipop
network, contradicting Lemma~\ref{lem:podc2009}.
To do so, we need to define the network in which we achieve
our slow average progress.
We call this network a {\em spread network},
and define it as follows.
Fix any even size $n \geq 2$.
Partition the $n$ nodes in $V$ into
{\em broadcasters} ($b_1,b_2,...,b_{n/2}$)
and {\em receivers} ($r_1,r_2,...,r_{n/2}$).
For each $b_i$, add a $G$ edge to $r_i$.
Also add a $G$ edge from
$b_1$ to all other receivers.
Define $G'$ to be complete. Note that in this network, $\Delta'=n-1$.
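In symbols,
\[ E(G) \;=\; \bigl\{ \{b_i, r_i\} : i \in [n/2] \bigr\} \,\cup\, \bigl\{ \{b_1, r_j\} : 2 \leq j \leq n/2 \bigr\}, \qquad E(G') = \textstyle\binom{V}{2}.
\]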
We can now prove our main theorem.
\begin{proof}[Proof of Theorem~\ref{thm:average-prog-gap}]
Fix our sender set $S = \{b_1,...,b_{n/2}\}$.
Notice that a centralized algorithm can achieve $1$-round progress for all receivers by simply having $b_1$ broadcast alone.
We now turn our attention to showing
that any distributed algorithm, by contrast, is slow
in this setting.
Fix one such algorithm, ${\cal A}$.
Assume for contradiction that it defies
the theorem statement.
In particular, it will guarantee $o(n)$ average progress
when executed in the spread network
with sender set $S = \{b_1,...,b_{n/2}\}$.
We use ${\cal A}$
to construct a broadcast algorithm ${\cal A'}$
that can be used to solve broadcast in the lollipop network.
At a high-level, ${\cal A'}$ has each
process in the clique in the lollipop network simulate both
a sender and its matching receiver from the spread network.
In the following, use $b$ to refer to the single node in
the clique of the lollipop network that connects to $r$ with
a reliable edge.
In this simulation, process $b$ in the lollipop network matches up with
process $b_1$ in the spread graph.
Of course, process $b$ does not know a priori that it is simulating process $b_1$, as in the lollipop network $b$ does not know a priori that it is assigned to this crucial node.
This will not be a problem, however, because
we will control the $G'$ edges in our simulation
such that the behavior of $b_1$ will differ from
the other processes in $S$ only when it broadcasts alone
in the graph. It will be exactly at this point,
however, that our simulation can stop, having
successfully solved broadcast.
In more detail, our algorithm ${\cal A'}$ works as follows:
\begin{enumerate}
\item We first allow process $r$ to identify itself.
To do so, have the source, $u_0$, broadcast.
Either we solve the broadcast problem (e.g., if the source is $b$)
or $r$ is the only process to not receive a message---allowing
it to figure out it is $r$. At this point, every
process but $r$ has the message. To solve broadcast
going forward, it is now sufficient for $b$ to broadcast alone.
\item We will now have processes in ${\cal A'}$ simulate
processes from ${\cal A}$ to determine whether or
not to broadcast in a given round.
In more detail, we have each process $u$ in the lollipop
clique simulate a sender (call it $b_u$) and its corresponding receiver (call it $r_u$) from the spread network.\footnote{In
the case of the process simulating $b_1$, we have to be careful
because $b_1$ has a $G$ edge to all receivers. The
simulator, however, is responsible only for simulating
the sole receiver that is connected to only $b_1$, namely $r_1$.}
We have $n/2$ clique processes each simulating $2$ spread network
processes, so we are now setup to begin a simulation of
an $n$-process spread network.
\item Each simulated round of ${\cal A}$ will require
two real rounds of ${\cal A'}$.
{\em In the first real round}, each process $u$ in the lollipop
clique advances the simulation of its simulated
processes $b_u$ and $r_u$, to see if they
should broadcast in the current
round of ${\cal A}$ being simulated. If either $b_u$ or $r_u$
broadcasts (according to $u$'s simulation),
$u$ broadcasts these simulated messages,
{\em and} the broadcast
message for the instance of broadcast we are trying to solve.
On the other hand, if neither of $u$'s simulated processes broadcast,
$u$ remains silent.
(Notice, if only $b$ broadcasts during this round, we are done.)
The exception to these rules is the source, $u_0$, which does not broadcast,
regardless of the result of its simulation.
{\em In the second real round of our simulated round},
$u_0$ announces what it learned in the previous round.
That is, $u_0$ acts as a simulation coordinator.
In more detail,
$u_0$ can tell the difference between the following
two cases:
(1) either no simulated process, or two or more
simulated processes, broadcast;
(2) one simulated process broadcast (in which case $u_0$ also knows whether the process is a sender
or receiver in the spread network, and its
message).
Process $u_0$ announces whether case $1$ or $2$ occurred, and in the
case of (2), it also announces the identity of the
sender and its message.
This information is received by all processes in the lollipop clique.
\item Once the lollipop clique processes learn
the result of the simulation
from $u_0$, they can consistently and correctly finish
the round for their simulated processes by applying
the following rules.
{\em Rule \#1:} If $u_0$ announces that no simulated process
broadcasts, or two or more simulated processes broadcast,
then all the processes in ${\cal A'}$
have their simulated processes receive nothing.
(This is valid as $G'$ is complete in the simulated network,
so it is valid for concurrent messages
to lead to total message loss.)
{\em Rule \#2:} If $u_0$ announces that one simulated process broadcast,
then the simulators' behavior depends on the identity of the
simulated broadcaster. If this broadcaster is a sender in the
spread network, then it simulates its single matched receiver receiving
the message. (Notice this behavior is valid so long as the broadcaster
is not $b^*$. Fortunately, the broadcaster {\em cannot} be $b^*$,
as if it was, then
$b$ would have broadcast alone in ${\cal A'}$ in the previous round,
solving broadcast.)
On the other hand, if the single broadcaster is a receiver,
then we have to be more careful. It is not sufficient for its single matched broadcaster to receive the message, because $b_1$ must also receive it.
Because we do not know which process is simulating $b_1$,
we instead, in this case, simulate all broadcasters
receiving this message. This is valid as $G'$ is complete.
\end{enumerate}
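In summary (our notation): let $\rho(x)$ denote the receiver matched to sender $x$, and suppose $u_0$ announces that a single simulated process $x$ broadcast message $m$. Then the simulated receptions of the round are
\[ \mathrm{rcv} \;=\; \begin{cases} \{(\rho(x), m)\} & \text{if $x$ is a sender},\\ \{(b, m) : b \text{ a sender}\} & \text{if $x$ is a receiver}, \end{cases}
\]
and $\mathrm{rcv} = \emptyset$ in the silence/collision case of Rule \#1.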
By construction, ${\cal A'}$ will solve broadcast when simulated
$b_1$ broadcasts
alone in the simulation.
Our simulation rules are designed such that $b_1$ {\em must} eventually
broadcast alone for the simulated instance of ${\cal A}$ to solve
local broadcast, as this is the only way for $r_1$
to receive a message from a process in $S$.
Because we assume ${\cal A}$ solves this problem,
and we proved our simulation of ${\cal A}$ is valid,
${\cal A}$ {\em will} eventually have $b_1$ broadcast alone and
therefore ${\cal A'}$ {\em will} eventually solve broadcast.
The question is how long it takes for this event to occur.
Recall that we assumed that with high probability
the average progress of ${\cal A}$ is $o(n)$.
By our simulation rules, until $b_1$ broadcasts alone,
at most one receiver can receive a message from
a sender, per round. It follows that $b_1$ must broadcast
alone (well) {\em before} round $n/4$. (If it waited until round $n/4$, only $n/4$ processes would have finished receiving in those rounds, so even if the remaining receivers all finished in round
$n/4$, the average progress would be greater
than $n/8$ which, of course, is not $o(n)$.)
By Lemma~\ref{lem:podc2009},
with probability at least $1/2$,
${\cal A'}$ requires at least
$((\frac{n}{2}+1) - 1)/2 = n/4$ rounds to solve broadcast.
We just argued, however, that with {\em high} probability
$b_1$ broadcasts alone---and therefore ${\cal A'}$ solves
broadcast---in less than $n/4$ rounds. A contradiction.
\end{proof}
}
\newcommand{\FullOrShort}{full}
\ifthenelse{\equal{\FullOrShort}{full}}{
\newcommand{\fullOnly}[1]{#1}
\newcommand{\shortOnly}[1]{}
}{
\newcommand{\fullOnly}[1]{}
\newcommand{\shortOnly}[1]{#1}
}
\usepackage{url}
\newcommand{\keywords}[1]{\par\addvspace\baselineskip
\noindent\keywordname\enspace\ignorespaces#1}
\begin{document}
\mainmatter
\title{Bounds on Contention Management in Radio Networks}
\titlerunning{Bounds on Contention Management in Radio Networks}
\author[*]{Mohsen Ghaffari}
\author[*]{Bernhard Haeupler}
\author[*]{Nancy Lynch}
\author[**]{Calvin Newport\vspace{0.4cm}}
\affil[*]{Computer Science and Artificial Intelligence Lab, MIT \authorcr \texttt{\{ghaffari, haeupler, lynch\}@csail.mit.edu}\vspace{0.2cm}}
\affil[**]{Department of Computer Science, Georgetown University \authorcr \texttt{[email protected]}}
\renewcommand\Authands{ and }
\authorrunning{Ghaffari et al.}
\institute{}
\toctitle{Bounds on Contention Management in Radio Networks }
\tocauthor{Ghaffari et al.}
\maketitle
\fullOnly{\thispagestyle{empty}}
\begin{abstract}
The local broadcast problem assumes that processes in a
wireless network are provided messages, one by one,
that must be delivered to their neighbors.
In this paper,
we prove tight bounds for this problem in two well-studied
wireless network models:
the {\em classical} model,
in which links are reliable and collisions consistent,
and the more recent {\em dual graph} model,
which introduces unreliable edges.
Our results prove that the
{\em Decay} strategy,
commonly used for local broadcast in the classical setting,
is optimal.
They also establish a separation between the two models,
proving that the dual graph setting is strictly harder
than the classical setting, with respect to this primitive.
\end{abstract}
\input{Preliminaries}
\input{RelatedWork}
\input{UpperBounds}
\input{Classic}
\input{Dual}
\input{Gap}
\vspace{-0.2cm}
\input{Ref-List}
\end{document}
\section{Introduction}
At the core of every wireless network algorithm is the need
to manage contention on the shared medium.
In the theory community, this challenge is abstracted
as the {\em local broadcast problem}, in which processes
are given messages, one by one,
that must be delivered to their neighbors.
This problem has been studied in multiple wireless network models.
The most common such model is the {\em classical} model, introduced
by Chlamtac and Kutten~\cite{CK85}, in which
links are reliable
and concurrent broadcasts by neighbors always generate collisions.
The dominant local broadcast strategy in this model
is the {\em Decay} routine introduced by Bar-Yehuda et al.~\cite{BGI87}.
In this strategy,
nodes cycle through an exponential distribution of broadcast
probabilities
with the hope that one will be appropriate for the current
level of contention~(e.g.,
\cite{BGI87, CGR00, CGGPR00, CGOR00, CMS01, CCMPS01, CMS04, GPX05, KLNOR10}).
To solve local broadcast with high probability (with respect to the network size $n$),
the {\em Decay} strategy requires $O(\Delta\log{n})$ rounds,
where $\Delta$ is the maximum contention in the network (which is at most the maximum degree in the network topology).
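For intuition, here is a minimal sketch (ours, not the protocol of~\cite{BGI87} verbatim) of one {\em Decay} phase, in a toy model where all contenders neighbor a single receiver and a round succeeds exactly when one contender transmits:
\begin{verbatim}
import random

def decay_phase(num_contenders, log_delta):
    # Each contender halves its broadcast probability every round.
    # Returns the first round with exactly one transmitter (a
    # successful delivery in this toy model), or None if none occurs.
    for r in range(log_delta + 1):
        p = 2.0 ** (-r)
        transmitters = sum(1 for _ in range(num_contenders)
                           if random.random() < p)
        if transmitters == 1:
            return r
    return None
\end{verbatim}
Once $2^{-r}$ drops to roughly the inverse of the actual contention, a phase of length $O(\log\Delta)$ succeeds with constant probability; repeating phases drives the failure probability down to $1/\mathrm{poly}(n)$, which is the source of the $\log n$ factors in the bounds above.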
It has remained an open question whether this bound can be improved
to $O(\Delta + \polylogn)$.
In this paper, we resolve this open question by proving the {\em Decay}
bound optimal.
This result also proves that existing
constructions of {\em ad hoc selective families}~\cite{CCMPS01,CMS04}---a
type
of combinatorial object used in wireless network
algorithms---are optimal.
We then turn our attention to the more recent {\em dual graph}
wireless network model introduced by Kuhn et~al.~\cite{KLN09, KLN09BA, KLNOR10, CGKLN11}.
This model generalizes the classical model by allowing
some edges in the communication graph to be unreliable.
It was motivated by the observation
that real wireless networks include links of dynamic quality (see~\cite{KLNOR10} for more
extensive discussion).
We provide tight solutions to the local broadcast problem in this setting, using algorithms
based on the {\em Decay} strategy.
Our tight bounds in the dual graph model are larger (worse)
than our tight time bounds for the classical model, formalizing
a separation between the two settings (see Figure~\ref{fig:results} and
the discussion below for result details).
We conclude by proving another separation:
in the classical model there is no significant difference
in power between centralized and distributed local broadcast
algorithms, while in the dual graph model the gap is exponential.
These separation results are important
because most wireless network algorithm
analysis relies on the correctness of the underlying contention
management strategy.
By proving that the dual graph model is strictly harder
with respect to local broadcast,
we have established that an algorithm
proved correct in the classical model will not necessarily
remain correct or might lose its efficiency in the more general (and more realistic)
dual graph model.
\underline{To summarize:}
This paper provides an essentially complete characterization
of the local broadcast problem in the well-studied classical and dual graph wireless network models.
In doing so, we:
(1) answer the long-standing open question regarding the optimality of {\em Decay} in the classical model;
(2) provide a variant of Decay and prove it optimal for the local broadcast problem in the dual graph model; and
(3) formalize the separation between these two models, with respect to local broadcast.
{\begin{figure}[t]
\centering
\shortOnly{\tiny}
\begin{tabular}{|c|c|c|}
\hline
& \bf Classical Model & \bf Dual Graph Model \\ \hline \hline
\bf Ack. Upper & $O(\Delta\log{n})$** & $O(\Delta'\log{n})$* \\ \hline
\bf Ack. Lower & $\Omega(\Delta \log{n})$*
& $\Omega(\Delta'\log{n})$*\\ \hline \hline
\bf Prog. Upper & $O(\log{\Delta}\log{n})$** & $O(\min\{ k\log{k}\log{n}, \Delta'\log{n} \})$* \\ \hline
\bf Prog. Lower & $\Omega(\log{\Delta}\log{n})$** & $\Omega(\Delta'\log{n})$*\\ \hline
\end{tabular}
\caption{{\shortOnly{\footnotesize} \onehalfspacing A summary of our results for {\em acknowledgment} and {\em progress}
for the local broadcast problem.
Results that are new, or significant improvements
over the previously best known result, are marked
with an ``*'' while a ``**'' marks results that were obtained from
prior work via minor tweaks.
}}
\label{fig:results}
\end{figure}}
\paragraph{Result Details:}
As mentioned,
the {\em local broadcast} problem assumes processes are
provided messages, one by one, which should be delivered
to their neighbors in the communication graph.
Increasingly, local broadcast solutions
are being studied separately from the higher level problems
that use them,
improving the composability of
solutions; e.g.,~\cite{KLN09,CLVW09,KKKL10,KKLMP11}.
Much of the older theory work in the wireless setting, however,
mixes the local broadcast logic with the logic
of the higher-level problem being solved; e.g.,~\cite{BGI87, CGR00, CGGPR00, CGOR00, CMS01, CCMPS01, CMS04, GPX05, KLNOR10}.
This previous work can be seen as implicitly solving local broadcast.
The efficiency of a local broadcast algorithm is characterized
by two metrics:
(1) an {\em acknowledgment bound}, which measures the time for
a sender process (a process that has a message for broadcast)
to deliver its message to all of its neighbors;
and (2) a {\em progress bound}, which measures
the time for a receiver process (a process that has a sender neighbor)
to receive at least one message~\footnote{Note that with respect to these definitions, a process can be both a sender and a receiver, simultaneously.}.
The acknowledgment bound is obviously interesting;
the progress bound has also been shown to be critical for analyzing
algorithms for many problems, e.g., global broadcast~\cite{KLN09}
where the reception of {\em any} message is normally sufficient to advance the algorithm.
The progress bound was first introduced and explicitly
specified in~\cite{KLN09, KKKL10}
but it was implicitly used already in many previous works~\cite{BGI87, CGR00, CGGPR00, CGOR00, CMS01, GPX05}.
Both acknowledgment and progress bounds typically depend on two parameters, the maximum contention $\Delta$
and the network size $n$. In the dual graph model,
an additional measure of maximum contention, $\Delta'$, is introduced
to measure contention in the unreliable communication link
graph, which is typically denser than the reliable link graph.
In our progress result for the dual graph model, we also
introduce $k$ to capture the {\em actual} amount of contention relevant
to a specific message.
These bounds are usually required to hold with high probability.
Our upper and lower bound results for the
local broadcast problem
in the classical and dual graph models are summarized
in Figure~\ref{fig:results}.
Here we highlight three key points regarding these results.
First, in both models, the upper and lower bounds match asymptotically.
Second, we show that $\Omega(\Delta\log{n})$ rounds
are necessary for acknowledgment in the classical model.
This answers in the negative the open question of whether
an $O(\Delta + \polylogn)$ solution is possible.
Third, the separation between the classical and dual graph
models occurs with respect to the progress bound,
where the tight bound for the classical model
is {\em logarithmic} with respect to contention,
while in the dual graph model it is {\em linear}---an exponential
gap.
Finally, in addition to the results described
in Figure~\ref{fig:results}, we also prove the following
additional separation between the two models:
in the dual graph model, the gap in progress
between distributed and centralized local
broadcast algorithms is (at least) linear in the
maximum contention $\Delta'$, whereas no such gap exists in the classical
model.
Before starting the technical sections, we remark that due to space considerations, the full proofs are omitted from the conference version and can be found in~\cite{GHLN12}.
\input{model}
\input{Problem}
\section{Problem}
\label{sec:problem}
\fullOnly{\paragraph{Preliminaries:}}
Our first step in formalizing the local broadcast
problem is to fix the input/output interface between the \emph{local broadcast module} (automaton) of a process and the higher layers at that process. In this interface, there are three actions as follows: (1) $bcast(m)_v$, an input action that provides the local broadcast module at process $v$ with message $m$ that has to be broadcast over $v$'s local neighborhood, (2) $ack(m)_v$, an output action that the local broadcast module at $v$ performs to inform the higher layer that the message $m$ was delivered to all neighbors of $v$ successfully, (3) $rcv(m)_u$, an output action that local broadcast module at $u$ performs to transfer the message $m$, received through the radio channel, to higher layers. To simplify definitions going forward, we assume w.l.o.g. that
every $bcast(m)$ input in a given execution is for a unique $m$.
We also need to restrict the behavior of the environment
to generate $bcast$ inputs in a {\em well-formed} manner,
which we define as strict alternation between $bcast$
inputs and corresponding $ack$ outputs at each process.
In more detail, for every execution and every process $u$,
the environment generates a $bcast(m)_u$ input only
under two conditions: (1) it is the first input to $u$
in the execution; or (2) the last input or non-$rcv$ output action
at $u$ was an $ack$.
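As a minimal illustration (our own encoding, not part of the model), the following checks this alternation condition on the event trace of a single process:
\begin{verbatim}
def well_formed(trace):
    # trace: a list of ('bcast', m), ('ack', m), ('rcv', m) events
    # at one process, in order.
    pending = None                # message awaiting its ack, if any
    for kind, m in trace:
        if kind == 'bcast':
            if pending is not None:
                return False      # new input before the previous ack
            pending = m
        elif kind == 'ack':
            if pending != m:
                return False      # ack must match the outstanding bcast
            pending = None
        # 'rcv' outputs are not constrained by well-formedness
    return True
\end{verbatim}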
\fullOnly{\paragraph{Local Broadcast Algorithm:}}
We say an algorithm {\em solves the local
broadcast problem} if and only if
in every execution, we have the following three properties:
(1) for every process $u$, for each $bcast(m)_u$ input, $u$ eventually
responds with a single $ack(m)_u$ output, and
these are the only $ack$ outputs generated by $u$;
(2) for each process $v$, for each message $m$, $v$ outputs $rcv(m)_v$ at most once and if $v$ generates a $rcv(m)_v$ output in round
$r$, then there is a neighbor $u \in N_{G'}(v)$ such that the following conditions hold: $u$ received a $bcast(m)_u$ input before round $r$ and has not output $ack(m)_u$ before round $r$;
(3) for each process $u$, if $u$ receives $bcast(m)_u$ in round
$r$ and responds with $ack(m)_u$ in round
$r' \geq r$, then w.h.p.: $\forall v\in N_G(u)$,
$v$ generates output $rcv(m)_v$
within the round interval $[r,r']$.
We call an algorithm that solves the local broadcast
problem a {\em local broadcast algorithm}.
\paragraph{Time Bounds:}
We measure the performance of a local broadcast algorithm
with respect to the \fullOnly{three}\shortOnly{two} bounds first formalized
in~\cite{KLN09}: {\em acknowledgment} (the worst case bound
on the time between a $bcast(m)_u$ and the corresponding $ack(m)_u$),\fullOnly{{\em receive } (the worst case bound on the time between
a $bcast(m)_v$ input and a $rcv(m)_u$ output
for all $u\in N_{G}(v)$),}
and {\em progress} (informally speaking the worst case bound on the time
for a process to receive at least one message
when it has one or more $G$ neighbors with messages to send).
The first \fullOnly{two bounds represent standard ways}\shortOnly{bound represents a standard way} of measuring
the performance of local communication. The progress bound is crucial for obtaining tight performance
bounds in certain classes of applications. See \cite{KLN09, KKKL10} for examples of places where the progress bound proves explicitly crucial. Also, \cite{BGI87, CGR00, CGGPR00, CGOR00, CMS01, GPX05} use the progress bound implicitly throughout their analysis.
In more detail, a local broadcast algorithm has
\fullOnly{three}\shortOnly{two} {\em delay functions} which describe
these delay bounds as a function of the relevant contention:
$f_{ack}$, \fullOnly{$f_{rcv}$, }and $f_{prog}$, respectively.
In other words, every local broadcast algorithm can be characterized
by these \fullOnly{three} \shortOnly{two} functions which must satisfy
properties we define below.
Before getting to these properties, however,
we first present a few helper definitions that we use
to describe local contention during a given round interval.
The following are defined with respect to a fixed execution.
(1) We say a process $u$ is {\em active} in round $r$,
or, alternatively, {\em active with $m$},
iff it received a $bcast(m)_u$ input in
a round $\leq r$ and it has not yet generated
an $ack(m)_u$ output in response. We furthermore
call a message $m$ active in round $r$ if there is a
process that is active with it in round $r$.
(2) For process $u$ and round $r$, contention $c(u,r)$ equals
the number of active $G'$ neighbors of $u$ in $r$.
Similarly, for every $r' \geq r$,
$c(u,r,r') = \max_{r''\in[r,r']}\{c(u,r'')\}$.
(3) For process $v$ and rounds $r' \geq r$,
$c'(v,r,r') = \max_{u\in N_G(v)}\{c(u,r,r')\}$.
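As a concrete toy example (ours): if $u$ has three $G'$-neighbors, exactly two of which are active in every round of $[r,r']$, then $c(u,r'') = 2$ for each $r'' \in [r,r']$ and hence $c(u,r,r') = 2$; if, moreover, $N_G(v) = \{u\}$, then $c'(v,r,r') = c(u,r,r') = 2$.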
We can now formalize the properties our delay
functions, specified for a local broadcast algorithm, must satisfy
for any execution:
\begin{enumerate}
\fullOnly{\item {\em Receive bound:} Suppose that $v$ receives a $bcast(m)_v$
input in round $r$ and $u\in N_{G'}(v)$ generates $rcv(m)_u$ in $r' \geq r$. Then with high probability we have $r' - r \leq f_{rcv}(c(u,r,r'))$.}
\item {\em Acknowledgment bound:} Suppose process $v$ receives a $bcast(m)_v$ input in round $r$, and let $r' \geq r$ be the round in which $v$ generates the corresponding $ack(m)_v$ output. Then with high probability we have $r' - r \leq f_{ack}(c'(v,r,r'))$.
\item {\em Progress bound:} For any pair of rounds $r$ and $r' \geq r$, and process $u$, if $r' - r > f_{prog}(c(u,r,r'))$ and there exists a neighbor $v\in N_G(u)$ that is active throughout the entire interval $[r,r']$, then with high probability, $u$ generates a $rcv(m)_u$ output in a round $r'' \leq r'$ for a message $m$ that was active at some round within $[r,r']$.
\end{enumerate}
We use notation $\Delta^\prime$ (or $\Delta$ for the classical model) to denote the maximum contention over all processes.\footnote{Note that since the maximum degree in the graph is an upper bound on the maximum contention, this notation is consistent with prior work, see e.g. ~\cite{KLN09, KKKL10, KKLMP11}.} In our upper bound results, we assume that processes are provided with upper bounds on contention that are within a constant factor of $\Delta'$ (or $\Delta$ for the classical model). Also, for the sake of concision,
in the results that follow,
we sometimes use the terminology ``{\em has an acknowledgment bound of}"
(resp.\fullOnly{{\em receive bound} and} {\em progress bound})
to indicate ``{\em specifies the delay function $f_{ack}$}"
(resp. \fullOnly{$f_{rcv}$ and }$f_{prog}$).
For example,
instead of saying ``the algorithm specifies
delay function $f_{ack}(k) = O(k)$,'' we might
instead say ``the algorithm has an acknowledgment
bound of $O(k)$.''
\paragraph{Simplified One-Shot Setting for Lower Bounds:}
The local broadcast problem as just described assumes that processes can keep
receiving messages as input forever and in an arbitrary asynchronous way.
This describes the practical reality of contention management, which
is an ongoing process. All our algorithms work in this general setting.
For our lower bounds, we use a setting in which we restrict the environment
to only issue broadcast requests at the beginning of round one. We call
this the {\em one-shot setting}. \fullOnly{Note that this restriction only
strengthens the lower bounds and it furthermore simplifies the notation.} Also, in most of our lower bounds, we consider $G$ and $G'$ to be bipartite graphs, where the nodes of one part are called \emph{senders} and receive the broadcast inputs, and the nodes of the other part are called \emph{receivers}, each of which has a sender neighbor. In this setting, when referring to contention $c(u)$, we mean $c(u,1)$. Note that in this setting, for any $r, r'$, $c(u,r,r')$ is less than or equal to $c(u,1)$; the same holds for $c'(u)$. Also, in these bipartite networks, the maximum $G'$-degree (or $G$-degree in the classical model) of the receiver nodes provides an upper bound on the maximum contention $\Delta'$ (or $\Delta$ in the classical model). When talking about these networks, and when it is clear from the context, we sometimes use the phrase {\em maximum receiver degree} instead of the maximum contention.
\section{Related Work}
\fullOnly{{\bf Single-Hop Networks}: The $k$-selection problem is the restricted case of the local broadcast problem for single-hop networks in the classical model. This problem is defined as follows. The network is a clique of size $n$, and $k$ arbitrary processes are active with messages. The problem is for all of these active processes to deliver their messages to all the nodes in the network. This problem received a vast amount of attention throughout the 70s and 80s, under different names; see \emph{e.g.}~\cite{TM77}--\cite{GW85}. For this problem, Tsybakov and Mikhailov \cite{TM77}, Capetanakis \cite{C79a, C79b}, and Hayes \cite{H79} (independently) presented deterministic tree algorithms with time complexity $O(k + k \log (\frac{n}{k}))$ rounds. Komlos and Greenberg \cite{KG85} showed that if processes know the value of $k$, there exist algorithms with the same time complexity in networks that do not provide any collision detection mechanism. Greenberg and Winograd \cite{GW85} showed a lower bound of $\Omega(\frac{k \log n}{\log k})$ on the time complexity of deterministic solutions to this problem in networks with collision detection.
On the other hand, Tsybakov and Mikhailov \cite{TM77}, Massey \cite{M80}, and Greenberg and Lander \cite{GL83} presented randomized algorithms that solve this problem in expected time $O(k)$ rounds. One can see that with simple modifications, these algorithms yield high-probability randomized algorithms with time complexity $O(k)+ \mathrm{polylog}(n)$ rounds.\\}
\fullOnly{{\bf Multi-Hop Networks}:} Chlamtac and Kutten~\cite{CK85} were the first to introduce the classical radio network model. Bar-Yehuda et al. \cite{BGI87} studied the theoretical problem of local broadcast in synchronized multi-hop radio networks as a submodule for the broader goal of global broadcast. For this, they introduced the {\em Decay} procedure, a randomized distributed procedure that solves the local broadcast problem. Since then, this procedure has been the standard method for resolving contention in wireless networks (see \emph{e.g.}~\cite{GPX05, KLN09, KKKL10, KKLMP11}). In this paper, we prove that a slightly modified version of the Decay protocol achieves optimal progress and acknowledgment bounds in both the classical radio network model and the dual graph model. A summary of these time bounds is presented in Figure~\ref{fig:results}.
Deterministic solutions to the local broadcast problem are typically based on combinatorial objects called \emph{Selective Families}, see \emph{e.g.} \cite{CGGPR00}--\cite{CMS04}. Clementi et al. \cite{CMS01} construct $(n, k)$-selective families of size $O(k \log n)$ (\cite[Theorem 1.3]{CMS01}) and show that this bound is tight for these selective families (\cite[Theorem 1.4]{CMS01}). Using these selective families, one can obtain local broadcast algorithms with a progress bound of $O(\Delta \log n)$ in the classical model. These families do not provide any local broadcast algorithm in the dual graph model. Also, in the same paper, the authors construct $(n,k)$-strongly-selective families of size $O(k^2 \log n)$ (\cite[Theorem 1.5]{CMS01}). They also show (in \cite[Theorem 1.6]{CMS01}) that this bound is tight for strongly-selective families when $k \leq \sqrt{2n}-1$. Using these strongly selective families, one can obtain local broadcast algorithms with an acknowledgment bound of $O(\Delta^2 \log n)$ in the classical model and with an acknowledgment bound of $f_{ack}(k)=O((\Delta^\prime)^{2} \log n)$ in the dual graph model. As can be seen from our results (summarized in Figure~\ref{fig:results}), all three of the above time bounds are far from the optimal bounds for the local broadcast problem. This shows that when randomized solutions are admissible, solutions based on these notions of selective families are not optimal.
In \cite{CCMPS01}, Clementi et al. introduce a new type of selective families called Ad-Hoc Selective Families, which provide new solutions for the local broadcast problem if we assume that processes know the network. Clementi et al. show in \cite[Theorem 1]{CCMPS01} that for any given collection $\mathcal{F}$ of subsets of the set $[n]$, each with size in the range $[\Delta_{min}, \Delta_{max}]$, there exists an ad-hoc selective family of size $O((1+\log(\Delta_{max}/\Delta_{min})) \cdot \log |\mathcal{F}|)$. This, under the assumption of processes knowing the network, translates to a deterministic local broadcast algorithm with a progress bound of $O(\log\Delta \, \log n)$ in the classical model. These families do not yield any broadcast algorithm for the dual graph model. Also, in \cite{CMS04}, Clementi et al. show that for any given collection $\mathcal{F}$ of subsets of the set $[n]$, each of size at most $\Delta$, there exists a Strongly-Selective version of Ad-Hoc Selective Families of size $O(\Delta \log |\mathcal{F}|)$ (without using the name ad hoc). This result shows that, again under the assumption of knowledge of the network, there exist deterministic local broadcast algorithms with acknowledgment bounds of $O(\Delta \log n)$ and $O(\Delta' \log n)$, respectively, in the classical and dual graph models. Our lower bounds for the classical model show that both of the above upper bounds on the size of these objects are tight.
\section{Upper Bounds for Both Classical and Dual Graph Models}\label{sec:upper}
In this section, we show that by slight modifications to Decay protocol, we can achieve upper bounds that match the lower bounds that we present in the next sections. \shortOnly{Due to space considerations, the details of the related algorithms are omitted from the conference version and can be found in~\cite{GHLN12}.
\begin{theorem} In the classical model, there exists a distributed local broadcast algorithm that gives acknowledgment bound of $f_{ack}(k) = O(\Delta \log n)$ and progress bound of $f_{prog}(k) = O(\log \Delta \log n)$. \end{theorem}
\begin{theorem}There exists a distributed local broadcast algorithm that, in the classical model, gives bounds of $f_{ack}(k)=O(\Delta \log n)$ and $f_{prog}(k)= O(\log \Delta \log n)$, and in the dual graph model, gives bounds of $f_{ack}(k) = O(\Delta' \log n)$ and $f_{prog}(k) = O(\min\{k \log \Delta' \log n, \Delta' \log n\})$.\end{theorem}
\begin{theorem} In the dual graph model, there exists a distributed local broadcast algorithm that gives acknowledgment bound of $f_{ack}(k) = O(\Delta' \log n)$ and progress bound of $f_{prog}(k) = O(\min\{k \log k \log n, \Delta' \log n\})$. \end{theorem}
}
\fullOnly{
We first present three local broadcast algorithms. The first algorithm, the Synchronous Acknowledgment Protocol (SAP), yields a good acknowledgment bound, and the other two algorithms, the Synchronous Progress Protocol (SPP) and the Asynchronous Progress Protocol (APP), achieve good progress bounds. Of these two progress protocols, SPP is exactly the same as the \emph{Decay procedure} of \cite{BGI87}. In that paper, this protocol was designed as a submodule for the global broadcast problem in the classical model. Here, we reanalyze that protocol for the dual graph model. Furthermore, the APP protocol is similar to the Harmonic Broadcast Algorithm in \cite{KLNOR10}. In that work, the Harmonic Broadcast Algorithm is introduced and used as a solution to the problem of global broadcast in the dual graph model. We analyze a modified version of this algorithm, which we call the APP protocol, and show that it yields good progress bounds in the dual graph model. Then, we show how to combine the acknowledgment and progress protocols to get both fast acknowledgment and fast progress. In particular, one can view the combination of SAP and SPP as an optimized version of the Decay procedure, adjusted to provide tight progress and acknowledgment bounds together.
\subsection{The Synchronous Acknowledgment Protocol (SAP)}\label{subsec:SAP}
In this section, we present the SAP protocol and show that this algorithm has acknowledgment bounds of $O(\Delta' \log n)$ and $O(\Delta \log n)$, respectively, in the dual graph and the classical model. \fullOnly{The reason that we call this algorithm synchronous is that the rounds are divided into contiguous sets named epochs, and the epochs of different processes are synchronized (aligned) with each other.} In the SAP algorithm, each epoch consists of $\Theta(\Delta' \log n)$ rounds. Whenever a process receives a message for transmission, it waits until the start of the next epoch. If a process $v$ has received input $bcast(m)_v$ before the start of an epoch and has not output $ack(m)_v$ by that time, we say that in that epoch, process $v$ is \emph{ready with message $m$} or simply \emph{ready}.
As presented in Algorithm \ref{alg:SAP}, each epoch of SAP consists of $\log \Delta'$ phases as follows. For each $i \in [\log \Delta']$, the $i^{th}$ phase is comprised of $\Theta(2^i \log n)$ rounds where in each such round, each ready process transmits with probability $\frac{1}{2^i}$. After the end of the epoch, each ready process acknowledges its message.
\begin{algorithm}[th]\label{alg:SAP}
\caption{An epoch of SAP in process $v$ when $v$ is ready with message $m$}
\begin{algorithmic}
\footnotesize
\For {$i$ := $1$ to $\log \Delta'$}
\For {$j$:= $1$ to $\Theta(2^i \log n)$}
\State transmit $m$ with probability $\frac{1}{2^i}$
\EndFor
\EndFor
\State output $ack(m)_v$
\end{algorithmic}
\end{algorithm}
\begin{lemma} \label{lemma:SAP-Dual} The Synchronous Acknowledgment Protocol solves the local broadcast problem in the dual graph model and has acknowledgment time of $O(\Delta' \log n)$.
\end{lemma}
\begin{proof} Consider a process $v$ and a round $r$ such that $v$ receives input $bcast(m)_v$ in round $r$. First, note that process $v$ acknowledges message $m$ at most two epochs after round $r$, \emph{i.e.}, process $v$ outputs $ack(m)_v$ by time $r' = r + \Theta(\Delta' \log n)$. Now assume that epoch $\wp$ is the epoch in which $v$ becomes ready with $m$. In order to show that SAP solves the local broadcast problem, we claim that by the end of epoch $\wp$, with high probability, $m$ is delivered to all the processes in $\mathcal{N}_{G}(v)$. Consider an arbitrary process $u \in \mathcal{N}_{G}(v)$. To prove this claim, we show that by the end of epoch $\wp$, with high probability, $u$ receives $m$. A union bound then completes the proof of the claim.
To show that $u$ receives $m$ by the end of epoch $\wp$, we focus on the processes in $\mathcal{N}^{+}_{G\,'}(u)$. Suppose that $r''$ is the last round of epoch $\wp$ and that the number of ready processes in $\mathcal{N}^{+}_{G'}(u)$ during this epoch is at most $k=c(u, r, r'')$. Now, consider the phase $i = \lfloor \log k \rfloor$ of epoch $\wp$. In each round of this phase, the probability that $u$ receives the message of $v$ is at least $ \frac{1}{2^i} \;(1-\frac{1}{2^i})^{k} \approx \frac{1}{k}\; e^{-\,\frac{k}{k}} = \frac{1}{e\cdot k}$, where the first factor is the probability of transmission of process $v$ and the second factor is the probability that the rest of the ready processes in $\mathcal{N}^{+}_{G'}(u)$ remain silent. Now, phase $i$ has $\Theta(2^i \log n) = \Theta(k \log n)$ rounds. Therefore, the probability that $u$ does not receive $m$ in phase $i$ is at most $(1-\frac{1}{e\cdot k})^{\Theta(k \log n)} = e^{-\Theta(\log n)} = (\frac{1}{n})^{\Theta(1)}$.
Hence, the probability that $u$ does not receive the message $m$ in epoch $\wp$ is $(\frac{1}{n})^{\Theta(1)}$. This completes the proof.
\end{proof}
\begin{corollary} \label{crl:SAP-classical}The SAP protocol solves the local broadcast problem in the classical model and has an acknowledgment bound of $O(\Delta \log n)$.
\end{corollary}
\begin{proof} The corollary can be easily inferred from Lemma \ref{lemma:SAP-Dual} by setting $G=G'$.
\end{proof}
\subsection{The Synchronous Progress Protocol (SPP)}\label{subsec:SPP}
In this section, we present and analyze the SPP protocol, which is also known as the Decay procedure. From Theorem 1 in \cite{BGI87}, it can be inferred that this protocol achieves a progress bound of $O(\log\Delta \, \log n)$ in the classical model. Here, we reanalyze this protocol with a specific focus on its progress bound in the dual graph model. More specifically, we show that this protocol yields a progress bound of $f_{prog}(k) = O(k \log(\Delta') \log n)$ in the dual graph model.
Similar to the SAP protocol, the rounds of SPP are divided into contiguous sets called epochs\fullOnly{ and epochs of different processes are synchronized with each other}. The length of each epoch of SPP is $\log \Delta'$ rounds. Similar to the SAP protocol, whenever a process $v$ receives a message $m$ for transmission, by getting input $bcast(m)_v$, it waits until the start of the next epoch. Moreover, if input $bcast(m)_v$ happens before the start of an epoch and process $v$ has not output $ack(m)_v$ by that time, we say that in that epoch, process $v$ is \emph{ready with message $m$} or simply \emph{ready}.
As presented in Algorithm \ref{alg:SPP}, in each epoch of SPP and for each round $i \in [\log \Delta']$ of that epoch, each ready process transmits its message with probability $\frac{1}{2^i}$. Each process acknowledges its message $\Theta(\Delta' \log n)$ epochs after it receives the message.
\begin{algorithm}[th]\label{alg:SPP}
\caption{The procedure of SPP in process $v$ when $v$ becomes ready with message $m$}
\begin{algorithmic}
\footnotesize
\For {$j$ := $1$ to $\Theta(\Delta' \log n)$} \hfill {/*Each turn of this loop is one epoch*/}
\For {$i$ := $1$ to $\log \Delta'$}
\State transmit $m$ with probability $\frac{1}{2^i}$
\EndFor
\EndFor
\State output $ack(m)_v$
\end{algorithmic}
\end{algorithm}
\fullOnly{From the above description, it is clear that the general approaches used in the protocols SAP and SPP are similar. In both of these protocols, in each round $r$, each ready process transmits with some probability $p(r)$, and this probability only depends on the protocol and the round number, i.e., the transmission probabilities of different ready processes are equal. Also, one can see that in round $r$, a node $u$ has the maximum probability of receiving some message if $c(u, r)$ is around $\frac{1}{p(r)}$. Hence, having rounds with different transmission probabilities amounts to aiming at nodes with different levels of contention, i.e., different values of $c(u, r)$. Noting this point, we see that the core difference between the SAP and SPP protocols is as follows. In SAP, each epoch starts with a phase of rounds all aimed at nodes with the smallest contention. The number of rounds in this phase is chosen so that all the nodes at that contention level receive all the messages that are under transmission in their $G$-neighborhood. Then, after clearing one level of contention, SAP moves to the next level, and it continues this procedure until clearing the nodes at the largest level of contention. On the other hand, SPP is designed so that it makes progress on all levels of contention gradually and altogether. That is, in each epoch of SPP, which is much shorter than those of SAP, every level of contention is aimed at exactly once.
Now, we show that because of this property, SPP has a good progress bound.
}
\begin{lemma} \label{lem:SPP} The synchronous progress protocol solves the local broadcast problem in the dual graph model and provides progress bound of $f_{prog}(k)=O(k \log (\Delta') \log n)$. \fullOnly{Also, SPP provides receive bound of $f_{rcv}(k)=O(k \log (\Delta')\, \log n)$.}
\end{lemma}
\fullOnly{\begin{proof}
It is clear that in SPP, each message is acknowledged after $\Theta(\Delta' \log n)$ epochs and therefore after $O(\Delta' \log(\Delta') \log n)$ rounds. Similar to the proof of Lemma \ref{lemma:SAP-Dual}, we can easily see that each acknowledged message is delivered to all the $G$-neighbors of its sender. Thus, SPP solves the local broadcast problem.
Now, we first show that SPP has a progress bound of $f_{prog}(k) = O(k \log (\Delta') \log n)$. Actually, we show something stronger: within the same time bound, $u$ receives the messages of each of its ready $G$-neighbors. For this, suppose that there exists a process $u$ and a round $r$ such that in round $r$, at least one process $w \in \mathcal{N}_{G}(u)$ has a message for transmission that $u$ has not received. Also, suppose that the first round after $r$ in which $u$ receives the message of $w$ is round $r'$. Such a round exists w.h.p.\ as SPP solves the local broadcast problem. Let $k= c(u, r, r')$, i.e., the total number of processes in $\mathcal{N}_{G\,'}(u)$ that are ready in at least one round in the range $[r, r']$. We show that $r' \leq r + \Theta(k \log (\Delta') \log n)$.
Suppose that $P$ consists of all the epochs starting with the first epoch after round $r$ and ending with the epoch that includes round $r'$. If $P$ has fewer than $\Theta(k \log n)$ epochs, we are done with the proof. On the other hand, assume that $P$ has at least $\Theta(k \log n)$ epochs. Let $i = \lfloor \log k \rfloor$. Now, for the $i^{th}$ round of each epoch in $P$, the probability that $u$ receives the message of $w$ in that round is at least
\[ \frac{1}{2^i} \;(1-\frac{1}{2^i})^{k} \approx \frac{1}{k}\; e^{-\,\frac{k}{k}} = \frac{1}{e\cdot k}\]
Therefore, the probability that $u$ does not receive $w$'s message in the $\Theta(k \log n)$ epochs of $P$ is at most
\[ (1-\frac{1}{e\cdot k})^{\Theta(k \log n)} = e^{-\Theta(\log n)} = (\frac{1}{n})^{\Theta(1)}\]
To see the second part of the lemma, suppose that process $v$ receives input $bcast(m')_v$ in round $\tau$ and outputs $ack(m')_v$ in round $\tau'$. Let $k' = c'(v, \tau, \tau')$. We argue that all processes in $\mathcal{N}_{G}(v)$ receive $m'$ by round $ \tau''= \tau + \Theta(k' \log (\Delta') \log n)$. Using the above argument, we see that each process $u \in \mathcal{N}_{G}(v)$ receives the message of $v$ in time $O(c(u, r, r') \log (\Delta') \log n)$ where $r$ and $r'$ are defined as above for $u$ and also, we have $r,r' \in [\tau, \tau']$. Moreover, by definition of the $c'(v, \tau, \tau')$, for each $u \in \mathcal{N}_{G}(v)$, we have $c(u, r, r') \leq c'(v, \tau, \tau') = k'$. Thus, all neighbors of $v$ receive $m'$ by time $\tau''$. This completes the proof of the second part.
\end{proof}}
\fullOnly{\begin{lemma} \label{lemma:SPP-classical} The SPP protocol solves the local broadcast problem in the classical model and gives a progress bound of $O(\log(\Delta) \log n)$.
\end{lemma}
\begin{proof} This bound can also be inferred from Theorem 1 in \cite{BGI87}. For the sake of completeness, and since the analysis is simple and similar to the previous ones, we present the complete version here.
Similar to Corollary \ref{crl:SAP-classical}, we can easily see that the SPP protocol solves the local broadcast problem in the classical model, from the result for the dual graph model, by setting $G'=G$ in Lemma \ref{lem:SPP}. To see the progress time bound, consider a process $u$ and suppose that there is a round $r$ in which some process in $\mathcal{N}_G(u)$ has a message for transmission that process $u$ has not received so far. Also, let $r'$ be the first round after $r$ in which $u$ receives a message. Again, such a round exists since SPP solves the local broadcast problem. Let $k = c(u, r, r')$ and $i = \lfloor \log k \rfloor$. The probability that $u$ receives a new message in the $i^{th}$ round of each epoch after round $r$ is at least $\frac{k}{2^i} \;(1-\frac{1}{2^i})^{k} \approx \; e^{-\,\frac{k}{k}} = \frac{1}{e}$. Therefore, the probability that $r' > r+ \Theta(\log n \, \log(\Delta))$ is at most $(1-\frac{1}{e})^{\Theta(\log n)} = (\frac{1}{n})^{\Theta(1)}$. This completes the proof.
\end{proof}}
\shortOnly{
{\noindent\bf The Asynchronous Progress Protocol (APP):}}\fullOnly{\subsection{The Asynchronous Progress Protocol (APP)}\label{subsec:APP}}
In this section, we present and study the APP protocol and show that it yields a progress bound of $f_{prog}(k) = O(k \log(k) \log n)$ in the dual graph model. Note that this is better than the bound achieved by SPP. However, in comparison to the bound achieved by SPP in the classical model, APP does not guarantee a good progress time. This protocol is, in principle, similar to the Harmonic Broadcast Algorithm in \cite{KLNOR10}, which is used for global broadcast in the dual graph model.
Similar to the SAP and SPP protocols, the rounds of APP are divided into epochs as well. However, in contrast to those two protocols, and as can be inferred from the name, the epochs of APP in different processes are not synchronized with each other. Also, in APP, a process $v$ becomes \emph{ready} immediately after it receives the $bcast(m)_v$ input.
Whenever a process becomes ready, it starts an epoch as follows. This epoch consists of $\log \Delta' + \log \log \Delta'$ phases. For each $i \in [\log \Delta'+ \log \log \Delta']$, the $i^{th}$ phase is comprised of $\Theta(2^i \log n)$ rounds where in each such round, each ready process transmits with probability $\frac{1}{2^i}$. Also, the process outputs $ack(m)_v$ at the end of this epoch.
\begin{algorithm}[th]\label{alg:APP}
\caption{An epoch of APP in process $v$ when $v$ is ready with message $m$}
\begin{algorithmic}
\For {$i$ := $1$ to $\log \Delta' + \log \log \Delta'$ }
\For {$j$:= $1$ to $\Theta(2^i \log n)$}
\State transmit $m$ with probability $\frac{1}{2^i}$
\EndFor
\EndFor
\State output $ack(m)_v$
\end{algorithmic}
\end{algorithm}
\begin{lemma} \label{lem:APP} The asynchronous progress protocol solves the local broadcast problem in the dual graph model and has progress time of $f_{prog}(k)= O(k \log (k) \log n)$. \fullOnly{Also, APP achieves receive bound of $f_{rcv}(k)=O(k \log (k) \log n)$.}
\end{lemma}
\fullOnly{\begin{proof} Suppose that there exists a process $u$ and a round $r$ such that in round $r$, some process $v$ in $\mathcal{N}_{G}(u)$ has a message $m$ that has not been received by $u$, i.e., $m$ is new to $u$. Let $r'$ be an arbitrary round after round $r$ and let $R$ be the set of all rounds in the range $[r, r']$. So, we have $r' = r+|R|-1$. Then, let $k= c(u, r, r')$. In order to prove the progress bound part of the lemma, we show that if $r' - r\geq \Theta(k \cdot\log k \cdot\log n)$, then, with high probability, $u$ receives $m$ by round $r'$. Note that this is even stronger than proving the claimed progress bound because this means that $u$ receives each of the new messages (new at round $r$) by $r'$.
Since $k$ can be at most $\Delta'$, this would automatically show that APP solves the local broadcast problem. Also, similar to the proof of Lemma \ref{lem:SPP}, this would prove the second part of the theorem as well.
Let $S$ be the set of all processes in $\mathcal{N}_{G'}(u) - \{v\}$ that are ready in at least one round of $R$. Therefore, we have $|S|=k-1$. First, since the acknowledgment in APP takes a full epoch, during the rounds of $R$, each process in $S$ transmits at most a constant number of messages and therefore, the total number of messages under transmission in $\mathcal{N}_{G'}(u)$ during the rounds of $R$ is $O(k)$.
We show that w.h.p.\ $u$ receives message $m$ by the end of the rounds of $R$. In order to do this, we divide the rounds of $R$ into two categories, \emph{free} and \emph{busy}. Similar to \cite{KLNOR10}, we call a round $\tau$ \emph{busy} if the total probability of transmission of the processes of $S$ in round $\tau$ is greater than or equal to $1$. Otherwise, the round $\tau$ is called \emph{free}. Similar to \cite[Lemma 11]{KLNOR10}, we can see that the total number of busy rounds in the set $R$ is $O(k \cdot\log k \cdot\log n)$. Therefore, there are $\Theta(k \cdot \log k \cdot\log n)$ free rounds in $R$. On the other hand, similar to \cite[Lemma 11]{KLNOR10}, we can easily see that if $\tau \in R$ is a free round and the probability of transmission of $v$ in round $\tau$ is $p_v(\tau)$, then $u$ receives the message of process $v$ in round $\tau$ with probability at least $\frac{p_v(\tau)}{4}$. Now, because of the way that APP chooses its probabilities, and since $|R| = \Theta(k \;\log k \;\log n)$, we can infer that the transmission probability of $v$ in each round $\tau \in R$ is at least $\frac{1}{2k \,\log k}$. Therefore, since $R$ has $\Theta(k \cdot \log k \cdot\log n)$ free rounds and for each free round $\tau \in R$, $u$ receives the message of $v$ with probability at least $\frac{p_v(\tau)}{4}$, we can conclude that the probability that $u$ does not receive the message of $v$ by the end of the rounds of $R$ is at most
\[ (1-\frac{1}{4 k \,\log k})^{\Theta(k \; \log k \;\log n)} \leq e^{-\Theta(\log n)} = (\frac{1}{n})^{\Theta(1)}\]
This completes the proof.
\end{proof}
}
\shortOnly{
{\noindent\bf {Interleaving Progress and Acknowledgment Protocols:}}}\fullOnly{
\subsection{Interleaving Progress and Acknowledgment Protocols}}
\fullOnly{In this section, we show how we can achieve both fast progress and fast acknowledgment bounds by combining our acknowledgment protocol, SAP, with either of the progress protocols, SPP or APP.} The general outline for combining the above algorithms is as follows. Suppose that we want to combine the protocol SAP with a protocol $P_{prog} \in \{SPP, APP\}$. Then, whenever process $v$ receives message $m$ for transmission, via a $bcast(m)_v$ input event, we provide this message as input to both of the protocols SAP and $P_{prog}$. Then, we run the SAP protocol in the odd rounds, and protocol $P_{prog}$ in the even rounds. In the combined algorithm, process $v$ acknowledges the message $m$ by outputting $ack(m)_v$ in the round in which SAP acknowledges $m$. Moreover, in that round, the protocol $P_{prog}$ also finishes working on this message. \fullOnly{In the following, we show that using}\shortOnly{Using} this combination, we achieve the fast progress and acknowledgment bounds together. \fullOnly{More formally, we show that the acknowledgment and progress times of the combined algorithm are, respectively, at most twice the acknowledgment time of SAP and twice the progress time of $P_{prog}$.}
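Schematically, the round dispatcher of the combined algorithm is just the following (a minimal Python sketch with hypothetical \texttt{sap\_step} and \texttt{prog\_step} callbacks, each executing one round of the corresponding protocol; rounds are numbered from $1$, so odd rounds go to SAP):
\begin{verbatim}
def run_interleaved(total_rounds, sap_step, prog_step):
    """SAP acts in odd rounds; the progress protocol in even rounds."""
    for r in range(1, total_rounds + 1):
        if r % 2 == 1:
            sap_step()    # one round of SAP's current epoch/phase
        else:
            prog_step()   # one round of SPP or APP
\end{verbatim}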
\fullOnly{\begin{lemma} \label{lem:Comb-Ack}If we interleave the SAP protocol with a protocol $P_{prog} \in \{SPP, APP\}$, the resulting algorithm solves the local broadcast problem and has acknowledgment bound of $f_{ack}(k) = O(\Delta' \log n)$ in the dual graph model, and acknowledgment bound of $f_{ack}(k) = O(\Delta \log n)$ in the classical model.
\end{lemma}}
\fullOnly{\begin{proof}First, note that the even and odd rounds of different processes are aligned and therefore, in each round, only one of the protocols SAP and $P_{prog}$ is transmitting throughout the whole network. Because of this, it is clear that when, in process $v$, the SAP protocol acknowledges message $m$, $m$ has been successfully delivered to all the processes in $\mathcal{N}_{G}(v)$. Now, suppose that process $v$ receives an input $bcast(m)_v$ in round $r$. Using Lemma \ref{lemma:SAP-Dual}, we know that the SAP protocol acknowledges message $m$ within $\Theta(\Delta' \log n)$ odd rounds after $r$. Thus, process $v$ outputs $ack(m)_v$ in a round $r'=r+\Theta(\Delta' \log n)$. Hence, we have that the interleaved algorithm solves the local broadcast problem and has acknowledgment bounds of $f_{ack}(k) = O(\Delta' \log n)$ and $f_{ack}(k) = O(\Delta \log n)$, respectively, for the dual graph and the classical radio broadcast models.
\end{proof}
}
\begin{corollary}If we interleave SAP with SPP, in the dual graph model, we get acknowledgment bound of $f_{ack}(k) = O(\Delta' \log n)$ and progress bound of $f_{prog}(k) = O(\min\{k \log (\Delta') \log n, \Delta' \log n\})$. Also, this interleaving gives acknowledgment and progress bounds of, respectively, $O(\Delta \log n)$ and $O(\log (\Delta) \log n)$ in the classical radio broadcast model. \end{corollary}
\fullOnly{\begin{proof} The acknowledgment bound parts of the corollary follow immediately from Lemma \ref{lem:Comb-Ack}. For the progress bound parts, consider a process $u$ and a round $r$ such that there exists a process $v \in \mathcal{N}_{G}(u)$ that is transmitting message $m$ and process $u$ has not received message $m$ before round $r$. Note that for each $r'>r$, if we have $c(u, r, r')=k$, then, by definition of $c(u, r, r')$, in each even round $\tau \in [r, r']$, we have $c(u, \tau) \leq k$. The rest of the proof follows easily from Lemmas \ref{lem:SPP} and \ref{lemma:SPP-classical}, and by focusing on the SPP protocol in the even rounds after $r$.
\end{proof}
}
\begin{corollary}If we interleave SAP with APP, in the dual graph model, we get acknowledgment bound of $f_{ack}(k) = O(\Delta' \log n)$ and progress bound of $f_{prog}(k) = O(\min\{k \log (k) \log n, \Delta' \log n\})$. \end{corollary}
\fullOnly{
\begin{proof} Again, the acknowledgment bound part of the corollary follows immediately from Lemma \ref{lem:Comb-Ack}. For the progress part, consider a process $u$ and a round $r$ such that there exists some process $v \in \mathcal{N}_{G}(u)$ that is transmitting a message $m$ and process $u$ has not received message $m$. Suppose that $r'$ is the first round in which $u$ receives $m$. Such a round exists with high probability, as we know from Lemma \ref{lem:Comb-Ack} that the combined algorithm solves the local broadcast problem. Let $k = c(u, r, r')$. Let $r'' = r + \Theta(\min\{k \log (k) \log n, \Delta' \log n\})$. If $r' \leq r''$, we are done with the proof. In the more interesting case, suppose that $r' > r''$.
Now, by definition of $c(u, r, r')$, we know that in each even round $\tau \in [r, r']$, we have $c(u, \tau) \leq k$. Hence, we also have that in each even round $\tau \in [r, r'']$, $c(u, \tau) \leq k$. Let $S$ be the set of processes in $\mathcal{N}_{G\,'}(u)$ that are active in at least one even round in range $[r, r'']$. Thus, $|S|\leq k$. Since $r'' - r \leq \Theta(\Delta' \log n)$ and the algorithm acknowledges each message after $\Theta( \Delta' \log n)$ rounds, during even rounds in range $[r, r'']$, each process $w \in S$ transmits only a constant number of messages. Therefore, the total number of messages under transmission during even rounds in range $[r, r'']$ is $O(k)$. The rest of the proof can be completed exactly as that in the proof of Lemma \ref{lem:APP}.
\end{proof}
}
}
\section{Model}
\label{sec:model}
To study the local broadcast problem in synchronous multi-hop radio networks, we use two models, namely the \emph{classical radio network model} (also known as the radio network model) and the \emph{dual graph model}. The former model assumes that all connections in the network are reliable and it has been extensively studied since the 1980s~\cite{CK85, BGI87, ABLP91, CGR00, CGGPR00, CGOR00, CMS01, CCMPS01, CMS04, GPX05, KLN09, KKKL10}. On the other hand, the latter model is a more general model, introduced more recently in 2009~\cite{KLN09, KLN-DISC-09, KLN09BA}, which includes the possibility of unreliable edges. Since the former model is simply a special case of the latter, we use the dual graph model for explaining the model and the problem statement. However, in places where we want to emphasize a result in the classical model, we focus on the classical model and explain how the result specializes to this specific case.
In the dual graph model, radio networks have some reliable and
potentially some unreliable links. Fix some $n\geq 1$.
We define a network $(G,G')$
to consist of two undirected graphs, $G=(V,E)$
and $G'=(V,E')$,
where $V$ is a set of $n$ wireless nodes
and $E \subseteq E'$, where intuitively set $E$ is the set of reliable edges while $E'$ is the set of all edges, both reliable and unreliable. In the classical radio network model, there is no unreliable edge and thus, we simply have $G = G'$, i.e., $E = E'$.
We define an algorithm ${\cal A}$
to be a collection of $n$ randomized processes,
described by probabilistic automata.
An execution of ${\cal A}$ in network $(G,G')$
proceeds as follows:
first, we fix a bijection
$proc$ from $V$ to ${\cal A}$.
This bijection assigns
processes to graph nodes.
We assume this bijection is defined by
an adversary and is not known to the processes.
We do not, however, assume that the definition of $(G,G')$
is unknown to the processes (in many real world settings it
is reasonable to assume that devices can make some
assumptions about the structure of their network).
In this study, to strengthen our results,
our upper bounds make no assumptions
about $(G,G')$ beyond bounds on the maximum contention and polynomial bounds on the size of the network,
while our lower bounds allow full knowledge of the
network graph.
An execution proceeds in synchronous rounds $1,2,...$,
with all processes starting in the first round.
At the beginning of each round $r$, every process
$proc(u), u\in V$
first receives inputs (if any) from the environment.
It then decides whether or not to transmit a message
and which message to send.
Next, the adversary chooses
a {\em reach set} that consists of $E$
and some subset, potentially empty,
of edges in $E'-E$. Note that in the classical model, set $E' - E$ is empty and therefore, the reach set is already determined.
This set describes the links
that will behave reliably in this round. We assume that the adversary has full knowledge of the state of the network while choosing this reach set.
For a process $v$, let $B_{v,r}$ be the set of all graph nodes $u$
such that $proc(u)$ broadcasts in $r$ and $\{u,v\}$ is in the
reach set for this round.
What $proc(v)$ receives in this round \shortOnly{is determined as follows.}\fullOnly{depends
on the size of $B_{v,r}$, the messages
sent by processes assigned to nodes in $B_{v,r}$,
and $proc(v)$'s behavior.}
If $proc(v)$ broadcasts in $r$, then it receives
only its own message.
If $proc(v)$ does not broadcast, there are two cases:
(1) if $|B_{v,r}| = 0$ or $|B_{v,r}| > 1$, then $proc(v)$
receives $\bot$ (indicating {\em silence});
(2) if $|B_{v,r}| = 1$, then $proc(v)$ receives the message sent by
$proc(u)$, where $u$ is the single
node in $B_{v,r}$.
That is, we assume processes cannot send and receive simultaneously,
and also, there is no collision detection in this model. However, to strengthen our results, we note that our lower bound results hold even in the model with collision detection, i.e., where process $v$ receives a special collision indicator message $\top$ in case $|B_{v,r}| > 1$. After processes receive their messages, they generate outputs (if any) to pass back to the environment.
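In code, the per-round receive rule can be captured as follows (our own sketch; here \texttt{reach\_set} is the adversary's choice for the round, represented as a set of undirected edges, and \texttt{None} encodes the silence symbol $\bot$):
\begin{verbatim}
def received(v, broadcasters, messages, reach_set):
    """What proc(v) receives this round under the collision rule."""
    if v in broadcasters:              # a transmitter hears only itself
        return messages[v]
    B = {u for u in broadcasters
         if frozenset((u, v)) in reach_set}
    if len(B) == 1:                    # exactly one reachable broadcaster
        (u,) = B
        return messages[u]
    return None                        # |B| = 0 or |B| > 1: silence
\end{verbatim}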
\paragraph{Distributed vs. Centralized Algorithms:}
The model defined above describes distributed algorithms in a radio network setting. To strengthen our results, in some of our lower bounds we
consider the stronger model of {\em centralized} algorithms.
We formally define a centralized algorithm to be
the same as the distributed algorithms above,
but with the following two modifications: (1) the processes
are given $proc$ at the beginning of the execution;
and (2) the processes can make use of the current state and inputs
of {\em all} processes in the network when making decisions
about their behavior.
\paragraph{Notation \& Assumptions:}
\fullOnly{The following notation and assumptions will simplify
the results to follow.}
For each $u\in V$, the notations
$N_G(u)$ and $N_{G'}(u)$ describe, respectively, the
neighbors of $u$ in $G$ and $G'$. Also, we define $N^{+}_{G}(u)= N_G(u) \cup \{u\}$ and $N^{+}_{G'}(u)= N_{G'}(u) \cup \{u\}$.
For any algorithm ${\cal A}$,
we assume that each process of ${\cal A}$
has a unique identifier. To simplify notation, we assume the identifiers are from $\{1,...,n\}$. We remark that our lower bounds hold even with such strong identifiers, whereas for the upper bounds, we just need the identifiers of different processes to be different.
Let $id(u), u\in V$ describe the id of process $proc(u)$.
For simplicity,
throughout this paper we often use
the notation {\em process $u$}, or sometimes just $u$,
for some $u\in V$,
to refer to $proc(u)$ in the execution in question.
Similarly, we sometimes use {\em process $i$},
or sometimes just $i$,
for some $i\in\{1,...,n\}$,
to refer to the process with id $i$. We sometimes use the notation $[i,i']$, for integers $i' \geq i$,
to indicate the sequence $\{i,...,i'\}$,
and the notation $[i]$ for integer $i$ to indicate $[1,i]$.
Throughout, we use the notation {\em w.h.p.} ({\em with high probability}) to indicate
a probability of at least $1-\frac{1}{n}$. Also, unless specified otherwise, all logarithms are natural logarithms. Moreover, we omit floors and ceilings whenever it is clear that omitting them does not affect the calculations by more than a change in constants.
\section{Introduction}
Travel industry actors, such as airlines and hotels, nowadays use sophisticated pricing models to maximize their revenue, which results in highly volatile fares~\cite{Chen2015}. For customers, price fluctuations are a source of worry due to the uncertainty of future price evolution. This situation has opened the possibility for new businesses, such as travel meta-search engines or online travel agencies, providing decision-making tools to customers~\cite{Wohlfarth2011}. In this context, accurate price forecasting over time is a highly desired feature, as it allows customers to take informed decisions about purchases, and companies to build and offer attractive tour packages, while maximizing their revenue margin.
The exponential growth of computer power along with the availability of large datasets has led to rapid progress in the machine learning (ML) field over the last decades. This has allowed the travel industry to benefit from the powerful ML machinery to develop and deploy accurate models for price time-series forecasting. Development and deployment, however, only represent the first steps of an ML system's life cycle. Currently, it is the monitoring, maintenance and improvement of complex production-deployed ML systems which carry most of the costs and difficulties over time~\cite{Sculley2015,r2019overton}. Model monitoring refers to the task of constantly tracking a model's performance to determine when it degrades, becoming obsolete. Once a degradation in performance is detected, model maintenance and improvement take place to update the deployed model by rebuilding it, recalibrating it or, more generally, by doing model selection.
While it is relatively easy and fast to develop ML-based methods for accurate price forecasting of different travel products, maintaining a good performance over time faces multiple challenges. Firstly, price forecasting of travel products involves the analysis of multiple time-series which are modeled independently, i.e.~a model per series rather than a single model for all. According to the 2019 World Air Transport Statistics report, almost 22K city pairs are directly connected by airlines through regular services~\cite{international2019world}. As each city pair is linked to a time-series, it is impossible to manually monitor the performance of every associated forecasting model. For scalability purposes, it is necessary to develop methods that can continuously and automatically monitor and maintain every deployed model. Secondly, time-series comprise time-evolving complex patterns, non-stationarities or, more generally, distribution changes over time, making forecasting models more prone to deteriorate over time~\cite{aiolfi_persistence_2006}. Poor estimations of a model's degrading performance can lead to business losses, if detected too late, or to unnecessary model updates incurring system maintenance costs~\cite{Sculley2015}, if detected too early. Efficient and timely ways to model monitoring are therefore key to continuously accurate in-production forecasts.
Finally, a model's degrading performance also implies that the model becomes obsolete. As a result, a specific model might not always be the right choice for a given series. Since time-series forecasting can be addressed through a large set of different approaches, the task of choosing the most suitable forecasting method requires finding systematic ways to carry out model selection efficiently. One of the most common ways to achieve all of this is cross-validation~\cite{arlot2010survey}. However, this approach is only valid at development and cannot be used to monitor and maintain models in-production due to the absence of ground truth data.
In this work we introduce a data-driven framework to continuously monitor and maintain time-series forecasting models' performance in-production, i.e.~in the absence of ground truth, to guarantee continuously accurate performance of travel product price forecasting models.
Under a supervised learning approach, we predict the forecasting error of time-series forecasting models over time. We hypothesize that the estimated forecasting error represents a surrogate measure of the model's future performance. As such, we achieve continuous monitoring by using the predicted forecasting error as a measure to detect degrading performance. Simultaneously, the predicted forecasting error enables model maintenance by allowing to rank multiple models based on their predicted performance, i.e. model comparison, and then select the one with the lowest predicted error measure, i.e. model selection. We refer to it as a model monitoring and model selection framework.
The remaining of this paper is organized as follows.
Section~\ref{sect_rw} discusses related work.
Section~\ref{sect_background} reviews the fundamentals of time-series forecasting and performance assessment.
Section~\ref{sect_methodology} describes the proposed model monitoring and maintenance framework.
Section~\ref{sect_experiments} describes our datasets and presents the experimental setup. Experiments and results are discussed in section~\ref{sec:results}. Finally, in section~\ref{sect_conclusions} we summarize our work and discuss key findings.
\section{Related Work} \label{sect_rw}
\textbf{Maintainable industrial ML systems.} Recent works from tech companies~\cite{Baylor2017,lin2012large,r2019overton} have discussed their strategies to deal with some of the so-called \textit{technical debts}~\cite{Sculley2015} in which ML systems can incur when in production. These works mainly focus on the hardware and software infrastructure used to mitigate these \textit{debts}. Less emphasis is given to the specific methods put in place.
\\
\textbf{Concept drift.} The phenomenon of time-evolving data patterns is known as concept drift. Since time-series are not strictly stationary, it is a common problem in time-series forecasting, usually addressed through regular model updates. Most works have focused on its detection, which corresponds to what we denote model monitoring, without performing model selection, as they are typically limited to a single model~\cite{Ferreira:2014:DCT:2542820.2562373,10.1007/978-3-642-34166-3_40}. The exception to this is the work of \cite{saadallah2019drift,Saadallah2020}, where a weighted sliding-window is used to combine the forecasts of multiple candidate models into a single value.
\\
\textbf{Performance assessment without ground truth.}
An alternative to cross-validation is represented by information criteria. The rationale consists in quantifying the best trade-off between models' goodness of fit and simplicity.
Information criteria are mostly used to compare nested models, whereas the comparison of different models requires to compute likelihoods on the same data.
Being fully data-driven, our framework avoids any constraint regarding the candidate models, leading to a more general way to perform model selection.
Specifically for time-series forecasting, Wagenmakers et al.~\cite{wagenmakers2006accumulative} achieve performance assessment in the absence of ground truth using a concept similar to ours. They estimate the forecasting error of a new single data point by adding previously estimated forecast errors, obtained from already observed data points. The use of the previous errors makes it sensitive to unexpected outlier behaviors of the time-series.
\\
\textbf{Meta-learning.} Meta-learning has been proposed as a way to automatically perform model selection. Its performance has been recently demonstrated in the context of time-series forecasting. Both~\cite{ALI20189,RePEc:msh:ebswps:2018-6} formulate the problem as a supervised learning one, where the meta-learner receives a time-series and outputs the ``best'' forecasting model. The authors of~\cite{cerqueira2017arbitrated} share our idea that forecasting performance decays in time; thus, they train a meta-learner to model the error incurred by the base models at each prediction step as a function of the time-series features. Differently from~\cite{RePEc:msh:ebswps:2018-6}, our approach does not seek to select a different model family for each time-series, and it avoids model selection at each time step~\cite{cerqueira2017arbitrated}, since these two represent expensive overheads for in-production maintenance. Instead, we maintain a fast forecasting procedure and select the best model for a given time period in the future, whose length can be relatively long (6--9 months, for instance).
\section{Time-series forecasting and performance measures} \label{sect_background}
A univariate time-series is a series of data points
$\smash{\mathcal{T} = \{ y_1, \ldots, y_T \}}$,
each one being an observation of a process measured at a specific time $t$.
Univariate time-series contain a single variable at each time instant, while multivariate time-series record more than one variable at a time. Our application is concerned with univariate time-series, which are recorded at discrete points in time, e.g., monthly, daily, hourly. However, extension to the multivariate setting is straightforward.
Time-series forecasting is the task consisting in the use of these past observations (or a subset thereof) to predict future values $\mathcal{T}_h = \{ \hat{y}_{T+1}, \ldots, \hat{y}_{T+h}\}$, with $h$ indicating the forecasting horizon. The number of well-established methods to perform time-series forecasting is quite large. Methods range from classical statistical ones, such as Autoregressive Moving Average (ARMA) and Exponential smoothing, to more recent machine learning models which have shown outstanding performance in different tasks, including time-series forecasting.
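As a concrete example, producing $\mathcal{T}_h$ with simple exponential smoothing takes a few lines (a sketch assuming the statsmodels library; the series values and the smoothing level are illustrative):
\begin{verbatim}
import numpy as np
from statsmodels.tsa.holtwinters import SimpleExpSmoothing

y = np.array([112.0, 118.0, 132.0, 129.0, 121.0, 135.0, 148.0, 148.0])
h = 3
model = SimpleExpSmoothing(y).fit(smoothing_level=0.4, optimized=False)
forecasts = model.forecast(h)   # T_h = {y_hat_{T+1}, ..., y_hat_{T+h}}
print(forecasts)
\end{verbatim}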
The performance assessment of forecasting methods is commonly done using error measures. Despite decades of research on the topic, there is still not an unanimous consensus on the best error measure to use among the multiple available options~\cite{HYNDMAN2006679}.
Among the most used ones, we find Symmetric Mean Absolute Percentage Error (sMAPE) and Mean Absolute Scaled Error (MASE). These two have been adopted in recent time-series forecasting competitions~\cite{article}.
\section{Monitoring and model selection framework} \label{sect_methodology}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\textwidth]{scheme.PNG}
\caption{Illustration of the proposed method. $\mathcal{X}{}$ and $\mathcal{X^*}$ contain multiple time-series, each of these composed of $T_i$ observations (green) and $h$ forecasts (red) estimated by a \textit{monitored model}, $g$. $\mathbf{e}_g$ represents the forecasting performance of the \textit{monitored model}. It is computed using the true values (yellow). A \textit{monitoring model} is trained to learn the function $f$ mapping $\mathcal{X}$ to $\mathbf{e}_g$. With the learned $f$, the \textit{monitoring model} is able to predict $\mathbf{e}^{*}_{g}$, the predicted forecasting performance of the \textit{monitored model} given $\mathcal{X^*}$.
} \label{fig:fig_supervised}
\end{center}
\end{figure}
Let us denote
$\mathcal{X}=\smash{\{ \mathcal{T}^{(i)}, \mathcal{T}_{h}^{(i)} \}_{i=1}^{N}}$
the input training set. A given input $i$ is formed by the observed time-series $\smash{\mathcal{T}^{(i)}}$ and $h$ forecasted values, $\smash{\mathcal{T}_{h}^{(i)}}$. The values in $\smash{\mathcal{T}_{h}^{(i)}}$ are obtained by a given forecasting model which we hereby denote a \textit{monitored model}, $g$. Let $\mathbf{e}_{g}=\smash{ \{ e^{(i)}_g \}_{i=1}^{N}}$ be a collection of $N$ performance measures assessing the accuracy of the forecasts $\smash{\mathcal{T}_{h}^{(i)}}$ estimated by $g$. A given performance measure $\smash{e^{(i)}_g}$ is obtained by comparing the forecasts $\smash{\mathcal{T}_{h}^{(i)}}$ from $g$ to the true values.
Let us define a \textit{monitoring model} as a model that is trained to learn a function $f$ mapping the input time-series $\mathcal{X}$ to the target $\smash{\mathbf{e}_g}$.
Given a new set of time-series
$\mathcal{X^*}=\smash{\{ \mathcal{T^*}^{(i)}, \mathcal{T^*}_{h}^{(i)} \}_{i=1}^{N^*}}$,
formed by a time-series of observations $\smash{\mathcal{T^*}^{(i)}}$, $\smash{|\mathcal{T^*}^{(i)}|=T^*_i}$, and $h$ forecasts $\smash{\mathcal{T^*}_{h}^{(i)}}$ obtained by $g$, the learned \textit{monitoring model} predicts $\mathbf{e}^{*}_g$, i.e. the predicted performance measure of $g$ given $\mathcal{X^*}$ (Figure~\ref{fig:fig_supervised}).
The predicted performance measures $\smash{\mathbf{e}^{*}_g}$ represent a surrogate measure of the performance of a given $g$ within the forecasting horizon $h$. As such, they are used for two tasks: model monitoring and selection. Model monitoring is achieved by using $\smash{\mathbf{e}^{*}_g}$ as an alert signal. If the estimated performance measure of the \textit{monitored model} is poor, this means the model has become stale. To achieve model selection, the $\smash{\mathbf{e^*}}$ are used to rank multiple \textit{monitored models}{} and choose the one with the best performance.
If the two tasks are executed in a continuous fashion over time, it is possible to guarantee accurate forecasts in an automated way.
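Schematically, the continuous monitoring and selection loop reduces to the following (a sketch of ours; \texttt{monitoring\_models} maps each monitored model $g$ to its trained error predictor $f$, and \texttt{alert\_threshold} is an application-specific choice, both hypothetical names):
\begin{verbatim}
def monitor_and_select(x_star, monitoring_models, alert_threshold):
    """Predict e*_g for one new series, raise staleness alerts,
    and pick the monitored model with the lowest predicted error."""
    predicted = {g: float(f.predict(x_star))   # scalar predicted error
                 for g, f in monitoring_models.items()}
    alerts = {g: e for g, e in predicted.items() if e > alert_threshold}
    best = min(predicted, key=predicted.get)
    return best, alerts
\end{verbatim}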
In the following, we describe the performance measure $\textbf{e}$ that we use in our framework, as well as the \textit{monitoring} and \textit{monitored models} that we chose to validate our hypotheses.
\subsection{Performance measure}
As previously discussed, performance accuracy of time-series forecasts is measured using error metrics. In this framework, we use the sMAPE. It is defined as:
\begin{equation} \label{sMAPE_Eq}
\textrm{sMAPE} = \frac{1}{h}\sum_{t=1}^{h}2\frac{\left | y_{t} - \hat{y}_{t} \right |}{\left |y_{t} \right |+\left | \hat{y}_{t} \right |} \text{,}
\end{equation}
where $h$ is the number of forecasts (i.e. forecasting horizon), $y$ is the true value and $\hat{y}$ is the forecast.
In the literature, there are multiple definitions of the sMAPE. We choose the one introduced in~\cite{chen2004assessing} because it is bounded between 0 and 2; specifically, it has a maximum value of 2 when either $y$ or $\hat{y}$ is zero, and it is zero when the two values are identical. The sMAPE has two important drawbacks: it is undefined when both $y$ and $\hat{y}$ are zero, and it can be numerically unstable when the denominator in Eq.~\ref{sMAPE_Eq} is close to zero. In the context of our application, this is not a problem since it is unlikely to have prices with value zero or very close to it.
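A direct implementation of Eq.~(\ref{sMAPE_Eq}) reads as follows (our sketch; inputs are NumPy arrays holding the $h$ true values and forecasts):
\begin{verbatim}
import numpy as np

def smape(y, y_hat):
    """Bounded sMAPE in [0, 2], following Eq. (1)."""
    return np.mean(2.0 * np.abs(y - y_hat) / (np.abs(y) + np.abs(y_hat)))

print(smape(np.array([100.0, 120.0]), np.array([110.0, 120.0])))  # ~0.048
\end{verbatim}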
\subsection{Monitoring models} \label{sect_models}
The formulation of our framework is generic in the sense that any supervised technique that can solve regression problems can be used as a \textit{monitoring model}. In this work, we decided to focus on the latest advances in deep learning. We consider four alternative \textit{monitoring models}{}: Long Short-Term Memory (LSTM) networks, Convolutional Neural Networks (CNNs), Bayesian CNNs and Gaussian processes (GPs). The latter two models differ from the former ones in that they also provide uncertainties around the predictions. This can enrich the output provided by the monitoring framework, in that whenever an alert is issued because of poor performance, it is equipped with information about its reliability. This section illustrates the basic ideas of each of the selected \textit{monitoring models}{}.
\\
\\
\textbf{Long Short-Term Memory networks.}
LSTM~\cite{hochreiter1997long} networks are a type of Recurrent Neural Networks (RNNs) that solve the issue of the vanishing gradient problem~\cite{bengio1994learning} present in the original RNN formulation. They achieve this by introducing a cell state into each hidden unit, which memorizes information. Like RNNs in general, they are a well-established architecture
for modeling sequential data. By construction, LSTMs can handle sequences of varying length, with no need for extra processing like padding. This is useful in our application, where time-series in the datasets have different lengths.
\noindent
\textbf{Convolutional Neural Networks.}
CNNs~\cite{lecun1998gradient} are a particular class of deep neural networks where the weights (filters) are designed to promote local information to propagate from the input to the output at increasing levels of granularity.
We use the original LeNet \cite{LeCun:1999:ORG:646469.691875} architecture, as it obtains generally good results in image recognition problems, while being considerably faster to train with respect to more modern architectures.
CNNs are not originally conceived to work with time-series data. We adapt the architecture to work with time-series by using 1D convolutional filters.
Unlike RNNs, this model does not support inputs of variable size, so we resort to padding: where necessary, we append zeros to a time-series to make all series uniform in length. We denote this model LeNet.
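The zero-padding step itself is straightforward (a NumPy sketch of ours; it right-pads each 1D series to a common length):
\begin{verbatim}
import numpy as np

def pad_to_length(series_list, length):
    """Right-pad each 1D series with zeros to a shared length."""
    return np.stack([np.pad(s, (0, length - len(s))) for s in series_list])
\end{verbatim}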
\noindent
\textbf{Bayesian Convolutional Neural Networks.}
Bayesian CNNs~\cite{gal2016dropout} represent the probabilistic version of CNNs, used in applications where quantification of the uncertainty in predictions is needed.
Network parameters are assigned a prior distribution and then inferred using Bayesian inference techniques.
Due to the intractability of the problem, the use of approximations is required.
Here we choose Monte Carlo Dropout~\cite{gal2016dropout} as a practical way to carry out approximate inference in Bayesian CNNs. By applying dropout at test time, we are able to sample from an approximate posterior distribution over the network weights. We use this technique on the LeNet CNN with 1D filters to produce probabilistic outputs. We denote this model Bayes-LeNet.
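Monte Carlo Dropout only requires keeping dropout active at prediction time. A TF2/Keras-style sketch (with a hypothetical \texttt{model} containing dropout layers; 100 samples, as in our setup):
\begin{verbatim}
import numpy as np

def mc_dropout_predict(model, x, n_samples=100):
    """Approximate the posterior predictive by sampling with dropout on."""
    samples = np.stack([model(x, training=True).numpy()
                        for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)  # mean, uncertainty
\end{verbatim}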
\noindent
\textbf{Gaussian processes.}
GPs~\cite{Rasmussen:2005:GPM:1162254} form the basis of probabilistic nonparametric models. Given a supervised learning problem, GPs consider an infinite set of functions mapping input to output. These functions are defined as random variables with a joint Gaussian distribution, specified by a mean function and a covariance function, the latter encoding properties of the functions with respect to the input.
One of the strengths of GP models is the ability to characterize uncertainty regardless of the size of the data. Similarly to CNNs, in this model input sequences must have the same length, so we resort to padding.
\subsection{Monitored models}
Similar to \textit{monitoring models}{}, given the generic nature of the proposed framework, there is no constraint on the type of \textit{monitored models}{} that can be used. Any time-series forecasting method can be monitored. For this proof of concept, we consider six different \textit{monitored models}{}. We select five of them from the ten benchmarks provided in the M4 competition~\cite{article}, a recent time-series forecasting challenge. These are: Simple Exponential Smoothing (\texttt{ses}), Holt's Exponential Smoothing (\texttt{holt}), Dampen Exponential Smoothing (\texttt{damp}), Theta (\texttt{theta}) and a combination of \texttt{ses}{} - \texttt{holt}{} - \texttt{damp}{} (\texttt{comb}).
Besides these five methods, we included a simple Random Forest (\texttt{rf}), in order to enrich the benchmark with a machine learning-based model. We refer the reader to \cite{breiman2001random,article} for further details on each of these approaches.
\section{Experimental setup} \label{sect_experiments}
This section presents the data, provides details about the implementation of our methods to ease reproducibility and concludes by describing the evaluation protocol carried during the experiments.
\subsection{Data}
\textbf{Flights and hotels datasets.} We focus on two travel products: direct flights between city pairs and hotels. Our data is an extract of prices for these two travel products obtained from the Amadeus for Developers API~\footnote[1]{\url{https://developers.amadeus.com/}}, an online web-service which enables access to travel-related data through several APIs. It was collected over a period of two years and one month. Table~\ref{tab_datasets} presents some descriptive features of the datasets.
Using the service's Flight Low-fare Search API, we collected daily data for one-way flight prices of the top 15K most popular city pairs worldwide. The collection was done in two stages. A first batch, corresponding to the top 1.4K pairs (\textsc{flights}), was gathered for the whole collection period. The second batch, corresponding to the remaining pairs (\textsc{flights-ext}), was collected only over the second year. For hotels, we used the Hotel API to collect daily hotel prices for a two-night stay at every destination city contained in the top city pairs used for flight search. These represent 3.2K different time-series.
Both APIs provide information about the available offers for flights/hotels, that meet the search criteria (origin-destination and date, for flights; city, date and number of nights, fixed to 2, for hotels) at the time of search. As such, it is possible to have multiple offers (flights or hotel rooms) for a given search criteria. When multiple offers were proposed, we averaged the different prices to have a daily average flight price for a given city pair, in the case of flights, or daily average hotel price for a given city, in the case of hotels. In the same way, it is possible to have no offers for a given search criteria. Days with no available offers were reported as missing data. Lack of offers can be caused by sold outs, specific flight schedules (e.g. no daily flights for a city pair) or seasonal patterns (e.g. flights for a part of the year or seasonal hotel closures). More rarely, they could even be due to a failure in the query sent to the API. As a result, the number of available observations is smaller than the length of the collection period (see Table~\ref{tab_datasets}).
\\
\\
\noindent
\textbf{Public benchmarks.} In addition to travel products data, we decided to include data coming from publicly available benchmarks. Benchmark data are typically curated and avoid problems present in real data, such as those previously discussed regarding missing data, allowing for an objective assessment and a more controlled setup for experimentation. We included two sets from the M4 time-series forecasting competition~\cite{article} dataset, \textsc{yearly} and \textsc{weekly}. Table~\ref{tab_datasets} presents statistics on the number of time-series and the available number of observations per time-series for these two datasets. Here, the number of available observations is equivalent to the time-series length, as no time-series contains missing values.
\begin{table}[t]
\centering
\caption{Information about number of time-series, and minimum (min-obs), maximum (max-obs), mean (mean-obs) and standard deviation (std-obs) of the available number of time-series observations per dataset.}
\label{tab_datasets}
\setlength{\tabcolsep}{0.23em}
\begin{tabular}{cccccc}
\hline
\textbf{Name} & \multicolumn{1}{l}{\textbf{\# time-series} } & \multicolumn{1}{l}{\textbf{min-obs} } & \multicolumn{1}{l}{\textbf{max-obs} } & \multicolumn{1}{l}{\textbf{mean-obs}} & \multicolumn{1}{l}{\textbf{std-obs} } \\
\hline
\textsc{flights} & 1,415 & 431 & 745 & 734 & 23 \\
\textsc{flights-ext} & 13,810 & 50 & 347 & 346 & 13 \\
\textsc{hotels} & 3,207 & 1 & 658 & 368 & 128 \\
\textsc{yearly} & 23,000 & 13 & 835 & 31 & 25 \\
\textsc{weekly} & 359 & 80 & 2,597 & 1,022 & 706 \\
\hline
\end{tabular}
\end{table}
\subsection{Implementation}
The LSTM network was implemented in Tensorflow. It is composed of one hidden layer with 32 hidden nodes. It is a dynamic LSTM, in that it allows the input sequences to have variable lengths by dynamically creating the graph during execution.
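A roughly equivalent sketch in modern TensorFlow/Keras is shown below; the hidden size follows the text, while the masking-based handling of variable-length (zero-padded) inputs, the output head and the loss are assumptions rather than a description of our exact code:
\begin{verbatim}
import tensorflow as tf

# One hidden LSTM layer with 32 units predicting the forecasting error.
model = tf.keras.Sequential([
    tf.keras.layers.Masking(mask_value=0.0, input_shape=(None, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),  # predicted forecasting error (sMAPE)
])
model.compile(optimizer="adam", loss="mse")
\end{verbatim}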
The two CNN-based \textit{monitoring models}{} use the LeNet architecture. We modified both convolutional and pooling layers with 1D filters, given that the input of the model consists of one-dimensional sequences. We added dropout layers to limit overfitting. In the Bayesian CNN, we applied a dropout rate of 0.5, also at testing time, to obtain 100 Monte Carlo samples as an approximation of the true posterior distribution.
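The Monte Carlo dropout step can be sketched as follows; the dropout rate (0.5), its use at test time and the 100 samples follow the text, while the 1D LeNet-style layer sizes are assumptions:
\begin{verbatim}
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(100, 1))
x = tf.keras.layers.Conv1D(6, 5, activation="relu")(inputs)
x = tf.keras.layers.MaxPooling1D(2)(x)
x = tf.keras.layers.Conv1D(16, 5, activation="relu")(x)
x = tf.keras.layers.MaxPooling1D(2)(x)
x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dropout(0.5)(x, training=True)  # active at test time
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)

x_test = np.random.rand(8, 100, 1).astype("float32")
# 100 stochastic forward passes approximate the posterior predictive.
samples = np.stack([model(x_test).numpy() for _ in range(100)])
mean, std = samples.mean(axis=0), samples.std(axis=0)
\end{verbatim}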
The GP model used the implementation of Sparse GP Regression from the GPy library\footnote[2]{\url{http://github.com/SheffieldML/GPy}}. The inducing points~\cite{titsias2009variational} were initialized with $K$-means and were then fixed during optimization. We used a variable number of inducing points depending on the size of the input, and an RBF kernel with Automatic Relevance Determination (ARD). In all experiments we used 75\% of the data for training and 25\% for testing, and the Adam optimizer with default learning rate~\cite{DBLP:journals/corr/KingmaB14}. Only for the \textsc{flights-ext} dataset did we use mini-batches of size $128$ to speed up the training.
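A minimal sketch of this setup with GPy is shown below; the data shapes and the number of inducing points are assumptions for illustration:
\begin{verbatim}
import numpy as np
import GPy
from sklearn.cluster import KMeans

# Assumed shapes: 500 padded series of length 10 and their sMAPEs.
X = np.random.rand(500, 10)
Y = np.random.rand(500, 1)

# RBF kernel with Automatic Relevance Determination.
kern = GPy.kern.RBF(input_dim=X.shape[1], ARD=True)

# Inducing points initialized with k-means and fixed during optimization.
Z = KMeans(n_clusters=50, n_init=10).fit(X).cluster_centers_
m = GPy.models.SparseGPRegression(X, Y, kernel=kern, Z=Z)
m.inducing_inputs.fix()
m.optimize(messages=False)

mu, var = m.predict(X[:5])  # predictive mean and variance
\end{verbatim}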
For the \textit{monitored models}{}, we used the implementation available from the M4 competition benchmark Github repository\footnote[3]{\url{https://github.com/M4Competition/M4-methods}} and
we used the Python sklearn package~\cite{scikit-learn} implementation of Random Forest. All code has been made publicly available\footnote[4]{\url{https://github.com/robustml-eurecom/model_monitoring_selection}}.
\subsection{Evaluation protocol}
For flight and hotel data we set $h=\{90,180\}$, which means we are predicting the price for $h$ days ahead. These are two commonly used values in travel, representing 3 and 6 months ahead of the planned trip, so it is important to have accurate predictions over those horizons. For the M4 competition datasets, we use the horizon given by the challenge organizers: $h=6$ for \textsc{yearly} and $h=13$ for \textsc{weekly}.
For each dataset, we reserve the first $T_i$ data points of the \textit{i-th} time-series, where $T_i$ depends on the length of the time-series, as input of the \textit{monitored models}{} to obtain $h$ forecasts. Missing values found in the flights or hotels data were replaced with the nearest non-missing value in the past. We build $\mathcal{X}$ and $\mathcal{X^*}$ by taking 75\% and 25\% of the total number of time-series, respectively. We then compute the forecasting errors $\mathbf{e}$ using the sMAPE in Eq.~\ref{sMAPE_Eq} for the training set $\mathcal{X}$. Finally, we predict the performance measure $\mathbf{e^*}$ for the time-series in $\mathcal{X^*}$, using the four \textit{monitoring models}{}.
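For reference, a common form of the sMAPE can be computed as follows; the exact variant used in this work is the one defined in Eq.~\ref{sMAPE_Eq}, so this is only a sketch:
\begin{verbatim}
import numpy as np

def smape(y_true, y_pred):
    # Symmetric mean absolute percentage error over the h forecast steps.
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.mean(2.0 * np.abs(y_pred - y_true)
                   / (np.abs(y_true) + np.abs(y_pred)))
\end{verbatim}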
We compare our model monitoring and selection framework with the standard cross-validation method, which we here denote \texttt{baseline}, where a model's estimated performance is obtained ``offline'' at training time with the available data. Specifically, given $T$ observations, we use the last $h$ observations as validation set to evaluate the model. This implies reducing the number of observations available to train the forecasting models, which can be problematic when either $T$ is small or $h$ is large.
\section{Experiments and Results} \label{sec:results}
We first study the proposed framework's ability to achieve model monitoring (Sec~\ref{sec:monitorperf}). Then, we demonstrate how the predicted forecasting errors can be used to carry out model selection and how it compares with state-of-the-art methods addressing the same task (Sec~\ref{sec:selperf}). In Sec~\ref{sec:allperf}, we illustrate the performance of the joint model monitoring and selection framework in our target application.
\subsection{Model monitoring performance}\label{sec:monitorperf}
We evaluate whether the \textit{monitoring models}{}' predicted sMAPEs can be used for model monitoring by estimating if the predicted measure represents a good estimate of a \textit{monitored model}'s future forecasting performance. We assess the quality of the predicted forecasting errors by estimating the root mean squared error (RMSE) between the predicted sMAPEs and the true sMAPEs, for every \textit{monitored model}. The true sMAPE is obtained using the \textit{monitored model}'s predictions and the time-series' observations through Eq.~\ref{sMAPE_Eq}. As a reference, we report the \texttt{baseline}{} RMSE, which is obtained by comparing the estimated sMAPE at training with the observed values at testing. Figure~\ref{fig:rmse_summary} left summarizes the obtained results on all datasets.
\begin{wrapfigure}{L}{0.6\textwidth}
\centering
\includegraphics[width=0.59\textwidth]{RSME_all_log2.png}
\caption{RMSE between predicted and measured forecasting error (sMAPE) on all datasets (log scale). The reported baseline RMSE is obtained by comparing the estimated sMAPE at training with the observed values at testing.}
\label{fig:rmse_summary}
\end{wrapfigure}
The overall average error incurred by the \textit{monitoring models}{} is low. This suggests that the forecasting error predictions are accurate, meaning that it is reliable to carry out model monitoring. Moreover, the \textit{monitoring models}{} consistently perform better than standard cross-validation when estimating the future performance of the forecasting \textit{monitored models}{}. There is an exception when the \textit{monitored model} is the Random Forest (\texttt{rf}{}). In this case, the \texttt{baseline}{} is not the worst performing approach; however, it is still surpassed in performance by both the LSTM and the GP.
Figure~\ref{fig:fig_rmse_results} details the results obtained for flights and hotels time-series. Table~\ref{tab:summary-travel} stratifies the results for travel product time-series in terms of the forecasting horizon. The results show that Bayes-LeNet obtains the lowest RMSEs, whereas the GP follows closely and reports a lower standard deviation.
Overall, our approach outperforms the \texttt{baseline}{} for large forecasting horizons, e.g. $h=180$, while the methods get closer as the forecasting horizon decreases. This is consistent with our hypothesis that data properties change over time. Using a validation set composed of time points close to the unseen data gives consistent information about the model's performance, because the two sets of data (validation and unseen data) have similar properties. However, increasing $h$ has the effect of pushing the validation time points away from the unseen data. In this case, it is better to rely on the forecast error prediction rather than on an error measure obtained during training.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{fig2_oneline}
\caption{RMSE between predicted and measured forecasting error (sMAPE). From left to right: \textsc{flights} and \textsc{flights-ext} (top) with 1) $h=90$ and 2) $h=180$; hotels with 3) $h=90$ and 4) $h=180$.
} \label{fig:fig_rmse_results}
\end{figure}
\begin{table}[t]
\centering
\caption{RMSE between predicted and true sMAPEs for flights and hotel time-series.}\label{tab:summary-travel}
\setlength{\tabcolsep}{0.23em}
\begin{tabular}{|l|r|r|r|r|}
\hline
\multicolumn{1}{|c|}{\textbf{Monitoring}} & \multicolumn{2}{c|}{\textbf{Flights}} & \multicolumn{2}{c|}{\textbf{Hotels}}\\
\cline{2-5}
\multicolumn{1}{|c|}{\textbf{model}}& \multicolumn{1}{c|}{$h=90$} & \multicolumn{1}{c|}{$h=180$} & \multicolumn{1}{c|}{$h=90$} & \multicolumn{1}{c|}{$h=180$}\\
\hline
LSTM & 0.116 $\pm$ 0.017 & 0.151 $\pm$ 0.031& 0.193 $\pm$ 0.021 & 0.182 $\pm$ 0.039\\
LeNet & 0.117 $\pm$ 0.017& 0.155 $\pm$ 0.031 & 0.209 $\pm$ 0.039& 0.224 $\pm$ 0.062\\
Bayes-LeNet &\textbf{ 0.084 $\pm$ 0.017} & \textbf{0.100 $\pm$ 0.035} & \textbf{0.135 $\pm$ 0.022} &\textbf{ 0.148 $\pm$ 0.044}\\
GP & 0.136 $\pm$ 0.007 & 0.126 $\pm$ 0.028 & 0.164 $\pm$ 0.014 & 0.165 $\pm$ 0.036\\
\hline
\texttt{baseline}{} & 0.119 $\pm$ 0.006 & 0.604 $\pm$ 0.328 & 0.190 $\pm$ 0.020 & 0.609 $\pm$ 0.302\\
\hline
\end{tabular}
\end{table}
\subsection{Model selection performance}\label{sec:selperf}
In this experiment,
we assess the capacity of the proposed method to assist model selection in the absence of ground truth. \textit{Monitored models} are ranked by estimating the average predicted sMAPE over a given time-series and ordering the resulting values in ascending order. In this way, we obtain a list of \textit{monitored models}{} from the best to the worst. The best performing \textit{monitored model} is selected.
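This ranking step can be sketched as follows (the array layout and function names are assumptions):
\begin{verbatim}
import numpy as np

def rank_models(pred_smape, model_names):
    # pred_smape: assumed (n_models, h) array of per-step predictions
    # for one time-series; average and sort ascending (best first).
    avg = pred_smape.mean(axis=1)
    order = np.argsort(avg)
    return [(model_names[i], avg[i]) for i in order]

ranking = rank_models(np.random.rand(6, 90),
                      ["ses", "holt", "damp", "theta", "comb", "rf"])
best_model = ranking[0][0]
\end{verbatim}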
We compare the ground truth ranking with the one obtained by each of the \textit{monitoring models}{} and the \texttt{baseline}{}. We apply a Wilcoxon test \cite{wilcoxon1945individual} to the ranking results to verify whether there are significant differences between the ranked \textit{monitored models}. Table \ref{rank_table} presents the results obtained for hotels and flights.
\begin{table}[t]
\centering
\caption{Comparison between true and predicted model rankings, in ascending order of sMAPE. Underlined values indicate pairs of forecasting models not significantly different, according to Wilcoxon test.}
\resizebox{0.99\textwidth}{!}{
\label{rank_table}
\begin{tabular}{|clr|lr|lr|lr|lr|lr|}
\hline
\multicolumn{1}{|l}{} & \multicolumn{2}{c|}{\multirow{2}{*}{ \textbf{Ground Truth} }} & \multicolumn{8}{c|}{\textbf{Monitoring models} } & \multicolumn{2}{c|}{\multirow{2}{*}{ \textbf{Baseline} }} \\
\cline{4-11}
& \multicolumn{2}{c|}{} & \multicolumn{2}{c|}{\textbf{LSTM} } & \multicolumn{2}{c|}{\textbf{LeNet} } &
\multicolumn{2}{c|}{\textbf{Bayes-LeNet} } & \multicolumn{2}{c|}{\textbf{GPs} } & \multicolumn{2}{c|}{} \\
\hline
\multicolumn{13}{c}{\textsc{hotels} - $h$ = 180 } \\
\hline
& \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{l|}{\textbf{sMAPE} } \\
\hline
\textbf{1} & \texttt{damp} & 0.244 (0.153) & \texttt{ses} & 0.208 (0.015) & \texttt{ses} & 0.208 (0.032) & \texttt{ses} & 0.212 (0.087) & \texttt{damp} & \underline{0.230} (0.119) & \texttt{ses} & 0.326 (0.202) \\
\textbf{2} & \texttt{ses} & 0.246 (0.164) & \texttt{damp} & 0.220 (0.033) & \texttt{damp} & 0.211 (0.056) & \texttt{damp} & 0.224 (0.130) & \texttt{ses} & \underline{0.231} (0.121) & \texttt{rf} & \underline{0.413} (0.333) \\
\textbf{3} & \texttt{theta} & 0.269 (0.217) & \texttt{theta} & 0.233 (0.059) & \texttt{theta} & \underline{0.231} (0.024) & \texttt{comb} & \underline{0.249} (0.166) & \texttt{comb} & 0.251 (0.149) & \texttt{damp} & \underline{0.462} (0.391) \\
\textbf{4} & \texttt{comb} & 0.270 (0.207) & \texttt{comb} & 0.234 (0.057) & \texttt{rf} & \underline{0.236} (0.047) & \texttt{theta} & \underline{0.268} (0.234) & \texttt{theta} & 0.252 (0.160) & \texttt{comb} & 0.746 (0.569) \\
\textbf{5} & \texttt{rf} & 0.316 (0.300) & \texttt{holt} & 0.280 (0.124) & \texttt{comb} & 0.278 (0.145) & \texttt{rf} & 0.324 (0.329) & \texttt{rf} & 0.291 (0.207) & \texttt{theta} & 0.938 (0.620) \\
\textbf{6} & \texttt{holt} & 0.325 (0.277) & \texttt{rf} & 0.292 (0.210) & \texttt{holt} & 0.298 (0.189) & \texttt{holt} & 0.325 (0.162) & \texttt{holt} & 0.299 (0.190) & \texttt{holt} & 1.047 (0.660) \\
\hline
\multicolumn{13}{c}{\textsc{hotels} - $h$ = 90 } \\
\hline
& \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{l|}{\textbf{sMAPE} } \\
\hline
\textbf{1} & \texttt{damp} & \underline{0.242} (0.175) & \texttt{ses} & 0.203 (0.022) & \texttt{ses} & 0.217 (0.065) & \texttt{ses} & 0.238 (0.088) & \texttt{damp} & 0.221 (0.137) & \texttt{comb} & 0.237 (0.166) \\
\textbf{2} & \texttt{ses} & \underline{0.243} (0.174) & \texttt{damp} & 0.218 (0.026) & \texttt{damp} & 0.223 (0.073) & \texttt{damp} & 0.239 (0.122) & \texttt{comb} & 0.238 (0.155) & \texttt{ses} & \underline{0.239} (0.177) \\
\textbf{3} & \texttt{comb} & \underline{0.253} (0.189) & \texttt{theta} & 0.223 (0.022) & \texttt{theta} & 0.227 (0.063) & \texttt{comb} & 0.259 (0.108) & \texttt{theta} & 0.240 (0.151) & \texttt{damp} & \underline{0.250} (0.194) \\
\textbf{4} & \texttt{theta} & 0.254 (0.190) & \texttt{comb} & 0.224 (0.030) & \texttt{comb} & 0.229 (0.047) & \texttt{theta} & 0.263 (0.132) & \texttt{ses} & 0.244 (0.180) & \texttt{theta} & 0.251 (0.201) \\
\textbf{5} & \texttt{holt} & 0.275 (0.217) & \texttt{holt} & 0.244 (0.052) & \texttt{holt} & 0.252 (0.096) & \texttt{holt} & 0.282 (0.190) & \texttt{holt} & \underline{0.265} (0.185) & \texttt{holt} & 0.277 (0.235) \\
\textbf{6} & \texttt{rf} & 0.293 (0.285) & \texttt{rf} & 0.254 (0.103) & \texttt{rf} & 0.263 (0.059) & \texttt{rf} & 0.298 (0.176) & \texttt{rf} & \underline{0.266} (0.191) & \texttt{rf} & 0.311 (0.296) \\
\hline
\multicolumn{13}{c}{\textsc{flights} - $h$ = 180 } \\
\hline
& \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} &\multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} \\
\hline
\textbf{1} & \texttt{rf} & 0.238 (0.163) & \texttt{rf} & 0.203 (0.007) & \texttt{rf} & 0.199 (0.039) & \texttt{rf} & 0.219 (0.075) & \texttt{rf} & 0.213 (0.108) & \texttt{rf} & 0.259 (0.200) \\
\textbf{2} & \texttt{ses} & 0.247 (0.144) & \texttt{theta} & \underline{0.217} (0.026) & \texttt{ses} & 0.215 (0.012) & \texttt{theta} & 0.220 (0.097) & \texttt{theta} & \underline{0.226} (0.098) & \texttt{damp} & \underline{0.277} (0.150) \\
\textbf{3} & \texttt{theta} & 0.248 (0.175) & \texttt{ses} & \underline{0.218} (0.024) & \texttt{damp} & \underline{0.216} (0.074) & \texttt{ses} & 0.233 (0.098) & \texttt{ses} & \underline{0.227} (0.100) & \texttt{ses} & \underline{0.278} (0.151) \\
\textbf{4} & \texttt{damp} & 0.249 (0.144) & \texttt{damp} & 0.219 (0.022) & \texttt{theta} & \underline{0.217} (0.042) & \texttt{damp} & 0.240 (0.090) & \texttt{damp} & 0.229 (0.098) & \texttt{theta} & 0.281 (0.155) \\
\textbf{5} & \texttt{comb} & 0.250 (0.148) & \texttt{comb} & 0.221 (0.027) & \texttt{comb} & 0.219 (0.016) & \texttt{comb} & 0.241 (0.094) & \texttt{comb} & 0.231 (0.054) & \texttt{comb} & 0.283 (0.160) \\
\textbf{6} & \texttt{holt} & 0.260 (0.162) & \texttt{holt} & 0.223 (0.034) & \texttt{holt} & 0.222 (0.036) & \texttt{holt} & 0.250 (0.088) & \texttt{holt} & 0.238 (0.119) & \texttt{holt} & 0.299 (0.199) \\
\hline
\multicolumn{13}{c}{\textsc{flights} - $h$ = 90 } \\
\hline
& \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} & \multicolumn{1}{c}{\textbf{model}} & \multicolumn{1}{c|}{\textbf{sMAPE}} \\
\hline
\textbf{1} & \texttt{comb} & 0.174 (0.102) & \texttt{comb} & 0.154 (0.081) & \texttt{theta} & 0.160 (0.067) & \texttt{comb} & 0.151 (0.086) & \texttt{damp} & 0.159 (0.073) & \texttt{ses} & \underline{0.187} (0.110) \\
\textbf{2} & \texttt{damp} & 0.175 (0.106) & \texttt{damp} & 0.155 (0.076) & \texttt{comb} & 0.164 (0.085) & \texttt{damp} & 0.161 (0.086) & \texttt{comb} & 0.160 (0.082) & \texttt{theta} & \underline{0.188} (0.109) \\
\textbf{3} & \texttt{theta} & 0.176 (0.105) & \texttt{theta} & \underline{0.157} (0.042) & \texttt{holt} & 0.166 (0.076) & \texttt{theta} & 0.163 (0.087) & \texttt{theta} & \underline{0.162} (0.086) & \texttt{damp} & \underline{0.189} (0.110) \\
\textbf{4} & \texttt{ses} & 0.177 (0.106) & \texttt{ses} & \underline{0.158} (0.028) & \texttt{rf} & 0.174 (0.066) & \texttt{holt} & 0.188 (0.074) & \texttt{ses} & \underline{0.163} (0.094) & \texttt{comb} & 0.190 (0.112) \\
\textbf{5} & \texttt{holt} & 0.179 (0.113) & \texttt{holt} & 0.159 (0.036) & \texttt{ses} & 0.183 (0.087) & \texttt{ses} & 0.212 (0.070) & \texttt{holt} & 0.171 (0.119) & \texttt{holt} & 0.195 (0.118) \\
\textbf{6} & \texttt{rf} & 0.232 (0.150) & \texttt{rf} & 0.200 (0.025) & \texttt{damp} & 0.212 (0.044) & \texttt{rf} & 0.287 (0.083) & \texttt{rf} & 0.210 (0.094) & \texttt{rf} & 0.207 (0.137) \\
\hline
\end{tabular}
}
\end{table}
Overall, the obtained rankings are consistent with the ground truth, demonstrating the ability of the method to carry out model selection by identifying the model with the lowest error measure. Moreover, comparing our approach with the \texttt{baseline}{}, we find that our framework largely outperforms the latter, in that the ranking resulting from the \texttt{baseline}{} is very different from the true one. Even in predictions with a small forecasting horizon ($h=90$), the \texttt{baseline}{}'s ranking performance remains sub-optimal.
Looking at the four \textit{monitoring models}{}, we find that they behave differently depending on the dataset. Specifically, the GP turns out to be slightly more reliable than Bayes-LeNet, as the latter in some cases swaps the first and second models of the ranking. The LSTM's performance is close to that of the two probabilistic models, although the latter two globally perform better in terms of RMSE (see Table~\ref{tab:summary-travel}).
Having shown the reliability of the rankings, we evaluate whether these can be effectively used to maintain accurate forecasts over time by performing model selection at fixed time intervals. Specifically, given a forecasting horizon, we divide it into smaller periods. At each time point, we use the predicted forecasting error to rank the \textit{monitored models}{} and thus perform model selection by picking the best ranked model.
We use the public benchmark data to work with curated data, and we limit the experiments to the best two \textit{monitoring models}{}, Bayes-LeNet and the GP (Table~\ref{tab:summary-travel}). We compare our model selection with the results obtained using a single fixed \textit{monitored model} over the whole forecasting horizon.
Figure \ref{fig:naive_comp} left shows the average forecasting performance, measured through the real sMAPE, on the \textsc{weekly} dataset. The proposed model selection scheme yields lower forecasting errors, i.e. better performance, along the whole forecasting horizon. Of the two \textit{monitoring models}{}, the GP results in smoother curves.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{weekly_meta_oneline.png}
\caption{Measured average forecasting performance (sMAPE) using the proposed method for model selection on the \textsc{weekly} dataset, compared with fixed forecasting models over the whole horizon. Average performance with Bayes-LeNet and GPs as \textit{monitoring models}{} (left). Error bars denote standard deviation. Using GPs as \textit{monitoring model} with six (GP-6) and ten \textit{monitored models}{} (GP-10), worst (center) and best (right) model selection performances in comparison with \texttt{ADE}{} and \texttt{FFORMS}{}.} \label{fig:naive_comp}
\end{figure}
Finally, we compare two state-of-the-art meta-learning methods, the arbitrated dynamic ensembler~\cite{cerqueira2017arbitrated}, \texttt{ADE}{}, and Feature-based FORecast-Model Selection~\cite{RePEc:msh:ebswps:2018-6}, \texttt{FFORMS}{}, with the best performing \textit{monitoring model} in our approach. The characteristics of these two methods allow them to be used to achieve good forecasting performance. \texttt{FFORMS}{} uses 12 different base models, whereas \texttt{ADE}{} uses up to 40 different models. To remain competitive with these two methods that use a larger number of base models, we add three standard forecasting models, ARIMA (\texttt{arima}), Random Walk (\texttt{rwf}) and TBATS (\texttt{tbats})~\cite{de2011forecasting}, and a feed-forward neural network (\texttt{nn}), to our set of \textit{monitored models}{}. We present sMAPE results over two time-series from the \textsc{weekly} dataset: one where our method performs worst (Fig.~\ref{fig:naive_comp} center) and one where it performs best (Fig.~\ref{fig:naive_comp} right). We show the results of our approach using the original six \textit{monitored models}{} and the enlarged set. Using the original six \textit{monitored models}{}, our performance is worse than that of the two meta-learning models. However, by enlarging the set of \textit{monitored models}{}, our method performs better than \texttt{FFORMS}{} and achieves a performance comparable to \texttt{ADE}{} with far fewer monitored/base models.
\subsection{Model monitoring and selection performance}\label{sec:allperf}
Finally, we illustrate the performance of the proposed model monitoring and selection framework by using it to guarantee continuous price forecasting accuracy for our two travel products: flights and hotels. In this context, the predicted sMAPE is used as a surrogate measure of the quality of the forecasts estimated by the \textit{monitored models}{}. When the predicted sMAPE surpasses a given threshold, model selection is performed; otherwise, the current \textit{monitored model} is kept. We use the best performing \textit{monitoring model}, the GP. Since this is a probabilistic method, in addition to requiring a high predicted sMAPE, we add the condition of having a low uncertainty in the prediction. In our experiments, we set the sMAPE threshold at 0.02 for flights and 0.01 for hotels. The uncertainty threshold was set at 0.01 for both. For this experiment, we removed \texttt{rf}{} from the \textit{monitored models}{} pool, as it is the method giving the poorest performance. It is important to remark that, differently from other approaches, removing a method from the \textit{monitored models}{} pool simply requires stopping the generation of forecasts with the removed model. No re-training of the \textit{monitoring models}{} is required.
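A sketch of this decision rule is given below; the threshold values follow the text, while the function and variable names are assumptions:
\begin{verbatim}
def needs_reselection(pred_smape, pred_std, smape_thr, unc_thr):
    # Trigger selection only when the GP predicts a high error
    # with low predictive uncertainty.
    return (pred_smape > smape_thr) and (pred_std < unc_thr)

def monitor_step(current_model, ranking, pred_smape, pred_std,
                 smape_thr=0.02, unc_thr=0.01):  # flights thresholds
    if needs_reselection(pred_smape, pred_std, smape_thr, unc_thr):
        return ranking[0][0]  # switch to the best-ranked monitored model
    return current_model      # otherwise keep the current model
\end{verbatim}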
Figure~\ref{fig:final} illustrates the results obtained in terms of the average performance (sMAPE) for \textsc{hotels} with forecasting horizon $h=90$. Our experiment here is quite restrictive, in the sense that no \textit{monitored model} is re-trained along the forecasting period. In this way, we show that even under this restrictive setting the proposed framework is able to improve the performance of simple models. This suggests that through the use of this framework it is possible to postpone the moment when \textit{monitored models}{} need to be re-trained, by simply using the ranking information to pick a new model. Delaying model re-training represents important cost savings.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{average_hotels.png}
\caption{Average forecasting performance in terms of sMAPE using the proposed model monitoring and selection framework (GP as \textit{monitoring model}) and using fixed forecasting models over the whole horizon. Error bars denote the standard deviation.}
\label{fig:final}
\end{figure}
\section{Conclusions} \label{sect_conclusions}
In this paper we introduce a data-driven framework to constantly monitor and compare the performance of deployed time-series forecasting models, in order to guarantee accurate forecasts of travel products' prices over time. The proposed approach predicts the forecasting error of a forecasting model and considers it as a surrogate of the model's future performance. The estimated forecasting error is hence used to detect accuracy deterioration over time, but also to compare the performance of different models and carry out dynamic model selection, by simply ranking the different forecasting models based on the predicted error measure and selecting the best one. In this work, we have chosen to use the sMAPE as forecasting performance measure, since it is appropriate for our application, but it cannot be used in settings where the time-series may contain zero-valued observations. However, the framework is general enough that any other measure could be used instead.
The proposed framework has been designed to guarantee accurate price forecasts for different travel products, and it is conceived for travel applications that might already be deployed. As such, it was undesirable to propose a method that performs forecasting and monitoring altogether, as in meta-learning, since this would require deprecating already deployed models to implement a new system. Instead, thanks to the proposed fully data-driven approach, \textit{monitoring models} are completely independent of those doing the forecasts, i.e. the \textit{monitored models}{}, thus allowing a transparent implementation of the monitoring and selection framework.
Although our main objective is to guarantee stable, accurate price forecasts, the problem we address is relevant beyond our concrete application. Sculley \textit{et al.}~\cite{Sculley2015} introduced the term hidden technical debt to formalize and help reason about the long term costs of maintainable ML systems. According to their terminology, the proposed model monitoring and selection framework addresses two problems: 1) the monitoring and testing of dynamic systems, which is the task of continuously assessing that a system is working as intended; and 2) the production management debt, which refers to the costs associated with the maintenance of a large number of models that run simultaneously. Our solution represents a simple, flexible and accurate way to address these problems.
|
2,869,038,155,509 | arxiv | \section{Introduction}
A human cannot live alone without the help of others and must form relationships based on mutual cooperation. This network of cooperation is a foundation of the society. Ironically, however, humans are tempted to act selfishly in the society \cite{Hardin1243,RAND2013413}. When others act cooperatively, a selfish individual may gain more benefit than a cooperative one, because the selfish individual receives cooperation without investing anything. In reality, selfish behavior is often observed in the society. Humans thus have contradictory aspects: cooperation and selfishness. It is a challenge to explain this contradiction of human cooperation, and many pioneers have found conditions under which cooperation occurs \cite{rockenbach2006efficient,milinski2002reputation,rand2011dynamic}. However, the answer is not clear yet.
Recently, evolutionary game theory has been used to explain how cooperation occurs \cite{nowak2006evolutionary,PERC20171}. Evolutionary game theory has two major concepts. The first one is a game. Among many kinds of games, the prisoner's dilemma (PD) game expresses human selfishness well and has been studied extensively \cite{rand2011dynamic,wang2017onymity,gallo2015effects}. In this paper, we also use the PD game. The other major concept is the imitation process. In evolutionary game theory, each node imitates a neighbor's strategy following a given rule \cite{nowak1992evolutionary,PhysRevLett.98.108103,santos2006evolutionary}. However, studies in this field have focused largely on strategy spreading without population change. In this paper, we study the effect of population change by including a death process and immigration. The population may decrease through the death process based on the minimum requirements rule.
Humans have minimum requirements to survive. If an individual does not satisfy the minimum requirements, the individual is culled (dies out) \cite{gleick1996basic,leslie1984caloric}. For this reason, we set the minimum requirements as the rule of the death process. In our simulation, if an individual did not satisfy the minimum requirements in the previous game, the individual dies out with a certain probability; this means that the individual cannot adapt to the society. As a result of the death process with minimum requirements, a highly cooperative society is induced. This is one possible answer to why humans act cooperatively toward others.
A highly cooperative society looks stable and good for meeting minimum requirements. Thus, outsiders would want to live in the society, and they may immigrate into it \cite{massey1990social}. Although this happens in the real world, immigration occurs restrictively to avoid disorder in the society \cite{goldin1994political}. To study why restrictions are needed, we add immigrants to the highly cooperative society and observe the resulting population change. We show that the maximum number of immigrants the society can accept is influenced by the conditions of the society and the way the immigrants enter it.
\section{Model and methods}
We model the society using the concept of the network.
Node and edge of the network represent individual and relation between the two individuals, respectively.
Individuals in a network interact only with individuals connected by edges.
We begin the simulation from the Erd\H{o}s-R\'{e}nyi (ER) random network \cite{erdos1959random}. The ER random network is simple and has been used in many evolutionary dynamics studies \cite{PhysRevLett.98.108103,PhysRevE.79.016107,PhysRevE.80.026105}. Initially, each node selects its strategy randomly. After that, at each time step, we implement the PD game, the imitation process, and the death process sequentially.
In the PD game, each member can choose one strategy, cooperation ($C$) or defection ($D$). Thus, there are four possible cases: $CC$, $CD$, $DC$, and $DD$. In each case, payoffs that the individual gains are expressed in the payoff matrix \cite{nowak1992evolutionary,PhysRevLett.98.108103},
\begin{center}
\begin{tabular}{c|c c}
& $C$ & $D$ \\
\hline
$C$ & $1$ & $0$ \\
$D$ & $b$ & $0$ \\
\end{tabular}
\end{center}
Here, the temptation of selfish behavior $b$ satisfies $1<b<2$. This is a weak version of the prisoner's dilemma game and is used broadly to analyze strategy transitions according to temptation \cite{nowak1992evolutionary,PhysRevLett.98.108103}.
In the imitation process, imitation probability is given by \cite{PhysRevLett.98.108103,santos2006evolutionary},
\begin{equation}
P_{i \rightarrow j} = \frac{\pi_j-\pi_i}{b \times \mathrm{max}\{k_i,k_j\}} ,\, \mathrm{when}\; \pi_j > \pi_i .\label{E1}
\end{equation}
$P_{i \rightarrow j}$ is the probability that node $i$ imitates the strategy of node $j$. Node $j$ is selected randomly among neighbors of node $i$. $\pi_i$ is $i$'s payoff in the previous round, and $k_i$ means the degree of node $i$. If $\pi_j$ is less than $\pi_i$, node $i$ keeps its strategy.
Every node has a chance to change its strategy at each time step; after all the nodes have decided their next strategies, they update their strategies simultaneously (synchronous update).
In the death process with minimum requirements $\pi_r$, if $\pi_i < \pi_r$, then node $i$ dies out with the death probability $P_d$. This process may reduce the population, which is the number of nodes in the network.
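As an illustration, one time step of this dynamics (game, synchronous imitation following Eq.~(\ref{E1}), and death process) could be sketched in Python as follows; the parameter values follow the text, while the code organization is an assumption:
\begin{verbatim}
import random
import networkx as nx

b, pi_r, P_d = 1.2, 2.5, 0.1
G = nx.erdos_renyi_graph(1000, 12 / 999.0)          # <k> ~ 12
strategy = {v: random.choice(["C", "D"]) for v in G}

def payoff(v):
    # Weak PD: a cooperator earns 1 and a defector earns b
    # per cooperative neighbor; defecting neighbors yield 0.
    gain = 1.0 if strategy[v] == "C" else b
    return gain * sum(strategy[u] == "C" for u in G[v])

pi = {v: payoff(v) for v in G}

# Synchronous imitation following Eq. (1).
new_strategy = dict(strategy)
for i in G:
    neighbors = list(G[i])
    if not neighbors:
        continue
    j = random.choice(neighbors)
    if pi[j] > pi[i]:
        p = (pi[j] - pi[i]) / (b * max(G.degree(i), G.degree(j)))
        if random.random() < p:
            new_strategy[i] = strategy[j]
strategy = new_strategy

# Death process with minimum requirements.
dead = [v for v in list(G) if pi[v] < pi_r and random.random() < P_d]
G.remove_nodes_from(dead)
\end{verbatim}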
We also study the effects of immigration (see \cite{Tarik}, and references therein). A group of nodes is added to the network when the network reaches an equilibrium state. Here, we call the action of adding nodes ``immigration'', an added node is called an ``immigrant'', and a node that lived in the society before immigration is called a ``native''.
Immigration under ordinary conditions is usually small in scale and very selective, and has only a minor influence on the society. On the contrary, immigration triggered by special events such as economic crises and wars involves many immigrants. Since this kind of immigration is abrupt, it causes a more serious and critical change to the society \cite{jofre2016immigration,kirisci_1995}. Therefore, we focus on this massive immigration in this paper. We fix the number of immigrants at one time. For example, if $200$ immigrants enter the society and the number of immigrants at one time is $50$, then the total number of immigrations is $4$. The time interval between immigrations is the immigration interval. At the end of all immigrations, we simulate $20000$ more time steps for the adaptation of immigrants. The rule of minimum requirements applies equally to immigrants. When the immigrants enter the society, they make links with some members of the society. In our simulation, the number of links of an immigrant is determined within the range of $\pm 1$ from $\langle k \rangle$ in the society. (However, if the number of links of immigrants is determined by the degree distribution of the society, the properties that we will show do not change.) Immigrants connect randomly to members of the society.
When immigrants enter the society, the population of the society changes. To quantify population change posed by the immigration, we define the change of population of the society by a unit immigrant $\chi$ as
\begin{equation} \notag
\chi \equiv \frac{\mathrm{Population\; change\; of\; the\; society}}{\mathrm{Total\; number\; of\; immigrants}}.
\end{equation}
If $\chi= 1$, then all immigrants adapt to the society without the death of natives. On the contrary, if $\chi = -1$, then the population of the society is reduced by the number of immigrants in spite of the immigration. If $\chi=0$, the population of the society does not change. In this paper, we say that immigration is successful when $\chi>0$. Also, we control three variables to observe the properties of the massive immigration. The first one is $R_i$, the total number of immigrants with respect to the number of natives before immigration occurs ($N$). The second one is the ratio of cooperators among immigrants at each immigration ($R_c$). The last one is the immigration interval.
\section{Results and discussion}
In order to verify the effect of minimum requirements, we made an ER random network with $\langle k \rangle = 12$ and performed the simulation with $P_d = 0.1$ and $b = 1.2$. Figure~\ref{fig1}(b) shows the number of live nodes and the number of live nodes with cooperative strategy as a function of the minimum requirements. Since the death process reduces the number of nodes, the number of nodes in the final network is less than that in the initial network. The population of the final network has a linear dependence on that of the initial network, as shown in the inset of Fig.~\ref{fig1}(b). Thus, if we start with more initial nodes, we get a larger number of live nodes in the final network. Note that the cooperation level of the final network is very high even for very small $\pi_r$. This is because a defector core (a defector surrounded by defectors) does not gain any payoff, so its payoff is always less than $\pi_r$. Thus, defector cores die out with probability $P_d$ through the death process.
\begin{figure}
\centering
\includegraphics[angle=270,width=1\columnwidth]{fig1-2.pdf}
\caption{(a)~Black and red circles represent cooperators and defectors, respectively. If $\pi_r \leq b$ (left), then the defector satisfies the minimum requirements. However, if $\pi_r > b$ (right), then the defector dies out with probability $P_d$. (b)~The number of total live nodes (square) and the number of live nodes with cooperative strategy (triangle) as a function of minimum requirements $\pi_r$. In this simulation, we set $b=1.2$ and the number of nodes of the initial network $N_i$ is $10000$. At each simulation, we simulated until the system was in equilibrium. (At least $20000$ time steps were simulated.) $1000$ simulations were averaged in each case. Introduction of minimum requirements induces a highly cooperative society through the death process.
(inset)~The number of live nodes ($N_f$) as a function of the number of initial nodes ($N_i$). In this simulation, we set $b=1.2$ and $\pi_r = 2.5$. $500$ simulations were averaged in each case.}
\label{fig1}
\end{figure}
The results of this simulation exhibit a nontrivial point at $\pi_r = b$. When $\pi_r \leq b$, if a defector has at least one cooperative neighbor, then the defector has no chance of dying out. Thus, in this condition, the final network is not purely cooperative. However, when $\pi_r > b$, defectors who have only one cooperative neighbor die out with probability $P_d$; defectors need at least two cooperative neighbors. (See Fig.~\ref{fig1}(a).) In this condition, the final network is usually, though not always, purely cooperative. When the final network is purely cooperative, all nodes are cooperator cores (cooperators surrounded by cooperators).
\begin{figure}
\centering
\includegraphics[angle=270,width=1\columnwidth]{fig2.pdf}
\caption{We set $b=1.2$ and $\pi_r = 2.5$, and the immigration interval is $1000$. (a)~$\chi$ as a function of $R_i$ for each $N_i$. In this simulation, $R_c$ is fixed by $0.8$. $1000$ simulations were averaged in each society and $10$ simulations were averaged among the same $N_i$ in each point. (b)~We make highly cooperative society that the population is $N=7582$ and observe $\chi$ with varying $R_c$ and $R_i$. $1000$ simulations were averaged in each case. When the number of immigrants is small and the ratio of cooperator of immigrants is high, $\chi$ is larger than zero. That is, the population of the society becomes larger than before.}\label{fig2}
\end{figure}
To identify the properties of immigration, we first set $b=1.2$, $\pi_r = 2.5$, and the immigration interval to $1000$ in the highly cooperative society. From now on, these values do not change, except in Fig.~\ref{fig3}, where the immigration interval varies. We made highly cooperative societies from initial networks of $N_i = 20000$, $30000$, and $40000$. We implemented $10$ different societies for each $N_i$. Each society has a different number of natives ($N$). In these societies, we set $R_c=0.8$, and the number of immigrants that immigrate at one time is $1\%$ of $N$. The results are shown in Fig.~\ref{fig2}(a). Average values of $N$ are $4848$, $7366$, and $9685$ for $N_i=20000$, $30000$, and $40000$, respectively. Since the values of $\chi$ are almost independent of the population of the society, the maximum number of immigrants for successful immigration is proportional to the population of the society. To identify other properties of immigration, we make a highly cooperative society with $N_i=30000$ and $N = 7582$. Also, we fix the number of immigrants at one time to $50$ from now on. In this condition, we observe $\chi$ while varying $R_c$ and $R_i$. The results are shown in Fig.~\ref{fig2}(b). When the total number of immigrants $R_i$ is small and $R_c$ is close to $1$, the immigration is successful ($\chi > 0$). Note that $\chi > 0$ if $R_c$ is very close to $1$, even when $R_i$ is large. To summarize the results in Fig.~\ref{fig2}, the population of the society before immigration occurs and the attitude (strategy) of immigrants are the main factors for successful immigration.
\begin{figure}
\centering
\includegraphics[angle=270,width=0.46\columnwidth]{fig3.pdf}
\caption{In the same society as in Fig.~2(b), we fix the number of total immigrants to be $450$ ($R_i \approx 0.06$) and observe $\chi$ as a function of the immigration interval for each $R_c$. Values of $R_c$ are $0.9$, $0.85$, $0.8$, and $0.7$, from top to bottom. $3000$ simulations were averaged in each case.}\label{fig3}
\end{figure}
Figure~\ref{fig3} shows the effect of the immigration interval when $R_c$ and $R_i$ are fixed. For $R_c=0.7$, $\chi$ does not depend on the immigration interval. However, for larger $R_c$, $\chi$ increases with a larger immigration interval. When immigrants enter the society, the society undergoes a disturbed period, after which it returns to a highly cooperative state. Thus, $\chi$ saturates when the immigration interval is longer than the disturbed period. To increase the total number of immigrants while maintaining $\chi > 0$, the society needs a sufficiently long immigration interval.
Now, we consider an extreme case. We suppose that all immigrants are cooperative. Since immigration then does not introduce a defective strategy into the society, $\chi$ is always $1$ and the population of the society can grow indefinitely. The question is whether the cooperative immigrants qualitatively change the structural stability of the society. In this paper, we define the society as stable if it maintains positive $\chi$ when $20$ defective invaders enter the society.
\begin{figure}
\centering
\includegraphics[angle=270,width=1\columnwidth]{fig4.pdf}
\caption{We use the same society as in Fig.~2(b). (a)~$\chi$ by the invasion for each society, which has different $R_i$ and only cooperative immigrants. $1000$ simulations were averaged in each case. When $R_i$ is larger than $0.075$, $\chi$ is negative. (b)~$\chi$ as a function of $R_c$ for two highly cooperative societies with and without immigrants. The total number of immigrants is $200$ ($R_i \approx 0.02$) and the population is $7582$ in both societies. $1000$ simulations were averaged in each case.}\label{fig4}
\end{figure}
To verify the stability of the society, we add only cooperative immigrants to the society until the desired $R_i$ is reached. After that, $10$ defective invaders enter the society (invasion). Next, we simulate $1000$ time steps, and then $10$ defective invaders enter the society once again. (The invasion interval is $1000$.) After the second invasion, we simulate $20000$ more time steps for the adaptation of invaders. The results of the invasion are shown in Fig.~\ref{fig4}(a). When $R_i$ is small, $\chi > 0$. However, although the number of defective invaders is very small compared to the population of the society, $\chi$ becomes negative under invasion when $R_i$ is large. This is because immigration makes the structure of the society unstable. To verify this, we made two kinds of highly cooperative societies of $N=7582$. The first one has only natives, and the second one is composed of $7388$ natives and $194$ immigrants. (The immigration into the second society occurred four times: three times with $50$ immigrants and once with $44$ immigrants.) Figure~4(b) compares the values of $\chi$ resulting from $200$ additional immigrants in the two societies as a function of $R_c$. For the same $R_c$, $\chi$ of the society made up of natives and immigrants is less than $\chi$ of the society made up of only natives, except for $R_c = 1$. The difference in $\chi$ increases as the ratio of immigrants in the society increases. The society made up of natives and immigrants is vulnerable to the immigration of defectors. Thus, even if all immigrants have a cooperative strategy, the society may have to restrict the number of immigrants for its own stability.
Until now, we assumed that immigrants connect randomly to members of the society \cite{Tarik}. This approach is reasonable when immigrants have no information about the society. However, what if they have some information about members of the society? In the real world, immigrants may gather some information about the society and prepare for future threats. To study this effect, we adopt preferential attachment \cite{PhysRevE.73.056124,barabasi1999emergence}. Preferential attachment uses the degree of each member, which is regarded as the member's reputation in the society. A person who has a high reputation is considered successful in the society, and immigrants want to have a relationship with that person. We simulate in the same society as in Fig.~\ref{fig2}(b), but each immigrant has one preferential attachment link. The other links are connected randomly as before.
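A sketch of adding one such immigrant is given below; the helper function, its arguments, and the use of {\tt networkx} are assumptions for illustration:
\begin{verbatim}
import random
import networkx as nx

def add_immigrant(G, strategy, s, n_links):
    # Add one immigrant with strategy s: the first link is degree-weighted
    # (preferential attachment, i.e. "reputation"), the rest are random.
    natives = list(G)
    new = max(natives) + 1
    G.add_node(new)
    strategy[new] = s
    weights = [G.degree(v) for v in natives]
    G.add_edge(new, random.choices(natives, weights=weights)[0])
    while G.degree(new) < n_links:
        G.add_edge(new, random.choice(natives))
    return new
\end{verbatim}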
\begin{figure}
\centering
\includegraphics[angle=270,width=0.46\columnwidth]{fig5.pdf}
\caption{Plot of $\chi$ as a function of $R_i$ with and without the preferential attachment (PA). The society before immigration is the same as in Fig.~2(b) and $R_c = 0.8$. Preferential attachment increases the number of acceptable immigrants while maintaining $\chi > 0 $. $1000$ simulations were averaged in each case.}\label{fig5}
\end{figure}
As shown in Fig.~\ref{fig5}, when immigrants have one preferential attachment link, the number of acceptable immigrants while maintaining $\chi > 0 $ increases substantially. This is because immigrants can adopt a suitable strategy for the society through the neighbor connected by preferential attachment.
\section{Conclusions}
We have observed the effects of minimum requirements and immigration in the prisoner's dilemma game. The minimum requirements condition induces a highly cooperative society. When immigration occurs in this highly cooperative society, the number of immigrants that the society can accept while maintaining successful immigration depends on the population of the society, the ratio of cooperators among immigrants, and the immigration interval. In addition, changing the connection rule of immigrants' links from a random connection to preferential attachment increases this number. Moreover, even if all immigrants are cooperative, the excessive acceptance of immigrants makes the society unstable.
\section*{Acknowledgments}
This work was supported by GIST Research Institute (GRI) grant funded by the GIST in 2018.
|
2,869,038,155,510 | arxiv | \section[]{Introduction}
A key feature of gamma-ray bursts (GRBs) is the observed relation
within a burst between the luminosity and the $\nu F_{\nu}$ peak energy
($E_{\rm peak}^{\rm rest}$)
\citep{Golenetskii:1983,Kargatis:1994,Borgonovo:2001,Ryde:2006,Ghirlanda:2010,Lu:2010bj,Zhang:2012fq,Guiriec:2013hl,Burgess:2014}. This
luminosity-$E_{\rm peak}^{\rm rest}$ relation was first discovered by
\citet{Golenetskii:1983} and has been found in several GRBs regardless
of their lightcurve shape. The relation is sometimes referred to as
the hardness-intensity correlation or the Golenetskii correlation
(GC). It is typically stronger during the decay phase of a GRB
lightcurve. The form of the GC states that the luminosity of the GRB
is proportional to its $E_{\rm peak}^{\rm rest}$ to some power $\gamma$:
\begin{equation}
\label{eq:ler}
L \propto \left( E_{\rm peak}^{\rm rest} \right)^{\gamma} \; {\rm erg\; s}^{-1}
\end{equation}
\noindent Historically, the GC has been used in an attempt to
understand the physical process generating the observed emission
\citep[e.g.][]{Dermer:2004,Ryde:2006,Bosnjak:2014}. \citet{Ryde:2006}
pointed out that the correlation is strongest in GRBs with single
non-overlapping pulses.
Such a correlation should be a signature of the evolving radiative
process occurring in an outflow. For example, if the emission is
purely photospheric then one would expect $L\propto\sigma_{\rm B}
T^{4}$ where $\sigma_{\rm B}$ is the Stefan--Boltzmann constant and $E_{\rm peak}
\propto 3T$ where $T$ is the temperature of the outflow plasma. This
fact motivated the original research into photospheric emission of
GRBs. \citet{Borgonovo:2001,Ryde:2006} used single- and
multi-component spectral models composed of thermal and non-thermal
components to analyze several GRBs and obtain their GCs. It was found
that the GCs of these bursts exhibit a wide range of $\gamma$
\citep[see also][]{Burgess:2014} and it is difficult to assign a
common physical setting to all GRBs: it is known that photospheric
emission can take on several forms due to the differences in viewing
angle \citep{Lundman:2014} as well as subphotospheric dissipation
\citep[e.g.][]{Peer:2005,Beloborodov:2010} or jet composition
\citep{Giannios:2006,Begue:2015}. These advanced photospheric models
have just begun making predictions to explain the GC
\citep{Fan:2012,Lopez-Camara:2014}.
Other attempts to explain the observed GCs invoked non-thermal
synchrotron emission \citep{Zhang:2002dr,Dermer:2004} in both single
\citep{Ghirlanda:2010} and multi-component
\citep{Burgess:2014,Preece:2014} time-resolved spectra. For
synchrotron emission, one expects $L \propto N_{\rm e} B^2 \Gamma^2
\gamma_{\rm e}^2$ and $E_{\rm peak} \propto B \Gamma \gamma_{\rm e}^2$ where
$N_{\rm e}$ is the number of emitting electrons, $B$ in the magnetic
field, $\gamma_{\rm e}$ is the characteristic electron Lorentz factor
and $\Gamma$ is the bulk Lorentz factor. This implies that $L \propto
N_{\rm e} E_{\rm peak}^2 \gamma_{\rm e}^{-2}$. Deeper considerations for the evolution of the
parameters can lead to different $\gamma$'s (see \citet{Dermer:2004}
for examples). Yet again, the wide range of observed $\gamma$'s makes
it difficult to explain all GRBs with synchrotron emission.
Another feature of the GC exploits the fact that if the relation is
generated by a common process, then GCs can be used to estimate
redshift of GRBs \citep{Guiriec:2013hl,Guiriec:2015}. Several GRBs
with known redshift exhibited a common rest-frame proportionality
constant ($N\simeq 10^{53}$ erg s$^{-1}$) between luminosity and
$E_{\rm peak}^{\rm rest}$. If this is true for all GRBs, it could be possible
to use the observer-frame normalization of the relation to estimate
the redshifts of GRBs that do not have one measured by other
means. This estimation of redshift differs from that of the so-called
Amati and Yonetoku relations \citep{Amati:2002,Yonetoku:2004}. These
relations rely on the time-integrated properties of GRBs and most
likely result from the effects of functional correlation
\citep{Massaro:2007} and selection effects \citep{Kocevski:2012}. If
time-resolved GCs such as those studied herein do allow for the
estimation of redshift then this would allow for a very powerful
cosmological tool as GRBs can probe the high-redshift
universe. However, this possibility heavily relies on the assumption
of a common rest-frame normalization, which in both the photospheric
and synchrotron models is not predicted.
\section[]{The Relation and the Bayesian Model}
\label{sec:model}
The GC is given in the rest-frame in Equation \ref{eq:ler}; however,
it is derived from observer-frame time-resolved spectral fits of
GRBs. The observed spectra are integrated over energy (10 keV - 40 MeV) to yield the
energy flux ($F_{\rm E}$) which, assuming isotropic emission, is related to
the luminosity via $L = 4 \pi d_{\rm L}^2 (z) F_{\rm E}$. Here, $z$ in the redshift
of the GRB and $d_{\rm L}(z)$ is the luminosity distance. Similarly, $E_{\rm peak}^{\rm
rest} =E_{\rm peak}^{\rm obs} (1+z)$.
Assuming that the time-evolving rest-frame luminosity of GRBs derives
from a single inherent relationship given as
\begin{equation}
\label{eq:1}
L=N_{\rm rest} \left(\frac{E_{\rm peak}^{\rm rest}}{100 {\rm keV}} \right)^{\gamma} \; {\rm erg\; s}^{-1}{\rm ,}
\end{equation}
\noindent where $N_{\rm rest}$ is a normalization attributed to the intrinsic
physics of the emission process, and $\gamma$ is the GC index again
attributed to an intrinsic physical process, we shift into the
observer-frame such that
\begin{equation}
\label{eq:3}
F_{\rm E} = \frac{N_{\rm rest}}{4 \pi d_{\rm L}^2 (z)} \left( \frac{E_{\rm peak}^{\rm obs} (1+z)}{100 {\rm keV}} \right)^{\gamma} \; {\rm erg\; s}^{-1} {\rm \; cm}^{-2}{\rm .}
\end{equation}
\noindent Taking the base-10 logarithm to obtain a linear relationship we have
\begin{equation}
\label{eq:5}
\log(F_{\rm E}) = \log(N_{\rm rest}) - \log(4 \pi d_{\rm L}^2 (z)) + \gamma \log\left(\frac{E_{\rm peak}^{\rm obs}(1+z)}{100 {\rm keV}}\right) {\rm .}
\end{equation}
\noindent If one makes the assumption that $N_{\rm rest}$ and $\gamma$ remain
constant or tightly distributed for all GRBs, from this equation one
can solve for $z$. I use this assumption to construct a hierarchical
Bayesian model \citep[see for example, ][for related uses of
hierarchical Bayesian models in
astrophysics]{Mandel:2011,March:2011jx,Andreon:2012dp,Sanders:2015,deSouza:2015gk}. For
the $i^{\rm th}$ GRB$_i$ we have
\begin{equation}
\label{eq:lin}
\log\left(F_{\rm E}^{i,j}\right) = \log\left( N_{\rm rest} \right) - \log \left(4 \pi d_{\rm L}^2 \left( z^i \right) \right) + \gamma^{i} \log\left(\frac{E_{\rm peak}^{{\rm obs}i,j} \left( 1+z^i \right)}{100 {\rm keV}}\right) {\rm .}
\end{equation}
\noindent Here, $j$ indexes time. Figure \ref{fig:model} demonstrates
the model. I will call this model \textit{Mod~A}. Henceforth,
logarithmic quantities ($\log\left( N_{\rm rest} \right)$, $\log\left( F_{\rm E}
\right)$, etc.) will be represented as $Q=\log\left( Q \right)$
for notational simplicity. Since I choose $F_{\rm E}$ as the dependent
quantity and $E_{\rm peak}^{\rm obs}$ as the independent quantity, the measurement error
in $E_{\rm peak}^{\rm obs}$ must be accounted for following the methods of
\citet{Dellaportas:1995,Andreon:2013}. I also allow for intrinsic
scatter in each GC. The following priors are assumed:
\begin{eqnarray}
\mu_{\gamma} & \sim & \mathcal{N}(0,\text{std}(F_{\rm E})/\text{std}(E_{\rm peak}^{\rm obs}))\\
\sigma_{\gamma} & \sim & \textit{Cauchy}_{(0,\infty)}(0,2.5)\\
\gamma^i &\sim& \mathcal{N}(\mu_{\gamma},\sigma_{\gamma})\\
\mu_{N_{\rm rest}} & \sim & \mathcal{N}(52,5)\\
\sigma_{N_{\rm rest}} & \sim & \textit{Cauchy}_{(0,\infty)}(0,2.5)\\
N_{\rm rest}^i &\sim& \mathcal{N}(\mu_{N_{\rm rest}},\sigma_{N_{\rm rest}})\\
z^i &\sim& \mathcal{U}(0,15)\\
E_{\rm peak}^{{\rm obs} \prime i,j} &\sim& \mathcal{N}(E_{\rm peak}^{{\rm obs} i,j},\sigma_{E_{\rm peak}}^{i,j})\\
\sigma_{\rm scat}^{2,i} &\sim& \textit{Cauchy}_{(0,\infty)}(0,2.5)\\
F_{\rm E}^{\prime i,j} &\sim& \mathcal{N}(\star,\sigma_{\rm scat}^i)\\
F_{\rm E}^{ i,j} &\sim& \mathcal{N}(F_{\rm E}^{\prime i,j},\sigma_{F_{\rm E}}^{i,j})
\end{eqnarray}
\noindent Here ($\star$) indicates the right-hand side of Equation
\ref{eq:lin}. Half-Cauchy distributions were chosen as non-informative
proper priors for the variances \citep{Gelman:2006di}. If a GRB has a
known redshift, then it can be fixed in the model which will aid in
constraining the hyper-parameters for $N_{\rm rest}$.
I can further generalize the model by relaxing the assumption of a
common $N_{\rm rest}$ and $\gamma$ and allow separate $N_{\rm rest}$ and $\gamma$ to
be fit without their respective hyper-priors. Therefore, I implement
two further models, \textit{Mod B} and \textit{Mod~C} which relax the
assumption of common $\gamma$ and common $N_{\rm rest}$ respectively. These
models are also detailed in Figure \ref{fig:model} when the
\textit{red} and then the \textit{blue} hyper-priors are removed.
The models are implemented in {\tt PyStan} \citep{stan}, the Python
interface to the Stan probabilistic modeling language, which implements
a Hamiltonian Monte Carlo sampler for
full Bayesian inference. The posterior marginal distributions of the
parameters obtained from {\tt PyStan} can be analyzed to determine
$\gamma^i$, $N_{\rm rest}^i$, and $z^i$. The simultaneous fitting of all GCs
with linked parameters (e.g. $\gamma^i$) of the hierarchical model
provides shrinkage of the estimates and, if the data support it, will
pull together the estimates as well as assist in better fits of GCs
where there is less data. For computational speed, I implement the
highly accurate, analytic form of $d_{\rm L}(z)$ from \citet{Adachi:2012bn} in
my model.
An alternative approach to determining redshifts is to fit all GCs
from GRBs with known redshift together as one data set via a standard
fitting tool such as {\tt FITEXY} \citep{Press:2007}. A common,
calibration rest-frame normalization ($N_{\rm cal}$) and slope can be
determined. Then, GRBs without known redshift can have their
observer-frame GCs fitted to determine their observer-frame
normalizations ($M_z$) which can be solved for redshift (see Section
\ref{sec:simA}). However, this approach lacks the robustness of the
Bayesian model presented here, in which all
parameters are determined from the data simultaneously, accounting for
all correlations in the parameters and data.
\section{Simulations}
To test the feasibility of this approach, I simulate a set of eight
GCs to fit with the model. Starting with a universal relation
\begin{equation}
\label{eq:2}
L^{\rm sim} = N_{\rm rest}^{\rm sim} \left( \frac{E_{\rm peak}^{\rm rest, sim}}{100 {\rm keV}} \right)^{\gamma^{\rm sim}} \; {\rm erg\; s}^{-1}
\end{equation}
\noindent a random number of $E_{\rm peak}^{\rm rest, sim}$'s are drawn
from a log-uniform distribution. For \textit{Mod~A}, $N_{\rm rest}^{\rm
sim}=52$ (in the logarithmic units above) and $\gamma^{\rm sim} =1.5$ for every
synthetic GRB (see Figure \ref{fig:simAdata}). For \textit{Mod~B} and
\textit{Mod~C}, $\gamma^{\rm sim}$ is drawn from $\mathcal{U}(1,2)$ and for
\textit{Mod~C} $N_{\rm rest}$ is drawn from $\mathcal{U}(51.7,52.3)$ (see
Figures \ref{fig:simBdata} and \ref{fig:simCdata}). Then, $L^{\rm
sim}$ is computed for each $E_{\rm peak}^{\rm rest, sim}$. A random redshift
is assigned and the rest-frame quantities are shifted into the
observer-frame. Finally, each observer frame quantity has
heteroscedastic error added to it assuming that the noise is normally
distributed for both $E_{\rm peak}^{\rm obs, sim}$ and $F_{\rm E}^{\rm sim}$. No
intrinsic scatter is added.
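A minimal sketch of this data-generation step (assuming, for
illustration only, a flat $\Lambda$CDM cosmology via {\tt Astropy} in
place of the analytic $d_{\rm L}(z)$, and with illustrative draw ranges
and error sizes):
\begin{verbatim}
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

rng = np.random.default_rng(0)
cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # assumed cosmology

def simulate_gc(n_pts, N_rest=52.0, gamma=1.5,
                ep_err=0.05, fe_err=0.05):
    """Simulate one observer-frame GC; all quantities are log10."""
    z = rng.uniform(0.5, 5.0)                  # random redshift
    ep_rest = rng.uniform(1.5, 3.0, n_pts)     # log-uniform E_peak^rest [keV]
    logL = N_rest + gamma * (ep_rest - 2.0)    # the universal relation above
    d_cm = cosmo.luminosity_distance(z).to(u.cm).value
    ep_obs = ep_rest - np.log10(1.0 + z)       # shift to the observer frame
    fe = logL - np.log10(4.0 * np.pi * d_cm**2)
    ep_obs += rng.normal(0.0, ep_err, n_pts)   # Gaussian noise (common size
    fe += rng.normal(0.0, fe_err, n_pts)       #  here, for brevity)
    return ep_obs, fe, z
\end{verbatim}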
\subsection{Simulated GCs with Common $N_{\rm rest}$ and $\gamma$: \textit{Mod~A}}
\label{sec:simA}
The set of simulated GCs is fit with \textit{Mod~A} assuming that
GRB$_1$--GRB$_4$ have known redshifts and the remaining GRBs do
not. Table \ref{tab:simA} details the fits, and marginal posterior
distributions for all parameters are displayed in Appendix
\ref{sec:post}. The observer-frame relations are well recovered by the
model, as can be seen in Figure \ref{fig:simAfits}, and the estimated
$\gamma^i$ agree with the simulated values. The $N_{\rm rest}^{i}$ are
also well fitted, even for GRBs that do not have a redshift, owing to
the shrinkage of the posterior by GRBs with known redshift. There is a
tight correlation between the hyper-parameters $\mu_{\gamma}$ and $\mu_{N_{\rm rest}}$ (Figure
\ref{fig:simAmu}). The estimated unknown
redshifts have long tails but are generally estimated well, as shown in
Figure \ref{fig:simzcomp}. All of the simulated values are within the
95\% highest posterior density intervals (HDIs). Still, the spread in
estimated values makes them unusable as a cosmological probe.
Now, I compare to the frequentist approach to see the difference
between the methods. This approach requires that at least one redshift
be known to calibrate $N_{\rm rest}$. The same four simulated GRBs with known
redshift are used as in the previous section. The method proceeds in the
following fashion:
\begin{itemize}
\item The four known redshift GRBs have their \textit{rest-frame} GCs
fit via the {\tt FITEXY} method to determine a calibration value for
$N_{\rm rest}$.
\item The GRBs without redshift have their \textit{observer-frame}
GCs fit to determine their normalization which is
$A=\nicefrac{M_z}{N_{\rm rest}}$ where $M_z$ is the observer-frame
normalization.
\item Using the calibration $N_{\rm rest}$, $M_z$ is solved for, and
subsequently $z$ is obtained (a numerical sketch of this inversion is
given below).
\end{itemize}
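For concreteness, the final inversion step can be sketched as follows
(again assuming a flat $\Lambda$CDM $d_{\rm L}(z)$; the function name and
root bracket are illustrative):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)   # assumed cosmology

def redshift_from_normalization(M_z, N_cal):
    """Solve M_z = N_cal - log10(4 pi d_L(z)^2) for z (log10 units)."""
    def residual(z):
        d_cm = cosmo.luminosity_distance(z).to(u.cm).value
        return N_cal - np.log10(4.0 * np.pi * d_cm**2) - M_z
    return brentq(residual, 1e-4, 15.0)  # bracket matches the U(0,15) prior
\end{verbatim}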
Table \ref{tab:mleA} details the results from the procedure. The
calibrated rest-frame normalization is $N_{\rm cal}=52 \pm 0.01$ with slope
$\gamma=1.49 \pm 0.014$ (compare these to the simulated values of
$N_{\rm rest}=52$ and $\gamma = 1.5$). It is important to note that {\tt FITEXY}
has a positive bias on the obtained $\gamma$'s, an effect that is well
known \citep[e.g.][]{Kelly:2007}, and the method's statistical
properties are poorly understood. This makes fitting GCs via this
method unreliable for determining the inherent physics of the outflow.
The 1$\sigma$ errors on redshift are determined numerically by fully
propagating the errors from the linear fits of both the calibration
and observer-frame fits. The results are displayed in Table
\ref{tab:mleA}. The hierarchical Bayesian model performs better at
obtaining the simulated $\gamma$'s (see Table \ref{tab:simA}). The
redshifts are obtained accurately via {\tt FITEXY} but with very small
errors that do not take into account the full variance of the model
and data. Therefore, these errors are likely underestimated. We will
see how these underestimated errors lead to inaccurate redshift
predictions in the following sections.
\subsection{Simulated GCs with Varying $\gamma$: \textit{Mod~B}}
For simulations where $\gamma^i$ is varied, \textit{Mod~B} is used to
fit the data. Similar to \textit{Mod~A}, \textit{Mod~B} is able to
estimate the simulated properties of the GRBs. Table
\ref{tab:simB} details the results. All parameter marginal posterior
distributions are shown in Appendix \ref{sec:post}. Importantly, the
varying values of $\gamma^{i}$ are well estimated and the redshifts
are recovered, though with broader HDIs than those found with \textit{Mod~A}
(see Figure \ref{fig:simzcomp}).
I then proceed to fit the simulated data with {\tt FITEXY}. The
results are in Table \ref{tab:mleB} and show that the redshifts are
inaccurately estimated and the errors are underestimated. Therefore,
if there is a distribution of $\gamma^i$ in real data, this method
will give incorrect redshifts whose errors may not encompass the
true values.
\subsection{Simulated GCs with Varying $\gamma$ and $N_{\rm rest}$: \textit{Mod~C}}
Finally, I test \textit{Mod~C} which has no linkage between
datasets. The fits are detailed in Table \ref{tab:simC} and all
parameter marginal posterior distributions are displayed in Appendix
\ref{sec:post}. GRBs with known redshift can easily have $N_{\rm rest}$
estimated but the degeneracy between $N_{\rm rest}$ and $z$ for GRBs without
known redshift restricts the ability for $N_{\rm rest}$ to have a compact
marginal distribution. As a consequence, Figure \ref{fig:simCz} shows
that redshift cannot be determined in this model. The model is still
able to determine $\gamma^i$, though with broader HDIs. The loss of
linkage and information sharing between GRBs with known and unknown
redshift makes this model not useful for cosmology. However, if it
is the model representing the actual physics of GCs, it can be used on
GRBs with known redshift to determine rest-frame properties. Once
again, {\tt FITEXY} is tested (Table \ref{tab:mleC}) and it is found
that the redshifts are inaccurately estimated with underestimated
errors.
\subsection{Reconstructing the Simulated Golenetskii Correlation}
\label{sec:simgol}
Another test of the models is how well they reconstruct the simulated
relations in the rest-frame. Essentially, the marginalization over
redshift removes the cosmological factors in the GCs and allows for
the rest-frame quantities to be calculated with the full variance of
the model and data. This is more descriptive than simply shifting the
observer-frame quantities into the rest-frame using the estimated and
known redshifts.
The process of calculating the rest-frame Golenetskii relation depends
on whether or not the redshift for a GRB is known. For GRBs with
unknown redshift, I first construct the inferred rest-frame
$E_{\rm peak}^{\rm rest}$'s. Since I model the measurement error of the observer-frame
$E_{\rm peak}^{\rm obs}$ during the fits, I obtain a distribution for each $E_{\rm peak}^{\rm obs}$ based
on the data and the model. I propagate the distribution of $E_{\rm peak}^{\rm obs}$
and the estimated redshift ($z_{\rm est}$) to reconstruct $E_{\rm peak}^{\rm rest} = E_{\rm peak}^{\rm obs}(1+z_{\rm
est})$. Thus a distribution for $E_{\rm peak}^{\rm rest}$ is obtained. Similarly, I
propagate the obtained distributions of the marginalized parameters to
reconstruct $L = N_{\rm rest} \left( \nicefrac{E_{\rm peak}^{\rm rest}}{100 {\rm
keV}}\right)^{\gamma}$.
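A minimal sketch of this propagation for a single time bin, applied to
posterior draws (array names illustrative):
\begin{verbatim}
import numpy as np

def rest_frame_point(ep_obs_draws, z_draws, N_draws, gamma_draws):
    """Propagate posterior draws into rest-frame (E_peak^rest, log10 L)."""
    ep_rest = ep_obs_draws * (1.0 + z_draws)    # E^rest = E^obs (1 + z)
    logL = N_draws + gamma_draws * np.log10(ep_rest / 100.0)
    return ep_rest, logL
\end{verbatim}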
For GRBs with known redshift, the process is the same except the
measured value of $z$ is used rather than the estimated value. At the
end of the process, I have distributions of $L$ and $E_{\rm peak}^{\rm rest}$ for each
data point of each GRB. Figure \ref{fig:simGol} displays the results
of this process for all three models. Compared with the simulated
relations (Figures \ref{fig:simAdata} - \ref{fig:simCdata}) all models
reconstruct the relation well for both GRBs with known and unknown
redshift except \textit{Mod~C} which cannot estimate unknown
redshifts.
\section{Application to Real GRBs}
\label{sec:real}
The model is now applied to real data from a sample of GRBs that have
been analyzed with a multi-component spectral model. It has been
claimed that the use of multi-component spectra tightens the observed
GC in some GRBs \citep{Guiriec:2015}. I therefore use the
time-resolved luminosity and $E_{\rm peak}$ from Band function
\citep{Band:1993} fits of the sample of GRBs analyzed in
\citet{Burgess:2014}. These GRBs are single pulsed in their
lightcurves and were analyzed with physical synchrotron models as well
as the empirical Band function. Additionally, the models were analyzed
under a multi-component model consisting of a non-thermal component
modeled as a Band function and a blackbody. I choose from this sample
GRBs 081224A, 090719A, 090809A, 110721A, and 110920A. Three tentative
redshifts for GRB 110721A appear in the literature ($z=0.382, 3.2,
3.512$) \citep{Berger:2011,Griener:2011}, though none are stringent
measurements. Nonetheless, I will assume that the actual value is
$z=3.2$ for consistency with \citet{Guiriec:2015}.
In \citet{Guiriec:2015}, GRBs were also analyzed with a
multi-component model, including GRB 080916C which has a measured
redshift of $z=4.24$. I reanalyzed this GRB with the model and
time-intervals posed in \citet{Guiriec:2015} and added this to my
sample. The nearby ($z=0.34$) GRB 130427A is included as well because
of its brightness which allows for several time-intervals to be fit
with spectral models. Though in \citet{Preece:2014} the GRB was fit
with both synchrotron and Band models, only the fits with synchrotron
statistically required a blackbody. Therefore, fits from the Band
function only are used in this work. Finally, I include the
multi-component analysis of GRB 141028A, which has a well measured
redshift of $z=2.332$ \citep{Burgess:2015}.
All spectral analysis was carried out with {\tt
RMFIT}\footnote{http://fermi.gsfc.nasa.gov/ssc/data/analysis/rmfit/}. The
$F_{\rm E}$ was calculated for the Band function for each time-interval
using full error propagation of all spectral parameters including
those of the other spectral components to obtain the errors on the
$F_{\rm E}$ (See Appendix \ref{sec:prop} for a discussion on error
propagation). I use only Band function fits from the multi-component
fits because it is claimed that they provide a better redshift
estimator. Finally, it is known that GCs are stronger and possess a
positive slope in the decay phase of GRB lightcurves, whereas they are
typically anti-correlated in the rising portion of the lightcurve;
therefore, the rise-phase GC is disregarded in this analysis. With
this sample (see Figure \ref{fig:realdata}), I proceed with testing
the Bayesian models.
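Returning to the error propagation mentioned above, the essential step
can be sketched as a Monte-Carlo propagation over the fitted spectral
parameters (the flux function and names are hypothetical placeholders
for the Band-function energy flux):
\begin{verbatim}
import numpy as np

def fe_with_errors(popt, pcov, flux_fn, n_draws=5000, seed=0):
    """Propagate the spectral-fit covariance into F_E.
    popt, pcov: best-fit parameters and covariance matrix;
    flux_fn(p): model energy flux for parameter vector p."""
    rng = np.random.default_rng(seed)
    draws = rng.multivariate_normal(popt, pcov, size=n_draws)
    fe = np.array([flux_fn(p) for p in draws])
    return fe.mean(), fe.std()
\end{verbatim}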
\subsection{Results}
First, I fit the real GCs with \textit{Mod~A}. Table \ref{tab:realA}
and Figure \ref{fig:realAfits} show the results of the fits. Figure
\ref{fig:realAgamma} shows some variation in $\gamma^i$, but the
values are clustered due to the pull of the hyper-parameters of the
model. Most importantly, GRBs without known redshift have
unconstrained $N_{\rm rest}^i$, resulting in unconstrained redshifts (see
Figures \ref{fig:realANr} and \ref{fig:realAz}). The hyper-parameters
from the fits are clustered (Figure \ref{fig:realAmu}), due mainly to
GRBs with known redshift.
Next, \textit{Mod~B} is fit to the data. Table \ref{tab:realB} and
Figure \ref{fig:realBfits} show the results of the fits. The values of
$\gamma^i$ vary due to the loosening of their hyper-parameter
constraint (see Figure \ref{fig:realBgamma}). Additionally, estimates of $N_{\rm rest}^i$
have tighter constraints than with \textit{Mod~A}, yet they are still
broad (see Figure \ref{fig:realBNr}). However, redshift estimation is
still unconstrained, though the distributions are peaked at lower
redshifts with heavy tails (see Figure \ref{fig:realBz}).
Finally, \textit{Mod~C} is fit to the data. Table \ref{tab:realC} and
Figure \ref{fig:realCfits} show the results of the fits. Figures
\ref{fig:realCgamma} and \ref{fig:realCNr} show that both $\gamma^i$
and $N_{\rm rest}^i$ vary due to the loosening of hyper-prior constraints, but
$N_{\rm rest}$ is again loosely constrained. From the simulations, we expect
the redshift estimates to be unconstrained, and we find this in the
data as well (see Figure \ref{fig:realCz}).
Now, using the {\tt FITEXY} method to fit the four GRBs with known
redshift in the rest-frame, I find a calibration
$N_{\rm cal}=51.17 \pm 0.02$ (compared to $\mean{\mu_{N_{\rm rest}}} = 51.74$
with an HDI of $51.33 - 52.21$ for \textit{Mod~A}) with a
common $\gamma=1.72 \pm 0.02$ (compared to $\mean{\mu_\gamma }=1.49$
with an HDI of $1.35 - 1.67$ for \textit{Mod~A}). Table
\ref{tab:mleReal} reveals that while the method can estimate redshifts
precisely, known redshifts are reconstructed inaccurately. The
simulations in Section \ref{sec:simA} showed that the errors of this
method are likely underestimated.
Another option would be to use individual GRBs as calibration sources.
For each GRB with known redshift, I fit
its rest-frame GC to obtain $N_{\rm cal}$ and then proceed as before
using only one GRB as the calibration to estimate the known redshifts
of the other GRBs. Table \ref{tab:mleReal2} shows that the estimated
redshifts (columns) depend on which calibration GRB (rows) is
used. Regardless of which GRB is used as a calibration source, the
estimated redshifts are inaccurate. I conclude that the {\tt FITEXY}
method cannot be used to estimate redshifts.
\subsection{The Rest-Frame Golenetskii Relation}
I calculate the rest-frame Golenetskii relation for each model using
GRBs with known redshift following the procedure in Section
\ref{sec:simgol}. I exclude GRBs without known redshift due to the
inability to calculate rest-frame quantities when the redshift is
unconstrained as shown in Figure \ref{fig:golAll}. Figure
\ref{fig:realGol} shows the rest-frame correlation for all
models. There is little difference between the models'
predictions. Even in the models with hyper-priors, differences in
the individual $\gamma^i$ and $N_{\rm rest}^i$ are allowed. While the $\gamma^i$ do
appear tightly distributed, the $N_{\rm rest}^i$ can vary within an order of
magnitude.
\section{Discussion}
I have presented a hierarchical Bayesian model to test the ability of
the Golenetskii correlation (Equation \ref{eq:1}) to estimate the redshifts of
GRBs. The model incorporates all known variance in both the data and
assumptions and therefore provides a robust assessment of the claim
that accurate redshifts can be obtained. The model performs well on
simulated data. Since the data are generated under the assumptions of
the model this is expected, but fitting the simulations allows the
accuracy of the method to be assessed. The model is able to predict
simulated redshifts very accurately as long as the rest-frame
normalizations of the relations are all the same. If this assumption is
dropped, the model is unable to predict redshifts, though it can still
recover rest-frame properties of GRBs with known redshift.
The method of using {\tt FITEXY} underestimates the errors on
predicted redshifts. More importantly, the method predicts inaccurate
redshifts if there is not a common $N_{\rm rest}$ and $\gamma$ in the
rest-frame. Moreover, the inaccurate redshifts will have errors that
do not encompass the true redshift. Therefore, I find the method
unable to predict redshifts.
When I apply the three Bayesian models to the data, I find that the
redshift estimates are all unconstrained. Constrained estimates for
$N_{\rm rest}^i$ for GRBs with known redshift and $\gamma^i$ for all GRBs are
recovered. Upon examining the inferred rest-frame Golenetskii
relations for GRBs with known redshift, I find that $N_{\rm rest}^i$ are
tightly distributed but can vary within an order of magnitude. This is
most likely the reason that {\tt FITEXY} under-predicts the known
redshifts. To investigate this, I simulated 1000 GCs from
\textit{Mod~C} and fit them with {\tt FITEXY} as before. Then, the
distribution of the difference between the estimated and simulated
redshifts is examined as a function of the difference between $N_{\rm
cal}$ and the simulated $N_{\rm rest}$ (see Figure \ref{fig:ncal}). When
$N_{\rm cal}$ underestimates the true $N_{\rm rest}$, the redshift can be
under-predicted. As can be seen in Tables
\ref{tab:realA}-\ref{tab:realC}, $N_{\rm cal}$ always underestimates
$N_{\rm rest}$, and therefore the {\tt FITEXY} method under-predicts the known
redshifts in our sample.
This leads us to consider what happens when simulations from
\textit{Mod~C} (possibly resembling the real GRB sample) are fitted
with \textit{Mod~A}. We know from Section \ref{sec:simA} that
\textit{Mod~A} can accurately estimate redshifts if it represents the
true generative model of the data. Therefore, I simulate data from
\textit{Mod~C} and fit it with \textit{Mod~A}. Figures
\ref{fig:simCANr} and \ref{fig:simCAz} show that, like the real data
in Section \ref{sec:real}, \textit{Mod~A} cannot constrain $N_{\rm rest}$, and
subsequently redshift, when there is not a common $N_{\rm rest}$. Unfortunately,
\textit{Mod~C}, which is applicable to the data, cannot constrain
redshifts. This strongly suggests that there is not a universal
Golenetskii correlation in the rest-frame.
The fact that all GRBs \textit{do not} share a common $N_{\rm rest}$ is not
surprising. It is established that there are at least two observed
mechanisms occurring in GRBs: photospheric \citep{Ryde:2010} and
non-thermal emission \citep{Zhang:2015,Burgess:2015}. In both of these
cases, a variance in $N_{\rm rest}$ is expected, and I have shown that this
variance is not negligible. The choice of spectral model also plays a crucial role in
determining the slope and normalization of GCs. The results of
\citet{Burgess:2014,Iyyani:2015tv} where a physical synchrotron model
was used to fit the spectra of the same GRBs used in this study
produced GCs with different slopes than what is found here with
empirical spectral models. This is due to the different curvature of
these models around the $\nu F_{\nu}$ peak which results in different $E_{\rm peak}$'s
from the spectral fits.
Theoretical predictions for the rest-frame GC are in their
infancy. While weak predictions for $\gamma$ exist
\citep{Zhang:2002dr,Dermer:2004,Fan:2012,Bosnjak:2014}, exact
solutions have only begun to be formulated. This is due in part to the
stochastic nature of GRB lightcurves as well as the limited input
knowledge available for use in predicting the exact processes that
occur in GRB outflows. It is interesting to note that
\citet{Lopez-Camara:2014} predict that photospheric models can have
different normalizations in the rest-frame depending on viewing
angle. If physical spectra can be fit to the data corresponding to
such models, then the subsequently derived GCs could be used to predict
viewing angles of GRBs, which presents an interesting new tool to study
GRB prompt emission mechanisms.
\section{Conclusions}
I conclude that the Golenetskii correlation does not possess a common
$N_{\rm rest}$ which may have interesting implications for the outflow physics
of GRBs but precludes them from being used as redshift estimators
under the model posed herein. The Golenetskii correlation fails as a
standard candle without an additional predictor. This also precludes
the use of GCs to estimate cosmological parameters, as the broad HDIs
found in this study would be folded into the errors on cosmological
parameters, making their determination weak and unconstrained. GCs are
useful for discerning the physical emission mechanisms occurring in
GRBs and warrant dedicated study. A method similar to the one proposed
here can be used to study the observer- and/or rest-frame properties
of GCs provided the goal is to understand the properties intrinsic to
the GRB.
\section*{Acknowledgments}
I would like to thank Johannes Buchner, Michael Betancourt, and Bob
Carpenter for helpful discussion and direction on the probabilistic
model, {\tt Stan}, and font choice. Additionally, I would like to
thank Damien B\'{e}gu\'{e} and Felix Ryde for discussions on the
physics of GCs.
I additionally thank the Swedish National Infrastructure for Computing
(SNIC) and The PDC Center for High Performance Computing at the KTH
Royal Institute of Technology for computation time on the Tegn\'{e}r
cluster via grant PDC-2015-27.
This research made use of {\tt Astropy}, a community-developed core
Python package for Astronomy \citep{astropy}, as well as {\tt Matplotlib},
an open source Python graphics environment \citep{Hunter:2007} and
{\tt Seaborn} \citep{Waskom:2015} for plotting.
\bibliographystyle{mn2e}
\section{Introduction}
\label{sec:Int}
Let $\mbb{K}$ be a field of characteristic $0$.
By a {\em parameter algebra} over $\mbb{K}$ we mean a complete
local noetherian commutative $\mbb{K}$-algebra $R$, with maximal ideal $\mfrak{m}$, such
that $R / \mfrak{m} = \mbb{K}$. The important example is $R = \mbb{K}[[\hbar]]$, the ring of
formal power series in the variable $\hbar$.
Let $(R, \mfrak{m})$ be a parameter algebra, and let
$\mfrak{g} = \bigoplus_{i \in \mbb{Z}}\, \mfrak{g}^i$ be a DG
(differential graded) Lie algebra over $\mbb{K}$.
There is an induced pronilpotent DG Lie algebra
\[ \mfrak{m} \hatotimes \mfrak{g} = \bigoplus_{i \in \mbb{Z}}\, \mfrak{m} \hatotimes \mfrak{g}^i . \]
A solution $\omega \in \mfrak{m} \hatotimes \mfrak{g}^1$ of the Maurer-Cartan equation
\[ \d(\omega) + \smfrac{1}{2} [\omega, \omega] = 0 \]
is called an {\em MC element}. The set of MC elements is denoted by
$\operatorname{MC}(\mfrak{g}, R)$. There is an action of the {\em gauge group}
$\operatorname{exp}(\mfrak{m} \hatotimes \mfrak{g}^0)$
on the set $\operatorname{MC}(\mfrak{g}, R)$, and we write
\begin{equation}
\overline{\operatorname{MC}}(\mfrak{g}, R) := \operatorname{MC}(\mfrak{g}, R) / \operatorname{exp}(\mfrak{m} \hatotimes \mfrak{g}^0) ,
\end{equation}
the quotient set by this action.
The {\em Deligne groupoid} $\mbf{Del}(\mfrak{g}, R)$, introduced in \cite{GM}, is the
transformation groupoid associated to the action of the group
$\operatorname{exp}(\mfrak{m} \hatotimes \mfrak{g}^0)$ on the set $\operatorname{MC}(\mfrak{g}, R)$. So the set
$\pi_0 (\mbf{Del}(\mfrak{g}, R))$ of isomorphism classes of objects of this
groupoid equals $\overline{\operatorname{MC}}(\mfrak{g}, R)$.
In \cite[Theorem 2.4]{GM} it was proved that if $R$ is artinian, $\mfrak{g}$ and $\mfrak{h}$
are DG Lie algebras concentrated in the degree range $[0, \infty)$
(we refer to this as the ``nonnegative nilpotent case''), and
$\phi : \mfrak{g} \to \mfrak{h}$ is a DG Lie algebra quasi-isomorphism, then the induced
morphism of groupoids
\[ \mbf{Del}(\phi, R) : \mbf{Del}(\mfrak{g}, R) \to \mbf{Del}(\mfrak{h}, R) \]
is an equivalence.
We introduce the {\em reduced Deligne groupoid}
$\mbf{Del}^{\mrm{r}}(\mfrak{g}, R)$,
which is a certain quotient of the Deligne groupoid $\mbf{Del}(\mfrak{g}, R)$;
see Section \ref{sec:red-del} for details.
If the {\em Deligne 2-groupoid} $\mbf{Del}^{2}(\mfrak{g}, R)$ is defined
(see below), then
\begin{equation} \label{eqn:109}
\mbf{Del}^{\mrm{r}}(\mfrak{g}, R) =
\pi_1 (\mbf{Del}^{2}(\mfrak{g}, R)) .
\end{equation}
However the groupoid $\mbf{Del}^{\mrm{r}}(\mfrak{g}, R)$ always exists.
This new groupoid also has the property that
\[ \pi_0 (\mbf{Del}^{\mrm{r}}(\mfrak{g}, R)) = \overline{\operatorname{MC}}(\mfrak{g}, R) . \]
Here is the first main result of our paper. It is a generalization to the
unbounded pronilpotent case (i.e.\ the DG Lie algebras $\mfrak{g}$ and $\mfrak{h}$ can be
unbounded, and the parameter algebra $R$ doesn't have to be artinian) of
\cite[Theorem 2.4]{GM}. Our proof is similar to that of \cite[Theorem 2.4]{GM}:
we also use obstruction classes.
\begin{thm} \label{thm:4}
Let $R$ be a parameter algebra over $\mbb{K}$, and let $\phi : \mfrak{g} \to \mfrak{h}$ be a
DG Lie algebra quasi-isomorphism over $\mbb{K}$. Then the morphism of groupoids
\[ \mbf{Del}^{\mrm{r}}(\phi, R) : \mbf{Del}^{\mrm{r}}(\mfrak{g}, R)
\to \mbf{Del}^{\mrm{r}}(\mfrak{h}, R) \]
is an equivalence.
\end{thm}
This is Theorem \ref{thm:2} in the body of the paper.
A DG Lie algebra $\mfrak{g} = \bigoplus_{i \in \mbb{Z}}\, \mfrak{g}^i$ is said to be
of {\em quantum type} if $\mfrak{g}^i = 0$ for all $i < -1$. A DG Lie algebra
$\til{\mfrak{g}}$ is said to be of {\em quasi quantum type} if
there exists a DG Lie quasi-isomorphism $\til{\mfrak{g}} \to \mfrak{g}$, for some
quantum type DG Lie algebra $\mfrak{g}$.
Important examples of such DG Lie algebras are given in Example \ref{exa:102}.
In Section \ref{sec:del-2-grpd} we prove that if $R$ is artinian, or if
$\mfrak{g}$ is of quasi quantum type, then the {\em Deligne $2$-groupoid}
$\mbf{Del}^2(\mfrak{g}, R)$ exists. The original construction (see \cite{Ge}) applied
only to the case when $R$ is artinian and $\mfrak{g}$ is of quantum type.
Here is the second main result of this paper (repeated as Theorem
\ref{thm:105}):
\begin{thm} \label{thm:107}
Let $R$ be a parameter algebra, let $\mfrak{g}$ and $\mfrak{h}$ be
DG Lie algebras, and let $\phi : \mfrak{g} \to \mfrak{h}$ be a DG Lie algebra
quasi-isomorphism. Assume either of these two conditions holds\tup{:}
\begin{itemize}
\rmitem{i} $R$ is artinian.
\rmitem{ii} $\mfrak{g}$ and $\mfrak{h}$ are of quasi quantum type.
\end{itemize}
Then the morphism of $2$-groupoids
\[ \mbf{Del}^2(\phi, R) : \mbf{Del}^2(\mfrak{g}, R) \to \mbf{Del}^2(\mfrak{h}, R) \]
is a weak equivalence.
\end{thm}
The proof of Theorem \ref{thm:107} relies on Theorem \ref{thm:4}, via formula
(\ref{eqn:109}).
Theorem \ref{thm:107} plays a crucial role in our new proof of {\em twisted
deformation quantization}, in the revised version of \cite{Ye4}.
An {\em $\mrm{L}_{\infty}$ morphism} $\Phi : \mfrak{g} \to \mfrak{h}$ is a sequence
$\Phi = \{ \phi_j \}_{j \geq 1}$ of $\mbb{K}$-linear functions
\[ \phi_j : {\textstyle \bigwedge}^j \mfrak{g} \to \mfrak{h} \]
that generalizes the notion of DG Lie algebra homomorphism
$\phi : \mfrak{g} \to \mfrak{h}$. Thus $\phi_1 : \mfrak{g} \to \mfrak{h}$ is a
DG Lie algebra homomorphism, up to a homotopy given by $\phi_2$; and so on.
See Section \ref{sec:L-infty} for details.
The morphism $\Phi$ is called an {\em $\mrm{L}_{\infty}$ quasi-isomorphism}
if $\phi_1 : \mfrak{g} \to \mfrak{h}$ is a quasi-isomorphism.
The concept of $\mrm{L}_{\infty}$ morphism gained
prominence after the Kontsevich Formality Theorem from 1997 (see
\cite{Ko2}).
An $\mrm{L}_{\infty}$ morphism $\Phi : \mfrak{g} \to \mfrak{h}$ induces an $R$-multilinear
$\mrm{L}_{\infty}$ morphism
\[ \Phi_R = \{ \phi_{R, j} \}_{j \geq 1} : \mfrak{m} \hatotimes \mfrak{g} \to \mfrak{m} \hatotimes \mfrak{h} . \]
Given an element $\omega \in \mfrak{m} \hatotimes \mfrak{g}^1$ we write
\begin{equation}
\operatorname{MC}(\Phi, R) (\omega) :=
\sum_{j \geq 1} \, \smfrac{1}{j!} \phi_{R, j}
(\underset{j}{\underbrace{\omega, \ldots, \omega}}) \in \mfrak{m} \hatotimes \mfrak{h}^1 .
\end{equation}
This sum converges in the $\mfrak{m}$-adic topology of $\mfrak{m} \hatotimes \mfrak{h}^1$.
It is known that the function $\operatorname{MC}(\Phi, R)$ sends MC elements to MC
elements, and it respects gauge equivalence (see Propositions \ref{prop:4}
and \ref{prop:5}).
So there is an induced function
$\overline{\operatorname{MC}}(\Phi, R)$ from
$\overline{\operatorname{MC}}(\mfrak{g}, R)$ to $\overline{\operatorname{MC}}(\mfrak{h}, R).$
Here is the third main result of our paper.
\begin{thm} \label{thm:1}
Let $\mfrak{g}$ and $\mfrak{h}$ be DG Lie algebras, let $R$ be a parameter algebra, and let
$\Phi : \mfrak{g} \to \mfrak{h}$ be an $\mrm{L}_{\infty}$ quasi-isomorphism, all over the
field $\mbb{K}$. Then the function
\[ \overline{\operatorname{MC}}(\Phi, R) : \overline{\operatorname{MC}}(\mfrak{g}, R) \to \overline{\operatorname{MC}}(\mfrak{h}, R) \]
is bijective.
\end{thm}
This is Theorem \ref{thm:3} in the body of the paper. We emphasize
that this result is in the unbounded pronilpotent case.
The proof of Theorem \ref{thm:1} goes like this: we use the bar-cobar
construction to reduce to the case of a DG Lie algebra quasi-isomorphism
$\til{\Phi} : \til{\mfrak{g}} \to \til{\mfrak{h}}$; and
then we use Theorem \ref{thm:4}.
Theorem \ref{thm:1} was known in the nilpotent case; see
\cite[Theorem 4.6]{Ko2}, and \cite[Theorem 3.6.2]{CKTB}.
The proof sketched in \cite[Section 4.5]{Ko2} relies on the structure
of $\mrm{L}_{\infty}$ algebras. The proof in
\cite[Section 3.7]{CKTB} relies on the work of Hinich on the Quillen model
structure of coalgebras. It is not clear whether these methods work also in the
pronilpotent case.
In this paper we only consider pronilpotent DG Lie algebras of the form
$\mfrak{m} \hatotimes \mfrak{g}$.
Presumably Theorems \ref{thm:4}, \ref{thm:107} and \ref{thm:1} can be extended
to a more general setup -- see Remark \ref{rem:11}.
\medskip \noindent
{\bf Acknowledgments.}
I wish to thank James Stasheff, Vladimir Hinich, Michel Van den Bergh,
William Goldman, Oren Ben Bassat, Marco Manetti and Ronald Brown
for useful conversations.
Thanks also to the referee for reading the paper carefully and providing
several constructive remarks.
\section{Some Facts about DG Lie Algebras}
\label{sec:facts}
Let $\mbb{K}$ be a field of characteristic $0$.
Given $\mbb{K}$-modules $V, W$ we write $V \otimes W$ and
$\operatorname{Hom}(V, W)$ instead of $V \otimes_{\mbb{K}} W$ and $\operatorname{Hom}_{\mbb{K}}(V, W)$,
respectively.
\begin{dfn}
By a {\em parameter algebra} over $\mbb{K}$ we mean a complete noetherian local
commutative $\mbb{K}$-algebra $R$, with maximal ideal $\mfrak{m}$, such that $R / \mfrak{m} = \mbb{K}$.
We call $\mfrak{m}$ a {\em parameter ideal}.
\end{dfn}
The most important example is of course
$R = \mbb{K}[[\hbar]]$, where $\hbar$ is a variable, called the deformation
parameter.
Note that $R = \mbb{K} \oplus \mfrak{m}$, so the ring $R$ can be recovered from the
nonunital algebra $\mfrak{m}$. For any $j \in \mbb{N}$ let $R_j := R / \mfrak{m}^{j+1}$ and
$\mfrak{m}_j := \mfrak{m} / \mfrak{m}^{j+1}$. So $R_0 = \mbb{K}$, each $R_j$ is an artinian local ring
with maximal ideal $\mfrak{m}_j$,
$R \cong \lim_{\leftarrow j} R_j$,
and
$\mfrak{m} \cong \lim_{\leftarrow j} \mfrak{m}_j$.
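For example, for $R = \mbb{K}[[\hbar]]$ one has
$R_j = \mbb{K}[\hbar] / (\hbar^{j+1})$ and $\mfrak{m}_j = (\hbar) / (\hbar^{j+1})$.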
Let us fix a parameter algebra $(R, \mfrak{m})$.
Given an $R$-module $M$, its {\em $\mfrak{m}$-adic completion} is
\[ \what{M} := \lim_{\leftarrow j}\, (R_j \otimes_R M) . \]
The module $M$ is called {\em $\mfrak{m}$-adically complete} if the canonical
homomorphism $M \to \what{M}$ is bijective. (Some texts would say that $M$ is
complete and separated.) Since $R$ is noetherian, the $\mfrak{m}$-adic completion
$\what{M}$ of any $R$-module $M$ is $\mfrak{m}$-adically complete;
see \cite[Corollary 3.5]{Ye5}.
Given an $R$-module $M$ and a $\mbb{K}$-module $V$ let us write
\begin{equation} \label{eqn:52}
M \hatotimes V := \lim_{\leftarrow j}\, (R_j \otimes_R (M \otimes V)) ,
\end{equation}
namely $M \hatotimes V$ is the $\mfrak{m}$-adic completion of the $R$-module $M \otimes V$.
If $W$ is another $\mbb{K}$-module, then there is a unique $R$-module isomorphism
\[ M \hatotimes (V \otimes W) \cong (M \hatotimes V) \hatotimes W \]
that commutes with the canonical homomorphisms from
$M \otimes V \otimes W$; hence we shall simply denote this complete $R$-module by
$M \hatotimes V \hatotimes W$.
Let $\mfrak{g} = \bigoplus\nolimits_{i \in \mbb{Z}} \mfrak{g}^i$ be a DG Lie algebra over $\mbb{K}$.
(There is no finiteness assumption on $\mfrak{g}$.)
For any $i$ let $\mfrak{m} \, \what{\otimes} \, \mfrak{g}^i$ be the complete tensor product
as in (\ref{eqn:52}).
We get a pronilpotent $R$-linear DG Lie algebra
\begin{equation} \label{eqn:64}
\mfrak{m} \, \what{\otimes} \, \mfrak{g} := \bigoplus_{i \in \mbb{Z}}\, \mfrak{m} \, \what{\otimes} \, \mfrak{g}^i ,
\end{equation}
with differential $\d$ and graded Lie bracket $[-,-]$ induced from $\mfrak{g}$.
Recall that the Maurer-Cartan equation in $\mfrak{m} \, \what{\otimes} \, \mfrak{g}$ is
\begin{equation}
\d(\omega) + \smfrac{1}{2} [\omega, \omega] = 0
\end{equation}
for $\omega \in \mfrak{m} \, \what{\otimes} \, \mfrak{g}^1$.
A solution of this equation is called an {\em MC element}
of $\mfrak{m} \, \what{\otimes} \, \mfrak{g}$. The set of MC elements is denoted by
$\operatorname{MC}(\mfrak{m} \, \what{\otimes} \, \mfrak{g})$.
In degree $0$ we have a pronilpotent Lie algebra $\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0$,
so there is an associated pronilpotent group
$\exp(\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0)$, called the {\em gauge group}, and
a bijective function
\[ \exp : \mfrak{m} \hatotimes \mfrak{g}^0 \to \exp(\mfrak{m} \hatotimes \mfrak{g}^0) \]
called the exponential map.
An element $\gamma \in \mfrak{m} \, \what{\otimes} \, \mfrak{g}^0$ acts on $\mfrak{m} \, \what{\otimes} \, \mfrak{g}$ by the
derivation
\begin{equation}
\operatorname{ad}(\gamma)(\alpha) := [\gamma, \alpha] .
\end{equation}
We view $\operatorname{ad}(\gamma)$ as an element of $R \hatotimes \operatorname{End}(\mfrak{g})^0$.
Let $g := \exp(\gamma) \in \exp(\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0)$, and define
\begin{equation} \label{eqn:11}
\operatorname{Ad}(g) := \exp (\operatorname{ad}(\gamma))
= \sum_{i \geq 0} \, \smfrac{1}{i!} \operatorname{ad}(\gamma)^i
\in R \hatotimes \operatorname{End}(\mfrak{g})^0
\end{equation}
(this series converges in the $\mfrak{m}$-adic topology).
The element $\operatorname{Ad}(g)$ is an $R$-linear automorphism of the graded Lie
algebra $\mfrak{m} \, \what{\otimes} \, \mfrak{g}$. There is an induced group automorphism
$\operatorname{Ad}(g)$ of the group $\exp(\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0)$, and this automorphism
satisfies the equation
\begin{equation} \label{eqn:10}
\operatorname{Ad}(g)(h) = g \circ h \circ g^{-1}
\end{equation}
for all $h \in \exp(\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0)$.
There is another action of $\gamma \in \mfrak{m} \, \what{\otimes} \, \mfrak{g}^0$ on
$\omega \in \mfrak{m} \, \what{\otimes} \, \mfrak{g}^{1}$:
\begin{equation}
\operatorname{af}(\gamma)(\omega) := [\gamma, \omega] - \d(\gamma) =
\operatorname{ad}(\gamma)(\omega) - \d(\gamma) .
\end{equation}
This is an affine action, namely
\[ \operatorname{af}(\gamma) \in R \hatotimes \bigl( \operatorname{End}(\mfrak{g}^1) \ltimes \mfrak{g}^1 \bigr) . \]
Consider the elements
$g := \exp(\gamma) \in \exp(\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0)$
and
\begin{equation}
\operatorname{Af}(g) := \exp(\operatorname{af}(\gamma)) =
\sum_{i \geq 0} \, \smfrac{1}{i!} \operatorname{af}(\gamma)^i
\in R \hatotimes \bigl( \operatorname{End}(\mfrak{g}^1) \ltimes \mfrak{g}^1 \bigr) .
\end{equation}
(The series above converges in the $\mfrak{m}$-adic topology.)
We get an affine action $\operatorname{Af}$ of the group $\exp(\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0)$
on the $R$-module $\mfrak{m} \, \what{\otimes} \, \mfrak{g}^{1}$.
For $\omega \in \mfrak{m} \, \what{\otimes} \, \mfrak{g}^1$ this becomes
\[ \operatorname{Af}(g)(\omega) =
\exp(\operatorname{ad}(\gamma))(\omega) +
\frac{ 1 - \exp(\operatorname{ad}(\gamma)) }{ \operatorname{ad}(\gamma) } (\d(\gamma)) . \]
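For orientation, expanding the exponential to first order in $\gamma$
gives
\[ \operatorname{Af}(g)(\omega) = \omega + [\gamma, \omega] - \d(\gamma) + O(\gamma^2) , \]
recovering the infinitesimal action $\operatorname{af}(\gamma)$.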
The arguments in \cite[Section 1.3]{GM} (which refer to the case when
the $\mfrak{g}^i$ are all finite dimensional over $\mbb{K}$, $\mfrak{g}^i = 0$ for $i < 0$, and $R$ is
artinian) are valid also in our unbounded, pronilpotent case (cf.\ \cite[Section 2.2]{Ge}),
and they show that $\operatorname{Af}(g)$ preserves the set
$\operatorname{MC}(\mfrak{m} \, \what{\otimes} \, \mfrak{g})$.
(This can be proved also using the method of Lemma \ref{lem:3}.)
We write
\begin{equation}
\overline{\operatorname{MC}}(\mfrak{m} \, \what{\otimes} \, \mfrak{g}) :=
\frac{ \operatorname{MC}(\mfrak{m} \, \what{\otimes} \, \mfrak{g}) }{
\operatorname{exp}(\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0) } \ ,
\end{equation}
the quotient set by this action.
Given a homomorphism of DG Lie algebras
$\phi : \mfrak{g} \to \mfrak{h}$, and homomorphism of parameter algebras
$f : (R, \mfrak{m}) \to (S, \mfrak{n})$, there is an induced function
\begin{equation}
\overline{\operatorname{MC}}(\phi \otimes f) :
\overline{\operatorname{MC}}(\mfrak{m} \, \what{\otimes} \, \mfrak{g}) \to
\overline{\operatorname{MC}}(\mfrak{n} \, \what{\otimes} \, \mfrak{h}) .
\end{equation}
We shall need the following sort of algebraic differential calculus
(which is used a lot implicitly in deformation theory).
Let $\mbb{K}[t]$ be the polynomial algebra in a variable $t$, and let $M$ be a
$\mbb{K}$-module. A polynomial
$f(t) \in \mbb{K}[t] \otimes M$ defines a
function $f : \mbb{K} \to M$, namely for any $\lambda \in \mbb{K}$ the element
$f(\lambda) \in M$ is gotten by substitution $t \mapsto \lambda$.
We refer to $f : \mbb{K} \to M$ as a polynomial function, or as
a polynomial path in $M$.
Let $\mbb{K}[\epsilon] := \mbb{K}[t] / (t^2)$, where $\epsilon$ is the class of $t$.
Given $f(t) \in \mbb{K}[t] \otimes M$ and $\lambda \in \mbb{K}$ we denote by
$f(\lambda + \epsilon) \in \mbb{K}[\epsilon] \otimes M$ the result of the substitution
$t \mapsto \lambda + \epsilon$.
\begin{lem} \label{lem:7}
Let $f(t) \in \mbb{K}[t] \otimes M$. If
\[ f(\lambda + \epsilon) = f(\lambda) \]
in $\mbb{K}[\epsilon] \otimes M$ for all $\lambda \in \mbb{K}$, then $f(\lambda) = f(0)$
for all $\lambda \in \mbb{K}$.
\end{lem}
\begin{proof}
Writing $f(\lambda + \epsilon) = f(\lambda) + \epsilon \cdot f'(\lambda)$, the hypothesis
says that the derivative $f'(t) \in \mbb{K}[t] \otimes M$ vanishes at every
$\lambda \in \mbb{K}$. Since $\mbb{K}$ has characteristic $0$, it is infinite, so
$f'(t) = 0$ as a polynomial, and hence $f$ is constant.
\end{proof}
Given an element $\omega \in \operatorname{MC}(\mfrak{m} \, \what{\otimes} \, \mfrak{g})$, consider the
$R$-linear operator (of degree $1$)
\begin{equation} \label{eqn:17}
\d_{\omega} := \d + \operatorname{ad}(\omega)
\end{equation}
on $\mfrak{m} \, \what{\otimes} \, \mfrak{g}$.
\begin{lem} \label{lem:3}
Let $\omega, \omega' \in \operatorname{MC}(\mfrak{m} \, \what{\otimes} \, \mfrak{g})$,
and let $g \in \operatorname{exp}(\mfrak{m} \otimes \mfrak{g}^0)$ be such that
$\omega' = \operatorname{Af}(g)(\omega)$.
Then for any $i \in \mbb{Z}$ the diagram of $R$-modules
\[ \UseTips \xymatrix @C=10ex @R=6ex {
\mfrak{m} \, \what{\otimes} \, \mfrak{g}^{i}
\ar[r]^{\operatorname{Ad}(g)}
\ar[d]_{\d_{\omega}}
&
\mfrak{m} \, \what{\otimes} \, \mfrak{g}^{i}
\ar[d]^{\d_{\omega'}}
\\
\mfrak{m} \, \what{\otimes} \, \mfrak{g}^{i+1}
\ar[r]^{\operatorname{Ad}(g)}
&
\mfrak{m} \, \what{\otimes} \, \mfrak{g}^{i+1}
} \]
is commutative.
\end{lem}
\begin{proof}
Since $\mfrak{m} \, \what{\otimes} \, \mfrak{g}^{i}$ and $\mfrak{m} \, \what{\otimes} \, \mfrak{g}^{i+1}$ are
$\mfrak{m}$-adically complete $R$-modules, it suffices to verify this after replacing
$R$ with $R_j$. Therefore we can assume that $R$ is artinian.
Consider the DG Lie algebra
$\mbb{K}[t] \otimes \mfrak{m} \otimes \mfrak{g}$.
Let $\gamma := \log(g) \in \mfrak{m} \otimes \mfrak{g}^0$, and define
\[ g(t) := \exp(t \gamma) \in \exp(\mbb{K}[t] \otimes \mfrak{m} \otimes \mfrak{g}^0)
\subset \mbb{K}[t] \otimes R \otimes \operatorname{End}(\mfrak{g}^0) . \]
So $g(0) = 1$ and $g(1) = g$. Next let
\[ \omega(t) := \operatorname{Af}(g(t))(\omega) \in \mbb{K}[t] \otimes \mfrak{m} \otimes \mfrak{g}^1 , \]
which is an MC element of $\mbb{K}[t] \otimes \mfrak{m} \otimes \mfrak{g}$, and it
satisfies $\omega(0) = \omega$ and $\omega(1) = \omega'$. Consider the polynomial
\[ f(t) := \operatorname{Ad}(g(1-t)) \circ \d_{\omega(t)} \circ \operatorname{Ad}(g(t)) \in
\mbb{K}[t] \otimes R \otimes \operatorname{Hom}(\mfrak{g}^{i}, \mfrak{g}^{i+1}) . \]
It satisfies
\[ f(0) = \operatorname{Ad}(g) \circ \d_{\omega} \]
and
\[ f(1) = \d_{\omega'} \circ \operatorname{Ad}(g) . \]
We will prove that $f$ is constant.
See diagram below depicting $f(\lambda)$, $\lambda \in \mbb{K}$.
\[ \UseTips \xymatrix @C=10ex @R=7ex {
\mfrak{m} \otimes \mfrak{g}^{i}
\ar[r]^{\operatorname{Ad}(g(\lambda))}
\ar@{-->}[d]
&
\mfrak{m} \otimes \mfrak{g}^{i}
\ar[d]^{\d_{\omega(\lambda)}}
\ar@{-->}[r]
&
\mfrak{m} \otimes \mfrak{g}^{i}
\ar@{-->}[d]
\\
\mfrak{m} \otimes \mfrak{g}^{i+1}
\ar@{-->}[r]
&
\mfrak{m} \otimes \mfrak{g}^{i+1}
\ar[r]^{\operatorname{Ad}(g(1 - \lambda))}
&
\mfrak{m} \otimes \mfrak{g}^{i+1}
} \]
\medskip
Take any $\lambda \in \mbb{K}$. Then
\[ \begin{aligned}
& f(\lambda +\epsilon) - f(\lambda) = \\
& \qquad \operatorname{Ad}(g(1 - \epsilon - \lambda)) \circ
\bigl( \d_{\omega(\lambda + \epsilon)} \circ \operatorname{Ad}(g(\epsilon)) -
\operatorname{Ad}(g(\epsilon)) \circ \d_{\omega(\lambda)} \bigr) \circ \operatorname{Ad}(g(\lambda))
\end{aligned} \]
in $\mbb{K}[\epsilon] \otimes R \otimes \operatorname{Hom}(\mfrak{g}^{i}, \mfrak{g}^{i+1})$.
A calculation shows that
\[ \operatorname{Ad}(g(\epsilon)) = 1 + \epsilon \cdot \operatorname{ad}(\gamma) \in
\mbb{K}[\epsilon] \otimes R \otimes \operatorname{End}(\mfrak{g}) \]
and
\[ \omega(\lambda + \epsilon) = \omega(\lambda) + \epsilon \cdot [\gamma, \omega(\lambda)] -
\epsilon \cdot \d(\gamma) \in \mbb{K}[\epsilon] \otimes \mfrak{m} \otimes \mfrak{g}^1 . \]
Hence for any
$\alpha \in R \otimes \mfrak{g}^{i}$ we have
\[
\begin{aligned}
& \bigl( \d_{\omega(\lambda + \epsilon)} \circ \operatorname{Ad}(g(\epsilon)) \bigr)(\alpha) = \\
& \qquad \d(\alpha) + \epsilon \cdot \d([\gamma, \alpha])
+ [\omega(\lambda + \epsilon), \alpha] + \epsilon [ \omega(\lambda + \epsilon), [\gamma, \alpha]]
\end{aligned} \]
and
\[ \begin{aligned}
& \bigl( \operatorname{Ad}(g(\epsilon)) \circ \d_{\omega(\lambda)} \bigr)(\alpha) = \\
& \qquad \d(\alpha) + [\omega(\lambda), \alpha] + \epsilon [\gamma, \d(\alpha)] +
\epsilon [\gamma, [\omega(\lambda), \alpha ]] \ .
\end{aligned} \]
After expanding terms and using the graded Jacobi identity we see that
\[ \bigl( \d_{\omega(\lambda + \epsilon)} \circ \operatorname{Ad}(g(\epsilon)) \bigr)(\alpha) =
\bigl( \operatorname{Ad}(g(\epsilon)) \circ \d_{\omega(\lambda)} \bigr)(\alpha) \]
in $\mbb{K}[\epsilon] \otimes R \otimes \mfrak{g}^{i+1}$.
Therefore $f(\lambda +\epsilon) = f(\lambda)$. By Lemma \ref{lem:7} we conclude that
$f$ is constant.
\end{proof}
\begin{prop} \label{prop:100}
\begin{enumerate}
\item Let $\omega \in \operatorname{MC}(\mfrak{m} \hatotimes \mfrak{g})$. Then
$\d_{\omega}$ is a degree $1$ derivation of the graded Lie algebra $\mfrak{m} \hatotimes \mfrak{g}$,
and $\d_{\omega} \circ \d_{\omega} = 0$. We obtain a new DG Lie algebra
$(\mfrak{m} \hatotimes \mfrak{g})_{\omega}$, with the same Lie bracket $[-,-]$, and a new differential
$\d_{\omega}$.
\item Let $g \in \operatorname{exp}(\mfrak{m} \hatotimes \mfrak{g}^0)$, and let
$\omega' := \operatorname{Af}(g)(\omega) \in \operatorname{MC}(\mfrak{m} \hatotimes \mfrak{g})$. Then
\[ \operatorname{Ad}(g) : (\mfrak{m} \hatotimes \mfrak{g})_{\omega} \to (\mfrak{m} \hatotimes \mfrak{g})_{\omega'} \]
is an isomorphism of DG Lie algebras.
\end{enumerate}
\end{prop}
\begin{proof}
(1) This is well known (and very easy to check).
\medskip \noindent
(2) Let $\gamma := \log(g) \in \mfrak{m} \hatotimes \mfrak{g}^0$.
Since $\operatorname{ad}(\gamma)$ is a derivation of the graded Lie algebra
$\mfrak{m} \hatotimes \mfrak{g}$, it follows that
$\operatorname{Ad}(g) = \exp(\operatorname{ad}(\gamma))$ is an automorphism of
$\mfrak{m} \hatotimes \mfrak{g}$. By Lemma \ref{lem:3} this automorphism intertwines $\d_{\omega}$
and $\d_{\omega'}$.
\end{proof}
\section{The Reduced Deligne Groupoid}
\label{sec:red-del}
As before, $\mbb{K}$ is a field of characteristic $0$,
$(R, \mfrak{m})$ is a parameter algebra over $\mbb{K}$, and
$\mfrak{g} = \bigoplus\nolimits_{i \in \mbb{Z}} \mfrak{g}^i$ is a DG Lie algebra over $\mbb{K}$.
Let us write
\begin{equation} \label{eqn:44}
\operatorname{MC}(\mfrak{g}, R) := \operatorname{MC}(\mfrak{m} \, \what{\otimes} \, \mfrak{g})
\end{equation}
and
\begin{equation} \label{eqn:45}
\operatorname{G}(\mfrak{g}, R) := \exp(\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0) .
\end{equation}
Given $\omega, \omega' \in \operatorname{MC}(\mfrak{g}, R)$, let
\begin{equation} \label{eqn:43}
\operatorname{G}(\mfrak{g}, R)(\omega, \omega') :=
\{ g \in \operatorname{G}(\mfrak{g}, R) \mid \operatorname{Af}(g)(\omega) = \omega' \} .
\end{equation}
As in \cite{GM} we define the {\em Deligne groupoid}
$\mbf{Del}(\mfrak{g}, R)$ to be the transformation groupoid
associated to the action of the gauge group $\operatorname{G}(\mfrak{g}, R)$ on
the set $\operatorname{MC}(\mfrak{g}, R)$.
So the set of objects of $\mbf{Del}(\mfrak{g}, R)$ is $\operatorname{MC}(\mfrak{g}, R)$, and the set
of morphisms $\omega \to \omega'$ in this groupoid is
$\operatorname{G}(\mfrak{g}, R)(\omega, \omega')$. Identity morphisms and composition in the
groupoid are those of the group $\operatorname{G}(\mfrak{g}, R)$.
Now suppose $\omega, \omega' \in \operatorname{MC}(\mfrak{g}, R)$ and
$g \in \operatorname{G}(\mfrak{g}, R)(\omega, \omega')$.
Since
\[ \operatorname{Af}(g \circ h \circ g^{-1}) =
\operatorname{Af}(g) \circ \operatorname{Af}(h) \circ \operatorname{Af}(g)^{-1} \]
for any $h$, and in view of (\ref{eqn:10}), there is a group isomorphism
\begin{equation}
\operatorname{Ad}(g) : \operatorname{G}(\mfrak{g}, R)(\omega, \omega) \to \operatorname{G}(\mfrak{g}, R)(\omega', \omega') .
\end{equation}
Given $\omega \in \operatorname{MC}(\mfrak{g}, R)$ there is the derivation
$\d_{\omega}$ of formula (\ref{eqn:17}). Let us define
\begin{equation} \label{eqn:14}
\a^{\mrm{r}}_{\omega} := \operatorname{Im} \bigl( \d_{\omega} :
\mfrak{m} \, \what{\otimes} \, \mfrak{g}^{-1} \to \mfrak{m} \, \what{\otimes} \, \mfrak{g}^0 \bigr)
\end{equation}
and
\begin{equation} \label{eqn:15}
(\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0)(\omega) := \operatorname{Ker} \bigl( \d_{\omega} :
\mfrak{m} \, \what{\otimes} \, \mfrak{g}^{0} \to \mfrak{m} \, \what{\otimes} \, \mfrak{g}^1 \bigr) .
\end{equation}
These are $R$-submodules of $\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0$.
\begin{lem} \label{lem:4}
Let $\omega \in \operatorname{MC}(\mfrak{g}, R)$.
Consider the bijection of sets
\[ \exp : \mfrak{m} \, \what{\otimes} \, \mfrak{g}^0 \xrightarrow{\simeq} \exp(\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0) =
\operatorname{G}(\mfrak{g}, R) . \]
\begin{enumerate}
\item The module
$(\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0)(\omega)$ is a Lie subalgebra of
$\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0$, and
\[ \exp \bigl( (\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0)(\omega) \bigr) =
\operatorname{G}(\mfrak{g}, R)(\omega, \omega) \]
as subsets of $\operatorname{G}(\mfrak{g}, R)$.
\item The module $\a^{\mrm{r}}_{\omega}$ is a Lie ideal of the Lie algebra
$(\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0)(\omega)$, and the subset
\[ N^{\mrm{r}}_{\omega} = N^{\mrm{r}}(\mfrak{g}, R)_{\omega} :=
\exp(\a^{\mrm{r}}_{\omega}) \]
is a normal subgroup of $\operatorname{G}(\mfrak{g}, R)(\omega, \omega)$.
\item Let $g \in \operatorname{G}(\mfrak{g}, R)$ and
$\omega' := \operatorname{Af}(g)(\omega)$. Then
\[ \operatorname{Ad}(g) \bigl( N^{\mrm{r}}_{\omega} \bigr) = N^{\mrm{r}}_{\omega'} . \]
\end{enumerate}
\end{lem}
\begin{proof}
(1) Since $\d_{\omega}$ is a graded derivation of the graded Lie
algebra $\mfrak{m} \, \what{\otimes} \, \mfrak{g}$, its kernel is a graded Lie subalgebra, and its image
is a graded Lie ideal in the kernel. In degree $0$ we get a Lie subalgebra
$(\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0)(\omega)$, and a Lie ideal $\a^{\mrm{r}}_{\omega}$ in it.
Because $\d_{\omega}$ is a continuous homomorphism between complete $R$-modules,
its kernel $(\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0)(\omega)$ is closed; so this is a closed Lie
subalgebra of $\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0$. This implies that the subset
$\exp \bigl( (\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0)(\omega) \bigr)$ is a closed subgroup of
$\operatorname{G}(\mfrak{g}, R)$.
Moreover, let $\gamma \in \mfrak{m} \hatotimes \mfrak{g}^0$ and $g := \exp(\gamma)$.
In the proof of \cite[Theorem 2.2]{Ge} it is shown that
$\d_{\omega}(\gamma) = 0$ iff
$\operatorname{Af}(g)(\omega) = \omega$. This shows that
\[ \exp \bigl( (\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0)(\omega) \bigr) =
\operatorname{G}(\mfrak{g}, R)(\omega, \omega) . \]
\medskip \noindent
(2) We already know that $\a^{\mrm{r}}_{\omega}$ is a Lie ideal of
$(\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0)(\omega)$; but since this is not a closed ideal in general,
it is not immediate that the subset $\exp(\a^{\mrm{r}}_{\omega})$ is a normal
subgroup of
$\exp \bigl( (\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0)(\omega) \bigr)$.
Consider the CBH series
\[ F(x_1, x_2) = \sum_{j \geq 1}\, F_j(x_1, x_2) , \]
where $F_j(x_1, x_2)$ are homogeneous elements of degree $j$ in the
free Lie algebra in the variables $x_1, x_2$ over $\mbb{Q}$.
It is known (cf.\ \cite{Bo}) that
\begin{equation} \label{eqn:53}
\exp(\gamma_1) \cdot \exp(\gamma_2) = \exp(F(\gamma_1, \gamma_2))
\end{equation}
for $\gamma_1, \gamma_2 \in \mfrak{m} \hatotimes \mfrak{g}^0$.
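Explicitly, the terms of degree $\leq 3$ are
\[ F(x_1, x_2) = x_1 + x_2 + \smfrac{1}{2} [x_1, x_2] +
\smfrac{1}{12} [x_1, [x_1, x_2]] + \smfrac{1}{12} [x_2, [x_2, x_1]] + \cdots . \]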
Let us define a bracket $[-,-]_{\omega}$ on $\mfrak{m} \hatotimes \mfrak{g}^{-1}$ as follows:
\[ [\alpha_1, \alpha_2]_{\omega} := [ \d_{\omega}(\alpha_1), \alpha_2] . \]
In general this is not a Lie bracket (the Jacobi identity may fail). However,
since $\d_{\omega}$ is a square zero derivation of
the graded Lie algebra $\mfrak{m} \hatotimes \mfrak{g}$, we have
\[ \d_{\omega}([\alpha_1, \alpha_2]_{\omega}) = [\d_{\omega}(\alpha_1), \d_{\omega}(\alpha_2)] . \]
For any $j \geq 1$ and $\alpha_1, \alpha_2 \in \mfrak{m} \hatotimes \mfrak{g}^{-1}$ consider
the element
$F_{j, \omega}(\alpha_1, \alpha_2) \in \mfrak{m} \hatotimes \mfrak{g}^{-1}$
gotten by evaluating the Lie polynomial $F_j(x_1, x_2)$
at $x_i \mapsto \alpha_i$, using the bracket $[-,-]_{\omega}$.
Now take any $\gamma_1, \gamma_2 \in \a^{\mrm{r}}_{\omega}$, and choose
$\alpha_1, \alpha_2 \in \mfrak{m} \hatotimes \mfrak{g}^{-1}$ such that $\gamma_i = \d_{\omega}(\alpha_i)$. Then
\[ \d_{\omega}(F_{j, \omega}(\alpha_1, \alpha_2)) =
F_j(\gamma_1, \gamma_2) \in \mfrak{m} \hatotimes \mfrak{g}^0 . \]
Let
\[ \alpha := \sum\nolimits_{j \geq 1}\, F_{j, \omega}(\alpha_1, \alpha_2) \in
\mfrak{m} \hatotimes \mfrak{g}^{-1} \]
and $\gamma := \d_{\omega} (\alpha) \in \a^{\mrm{r}}_{\omega}$.
By continuity we get
$F(\gamma_1, \gamma_2) = \gamma$.
From this and formula (\ref{eqn:53}) we see that
$N^{\mrm{r}}_{\omega} = \exp(\a^{\mrm{r}}_{\omega})$ is a
subgroup of $\operatorname{G}(\mfrak{g}, R) = \exp (\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0)$.
Similarly, Lemma \ref{lem:3} implies that the subset
$\exp(\a^{\mrm{r}}_{\omega})$ is
invariant under the operations $\operatorname{Ad}(g)$,
$g \in \operatorname{G}(\mfrak{g}, R)(\omega, \omega)$. Therefore $N^{\mrm{r}}_{\omega}$ is a
normal subgroup of $\operatorname{G}(\mfrak{g}, R)(\omega, \omega)$.
\medskip \noindent
(3) According to Lemma \ref{lem:3} we know that
$\operatorname{Ad}(g)(\a^{\mrm{r}}_{\omega}) = \a^{\mrm{r}}_{\omega'}$.
\end{proof}
Note that the set
$\operatorname{G}(\mfrak{g}, R)(\omega, \omega')$ has a left action by the group
$N^{\mrm{r}}_{\omega'}$, and a right action by the group
$N^{\mrm{r}}_{\omega}$. Define
\begin{equation} \label{eqn:16}
\operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega') :=
\operatorname{G}(\mfrak{g}, R)(\omega, \omega') / N^{\mrm{r}}_{\omega} \, ,
\end{equation}
the quotient set. So there is a surjective function
\begin{equation} \label{eqn:51}
\eta_1 : \operatorname{G}(\mfrak{g}, R)(\omega, \omega') \to \operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega') .
\end{equation}
By Lemma \ref{lem:4}(3), the multiplication map of
$\operatorname{G}(\mfrak{g}, R)$ induces maps
\begin{equation} \label{eqn:68}
\operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega') \times
\operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega', \omega'') \to
\operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega'')
\end{equation}
for any $\omega, \omega', \omega'' \in \operatorname{MC}(\mfrak{g}, R)$.
If $\omega' = \omega$ then
$\operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega)$ is a group.
\begin{dfn}
The {\em reduced Deligne groupoid} associated to $\mfrak{g}$ and $(R, \mfrak{m})$
is the groupoid $\mbf{Del}^{\mrm{r}}(\mfrak{g}, R)$ defined as follows.
The set of objects of this groupoid is $\operatorname{MC}(\mfrak{g}, R)$. For any
$\omega, \omega' \in \operatorname{MC}(\mfrak{g}, R)$, the set of morphisms $\omega \to \omega'$ is
the set $\operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega')$
from formula (\ref{eqn:16}). The composition in $\mbf{Del}^{\mrm{r}}(\mfrak{g}, R)$
is given by formula (\ref{eqn:68}), and the identity morphisms are those of the
groups $\operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega)$.
\end{dfn}
There is a morphism of groupoids (i.e.\ a functor)
\begin{equation} \label{eqn:13}
\bsym{\eta} = (\eta_0, \eta_1) : \mbf{Del}(\mfrak{g}, R) \to
\mbf{Del}^{\mrm{r}}(\mfrak{g}, R) ,
\end{equation}
where $\eta_0$ is the identity on the set of objects
$\operatorname{MC}(\mfrak{g}, R)$, and $\eta_1$ is the surjective function in formula
(\ref{eqn:51}). Hence
\begin{equation} \label{eqn:5}
\pi_0 \bigl( \mbf{Del}^{\mrm{r}}(\mfrak{g}, R) \bigr) =
\pi_0 \bigl( \mbf{Del}(\mfrak{g}, R) \bigr) =
\overline{\operatorname{MC}}(\mfrak{m} \, \what{\otimes} \, \mfrak{g}) ,
\end{equation}
where $\pi_0(-)$ denotes the set of isomorphism classes of objects of a
groupoid.
Given a homomorphism $f : (R, \mfrak{m}) \to (S, \mfrak{n})$ of parameter algebras, and a
homomorphism
$\phi : \mfrak{g} \to \mfrak{h}$ of DG Lie algebras, there is an induced
DG Lie algebra homomorphism
\[ f \otimes \phi : \mfrak{m} \, \what{\otimes} \, \mfrak{g} \to \mfrak{n} \, \what{\otimes} \, \mfrak{h} . \]
Hence there are induced morphisms of groupoids
$\mbf{Del}(\phi, f)$ and $\mbf{Del}^{\mrm{r}}(\phi, f)$
such that the diagram
\[ \UseTips \xymatrix @C=11ex @R=6ex {
\mbf{Del}(\mfrak{g}, R)
\ar[r]^{\mbf{Del}(\phi, f)}
\ar[d]_{\bsym{\eta}}
&
\mbf{Del}(\mfrak{h}, S)
\ar[d]^{\bsym{\eta}}
\\
\mbf{Del}^{\mrm{r}}(\mfrak{g}, R)
\ar[r]^{\mbf{Del}^{\mrm{r}}(\phi, f)}
&
\mbf{Del}^{\mrm{r}}(\mfrak{h}, S)
} \]
is commutative. And there is an induced function
\[ \pi_0 \bigl( \mbf{Del}^{\mrm{r}}(\phi, f) \bigr) :
\pi_0 \bigl( \mbf{Del}^{\mrm{r}}(\mfrak{g}, R) \bigr) \to
\pi_0 \bigl( \mbf{Del}^{\mrm{r}}(\mfrak{h}, S) \bigr) . \]
Under the equality of sets (\ref{eqn:5}), and the corresponding one for
$\mfrak{h}$ and $S$, we have equality of functions
\begin{equation} \label{eqn:18}
\pi_0 \bigl( \mbf{Del}^{\mrm{r}}(\phi, f) \bigr) =
\pi_0 \bigl( \mbf{Del}(\phi, f) \bigr) =
\overline{\operatorname{MC}}(f \otimes \phi) .
\end{equation}
\begin{prop} \label{prop:2}
Let $\omega \in \operatorname{MC}(\mfrak{g}, R)$.
The bijection
\[ \exp : \mfrak{m} \hatotimes \mfrak{g}^0 \to \operatorname{G}(\mfrak{g}, R) \]
induces a bijection \tup{(}of sets\tup{)}
\[ \exp : \mrm{H}^0( (\mfrak{m} \hatotimes \mfrak{g})_{\omega} ) \to
\operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega) . \]
This bijection is functorial w.r.t.\ homomorphisms $(R, \mfrak{m}) \to (S, \mfrak{n})$ of
parameter algebras and homomorphisms $\mfrak{g} \to \mfrak{h}$ of DG Lie algebras.
\end{prop}
\begin{proof}
This is an immediate consequence of Lemma \ref{lem:4}(1,2) and formula
(\ref{eqn:16}).
\end{proof}
\begin{rem}
After writing an earlier version of this paper, we were told by M. Manetti that
M. Kontsevich had mentioned the idea of a reduced Deligne groupoid already in
1994. See \cite[page 19]{Ko1} and \cite{Mt}.
\end{rem}
\section{DG Lie Quasi-isomorphisms -- Nilpotent Algebras}
\label{sec:dg-quasi-nilp}
In this section we prove several lemmas that will be used in Section
\ref{sec:dg-quasi-comp}. We assume that
$(R, \mfrak{m})$ is an artinian parameter algebra with $\mfrak{m} \neq 0$. We also fix a DG
Lie algebra quasi-isomorphism $\phi : \mfrak{g} \to \mfrak{h}$.
Let
\[ l(R) := \min\, \{ l \in \mbb{N} \mid \mfrak{m}^{l+1} = 0 \} , \]
and define $\mfrak{n} := \mfrak{m}^{l(R)}$. Thus $\mfrak{n}$ is an ideal in $R$ satisfying
$\mfrak{m} \mfrak{n} = 0$. Let
$\bar{R} := R / \mfrak{n}$ and $\bar{\mfrak{m}} := \mfrak{m} / \mfrak{n}$.
So $(\bar{R}, \bar{\mfrak{m}})$ is a parameter algebra, and there is a
canonical surjection $p : R \to \bar{R}$.
Our assumption that $\mfrak{m} \neq 0$ implies that $l(R) \geq 1$, and that
$l(\bar{R}) < l(R)$.
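To fix ideas, here is a standard example of this setup (say $\mbb{K}$ is a
field; the example will not be used elsewhere):
\[ R = \mbb{K}[t]/(t^3) , \quad \mfrak{m} = (t) , \quad l(R) = 2 , \quad
\mfrak{n} = (t^2) , \quad \bar{R} = \mbb{K}[t]/(t^2) , \quad l(\bar{R}) = 1 . \]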
The homomorphism $p : R \to \bar{R}$ induces a surjective homomorphism of DG Lie
algebras
\[ p : \mfrak{m} \otimes \mfrak{g} \to \bar{\mfrak{m}} \otimes \mfrak{g} , \]
and likewise for $\mfrak{h}$. Thus we get a commutative diagram of morphisms of \linebreak
groupoids
\[ \UseTips \xymatrix @C=9ex @R=6ex {
\mbf{Del}^{\mrm{r}}(\mfrak{g}, R)
\ar[r]^{\phi}
\ar[d]_{p}
&
\mbf{Del}^{\mrm{r}}(\mfrak{h}, R)
\ar[d]^{p}
\\
\mbf{Del}^{\mrm{r}}(\mfrak{g}, \bar{R})
\ar[r]^{\phi}
&
\mbf{Del}^{\mrm{r}}(\mfrak{h}, \bar{R}) \ ,
} \]
where, for the sake of brevity, we write $p$ instead of
$\mbf{Del}^{\mrm{r}}(\mfrak{g}, p)$, etc.
Given elements $\omega, \omega' \in \operatorname{MC}(\mfrak{g}, R)$, let
$\bar{\omega} := p(\omega)$ and $\bar{\omega}' := p(\omega')$ in
$\operatorname{MC}(\mfrak{g}, \bar{R})$. For any element
$\bar{g} \in \operatorname{G}(\mfrak{g}, \bar{R})(\bar{\omega}, \bar{\omega}')$
we define
\begin{equation} \label{eqn:4}
\operatorname{G}(\mfrak{g}, R)(\omega, \omega') / \bar{g} :=
\{ g \in \operatorname{G}(\mfrak{g}, R)(\omega, \omega') \mid p(g) = \bar{g} \} .
\end{equation}
Next, given an element $\bar{\omega} \in \operatorname{MC}(\mfrak{g}, \bar{R})$,
let us denote by $\mbf{Del}(\mfrak{g}, R) / \bar{\omega}$
the fiber over $\bar{\omega}$ of the morphism of groupoids
\[ p : \mbf{Del}(\mfrak{g}, R) \to \mbf{Del}(\mfrak{g}, \bar{R}) . \]
Thus the set of objects of $\mbf{Del}(\mfrak{g}, R) / \bar{\omega}$ is the set
\begin{equation}
\operatorname{MC}(\mfrak{g}, R) / \bar{\omega} := \{ \omega \in \operatorname{MC}(\mfrak{g}, R) \mid
p(\omega) = \bar{\omega} \} .
\end{equation}
The set of morphisms $\omega \to \omega'$ in $\mbf{Del}(\mfrak{g}, R) / \bar{\omega}$
is
\begin{equation}
\operatorname{G}(\mfrak{g}, R)(\omega, \omega') / 1 :=
\{ g \in \operatorname{G}(\mfrak{g}, R)(\omega, \omega') \mid p(g) = 1 \} .
\end{equation}
We shall need some of this construction also for the groupoid
$\mbf{Del}^{\mrm{r}}(\mfrak{g}, R)$.
Given elements $\omega, \omega' \in \operatorname{MC}(\mfrak{g}, R)$, let
$\bar{\omega} := p(\omega)$ and $\bar{\omega}' := p(\omega')$ in
$\operatorname{MC}(\mfrak{g}, \bar{R})$.
Suppose
$\bar{g} \in \operatorname{G}^{\mrm{r}}(\mfrak{g}, \bar{R})(\bar{\omega}, \bar{\omega}')$.
We define the subset
\begin{equation}
\operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega') / \bar{g} :=
\{ g \in \operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega') \mid p(g) = \bar{g} \} .
\end{equation}
We now recall the obstruction functions $o_2$ and $o_1$ introduced in
\cite[Section 2.6]{GM}. Let us denote by
$\mrm{Z}^i(\mfrak{g})$ the $\mbb{K}$-module of $i$-cocycles in $\mfrak{g}$.
For $\alpha \in \mfrak{m} \otimes \mrm{Z}^i(\mfrak{g})$ we shall denote its cohomology class by
$[\alpha] \in \mfrak{m} \otimes \mrm{H}^i(\mfrak{g}) \cong \mrm{H}^i(\mfrak{m} \otimes \mfrak{g})$.
Let
\[ \operatorname{cur} : \mfrak{m} \otimes \mfrak{g}^1 \to \mfrak{m} \otimes \mfrak{g}^2 \]
be the function
\begin{equation} \label{eqn:36}
\operatorname{cur}(\omega) := \d(\omega) + \smfrac{1}{2} [\omega, \omega] .
\end{equation}
(``$\operatorname{cur}$'' stands for ``curvature''.)
Thus $\omega$ is an MC element iff $\operatorname{cur}(\omega) = 0$.
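As a trivial sanity check (recorded only for orientation): if the Lie bracket
of $\mfrak{g}$ is zero, then $\operatorname{cur}(\omega) = \d(\omega)$, so in this
abelian case
\[ \operatorname{MC}(\mfrak{g}, R) =
\operatorname{Ker} \bigl( \d : \mfrak{m} \otimes \mfrak{g}^1 \to \mfrak{m} \otimes \mfrak{g}^2 \bigr) . \]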
Given $\bar{\omega} \in \operatorname{MC}(\mfrak{g}, \bar{R})$,
choose any lift to an element
$\omega \in \mfrak{m} \otimes \mfrak{g}^1$.
Then
$\operatorname{cur}(\omega) \in \mfrak{n} \otimes \mrm{Z}^2(\mfrak{g})$,
and we define
\begin{equation} \label{eqn:41}
o_2(\bar{\omega}) := [\operatorname{cur}(\omega)] \in \mfrak{n} \otimes \mrm{H}^2(\mfrak{g}) .
\end{equation}
It is shown in \cite{GM} that $o_2(\bar{\omega})$
is independent of the choice, and the resulting obstruction function
\[ o_2 : \operatorname{MC}(\mfrak{g}, \bar{R}) \to
\mfrak{n} \otimes \mrm{H}^2 (\mfrak{g}) \]
has the property that an element
$\bar{\omega} \in \operatorname{MC}(\mfrak{g}, \bar{R})$
lifts to an element of $\operatorname{MC}(\mfrak{g}, R)$
iff $o_2(\bar{\omega}) = 0$.
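For the reader's convenience, here is the verification, implicit above, that
$\operatorname{cur}(\omega) \in \mfrak{n} \otimes \mrm{Z}^2(\mfrak{g})$. First,
$p(\operatorname{cur}(\omega)) = \operatorname{cur}(\bar{\omega}) = 0$, so
$\operatorname{cur}(\omega) \in \mfrak{n} \otimes \mfrak{g}^2$. Next, the Leibniz rule gives
$\d(\operatorname{cur}(\omega)) = [\d(\omega), \omega]$, and the graded Jacobi identity gives
$[[\omega, \omega], \omega] = 0$; hence
\[ \d(\operatorname{cur}(\omega)) = [\d(\omega), \omega] =
\bigl[ \operatorname{cur}(\omega) - \smfrac{1}{2} [\omega, \omega] , \omega \bigr] =
[\operatorname{cur}(\omega), \omega] \in (\mfrak{m} \mfrak{n}) \otimes \mfrak{g}^3 = 0 . \]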
Consider an element $\bar{\omega} \in \operatorname{MC}(\mfrak{g}, \bar{R})$.
The set
$\operatorname{MC}(\mfrak{g}, R) / \bar{\omega}$, if it is nonempty,
has a simply transitive action by the additive group
$\mfrak{n} \otimes \mrm{Z}^1(\mfrak{g})$, namely
$\omega \mapsto \omega + \beta$ for $\beta \in \mfrak{n} \otimes \mrm{Z}^1(\mfrak{g})$.
Given $\omega, \omega' \in \operatorname{MC}(\mfrak{g}, R) / \bar{\omega}$, define
\begin{equation} \label{eqn:42}
o_1(\omega, \omega') := [\omega - \omega'] \in \mfrak{n} \otimes \mrm{H}^1 (\mfrak{g}) .
\end{equation}
The obstruction function
\[ o_1 : \operatorname{MC}(\mfrak{g}, R) / \bar{\omega} \ \times \
\operatorname{MC}(\mfrak{g}, R) / \bar{\omega} \to
\mfrak{n} \otimes \mrm{H}^1 (\mfrak{g}) \]
has the property that the set
$\operatorname{G}(\mfrak{g}, R)(\omega, \omega') / 1$
is nonempty iff
$o_1(\omega, \omega') = 0$.
The obstruction functions $o_2$ and $o_1$ are functorial in $\mfrak{g}$ (in the
obvious sense).
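Similarly, here is why the element $\omega - \omega'$ in formula (\ref{eqn:42})
is a cocycle with coefficients in $\mfrak{n}$. Clearly
$\omega - \omega' \in \mfrak{n} \otimes \mfrak{g}^1$, because $p(\omega) = p(\omega')$.
And since $\operatorname{cur}(\omega) = \operatorname{cur}(\omega') = 0$ we have
\[ \d(\omega - \omega') =
- \smfrac{1}{2} \bigl( [\omega, \omega] - [\omega', \omega'] \bigr) =
- \smfrac{1}{2} \bigl( [\omega - \omega', \omega] +
[\omega', \omega - \omega'] \bigr) \in (\mfrak{m} \mfrak{n}) \otimes \mfrak{g}^2 = 0 . \]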
\begin{rem}
It is possible to define the obstruction $o_0$ here too, but we will not use it.
Consider the exact sequence of complexes
\[ 0 \to \mfrak{n} \otimes \mfrak{g} \to (\mfrak{m} \otimes \mfrak{g})_{\omega} \xrightarrow{ \ p \ }
(\bar{\mfrak{m}} \otimes \mfrak{g})_{\bar{\omega}} \to 0 . \]
{}From the cohomology exact sequence we get a homomorphism
\[ \mrm{H}^{-1} ((\bar{\mfrak{m}} \otimes \mfrak{g})_{\bar{\omega}}) \to
\mfrak{n} \otimes \mrm{H}^{0}(\mfrak{g}) . \]
For
$g, g' \in \operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega') / \bar{g}$,
the obstruction class $o_0(g, g')$ lives in the cokernel of this homomorphism.
\end{rem}
\begin{lem} \label{lem:1}
Let
$\bar{\chi} \in \operatorname{MC}(\mfrak{h}, \bar{R})$,
$\bar{\omega} \in \operatorname{MC}(\mfrak{g}, \bar{R})$,
$\chi \in \operatorname{MC}(\mfrak{h}, R) / \bar{\chi}$
and
\[ \bar{h} \in \operatorname{G}(\mfrak{h}, \bar{R})(\phi(\bar{\omega}), \bar{\chi} ) . \]
Then there exist
\[ \omega \in \operatorname{MC}(\mfrak{g}, R) / \bar{\omega} \]
and
\[ h \in \operatorname{G}(\mfrak{h}, R)(\phi(\omega), \chi) / \bar{h} . \]
\end{lem}
\begin{proof}
The proof is very similar to the proof of ``Surjective on isomorphism \linebreak
classes'' in \cite[Subsection 2.11]{GM}. It is illustrated in Figure
\ref{fig:1}.
Let
\[ \bar{\chi}' := \operatorname{Af}(\bar{h})^{-1}(\bar{\chi})
=\phi(\bar{\omega}) \in \operatorname{MC}(\mfrak{h}, \bar{R}) . \]
Choose any $h' \in \operatorname{G}(\mfrak{h}, R)$ lying above
$\bar{h}$, and let
\[ \chi' := \operatorname{Af}(h')^{-1}(\chi) \in \operatorname{MC}(\mfrak{h}, R) / \bar{\chi}' . \]
Since $\chi'$ exists, the obstruction class
$o_2(\bar{\chi}')$ is zero. Now
$\phi(\bar{\omega}) = \bar{\chi}'$, so by functoriality of the obstruction classes
we get
\[ \mrm{H}^2 (\phi)(o_2(\bar{\omega})) = o_2(\bar{\chi}') = 0 . \]
The assumption is that $\mrm{H}^2 (\phi)$ is injective;
hence $o_2(\bar{\omega}) = 0$, and we can find
$\omega'' \in \operatorname{MC}(\mfrak{g}, R)$ lying above $\bar{\omega}$.
Let $\chi'' := \phi(\omega'') \in \operatorname{MC}(\mfrak{h}, R) / \bar{\chi}'$.
Consider the pair of elements
$\chi'', \chi' \in \operatorname{MC}(\mfrak{h}, R) / \bar{\chi}'$.
There is an obstruction class
\[ o_1(\chi'', \chi') \in \mfrak{n} \otimes \mrm{H}^1 (\mfrak{h}) . \]
By assumption the homomorphism $\mrm{H}^1 (\phi)$ is surjective, so there is
a cohomology class
$c \in \mfrak{n} \otimes \mrm{H}^1 (\mfrak{g})$
such that
$\mrm{H}^1 (\phi)(c) = o_1(\chi'', \chi')$.
Let
$\gamma \in \mfrak{n} \otimes \mrm{Z}^1 (\mfrak{g})$
be a cocycle representing $c$, and define
\[ \omega := \omega'' - \gamma \in \mfrak{m} \otimes \mfrak{g}^1 . \]
Then
$\omega \in \operatorname{MC}(\mfrak{g}, R) / \bar{\omega}$
(it is an easy calculation done in \cite{GM}).
Let
\[ \chi''' := \phi(\omega) \in \operatorname{MC}(\mfrak{h}, R) / \bar{\chi}' . \]
Now $o_1(\omega'', \omega) = c$, so
\[ o_1(\chi''', \chi') = o_1(\chi'', \chi') - o_1(\chi'', \chi''') =
o_1(\chi'', \chi') - \mrm{H}^1 (\phi)(o_1(\omega'', \omega)) = 0 . \]
Therefore there exists
$h''' \in \operatorname{G}(\mfrak{h}, R)(\chi''', \chi') / 1$.
Since $\chi''' = \phi(\omega)$, we obtain the required element
\[ h := h' \cdot h''' \in \operatorname{G}(\mfrak{h}, R)(\phi(\omega), \chi) / \bar{h} . \]
\end{proof}
\begin{figure}
\includegraphics[scale=0.35]{figure-1.jpg}
\caption{Illustration for the proof of Lemma \ref{lem:1}.
The diagram is commutative.}
\label{fig:1}
\end{figure}
\begin{lem} \label{lem:5}
Let $\omega \in \operatorname{MC}(\mfrak{g}, R)$ and
$\chi := \phi(\omega) \in \operatorname{MC}(\mfrak{h}, R)$.
\begin{enumerate}
\item The homomorphism of DG Lie algebras
\[ \phi : (\mfrak{m} \otimes \mfrak{g})_{\omega} \to (\mfrak{m} \otimes \mfrak{h})_{\chi} \]
is a quasi-isomorphism.
\item The group homomorphism
\[ \phi : \operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega) \to
\operatorname{G}^{\mrm{r}}(\mfrak{h}, R)(\chi, \chi) \]
is an isomorphism.
\end{enumerate}
\end{lem}
\begin{proof}
(1) This is done by induction on $l(R)$. If $l(R) = 1$ then
$\mfrak{m}^2 = 0$, so $\d_{\omega} = \d$ and $\d_{\chi} = \d$.
Since $\phi : \mfrak{g} \to \mfrak{h}$ is a
quasi-isomorphism, and since $\mfrak{m}$ is flat over $\mbb{K}$, the assertion is true.
Now assume that $l(R) \geq 2$. Since $\mfrak{m} \mfrak{n} = 0$ it follows that
$\d_{\omega}|_{\mfrak{n} \otimes \mfrak{g}} = \d|_{\mfrak{n} \otimes \mfrak{g}}$;
and likewise for $\mfrak{n} \otimes \mfrak{h}$.
Let $\bar{\omega} := p(\omega)$ and $\bar{\chi} := p(\chi)$.
We get a commutative diagram of complexes of $R$-modules
\[ \UseTips \xymatrix @C=6ex @R=6ex {
0
\ar[r]
&
\mfrak{n} \otimes \mfrak{g}
\ar[r]
\ar[d]^{\phi}
&
(\mfrak{m} \otimes \mfrak{g})_{\omega}
\ar[r]^{p}
\ar[d]^{\phi}
&
(\bar{\mfrak{m}} \otimes \mfrak{g})_{\bar{\omega}}
\ar[r]
\ar[d]^{\phi}
&
0
\\
0
\ar[r]
&
\mfrak{n} \otimes \mfrak{h}
\ar[r]
&
(\mfrak{m} \otimes \mfrak{h})_{\chi}
\ar[r]^{p}
&
(\bar{\mfrak{m}} \otimes \mfrak{h})_{\bar{\chi}}
\ar[r]
&
0
} \]
with exact rows. By induction the right vertical arrow is a quasi-isomorphism;
and the left vertical arrow is a quasi-isomorphism by the same argument given
in the case $l(R) = 1$.
Therefore the middle vertical arrow is a quasi-isomorphism.
\medskip \noindent
(2) Combine item (1) above and Proposition \ref{prop:2}.
\end{proof}
\begin{lem} \label{lem:2}
Let $\omega, \omega' \in \operatorname{MC}(\mfrak{g}, R)$, and define
$\bar{\omega} := p(\omega)$,
$\bar{\omega}' := p(\omega')$,
$\chi := \phi(\omega)$,
$\chi' := \phi(\omega')$,
$\bar{\chi} := \phi(\bar{\omega})$ and
$\bar{\chi}' := \phi(\bar{\omega}')$.
Let
\[ \bar{g} \in \operatorname{G}^{\mrm{r}}(\mfrak{g}, \bar{R})(\bar{\omega}, \bar{\omega}') , \]
\[ \bar{h} := \phi(\bar{g}) \in \operatorname{G}^{\mrm{r}}(\mfrak{h}, \bar{R})
(\bar{\chi}, \bar{\chi}') , \]
and
\[ h \in \operatorname{G}^{\mrm{r}}(\mfrak{h}, R)(\chi, \chi') / \bar{h} . \]
Then there exists a unique element
\[ g \in \operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega') / \bar{g} \]
such that $\phi(g) = h$.
\end{lem}
\begin{proof}
The proof is very similar to the proof of ``Full''
in \cite[Subsection 2.11]{GM}. (Note however that there is a mistake in loc.\
cit.: in our notation, the argument there refers to the obstruction class
$o_1(\omega, \omega')$, which is not defined, since
$p(\omega) \neq p(\omega')$ in general.) The proof is illustrated in Figure
\ref{fig:2}.
Choose an arbitrary lift $g'' \in \operatorname{G}(\mfrak{g}, R)$ of $\bar{g}$,
namely $\bar{g} = p(\eta_1(g''))$.
Define
\[ \omega'' := \operatorname{Af}(g'')(\omega) \in \operatorname{MC}(\mfrak{g}, R) , \]
\[ h'' := \phi(g'') \in \operatorname{G}(\mfrak{h}, R) , \]
\[ \chi'' := \phi(\omega'') \in \operatorname{MC}(\mfrak{h}, R) \]
and
\[ h' := h \cdot (h'')^{-1} \in
\operatorname{G}(\mfrak{h}, R)(\chi'', \chi') / 1 . \]
Since $\omega'', \omega' \in \operatorname{MC}(\mfrak{g}, R) / \bar{\omega}'$
the obstruction class
$o_1(\omega'', \omega')$
is defined, and it satisfies
\[ \mrm{H}^1 (\phi)(o_1(\omega'', \omega')) =
o_1(\chi'', \chi') = 0 \]
because $h'$ exists. By assumption the homomorphism
$\mrm{H}^1 (\phi)$ is injective, and we conclude that
$o_1(\omega'', \omega') = 0$. So there exists some
$g' \in \operatorname{G}(\mfrak{g}, R)(\omega'', \omega') / 1$.
Let
\[ g''' := g' \cdot g'' \in \operatorname{G}(\mfrak{g}, R)(\omega, \omega') . \]
Then $p(\eta_1(g''')) = \bar{g}$, and hence
\[ \eta_1(g''') \in \operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega') / \bar{g} . \]
By Lemma \ref{lem:5}(2), applied to both $R$ and $\bar{R}$, together with the
fact that $\phi$ commutes with $p$, we have a group isomorphism
\[ \phi : \operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega) / 1 \to
\operatorname{G}^{\mrm{r}}(\mfrak{h}, R)(\chi, \chi) / 1 . \]
Since the set
$\operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega') / \bar{g}$
is nonempty (it contains $\eta_1(g''')$), it admits a simply transitive action
by the group
$\operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega) / 1$.
Therefore the function
\[ \phi : \operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega') / \bar{g} \to
\operatorname{G}^{\mrm{r}}(\mfrak{h}, R)(\chi, \chi') / \bar{h} \]
is bijective. We see that there is a unique element
$g \in \operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega') / \bar{g}$
such that
$\phi(g) = h$.
\end{proof}
\begin{figure}
\includegraphics[scale=0.35]{figure-2.jpg}
\caption{Illustration for the proof of Lemma \ref{lem:2}.
Some of the arrows, like $g$ and $h$, belong to the groupoid
$\mbf{Del}^{\mrm{r}}(-, -)$.
Other arrows, like $g''$ and $h''$, belong to the groupoid
$\mbf{Del}(-, -)$.
The function $\phi$ sends $\omega \mapsto \chi$,
$\bar{\omega} \mapsto \bar{\chi}$, $g \mapsto h$, etc.
The whole diagram is commutative.}
\label{fig:2}
\end{figure}
\section{DG Lie Quasi-isomorphisms -- Pronilpotent Algebras}
\label{sec:dg-quasi-comp}
In this section we extend \cite[Theorem 2.4]{GM} (attributed to Deligne) to the
case of complete noetherian local rings and unbounded DG Lie algebras.
This is Theorem \ref{thm:2}.
Let $(R, \mfrak{m})$ be a complete parameter algebra, and let
$\phi : \mfrak{g} \to \mfrak{h}$ be a DG Lie algebra quasi-isomorphism.
As in Section \ref{sec:facts}, for any $j \in \mbb{N}$ we write
$R_j := R / \mfrak{m}^{j+1}$ and $\mfrak{m}_j := \mfrak{m} / \mfrak{m}^{j+1}$. We denote by
$p_j : R \to R_j$ and $p_{j, i} : R_j \to R_i$ the canonical projections
(for $j \geq i$).
Let $\mfrak{n}_j := \mfrak{m}^{j} / \mfrak{m}^{j + 1}$, which is an ideal in $R_j$ satisfying \linebreak
$\mfrak{m}_j \mfrak{n}_j = 0$ and
\[ \mfrak{n}_j = \operatorname{Ker}(p_{j, j-1} : R_j \to R_{j - 1}) . \]
Thus $R_{j - 1} \cong R_j / \mfrak{n}_j$.
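To fix ideas (say $\mbb{K}$ is a field): for $R = \mbb{K}[[t]]$ and
$\mfrak{m} = (t)$ we get
\[ R_j = \mbb{K}[t]/(t^{j+1}) , \quad
\mfrak{n}_j = (t^j)/(t^{j+1}) \cong \mbb{K} . \]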
\begin{lem} \label{lem:6}
\begin{enumerate}
\item Let $\omega \in \operatorname{MC}(\mfrak{g}, R)$ and
$\chi := \phi(\omega) \in \operatorname{MC}(\mfrak{h}, R)$.
Then the homomorphism of DG Lie algebras
\[ \phi : (\mfrak{m} \, \what{\otimes} \, \mfrak{g})_{\omega} \to
(\mfrak{m} \, \what{\otimes} \, \mfrak{h})_{\chi} \]
is a quasi-isomorphism.
\item The canonical function
\[ \operatorname{MC}(\mfrak{g}, R) \to \lim_{\leftarrow j}\, \operatorname{MC}(\mfrak{g}, R_j) \]
is bijective.
\item For any $\omega, \omega' \in \operatorname{MC}(\mfrak{g}, R)$ the canonical function
\[ \operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega') \to \lim_{\leftarrow j}\,
\operatorname{G}^{\mrm{r}} ( \mfrak{g}, R_j) \bigl( p_j(\omega), p_j(\omega') \bigr) \]
is surjective.
\end{enumerate}
\end{lem}
Of course items (2) and (3) apply also to $\mfrak{h}$.
\begin{proof}
(1) We forget the Lie brackets. Let $M$ be the mapping cone of the homomorphism
of complexes of $R$-modules
\[ \phi : (\mfrak{m} \, \what{\otimes} \, \mfrak{g})_{\omega} \to (\mfrak{m} \, \what{\otimes} \, \mfrak{h})_{\chi} . \]
So
\[ M = (\mfrak{m} \, \what{\otimes} \, \mfrak{g})_{\omega}[1] \oplus (\mfrak{m} \, \what{\otimes} \, \mfrak{h})_{\chi} , \]
with a suitable differential.
For any $j \geq 0$ let $\omega_j := p_j(\omega)$ and $\chi_j := p_j(\chi)$.
We have an inverse system of homomorphisms of complexes
\[ \phi_j : (\mfrak{m}_j \, \what{\otimes} \, \mfrak{g})_{\omega_j} \to
(\mfrak{m}_j \, \what{\otimes} \, \mfrak{h})_{\chi_j} , \]
and we denote by $M_j$ the mapping cone of $\phi_j$.
Then each $p_j : M \to M_j$ is surjective, and
$M \cong \lim_{\leftarrow j}\, M_j$.
According to Lemma \ref{lem:5}(1) the complexes $M_j$ are acyclic.
Therefore, using the Mittag-Leffler argument, the complex $M$ is also acyclic.
\medskip \noindent
(2) This is because $\mfrak{m} \, \what{\otimes} \, \mfrak{g}^1$ is $\mfrak{m}$-adically complete, and
$\operatorname{MC}(\mfrak{g}, R)$ is a closed subset in it (w.r.t.\ the $\mfrak{m}$-adic metric).
\medskip \noindent
(3) Write $\omega'_j := p_j(\omega')$.
Suppose we are given a sequence $\{ g_j \}_{j \in \mbb{N}}$ of elements
\[ g_j \in \operatorname{G}^{\mrm{r}} ( \mfrak{g}, R_j)(\omega_j, \omega'_j ) \]
such that $p_{j, j-1}(g_j) = g_{j-1}$.
We are going to find a sequence $\{ \til{g}_j \}_{j \in \mbb{N}}$ of elements
\[ \til{g}_j \in \operatorname{G} ( \mfrak{g}, R_j)(\omega_j, \omega'_j ) \]
such that $p_{j, j-1}(\til{g}_j) = \til{g}_{j-1}$
and $\eta_1(\til{g}_j) = g_j$ for all $j$.
Since the group $\operatorname{G}(\mfrak{g}, R)$ is
complete w.r.t.\ its $\mfrak{m}$-adic filtration, the limit
\[ \til{g} := \lim_{\leftarrow j}\, \til{g}_j \in \operatorname{G}(\mfrak{g}, R) \]
exists; and by continuity
$\til{g} \in \operatorname{G}(\mfrak{g}, R)(\omega, \omega')$.
Then the element
\[ g := \eta_1(\til{g}) \in \operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega') \]
satisfies $p_j(g) = g_j$ for all $j$.
Here is the recursive construction of the sequence
$\{ \til{g}_j \}_{j \in \mbb{N}}$.
For $j = 0$ we take $\til{g}_0 := 1$. Now assume that $j \geq 1$ and we have a
sequence $(\til{g}_0, \ldots, \til{g}_{j-1})$ as required.
Choose any element
$\til{g}'_j \in \operatorname{G}(\mfrak{g}, R_j)(\omega_j, \omega'_j)$
such that $\eta_1(\til{g}'_j) = g_j$.
Then
\[ \eta_1(p_{j, j-1}(\til{g}'_j)) = p_{j, j-1}(\eta_1(\til{g}'_j)) =
p_{j, j-1}(g_j) = g_{j-1} = \eta_1(\til{g}_{j-1}) . \]
There is some $\bar{a} \in N^{\mrm{r}}(\mfrak{g}, R_{j-1})_{\omega_{j-1}}$
such that
$\bar{a} \cdot p_{j, j-1}(\til{g}'_j) = \til{g}_{j-1}$.
Choose any $a \in N^{\mrm{r}}(\mfrak{g}, R_j)_{\omega_j}$
lifting $\bar{a}$. Then
$\til{g}_j := a \cdot \til{g}'_j$ will satisfy
$p_{j, j-1}(\til{g}_j) = \til{g}_{j-1}$ and
$\eta_1(\til{g}_j) = g_j$.
\end{proof}
Here is the main result of this section. We denote the identity automorphism of
$R$ by $\bsym{1}_R$.
\begin{thm} \label{thm:2}
Let $(R, \mfrak{m})$ be a parameter algebra over $\mbb{K}$, and let $\phi : \mfrak{g} \to \mfrak{h}$ be a
DG Lie algebra quasi-isomorphism over $\mbb{K}$. Then the function
\[ \overline{\operatorname{MC}}(\bsym{1}_R \otimes \phi) : \overline{\operatorname{MC}}(\mfrak{m} \, \what{\otimes} \, \mfrak{g}) \to
\overline{\operatorname{MC}}(\mfrak{m} \, \what{\otimes} \, \mfrak{h}) \]
is bijective. Moreover, the morphism of groupoids
\[ \mbf{Del}^{\mrm{r}}(\phi, R) : \mbf{Del}^{\mrm{r}}(\mfrak{g}, R)
\to \mbf{Del}^{\mrm{r}}(\mfrak{h}, R) \]
is an equivalence.
\end{thm}
Observe that there are no finiteness or boundedness conditions on $\mfrak{g}$ and
$\mfrak{h}$.
\begin{proof}
We will prove these assertions:
\begin{enumerate}
\rmitem{a} The function $\overline{\operatorname{MC}}(\bsym{1}_R \otimes \phi)$
is surjective.
\rmitem{b} The function $\overline{\operatorname{MC}}(\bsym{1}_R \otimes \phi)$
is injective.
\rmitem{c} For any
$\omega \in \operatorname{MC}(\mfrak{g}, R)$ the group homomorphism
\[ \operatorname{G}^{\mrm{r}}(\phi, R) : \operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega) \to
\operatorname{G}^{\mrm{r}}(\mfrak{h}, R) \bigl( \phi(\omega), \phi(\omega) \bigr) \]
is bijective.
\end{enumerate}
Assertions (a-b) say that the function
$\overline{\operatorname{MC}}(\bsym{1}_R \otimes \phi)$
is bijective. Then assertion (c) implies that the function
\[ \phi : \operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega') \to
\operatorname{G}^{\mrm{r}}(\mfrak{h}, R)(\phi(\omega), \phi(\omega')) \]
is bijective for every $\omega, \omega' \in \operatorname{MC}(\mfrak{g}, R)$.
Hence $\mbf{Del}^{\mrm{r}}(\phi, R)$ is an equivalence.
\medskip \noindent
Proof of (a). Here it is more convenient to work with the groupoids
$\mbf{Del}(-, -)$.
Suppose we are given $\chi \in \operatorname{MC}(\mfrak{h}, R)$. We will find
elements $\omega \in \operatorname{MC}(\mfrak{g}, R)$ and
$h \in \operatorname{G}(\mfrak{h}, R)(\phi(\omega), \chi)$.
Define $\chi_j := p_j(\chi) \in \operatorname{MC}(\mfrak{h}, R_j)$.
We are going to find a sequence $\{ \omega_j \}_{j \in \mbb{N}}$ of elements
$\omega_j \in \operatorname{MC}(\mfrak{g}, R_j)$, and a sequence $\{ h_j \}_{j \in \mbb{N}}$ of
elements
\[ h_j \in \operatorname{G}(\mfrak{h}, R_j)(\phi(\omega_j), \chi_j) , \]
such that $p_{j, j-1}(\omega_j) = \omega_{j-1}$ and
$p_{j, j-1}(h_j) = h_{j-1}$ for all $j$.
This is done by induction on $j$.
For $j = 0$ we take $\omega_0 := 0$ and $h_0 := 1 = \exp(0)$.
Now consider $j \geq 1$. Assume that we have found sequences
$(\omega_0, \ldots, \omega_{j-1})$ and
$(h_0, \ldots, h_{j-1})$ satisfying the required conditions.
By Lemma \ref{lem:1}, applied to the artinian local ring $R_j$,
and the elements $\chi_j, \omega_{j-1}$ and $h_{j-1}$,
there exist elements $\omega_j$ and $h_j$ as required.
Now the $R$-module $\mfrak{m} \, \what{\otimes} \, \mfrak{g}^1$ is $\mfrak{m}$-adically complete,
and the set $\operatorname{MC}(\mfrak{g}, R)$ is closed inside
$\mfrak{m} \, \what{\otimes} \, \mfrak{g}^1$ (with respect to the $\mfrak{m}$-adic metric).
Hence the limit
$\omega := \lim_{\leftarrow j}\, \omega_j$
belongs to $\operatorname{MC}(\mfrak{g}, R)$.
The gauge group $\operatorname{G}(\mfrak{h}, R)$ is complete, since it is pronilpotent
(with respect to its $\mfrak{m}$-adic filtration).
We get an element
\[ h := \lim_{\leftarrow j}\, h_j \in \operatorname{G}(\mfrak{h}, R) . \]
By continuity we see that
$\operatorname{Af}(h)(\phi(\omega)) = \chi$.
\medskip \noindent
Proof of (b). Here we prefer to work with the groupoids
$\mbf{Del}^{\mrm{r}}(-,-)$.
Take $\omega, \omega' \in \operatorname{MC}(\mfrak{g}, R)$, and define
$\chi := \phi(\omega)$ and $\chi' := \phi(\omega')$.
Suppose we are given
$h \in \operatorname{G}^{\mrm{r}}(\mfrak{h}, R)(\chi, \chi')$;
we will find
$g \in \operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega')$.
(We do not care whether $\phi(g) = h$ or not.)
Define
$\omega_j := p_j(\omega)$,
$\omega'_j := p_j(\omega')$,
$\chi_j := p_j(\chi)$,
$\chi'_j := p_j(\chi')$
and
$h_j := p_j(h)$.
We will find a sequence $\{ g_j \}_{j \in \mbb{N}}$ of elements
$g_j \in \operatorname{G}^{\mrm{r}}(\mfrak{g}, R_j)(\omega_j, \omega'_j)$
such that
$\phi(g_j) = h_j$
and
$p_{j, j-1}(g_j) = g_{j-1}$.
Then, by Lemma \ref{lem:6}(3) there is an element
$g \in \operatorname{G}^{\mrm{r}}(\mfrak{g}, R)(\omega, \omega')$
such that $p_j(g) = g_j$ for every $j$.
We construct the sequence $\{ g_j \}_{j \in \mbb{N}}$ by induction on $j$. For
$j = 0$ we take $g_0 := 1$.
Now let $j \geq 1$, and suppose that we have a sequence
$(g_0, \ldots, g_{j-1})$ as required. According to Lemma \ref{lem:2}, applied
to the artinian local ring $R_j$, there exists an element
$g_j \in \operatorname{G}^{\mrm{r}}(\mfrak{g}, R_j)(\omega_j, \omega'_j)$
such that $p_{j, j-1}(g_j) = g_{j-1}$ and $\phi(g_j) = h_j$.
\medskip \noindent
Proof of (c). This is the complete-ring analogue of the proof of Lemma \ref{lem:5}(2).
By Lemma \ref{lem:6}(1) the function
$\mrm{H}^0(\phi_{R, \omega})$ is bijective. The claim now follows from Proposition
\ref{prop:2}.
\end{proof}
\begin{rem}
Assume $\mfrak{g}$ is abelian (i.e.\ the Lie bracket is zero), so that
$\mfrak{m} \hatotimes \mfrak{g}$ is just a complex of $R$-modules, and
\[ \overline{\operatorname{MC}}(\mfrak{m} \hatotimes \mfrak{g}) = \mrm{H}^1 (\mfrak{m} \hatotimes \mfrak{g}) . \]
In this case the Deligne groupoid $\mbf{Del}(\mfrak{g}, R)$ is the truncation
\[ \cdots \to 0 \to \mfrak{m} \hatotimes \mfrak{g}^0 \to
\operatorname{Ker} \bigl( \d : \mfrak{m} \hatotimes \mfrak{g}^{1} \to \mfrak{m} \hatotimes \mfrak{g}^2 \bigr)
\to 0 \to \cdots , \]
whereas the reduced Deligne groupoid $\mbf{Del}^{\mrm{r}}(\mfrak{g}, R)$
is the truncation
\[ \cdots \to 0 \to
\operatorname{Coker} \bigl( \d : \mfrak{m} \hatotimes \mfrak{g}^{-1} \to \mfrak{m} \hatotimes \mfrak{g}^0 \bigr)
\to
\operatorname{Ker} \bigl( \d : \mfrak{m} \hatotimes \mfrak{g}^{1} \to \mfrak{m} \hatotimes \mfrak{g}^2 \bigr)
\to 0 \to \cdots , \]
both concentrated in the degree range $[0, 1]$.
Let $\mfrak{h}$ be another abelian DG Lie algebra.
It is clear that a quasi-isomorphism of complexes $\phi : \mfrak{g} \to \mfrak{h}$ will
induce an equivalence
\[ \mbf{Del}^{\mrm{r}}(\phi, R) :
\mbf{Del}^{\mrm{r}}(\mfrak{g}, R) \to \mbf{Del}^{\mrm{r}}(\mfrak{h}, R) . \]
This is a special case of Theorem \ref{thm:2}.
\end{rem}
\begin{rem} \label{rem:11}
Presumably Theorem \ref{thm:2} can be extended to the following more general
situation: $\mfrak{g}$ and $\mfrak{h}$ are $R$-linear DG Lie algebras, such that all the
$R$-modules $\mfrak{g}^i$ and $\mfrak{h}^i$ are $\mfrak{m}$-adically complete, and the graded Lie
algebras $\operatorname{gr}_{\mfrak{m}} (\mfrak{g})$ and $\operatorname{gr}_{\mfrak{m}} (\mfrak{h})$ are abelian.
We are given an $R$-linear DG Lie algebra homomorphism
$\phi : \mfrak{g} \to \mfrak{h}$ such that
\[ \operatorname{gr}_{\mfrak{m}}(\phi) : \operatorname{gr}_{\mfrak{m}} (\mfrak{g}) \to \operatorname{gr}_{\mfrak{m}} (\mfrak{h}) \]
is a quasi-isomorphism. Then
\[ \overline{\operatorname{MC}}(\phi) : \overline{\operatorname{MC}}(\mfrak{g}) \to \overline{\operatorname{MC}}(\mfrak{h}) \]
is bijective. Cf.\ \cite[Theorem 2.1]{Ge} for the corresponding nilpotent case.
\end{rem}
\section{Some Facts on $2$-Groupoids}
\label{sec:2-grpd}
Let us recall that a (strict) {\em $2$-groupoid} $\bsym{G}$ is a groupoid
enriched in the monoidal category of groupoids. Another way of saying this is
that a $2$-groupoid $\bsym{G}$ is a $2$-category in which all $1$-morphisms and
$2$-morphisms are invertible. A comprehensive review of $2$-categories and
related constructions is available in \cite[Section 1]{Ye3}.
See also \cite{Ma, Bw, Ge}.
We wish to make things as explicit as possible, to make calculations
(both in Section \ref{sec:del-2-grpd} of this paper, and in the new
version of \cite{Ye4}) easier.
A (small strict) $2$-groupoid $\bsym{G}$ is made up of the following
ingredients: there
is a set $\operatorname{Ob}(\bsym{G})$, whose elements are the {\em objects}
of $\bsym{G}$. For any $x, y \in \operatorname{Ob}(\bsym{G})$ there is a set
$\bsym{G}(x, y)$, whose elements are called the {\em $1$-morphisms} from $x$
to $y$. Given $f \in \bsym{G}(x, y)$, we write $f : x \to y$.
For any $f, g \in \bsym{G}(x, y)$ there is a set
$\bsym{G}(x, y)(f, g)$, whose elements are called the {\em $2$-morphisms} from
$f$ to $g$. For $a \in \bsym{G}(x, y)(f, g)$ we write
$a : f \Rightarrow g$.
There are three types of composition operations in $\bsym{G}$. There is {\em
horizontal composition of $1$-morphisms}: given $f_1 : x_0 \to x_1$ and
$f_2 : x_1 \to x_2$, their composition is
$f_2 \circ f_1 : x_0 \to x_2$. Suppose we are also given $1$-morphisms
$g_i : x_{i-1} \to x_i$ and $2$-morphisms $a_i : f_i \Rightarrow g_i$. Then there is
a $2$-morphism
\[ a_2 \circ a_1 : f_2 \circ f_1 \Rightarrow g_2 \circ g_1 . \]
This is {\em horizontal composition of $2$-morphisms}.
If we are also given \linebreak $1$-morphisms
$h_i : x_{i-1} \to x_i$ and $2$-morphisms $b_i : g_i \Rightarrow h_i$,
then there is the {\em vertical composition} (of $2$-morphisms)
$b_i \ast a_i : f_i \Rightarrow h_i$.
There are pretty diagrams to display all of this (see \cite[Section 1]{Ye3} or
many other references).
For every $x \in \operatorname{Ob}(\bsym{G})$ there is the identity $1$-morphism
$\bsym{1}_x : x \to x$, and for every
$f \in \bsym{G}(x, y)$ there is the identity $2$-morphism
$\bsym{1}_f : f \Rightarrow f$.
Here are the conditions required for the structure $\bsym{G}$ to be a
$2$-groupoid:
\begin{itemize}
\item The set $\operatorname{Ob}(\bsym{G})$, together with the $1$-morphisms
$f : x \to y$, horizontal composition $g \circ f$, and the identity morphisms
$\bsym{1}_x$, is a groupoid.
We refer to this groupoid as the $1$-truncation of $\bsym{G}$.
\item For every $x, y \in \operatorname{Ob}(\bsym{G})$, the set
$\bsym{G}(x,y)$, together with the $2$-morphisms
$a : f \Rightarrow g$, vertical composition
$b \ast a$, and the identity morphisms $\bsym{1}_f$, is a groupoid. We refer
to it as the vertical groupoid above $(x, y)$.
\item Horizontal composition of $2$-morphisms is associative,
$\bsym{1}_{g \circ f} = \bsym{1}_g \circ \bsym{1}_f$ whenever $g$ and $f$ are
composable, and the $2$-morphisms $\bsym{1}_{\bsym{1}_x}$ are identities for
horizontal composition.
\item The exchange condition: given $f_i, g_i, h_i : x_{i-1} \to x_i$,
$a_i : f_i \Rightarrow g_i$ and $b_i : g_i \Rightarrow h_i$, one has
\[ (b_2 \ast a_2) \circ (b_1 \ast a_1) =
(b_2 \circ b_1) \ast (a_2 \circ a_1) , \]
as $2$-morphisms $f_2 \circ f_1 \Rightarrow h_2 \circ h_1$.
\end{itemize}
A consequence of these four conditions is that $2$-morphisms are invertible for
horizontal composition. Indeed, given $a : f \Rightarrow g$ in
$\bsym{G}(x, y)$, its horizontal inverse
$a^{- \circ} : f^{-1} \Rightarrow g^{-1}$ is given by the formula
$a^{- \circ} = \bsym{1}_{g^{-1}} \circ a^{- \ast} \circ \bsym{1}_{f^{-1}}$,
where
$a^{- \ast} : g \Rightarrow f$ is the vertical inverse of $a$.
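As a quick check of the typing (an easy verification): the source and target
of the horizontal composite
$\bsym{1}_{g^{-1}} \circ a^{- \ast} \circ \bsym{1}_{f^{-1}}$ are
\[ g^{-1} \circ g \circ f^{-1} = f^{-1}
\quad \text{and} \quad
g^{-1} \circ f \circ f^{-1} = g^{-1} \]
respectively, so $a^{- \circ}$ is indeed a $2$-morphism
$f^{-1} \Rightarrow g^{-1}$.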
Suppose $\bsym{H}$ is another $2$-groupoid. A (strict) {\em $2$-functor}
$F : \bsym{G} \to \bsym{H}$ is a collection of functions
\[ \begin{aligned}
F & : \operatorname{Ob}(\bsym{G}) \to \operatorname{Ob}(\bsym{H})
\\
F & : \bsym{G}(x_0, x_1) \to \bsym{H} \bigl( F(x_0), F(x_1) \bigr)
\\
F & : \bsym{G}(x_0, x_1)(g_0, g_1) \to
\bsym{H} \bigl( F(x_0), F(x_1) \bigr) \bigl( F(g_0), F(g_1) \bigr)
\end{aligned} \]
that respect the various compositions and identity morphisms. We denote by
$\cat{2-Grpd}$ the category consisting of $2$-groupoids and $2$-functors between
them.
Consider a $2$-groupoid $\bsym{G}$. There is an equivalence relation on the set
$\operatorname{Ob}(\bsym{G})$, given by existence of $1$-morphisms, i.e.\ $x \sim y$ if
$\bsym{G}(x, y) \neq \emptyset$. We let
$\pi_0(\bsym{G}) := \operatorname{Ob}(\bsym{G}) / {\sim}$.
For objects $x, y \in \operatorname{Ob}(\bsym{G})$ there is an equivalence relation on
the set $\bsym{G}(x, y)$, given by existence of $2$-morphisms:
$f \sim g$ if $\bsym{G}(x, y)(f, g) \neq \emptyset$. We let
\[ \pi_1(\bsym{G}, x, y) := \bsym{G}(x, y) / {\sim} . \]
We define $\pi_1(\bsym{G})$ to be the groupoid with object set
$\operatorname{Ob}(\bsym{G})$, \linebreak
morphism sets $\pi_1(\bsym{G}, x, y)$, and composition induced by horizontal
composition in $\bsym{G}$. Thus $\pi_1(\bsym{G})$ is a quotient groupoid of the
$1$-truncation of $\bsym{G}$, and \linebreak
$\pi_0(\pi_1(\bsym{G})) = \pi_0(\bsym{G})$.
We write $\pi_1(\bsym{G}, x) := \pi_1(\bsym{G}, x, x)$, which is a group.
We also define
\[ \pi_2(\bsym{G}, x) := \bsym{G}(x, x)(\bsym{1}_x, \bsym{1}_x) . \]
This is an abelian group, by the usual Eckmann--Hilton argument (using the
exchange condition).
The homotopy set $\pi_0(\bsym{G})$ and groups $\pi_i(\bsym{G}, x)$ are
functorial in an obvious way.
A morphism $F : \bsym{G} \to \bsym{H}$ in
$\cat{2-Grpd}$ is called a {\em weak equivalence} if the functions
\[ \begin{aligned}
\pi_0(F) & : \pi_0(\bsym{G}) \to \pi_0(\bsym{H})
\\
\pi_1(F, x) & : \pi_1(\bsym{G}, x) \to \pi_1(\bsym{H}, F(x))
\\
\pi_2(F, x) & : \pi_2(\bsym{G}, x) \to \pi_2(\bsym{H}, F(x))
\end{aligned} \]
are bijective for all $x \in \operatorname{Ob}(\bsym{G})$.
It will be useful to relate the concept of $2$-groupoid to the less
familiar concept of {\em crossed module over a groupoid}, recalled below.
For a groupoid $\bsym{G}$ and an object $x \in \operatorname{Ob}(\bsym{G})$
we write $\bsym{G}(x) := \bsym{G}(x, x)$ for the automorphism group of $x$.
The composition in $\bsym{G}$ is $\circ = \circ_{\bsym{G}}$.
Let $\bsym{G}$ and $\bsym{N}$ be groupoids, such that
$\operatorname{Ob}(\bsym{G}) = \operatorname{Ob}(\bsym{N})$. An {\em action} $\Psi$ of $\bsym{G}$
on
$\bsym{N}$ is a collection of group isomorphisms
\[ \Psi(g) : \bsym{N}(x) \xrightarrow{\simeq} \bsym{N}(y) \]
for all $x, y \in \operatorname{Ob}(\bsym{G})$ and $g \in \bsym{G}(x, y)$,
such that
\[ \Psi(h \circ g) = \Psi(h) \circ \Psi(g) \]
whenever $g$ and $h$ are composable, and
$\Psi(\bsym{1}_x) = \bsym{1}_{\bsym{N}(x)}$.
\begin{exa}
Let $\bsym{G}$ be any groupoid. The adjoint action $\operatorname{Ad}_{\bsym{G}}$ of
$\bsym{G}$ on itself is defined by
\[ \operatorname{Ad}_{\bsym{G}}(g)(h) := g \circ h \circ g^{-1} \]
for $g \in \bsym{G}(x, y)$ and $h \in \bsym{G}(x)$.
\end{exa}
A {\em crossed module over a groupoid}, or a {\em crossed groupoid}
for short, is data \linebreak
$(\bsym{G}, \bsym{N}, \Psi, \operatorname{D})$ consisting of:
\begin{itemize}
\item Groupoids $\bsym{G}$ and $\bsym{N}$, such that
$\bsym{N}$ is totally disconnected, and
$\operatorname{Ob}(\bsym{N}) = \operatorname{Ob}(\bsym{G})$.
\item An action $\Psi$ of $\bsym{G}$ on $\bsym{N}$, called the {\em twisting}.
\item A morphism of groupoids (i.e.\ a functor)
$\operatorname{D} : \bsym{N} \to \bsym{G}$, called the {\em feedback}, which is the
identity on objects.
\end{itemize}
These are the conditions:
\begin{enumerate}
\rmitem{i} The morphism $\operatorname{D}$ is $\bsym{G}$-equivariant with respect to
the actions $\Psi$ and $\operatorname{Ad}_{\bsym{G}}$. Namely
\[ \operatorname{D}(\Psi(g)(a)) = \operatorname{Ad}_{\bsym{G}}(g)(\operatorname{D}(a)) \]
in the group $\bsym{G}(y)$, for any $x, y \in \operatorname{Ob}(\bsym{G})$,
$g \in \bsym{G}(x, y)$ and $a \in \bsym{N}(x)$.
\rmitem{ii} For any $x \in \operatorname{Ob}(\bsym{G})$ and
$a \in \bsym{N}(x)$ there is equality
\[ \Psi(\operatorname{D}(a)) = \operatorname{Ad}_{\bsym{N}(x)}(a) , \]
as automorphisms of the group $\bsym{N}(x)$.
\end{enumerate}
\begin{exa}
If $\bsym{G}$ and $\bsym{N}$ are groups, namely
$\operatorname{Ob}(\bsym{G}) = \operatorname{Ob}(\bsym{N}) = \{ 0 \}$, then
a crossed groupoid is just a crossed module.
\end{exa}
\begin{exa} \label{exa:100}
Let $G$ be a group acting on a set $X$. For $x \in X$ let $G(x)$ denote the
stabilizer of $x$. Let $\{ N_x \}_{x \in X}$ be a
collection of groups. Assume that for every $g \in G$ and $x \in X$ there is
given a group isomorphism $\Psi(g) : N_x \xrightarrow{\simeq} N_{g(x)}$, and these satisfy the
functoriality conditions of an action.
Also assume there are
group homomorphisms $\operatorname{D}_x : N_x \to G(x)$ such that
\[ \operatorname{Ad}_{G}(g) \circ \operatorname{D}_x = \operatorname{D}_{g(x)} \circ \, \Psi(g) \]
for any $g \in G$ and $x \in X$.
Define $\bsym{G}$ to be the transformation groupoid associated to the
action of $G$ on $X$. And define $\bsym{N}$ to be the totally
disconnected groupoid with $\operatorname{Ob}(\bsym{N}) := X$ and
$\bsym{N}(x) := N_x$. Then
$( \bsym{G}, \bsym{N}, \Psi, \operatorname{D})$ is a crossed
groupoid, which we call the {\em transformation crossed groupoid} associated to
the action of $G$ on $\{ N_x \}_{x \in X}$.
\end{exa}
\begin{exa}
Suppose $\bsym{G}$ is any groupoid, and $\bsym{N}$ is a {\em normal subgroupoid}
of $\bsym{G}$, in the sense of \cite{Bw, Ye4}. Let
$\operatorname{D} : \bsym{N}\to \bsym{G}$ be the inclusion, and let
$\Psi$ be the restriction of $\operatorname{Ad}_{\bsym{G}}$ to $\bsym{N}$. Then
$(\bsym{G}, \bsym{N}, \Psi, \operatorname{D})$ is a crossed groupoid.
\end{exa}
It is known that crossed groupoids are the same as $2$-groupoids
(cf.\ \cite{Bw}). We will now give a precise statement of this fact.
\begin{prop} \label{prop:102}
Let $(\bsym{H}, \bsym{N}, \Psi, \operatorname{D})$
be a crossed groupoid. Then there is a unique $2$-groupoid
$\bsym{G}$ with these properties:
\begin{enumerate}
\rmitem{i} The $1$-truncation of $\bsym{G}$ is the same groupoid as
$\bsym{H}$. Namely $\operatorname{Ob}(\bsym{G}) = \operatorname{Ob}(\bsym{H})$,
$\bsym{G}(x, y) = \bsym{H}(x, y)$, the identity morphisms $\bsym{1}_x$ are the
same, and the horizontal composition is
$g \circ_{\bsym{G}} f = g \circ_{\bsym{H}} f$
for $f : x \to y$ and $g : y \to z$.
\rmitem{ii} For any $f, g : x \to y$ in $\bsym{G}$ we have
\[ \bsym{G}(x, y)(f, g) =
\{ a \in \bsym{N}(x) \mid g = f \circ_{\bsym{H}} \operatorname{D}(a) \} . \]
The identity morphism $\bsym{1}_f \in \bsym{G}(x, y)(f, f)$ is
$\bsym{1}_x \in \bsym{N}(x)$.
Given $h : x \to y$, $a : f \Rightarrow g$ and $b : g \Rightarrow h$,
the vertical composition is
$b \ast_{\bsym{G}} a = a \circ_{\bsym{N}} b$.
\rmitem{iii} For any $x_0, x_1, x_2 \in \operatorname{Ob}(\bsym{G})$, any
$f_i, g_i : x_{i-1} \to x_i$ and any
$a_i : f_i \Rightarrow g_i$ in $\bsym{G}$,
the horizontal composition $a_2 \circ_{\bsym{G}} a_1$ satisfies
\[ a_2 \circ_{\bsym{G}} a_1 =
\Psi(f_1^{-1})(a_2) \circ_{\bsym{N}} a_1 . \]
\end{enumerate}
Moreover, any $2$-groupoid $\bsym{G}$ arises this way.
\end{prop}
\begin{proof}
It is easy to verify that the conditions of a $2$-groupoid hold.
Conversely, suppose $\bsym{G}$ is any $2$-groupoid. Define the groupoid
$\bsym{G}_1$ to be the $1$-truncation of $\bsym{G}$.
For any $x \in \operatorname{Ob}(\bsym{G})$ consider the set of $2$-morphisms
\begin{equation} \label{eqn:105}
\bsym{G}_2(x) := \coprod_{g \in \bsym{G}(x, x)}
\bsym{G}(x, x)(\bsym{1}_x, g) .
\end{equation}
This is a group under horizontal composition $\circ_{\bsym{G}}$
of $2$-morphisms, with identity element
$\bsym{1}_{\bsym{1}_x}$. There is a group homomorphism
$\operatorname{D} : \bsym{G}_2(x) \to \bsym{G}_1(x)$, defined by
\begin{equation} \label{eqn:104}
\operatorname{D}(a : \bsym{1}_x \Rightarrow g) := g .
\end{equation}
Let $\bsym{G}_2$ be the totally disconnected groupoid with set of objects
$\operatorname{Ob}(\bsym{G})$, and automorphism groups $\bsym{G}_2(x)$ as defined above.
Then $\operatorname{D} : \bsym{G}_2 \to \bsym{G}_1$
is a morphism of groupoids.
A $1$-morphism $f : x \to y$ in $\bsym{G}$ induces a group isomorphism
\[ \operatorname{Ad}_{\bsym{G}_1 \curvearrowright \bsym{G}_{2}}(f) :
\bsym{G}_2(x) \to \bsym{G}_2(y) , \]
with formula
\begin{equation} \label{eqn:106}
\operatorname{Ad}_{\bsym{G}_1 \curvearrowright \bsym{G}_{2}}(f)(a) :=
\bsym{1}_f \circ a \circ \bsym{1}_{f^{-1}}
\end{equation}
for $a \in \bsym{G}_2(x)$.
It is a simple verification that
$(\bsym{G}_1, \bsym{G}_{2},
\operatorname{Ad}_{\bsym{G}_1 \curvearrowright \bsym{G}_{2}}, \operatorname{D})$
is a crossed groupoid.
\end{proof}
\begin{rem}
An amusing consequence of the proof above is that the vertical
composition in a $2$-groupoid can be recovered from the horizontal
compositions.
\end{rem}
In view of Proposition \ref{prop:102}, in a $2$-groupoid $\bsym{G}$
we can talk about the group of $2$-morphisms
$\bsym{G}_2(x)$ for any object $x$. There is a feedback homomorphism
\[ \operatorname{D} : \bsym{G}_2(x) \to \bsym{G}_1(x) , \]
and a twisting
\[ \operatorname{Ad}(g) =
\operatorname{Ad}_{\bsym{G}_1 \curvearrowright \bsym{G}_{2}}(g) :
\bsym{G}_2(x) \to \bsym{G}_2(y) \]
for any $g : x \to y$.
These satisfy the conditions of a crossed groupoid.
\section{The Deligne $2$-Groupoid}
\label{sec:del-2-grpd}
\begin{dfn}
A DG Lie algebra $\mfrak{g} = \bigoplus_{i \in \mbb{Z}}\, \mfrak{g}^i$ is
said to be of {\em quantum type} if $\mfrak{g}^i = 0$ for all $i < -1$.
A DG Lie algebra
$\til{\mfrak{g}} = \bigoplus_{i \in \mbb{Z}}\, \til{\mfrak{g}}^i$ is said to be of
{\em quasi quantum type} if there exists
a quantum type DG Lie algebra $\mfrak{g}$, and a DG Lie algebra quasi-isomorphism
$\til{\mfrak{g}} \to \mfrak{g}$.
\end{dfn}
\begin{exa} \label{exa:102}
Let $C$ be a commutative $\mbb{K}$-algebra. The DG Lie algebras
$\mcal{T}_{\mrm{poly}}(C)$ and $\mcal{D}_{\mrm{poly}}(C)$
that occur in deformation quantization are of quantum type (and hence the
name).
Let $\mfrak{g}$ be a quantum type DG Lie algebra. Consider the DG Lie algebra
$\til{\mfrak{g}} := (\operatorname{L} \circ \operatorname{C})(\mfrak{g})$;
this is the bar-cobar construction discussed in Section \ref{sec:L-infty}.
There is a quasi-isomorphism
$\zeta_{\mfrak{g}} : \til{\mfrak{g}} \to \mfrak{g}$,
so $\til{\mfrak{g}}$ is of quasi quantum type (but is unbounded in the negative
direction).
\end{exa}
Suppose $\mfrak{g}$ is a quantum type DG Lie algebra, and $R$ is artinian. Then the
{\em Deligne $2$-groupoid} of $\mfrak{m} \otimes \mfrak{g}$ is defined; see \cite{Ge}.
In this section we show how this construction can be extended in two ways:
$\mfrak{g}$ can be of quasi quantum type, and $R$ can be any complete parameter
algebra (not necessarily artinian).
Now let $\mfrak{g}$ be any DG Lie algebra, and $(R, \mfrak{m})$ any parameter algebra. We have
the set $\operatorname{MC}(\mfrak{m} \hatotimes \mfrak{g})$ of MC elements, and the gauge
group $\operatorname{exp}(\mfrak{m} \hatotimes \mfrak{g}^0)$.
Given $\omega \in \operatorname{MC}(\mfrak{m} \hatotimes \mfrak{g})$ there is an $R$-bilinear function
$[-,-]_{\omega}$ on $\mfrak{m} \hatotimes \mfrak{g}^{-1}$, whose formula is
\begin{equation}
[\alpha_1, \alpha_2]_{\omega} := [ \d_{\omega}(\alpha_1), \alpha_2 ] ,
\end{equation}
where $\d_{\omega} = \d + \operatorname{ad}(\omega)$.
Define
\begin{equation}
\a_{\omega} := \operatorname{Coker}(\d_{\omega} : \mfrak{m} \hatotimes \mfrak{g}^{-2} \to
\mfrak{m} \hatotimes \mfrak{g}^{-1}) ,
\end{equation}
so there is an exact sequence of $R$-modules
\begin{equation} \label{eqn:107}
\mfrak{m} \hatotimes \mfrak{g}^{-2} \xrightarrow{\d_{\omega}} \mfrak{m} \hatotimes \mfrak{g}^{-1} \to \a_{\omega} \to 0 .
\end{equation}
\begin{prop} \label{prop:110}
Take any $\omega \in \operatorname{MC}(\mfrak{m} \hatotimes \mfrak{g})$.
\begin{enumerate}
\item Let $\alpha \in \mfrak{m} \hatotimes \mfrak{g}^{-1}$ and
$\beta \in \mfrak{m} \hatotimes \mfrak{g}^{-2}$.
Write $\alpha' := \d_{\omega}(\beta) \in \mfrak{m} \hatotimes \mfrak{g}^{-1}$.
Then
\[ [\alpha, \alpha']_{\omega} , \ [\alpha', \alpha]_{\omega} \in \d_{\omega}(\mfrak{m} \hatotimes \mfrak{g}^{-2}) . \]
\item The induced $R$-bilinear function $[-,-]_{\omega}$ on $\a_{\omega}$ is a Lie
bracket. Thus $\a_{\omega}$ is a Lie algebra.
\item Let $g \in \operatorname{exp}(\mfrak{m} \hatotimes \mfrak{g}^0)$, and let
$\omega' := \operatorname{Af}(g)(\omega) \in \operatorname{MC}(\mfrak{m} \hatotimes \mfrak{g})$. Then
\[ \operatorname{Ad}(g) : \a_{\omega} \to \a_{\omega'} \]
is an isomorphism of Lie algebras.
\end{enumerate}
\end{prop}
\begin{proof}
(1) and (2) are easy direct calculations. (3) is a consequence of Proposition
\ref{prop:100}.
\end{proof}
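To illustrate, here is a typical step in the calculation for item (2), with
the usual sign conventions. For $\alpha_1, \alpha_2 \in \mfrak{m} \hatotimes \mfrak{g}^{-1}$,
the graded Leibniz rule for $\d_{\omega}$ and the graded antisymmetry of the
bracket give
\[ [\alpha_1, \alpha_2]_{\omega} + [\alpha_2, \alpha_1]_{\omega} =
[\d_{\omega}(\alpha_1), \alpha_2] - [\alpha_1, \d_{\omega}(\alpha_2)] =
\d_{\omega} \bigl( [\alpha_1, \alpha_2] \bigr)
\in \d_{\omega}(\mfrak{m} \hatotimes \mfrak{g}^{-2}) , \]
so the induced bracket on $\a_{\omega}$ is antisymmetric.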
\begin{prop} \label{prop:103}
Assume either of these two conditions holds\tup{:}
\begin{itemize}
\rmitem{i} $R$ is artinian.
\rmitem{ii} $\mfrak{g}$ is of quasi quantum type.
\end{itemize}
Then for any $\omega \in \operatorname{MC}(\mfrak{m} \hatotimes \mfrak{g})$ the
$R$-module $\a_{\omega}$ is $\mfrak{m}$-adically complete. Hence
$\a_{\omega}$ is a pronilpotent Lie algebra.
\end{prop}
\begin{proof}
If $R$ is artinian then any $R$-module is $\mfrak{m}$-adically complete.
Now assume $R$ is not artinian (namely it is a complete noetherian ring).
If $\mfrak{g}$ is of quantum type then
$\a_{\omega} = \mfrak{m} \hatotimes \mfrak{g}^{-1}$, which is $\mfrak{m}$-adically complete (cf.\
\cite[Corollary 3.5]{Ye5}).
For any DG Lie algebra $\mfrak{g}$ the canonical homomorphism
\begin{equation} \label{eqn:108}
\tau_{\omega} : \a_{\omega} \to \what{\a_{\omega}} =
\lim_{\leftarrow i}\, (R_i \otimes_R \a_{\omega})
\end{equation}
is surjective. Here is the reason: the completion functor $M \mapsto \what{M}$
is not exact (neither right nor left exact), but it does preserve surjections
(see \cite[Proposition 1.2]{Ye5}). Combining this with the
exact sequence (\ref{eqn:107}), and the fact that $\mfrak{m} \hatotimes \mfrak{g}^{-1}$ is
complete, it follows that the homomorphism $\tau_{\omega}$ is surjective.
It remains to prove that if there exists a DG Lie algebra quasi-isomorphism
$\phi : \mfrak{g} \to \mfrak{h}$, for some quantum type DG Lie algebra $\mfrak{h}$, then the
homomorphism $\tau_{\omega}$ is injective. Since
$\d_{\omega} : \a_{\omega} \to \mfrak{m} \hatotimes \mfrak{g}^{0}$ factors through
$\what{\a_{\omega}}$, it follows that
\[ \operatorname{Ker}(\tau_{\omega}) \subset
\operatorname{Ker}(\d_{\omega} : \a_{\omega} \to \mfrak{m} \hatotimes \mfrak{g}^{0}) =
\mrm{H}^{-1}(\mfrak{m} \hatotimes \mfrak{g})_{\omega} . \]
Let $\chi := \phi(\omega)$. We have a commutative diagram with exact rows
\[ \UseTips \xymatrix @C=5ex @R=5ex {
0
\ar[r]
&
\mrm{H}^{-1}(\mfrak{m} \hatotimes \mfrak{g})_{\omega}
\ar[r]
\ar[d]_{\mrm{H}^{-1}(\bsym{1} \hatotimes \phi)}
&
\a_{\omega}
\ar[r]^(0.35){\d_{\omega}}
\ar[d]
&
(\mfrak{m} \hatotimes \mfrak{g}^0)_{\omega}
\ar[d]
\\
0
\ar[r]
&
\mrm{H}^{-1}(\mfrak{m} \hatotimes \mfrak{h})_{\chi}
\ar[r]
&
\a_{\chi}
\ar[r]^(0.35){\d_{\chi}}
&
(\mfrak{m} \hatotimes \mfrak{h}^0)_{\chi} \ .
} \]
Because $\phi$ is a quasi-isomorphism, so is
\[ \bsym{1} \hatotimes \phi : (\mfrak{m} \hatotimes \mfrak{g})_{\omega} \to
(\mfrak{m} \hatotimes \mfrak{h})_{\chi} \]
(we are using Lemma \ref{lem:6}(1)).
Hence the vertical arrow $\mrm{H}^{-1}(\bsym{1} \hatotimes \phi)$ in the diagram is an
isomorphism of $R$-modules. It sends $\operatorname{Ker}(\tau_{\omega})$ bijectively to
$\operatorname{Ker}(\tau_{\chi} : \a_{\chi} \to \what{\a_{\chi}})$.
We know that $\a_{\chi}$ is complete, so $\operatorname{Ker}(\tau_{\chi}) = 0$.
\end{proof}
\begin{cor} \label{cor:101}
In the situation of Proposition \tup{\ref{prop:103}}, for every
$\omega \in \operatorname{MC}(\mfrak{m} \hatotimes \mfrak{g})$
there is a pronilpotent group
$N_{\omega} := \operatorname{exp}(\a_{\omega})$, and a group homomorphism
\[ \operatorname{D}_{\omega} := \operatorname{exp}(\d_{\omega}) : N_{\omega} \to
\operatorname{exp}(\mfrak{m} \hatotimes \mfrak{g}^0)(\omega) . \]
Given any $g \in \operatorname{exp}(\mfrak{m} \hatotimes \mfrak{g}^0)$
and $\omega \in \operatorname{MC}(\mfrak{m} \hatotimes \mfrak{g})$, let
$\omega' := \operatorname{Af}(g)(\omega) \in$ \linebreak $\operatorname{MC}(\mfrak{m} \hatotimes \mfrak{g})$. Then there is a
group isomorphism
\[ \Psi(g) := \operatorname{exp}(\operatorname{Ad}(g)) : N_{\omega} \xrightarrow{\simeq} N_{\omega'} , \]
and the diagram
\[ \UseTips \xymatrix @C=5ex @R=5ex {
N_{\omega}
\ar[d]_{\Psi(g)}
\ar[r]^(0.34){\operatorname{D}_{\omega}}
&
\operatorname{exp}(\mfrak{m} \hatotimes \mfrak{g}^0)(\omega)
\ar[d]^{\operatorname{Ad}(g)}
\\
N_{\omega'}
\ar[r]^(0.34){\operatorname{D}_{\omega'}}
&
\operatorname{exp}(\mfrak{m} \hatotimes \mfrak{g}^0)(\omega')
} \]
is commutative.
The isomorphisms $\Psi(g)$ are an action of the group
$\operatorname{exp}(\mfrak{m} \hatotimes \mfrak{g}^0)$ on the collection of groups
$\{ N_{\omega} \}_{\omega \in \operatorname{MC}(\mfrak{m} \hatotimes \mfrak{g})}$.
Moreover, for any $a, a' \in N_{\omega}$ we have
\[ \Psi( \operatorname{D}_{\omega}(a))(a') =
\operatorname{Ad}_{N_{\omega}}(a)(a') . \]
\end{cor}
\begin{proof}
Combine Propositions \ref{prop:100}, \ref{prop:110} and \ref{prop:103}.
\end{proof}
\begin{rem}
The Lie algebra $\a_{\omega}^{\mrm{r}}$ and the group $N_{\omega}^{\mrm{r}}$ that
occur in Section
\ref{sec:red-del} are quotients, respectively, of the Lie algebra $\a_{\omega}$ and
the group $N_{\omega}$ that occur here.
\end{rem}
\begin{dfn} \label{dfn:103}
Let $\mfrak{g}$ be a DG Lie algebra and $R$ a parameter algebra.
Assume either of these two conditions holds:
\begin{itemize}
\rmitem{i} $R$ is artinian.
\rmitem{ii} $\mfrak{g}$ is of quasi quantum type.
\end{itemize}
The {\em Deligne $2$-groupoid} $\mbf{Del}^2(\mfrak{g}, R)$
is the transformation $2$-groupoid (see Example \ref{exa:100} and Proposition
\ref{prop:102}) associated to the action of the group
$\operatorname{exp}(\mfrak{m} \hatotimes \mfrak{g}^0)$
on the collection of groups
$\{ N_{\omega} \}_{\omega \in \operatorname{MC}(\mfrak{m} \hatotimes \mfrak{g})}$.
The feedback $\operatorname{D}_{\omega}$ and the twisting $\Psi(g)$ are specified in
Corollary \ref{cor:101}.
\end{dfn}
\begin{prop}
Consider pairs $(\mfrak{g}, R)$ such that the Deligne $2$-groupoid $\mbf{Del}^2(\mfrak{g}, R)$
is defined, i.e.\ either of the two conditions in Definition \tup{\ref{dfn:103}}
holds.
\begin{enumerate}
\item The Deligne $2$-groupoid $\mbf{Del}^2(\mfrak{g}, R)$ depends functorially on
$\mfrak{g}$ and $R$.
\item The reduced Deligne groupoid satisfies
\[ \mbf{Del}^{\mrm{r}}(\mfrak{g}, R) = \pi_1(\mbf{Del}^2(\mfrak{g}, R)) . \]
\end{enumerate}
\end{prop}
\begin{proof}
This is immediate from the constructions.
\end{proof}
\begin{thm} \label{thm:105}
Let $R$ be a parameter algebra, let $\mfrak{g}$ and $\mfrak{h}$ be
DG Lie algebras, and let $\phi : \mfrak{g} \to \mfrak{h}$ be a DG Lie algebra
quasi-isomorphism. Assume either of these two conditions holds:
\begin{itemize}
\rmitem{i} $R$ is artinian.
\rmitem{ii} $\mfrak{g}$ and $\mfrak{h}$ are of quasi quantum type.
\end{itemize}
Then the morphism of $2$-groupoids
\[ \mbf{Del}^2(\phi, R) : \mbf{Del}^2(\mfrak{g}, R) \to \mbf{Del}^2(\mfrak{h}, R) \]
is a weak equivalence.
\end{thm}
\begin{proof}
Since
\[ \pi_0(\mbf{Del}^2(\phi, R)) = \pi_0(\mbf{Del}^{\mrm{r}}(\phi, R)) \]
and
\[ \pi_1(\mbf{Del}^2(\phi, R), \omega) =
\pi_1(\mbf{Del}^{\mrm{r}}(\phi, R), \omega) \]
for all $\omega \in \operatorname{MC}(\mfrak{m} \hatotimes \mfrak{g})$,
these are bijections by Theorem \ref{thm:2}.
Next, there is a functorial bijection
\begin{equation} \label{eqn:110}
\begin{aligned}
\exp & : \mrm{H}^{-1}(\mfrak{m} \hatotimes \mfrak{g})_{\omega} \xrightarrow{\simeq}
\\
& \qquad
\operatorname{Ker} \bigl( \operatorname{D}_{\omega} : N_{\omega} \to \operatorname{exp}(\mfrak{m} \hatotimes \mfrak{g}^0) \bigr) =
\pi_2(\mbf{Del}^2(\mfrak{g}, R), \omega) ;
\end{aligned}
\end{equation}
cf.\ the commutative diagram in the proof of Proposition \ref{prop:103}.
So
\[ \pi_2(\mbf{Del}^2(\phi, R), \omega) =
\mrm{H}^{-1}(\bsym{1}_{\mfrak{m}} \hatotimes \phi) \]
is bijective.
\end{proof}
\begin{rem}
It is possible to define a Deligne $2$-groupoid $\mbf{Del}^2(\mfrak{g}, R)$ even when
both conditions in Definition \ref{dfn:103} fail, by taking
$N_{\omega} := \exp(\what{\a_{\omega}})$.
However the function $\exp$ in (\ref{eqn:110}) might fail to be bijective.
Hence, in the situation of Theorem \ref{thm:105}, the homomorphism
$\pi_2(\mbf{Del}^2(\phi, R), \omega)$ might fail to be bijective.
\end{rem}
\section{$\mrm{L}_{\infty}$ Morphisms and Coalgebras}
\label{sec:L-infty}
We shall use the coalgebra approach to $\mrm{L}_{\infty}$ morphisms, following
\cite{Qu}, \cite[Section 4]{Ko2}, \cite{Hi2}, \cite[Section 3.7]{CKTB} and
\cite[Section 3]{Ye2}.
Let us denote by $\cat{DGLie}(\mbb{K})$ the category of DG Lie algebras over $\mbb{K}$,
and by $\cat{DGCog}(\mbb{K})$ the category of DG unital cocommutative coalgebras
over $\mbb{K}$. Note that commutativity here is in the graded (or super) sense.
Recall that a unital coalgebra $C$ has a comultiplication
$\Delta : C \to C \otimes C$,
a counit $\epsilon : C \to \mbb{K}$ and a unit $1 \in C$. The differential $\d$ has to be
a coderivation of degree $1$. The conditions on the unit are
$\Delta(1) = 1 \otimes 1$, $\d(1) = 0$ and $\epsilon(1) = 1$ in $\mbb{K}$.
Morphisms in $\cat{DGCog}(\mbb{K})$ are $\mbb{K}$-linear homomorphisms
$C \to D$ respecting
$\Delta$, $\epsilon$ and $1$. We write $C^+ := \operatorname{Ker}(\epsilon)$.
Let $V = \bigoplus\nolimits_{i \in \mbb{Z}} V^i$ be a graded $\mbb{K}$-module. The
symmetric algebra over $\mbb{K}$ of the graded module $V$ is
\[ \operatorname{Sym}(V) = \bigoplus_{j \in \mbb{N}}\, \operatorname{Sym}^j(V) . \]
Note that we are in the super-commutative setting, so
$\operatorname{Sym}^j(V) = {\textstyle \bigwedge}^j(V[1])[-j]$.
We view $\operatorname{Sym}(V)$ as a Hopf algebra over $\mbb{K}$,
which is commutative and cocommutative. The unit is
$1 \in \mbb{K} = \operatorname{Sym}^0(V)$, and the counit is the projection
$\epsilon : \operatorname{Sym}(V) \to \mbb{K}$.
The Hopf algebra $\operatorname{Sym}(V)$ is bigraded, with one grading coming from the
grading of $V$, which we call degree. The second grading is called order; by
definition the $j$-th order component of $\operatorname{Sym}(V)$ is $\operatorname{Sym}^j(V)$.
Let us write
\[ \operatorname{Sym}^+(V) := \bigoplus_{j \geq 1}\, \operatorname{Sym}^j(V) =
\operatorname{Ker}(\epsilon) . \]
The projection $\operatorname{Sym}(V) \to \operatorname{Sym}^1(V)$ is denoted by $\ln$.
So for an element $c \in \operatorname{Sym}(V)$, its first order term is $\ln(c)$.
Recall that giving a homomorphism of unital graded coalgebras
$\Psi : \operatorname{Sym}(V) \to \operatorname{Sym}(W)$
is the same as giving its sequence of Taylor coefficients
$\{ \partial^j\Psi \}_{j \geq 1}$,
where the $j$-th Taylor coefficient of $\Psi$ is the $\mbb{K}$-linear function
\[ \partial^j \Psi := (\ln \circ \Psi)|_{\operatorname{Sym}^j(V)} :
\operatorname{Sym}^j(V) \to W \]
of degree $0$.
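To illustrate in low orders (an easy check against compatibility with
$\Delta$ and $\epsilon$): writing $\psi_j := \partial^j \Psi$, for elements
$v_1, v_2 \in V$ one has
\[ \Psi(v_1) = \psi_1(v_1) , \qquad
\Psi(v_1 \, v_2) = \psi_2(v_1 \, v_2) + \psi_1(v_1) \, \psi_1(v_2) , \]
and the higher order components of $\Psi$ are determined similarly.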
The free graded Lie algebra over the graded $\mbb{K}$-module $V$ is denoted by \linebreak
$\operatorname{FrLie}(V)$.
There is a functor
\[ \operatorname{C} : \cat{DGLie}(\mbb{K}) \to \cat{DGCog}(\mbb{K}) \]
called the {\em bar construction}. Given a DG Lie algebra $\mfrak{g}$, the
corresponding DG coalgebra is
$\operatorname{C}(\mfrak{g}) := \operatorname{Sym}(\mfrak{g}[1])$, with a coderivation that encodes both the
differential of $\mfrak{g}$ and its Lie bracket.
There is another functor
\[ \operatorname{L} : \cat{DGCog}(\mbb{K}) \to \cat{DGLie}(\mbb{K}) \]
called the {\em cobar construction}. Given a DG coalgebra $C$, the
corresponding DG Lie algebra is $\operatorname{FrLie}(C^+[-1])$, with a derivation that
encodes both the differential of $C$ and its comultiplication.
If $\mfrak{g} \in \cat{DGLie}(\mbb{K})$ and $C \in \cat{DGCog}(\mbb{K})$, then the set of graded
$\mbb{K}$-linear homomorphisms $\operatorname{Hom}^{\mrm{gr}}(C, \mfrak{g})$
is a DG Lie algebra, and there are functorial bijections
\begin{equation} \label{eqn:60}
\operatorname{Hom}_{\cat{DGLie}(\mbb{K})}(\operatorname{L}(C), \mfrak{g}) \cong
\operatorname{Hom}_{\cat{DGCog}(\mbb{K})}(C, \operatorname{C}(\mfrak{g})) \cong
\operatorname{MC}(\operatorname{Hom}^{\mrm{gr}}(C, \mfrak{g})) .
\end{equation}
Thus the functors $\operatorname{C}$ and $\operatorname{L}$ are adjoint.
We denote the adjunction morphisms by
\[ \zeta_{\mfrak{g}} : (\operatorname{L} \circ \operatorname{C})(\mfrak{g}) \to \mfrak{g} \]
and
\[ \th_C : C \to (\operatorname{C} \circ \operatorname{L})(C) . \]
It is known that the functor $\operatorname{C}$ is faithful.
Let $\mfrak{g}$ and $\mfrak{h}$ be DG Lie algebras.
By definition, an {\em $\mrm{L}_{\infty}$ morphism} $\Phi : \mfrak{g} \to \mfrak{h}$
is a morphism $\operatorname{C}(\mfrak{g}) \to \operatorname{C}(\mfrak{h})$ in $\cat{DGCog}(\mbb{K})$.
Let us define $\cat{DGLie}_{\infty}(\mbb{K})$ to be the category whose objects are
the DG Lie algebras, and the morphisms are the $\mrm{L}_{\infty}$ morphisms
between them. Composition of $\mrm{L}_{\infty}$ morphisms is that of
$\cat{DGCog}(\mbb{K})$. Thus we get a full and faithful functor
\[ \operatorname{C}_{\infty} : \cat{DGLie}_{\infty}(\mbb{K}) \to \cat{DGCog}(\mbb{K}) \]
whose restriction to $\cat{DGLie}(\mbb{K})$ is $\operatorname{C}$.
Take an $\mrm{L}_{\infty}$ morphism
$\Phi : \mfrak{g} \to \mfrak{h}$. Its $i$-th Taylor coefficient
$\partial^i \Phi := \partial^i (\operatorname{C}_{\infty}(\Phi))$ is a $\mbb{K}$-linear function
\[ \partial^i \Phi : {\textstyle \bigwedge}^i \mfrak{g} \to \mfrak{h} \]
of degree $1 - i$. Writing $\phi_i := \partial^i \Phi$, the sequence of
functions $\{ \phi_i \}_{i \geq 1}$ satisfies these equations:
\[ \begin{aligned}
& \mrm{d} \bigl( \phi_i(\gamma_1 \wedge \cdots
\wedge \gamma_i) \bigr) - \sum_{k = 1}^i \pm
\phi_i \bigl( \gamma_1 \wedge \cdots \wedge
\mrm{d}(\gamma_k) \wedge \cdots \wedge \gamma_i \bigr) = \\
& \quad {\smfrac{1}{2}} \sum_{\substack{k, l \geq 1 \\ k + l = i}}
{\smfrac{1}{k! \, l!}} \sum_{\sigma \in \mfrak{S}_i} \pm
\bigl[ \phi_k (\gamma_{\sigma(1)} \wedge \cdots \wedge
\gamma_{\sigma(k)}),
\phi_l (\gamma_{\sigma(k + 1)} \wedge \cdots \wedge
\gamma_{\sigma(i)}) \bigr] \\
& \qquad +
\sum_{k < l} \pm
\phi_{i-1} \bigl( [\gamma_k, \gamma_l] \wedge
\gamma_{1} \wedge \cdots \gamma_k \hspace{-1em} \diagup \cdots
\gamma_l \hspace{-1em} \diagup
\cdots \wedge \gamma_{i} \bigr) .
\end{aligned} \]
Here $\gamma_k \in \mfrak{g}$ are homogeneous elements,
$\mfrak{S}_i$ is the permutation group of $\{ 1, \ldots, i \}$,
and the signs depend only on the indices, the permutations and the
degrees of the elements $\gamma_k$. The
signs are written explicitly in \cite[Section 3.6]{CKTB}.
Conversely, any sequence $\{ \phi_i \}_{i \geq 1}$ of homomorphisms satisfying
these equations determines an $\mrm{L}_{\infty}$ morphism.
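For example, for $i = 1$ both sums on the right hand side are empty, and the
equation says that
\[ \mrm{d} \bigl( \phi_1(\gamma_1) \bigr) = \phi_1 \bigl( \mrm{d}(\gamma_1) \bigr) , \]
i.e.\ $\phi_1$ is a homomorphism of complexes. For $i = 2$ the equation reads,
up to the signs,
\[ \mrm{d} \bigl( \phi_2(\gamma_1 \wedge \gamma_2) \bigr) \pm
\phi_2 \bigl( \mrm{d}(\gamma_1) \wedge \gamma_2 \bigr) \pm
\phi_2 \bigl( \gamma_1 \wedge \mrm{d}(\gamma_2) \bigr) =
\bigl[ \phi_1(\gamma_1), \phi_1(\gamma_2) \bigr] \pm
\phi_1 \bigl( [\gamma_1, \gamma_2] \bigr) , \]
so $\phi_1$ respects the Lie brackets up to the homotopy $\phi_2$.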
Let $\Phi : \mfrak{g} \to \mfrak{h}$ be an $\mrm{L}_{\infty}$ morphism.
The first Taylor coefficient $\partial^1 \Phi : \mfrak{g} \to \mfrak{h}$ is a homomorphism of
complexes of $\mbb{K}$-modules (forgetting the Lie brackets).
If $\partial^i \Phi = 0$ for all $i \geq 2$, then
$\partial^1 \Phi$ is a DG Lie algebra homomorphism.
If $\partial^1 \Phi$ is a quasi-isomorphism, then $\Phi$ is called an
{\em $\mrm{L}_{\infty}$ quasi-isomorphism}.
\begin{lem} \label{lem:20}
Let $\mfrak{g} \in \cat{DGLie}(\mbb{K})$, and define
$C := \operatorname{C}(\mfrak{g})$, $\til{\mfrak{g}} := \operatorname{L}(C)$ and $\til{C} := \operatorname{C}(\til{\mfrak{g}})$.
Consider the coalgebra homomorphisms
$\th_C : C \to \til{C}$ and $\operatorname{C}(\zeta_{\mfrak{g}}) : \til{C} \to C$.
Then
\[ \operatorname{C}(\zeta_{\mfrak{g}}) \circ \th_C = \bsym{1}_C , \]
the identity automorphism of $C$.
\end{lem}
\begin{proof}
This is true for any pair of adjoint functors; see
\cite[Section IV.1 Theorem 1]{Ma}.
\end{proof}
\begin{lem} \label{lem:21}
Let $\Phi : \mfrak{g} \to \mfrak{h}$ be an $\mrm{L}_{\infty}$ quasi-isomorphism. Then
\[ (\operatorname{L} \circ \operatorname{C}_{\infty})(\Phi) :
(\operatorname{L} \circ \operatorname{C})(\mfrak{g}) \to (\operatorname{L} \circ \operatorname{C})(\mfrak{h}) \]
is a DG Lie algebra quasi-isomorphism.
\end{lem}
\begin{proof}
Let us write
$C := \operatorname{C}(\mfrak{g})$, $D := \operatorname{C}(\mfrak{h})$ and $\Psi := \operatorname{C}_{\infty}(\Phi)$.
Put on $C$ the ascending filtration
$F_j C := \bigoplus\nolimits_{k = 0}^j \operatorname{Sym}^k(\mfrak{g}[1])$;
so that $\{ F_j C \}_{j \geq 0}$ is an admissible coalgebra filtration, in the
sense of \cite[Definition 4.4.1]{Hi2}. Likewise there is an
admissible coalgebra filtration $\{ F_j D \}_{j \geq 0}$ on $D$.
According to step 2 of the proof of \cite[Proposition 4.4.3]{Hi2}, the
coalgebra homomorphism $\Psi$ is a filtered quasi-isomorphism.
Hence by \cite[Proposition 4.4.4]{Hi2} the DG Lie algebra homomorphism
$(\operatorname{L} \circ \operatorname{C}_{\infty})(\Phi) = \operatorname{L}(\Psi)$
is a quasi-isomorphism.
\end{proof}
\section{$\mrm{L}_{\infty}$ Quasi-isomorphisms between Pronilpotent Algebras}
\label{sec:L-infty-cplt}
Let $(R, \mfrak{m})$ be a parameter $\mbb{K}$-algebra, and let
$\Phi : \mfrak{g} \to \mfrak{h}$ be an $\mrm{L}_{\infty}$ morphism between DG Lie algebras.
Then $\Phi$ extends uniquely to an $R$-multilinear $\mrm{L}_{\infty}$ morphism
\[ \Phi_{R} : R \hatotimes \mfrak{g} \to R \hatotimes \mfrak{h} , \]
whose $i$-th Taylor coefficient
\[ \partial^i \Phi_{R} :
\underset{i}{\underbrace{(R \hatotimes \mfrak{g}) \times \cdots \times
(R \hatotimes \mfrak{g})}} \to R \hatotimes \mfrak{h} \]
is the $R$-multilinear homogeneous extension of
$\partial^i \Phi$. See \cite[Section 3]{Ye2} for more details.
\begin{dfn} \label{dfn:6}
Let $\Phi : \mfrak{g} \to \mfrak{h}$ be an $\mrm{L}_{\infty}$ morphism, and let
$(R, \mfrak{m})$ be a parameter algebra. For an element
$\omega \in \mfrak{m} \hatotimes \mfrak{g}^1$ we define
\[ \operatorname{MC}(\Phi, R)(\omega) :=
\sum_{i \geq 1} \, \smfrac{1}{i!} (\partial^i \Phi_{R})
(\underset{i}{\underbrace{\omega, \ldots, \omega}}) \in \mfrak{m} \hatotimes \mfrak{h}^1 . \]
\end{dfn}
Note that the sum above converges in the $\mfrak{m}$-adic topology of
$\mfrak{m} \hatotimes \mfrak{h}^1$, since
\[ (\partial^i \Phi_{R})(\omega, \ldots, \omega)
\in \mfrak{m}^i \hatotimes \mfrak{h}^1 = \mfrak{m}^{i-1} \cdot (\mfrak{m} \hatotimes \mfrak{h}^1) . \]
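For example, if $\partial^i \Phi = 0$ for all $i \geq 2$, so that
$\phi_1 := \partial^1 \Phi$ is a DG Lie algebra homomorphism, then the sum
reduces to its first term, and $\operatorname{MC}(\Phi, R)(\omega)$ is just the
image of $\omega$ under $\partial^1 \Phi_R$, the $R$-linear extension of $\phi_1$.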
\begin{prop} \label{prop:4}
Let
$\omega \in \operatorname{MC}(\mfrak{g}, R) = \operatorname{MC}(\mfrak{m} \hatotimes \mfrak{g})$.
Then the element
$\operatorname{MC}(\Phi, R)(\omega)$ belongs to $\operatorname{MC}(\mfrak{h}, R)$.
Thus we get a function
\[ \operatorname{MC}(\Phi, R) : \operatorname{MC}(\mfrak{g}, R) \to \operatorname{MC}(\mfrak{h}, R) . \]
\end{prop}
\begin{proof}
Let $R_j := R / \mfrak{m}^{j+1}$, and let $p_j : R \to R_j$ be the projection.
According to \cite[Theorem 3.21]{Ye2}, which refers to the artinian case, we
have
\[ \operatorname{MC}(\Phi, R_j)(p_j(\omega)) \in \operatorname{MC}(\mfrak{h} , R_j) \]
for every $j$. And by Lemma \ref{lem:6}(2) we know that
\[ \operatorname{MC}(\mfrak{h}, R) \cong
\lim_{\leftarrow j}\, \operatorname{MC}(\mfrak{h}, R_j) . \]
Since $\operatorname{MC}(\Phi, R)(\omega)$ projects to
$\operatorname{MC}(\Phi, R_j)(p_j(\omega))$ for every $j$, the assertion follows.
\end{proof}
Let $\Omega_{\mbb{K}[t]} = \mbb{K}[t] \oplus \Omega^1_{\mbb{K}[t]}$ be the algebra of polynomial
differential forms
in the variable $t$ (relative to $\mbb{K}$). This is a commutative DG
algebra. For $\lambda \in \mbb{K}$ there is a DG algebra homomorphism
$\sigma_{\lambda} : \Omega_{\mbb{K}[t]} \to \mbb{K}$, namely $t \mapsto \lambda$.
There is an induced homomorphism of DG Lie algebras
\begin{equation} \label{eqn:2}
\sigma_{\lambda} : \mfrak{m} \, \what{\otimes} \, \Omega_{\mbb{K}[t]} \, \what{\otimes} \, \mfrak{g} \to
\mfrak{m} \, \what{\otimes} \, \mfrak{g} ,
\end{equation}
and an induced function
\begin{equation} \label{eqn:61}
\overline{\operatorname{MC}}(\sigma_{\lambda}) :
\overline{\operatorname{MC}} ( \mfrak{m} \, \what{\otimes} \, \Omega_{\mbb{K}[t]} \, \what{\otimes} \, \mfrak{g})
\to \overline{\operatorname{MC}}(\mfrak{m} \, \what{\otimes} \, \mfrak{g}) .
\end{equation}
We shall often think of elements of $\Omega_{\mbb{K}[t]}$ as
``functions of $t$'', as in Section \ref{sec:facts}. Given elements
$f(t) \in \Omega_{\mbb{K}[t]}$ and $\lambda \in \mbb{K}$,
we shall use the substitution notation
$f(\lambda) := \sigma_{\lambda} (f(t)) \in \mbb{K}$.
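For example, writing $f(t) = f_0(t) + f_1(t) \, \mrm{d}(t)$ with
$f_0, f_1 \in \mbb{K}[t]$, we have $f(\lambda) = f_0(\lambda)$, because for
degree reasons $\sigma_{\lambda}$ vanishes on $\Omega^1_{\mbb{K}[t]}$.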
\begin{lem} \label{lem:16}
In this lemma $\lambda \in \{ 0, 1 \}$.
\begin{enumerate}
\item The homomorphisms $\sigma_0, \sigma_1$ in formula \tup{(\ref{eqn:2})}
are quasi-iso\-morphisms.
\item The functions $\overline{\operatorname{MC}}(\sigma_0)$ and $\overline{\operatorname{MC}}(\sigma_1)$
in formula \tup{(\ref{eqn:61})} are bijections.
\item The bijections
$\overline{\operatorname{MC}}(\sigma_0)$ and $\overline{\operatorname{MC}}(\sigma_1)$ are equal.
\end{enumerate}
\end{lem}
\begin{proof}
(1) This is because the homomorphisms $\sigma_i : \Omega_{\mbb{K}[t]} \to \mbb{K}$ are
homotopy equivalences (of complexes of $\mbb{K}$-modules).
\medskip \noindent
(2) This is by item (1) and Theorem \ref{thm:2}.
\medskip \noindent
(3) Note that the inclusion
\[ \phi : \mfrak{m} \hatotimes \mfrak{g} \to \mfrak{m} \, \what{\otimes} \, \Omega_{\mbb{K}[t]} \, \what{\otimes} \, \mfrak{g} \]
is also a quasi-isomorphism, and that
$\sigma_0 \circ \phi = \sigma_1 \circ \phi$ is the identity automorphism of
$\mfrak{m} \hatotimes \mfrak{g}$. Hence
\[ \overline{\operatorname{MC}}(\sigma_0) \circ \overline{\operatorname{MC}}(\phi) =
\overline{\operatorname{MC}}(\sigma_1) \circ \overline{\operatorname{MC}}(\phi) , \]
and canceling the bijection $\overline{\operatorname{MC}}(\phi)$ we obtain our result.
\end{proof}
\begin{lem} \label{lem:17}
Let $\omega_0, \omega_1 \in \operatorname{MC}(\mfrak{m} \, \what{\otimes} \, \mfrak{g})$. The following two
conditions are equivalent:
\begin{enumerate}
\rmitem{i} There exists an element
$g \in \operatorname{exp}(\mfrak{m} \, \what{\otimes} \, \mfrak{g}^0)$ such that
\[ \operatorname{Af}(g)(\omega_0) = \omega_1 . \]
\rmitem{ii} There exists an element
\[ \omega(t) \in
\operatorname{MC}( \mfrak{m} \, \what{\otimes} \, \Omega_{\mbb{K}[t]} \, \what{\otimes} \, \mfrak{g}) \]
such that
$\omega(0) = \omega_0$ and $\omega(1) = \omega_1$.
\end{enumerate}
\end{lem}
\begin{proof}
(i) $\Rightarrow$ (ii): Consider the elements $\gamma := \log(g) \in \mfrak{m} \hatotimes
\mfrak{g}^0$,
\[ g(t) := \exp(t \gamma) \in \exp(\mfrak{m} \hatotimes (\Omega_{\mbb{K}[t]} \otimes \mfrak{g})^0) \]
and
\[ \omega(t) := \operatorname{Af}(g(t))(\omega_0) \in
\operatorname{MC}(\mfrak{m} \hatotimes \Omega_{\mbb{K}[t]} \hatotimes \mfrak{g}) . \]
Then
\[ \omega(0) = \operatorname{Af}(g(0))(\omega_0) = \operatorname{Af}(1)(\omega_0) = \omega_0 \]
and
\[ \omega(1) = \operatorname{Af}(g(1))(\omega_0) =
\operatorname{Af}(g)(\omega_0) = \omega_1 . \]
\medskip \noindent
(ii) $\Rightarrow$ (i): Let us write $[\omega_i]$ for the classes in
$\overline{\operatorname{MC}}(\mfrak{m} \, \what{\otimes} \, \mfrak{g})$.
We know that
\[ \overline{\operatorname{MC}}(\sigma_i)([\omega(t)]) = [\omega_i] \]
for $i = 0, 1$. By Lemma \ref{lem:16}(3) we know that
$\overline{\operatorname{MC}}(\sigma_0) = \overline{\operatorname{MC}}(\sigma_1)$.
Therefore $[\omega_0] = [\omega_1]$; and by definition this says that
$\operatorname{Af}(g)(\omega_0) = \omega_1$ for some $g$.
\end{proof}
\begin{prop} \label{prop:5}
Let $\Phi : \mfrak{g} \to \mfrak{h}$ be an $\mrm{L}_{\infty}$ morphism.
The function
\[ \operatorname{MC}(\Phi, R) : \operatorname{MC}(\mfrak{g}, R) \to \operatorname{MC}(\mfrak{h}, R) \]
respects gauge equivalences. Therefore there is an induced function
\[ \overline{\operatorname{MC}}(\Phi, R) : \overline{\operatorname{MC}}(\mfrak{g}, R) \to
\overline{\operatorname{MC}}(\mfrak{h}, R) . \]
\end{prop}
\begin{proof}
Let $A:= R \hatotimes \Omega_{\mbb{K}[t]}$, which is a commutative DG algebra. There
are induced homomorphisms $\sigma_i : A \to R$.
And there is an induced $A$-multilinear $\mrm{L}_{\infty}$ morphism
\[ \Phi_{A} : \mfrak{m} \hatotimes \Omega_{\mbb{K}[t]} \hatotimes \mfrak{g} \to
\mfrak{m} \hatotimes \Omega_{\mbb{K}[t]} \hatotimes \mfrak{h} \]
(cf.\ \cite[Section 3]{Ye2}).
Let us write
$\operatorname{MC}(\mfrak{g}, A) := \operatorname{MC}(\mfrak{m} \hatotimes \Omega_{\mbb{K}[t]} \hatotimes \mfrak{g})$ etc.
There is a function $\operatorname{MC}(\Phi, A)$ whose formula is like in Definition
\ref{dfn:6}. Since the $\mrm{L}_{\infty}$ morphisms $\Phi_R$ and
$\Phi_A$ are induced from $\Phi$,
the diagram of functions
\[ \UseTips \xymatrix @C=11ex @R=6ex {
\operatorname{MC}(\mfrak{g}, A)
\ar[r]^{\operatorname{MC}(\Phi, A)}
\ar[d]_{\operatorname{MC}(\sigma_i, R)}
&
\operatorname{MC}(\mfrak{h}, A)
\ar[d]^{\operatorname{MC}(\sigma_i, R)}
\\
\operatorname{MC}(\mfrak{g}, R)
\ar[r]^{\operatorname{MC}(\Phi, R)}
&
\operatorname{MC}(\mfrak{h}, R)
} \]
is commutative, for $i = 0, 1$.
Now suppose $\omega_0, \omega_1 \in \operatorname{MC}(\mfrak{g}, R)$ are gauge equivalent. By Lemma
\ref{lem:17} there is an element $\omega(t) \in \operatorname{MC}(\mfrak{g}, A)$ such that
$\omega(i) = \omega_i$. Define
$\chi_i := \operatorname{MC}(\Phi, R)(\omega_i)$ and
$\chi(t) := \operatorname{MC}(\Phi, A)(\omega(t))$.
Because the diagram above is commutative, we have
$\chi(i) = \chi_i$. Using Lemma \ref{lem:17} again we conclude that
$\chi_0$ and $\chi_1$ are gauge equivalent.
\end{proof}
Let $\Psi : C \to D$ be the morphism in $\cat{DGCog}(\mbb{K})$ obtained by applying
the functor $\operatorname{C}_{\infty}$ to the $\mrm{L}_{\infty}$ morphism
$\Phi : \mfrak{g} \to \mfrak{h}$. And let
$\til{\Psi} : \til{C} \to \til{D}$ be the morphism obtained by applying
the functor $\operatorname{C} \circ \operatorname{L}$ to $\Psi : C \to D$.
Since $\th_-$ is a natural transformation, we get a
commutative diagram
\begin{equation} \label{eqn:62}
\UseTips \xymatrix @C=7ex @R=5ex {
C
\ar[r]^{\Psi}
\ar[d]_{\th_C}
&
D
\ar[d]^{\th_D}
\\
\til{C}
\ar[r]^{\til{\Psi}}
&
\til{D}
}
\end{equation}
in $\cat{DGCog}(\mbb{K})$.
There is a corresponding commutative diagram
\begin{equation} \label{eqn:63}
\UseTips \xymatrix @C=7ex @R=5ex {
\mfrak{g}
\ar[r]^{\Phi}
\ar[d]_{\th_{\mfrak{g}}}
&
\mfrak{h}
\ar[d]^{\th_{\mfrak{h}}}
\\
\til{\mfrak{g}}
\ar[r]^{\til{\Phi}}
&
\til{\mfrak{h}}
}
\end{equation}
in $\cat{DGLie}_{\infty}(\mbb{K})$. Namely
$\til{\mfrak{g}} = (\operatorname{L} \circ \operatorname{C})(\mfrak{g})$,
$\til{\mfrak{h}} = (\operatorname{L} \circ \operatorname{C})(\mfrak{h})$,
and the fully faithful functor $\operatorname{C}_{\infty}$ sends the
diagram (\ref{eqn:63}) to the diagram (\ref{eqn:62}).
Note that by the proof of Lemma \ref{lem:20}, the Taylor coefficients
$\partial^j \th_C$ are nonzero for all $j$; so the corresponding morphism
$\th_{\mfrak{g}} : \mfrak{g} \to \til{\mfrak{g}}$ in $\cat{DGLie}_{\infty}(\mbb{K})$ is not a DG Lie
algebra homomorphism. The same holds for $\th_{\mfrak{h}}$.
\begin{lem} \label{lem:18}
The diagram of functions
\begin{equation} \label{eqn:65}
\UseTips \xymatrix @C=11ex @R=6ex {
\operatorname{MC}(\mfrak{g}, R)
\ar[r]^{\operatorname{MC}(\Phi, R)}
\ar[d]_{\operatorname{MC}(\th_{\mfrak{g}}, R)}
&
\operatorname{MC}(\mfrak{h}, R)
\ar[d]^{\operatorname{MC}(\th_{\mfrak{h}}, R)}
\\
\operatorname{MC}(\til{\mfrak{g}}, R)
\ar[r]^{\operatorname{MC}(\til{\Phi}, R)}
&
\operatorname{MC}(\til{\mfrak{h}}, R)
}
\end{equation}
is commutative.
\end{lem}
\begin{proof}
Because of Lemma \ref{lem:6}(2) we can assume that $R$ is artinian.
Consider the commutative diagram of DG coalgebras over $R$
\begin{equation} \label{eqn:67}
\UseTips \xymatrix @C=11ex @R=6ex {
R \otimes C
\ar[r]^{\Psi_R}
\ar[d]_{\th_{C, R}}
&
R \otimes D
\ar[d]^{\th_{D, R}}
\\
R \otimes \til{C}
\ar[r]^{\til{\Psi}_R}
&
R \otimes \til{D}
}
\end{equation}
induced from (\ref{eqn:62}) by tensoring with $R$.
Take any $\omega \in \mfrak{m} \otimes \mfrak{g}^1 \subset R \otimes C$, and define
\[ e := \exp(\omega) = \sum_{i \geq 0} \, \smfrac{1}{i!} \omega^i \in R \otimes C . \]
According to \cite[Lemma 3.18]{Ye2} we have
\[ \begin{aligned}
& \bigl( \operatorname{MC}(\th_{\mfrak{h}}, R) \circ \operatorname{MC}(\Phi, R) \bigr)(\omega) =
\log \bigl( (\th_{D, R} \circ \Psi_R)(e) \bigr) = \\
& \qquad \log \bigl( (\til{\Psi}_R \circ \th_{C, R})(e) \bigr) =
\bigl( \operatorname{MC}(\til{\Phi}, R) \circ \operatorname{MC}(\th_{\mfrak{g}}, R) \bigr)(\omega) .
\end{aligned} \]
This proves commutativity of the diagram.
\end{proof}
\begin{thm} \label{thm:3}
Let $\mfrak{g}$ and $\mfrak{h}$ be DG Lie algebras,
let $\Phi : \mfrak{g} \to \mfrak{h}$ be an $\mrm{L}_{\infty}$ quasi-isomorphism, and
let $R$ be a parameter algebra, all over the field $\mbb{K}$.
Then the function
\[ \overline{\operatorname{MC}}(\Phi, R) : \overline{\operatorname{MC}}(\mfrak{g}, R) \to
\overline{\operatorname{MC}}(\mfrak{h}, R) \]
\tup{(}see Proposition \tup{\ref{prop:5})} is bijective.
\end{thm}
The idea for the proof was suggested to us by Van den Bergh.
\begin{proof}
By Lemma \ref{lem:21} the DG Lie algebra homomorphism
$\til{\Phi} : \til{\mfrak{g}} \to \til{\mfrak{h}}$ is a quasi-isomorphism.
Therefore by Theorem \ref{thm:2} the function
\[ \overline{\operatorname{MC}}(\til{\Phi}, R) : \overline{\operatorname{MC}}(\til{\mfrak{g}}, R) \to
\overline{\operatorname{MC}}(\til{\mfrak{h}}, R) \]
is bijective.
Next, by \cite[Proposition 4.4.3(1)]{Hi2} the DG Lie algebra homomorphism
$\zeta_{\mfrak{g}} : \til{\mfrak{g}} \to \mfrak{g}$ is a quasi-isomorphism.
Again using Theorem \ref{thm:2} we conclude that the function
\[ \overline{\operatorname{MC}}(\zeta_{\mfrak{g}}, R) : \overline{\operatorname{MC}}(\til{\mfrak{g}}, R) \to
\overline{\operatorname{MC}}(\mfrak{g}, R) \]
is bijective. On the other hand, by Lemma \ref{lem:20}, with the arguments in
the proof of Lemma \ref{lem:18}, we see that
\[ \operatorname{MC}(\zeta_{\mfrak{g}}, R) \circ \operatorname{MC}(\th_{\mfrak{g}}, R) =
\bsym{1}_{\operatorname{MC}(\mfrak{g}, R)} . \]
Therefore
\[ \overline{\operatorname{MC}}(\zeta_{\mfrak{g}}, R) \circ \overline{\operatorname{MC}}(\th_{\mfrak{g}}, R) =
\bsym{1}_{\overline{\operatorname{MC}}(\mfrak{g}, R)} . \]
Because $\overline{\operatorname{MC}}(\zeta_{\mfrak{g}}, R)$ is a bijection, the same is true for the
function $\overline{\operatorname{MC}}(\th_{\mfrak{g}}, R)$.
The same line of reasoning says that
$\overline{\operatorname{MC}}(\th_{\mfrak{h}}, R)$ is bijective.
Finally consider the commutative diagram of functions
\[ \UseTips \xymatrix @C=11ex @R=6ex {
\overline{\operatorname{MC}}(\mfrak{g}, R)
\ar[r]^{\overline{\operatorname{MC}}(\Phi, R)}
\ar[d]_{\overline{\operatorname{MC}}(\th_{\mfrak{g}}, R)}
&
\overline{\operatorname{MC}}(\mfrak{h}, R)
\ar[d]^{\overline{\operatorname{MC}}(\th_{\mfrak{h}}, R)}
\\
\overline{\operatorname{MC}}(\til{\mfrak{g}}, R)
\ar[r]^{\overline{\operatorname{MC}}(\til{\Phi}, R)}
&
\overline{\operatorname{MC}}(\til{\mfrak{h}}, R)
} \]
induced from (\ref{eqn:65}).
We know that three of the arrows are bijective; so the fourth arrow, namely
$\overline{\operatorname{MC}}(\Phi, R)$, is also bijective.
\end{proof}
\begin{rem} \label{rem:1}
Lemma 3.5 in our earlier paper \cite{Ye1} says that the canonical function
\[ \overline{\operatorname{MC}}(\mfrak{g}, R) \to \lim_{\leftarrow j} \,
\overline{\operatorname{MC}}(\mfrak{g}, R / \mfrak{m}^j) \]
is bijective. The proof of this Lemma
(which is actually omitted from the paper) is incorrect. Moreover, we
suspect that the statement itself is false. The hidden assumption was that
the gauge group $\operatorname{G}(\mfrak{g}, R)$ acts on the set $\operatorname{MC}(\mfrak{g}, R)$
with {\em closed} orbits.
This lemma is used to deduce \cite[Corollary 3.10]{Ye1} (incorrectly)
from the nilpotent case. The correction is to replace
\cite[Corollary 3.10]{Ye1} with Theorem \ref{thm:3} above.
The same correction pertains also to \cite[formula (12.2)]{Ye4}.
\end{rem}
\begin{rem}
This is a good place to correct a typographical error (repeated several
times) in \cite[Section 3]{Ye2}. In Lemmas 3.14 and 3.18 of op.\ cit., instead
of ``$\mfrak{m} \mfrak{g} [1]$'' the correct expression should be ``$\mfrak{m} \mfrak{g}^1$'' or
``$(\mfrak{m} \mfrak{g} [1])^0$''.
Let us also mention that a ``colocal coalgebra homomorphism'', in the sense of
\cite[Definition 3.3]{Ye2}, is the same as a ``unital coalgebra homomorphism''.
\end{rem}
We consider the following scattering process,
\begin{align}\label{kinematicassignment}
g(p_1, \lambda_1, a_1) + g(p_2, \lambda_2, a_2) \to \gamma(p_3,\lambda_3) + \gamma(p_4,\lambda_4) ,
\end{align}
with on-shell conditions $p_j^2 = 0, j=1,...,4$.
The helicities $\lambda_i$ of the external particles are defined by taking the momenta of the gluons
$p_1$ and $p_2$ (with color indices $a_1$ and $a_2$, respectively)
as incoming and the momenta of the photons $p_3$ and $p_4$ as outgoing.
The Mandelstam invariants associated with eq.~(\ref{kinematicassignment}) are defined as
$\, s = \left(p_1 + p_2 \right)^2 ,\, t = \left(p_2 - p_3 \right)^2 ,\, u = \left(p_1 - p_3 \right)^2$.
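Momentum conservation together with the on-shell conditions implies $s + t + u = 0$.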
\subsection*{Projection operators}
Pulling out the polarisation vectors $\varepsilon_{\lambda_i}^{\mu_i}$ of external gauge bosons from the amplitude $\ensuremath{\mathcal{M}}$ describing the process eq.~(\ref{kinematicassignment}), one defines the tensor amplitude $\ensuremath{\mathcal{M}}_{\mu_1\mu_2\mu_3\mu_4}$ by
\begin{equation} \label{eq:ggyyamplitudes}
\ensuremath{\mathcal{M}}{} = \varepsilon_{\lambda_1}^{\mu_1}(p_1)\,\varepsilon_{\lambda_2}^{\mu_2}(p_2)\,\varepsilon_{\lambda_3}^{\mu_3,\star}(p_3)\,\varepsilon_{\lambda_4}^{\mu_4,\star}(p_4)\,\ensuremath{\mathcal{M}}_{\mu_1\mu_2\mu_3\mu_4}(p_1,p_2,p_3,p_4)\,.
\end{equation}
We compute $\ensuremath{\mathcal{M}}_{\mu_1\mu_2\mu_3\mu_4}$ through projection onto a set of Lorentz structures
related to linear polarisation states of the external gauge bosons, with the corresponding
D-dimensional projection operators constructed following the prescription proposed
in ref.~\cite{Chen:2019wyb}.\footnote{This approach has been applied recently in the calculation of ref.~\cite{Ahmed:2019udm}.} These linear polarisation projectors are based on the momentum basis representations of external polarisation vectors (for both bosons and fermions, massless or massive), and all their open Lorentz indices are by definition taken to be D-dimensional to facilitate a uniform projection with just one dimensionality $\mathrm{D}=g^{\mu}_{~\mu}$.
For the process in question, we introduce two linear polarisation states $\varepsilon^{\mu}_X$ and $\varepsilon^{\mu}_T$ lying within the scattering plane determined by the three linearly independent external momenta $\{ p_1, p_2, p_3 \}$, and transversal to $p_1$ and $p_3$ respectively.
In addition, a third linear polarisation state vector $\varepsilon^{\mu}_Y$, orthogonal to $p_1, p_2,$ and $p_3$,
is constructed with the help of the Levi-Civita symbol.
To determine the momentum basis representations for
$\varepsilon^{\mu}_{X/T}$, we first write down a Lorentz covariant ansatz and then solve the orthogonality and normalisation conditions of linear polarisation state vectors for the linear decomposition coefficients. Once we establish a definite Lorentz covariant decomposition form
in 4 dimensions solely in terms of external momenta and kinematic invariants,
this form is declared as the definition of the corresponding polarisation state vector in D dimensions.
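Schematically, this amounts to an ansatz of the form
\begin{equation*}
\varepsilon_{X}^{\mu} = c_1\, p_1^{\mu} + c_2\, p_2^{\mu} + c_3\, p_3^{\mu}\,, \qquad
\varepsilon_{Y}^{\mu} \propto \epsilon^{\mu\nu\rho\sigma}\, p_{1\nu}\, p_{2\rho}\, p_{3\sigma}\,,
\end{equation*}
and analogously for $\varepsilon_{T}^{\mu}$, where the coefficients $c_i$ are rational functions of the Mandelstam invariants, fixed by the aforementioned orthogonality and normalisation conditions; we refrain from quoting their explicit form here.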
Applied to the scattering process \eqref{kinematicassignment}, this construction leads to eight projectors
\begin{equation} \label{eq:LPprojectors}
\varepsilon_{[X,Y]}^{\mu_1} \varepsilon_{[X,Y]}^{\mu_2} \varepsilon_{[T,Y]}^{\mu_3} \varepsilon_{[T,Y]}^{\mu_4},
\end{equation}
where the square bracket $[\cdot{},\cdot{}]$ in the subscripts denotes a choice of either entry,
and where only the combinations containing an even number of $\varepsilon_Y$ are considered.
This is simply because \eqref{kinematicassignment} is a P-even 2-to-2 scattering process.
Let us emphasize that, in order to end up with an unambiguous form of projectors to be used in D dimensions,
all pairs of Levi-Civita tensors should be contracted first (as explained in ref.~\cite{Chen:2019wyb})
{\em before} being used for the projection of the amplitude.
In this way the aforementioned projectors are expressed solely in terms of external momenta and metric tensors
whose open Lorentz indices are all set to be D-dimensional.
We remark that since all projectors thus constructed obey all defining physical constraints,
the index contraction between \eqref{eq:LPprojectors} and the tensor amplitude $\ensuremath{\mathcal{M}}_{\mu_1\mu_2\mu_3\mu_4}$
is always done with the spacetime metric tensor $g_{\mu\nu}$ (rather than the physical polarisation sum rule).
The usual helicity amplitudes can be composed using the relations between circular and linear polarisation state,
e.g.~$\varepsilon_{\pm}^\mu(p_1) = \frac{1}{\sqrt{2}} \left( \varepsilon_X^\mu \pm i \varepsilon_Y^\mu \right)$.
\subsection*{Numerical evaluation of amplitudes}
With the linear polarisation projectors defined in \eqref{eq:LPprojectors},
we re-computed the LO amplitudes for the process \eqref{kinematicassignment} analytically,
with both massless and massive quark loops.
These expressions were implemented in our computational setup for the NLO QCD corrections,
which we describe below.
The bare scattering amplitudes of the process \eqref{kinematicassignment} beyond LO contain poles
in the dimensional regulator $\epsilon \equiv (4-D)/2$ arising from ultraviolet (UV)
as well as soft and/or collinear (IR) regions of the loop momenta.
In our computation, we renormalise the UV divergences using the $\overline{\text{MS}}$ scheme,
except for the top quark contribution which is renormalised on-shell.
For details on the UV renormalisation, please refer to ref.~\cite{Chen:2019fla}.
To deal with the intermediate IR divergences, we employ the FKS subtraction approach~\cite{Frixione:1995ms},
as implemented in the \texttt{POWHEG-BOX-V2}{}~framework~\cite{Nason:2004rx,Frixione:2007vw,Alioli:2010xd}.
In practice, we need to supply only the finite part of the Born-virtual interference,
under a specific definition~\cite{Alioli:2010xd} in order to combine it with the FKS-subtracted
real radiation generated within the \textsc{GoSam}{}/\texttt{POWHEG-BOX-V2}{}~framework.
For the two-loop QCD diagrams contributing to our scattering process,
there is a complete separation of quark flavors due to the color algebra and Furry's theorem,
with example diagrams shown in fig.~\ref{fig:2l_diagrams}.
\begin{figure}[htb]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=0.51\textwidth]{figs/diag2-eps-converted-to.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=0.51\textwidth]{figs/diag6-eps-converted-to.pdf}
\end{subfigure}
\vspace{1em}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=0.51\textwidth]{figs/diag8-eps-converted-to.pdf}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=0.51\textwidth]{figs/diag10-eps-converted-to.pdf}
\end{subfigure}
\caption{Examples of diagrams contributing to the virtual corrections.}
\label{fig:2l_diagrams}
\end{figure}
We obtain the two-loop amplitude with the multi-loop extension of
the program~\textsc{GoSam}{}~\cite{Jones:2016bci}, where {\sc Reduze}\,2~\cite{vonManteuffel:2012np}
is employed for the reduction to master integrals.
In particular, each of the linearly polarised amplitudes projected out using \eqref{eq:LPprojectors}
is eventually expressed as a linear combination of 39 massless
integrals and 171 integrals that depend on the top quark mass, distributed into three integral families.
All massless two-loop master integrals involved are known analytically~\cite{Bern:2001df,Binoth:2002xg,Argeri:2014qva},
and we have implemented the analytic expressions into our code.
Regarding the two-loop massive integrals, which are not yet fully known analytically,
we first rotate to an integral basis consisting partly of quasi-finite loop integrals~\cite{vonManteuffel:2014qoa}.
Our integral basis is chosen such that the second Symanzik polynomial, $\mathcal{F}$, appearing in the Feynman parametric representation of each of the integrals is raised to a power, $n$, where $|n| \le 1$ in the limit $\epsilon \rightarrow 0$. This choice improves the numerical stability of our calculation near the $t\bar{t}$ threshold, where the $\mathcal{F}$ polynomial can vanish. All these massive integrals are evaluated numerically using py\secdec~\cite{Borowka:2017idc,Borowka:2018goh}.
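Recall that, schematically, a scalar Feynman integral admits a Feynman parametric representation of the form
\begin{equation*}
I \,\propto\, \int_0^{\infty} \Bigl(\, \prod_j \mathrm{d}x_j \Bigr)\, \delta\Bigl(1 - \sum\nolimits_j x_j\Bigr)\, \frac{\mathcal{U}^{\,a(\epsilon)}}{\mathcal{F}^{\,n(\epsilon)}}\,,
\end{equation*}
with $\mathcal{U}$ and $\mathcal{F}$ the first and second Symanzik polynomials and $\epsilon$-dependent exponents; the power $n$ constrained above is precisely the exponent of $\mathcal{F}$ in this representation.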
The phase-space integration of the virtual interferences is achieved by reweighting unweighted Born events.
The accuracy goal imposed on the numerical evaluation of the virtual
two-loop amplitudes in the linear polarisation basis in py\secdec{} is 1 per-mille on
both the relative and the absolute error.
We have collected 6898 phase space points out of which 862 points fall into the diphoton invariant mass
window $m_{\gamma\gamma} \in \left[330,\, 360\right]$ GeV. We have further calculated the amplitudes for 2578 more
points restricted to the threshold region.
~\\
The real radiation matrix elements are calculated using the interface~\cite{Luisoni:2013cuh}
between \textsc{GoSam}{}~\cite{Cullen:2011ac,Cullen:2014yla} and the
\texttt{POWHEG-BOX-V2}{}~\cite{Nason:2004rx,Frixione:2007vw,Alioli:2010xd}, modified accordingly to
compute the real radiation corrections to loop-induced Born amplitudes.
Only real radiation contributions where both photons couple to a closed quark loop are included.
We also include the $q\bar{q}$ initiated diagrams which contain a closed quark loop,
even though their contribution is numerically very small.
\section{Introduction}
\input{intro.tex}
\section{Building blocks of the fixed-order calculation}
\label{sec:calculation}
\input{amplitude.tex}
\section{Treatment of the threshold region}
\label{sec:threshold}
\input{threshold.tex}
\section{Results}
\label{sec:results}
\input{results.tex}
\section{Conclusions}
\label{sec:conclusions}
\input{conclusions.tex}
\bibliographystyle{JHEP}
\label{sec-intro} We consider random walks in random environments on
$\mathbb{Z}^{d}$, $d\geq3$, when the environment is a small perturbation of
the fixed environment corresponding to simple random walk. More precisely, let
$\mathcal{P}$ be the set of probability distributions on $\mathbb{Z}^{d},$
charging only neighbors of $0.$ If $\varepsilon\in(0,1/2d),$ we set, with
$\{e_{i}\}_{i=1}^{d}$ denoting the standard basis of $\mathbb{R}^{d}$,%
\begin{equation}
\mathcal{P}_{\varepsilon}\overset{\mathrm{def}}{=}\left\{ q\in\mathcal{P}%
:\left\vert q\left( \pm e_{i}\right) -\frac{1}{2d}\right\vert \leq
\varepsilon,~\forall i\right\} . \label{eq-Pepsdef}%
\end{equation}
$\Omega\overset{\mathrm{def}}{=}\mathcal{P}^{\mathbb{Z}^{d}}$ is equipped with
the natural product $\sigma$-field $\mathcal{F}.$ We call an element
$\omega\in\Omega$ a \textit{random environment}.
For $\omega\in\Omega,$ and $x\in\mathbb{Z}^{d},$ we consider the transition
probabilities $p_{\omega}\left( x,y\right) \overset{\mathrm{def}}{=}%
\omega_{x}\left( y-x\right) ,$ if $\left\vert x-y\right\vert =1,$ and
$p_{\omega}\left( x,y\right) =0$ otherwise, and construct the random walk
in random environment (RWRE)
$\{X_{n}\}_{n\geq0}$ with initial position $x\in\mathbb{Z}^{d}$ which is,
given the environment $\omega$, the Markov chain with $X_{0}=x$ and transition
probabilities
\[
P_{\omega,x}(X_{n+1}=y|X_{n}=z)=\omega_{z}(y-z)\,.
\]
(By a slight abuse of notation, for consistency with the sequel we also write
$P_{\omega,x}=P_{p_{\omega},x}.$)
We are mainly interested in the case of a \textit{random }$\omega.$ Given a
probability measure $\mu$ on $\mathcal{P},$ we consider the product measure
$\mathbb{P}_{\mu}\overset{\mathrm{def}}{=}\mu^{\otimes\mathbb{Z}^{d}}$ on
$\left( \Omega,\mathcal{F}\right) .$ We usually drop the index $\mu$ in
$\mathbb{P}_{\mu}.$ In all that follows we make the following basic assumption
\begin{condition}
\label{Cond_Mu}$\mu$ is invariant under lattice isometries, i.e. $\mu
f^{-1}=\mu$ for any orthogonal mapping $f$ which leaves $\mathbb{Z}^{d}$
invariant, and $\mu\left( \mathcal{P}_{\varepsilon}\right) =1$ for some
$\varepsilon\in(0,1/2d)$ which will be specified later.
\end{condition}
The model of RWRE has been studied extensively. We refer to \cite{sznitmanLN}
and \cite{zeitouniLN} for recent surveys. A major open problem is the
determination, for $d>1$, of laws of large numbers and central limit theorems
in full generality (the latter, both under the \textit{quenched} measure, i.e.
for $\mathbb{P}_{\mu}$-almost every $\omega$, and under the \textit{annealed}
measure $\mathbb{P}_{\mu}\otimes P_{x,\omega}$). Although much progress has
been reported in recent years (\cite{BSZ,sznitman1,sznitman2}), a full
understanding of the model has not yet been achieved.
In view of the above state of affairs, attempts have been made to understand
the perturbative behavior of the RWRE, that is the behavior of the RWRE when
$\mu$ is supported on $\mathcal{P}_{\varepsilon}$ and $\varepsilon$ is small.
The first to consider such a perturbative regime were \cite{BK}, who
introduced Condition \ref{Cond_Mu} and showed that in dimension $d\geq3$, for
small enough $\varepsilon$ a quenched CLT holds\footnote{As the examples in
\cite{BSZ} demonstrate, for every $d\geq 7$ and
$\varepsilon>0$ there are measures $\mu$
supported on $\mathcal{P}_{\varepsilon}$, with $\mathbb{E}_{\mu}\left[
\sum_{i=1}^{d} e_i
(q(e_{i})-q(-e_{i}))\right] =0$, such that $X_{n}/n\to
_{n\to\infty}v\neq0$, $\mathbb{P}_{\mu}$-a.s. One of the goals of Condition
\ref{Cond_Mu} is to prevent such situations from occurring.}. Unfortunately,
the multiscale proof in \cite{BK} is rather difficult, and challenging to
follow. This in turns prompted the derivation, in \cite{SZ}, of an alternative
multiscale approach, in the context of diffusions in random environments. One
expects that the approach of \cite{SZ} could apply to the discrete setup, as
well.
Our goal in this paper is somewhat different: we focus on the exit law of the
RWRE from large balls, and develop a multiscale analysis that allows us to
conclude that the exit law approaches, in a suitable sense, the uniform
measure. Like in \cite{SZ}, the hypothesis propagated involves smoothing. In
\cite{SZ}, this was done using certain H\"{o}lder norms of (rescaled)
transition probabilities. Here, we focus on two ingredients. The
first is a propagation of
the variation distance between the exit laws of the RWRE from
balls and those of simple
random walk (which distance
remains small but does not decrease as the scale
increases). The second is the propagation of
the variation distance between
the convolution of the exit law of the RWRE with the exit law
of a simple random walk from a ball of (random) radius,
and the corresponding convolution of the exit law of simple random walk
with the same smoothing, which
distance decreases to zero
as the scale increases (a precise statement can be
found in Theorems \ref{Th_Main} and \ref{Th_Main1}; the latter, which is our
main result, provides a local limit law for the exit measures).
This approach is of a different nature than the one in \cite{SZ} and, we
believe, simpler. In future work we hope to combine our exit law approach with
suitable exit time estimates in order to deduce a (quenched) CLT for the RWRE.
The structure of the article is the following. In the next section, we
introduce our basic notation and state our induction step and our main
results. In Section \ref{Sect_Preliminaries}, we present our basic
perturbation expansion, coarsening scheme for random walks, and auxiliary
estimates for simple random walk. The (rather standard)
proofs of the latter estimates are
presented in the appendices. Section \ref{Sect_Smooth} is devoted to the
propagation of the smoothed estimates, whereas
Section \ref{Sect_NonSmooth} is
devoted to the propagation of the variation distance
estimate (the non-smooth estimate). Section \ref{sec-finalpush}
completes the proof of our main result by
using the estimates of Sections \ref{Sect_Smooth} and
\ref{Sect_NonSmooth}.
\section{Basic notation and main result\label{Sect_Basic}}
\medskip\noindent\textbf{Sets: }For $x\in\mathbb{R}^{d},$ $\left\vert
x\right\vert $ is the Euclidean norm. If $A,B\subset\mathbb{Z}^{d},$
we set
$d\left( A,B\right) \overset{\mathrm{def}}{=}\inf\left\{ |x-y|:x\in
A,\ y\in B\right\} .$ If $L>0,$
we write $V_{L}\overset{\mathrm{def}}{=}%
\{x\in\mathbb{Z}^{d}:\left\vert x\right\vert \leq L\},$ and for $x\in
\mathbb{Z}^{d},$ $V_{L}\left( x\right) \overset{\mathrm{def}}{=}x+V_{L}.$ If
$V\subset\mathbb{Z}^{d},$ $\partial V=\{x\in V^c: d(x,V)=1\}$
is the outer boundary.
If $x\in V,$ we set
$d_{V}\left( x\right) \overset{\mathrm{def}}{=}d\left( x,\partial V\right)
.$ We also set $d_{L}(x)=L-|x|$ (note that $d_{L}(x)\neq d_{V_{L}}(x)$ with
this convention).
For $0\leq a<b\leq L,$ we define
\begin{equation}
\operatorname*{Shell}\nolimits_{L}\left( a,b\right) \overset{\mathrm{def}%
}{=}\left\{ x\in V_{L}:a\leq d_{L}\left( x\right) <b\right\}
,\ \operatorname*{Shell}\nolimits_{L}\left( b\right) \overset{\mathrm{def}%
}{=}{\operatorname*{Shell}\nolimits}_{L} \left( 0,b\right) .
\label{Def_Shell}%
\end{equation}
\medskip\noindent\textbf{Functions: }If $F,G$ are functions $\mathbb{Z}%
^{d}\times\mathbb{Z}^{d}\rightarrow\mathbb{R}$ we write $FG$ for the (matrix)
product: $FG\left( x,y\right) \overset{\mathrm{def}}{=}\sum_{u}
F\left(
x,u\right) G\left( u,y\right) ,$ provided the right hand side is absolutely
summable. $F^{k}$ is the $k$-th power defined in this way, and $F^{0}\left(
x,y\right) \overset{\mathrm{def}}{=}\delta_{x,y}.$ We interpret $F$ also as a
kernel, operating from the left on functions $f:\mathbb{Z}^{d}\rightarrow
\mathbb{R},$ by $Ff\left( x\right) \overset{\mathrm{def}}{=}\sum_y F\left(
x,y\right) f\left( y\right) $. If $W\subset\mathbb{Z}^{d},$ we use $1_{W}$
not only as the indicator function but, by slight abuse of notation, also to
denote the kernel $\left( x,y\right) \rightarrow1_{W}\left( x\right)
\delta_{x,y}.$
For a function $f:\mathbb{Z}^{d}\rightarrow\mathbb{R},$ $\left\Vert
f\right\Vert _{1}\overset{\mathrm{def}}{=}\sum_{x}\left\vert f\left(
x\right) \right\vert ,$ and $\left\Vert f\right\Vert _{\infty}\overset
{\mathrm{def}}{=}\sup_{x}\left\vert f\left( x\right) \right\vert ,$ as
usual. If $F$ is a kernel then, by an abuse of notation, we write
$\left\Vert F\right\Vert _{1}$ for its norm as operator on $L_{\infty},$ i.e.%
\begin{equation}
\left\Vert F\right\Vert _{1}\overset{\mathrm{def}}{=}\sup_{x}\left\Vert
F\left( x,\cdot\right) \right\Vert _{1}. \label{Def_OperatorNorm}%
\end{equation}
\medskip\noindent\textbf{Transition probabilities: }For transition
probabilities $p=\left( p\left( x,y\right) \right) _{x,y\in\mathbb{Z}^{d}%
},$ not necessarily nearest neighbor, we write $P_{p,x}$ for the law of a
Markov chain $X_{0}=x,X_{1},\ldots$
having $p$ as transition
probabilities. If $V\subset
\mathbb{Z}^{d},$ $\tau_{V}\overset{\mathrm{def}}{=}\inf\left\{ n\geq
0:X_{n}\notin V\right\} $ is the first exit time from $V$, and $T_{V}%
\overset{\mathrm{def}}{=}\tau_{V^{c}}$ the first entrance time. We set%
\[
\operatorname*{ex}\nolimits_{\scriptscriptstyle V}\left( x,z;p\right)
\overset{\mathrm{def}}{=}P_{p,x}\left( X_{\tau_{V}}=z\right) .
\]
For $x\in V^{c},$ one has $\operatorname*{ex}\nolimits_{\scriptscriptstyle V}%
\left( x,z;p\right) =\delta_{x,z}.$ A special case is the standard simple
random walk $p\left( x,\pm e_{i}\right) =1/2d,$ where $e_{1},\ldots,e_{d}%
\in\mathbb{Z}^{d}$ is the standard base. We abbreviate this as $p^{\mathrm{RW}%
},$ and set $P_{x}^{\mathrm{RW}}\overset{\mathrm{def}}{=}P_{x,p^{\mathrm{RW}}%
}.$ Also, exit distributions for the simple random walk are written as
$\pi_{\scriptscriptstyle V}\left( x,z\right) \overset{\mathrm{def}}%
{=}\operatorname*{ex}\nolimits_{\scriptscriptstyle V}\left(
x,z;p^{\mathrm{RW}}\right) .$
We will coarse-grain \textit{nearest-neighbor} transition probabilities $p$ in
the following way. Given $W\subset\mathbb{Z}^{d},$ we choose for any $x\in W$
either a fixed finite
subset $U_{x}\subset W,$ $x\in U_{x},$ or a probability
distribution $s_{x}$ on such sets. Of course, a fixed choice $U_{x}$ is just a
special choice for the distribution $s_{x},$ namely the one point distribution
on $U_{x}.$
\begin{definition}
\label{Def_CoarseGrainingScheme}A collection $\mathcal{S}=\left(
s_{x}\right) _{x\in W}$ is called a \textbf{coarse graining scheme}%
\textit{\ }on $W.$ Given such a scheme, and nearest neighbor transition
probabilities $p$, we define the coarse grained transitions by%
\begin{equation}
{p}_{\scriptscriptstyle\mathcal{S},W}^{\mathrm{CG}}\left( x,\cdot\right)
\overset{\mathrm{def}}{=}\sum_{U:x\in U\subset W}s_{x}\left( U\right)
\operatorname*{ex}\nolimits_{\scriptscriptstyle U}\left( x,\cdot;p\right) .
\label{Smoothing1}%
\end{equation}
\end{definition}
In the case of the standard nearest neighbor random walk, we use the notation
$\pi_{\mathcal{S},W}$ instead of $\left( {p}^{\mathrm{RW}}\right)
_{\mathcal{S},W}^{\mathrm{CG}}.$
Using the Markov property, we have, whenever $W$ is finite,%
\begin{equation}
\operatorname*{ex}\nolimits_{\scriptscriptstyle W}\left( x,\cdot;p\right)
=\operatorname*{ex}\nolimits_{\scriptscriptstyle W}\left( x,\cdot
;p_{\scriptscriptstyle\mathcal{S},W}^{\mathrm{CG}}\right) .
\label{EqualExits}%
\end{equation}
We will choose the coarse-graining scheme in special ways. Fix once for all a
probability density%
\begin{equation}
\varphi:\mathbb{R}^{+}\rightarrow\mathbb{R}^{+},\ \varphi\in C^{\infty
},\ \operatorname*{support}\left( \varphi\right) =\left[ 1,2\right] .
\label{SmootingFunction}%
\end{equation}
If $m\in\mathbb{R}^{+},$ the rescaled density is defined by $\varphi
_{m}\left( t\right) \overset{\mathrm{def}}{=}\left( 1/m\right)
\varphi\left( t/m\right) .$ The image measure of $\varphi_{m}\left(
t\right) dt$ under the mapping $t\rightarrow V_{t}\left( x\right) \cap W$
defines a probability distribution on subsets of $W$ containing $x$.
We may also choose $m$ to depend on $x,$ i.e. consider a field ${\Psi}=\left(
m_{x}\right) _{x\in W}$ of positive real numbers on $W.$ Such a field then
defines via the above scheme coarse grained transition probabilities, which by
a slight abuse of notation we denote as $p_{\scriptscriptstyle{\Psi}%
,W}^{\mathrm{CG}}.$ In case $W=\mathbb{Z}^{d},$ we simply drop $W$ in the
notation. In case $p$ is the standard nearest neighbor random walk, we write
$\hat{\pi}_{{\Psi}}$ instead of $p_{\scriptscriptstyle{\Psi}}^{\mathrm{CG}}.$
\medskip\noindent\textbf{The random environment: } We recall from the
introduction the notation $\mathcal{P}_{\varepsilon}$, $\Omega$, $p_{\omega
}\left( x,y\right) $, and the natural product $\sigma$-field $\mathcal{F}.$
For $A\subset\mathbb{Z}^{d},$ we write $\mathcal{F}_{A}=\sigma(\omega_x: x\in A)$.
We also recall the probability measure $\mu$ on $\mathcal{P},$ the product
measure $\mathbb{P}_{\mu}$, and Condition \ref{Cond_Mu}, which is assumed throughout.
For a random environment $\omega\in\Omega$,
we typically write $\Pi_{\scriptscriptstyle V,\omega}\overset{\mathrm{def}}%
{=}\operatorname*{ex}\nolimits_{\scriptscriptstyle
V}\left( \cdot,\cdot;p_{\omega}\right) $ and occasionally drop $\omega$ in
the notation. So $\Pi_{V}$ should always be understood as a \textit{random}
exit distribution. We will also use $\hat{\Pi}_{\mathcal{S},W}$ for $\left(
p_{\omega}\right) _{\mathcal{S},W}^{\mathrm{CG}}.$
For $x\in\mathbb{Z}^{d},$ $L>0,$ and ${\Psi}:\partial V_{L}\left( x\right)
\rightarrow\mathbb{R}^{+}$, we define the random variables%
\begin{equation}
D_{L,{\Psi}}\left( x\right) \overset{\mathrm{def}}{=}\left\Vert \left(
\left[ \Pi_{\scriptscriptstyle V_{L}\left( x\right) }-\pi
_{\scriptscriptstyle V_{L}\left( x\right) }\right] \hat{\pi}_{{\Psi}%
}\right) \left( x,\cdot\right) \right\Vert _{1}, \label{Def_DL}%
\end{equation}%
\begin{equation}
\label{eq-280905a}D_{L,0}\left( x\right) \overset{\mathrm{def}}%
{=}\left\Vert \Pi_{V_{L}\left( x\right) }\left( x,\cdot\right) -\pi
_{V_{L}\left( x\right) }\left( x,\cdot\right) \right\Vert _{1},
\end{equation}
and with $\delta>0,$ we set%
\begin{align*}
& b_{i}\left( L,{\Psi},\delta\right) \overset{\mathrm{def}}{=}%
\mathbb{P}\left( \left( \log L\right) ^{-9+\frac{9(i-1)}{4}}<
D_{L,\Psi}\left(
0\right)
\leq\left( \log L\right) ^{-9+\frac{9i}{4}},\ D_{L,0}\left( 0\right) \leq
\delta\right)\,,\; i=1,2,3, \\
& b_{4}\left( L,{\Psi},\delta\right) \overset{\mathrm{def}}{=}%
\mathbb{P}\left( \left\{ \left( \log L\right) ^{-2.25}< D_{L,\Psi}\left(
0\right) \right\} \cup\left\{ D_{L,0}\left( 0\right) >\delta\right\}
\right)\,, \\
& b\left( L,{\Psi},\delta\right) \overset{\mathrm{def}}{=}\sum_{i=1}^4
b_{i}\left(
L,{\Psi},\delta\right) \,.
\end{align*}
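Note that, since $\hat{\pi}_{{\Psi}}(u,\cdot)$ is a probability distribution
for every $u$, the triangle inequality gives
$D_{L,{\Psi}}\left( x\right) \leq D_{L,0}\left( x\right) $ for every
${\Psi}$; thus smoothing can only decrease the distance to the exit law of
simple random walk.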
We write $\mathcal{M}_{L}$ for the set of functions ${\Psi}:\partial
V_{L}\rightarrow\left[ L/2,2L\right] $ which are restrictions of functions
defined on $\left\{ x\in\mathbb{R}^{d}:L/2\leq\left\vert x\right\vert
\leq2L\right\} $ that are smooth, with third derivatives bounded by $10L^{-2}$ and
fourth derivatives bounded by $10L^{-3}$. We write $\Psi_t=(m_x=t)_{x\in
\mathbb{Z}^d}$ for the
coarse-graining scheme that consists of constant coarse-graining
at scale $t$. Of course,
$\Psi_t\in \mathcal{M}_L$ for all $t,L$.
\begin{condition}
\label{Cond_Main}Let $L_{1}\in\mathbb{N},$ and $\delta>0.$ We say that
condition $\operatorname*{Cond}\left( \delta,L_{1}\right) $ holds provided
that for all $L\leq L_{1},$ and for all ${\Psi}\in\mathcal{M}_{L}$,%
\begin{equation}
b_{i}\left( L,{\Psi},\delta\right) \leq\frac{1}{4}\exp\left[ -\left(
1-\left( 4-i\right) /13\right) \left( \log L\right) ^{2}\right]
,\ i=1,2,3,4. \label{BoundBad}%
\end{equation}
\end{condition}
In particular, if $\operatorname*{Cond}\left( \delta,L_{1}\right) $ is
satisfied, then for any $L\leq L_{1},$ and any ${\Psi}\in\mathcal{M}_{L}$,%
\begin{equation}
\mathbb{P}\left( \{D_{L,0}(0)>\delta\}\cup\{D_{L,\Psi}(0)>(\log L)^{-9}\}
\right) \leq\exp\left[ - \frac{10}{13} \left( \log L\right) ^{2}\right]
\label{eq-220805a}%
\end{equation}
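Indeed, the event in (\ref{eq-220805a}) is contained in the union of the four
events defining $b_{i}\left( L,{\Psi},\delta\right) ,$ $i=1,\ldots,4,$ while
each of the bounds in (\ref{BoundBad}) is at most $\frac{1}{4}\exp\left[
-\frac{10}{13}\left( \log L\right) ^{2}\right] .$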
Our main technical inductive result is
\begin{proposition}
\label{Prop_Main} There exists $\delta_{0}>0$
such that for all $\delta\in(0,\delta_{0}]$ there exist $\varepsilon
_{0}\left( \delta\right) $ and $L_{0}\in\mathbb{N}$ such that if
$\varepsilon\leq\varepsilon_{0},$ $L_{1}\geq L_{0},$ and $\mu$ is such that
Condition \ref{Cond_Mu} holds for $\varepsilon,$ then%
\[
\operatorname*{Cond}\left( \delta,L_{1}\right) \Longrightarrow
\operatorname*{Cond}\left( \delta,L_{1}\left( \log L_{1}\right)
^{2}\right) .
\]
\end{proposition}
Given $L_{0},\delta_{0},$ we can always choose $\varepsilon_{0}$ so small that
if Condition \ref{Cond_Mu} is satisfied with $\varepsilon\leq
\varepsilon_{0},$ then
$\operatorname*{Cond}\left( \delta_{0},L_{0}\right) $ holds trivially.
Proposition \ref{Prop_Main} then implies that for any
$\delta<\delta_{0}$, there exists
$\varepsilon_{0}=\varepsilon_{0}(\delta)$ small enough such that
if Condition \ref{Cond_Mu} is satisfied with $\varepsilon\leq
\varepsilon_{0},$ then
$\operatorname*{Cond}\left( \delta,L\right) $
holds for all $L$.
In particular, one obtains immediately from
Proposition \ref{Prop_Main} the following theorem (recall that
$\Psi_t$ denotes constant coarse-graining at scale $t$).
\begin{theorem}
\label{Th_Main} For each $\delta>0$
there exists an $\varepsilon_{0}=\varepsilon_0(\delta)>0$ such that if
Condition \ref{Cond_Mu} is satisfied with $\varepsilon\leq
\varepsilon_{0},$ then
for any integer $r\geq 0$,
\[
\limsup_{L\rightarrow\infty}L^r
b\left( L,\Psi_L,\delta\right) =0\,.
\]
\end{theorem}
Our induction will also provide
the following theorem, which is the
main result of our paper. It provides a local limit theorem for the exit law.
\begin{theorem}
\label{Th_Main1} There exists $\varepsilon_{0}>0$ such that
if
Condition \ref{Cond_Mu} is satisfied with $\varepsilon\leq
\varepsilon_{0},$ then
for any $\delta>0$,
and for any integer $r\geq 0$,
\[
\lim_{t\rightarrow\infty}\limsup_{L\rightarrow\infty}L^r b\left( L,\Psi_t,\delta
\right) =0\,.
\]
\end{theorem}
The Borel-Cantelli lemma then implies that under the
conditions of Theorem \ref{Th_Main},
$$\limsup_{L\to\infty}
D_{L,\Psi_t}(0)\leq c_t\,,\quad
\mathbb{P}_{\mu}-a.s.,$$
where $c_t$ is a constant such that
$c_t\to_{t\to\infty} 0$.
A remark about the wording we use.
When we say that something
holds for \textquotedblleft large enough $L$\textquotedblright, we mean that
there exists $L_{0},$ \textit{depending only on the dimension}, such that the
statement holds for $L\geq L_{0}.$ We emphasize that $L_{0}$ then \textit{does
not depend on }$\varepsilon$.
We write $C$ for a generic positive constant, not necessarily the same at
different occurrences. $C$ may depend on the dimension $d$ of the lattice, but
on nothing else, except when indicated explicitly. Other constants, such as
$c_{0},c_{1},\bar c, k_{0},K,C_{1}$ etc., follow the same convention
concerning what they depend on ($d$ only, unless explicitly stated
otherwise!), but their value is fixed throughout the paper and does not change
from line to line.
\section{Preliminaries \label{Sect_Preliminaries}}
\subsection{The perturbation expansion
\label{Subsect_Perturbation}}
Let $p=\left( p\left( x,y\right) \right) _{x,y\in\mathbb{Z}^{d}}$ be a
Markovian transition kernel on $\mathbb{Z}^{d},$ not necessarily nearest
neighbor, but of finite range, and let $V\subset\subset\mathbb{Z}^{d}.$ The
Green kernel on $V$ with respect to $p$ is defined by%
\[
g_{\scriptscriptstyle V}\left( p\right) \left( x,y\right) \overset
{\mathrm{def}}{=}\sum_{k\geq0}\left( 1_{\scriptscriptstyle V}p\right)
^{k}\left( x,y\right) .
\]
Evidently, if $z\notin V,$ then
\begin{equation}
g_{\scriptscriptstyle V}\left( p\right) \left( \cdot,z\right)
=\operatorname*{ex}\nolimits_{\scriptscriptstyle V}\left( \cdot,z;p\right) .
\label{Green&Exit}%
\end{equation}
If $p,q$ are two transition kernels, write $\Delta_{p,q}=
1_V(p-q)$. The resolvent equation gives for every
$n\in\mathbb{N}$,%
\begin{align}
\label{Pert1}
&g_{\scriptscriptstyle V}\left( p\right) -g_{\scriptscriptstyle V}\left(
q\right)
=g_{\scriptscriptstyle V}\left( q\right) \Delta_{p,q}
g_{\scriptscriptstyle V}\left( p\right)
\nonumber\\
&
=\sum_{k=1}^{n-1}\left[
g_{\scriptscriptstyle V}\left( q\right) \Delta_{p,q}
\right] ^{k}g_{\scriptscriptstyle V}\left( q\right)
+\left[ g_{\scriptscriptstyle V}\left( q\right) \Delta_{p,q}
\right] ^{n}g_{\scriptscriptstyle V}\left( p\right)
=\sum_{k=1}^{\infty}\left[
g_{\scriptscriptstyle V}\left( q\right) \Delta_{p,q}
\right] ^{k}g_{\scriptscriptstyle V}\left( q\right) ,
\end{align}
assuming convergence of the infinite series, which will always be trivial in
cases of interest to us, due to ellipticity and $V$ being finite.
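In particular, evaluating (\ref{Pert1}) at a point $z\notin V$ and using
(\ref{Green&Exit}), one obtains%
\[
\operatorname*{ex}\nolimits_{\scriptscriptstyle V}\left( x,z;p\right)
-\operatorname*{ex}\nolimits_{\scriptscriptstyle V}\left( x,z;q\right)
=\sum_{k=1}^{\infty}\left( \left[ g_{\scriptscriptstyle V}\left( q\right)
\Delta_{p,q}\right] ^{k}\operatorname*{ex}\nolimits_{\scriptscriptstyle
V}\left( \cdot,z;q\right) \right) \left( x\right) ,
\]
which expresses the difference of two exit distributions in terms of the Green
kernel of the reference walk $q$ and the local perturbation $\Delta_{p,q}$.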
We will occasionally slightly modify the above expansion, but the basis is
always the first equality in (\ref{Pert1}).
\subsection{The coarse graining
schemes on $V_{L}\label{SubSect_Smoothing}$}
Our proof of Theorems \ref{Th_Main} and \ref{Th_Main1} is based
on a couple of explicit coarse graining schemes, whose definitions we now present.
Set
\begin{equation}
r\left( L\right) \overset{\mathrm{def}}{=}L/\left( \log L\right)
^{10},\ s\left( L\right) \overset{\mathrm{def}}{=}L/\left( \log L\right)
^{3} \,, \
\operatorname*{Sh}\nolimits_{L}
\overset{\mathrm{def}}{=}
\operatorname*{Shell}\nolimits_{L}\left( r(L)\right)%
\label{Def_sL&rL}\,, \
\end{equation}
and
\begin{equation}
\gamma\overset{\mathrm{def}}{=}\min\left( \frac{1}{10}, \frac{1}{2}\left(
1-\left( \frac{2}{3}\right) ^{1/\left( d-1\right) }\right) \right) .
\label{Def_Gamma}%
\end{equation}
We fix a $C^{\infty}$-function $h:\mathbb{R}^{+}\rightarrow\mathbb{R}^{+},$
which satisfies $h\left( u\right) =u$ for $u\leq1/2,$ $h\left( u\right)
=1$ for $u\geq2,$ and is strictly monotone and concave on $\left(
1/2,2\right) .$ For $x\in V_{\scriptscriptstyle L},$ we set%
\begin{equation}
h_{\scriptscriptstyle L}\left( x\right) \overset{\mathrm{def}}{=}{\gamma
s\left( L\right) }h\left( \frac{d_{L}\left( x\right) }{s\left( L\right)
}\right) . \label{Def_hL}%
\end{equation}
Remark that for $d_{L}\left( x\right) \geq2s\left( L\right) ,$ we have
$h_{\scriptscriptstyle L}\left( x\right) =\gamma s\left( L\right) .$
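Similarly, for $d_{L}\left( x\right) \leq s\left( L\right) /2$ we have
$h_{\scriptscriptstyle L}\left( x\right) =\gamma d_{L}\left( x\right) ,$ so
the coarse graining radius decreases linearly with the distance to the
boundary.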
\begin{lemma}
\label{lem-130705} Fix $\delta_{1}>0$, and set
$\Delta(x,y)=\Pi_{V_{kr(L)}(x)}(x,y)-\pi_{V_{kr(L)}(x)}(x,y)$. Then there is a
constant $\bar k_{0}=\bar k_{0}(\delta_{1})$ such that for all $k\geq\bar
k_{0}(\delta_{1})$ and all $L$ large, the following holds: if for some
$\delta>0$, $d_{L}\left( x\right) \leq r\left( L\right) $ and
$D_{kr\left( L\right) ,0}\left( x\right) \leq\delta,$ then
\begin{equation}
\sum_{y\in V_{L}\cap
\operatorname*{Sh}\nolimits_{L}
}\left\vert \Delta(x,y)\right\vert
\leq\delta+\delta_{1}\,. \label{eq-130705a}%
\end{equation}
\end{lemma}
\begin{proof}
Fix $k$. We have
\begin{align*}
\sum_{y\in V_{L}\cap
\operatorname*{Sh}\nolimits_{L}
}|\Delta(x,y)|
& \leq\Pi_{V_{kr\left( L\right) }\left( x\right) }\left( x,V_{L}%
\cap\operatorname*{Sh}\nolimits_{L}
\right) +\pi_{V_{kr\left( L\right) }\left( x\right) }\left( x,V_{L}%
\cap\operatorname*{Sh}\nolimits_{L}
\right) \\
& \leq\delta+2\pi_{V_{kr\left( L\right) }\left( x\right) }\left(
x,V_{L}\cap\operatorname*{Sh}\nolimits_{L}
\right) .
\end{align*}
Here, in the second step, we used $D_{kr\left( L\right) ,0}\left( x\right)
\leq\delta.$ Moreover, for $L$ large, $\partial V_{kr\left( L\right) }\left(
x\right) \cap\operatorname*{Sh}\nolimits_{L}$ is contained in a slab of width
$Cr\left( L\right) $ and hence contains at most $C\left( kr\left( L\right)
\right) ^{d-2}r\left( L\right) $ points, so that, by Lemma
\ref{Le_Lawler_Exit} a) below,
\[
\pi_{V_{kr\left( L\right) }\left( x\right) }\left( x,V_{L}\cap
\operatorname*{Sh}\nolimits_{L}\right) \leq C/k.
\]
Choosing $k$ large enough completes the proof.
\end{proof}
We can now define our coarse graining schemes on $V_L$.
The first will depend on a
constant $k_{0}>1$ that will be chosen below, based on some a-priori
estimates concerning simple random walk, see (\ref{k_0_large}).
\begin{definition}
\label{Def_SmoothingScheme}
\begin{enumerate}
\item[a)] The coarse graining scheme $\mathcal{S}_{1} =\mathcal{S}_{1,L,k_{0}}
=\left( s_{x}\right) _{x\in V_{L}}$ is defined
for $d_{L}\left(
x\right) \leq r\left( L\right) $ by $s_{x}=\delta_{V_{k_{0}r\left(
L\right) }\left( x\right) \cap V_{L}}$,
i.e. for such an $x,$ the coarse graining is done by choosing
the exit distribution from $V_{k_{0}r\left( L\right) }\left( x\right) \cap
V_{L}.$ For $d_{L}\left( x\right) >r\left( L\right) $, we take
$m_x=h_L(x)$ and define $s_{x}$
according to the description following
(\ref{SmootingFunction}).
\item[b)] The coarse graining scheme $\mathcal{S}_{2}= \mathcal{S}_{2,L}=\left(
s_{x}\right) _{x\in V_{L}}$ is defined for all $x$
by
$m_x=h_L(x)$.
\end{enumerate}
\end{definition}
We will need the second scheme only in Section \ref{Sect_NonSmooth}, when
propagating the part of the
estimate $b_{4}(L,\Psi,\delta)$ involving the expression $D_{L,0}(x)$ of
(\ref{eq-280905a}). Note that under $\mathcal{S}_2$, if $d_L(x)<1/(2\gamma)$
then there is no coarse graining at all, i.e. $s_x=\delta_x$.
We write $\rho_{i,L}\left( x\right) $ for the range of the coarse
graining scheme at $x$ in scheme $i,$ $i=1,2$, i.e.
\begin{equation}
\rho_{1,L}\left( x\right) \overset{\mathrm{def}}{=}\left\{
\begin{array}
[c]{ll}%
k_{0}r\left( L\right) & \mathrm{for\ }d_{L}\left( x\right) \leq r\left(
L\right) \\
2h_{L}\left( x\right) & \mathrm{for\ }r\left( L\right) <d_{L}\left(
x\right)
\end{array}
\right.\,, \
\rho_{2,L}\left( x\right) =
2h_{L}\left( x\right) \,.
\label{Def_Rho1}%
\end{equation}
\begin{figure}[t]
\begin{picture}(10,200)(-80,0)\input{smooth.pictex}
\end{picture}
\caption{The coarse graining scheme $\mathcal{S}_{1}$}
\end{figure}
\subsection{Estimates on exit distributions and the Green's
function\label{Subsect_Exit&Green}}
For notational convenience, we write $\pi_{L}$ instead of $\pi_{V_{L}},$ and
similarly in other expressions. For instance, we write $\tau_{L}$ instead of
$\tau_{V_{L}}.$
\begin{lemma}
\label{Le_Lawler_Exit}
\begin{enumerate}
\item[a)] For $x\in \partial V_L$,
\[
\frac{1}{C}L^{-d+1}\leq\pi_{L}\left( 0,x\right) \leq CL^{-d+1}.
\]
\item[b)] Let $x$ be a vector of unit length in $\mathbb{R}^{d},$ let
$0<\theta<1,$ and define the cone $C_{\theta}\left( x\right) \overset
{\mathrm{def}}{=}\left\{ y\in\mathbb{Z}^{d}:\left\langle y,x\right\rangle
\geq\left( 1-\theta\right) \left\vert y\right\vert \right\} .$ For any
$\theta,$ there exists $\eta\left( \theta\right) >0,$ such that for all
$L$ large enough, and all $x$%
\begin{equation}
\pi_{L}\left( 0,C_{\theta}\left( x\right) \right) \geq\eta\left(
\theta\right) . \label{ConeEst}%
\end{equation}
\item[c)] Let $0<l<L,$ and $x\in\mathbb{Z}^{d}$ satisfy $l<\left\vert
x\right\vert <L.$ Then%
\[
P_{x}^{\mathrm{RW}}\left( \tau_{L}<T_{V_{l}}\right) =\frac{ l^{-d+2}%
-\left\vert x\right\vert ^{-d+2}+ O\left( l^{-d+1}\right) }{l^{-d+2}%
-L^{-d+2}}%
\]
\end{enumerate}
\end{lemma}
\begin{proof}
a) is Lemma 1.7.4 of \cite{Lawler}. b) is immediate from a). c) is Proposition
1.5.10 of \cite{Lawler}.
\end{proof}
We will repeatedly make use of the following lemma.
\begin{lemma}
\label{Le_MainExit}Assume $x,y\in V_{L},$ $1\leq a\leq5d_{L}\left( y\right)
,$ $x\notin V_{2a}\left( y\right) .$ Then%
\begin{equation}
P_{x}\left( T_{V_{a}\left( y\right) }<\tau_{V_{L}} \right) \leq
C\frac{a^{d-2}d_{L}\left( y\right) d_{L}\left( x\right) }{\left\vert
x-y\right\vert ^{d}} \label{eq-080605gg}%
\end{equation}
\end{lemma}
The proof will be given in Appendix \ref{App_A}.
We will need a corresponding result for the Brownian motion. We write $\pi
_{L}^{\mathrm{BM}}(y,dy^{\prime})$ for the exit distribution of the Brownian
motion from the ball $C_{L}$ of radius $L$ in $\mathbb{R}^{d}.$ The following
lemma is an easy consequence of the Poisson formula, see \cite[(1.43)]{Lawler}.
\begin{lemma}
\label{Le_ExitsBM}For any $y\in C_{L}$, it holds that
\begin{equation}
\frac{C^{-1}d(y,\partial C_{L})}{|y-y^{\prime}|^{d}}\leq\frac{\pi
_{L}^{\mathrm{BM}}(y,dy^{\prime})}{dy^{\prime}}\leq\frac{Cd(y,\partial C_{L}%
)}{|y-y^{\prime}|^{d}}, \label{eq-200305ff}%
\end{equation}
where $dy^{\prime}$ is the surface measure on $\partial C_{L}.$
\end{lemma}
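Indeed, the Poisson formula gives%
\[
\frac{\pi_{L}^{\mathrm{BM}}(y,dy^{\prime})}{dy^{\prime}}=\frac{L^{2}%
-|y|^{2}}{\omega_{d}L\left\vert y-y^{\prime}\right\vert ^{d}},
\]
with $\omega_{d}$ the surface area of the unit sphere in $\mathbb{R}^{d}$, and
(\ref{eq-200305ff}) follows because $L^{2}-|y|^{2}$ is comparable to
$L\,d(y,\partial C_{L})$.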
We will also
need a comparison between the smoothed exit distribution of the random
walk, and that of Brownian motion. Given $L>0,$ and ${\Psi}\in\mathcal{M}%
_{L},$ let
\begin{equation}
\phi_{L,{\Psi}}\overset{\mathrm{def}}{=}\pi_{L}\hat{\pi}_{{\Psi}}.
\label{Def_PhiLM}%
\end{equation}
We consider also the corresponding Brownian kernel on $\mathbb{R}^{d}$,%
\begin{equation}
\label{170306bb}
\phi_{L,{\Psi}}^{\mathrm{BM}}\left( y,dz\right) \overset{\mathrm{def}}%
{=}\int_{\partial C_{L}\left( 0\right) }\pi_{C_{L}\left( 0\right)
}^{\mathrm{BM}}\left( y,dw\right) \int\pi_{C_{t}\left( w\right)
}^{\mathrm{BM}}\left( w,dz\right) \varphi_{m_{w}}\left( t\right) dt,
\end{equation}
where $\Psi=\left( m_{w}\right) ,$ and where we write $\phi_{L,{\Psi}%
}^{\mathrm{BM}}\left( y,z\right) $ for the density of
$\phi_{L,{\Psi}}^{\mathrm{BM}}\left( y,dz\right)$
with respect to
$d$-dimensional Lebesgue measure.
\begin{lemma}
\label{Le_Approx_Phi_by_BM} There exists a constant $C$ such that for $L>0,$
and ${\Psi}\in\mathcal{M}_{L},$ we have%
\[
\sup_{y\in V_{L}}\sup_{z\in\mathbb{Z}^{d}}\left\vert \phi_{L,{\Psi}}\left(
y,z\right) -\phi_{L,{\Psi}}^{\mathrm{BM}}\left( y,z\right) \right\vert \leq
CL^{-d-1/5}\,.%
\]
\end{lemma}
\begin{lemma}
\label{Le_ThirdDerivative} There exists a constant $C$ such that for $L>0$ and
${\Psi}\in\mathcal{M}_{L},$ we have%
\[
\sup_{y,z}\left\Vert \partial_{y}^{i}\phi_{L,{\Psi}}^{\mathrm{BM}}\left(
y,z\right) \right\Vert \leq CL^{-d-i}\,, i=1,2,3\,.%
\]
\end{lemma}
The proofs of these two lemmas are again in Appendix \ref{App_A}.
We can draw two immediate conclusions from these results:
\begin{proposition}
\label{Prop_LipshitzPhi}
\begin{itemize}
\item[a)] Let $y,y^{\prime}$ be in $V_{L}$, and $\Psi\in\mathcal{M}_{L}.$ Then%
\begin{equation}
\left\vert \phi_{L,{\Psi}}\left( y,z\right) -\phi_{L,{\Psi}}\left(
y^{\prime},z\right) \right\vert \leq C\left( L^{-d-1/5}+\left\vert
y-y^{\prime}\right\vert L^{-d-1}\right) . \label{OneDerivative}%
\end{equation}
\item[b)] Let $x\in V_{L},$ and $l$ be such that $V_{l}\left( x\right)
\subset V_{L}.$ Consider a signed measure $\mu$ on $V_{l}$ with total mass $0$
and total variation norm $|\mu|$,
which is invariant under lattice isometries. Then%
\begin{equation}
\left\vert \sum\nolimits_{y}\mu\left( y-x\right) \phi_{L,{\Psi}}\left(
y,z\right) \right\vert \leq C\left\vert \mu\right\vert \left( L^{-d-1/5}%
+\left( \frac{l}{L}\right) ^{3}L^{-d}\right) \,. \label{ThreeDerivatives}%
\end{equation}
\end{itemize}
\end{proposition}
\begin{proof}
[Proof of Proposition \ref{Prop_LipshitzPhi}]a) is immediate from Lemmas
\ref{Le_Approx_Phi_by_BM} and \ref{Le_ThirdDerivative}.
As for b), we get from Lemma \ref{Le_Approx_Phi_by_BM} that
\[
\left\vert \sum\nolimits_{y}\mu\left( y-x\right) \phi_{L,{\Psi}}\left(
y,z\right) -\sum\nolimits_{y}\mu\left( y-x\right) \phi_{L,{\Psi}%
}^{\mathrm{BM}}\left( y,z\right) \right\vert \leq C\left\vert \mu\right\vert
L^{-d-1/5}\,,
\]%
while
\begin{align}
& \sum\nolimits_{y}\mu\left( y-x\right) \phi_{L,{\Psi}}^{\mathrm{BM}}\left(
y,z\right) =\sum\nolimits_{y}\mu\left( y-x\right) \left[ \phi
_{L,{\Psi}}^{\mathrm{BM}}\left( y,z\right) -\phi_{L,{\Psi}}^{\mathrm{BM}%
}\left( x,z\right) \right] \nonumber\\
&\quad =\sum\nolimits_{y}\mu\left( y-x\right) \partial_{x}\phi_{L,{\Psi}%
}^{\mathrm{BM}}\left( x,z\right) \left[ y-x\right] \label{Eq_Harmonic}\\
&\quad
\quad +\frac{1}{2}\sum\nolimits_{y}\mu\left( y-x\right) \partial_{x}^{2}%
\phi_{L,{\Psi}}^{\mathrm{BM}}\left( x,z\right) \left[ y-x,y-x\right]
+R\left( \mu,x,z\right) ,\nonumber
\end{align}
where, due to Lemma \ref{Le_ThirdDerivative},
\begin{equation}
\left\vert R\left( \mu,x,z\right) \right\vert \leq C\left\vert
\mu\right\vert \left( \frac{l}{L}\right) ^{3}L^{-d} \label{eq-080605a}%
\end{equation}
uniformly in $x$ and $z$,
and $\partial^{k}F\left[ u_{1},\ldots,u_{k}\right] $
denotes the $k$-th derivative of a function $F$
in directions $u_{1},\ldots,u_{k}$.
The first
summand on the right hand side of (\ref{Eq_Harmonic}) vanishes
because $\mu$, being invariant under the reflection $y\mapsto-y$,
has first moment $0.$ The second vanishes because, by the invariance of $\mu$ under lattice
isometries, the summand
involves only the Laplacian of $\phi_{L,{\Psi}%
}^{\mathrm{BM}}\left( \cdot,z\right) ,$ which in turn
vanishes by
harmonicity of $\pi_{C_{L}\left( 0\right) }^{\mathrm{BM}}\left(
x,\cdot\right) $ in the $x$-variable.
The proof of the proposition is complete.
\end{proof}
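To spell out the isometry argument used in the last step of the proof: invariance of
$\mu$ under permutations and reflections of the coordinates gives $\sum_{y}\mu\left(
y\right) y_{i}y_{j}=\frac{\delta_{ij}}{d}\sum_{y}\mu\left( y\right) \left\vert
y\right\vert ^{2}$, so that
\[
\sum\nolimits_{y}\mu\left( y-x\right) \partial_{x}^{2}\phi_{L,{\Psi}}^{\mathrm{BM}%
}\left( x,z\right) \left[ y-x,y-x\right] =\frac{1}{d}\Big( \sum\nolimits_{u}%
\mu\left( u\right) \left\vert u\right\vert ^{2}\Big) \sum_{i}\partial_{x_{i}}%
^{2}\phi_{L,{\Psi}}^{\mathrm{BM}}\left( x,z\right) =0
\]
by the harmonicity just mentioned.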
The next lemma gives a-priori estimates for coarse-grained walks. We use
$\hat{\pi}_{L}^{(i)}$, $i=1,2$, to denote the transitions of the coarse
grained random walk that uses the coarse graining $\mathcal{S}_{i}$, and
$\hat{g}_{L}^{(i)}$ to denote the corresponding Green's function. Note that
these quantities all depend on $L$ and $k_{0}$, but we suppress these from the
notation. Recall that $
\operatorname*{Sh} \nolimits_{L}
=
\operatorname*{Shell}%
\nolimits_{L}\left( r\left( L\right) \right)$, c.f. (\ref{Def_sL&rL}).
\begin{lemma}
\label{Cor_Green} There exists a constant $C$ (independent of $k_{0}$!) such that:
\begin{enumerate}
\item[a)]
\[
\sup_{x\in V_{L}}\hat{g}_{L}^{(1)}\left( x,
\operatorname*{Sh} \nolimits_{L}
\right) \leq C.
\]
\item[b)] If $i=1$ and $r\left( L\right) \leq a\leq3s\left( L\right) $ or
$i=2$ and $a\leq3s\left( L\right) $ then,
\[
\sup_{x\in V_{L}}\hat{g}_{L}^{(i)}\left( x,\operatorname*{Shell}%
\nolimits_{L}\left( a,2a\right) \right) \leq C.
\]
\item[c)] For all $x,y\in V_{L}\setminus{\operatorname*{Shell}}_{L}(s(L))$,
and $i=1,2$,
\[
\hat{g}_{L}^{(i)}\left( x,y\right) \leq C\left\{
\begin{array}
[c]{ll}%
\frac{1}{s(L)^{2}[|x-y|\vee s(L)]^{d-2}}, & y\neq x\\
1, & y=x\,.
\end{array}
\right.
\]
\item[d)] For $i=1,2$,
\[
\sup_{x\in V_{L}}\hat{g}_{L}^{(i)}\left( x,V_{L}\right) \leq C\left( \log
L\right) ^{6}.
\]
\item[e)] For $i=1,2$,
\[
\sup_{x,x^{\prime}\in V_{L}:\left\vert x-x^{\prime}\right\vert \leq s\left(
L\right) }\sum_{y\in V_{L}}\left\vert \hat{g}_{L}^{(i)}\left( x,y\right)
-\hat{g}_{L}^{(i)}\left( x^{\prime},y\right) \right\vert \leq C\left( \log
L\right) ^{3}%
\]
\end{enumerate}
\end{lemma}
The proof is presented in Appendix \ref{App_B}.
Lemma \ref{Cor_Green} plays a crucial role in our smoothing procedure.
As a
preparation, for $k\geq1$, set
\begin{equation}
\label{eq-080206}
B_{1}\left( k\right) \overset{\mathrm{def}}{=}\operatorname*{Shell}%
\nolimits_{L}\left( \left( 4/3\right) ^{k}r\left( L\right) \right) .
\end{equation}
Note that $B_{1}\left( k\right) \subset\operatorname*{Shell}\nolimits_{L}\left(
s\left( L\right) \right) $ if $k\leq20\log\log L$.
By Lemma \ref{Cor_Green},
there exists a constant $\bar c\geq1$ (again, independent of $k_0$!)
such that%
\begin{equation}
\sup_{x\in V_{L}}\hat{g}_{L}^{(1)} \left( x,B_{1}\left( k\right) \right)
\leq\bar c \left\{
\begin{array}
[c]{cc}%
k\,, & \mathrm{if\ }k\leq20\log\log L\\
\left( \log L\right) ^{6} & \mathrm{if\ }k>20\log\log L
\end{array}
\right. , \label{Est_BoundaryReach}%
\end{equation}
and, for any ball $V_{rs(L)}(z)\subset V_{L-s(L)}$,
$r\geq1$,
\begin{equation}
\sup_{x\in V_{L}}\hat{g}_{L}^{(1)} \left( x,V_{rs(L)}(z) \right) \leq\bar c
r^{d}\,. \label{Est_BoundaryReach1}%
\end{equation}
With $\bar{c}$ as in (\ref{Est_BoundaryReach}) and (\ref{Est_BoundaryReach1}),
we fix the constant $k_{0}$ large enough such that:
\begin{align}
k_{0} & \geq\bar{k}_{0}(1/200\bar{c}),\nonumber\\
\inf_{x\in
\operatorname*{Sh} \nolimits_{L}
}P_{x}^{\mathrm{RW}}\left( \tau_{V_{L}}<\tau_{V_{k_{0}%
r\left( L\right) }\left( x\right) }\right) & \geq9/10,\label{k_0_large}%
\\
\sup_{x\in
\operatorname*{Sh} \nolimits_{L}
}\pi_{V_{k_{0}r\left( L\right) }\left( x\right) }\left(
x,V_{L}\right) & \leq17/32.\nonumber
\end{align}
That the two last estimates in (\ref{k_0_large}) hold for $k_{0}$ large
is obvious, for example from Donsker's invariance principle.
\section{Smoothed exits\label{Sect_Smooth}}
In this section, we provide estimates
on the quantity $D_{L,\Psi}(0)$.
We use the perturbation expansion in (\ref{Pert1})
repeatedly. The main application is in comparing
exit distributions, as follows.
If $V\subset\subset\mathbb{Z}^{d}$, and $\mathcal{S}$ is any coarse graining
scheme on $V$ (as in Definition \ref{Def_CoarseGrainingScheme}), we compare
the exit distribution of the RWRE $\Pi_{V}$ with the exit distribution
$\pi_{V}$ of simple random walk through this perturbation expansion, using
however coarse grained transitions inside $V$: by (\ref{Green&Exit}) and
(\ref{EqualExits}), we get for $x\in V$%
\[
\left( \Pi_{V}-\pi_{V}\right) \left( x,\cdot\right) =\sum_{k=0}^{\infty
}\left( \hat{g}_{\mathcal{S},V}\left[ \Delta_{\mathcal{S},V}\hat
{g}_{\mathcal{S},V}\right] ^{k}\Delta_{\mathcal{S},V}\pi_{V}\right) \left(
x,\cdot\right) ,
\]
where%
\[
\Delta_{\mathcal{S},V}\overset{\mathrm{def}}{=}1_{V}\left( \hat{\Pi
}_{\mathcal{S},V}-\hat{\pi}_{\mathcal{S},V}\right) ,\ \hat{g}_{\mathcal{S}%
,V}\overset{\mathrm{def}}{=}g_{V}\left( \hat{\pi}_{\mathcal{S},V}\right) .
\]
Throughout this section, we consider only
the coarse graining scheme $\mathcal{S=S} _{1}$
as in Definition \ref{Def_SmoothingScheme}. We
keep $L$ and $V_L$ fixed, and drop throughout the
$\mathcal{S},V$ subscripts, writing $\hat \Pi, \hat \pi,\Delta$ and $\hat g$ for
$\hat{\Pi}_{\mathcal{S},V},
\hat{\pi}_{\mathcal{S},V},
\Delta_{\mathcal{S},V}$ and $\hat{g}_{\mathcal{S},V}$.
We use repeatedly the identity
\[
\hat{g}\left( x,\cdot\right) =\delta_{x,\cdot}+\hat{\pi}\hat{g}\left(
x,\cdot\right) ,\ x\in V_L.
\]
Setting, for
$k\geq1$,
\begin{equation}
\label{eq-120306e}
\zeta^{\left( k\right) }=\Delta^{k-1}\left( \Delta\hat{\pi}\hat{g}\right) ,
\end{equation}
we get%
\begin{equation}\label{eq-120306d}
\Pi_L-\pi_L=\hat{g}\sum_{m=1}^{\infty}\sum_{k_{1},\ldots,k_{m}=1}^{\infty
}\zeta^{\left( k_{1}\right) }\cdot\ldots\cdot\zeta^{\left( k_{m-1}\right)
}\Delta^{k_{m}}\pi_L
\overset{\mathrm{def}}{=}
{\mathcal{R}}_L\,.
\end{equation}
Remark that we can rewrite the second factor in $\zeta^{\left( k\right) }$ as follows:%
\[
\left( \Delta\hat{\pi}\hat{g}\right) \left( x,y\right) =\sum_{z}\left(
\Delta\hat{\pi}\right) \left( x,z\right) \left( \hat{g}\left( z,y\right)
-\hat{g}\left( x,y\right) \right) ,
\]
i.e., we gain a discrete derivative in the Green function.
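Indeed, since the total mass of $\Delta\left( x,\cdot\right) $ vanishes and $\hat{\pi
}\left( z,\cdot\right) $ is a probability measure for every $z$, we have
\[
\sum_{z}\left( \Delta\hat{\pi}\right) \left( x,z\right) =\sum_{z^{\prime}}%
\Delta\left( x,z^{\prime}\right) \sum_{z}\hat{\pi}\left( z^{\prime},z\right) =0,
\]
which justifies subtracting the constant $\hat{g}\left( x,y\right) $ above.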
We can now describe informally our basic strategy.
When analyzing the term
$D_{L,\Psi}(0)$, boundary effects are not essential, and one can
consider all steps to be coarse-grained (some extra care is however
needed near the boundary, which leads to the specific form
of the coarse graining scheme ${\mathcal {S}}_1$,
but we gloss over these details in the description
that follows).
Note that the steps of the coarse-grained random walk are essentially
in the scale $L/(\log L)^3$. In this scale, most
$x\in V_L$ are good, that is the individual steps of the coarse-grained
random walk are
controlled by the good event in the induction hypothesis.
Consider the linear
term in (\ref{eq-120306d}), that is the term with $m=1$, which turns out
to be the dominant term in the expansion.
Suppose first that all $x\in V_L$ are good, and consider
the term with $k_1=0$. In this case, each summand is smoothed at scale
$L$ from the right, and its variation norm is bounded
by $ o((\log L)^{-3})\, O((\log L)^{-9})$. A-priori estimates
on the coarse-grained simple random walk yield that the
sum over the coarse grained Green function $\hat g$ is
$O((\log L)^{6})$. This would look alarming, as multiplying these
bounds gives rise to an error which is only
$o((\log L)^{-6})$, which
could result in non-propagation of the induction hypothesis. However,
one can use the fact that the individual contributions from sites
at mutual distance larger than $\rho_{1,L}$ are independent, and of zero mean due
to the isotropy assumption. Averaging over this sum of essentially
independent random variables improves the estimate from the worst-case
value of $o((\log L)^{-6})$ back to
the desired value of $o((\log L)^{-9})$, see
the proof of Proposition \ref{prop-erwin160605}.
The terms with $k_1\geq 1$ are handled similarly, using now
the part of the induction hypothesis involving $D_{L,0}(0)$
to control the extra powers of $\Delta$ and ensure the convergence
of the series. A similar strategy is applied to the ``non-linear''
terms with $m>1$. Boundary terms are handled by using the fact that
the coarse grained random walk is unlikely to stay at distance
less than $r(L)$ from the boundary for many steps.
A major complication in handling the perturbation expansion is
the presence of
``bad regions''.
The advantage of
the coarse graining scheme $\mathcal{S=S}_{1}$ is that it is unlikely
to have more than one ``bad region'', and that this
single bad region can be handled by an appropriate surgery,
once appropriate
Green function estimates for the RWRE in a ``good environment''
are derived, see Section \ref{Subsect_greengood}.
We now turn to the actual proof, and
write $B_{L}^{\left( i\right) }$, $i=1,2,3,4,$
for the collection of
points which are bad on level $i,$ and in the right scale,
with respect to the coarse graining scheme $\mathcal{S}_{1}$. That is,
for $i=1,2,3$,
\begin{align}
\label{eq-120306b}
B_{L}^{\left( i\right) }=&
\{x\notin
\operatorname*{Sh}\nolimits_{L}:
D_{r,h_{L}(x)}\left( x\right) >\left( \log L\right) ^{-9+\frac{9(i-1)}{4}}
\,\mbox{\rm for some }\
r\in\lbrack h_{L}(x),2h_{L}(x)],
\nonumber\\
&
D_{r,h_{L}(x)}\left(
x\right) \leq\left( \log L\right) ^{-9+\frac{9i}{4}}
\mbox{\rm for all}\
r\in\lbrack h_{L}(x),2h_{L}(x)]\,,\
D_{r,0}\left(
x\right) \leq\delta
\}\,,
\end{align}
and
\begin{align}
\label{eq-120306c}
B_{L}^{\left( 4\right) }=&
\{x\notin
\operatorname*{Sh}\nolimits_{L}:
D_{r,h_{L}(x)}\left( x\right) >\left( \log L\right) ^{-\frac{9}{4}}
\ \mbox{\rm or}\
D_{r,0}\left( x\right) >\delta
\,,\nonumber \\
& \mbox{\rm for some }\
r\in\lbrack h_{L}(x),2h_{L}(x)]
\}\; \bigcup
\{x \in
\operatorname*{Sh}\nolimits_{L}:
D_{k_0r\left( L\right) ,0 }\left( x\right)
\geq\delta\}\,.
\end{align}
We also write
\begin{equation}
B_{L}\overset{\mathrm{def}}{=}\bigcup_{i=1}^4 B_{L}^{\left( i\right)}
\,,\
\operatorname*{Good}\nolimits_{L}\overset{\mathrm{def}}{=}\left\{
B_{L}=\emptyset\right\} .
\label{Def_BL}%
\end{equation}
As mentioned in the beginning of this section,
a major complication in handling the perturbation expansion is
the presence of ``bad regions''.
The advantage of
the coarse graining scheme $\mathcal{S=S}_{1}$ is that it is unlikely
to have essentially more than one ``bad region''.
To make this statement precise, note that
if $L_{1}\leq L\leq L_{1}\left( \log L_{1}\right) ^{2}$ then, provided $L_{1}$ is
chosen large enough, all the radii
involved in the definition of badness are smaller than $L_{1}$. Remark also that if $d_{L}\left( x\right) >r\left(
L\right) ,$ then $h_{L}\left( x+\cdot\right) \in\mathcal{M}_{r}$ for
$h_{L}\left( x\right) \leq r\leq2h_{L}\left( x\right) ,$
and therefore, if
$L_1$ is large enough,
$\operatorname*{Cond}\left( L_{1},\delta\right) $ holds,
and $L_{1}\leq
L\leq L_{1}\left( \log L_{1}\right) ^{2},$ then%
\begin{equation}
\mathbb{P}\left( x\in B_{L}\right) \leq2\gamma s\left( L\right)
\exp\left[ -\frac{10}{13}\left( \log\frac{\gamma L}{\left( \log L\right)
^{10}}\right) ^{2}\right] \leq\exp\left[ -0.7\left( \log L\right)
^{2}\right] \,. \label{BoundBadness}%
\end{equation}
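(For the second inequality, note that $\log\left( \gamma L\left( \log L\right)
^{-10}\right) =\left( 1-o\left( 1\right) \right) \log L$ and $\frac{10}{13}>0.75$,
while the prefactor $2\gamma s\left( L\right) \leq L$ costs only a factor $e^{\log L}$.)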
The points $y$ whose random environment $\omega_{y}$ can influence the badness
of $x$ are evidently within radius $\rho_L(x)=\rho_{1,L}\left( x\right) $
from $x$, see (\ref{Def_Rho1}). If
$\left\vert x-y\right\vert >\rho_{L}\left( x\right) +\rho_{L}\left(
y\right) ,$ then $\left\{ x\in B_{L}\right\} $ and $\left\{ y\in
B_{L}\right\} $ are independent. Therefore, if we define%
\begin{equation}
\operatorname*{TwoBad}\nolimits_{L}\overset{\mathrm{def}}{=}\bigcup_{x,y\in
V_{L}:\left\vert x-y\right\vert >\rho_{L}\left( x\right) +\rho
_{L}\left( y\right) }\left\{ x\in B_{L}\right\} \cap\left\{ y\in
B_{L}\right\} , \label{DefTwoBad}%
\end{equation}
then:
\begin{lemma}
\label{Le_TwoBad}Assume $L_{1}$ large enough, (\ref{BoundBad}) for $L_{1},$
and $L_{1}\leq L\leq L_{1}\left( \log L_{1}\right) ^{2}.$ Then%
\[
\mathbb{P}\left( \operatorname*{TwoBad}\nolimits_{L}\right) \leq\exp\left[
-1.2\left( \log L\right) ^{2}\right] .
\]
\end{lemma}
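(This follows from (\ref{BoundBadness}) together with the independence noted above: by
the definition (\ref{DefTwoBad}) and a union bound over the at most $CL^{2d}$ pairs,
$\mathbb{P}\left( \operatorname*{TwoBad}\nolimits_{L}\right) \leq CL^{2d}\exp\left[
-1.4\left( \log L\right) ^{2}\right] \leq\exp\left[ -1.2\left( \log L\right)
^{2}\right] $ for $L$ large.)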
Next, we regard $\hat{\Pi}$ as a field $\left(
\hat{\Pi}\left( x,\cdot\right) \right) _{x\in V_{L}}$ of
random transition probabilities. We define the \textquotedblleft
goodified\textquotedblright\ transition probabilities%
\begin{equation}
\label{eq-160306a}
\operatorname*{gd}\left( \hat{\Pi}\right) \left(
x,\cdot\right) \overset{\mathrm{def}}{=}\left\{
\begin{array}
[c]{cc}%
\hat{\Pi}\left( x,\cdot\right) & \mathrm{if\ }x\notin
B_{L}\\
\hat{\pi}\left( x,\cdot\right) & \mathrm{if\ }x\in B_{L}%
\end{array}
\right. .
\end{equation}
This field might no longer come from an i.i.d. RWRE, but nevertheless, we have
the property that $\operatorname*{gd}\left( \hat{\Pi}_L\right)
\left( x,\cdot\right) $ and $\operatorname*{gd}\left( \hat{\Pi
}_L\right) \left( y,\cdot\right) $ are independent provided
$\left\vert x-y\right\vert >\rho_{L}\left( x\right) +\rho_{L}\left(
y\right) .$ If $X$ is a random variable depending on $\omega$ only through
$\hat{\Pi}_L$ we define $\operatorname*{gd}\left( X\right) $
by replacing $\hat{\Pi}_L$ by $\operatorname*{gd}\left(
\hat{\Pi}_L\right) .$
We next take ${\Psi}\in\mathcal{M}_{L},$ and
set $\phi\overset{\mathrm{def}}{=}\phi_{L,{\Psi}},$ as in (\ref{Def_PhiLM}).
An easy consequence of our definitions and
Lemma \ref{lem-130705} is the following.
\begin{lemma}
\label{lem-090805}
If $\delta\leq(1/800\bar c)$ then, for all $x\in V_{L}$ and $k\geq2$,
\begin{equation}
\label{eq-090805a}\mathbf{1}_{\{B_{L}=\emptyset\}} \|\Delta^{k}(x,\cdot)\|_{1}
\leq\frac{1}{\bar c} \left( \frac18\right) ^{k} \,.
\end{equation}
\end{lemma}
\begin{proof}
Since $\max_{x\in V_{L}}\left\Vert \Delta(x,\cdot)\right\Vert _{1}\leq2$ and
$\bar{c}\geq1$, it is enough to prove that
\[
\mathbf{1}_{\{B_{L}=\emptyset\}}\sum_{z\in V_{L}}|\Delta^{2}(x,z)|\leq\left(
\frac{1}{64\bar{c}}\right) \,.
\]
If $x\not\in\operatorname*{Sh}_L$
then, on the event $\left\{ B_{L}=\emptyset\right\} $,
$\Vert\Delta(x,\cdot)\Vert_{1}\leq\delta$ and hence $\Vert\Delta^{2}%
(x,\cdot)\Vert_{1}\leq2\delta\leq1/64\bar{c}$ due to our choice of $\delta$.
On the other hand, if
$x\in\operatorname*{Sh}_L$
then on the event $\left\{
B_{L}=\emptyset\right\} $,
\begin{align}
& \sum_{z\in V_{L}}|\Delta^{2}(x,z)|=\sum_{z\in V_{L}}\left\vert
\sum\nolimits_{y\in V_{L}}\Delta(x,y)\Delta(y,z)\right\vert \label{eq-090805b}%
\\
& \leq2\left\vert \sum_{y\in{\operatorname*{Sh}}_{L}}\Delta
(x,y)\right\vert +\left\vert \sum_{y\in V_{L}\setminus{\operatorname*{Sh}%
}_{L}}\Delta(x,y)\right\vert \max_{y\in V_{L}\setminus
{\operatorname*{Sh}}_{L}}\sum_{z\in V_{L}}|\Delta(y,z)|\nonumber\\
& \leq2(\delta+\frac{1}{200\bar{c}})+2\delta=4\delta+\frac{1}{100\bar{c}%
}<\frac{1}{64\bar{c}}\,,\nonumber
\end{align}
where Lemma \ref{lem-130705} and $k_{0}\geq\bar{k}_{0}(1/200\bar{c})$ were
used in the next to last inequality.
\end{proof}
In what follows, we will always consider $\delta\leq1/800\bar c$.
\subsection{The linear part \label{SubSect_GoodLinear}}
For $x\in V_{L},\ B\subset V_{L},$ set%
\begin{align}
\xi_{x}^{\left( k\right) }\left( B,z\right) & =\sum_{y\in B}\hat
{g}\left( x,y\right) \left( \Delta^{k}\pi_L\hat{\pi}_{{\Psi}}\right) \left(
y,z\right) \label{XiB}\\
& =\sum_{y\in B}\sum_{y^{\prime}\in V_{L}}\hat{g}\left( x,y\right)
\Delta^{k}\left( y,y^{\prime}\right) \left( \phi\left( y^{\prime
},z\right) -\phi\left( y,z\right) \right)\,, \nonumber
\end{align}
where the last equality is because the total mass of $\Delta(y,\cdot)$
vanishes.
We write $\xi_{x}^{\left( k\right) }\left( z\right) $ for $\xi
_{x}^{\left( k\right) }\left( V_{L},z\right)$;
in the notation of (\ref{eq-120306d}),
$\xi_x^{\left(k\right)}\left( z\right)=\left(\hat{g}\,\Delta^{k}\phi\right)(x,z)$,
i.e. the $m=1$, $k_{1}=k$ summand of the expansion, smoothed by $\hat{\pi}_{\Psi}$.
Define%
\[
G_{L}\overset{\mathrm{def}}{=}\left\{ \sup_{x\in V_{L}}\sum\nolimits_{k\geq
1}\left\Vert \xi_{x}^{\left( k\right) }\right\Vert _{1}\leq\left( \log
L\right) ^{-37/4}\right\} .
\]
$G_L $ is precisely the event that the $m=1$
term in the perturbation
expansion (\ref{eq-120306d}), smoothed by $\hat \pi_\Psi$, is ``small''.
\begin{proposition}
\label{prop-erwin160605} If $L$ is large enough, then%
\[
\mathbb{P}\left( \left( G_{L}\right) ^{c}\cap\operatorname*{Good}%
\nolimits_{L}\right) \leq\exp\left[ -\left( \log L\right) ^{17/8}\right]
.
\]
\end{proposition}
\begin{proof}
It suffices to prove that%
\[
\sup_{x\in V_{L}}\mathbb{P}\left( \sum\nolimits_{k\geq1}\left\Vert \xi
_{x}^{\left( k\right) }\right\Vert _{1}\geq\left( \log L\right)
^{-37/4},\ \operatorname*{Good}\nolimits_{L}\right) \leq\exp\left[ -\left(
\log L\right) ^{9/4}\right]\,.
\]%
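(The proposition then follows by a union bound over the at most $CL^{d}$ points $x\in
V_{L}$, since $d\log L-\left( \log L\right) ^{9/4}\leq-\left( \log L\right) ^{17/8}$
for $L$ large.)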
Note that
\begin{align*}
& \mathbb{P}\left( \sum\nolimits_{k\geq1}\left\Vert \xi_{x}^{\left(
k\right) }\right\Vert _{1}\geq\left( \log L\right) ^{-37/4}%
,\ \operatorname*{Good}\nolimits_{L}\right) \\
&
=\mathbb{P}\left( \sum\nolimits_{k\geq1}\left\Vert \operatorname*{gd}%
\left( \xi_{x}^{\left( k\right) }\right) \right\Vert _{1}\geq\left( \log
L\right) ^{-37/4},\ \operatorname*{Good}\nolimits_{L}\right) \\
& \leq\mathbb{P}\left( \sum\nolimits_{k\geq1}\left\Vert \operatorname*{gd}%
\left( \xi_{x}^{\left( k\right) }\right) \right\Vert _{1}\geq\left( \log
L\right) ^{-37/4}\right) .
\end{align*}
For notational convenience, we drop the notation $\operatorname*{gd}\left(
\cdot\right) ,$ and just use the fact that all $\hat{\Pi}$
involved satisfy the appropriate \textquotedblleft goodness\textquotedblright%
\ properties. (Remark that after \textquotedblleft
goodifications\textquotedblright, the distribution of $\hat{\Pi}
\left( x,x+\cdot\right) $ remains invariant under lattice
isometries, provided $d_{L}\left( x\right) >2s\left( L\right) .$)
We split $\xi_{x}^{\left( k\right) }$ into different parts. If
$y\not\in \operatorname*{Sh}_L$
and $\Delta\left( y,y^{\prime}\right) >0,$
we have, since $\gamma\leq1/8$, that $\left\vert y-y^{\prime}\right\vert \leq
d_{L}\left( y\right) /4,$ i.e. $d_{L}\left( y\right) \leq\left(
4/3\right) d_{L}\left( y^{\prime}\right) .$ Therefore, if $
y^{\prime}\in \operatorname*{Sh}_L $ and $\Delta^{k}\left(
y,y^{\prime}\right) >0,$ then $d_{L}\left( y\right) \leq\left( 4/3\right)
^{k}r\left( L\right) .$ Recall the set $B_{1}(k)$, c.f.
(\ref{eq-080206}),
and the estimate
(\ref{Est_BoundaryReach}).
If $y\in B_{1}(k),$ and $\Delta^{k}\left( y,y^{\prime}\right) >0,$ we have%
$$\left\vert y-y^{\prime}\right\vert \leq kk_{0}r\left( L\right)
+3^{k}\max\left( r\left( L\right) ,d_{L}\left( y\right) \right)
\leq\left( kk_{0}+4^{k}\right) r\left( L\right) ,
$$
and applying (\ref{OneDerivative}), we see that for $y\in B_{1}(k),$ and
$y^{\prime}$ such that
$\Delta^{k}\left( y,y^{\prime}\right) >0,$ we have%
\[
\left\vert \phi\left( y,z\right) -\phi\left( y^{\prime},z\right)
\right\vert \leq C\left( kk_{0}+4^{k}\right) L^{-d}\left( \log L\right)
^{-10}.
\]
By Lemma \ref{lem-090805},
we have $\left\Vert \Delta^{k}\left( y,\cdot\right) \right\Vert _{1}\leq
8^{-k}.$ Combining all these estimates with parts b) and d) of
Lemma \ref{Cor_Green}, we have%
\begin{equation}
\left\Vert \xi_{x}^{(k)}\left( B_{1}(k)\right) \right\Vert _{1}\leq
C\left\{
\begin{array}
[c]{ll}%
8^{-k}\left( kk_{0}+4^{k}\right) \left( \log L\right) ^{-10}, &
\mathrm{if\ }k\leq20\log\log L,\\
8^{-k}\left( kk_{0}+4^{k}\right) \left( \log L\right) ^{-4}\,, &
\mathrm{if\ }k>20\log\log L.
\end{array}
\right. \label{Est_B_1(k)}%
\end{equation}
(We emphasize our convention regarding constants, and in particular the fact
that $C$ does not depend on $x$.) Hence,
\begin{equation}
\sup_{x}\sum_{k\geq1}\left\Vert \xi_{x}^{\left( k\right) }\left(
B_{1}(k)\right) \right\Vert _{1}\leq C\left( \log L\right) ^{-10}%
\leq\left( \log L\right) ^{-37/4}/3. \label{LinearEst1}%
\end{equation}
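(To verify the summation: $\sum_{k\geq1}\left( kk_{0}+4^{k}\right) 8^{-k}<\infty$,
while for $k>20\log\log L$ one has $8^{-k}4^{k}=2^{-k}\leq2^{-20\log\log L}=\left(
\log L\right) ^{-20\log2}\leq\left( \log L\right) ^{-13}$, so that the second line of
(\ref{Est_B_1(k)}) contributes at most $C\left( \log L\right) ^{-17}$.)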
Next, let%
\[
B_{2}(k)\overset{\mathrm{def}}{=}\operatorname*{Shell}\nolimits_{L}\left(
\left( 4/3\right) ^{k}r\left( L\right) ,\left( 5/4\right) ^{k}2s\left(
L\right) \right) .
\]
If $y\in B_{2}(k)$ and $\Delta^{k}\left( y,y^{\prime}\right) >0,$ we have
$d_{L}\left( y^{\prime}\right) >r\left( L\right) ,$ and we get, using the
fact that for $x\not\in \operatorname*{Sh}_L$
one can write
$\pi_L\left( x,\cdot\right) =\left( \hat{\pi}\pi_L\right) \left(
x,\cdot\right) ,$%
\[
\xi_{x}^{\left( k\right) }\left( B_{2}(k),z\right) =\sum_{y\in B_{2}%
(k)}\sum_{y^{\prime}\in V_{L}}\hat{g}\left( x,y\right) D_{k}\left(
y,y^{\prime}\right) \left( \phi\left( y,z\right) -\phi\left( y^{\prime
},z\right) \right) ,
\]
where%
\begin{equation}
D_{k}\overset{\mathrm{def}}{=}\Delta^{k}\hat{\pi}, \label{Dk}%
\end{equation}
and, on a ``good'' environment,
\begin{align}
\sup_{y\in B_{2}(k)}\left\Vert D_{k}\left( y,\cdot\right) \right\Vert _{1}
& \leq\sup_{y\in B_{2}(k)}\left\Vert \Delta^{k-1}\left( y,\cdot\right)
\right\Vert _{1}\sup_{x:d_{L}\left( x\right) >r\left( L\right) }\left\Vert
\Delta\hat{\pi}\left( x,\cdot\right) \right\Vert _{1}\label{ESt_Dk}\\
& \leq C8^{-k}\left( \log L\right) ^{-9}.\nonumber
\end{align}
Using Lemma \ref{Cor_Green} b), we have $\sup_{x}\hat{g}\left(
x,\operatorname*{Shell}\nolimits_{L}\left( 3s\left( L\right) \right)
\right) \leq C\log\log L.$ Put%
\[
A_{j}\overset{\mathrm{def}}{=}\operatorname*{Shell}\nolimits_{L}\left(
\left( 2+\left( j-1\right) /4\right) s\left( L\right) ,\left(
2+j/4\right) s\left( L\right) \right) ,j\geq1.
\]
Starting from a point in $A_{j},$ $j\geq3,$ the coarse grained
simple random walk has
a probability $\geq1/C$ to reach $A_{j-2}$ in one step. Starting from
$A_{j-2},$ an ordinary random walk has a probability $\geq1/C$ to leave
$V_{L+k_{0}r\left( L\right) }$ before reaching $A_{j},$ and therefore, the
coarse grained simple random walk
leaves $V_{L}$ before reaching $A_{j}$ with at least the
same probability. Therefore $\sup_{x}\hat{g}\left( x,A_{j}\right) \leq Cj,$
and thus,%
\[
\sup_{x}\hat{g}\left( x,B_{2}(k)\right) \leq C\left( \left( \frac{5}%
{4}\right) ^{2k}+\log\log L\right) \leq C\left( 2^{k}+\log\log L\right)
\,.
\]
If $y\in B_{2}(k),$ and $\Delta^{k}\left( y,y^{\prime}\right) >0$, then
$\left\vert y-y^{\prime}\right\vert \leq2ks\left( L\right) ,$ and therefore,
\[
\left\vert \phi\left( y,z\right) -\phi\left( y^{\prime},z\right)
\right\vert \leq CkL^{-d}\left( \log L\right) ^{-3},
\]
again by (\ref{OneDerivative}). Therefore, we get%
\begin{align}
\left\Vert \xi_{x}^{\left( k\right) }\left( B_{2}(k),\cdot\right)
\right\Vert _{1} & \leq Ck\left( \log L\right) ^{-12}\left[ 4^{-k}%
+8^{-k}\log\log L\right] ,\nonumber\\
\sup_{x}\sum_{k\geq1}\left\Vert \xi_{x}^{\left( k\right) }\left(
B_{2}(k),\cdot\right) \right\Vert _{1} & \leq\left( \log L\right)
^{-37/4}/3. \label{LinearEst2}%
\end{align}
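(Here one uses $\sum_{k\geq1}k\left[ 4^{-k}+8^{-k}\log\log L\right] \leq C\log\log L$
and $\left( \log L\right) ^{-12}\log\log L=o\left( \left( \log L\right)
^{-37/4}\right) $.)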
Let $B_{3}(k)\overset{\mathrm{def}}{=}V_{L}\backslash\left( B_{1}(k)\cup
B_{2}(k)\right) .$ Given $j\in\mathbb{Z},$ let%
\[
I_{j}\overset{\mathrm{def}}{=}\left\{ jks\left( L\right) +1,\ldots,\left(
j+1\right) ks\left( L\right) \right\} .
\]
Then for $\mathbf{j}\in\mathbb{Z}^{d},$ put $W_{\mathbf{j},k}\overset
{\mathrm{def}}{=}B_{3}(k)\cap I_{j_{1}}\times\cdots\times I_{j_{d}},$
so that
$\operatorname*{diameter}\left( W_{\mathbf{j},k}\right) \leq\sqrt
{d}ks\left( L\right).$
Let $J_k$
be the set of $\mathbf{j}$'s for which these sets are not empty. We
subdivide
$J_k$ into
subsets $J_{1,k},\ldots,J_{K\left( d,k\right),k }$ such that for any
$1\leq \ell \leq K\left( d,k\right) ,$%
\begin{equation}
\mathbf{j},\mathbf{j}^{\prime}\in J_{\ell,k},\ \mathbf{j}\neq\mathbf{j}^{\prime
}\Longrightarrow d\left( W_{\mathbf{j},k},W_{\mathbf{j}^{\prime},k}\right)
>ks\left( L\right) . \label{Box_Interdistance}%
\end{equation}
We set, recalling (\ref{Dk}),%
\begin{equation}
\xi_{x,\mathbf{j}}^{\left( k\right) }\left( z\right) \overset
{\mathrm{def}}{=}\sum_{y\in W_{\mathbf{j},k}}\sum_{y^{\prime}\in V_{L}}\hat
{g}\left( x,y\right) D_{k}\left( y,y^{\prime}\right) \left( \phi\left(
y,z\right) -\phi\left( y^{\prime},z\right) \right) . \label{Xi_Lj}%
\end{equation}
We fix for the moment $k$ and $x.$ If $t>0$, and%
\begin{equation}
\sum\nolimits_{\mathbf{j}}\mathbb{E}\xi_{x,\mathbf{j}}^{\left( k\right)
}\left( z\right) \leq t/2, \label{Annealed}%
\end{equation}
then we have%
\begin{multline*}
\mathbb{P}\left( \left\Vert \xi_{x}^{\left( k\right) }\left(
B_{3}(k),\cdot\right) \right\Vert _{1}\geq t\right) \leq\mathbb{P}\left(
\left\vert \sum\nolimits_{\mathbf{j}}\left( \xi_{x,\mathbf{j}}^{\left(
k\right) }\left( z\right) -\mathbb{E}\xi_{x,\mathbf{j}}^{\left( k\right)
}\left( z\right) \right) \right\vert \geq t/2\right) \\
\leq K\left( d,k\right) \max_{1\leq \ell \leq K\left( d,k\right)
}\mathbb{P}%
\left( \left\vert \sum\nolimits_{\mathbf{j}\in J_{\ell,k}}\left( \xi
_{x,\mathbf{j}}^{\left( k\right) }\left( z\right) -\mathbb{E}%
\xi_{x,\mathbf{j}}^{\left( k\right) }\left( z\right) \right) \right\vert
\geq t/\left( 2K\left( d,k\right) \right) \right) .
\end{multline*}
The random variables $\xi_{x,\mathbf{j}}^{\left( k\right) }\left( z\right)
-\mathbb{E}\xi_{x,\mathbf{j}}^{\left( k\right) }\left( z\right) $,
$\mathbf{j}\in J_{\ell,k}$, are independent and centered, due to
(\ref{Box_Interdistance}), and we are going to estimate their sup-norm.
We have by (\ref{OneDerivative}) that
$|\phi\left( y,z\right) -\phi\left( y^{\prime},z\right) |\leq
C k\left( \log L\right) ^{-3}L^{-d}$ for
$y,y^{\prime}$ for which $D_{k}\left( y,y^{\prime}\right) \neq0.$ According
to Lemma \ref{Cor_Green} c), we have%
\[
\hat{g}\left( x,W_{\mathbf{j},k}\right) \leq Ck^{d}\left( 1+\frac{d\left(
x,W_{\mathbf{j},k}\right) }{s\left( L\right) }\right) ^{-d+2}.
\]
Substituting that into (\ref{Xi_Lj}), we get%
\[
\left\Vert \xi_{x,\mathbf{j}}^{\left( k\right) }\left( z\right)
\right\Vert _{\infty}\leq Ck^{d+1}8^{-k}\left( 1+\frac{d\left(
x,W_{\mathbf{j},k}\right) }{s\left( L\right) }\right) ^{-d+2}L^{-d}\left(
\log L\right) ^{-12}.
\]
By Hoeffding's inequality (see e.g. \cite[(1.23)]{Ledoux} ), we have for
$1\leq \ell \leq K\left(d,k\right) $%
\begin{align*}
& \mathbb{P}\left( \left\vert \sum\nolimits_{\mathbf{j}\in J_{\ell,k}}\left(
\xi_{x,\mathbf{j}}^{\left( k\right) }\left( z\right) -\mathbb{E}%
\xi_{x,\mathbf{j}}^{\left( k\right) }\left( z\right) \right) \right\vert
\geq\frac{2^{-k}L^{-d}}{2K\left(d,k\right) \left( \log L\right) ^{37/4}%
}\right) \\
& \leq2\exp\left[ -\frac{1}{C}\frac{\left( \log L\right) ^{-37/2}%
}{k^{2d+2}4^{-2k}\left( \log L\right) ^{-24}\sum\nolimits_{r=1}^{C\left(
\log L\right) ^{3}}r^{-d+3}}\right] \leq2\exp\left[ -\frac{1}{C}%
\frac{\left( \log L\right) ^{5/2}}{k^{2d+2}4^{-2k}}\right] \,,
\end{align*}
where we used $d\geq3$ in the last inequality. The upshot of this estimate is
that provided (\ref{Annealed}) holds true with $t=t(k)=
2^{-k}L^{-d}\left( \log
L\right) ^{-37/4},$ we have%
\begin{align*}
\sup_{x}\mathbb{P}\left( \sum\nolimits_{k\geq1}\left\Vert \xi_{x}^{\left(
k\right) }\left( B_{3}(k)\right) \right\Vert _{1}\geq\left( \log L\right)
^{-37/4}\right) & \leq2\sum_{k\geq1}K(d,k)
\exp\left[ -\frac{1}{C}\frac{\left(
\log L\right) ^{5/2}}{k^{2d+2}4^{-2k}}\right] \\
& \leq\exp\left[ -\left( \log L\right) ^{17/8}\right] .
\end{align*}
It remains to prove (\ref{Annealed}) with this $t$. Write
\[
\sum_{\mathbf{j}}\mathbb{E}\xi_{x,\mathbf{j}}^{\left( k\right) }\left(
z\right) =\sum_{y\in B_{3}(k)}\sum_{y^{\prime}\in V_{L}}\hat{g}\left(
x,y\right) \mathbb{E}\left( D_{k}\left( y,y^{\prime}\right) \right)
\left( \phi\left( y,z\right) -\phi\left( y^{\prime},z\right) \right) .
\]
For every $y,$ $y^{\prime}\mapsto\mathbb{E}\left( D_{k}\left( y,y^{\prime
}\right) \right) $ is a signed measure with total mass $0,$ which is
invariant under lattice isometries. Furthermore%
\[
\sum_{y^{\prime}}\left\vert \mathbb{E}\left( D_{k}\left( y,y^{\prime
}\right) \right) \right\vert \leq C8^{-k}\left( \log L\right) ^{-9}.
\]
Applying (\ref{ThreeDerivatives}), we get%
\begin{align*}
& \left\vert \sum\nolimits_{y^{\prime}}\mathbb{E}\left( D_{k}\left(
y,y^{\prime}\right) \right) \left( \phi\left( y,z\right) -\phi\left(
y^{\prime},z\right) \right) \right\vert \\
& \leq C8^{-k}\left( \log L\right) ^{-9}\left( L^{-d-1/5}+\left(
\frac{Lk\left( \log L\right) ^{-3}}{L}\right) ^{3}L^{-d}\right) \leq
C4^{-k}\left( \log L\right) ^{-18}L^{-d},
\end{align*}
uniformly in $y\in B_{3}(k),$ and $k.$ By Lemma \ref{Cor_Green} d), we have%
\[
\sup_{x}\sum_{y\in B_{3}(k)}\hat{g}\left( x,y\right) \leq C\left( \log
L\right) ^{6}.
\]
From this (\ref{Annealed}) follows.
\end{proof}
\subsection{The non-linear part, good environment \label{SubSect_Nonlinear}}
\begin{proposition}
\label{Prop_Main_no_bad}If $L$ is large enough and $\Psi\in\mathcal{M}_{L},$
then, with
$ D_{L,\Psi}\left( 0\right)$ as in
(\ref{Def_DL}),
\[
\mathbb{P}\left( D_{L,\Psi}\left( 0\right) \geq\left( \log L\right)
^{-9};\operatorname*{Good}\nolimits_{L}\right) \leq\exp\left[ -\left( \log
L\right) ^{17/8}\right] .
\]
\end{proposition}
\begin{proof}
We recall the abbreviation $\operatorname*{Sh}_{L}
\overset{\mathrm{def}}{=}%
\operatorname*{Shell}\nolimits_{L}\left( r\left( L\right) \right), $
c.f. (\ref{Def_sL&rL}).
By Proposition \ref{prop-erwin160605}, it suffices to estimate
on $G_{L}\cap\operatorname*{Good}\nolimits_{L}$ the expression
$\|{\mathcal{R}}_L \hat \pi_{\Psi}\|_1$, c.f. (\ref{eq-120306d}), where
\begin{equation}
{\mathcal{R}}_{L}\hat\pi_{\Psi}
\overset{\mathrm{def}}{=}\sum_{m=1}^{\infty}\sum_{k_{1},\ldots,k_{m}%
=0}^{\infty}\left( \hat{g}\Delta^{k_{1}}\Delta\hat{\pi}
1_{V_{L}}
\right) \cdot
\ldots\cdot\left( \hat{g}\Delta^{k_{m}}\Delta\hat{\pi}1_{V_{L}}\right) \sum
_{k=1}^{\infty}\left( \hat{g}\Delta^{k}\phi\right) , \label{NonLinearSplit}%
\end{equation}
and $\phi=\pi_L\hat \pi_{\Psi}$. The last factor in the
right hand side of (\ref{NonLinearSplit}) is
$\sum_{k=1}^{\infty}\xi_{\cdot}^{\left( k\right) }$ of the previous subsection, and
therefore, it suffices to
show that on $\operatorname*{Good}\nolimits_{L},$
\begin{equation}
\sup_{x}\sum_{k\geq0}\left\Vert \left( \hat{g}\Delta^{k}\Delta\hat{\pi
}1_{V_{L}}
\right) \left( x,\cdot\right) \right\Vert _{1}\leq15/16. \label{Est_15/16}%
\end{equation}
Using the definition of
$\operatorname*{Good}\nolimits_{L}$
in the first inequality and Lemma \ref{Cor_Green} d)
together with Lemma \ref{lem-090805} in the second, we get
$$\sup_{y\notin \operatorname*{Sh}_{L}}
\left\Vert \left( \Delta\hat{\pi}\right) \left(
y,\cdot\right) \right\Vert _{1} \leq C\left( \log L\right) ^{-9}\,,\
\sum_{k\geq0}\sup_{x}\left\Vert \left( \hat{g}\Delta^{k}\right) \left(
x,\cdot\right) \right\Vert _{1} \leq C\left( \log L\right) ^{6}.
$$
Therefore, we have%
\[
\sum_{k\geq0}\sup_{x}\left\Vert \sum\nolimits_{y\notin \operatorname*{Sh}_{L}}
\left( \hat
{g}\Delta^{k}\right) \left( x,y\right) \left( \Delta\hat{\pi}\right)
\left( y,\cdot\right) \right\Vert _{1}\leq1/16,
\]
if $L$ is large enough, and in order to prove (\ref{Est_15/16}) it therefore
suffices to prove%
\[
\sum_{k\geq0}\sup_{x}\left\Vert \sum\nolimits_{y\in \operatorname*{Sh}_{L}}
\left( \hat
{g}\Delta^{k}\right) \left( x,y\right) \left( \Delta\hat{\pi}1_{V_{L}}
\right)
\left( y,\cdot\right) \right\Vert _{1}\leq7/8.
\]
As in the proof of Proposition \ref{prop-erwin160605}, if $\Delta
^{k}(z,y)>0$ for $y\in \operatorname*{Sh}_{L}$
then $z\in B_{1}(k)$. Hence, using
(\ref{Est_BoundaryReach}) and Lemma \ref{lem-090805},
\begin{align*}
& \sum_{k\geq1}\sup_{x}\left\Vert \sum\nolimits_{y\in
\operatorname*{Sh}_{L}}\left( \hat
{g}\Delta^{k}\right) \left( x,y\right) \left( \Delta\hat{\pi}\right)
\left( y,\cdot\right) \right\Vert _{1}
\leq\sum_{k\geq1}\sup_{x}\hat{g}(x,B_{1}(k))\sup_{z\in B_{1}(k)}\left\Vert
\Delta^{k+1}(z,\cdot)\right\Vert _{1}\\
& \leq\sum_{k=2}^{20\log\log L+1}k\left( \frac{1}{8}\right) ^{k}%
+\sum_{k\geq20\log\log L+2}(\log L)^{6}\left( \frac{1}{8}\right) ^{k}<\frac
{1}{8}\,.%
\end{align*}
Therefore, it suffices to prove%
\begin{equation}
\sup_{x}\left\Vert \sum\nolimits_{y\in \operatorname*{Sh}_{L}}
\hat{g}\left( x,y\right)
\left( \Delta\hat{\pi}1_{V_{L}}
\right) \left( y,\cdot\right) \right\Vert _{1}%
\leq3/4. \label{Est_3/4}%
\end{equation}
From the second part
of (\ref{k_0_large}) it follows that%
\[
\sup_{x\in V_{L}}\hat{g}\left( x,\operatorname*{Sh}\nolimits_{L}\right)
\leq10/9\,,\
\sup_{x\in \operatorname*{Sh}_{L}}
\pi_{V_{k_{0}r\left( L\right) }\left( x\right) }\left(
x,V_{L}\right) \leq 1/10.
\]
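(The first bound follows from the second, recalling that for $x\in\operatorname*{Sh}%
\nolimits_{L}$ the coarse grained step is distributed according to $\pi_{V_{k_{0}%
r\left( L\right) }\left( x\right) }\left( x,\cdot\right) $: each time the coarse
grained walk visits $\operatorname*{Sh}\nolimits_{L}$, its next step stays inside
$V_{L}$ with probability at most $1/10$, so the number of visits to $\operatorname*{Sh}%
\nolimits_{L}$ is dominated by a geometric random variable and $\hat{g}\left(
x,\operatorname*{Sh}\nolimits_{L}\right) \leq\sum_{n\geq0}10^{-n}=10/9$.)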
By the third part
of (\ref{k_0_large}), and the choice $\delta<1/800<1/32$,
we get%
\[
\sup_{x\in \operatorname*{Sh}_{L}}
\Pi_{V_{k_{0}r\left( L\right) }\left( x\right) }\left(
x,V_{L}\right) \leq \delta+17/32\leq 9/16.
\]
Combining that, we get%
\begin{align*}
& \sup_{x}\left\Vert \sum\nolimits_{y\in \operatorname*{Sh}_{L}}
\hat{g}\left( x,y\right)
\left( \Delta\hat{\pi}1_{V_{L}}\right) \left( y,\cdot\right) \right\Vert
_{1}\\
\leq & \sup_{x}\sum\nolimits_{y\in \operatorname*{Sh}_{L}}
\hat{g}\left( x,y\right)
\Pi_{V_{k_{0}r\left( L\right) }\left( y\right) }\left( y,V_{L}\right)
+\sup_{x}\sum\nolimits_{y\in \operatorname*{Sh}_{L}}
\hat{g}\left( x,y\right) \pi
_{V_{k_{0}r\left( L\right) }\left( y\right) }\left( y,V_{L}\right) \\
\leq & \frac{10}{9}\cdot \frac{9}{16}+
\frac{10}{9}\cdot \frac{1}{10} <
\frac{3}{4},
\end{align*}
proving (\ref{Est_3/4}).
We conclude that
$\sup_{x\in V_{L}}\left\Vert
{\mathcal{R}}_{L}\hat\pi_{\Psi}
\left( x,\cdot\right) \right\Vert _{1}\leq
C\left( \log L\right) ^{-37/4}%
$
on $G_{L}\cap\operatorname*{Good}\nolimits_{L}.$
\end{proof}
\subsection{Green function estimates in a goodified environment
\label{Subsect_greengood}}
Before proceeding to analyze environments where bad regions are
present, we consider first ``goodified'' transition
kernels $\operatorname*{gd}\left( \hat{\Pi}\right)$,
c.f. (\ref{eq-160306a}).
We write
$\tilde{G}_{L}$ for the Green function corresponding to
this transition kernel. The goal of this section is to
derive some estimates on $\tilde{G}_L$, which will be useful
in handling the event
$\left( \operatorname*{Good}_{L}\cup\operatorname*{TwoBad}_{L}\right)
^{c}$.
Recall the range $\rho=\rho_{1,L}$, c.f. (\ref{Def_Rho1}), and consider the
collection
\begin{equation}
\label{eq-160306b}
\mathcal{D}_{L}=\left\{ V_{5\rho\left(
x\right) }\left( x\right) \,, x\in V_{L} \right\}\,.
\end{equation}
\begin{lemma}
\label{lem-220705a}
There exists a constant $c_{0}$ such that for
all
$D\in\mathcal{D}_{L}$ with $D\cap {\operatorname*{Shell}}_{L}(L/2)\neq \emptyset$,%
\begin{equation}
\tilde{G}_{L}(0,D)\leq c_{0}\left[ \frac{\mbox{\rm diam}(D)^{d-2}\left(
\max_{y\in D}d_{L}(y)\vee s(L)\right) }{L^{d-1}}\right] \,.
\label{eq-220705b}%
\end{equation}
Further, there exists a constant $c_{1}\geq1$ such that
for all
$D\in\mathcal{D}_{L}$,
\begin{equation}
\sup_{y\in V_{L}}\tilde{G}_{L}(y,D)\leq c_{1}\,. \label{eq-110805d}%
\end{equation}
\end{lemma}
\begin{proof}
[Proof of Lemma \ref{lem-220705a}]
We begin by establishing some auxiliary estimates for the unperturbed Green
function $\hat g=\hat g_L$. We first show that there is a constant $C$ such that
for any
$D\in\mathcal{D}_{L}$,
\begin{equation}
\label{eq-120805d}\sup_{y\in V_{L}} \hat g(y,D)=
\sup_{y\in D} \hat g(y,D)
\leq C\,.
\end{equation}
For $D$ such
that $D\cap{\operatorname*{Shell}\nolimits}_{L}(2s(L)) \neq\emptyset$,
the estimate (\ref{eq-120805d})
is an immediate consequence of parts a) and b) of Lemma
\ref{Cor_Green}. If $D\cap{\operatorname*{Shell}\nolimits}_{L}(2
s(L))=\emptyset$, then
$$\max_{y,z\in D}
\hat{g}(y,z) =
\max_{y\in D}
\hat{g}(y,y) \leq
1+ \max_{z:\left\vert z-y\right\vert \geq\gamma s(L)}\hat g(z,y)
\leq 1 +\frac{C}{s(L)^d}
\,,$$
where $C$ depends on $\gamma$ and the second inequality follows
from part c) of Lemma \ref{Cor_Green}.
Summing over $z\in D$ completes the proof of
(\ref{eq-120805d}).
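(For this last step note that, by part c) of Lemma \ref{Cor_Green}, the off-diagonal
terms are each at most $Cs(L)^{-d}$, while $\left\vert D\right\vert \leq Cs(L)^{d}$
since $\rho\left( x\right) \leq\gamma s\left( L\right) $, c.f. (\ref{Def_Rho1}).)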
We next note that, for any $z\in V_{L}$,
\[
\hat g(z,D)\leq P_{z}^{\mathrm{RW}}(T_{D}<\tau_{V_{L}}) \max_{w\in D}
\hat g(w,D)\,.
\]
Applying (\ref{eq-120805d}) and Lemma \ref{Le_MainExit}, we deduce that for
some constant $C_{0}$,
\begin{equation}
\label{eq-120805e}\hat g(z,D)\leq C_{0}\left[ \frac
{\mbox{\rm diam}(D)^{d-2} d_{L}(z) \max_{y\in D} d_{L}(y)}{d(z,D)^{d}}
\wedge 1\right] \,.
\end{equation}
We now turn to proving (\ref{eq-110805d}).
Write the perturbation expansion
\begin{equation}
\label{eq-120805h}\tilde G_{L}(z,D)-\hat g(z,D)= \sum_{k\geq1}
\sum_{y,y^{\prime},w} \hat g(z,y)\Delta^{k}(y,y^{\prime})\hat\pi
(y^{\prime},w)\hat g(w,D) +\mbox{\rm NL}\,,
\end{equation}
where $\mbox{\rm NL}$ denotes the nonlinear term in the perturbation
expansion, that is
\begin{equation}
\mbox{\rm NL}= \sum_{m=2}^{\infty}\sum_{k_{1},\ldots,k_{m}=0}^{\infty} \left(
\hat{g}_{L}\Delta^{k_{1}}\Delta\hat{\pi}\right) \cdot\ldots\cdot\left(
\hat{g}_{L}\Delta^{k_{m-1}}\Delta\hat{\pi}\right) \left( \hat{g}_{L}%
\Delta^{k_{m}}\Delta\hat g(\cdot,D)\right) \,. \label{NonLinearSplit11}%
\end{equation}
We first handle the linear term in (\ref{eq-120805h}).
Using (\ref{eq-120805d}), part d) of Lemma
\ref{Cor_Green}, and Lemma \ref{lem-090805},
we see that in a goodified environment, for each $k\geq1$,
\begin{equation}
\label{eq-120805a1}|\sum_{y,y^{\prime},w: d_{L}(y^{\prime})\geq
k_{0}r(L)} \hat g(z,y)\Delta^{k}(y,y^{\prime})\hat\pi(y^{\prime},w)\hat
g(w,D)| \leq \frac{C(\log L)^{6-9}}{ 8^{k}}\,,
\end{equation}
and
\begin{equation}
\label{eq-120805a2}| \sum_{y,y^{\prime},w} \hat g(z,y)\Delta
^{k}(y,y^{\prime})\hat\pi(y^{\prime},w)\hat g(w,D)| \leq \frac{C(\log L)^{6}}
{8^{k}}\,.
\end{equation}
From (\ref{eq-120805a2}) it follows that
\begin{equation}
\label{eq-120805a3}|\sum_{k\geq20\log\log L} \sum_{y,y^{\prime},w} \hat
g(z,y)\Delta^{k}(y,y^{\prime})\hat\pi(y^{\prime},w)\hat g(w,D)| \leq C
(\log L)^{-9}\,.
\end{equation}
On the other hand, if $d_{L}(y^{\prime})\leq k_{0} r(L)$ and $\Delta
^{k}(y,y^{\prime}) >0$ then, as in the proof of Proposition
\ref{prop-erwin160605}, $d_{L}(y)\leq(4/3)^{k} k_{0} r(L)$. Using parts a),b)
of Lemma \ref{Cor_Green}, we get that for $k\leq20 \log\log L$,
\begin{equation}
\label{eq-120805a4}| \sum_{y,y^{\prime},w: d_{L}(y^{\prime})\leq k_{0}r(L)}
\hat g(z,y)\Delta^{k}(y,y^{\prime})\hat\pi(y^{\prime},w)\hat g(w,D)|
\leq Ck (1/8)^{k}\,.%
\end{equation}
Combining (\ref{eq-120805a1}), (\ref{eq-120805a3}) and (\ref{eq-120805a4}), we
conclude that
\[
\sup_{z\in V_{L}} \sum_{k\geq1} \Big\vert \sum_{y,y^{\prime},w} \hat g%
(z,y)\Delta^{k}(y,y^{\prime})\hat\pi(y^{\prime},w)\hat g(w,D)\Big\vert \leq C\,.
\]
The term involving $\mbox{\rm NL}$ is handled by recalling that
\[
\sup_{x}\sum_{k\geq0}\left\Vert \left( \hat{g}_L
\Delta^{k}\Delta\hat{\pi
}\right) \left( x,\cdot\right) \right\Vert _{1}\leq15/16\,,
\]
see (\ref{Est_15/16}). We then conclude, using (\ref{eq-120805d}), that
(\ref{eq-110805d}) holds.
To prove (\ref{eq-220705b}), our starting point is the perturbation expansion
(\ref{eq-120805h}). Again, the main contribution is the linear term.
From (\ref{eq-120805a2}) one deduces that
there exists a constant $c_{d}$ such that for all $L$ large,
\begin{equation}
\label{eq-130805a}\sum_{k\geq c_{d} \log\log L} \sum_{y,y^{\prime},w} \hat
g(0,y)\Delta^{k}(y,y^{\prime})\hat\pi(y^{\prime},w)\hat g(w,D)
\leq\left( \frac{r(L)}{L}\right) ^{d-1}\,.
\end{equation}
We divide the sum in the linear term according to the location of $w$
with respect to $D$, writing
\begin{equation}
\label{eq-130805b}\sum_{y,y^{\prime},w} \hat g(0,y)\Delta^{k}(y,y^{\prime
})\hat\pi(y^{\prime},w)\hat g(w,D) =\sum_{y,y^{\prime}}\hat g(0,y)
\Delta^{k}(y,y^{\prime}) \sum_{j=1}^{2} \sum_{w\in B_{j}} \hat
\pi(y^{\prime},w)\hat g(w,D)\,,
\end{equation}
where
\[
B_{1}=\{w\in V_{L}: d(w,D)\leq L/8\}\,,\quad B_{2}=\{w\in V_{L}:
d(w,D)> L/8\}\,.
\]
Considering the term involving $B_{1}$, for $k<c_{d} \log\log L$ the
summation over $y$ extends over a subset of $V_{L}$ that is covered by at most
$Ck^{d}$ elements of ${\mathcal{D}}_{L}$, all inside ${\operatorname*{Shell}%
\nolimits}_{L}(3L/4)$. Thus, for such $k$, using (\ref{eq-120805e})
to bound $\hat g(0,y)$, (\ref{eq-120805d}) to bound $\hat g(w,D)$,
and Lemma \ref{lem-090805}, we get
\[
\sum_{y,y^{\prime}}\hat g(0,y) \Delta^{k}(y,y^{\prime}) \sum_{w\in
B_{1}} \hat\pi(y^{\prime},w)\hat g(w,D) \leq
C \left( \frac{1+\gamma}{8}\right) ^{k} k^{d} \frac{\mbox{\rm diam}(D)^{d-2}
\max_{y\in D} d_{L}(y)}{L^{d-1}}
\]
and hence
\begin{equation}
\label{eq-130805c}\sum_{k\leq c_{d} \log\log L} \sum_{y,y^{\prime}} \hat
g(0,y)\Delta^{k}(y,y^{\prime})\sum_{w\in B_{1}} \hat\pi(y^{\prime
},w)\hat g(w,D) \leq C \frac{\mbox{\rm diam}(D)^{d-2} \max_{y\in D}
d_{L}(y)}{L^{d-1}}\,.
\end{equation}
The term involving $w\in B_{2}$ is simpler: indeed, one has in that case
that $\hat g(w,D)$ satisfies, by (\ref{eq-120805e}), the required bound,
whereas for $k< c_d \log\log L$, using
(\ref{eq-120805d}),
\[
\sum_{y: \exists y^{\prime}\,\mbox{\rm with}\, \Delta^{k}(y,y^{\prime})
\hat\pi(y^{\prime},w)>0} \hat g(0,y) \leq C k^{d}\,,
\]
yielding
\begin{align}
\label{eq-130805d} & \sum_{k\leq c_{d} \log\log L} \sum_{y,y^{\prime}} \hat
g(0,y)\Delta^{k}(y,y^{\prime})\sum_{w\in B_{2}} \hat\pi(y^{\prime
},w)\hat g(w,D)\nonumber\\
& \leq C \sum_{k\leq c_{d} \log\log L} k^{d} (1/8)^{k} \frac
{\mbox{\rm diam}(D)^{d-2} \max_{y\in D} d_{L}(y)}{L^{d-1}}\,.
\end{align}
Combining (\ref{eq-130805a}), (\ref{eq-130805c}) and (\ref{eq-130805d})
results in the required control on the linear term in (\ref{eq-120805h}). The
nonlinear term is even simpler and similar to the handling of the nonlinear
term when estimating $\hat g(z,D)$.
\end{proof}
\subsection{Presence of bad regions\label{Subsect_BadPoints}}
On $\left( \operatorname*{Good}_{L}\cup\operatorname*{TwoBad}_{L}\right)
^{c},$ it is clear that for some
$D\in\mathcal{D}_{L}$, c.f.
(\ref{eq-160306b}),
we have
\begin{equation}
B_{L}\subset D\,.
\label{BL_included_in_ball}%
\end{equation}
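(Indeed, on this event any two points $x,y\in B_{L}$ satisfy $\left\vert x-y\right\vert
\leq\rho_{L}\left( x\right) +\rho_{L}\left( y\right) $; choosing as center of $D$ a
point $x_{0}\in B_{L}$ with $\rho_{L}\left( x_{0}\right) $ maximal, every $y\in B_{L}$
then satisfies $\left\vert y-x_{0}\right\vert \leq2\rho_{L}\left( x_{0}\right)
<5\rho_{L}\left( x_{0}\right) $.)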
We write $\operatorname*{Bad}_{L}\left( D\right) $ for the event that
$\left\{ B_{L}\subset D\right\} ,$ and $\operatorname*{Bad}_{L}^{(i)}\left(
D\right) $ for the event that $\left\{ B_{L}^{(i)}\subset D\right\} $,
$i=1,2,3,4$.
The main aim of this section is to prove the following.
\begin{proposition}
\label{prop-erwin220705} There exists a $\delta_{0}\leq1/800\bar c$ such that
if $\delta<\delta_{0}$, and if $\operatorname*{Cond}\left( L_{1}%
,\delta\right) $ holds for a given $L_{1}$, and if $L\leq L_{1}\left( \log
L_{1}\right) ^{2}$ and ${\Psi}\in\mathcal{M}_{L}$, then,
for $i=1,2,3,4$,
$$\sup_{D\in\mathcal{D}_{L}}\mathbb{P}\left( D_{L,\Psi}(0)
\geq\left( \log L\right) ^{-9+\frac{9(i-1)}{4}}
,\ \operatorname*{Bad}\nolimits_{L}^{\left( i\right) }\left( D\right)
\right)
\leq
\frac{\exp\left[ -\left( \log L\right) ^{2}\right]}{100}
\,.
$$
\end{proposition}
\begin{proof}
[Proof of Proposition \ref{prop-erwin220705}]
We start with the case when $D$ is \textquotedblleft not
near\textquotedblright\ the boundary, meaning that $D\subset V_{ L/2}$. We
write $D=V_{5\rho\left( x_{0}\right) }\left( x_{0}\right) =V_{5\gamma
s\left( L\right) }\left( x_{0}\right) .$ By Lemma \ref{Cor_Green} c), we
can find a constant $K$ (not depending on $L,x_{0}$), such that for any point
$x\notin\widetilde{D}\overset{\mathrm{def}}{=}V_{5K\gamma s\left( L\right)
}\left( x_{0}\right) ,$ and all $L$ large, one has $\hat{g}\left(
x,D\right) \leq1/10.$ We modify now the transition probabilities $\hat{\Pi
},\hat{\pi}$ slightly, when starting in $x\in D,$ by defining%
\begin{equation}
\widetilde{\Pi}\left( x,\cdot\right) \overset{\mathrm{def}}{=}\left\{
\begin{array}
[c]{cc}%
\operatorname*{ex}\nolimits_{\widetilde{D}}\left( x,\cdot;\hat{\Pi}\right) &
\mathrm{\ \mathrm{for\ }}x\in D\\
\hat{\Pi}\left( x,\cdot\right) & \mathrm{\ \mathrm{for\ }}x\notin D
\end{array}
\right. , \label{Def_Enlargement}%
\end{equation}
and similarly we define $\widetilde{\pi}.$ (Remark that this somewhat destroys
the symmetry when $x\neq x_{0},$ but this causes no problem below.) Clearly,
these transition probabilities have the same exit distribution from $V_{L}$ as
the one used before. If we write $\widetilde{g}$ for the Green's function on
$V_{L}$ of $\widetilde{\pi},$ we have $\widetilde{g}\left( x,y\right)
=\hat{g}\left( x,y\right) $ for $y\notin\widetilde{D},$ and all $x,$ whereas
$\widetilde{g}\left( x,y\right) \leq\hat{g}\left( x,y\right) $ for
$y\in\widetilde{D}.$ In particular, we have%
\begin{equation}
\sup_{x\not \in \widetilde{D}}\widetilde{g}\left( x,D\right) \leq1/10.
\label{eq-260805a}%
\end{equation}
Writing down the perturbation expansion (\ref{eq-120306d}) using the kernels
$\tilde \Pi$ and $\tilde \pi$, we have%
\[
\left( \left[ \Pi-\pi\right] \hat{\pi}_{{\Psi}}\right)
=\sum_{m=1}^{\infty}\sum_{k_{1},\ldots,k_{m}=0}^{\infty}\left( \widetilde
{g}\Delta^{k_{1}}\Delta\hat{\pi}\right) \cdot\ldots\cdot\left( \widetilde
{g}\Delta^{k_{m-1}}\Delta\hat{\pi}\right)
\left( \widetilde{g}\Delta^{k_{m}%
}\Delta\phi\right) ,
\]
where $\Delta$ now uses the modified transitions, that is $\Delta
(x,y)=\tilde{\Pi}(x,y)-\tilde{\pi}(x,y)$, but remark that for $x\notin D,$
$\Delta\left( x,\cdot\right) $ is the same as before, and
that always $\Delta \tilde \pi=\Delta \hat \pi$. Also, $\phi$ is
modified accordingly.
We first estimate the part with $m=1$. In anticipation of what follows, we
consider an arbitrary starting point $x\in V_{L}$. Put $k=k_{1}+1.$ The part
of the sum%
\[
\sum_{y}\sum_{x_{1},\ldots,x_{k}}\widetilde{g}\left( x,x_{1}\right)
\Delta\left( x_{1},x_{2}\right) \cdot\ldots\cdot\Delta\left( x_{k}%
,y\right) \phi\left( y,\cdot\right)
\]
where all $x_{j}\notin D,$ is estimated in Section \ref{SubSect_GoodLinear},
and the probability that it exceeds $\left( \log L\right) ^{-9}/3$ is
bounded by $\exp\left[ -\left( \log L\right) ^{2}\right] /100.$ If an
$x_{j}\in D,$ then the sum over $x_{j+1}$ extends only to points outside
$\widetilde{D},$ and therefore, the sum over $x_{j+1},x_{j+2},\ldots,x_{j+K}$
is running only over points outside $D.$ Therefore%
\begin{equation}
\label{eq-011005}\sup_{x_{j}\in D}\sum_{x_{j+1},\ldots x_{j+K}}\left\vert
\Delta\left( x_{j},x_{j+1}\right) \cdot\ldots\cdot\Delta\left(
x_{j+K},x_{j+K+1}\right) \right\vert \leq2\delta^{K}.
\end{equation}
Further, let $j$ denote the smallest index such that $x_{j}\in D$. Let
\[
\mathcal{X}_{j}:=\left\{ x_{1}:\Delta(x_{1},x_{2})\cdots\Delta(x_{j-1}%
,x_{j})>0\ \mbox{\rm for some }x_{2},\ldots,x_{j-1}\notin D,\ x_{j}\in D\right\}\,.
\]
Then $\max_{x_{1}\in\mathcal{X}_{j}}d(x_{1},D)\leq5j\gamma s(L)$. For $j<(\log
L)^{2}$ it follows that $\mathcal{X}_{j}\subset V_{L-s(L)}$ and therefore, by
(\ref{Est_BoundaryReach1}),
$\max_{x\in V_{L}}\tilde{g}(x,\mathcal{X}_{j})\leq C j^{d}$. Thus,
\begin{equation}
\label{eq-011005b}\left\vert \sum\nolimits_{x_{1},\ldots,x_{j}}\widetilde
{g}\left( x,x_{1}\right) \Delta\left( x_{1},x_{2}\right) \cdots
\Delta\left( x_{j-1},x_{j}\right) \right\vert \leq C\delta^{j-1}j^{d}\,.
\end{equation}
On the other hand, for $j\geq(\log L)^{2}$ one has
(recalling that $x_i\not\in D$ for $i<j$, and applying
part d) of Lemma \ref{Cor_Green} together with Lemma \ref{lem-090805}),
\[
|\sum_{x_{1},\ldots,x_{j}}\widetilde{g}\left( x,x_{1}\right) \Delta\left(
x_{1},x_{2}\right) \cdots\Delta\left( x_{j-1},x_{j}\right) |\leq
C(1/8)^{j}(\log L)^{6}\,.
\]
Therefore, using (\ref{eq-011005}),
\[
\sum_{j=1}^{\infty}\left\vert \sum_{x_{1},\ldots,x_{j-1}\not \in D,x_{j}\in
D}\widetilde{g}\left( x,x_{1}\right) \Delta\left( x_{1},x_{2}\right)
\cdots\Delta\left( x_{j-1},x_{j}\right) \right\vert \leq C\,.
\]
If $x_{k}\notin D,$ then on the event $B_L\subset D$, using
part a) of Proposition \ref{Prop_LipshitzPhi},
it holds that $\left\Vert \sum_{y}\Delta\left( x_{k},y\right)
\phi\left( y,\cdot\right) \right\Vert _{1}
\leq C\left( \log L\right)
^{-12}.$ On the other hand, if $x_{k}\in D,$ then%
\begin{equation}
\left\Vert \sum\nolimits_{y}\Delta\left( x_{k},y\right) \phi\left(
y,\cdot\right) \right\Vert _{1}\leq C\gamma K\left( \log L\right)
^{-12+2.25i}. \label{ExitFromBad}%
\end{equation}
Combining all the above, we conclude that for some constant $c_{2}$ it holds
that
\[
|\sum_{y,z}\sum_{x_{1},\ldots,x_{k}}\widetilde{g}\left( x,x_{1}\right)
\Delta\left( x_{1},x_{2}\right) \cdot\ldots\cdot\Delta\left( x_{k}%
,y\right) \phi\left( y,z\right) |\leq c_{2}\gamma K(\log L)^{-12+
2.25i}\,.
\]
It follows that%
\begin{equation}
\label{eq-011005d}\left\Vert \sum\nolimits_{x_{1},\ldots,x_{k}}^{\prime
}\widetilde{g} \left( 0,x_{1} \right) \Delta\left( x_{1},x_{2}\right)
\cdot\ldots\cdot\left( \Delta\phi\right) \left( x_{k},\cdot\right)
\right\Vert _{1}\leq\left( \log L\right) ^{-11.5+2.25i},
\end{equation}
where $\sum^{\prime}$ denotes summation where at least one $x_{j}$ is in $D.$
(We note that for $i=1,2,3$, one does not
need to use the $K$-enlargement and
modification of the transition probabilities, as a factor
$\delta$ is gained from each factor $\|\Delta\|_{1}$.)
The case $m\geq2$ is handled with an evident modification of the above
procedure, using the estimate (\ref{eq-260805a}). Indeed, let $D^{\prime
}=\{z\in V_{L}:d(z,\tilde D)\leq2\gamma s(L)\}$. A repeat of the previous
argument shows that
\[
\sup_{x} \sum_{k=4}^{\infty}\sum_{x_{k}}|\sum_{\overset{ x_{1},\ldots
,x_{k-1}:}{ \exists j\leq k, x_{j}\in D^{\prime}}}\widetilde{g} \left(
x,x_{1} \right) \Delta\left( x_{1},x_{2}\right) \cdot\ldots\cdot
\Delta\left( x_{k-2},x_{k-1}\right) \hat\pi\left( x_{k-1},x_{k}\right)
|\leq C\delta
\]
while
\[
\sup_{x} \sum_{x_{3}}|\sum_{\overset{x_{1},x_{2}:}{ \exists j\leq3, x_{j}\in
D^{\prime}}}\widetilde{g} \left( x,x_{1} \right) \Delta\left( x_{1}%
,x_{2}\right) \hat\pi\left( x_{2},x_{3}\right) | \leq\left\{
\begin{array}
[c]{ll}%
\frac{2}{10}\,, & x\not \in D^{\prime}\\
C\,, & x\in D^{\prime}\,,
\end{array}
\right.
\]
and, by the computation in Section \ref{SubSect_Nonlinear}, c.f.
(\ref{Est_15/16}),
\[
\sup_{x} \sum_{k=3}^{\infty}\sum_{x_{k}\not \in D^{\prime}}|\sum
_{\overset{x_{1},\ldots,x_{k-1}:}{ x_{j}\not \in D^{\prime}}}\widetilde{g}
\left( x,x_{1} \right) \Delta\left( x_{1},x_{2}\right) \cdot\ldots
\cdot\Delta\left( x_{k-2},x_{k-1}\right) \hat\pi\left( x_{k-1}%
,x_{k}\right) | \leq\frac{15}{16}\,.
\]
Hence, we conclude that always,
\begin{equation}
\sup_{x}\sum_{k\geq0}\left\Vert \left( \widetilde{g}\Delta^{k}\Delta\hat{\pi
}\right) \left( x,\cdot\right) \right\Vert _{1}\leq C, \label{Est_15/16C}%
\end{equation}
and for all $\delta$ small,
\begin{equation}
\sup_{x}\sum_{k_{1},k_{2}\geq0} \left\Vert \left( \widetilde{g}\Delta^{k_{1}}%
\Delta\hat{\pi}\right) \left( \widetilde{g}\Delta^{k_{2}}\Delta\hat{\pi}\right)
\left( x,\cdot\right) \right\Vert _{1}\leq\frac{16}{17}\,.
\label{Est_15/162}%
\end{equation}
Together with the computation for $m=1$, c.f. (\ref{eq-011005d}) when
$D^{\prime}$ is visited, and Proposition \ref{prop-erwin160605} when it is
not, this completes the proof of Proposition \ref{prop-erwin220705} in case
$D\subset V_{L/2}$.
We next turn to the case
$D\cap {\operatorname*{Shell}}_{L}(L/2)\neq \emptyset$.
Recall the Green function $\tilde G_{L}$ of the goodified environment,
introduced above Lemma \ref{lem-220705a}. Let $\Pi^{g}_L$ denote the
exit distribution $\Pi_L$
from $V_L$ with the environment replaced by the goodified
environment. Let $\Delta^{g}=1_{D}(\hat{\Pi}-\operatorname*{gd}(\hat{\Pi}))$.
The perturbation expansion (\ref{Pert1}) then gives
\[
[\Pi_L-\Pi^{g}_L](z)=\sum\tilde G_{L}(0,y)\Delta^{g}(y,y^{\prime
})\Pi_L(y^{\prime},z)\,,
\]
and thus, using part a) of Lemma \ref{lem-220705a} in the second inequality,
\begin{equation}
\label{eq-220705c}\|\Pi_{{L}}-\Pi_{{L}}^{g}\|_1 \leq2 \tilde G_{L}(0,D)
\leq C
\frac{s(L)^{d-2}}{L^{d-2}} \leq C (\log L)^{3(2-d)}
\,.
\end{equation}
This completes the proof in case $i=4$ (and also $i=1,2,3$ if $d\geq5$, although
we do not use this fact).
Consider next the case $i=1,2,3$ (and $d=3,4$).
Rewrite the perturbation expansion as
\begin{equation}
\lbrack\Pi_L-\Pi^{g}_L](z)=\sum_{k\geq1}\sum_{y}\tilde{G}%
_{L}\left( \Delta^{g}\right) ^{k}(0,y)\left( \hat{\Pi}
\tilde{G}_{L}\Delta^{g}\Pi^{g}_L\right) (y,z)\,.
\end{equation}
In particular, using Lemma \ref{lem-090805} and part b) of Lemma
\ref{lem-220705a},
\begin{align}
\Vert\Pi_{L}-\Pi_{L}^{g}\Vert_1 & \leq C\tilde{G}_{L}(0,D)\sum_{k\geq
1}(1/8)^{k}(\log L)^{-9+2.25i} \sup_{y^{\prime}\in V_{L}}\tilde{G}%
_{L}(y^{\prime},D)\nonumber\label{eq-110805b}\\
& \leq C (\log L)^{3(2-d)}(\log L)^{-9+ 2.25i}\leq(\log L)^{-11.5+2.25 i}\,.
\end{align}
\end{proof}
\section{The non-smoothed exit estimate\label{Sect_NonSmooth}}
The aim of this section is to prove the following.
\begin{proposition}
\label{Prop_Nonsmooth} There exists $0<\delta_{0}\leq1/2$ such that for
$\delta\leq\delta_{0},$ there exist $L_{0}\left( \delta\right) $ and
$\varepsilon_{0}\left( \delta\right) $ such that if $L_{1}\geq L_{0}$ and
$\varepsilon\leq\varepsilon_{0}$, then $\operatorname*{Cond}\left(
L_{1},\delta\right) $ and $L\leq L_{1}\left( \log L_{1}\right) ^{2}$ imply%
\[
\mathbb{P}\left( \left\Vert \Pi_{L}\left( 0,\cdot\right) -\pi_{L}\left(
0,\cdot\right) \right\Vert _{1}\geq\delta\right) \leq\frac{1}{10}\exp\left[
-\left( \log L\right) ^{2}\right]\,.
\]
\end{proposition}
Before starting the proof, we provide a sketch of the main idea.
As with the smoothed estimates, the starting point is the perturbation
expansion (\ref{eq-120306d}). In contrast to the proof in Section
\ref{Sect_Smooth}, however, no smoothing is provided by the kernel
$\hat \pi_\Psi$, and hence the lack of control of the exit
measure in the last step of the coarse-graining scheme
$\mathcal{S}_1$ does not allow one
to propagate the estimate on $D_{L,0}(0)$.
This is why we need to work with the scheme $\mathcal{S}_2$ introduced
in Definition \ref{Def_CoarseGrainingScheme}.
Using $\mathcal{S}_{2}$ means
that we refine the coarse graining
scale up to the boundary, and when carrying out the perturbation
expansion, less smoothing is gained from the coarse graining
for steps near the boundary.
The drawback of $\mathcal{S}_2$
is that the presence of many bad regions
close to the boundary is unavoidable. We will however show that these regions
are
rather sparse, so that with high enough probability, the RWRE
avoids the bad regions. As in Section \ref{Subsect_BadPoints},
this will be achieved by an appropriate
estimate on the Green function in a ``goodified'' environment.
\noindent
\begin{proof}
[Proof of Proposition \ref{Prop_Nonsmooth}]
We use the coarse graining scheme $\mathcal{S}_{2}$ from Definition
\ref{Def_CoarseGrainingScheme}, but we stick to the notations before, so
$\hat{\pi}=\hat{\pi}_{\mathcal{S}_{2},L}$, etc. Using $\mathcal{S}_{2}$ means
that we refine the coarsening scale up to the boundary. In particular,
$h_{L}\left( x\right) =\gamma d_{L}\left( x\right) $
for all $x$ with $d_{L}\left( x\right) \leq s\left( L\right) /2,$ and
$\hat{\pi}\left( x,\cdot\right) $ is obtained by averaging exit
distributions from balls with radii between $\gamma d_{L}\left( x\right) $
and $2\gamma d_{L}\left( x\right) $ ($\gamma$ from (\ref{Def_Gamma})). If
$d_{L}\left( x\right) <1/2\gamma,$ then there is no coarsening at all, and
$\hat{\pi}\left( x,\cdot\right) =p^{\mathrm{RW}}\left( x,\cdot\right) .$
To handle the presence of many bad regions near the boundary, we
introduce the
layers%
\begin{equation}
\label{eq-110206b}
\Lambda_{j}\overset{\mathrm{def}}{=}\operatorname*{Shell}\nolimits_{L}\left(
2^{j-1},2^{j}\right) ,
\end{equation}
for $j=1,\ldots,J_{1}\left( L\right) \overset{\mathrm{def}}{=}\left[
\frac{\log r\left( L\right) }{\log2}\right] +1,$ so that
\begin{equation}
\operatorname*{Shell}\nolimits_{L}\left( r\left( L\right) \right)
\subset\bigcup\nolimits_{j\leq J_1(L)}\Lambda_{j}\subset\operatorname*{Shell}%
\nolimits_{L}\left( 2r\left( L\right) \right) . \label{Layers_in_Shells}%
\end{equation}
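The inclusions (\ref{Layers_in_Shells}) simply reflect the choice of
$J_{1}(L)$: since $J_{1}\left( L\right) =\left[ \log r\left( L\right)
/\log2\right] +1$, with $\left[ \cdot\right] $ denoting the integer part,
\[
r\left( L\right) \leq2^{J_{1}\left( L\right) }\leq2\,r\left( L\right) \,.
\]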
We subdivide each $\Lambda_{j}$ into
subsets $D_{1}^{\left( j\right) },D_{2}^{\left(
j\right) },\ldots,D_{N_{j}}^{\left( j\right) }$ of diameter
$\leq\sqrt{d}2^{j},$ where%
\begin{equation}
C^{-1}
\left( L2^{-j}\right) ^{d-1}
\leq
N_{j}\leq C\left( L2^{-j}\right) ^{d-1}. \label{Est_NoLayerBoxes}%
\end{equation}
The collection
of these subsets is denoted by $\mathcal{L}_{j}$; it is
split into disjoint $\mathcal{L}_{j}^{\left( 1\right) },\ldots
,\mathcal{L}_{j}^{\left( R\right) },$ such that for any $m$ one has
\begin{equation}
d\left( D,D^{\prime}\right) >5\gamma2^{j},\ \forall D,D^{\prime}%
\in\mathcal{L}_{j}^{\left( m\right) }, \label{MinimalDistance}%
\end{equation}%
\begin{equation}
N_{j}^{\left( m\right) }\overset{\mathrm{def}}{=}\left\vert \mathcal{L}%
_{j}^{\left( m\right) }\right\vert \geq N_{j}/2R. \label{Number_of_Cells}%
\end{equation}
We can do that in such a way that $R\in\mathbb{N}$ depends only on the
dimension $d$ (recall that $\gamma$ is fixed by
(\ref{Def_Gamma}) once the dimension is fixed).
\begin{figure}
\begin{picture}(10,200)(-80,0)\input{bad.pictex}
\end{picture}
\caption{The layers $\Lambda_j$. Bad regions (excluding $B_1$, when
$B_1\neq\emptyset$) are shaded.}
\end{figure}
For $x\in \cup_{j=1}^{J_1(L)}\Lambda_j$,
we modify the definition of
$\operatorname*{Good}_{L}$ in order to adapt it to the smoothing
scheme $\mathcal{S}_{2}$.
Thus, we set
$\hat B_{L}^{\left( 4\right) }$ to consist of the
union (over $j=1,\ldots,J_1(L)$) of
points $x\in \Lambda_j$
which
have the property that
$D_{\gamma d_L(x),0 }\left( x\right)
\geq\delta$. We also write
$\widehat{
\operatorname*{Good}_{L}}=V_L\setminus \hat B_{L}^{\left( 4\right) }$.
If $B\in\mathcal{L}_{j},$ we write $\operatorname*{Bad}\left( B\right) $ for
the event $\left\{ B\not\subset\widehat
{\operatorname*{Good}_{L}}\right\} .$ Remark
that%
\begin{equation}
\label{eq-110206c}
\mathbb{P}\left( \operatorname*{Bad}\left( B\right) \right) \leq
C2^{\left( d+1\right) j}\exp\left[ -\frac{10}{13}
\log^{2}\left( \gamma2^{j-1}\right)
\right]
\leq\exp\left[ -j^{5/3}\right] \overset{\mathrm{def}}{=}p_{j}\,,
\end{equation}
for $j\geq J_{0}$, with $J_{0}$ appropriately chosen (depending on $d$).
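For the second inequality in (\ref{eq-110206c}), note that $\log^{2}\left(
\gamma2^{j-1}\right) =\left( (j-1)\log2+\log\gamma\right) ^{2}$ grows
quadratically in $j$, so that
\[
\frac{10}{13}\log^{2}\left( \gamma2^{j-1}\right) \geq j^{5/3}+\left(
d+1\right) j\log2+\log C
\]
once $j\geq J_{0}$ with $J_{0}=J_{0}(d)$ large enough (recall that $\gamma$
depends only on $d$).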
We set%
\[
X_{j}^{\left( m\right) }\overset{\mathrm{def}}{=}\sum_{D\in\mathcal{L}%
_{j}^{\left( m\right) }}1_{\operatorname*{Bad}\left( D\right) }%
,\ X_{j}\overset{\mathrm{def}}{=}\sum_{m=1}^{R}X_{j}^{\left( m\right) }.
\]
Due to (\ref{MinimalDistance}), the events $\operatorname*{Bad}\left(
D\right) ,\ D\in\mathcal{L}_{j}^{\left( m\right) },$ are independent.
Remark that $p_{j}<j^{-3/2}\leq1/2$ for all $j\geq2.$ From a standard
coin-tossing estimate via Chebyshev's inequality, we get
\[
\mathbb{P}\left( X_{j}^{\left( m\right) }\geq j^{-3/2}N_{j}^{\left(
m\right) }\right) \leq\exp\left[ -N_{j}^{\left( m\right) }I\left(
j^{-3/2}\mid p_{j}\right) \right]
\]
with $I\left( x\mid p\right) \overset{\mathrm{def}}{=}x\log\left(
x/p\right) +\left( 1-x\right) \log\left( \left( 1-x\right) /\left(
1-p\right) \right) $, and
\[
I\left( j^{-3/2}\mid p_{j}\right) \geq-\frac{3}{2}j^{-3/2}\log
j+j^{-3/2}j^{5/3}-\log2\geq2Rj^{1/7}%
\]
for $j\geq J_0$, if $J_{0}$ is large enough.
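To check the first inequality, keep only the dominant contributions in the
definition of $I$: with $x=j^{-3/2}$ and $p_{j}=\exp[-j^{5/3}]$,
\[
x\log\frac{x}{p_{j}}=j^{-3/2}\left( -\frac{3}{2}\log j+j^{5/3}\right)
=-\frac{3}{2}j^{-3/2}\log j+j^{1/6}\,,
\]
while $\left( 1-x\right) \log\left( \left( 1-x\right) /\left( 1-p_{j}\right)
\right) \geq\log\left( 1-x\right) \geq-\log2$ for $x\leq1/2$; the second
inequality then holds for $j$ large, because $j^{1/6}$ dominates both
$j^{1/7}$ and the logarithmic term. Therefore%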
\begin{align*}
\mathbb{P}\left( X_{j}\geq j^{-3/2}N_{j}\right) & \leq R\max_{1\leq m\leq
R}\mathbb{P}\left( X_{j}^{\left( m\right) }\geq j^{-3/2}N_{j}^{\left(
m\right) }\right) \\
& \leq R\exp\left[ -\left( L2^{-j}\right) ^{d-1}j^{1/7}\right] \leq
R\exp\left[ -\frac{1}{C}\left( \log L\right) ^{20}j^{1/7}\right]
\end{align*}
for $J_{0} \leq j\leq J_1\left( L\right) ,$ $L$ large
enough (implied by $L_{0}$ large enough). Using this, we get for $L\geq L_0$,%
\[
\sum_{J_{0} \leq j\leq J_1\left( L\right) }%
\mathbb{P}\left( X_{j}\geq j^{-3/2}N_{j}\right) \leq\frac{1}{20}\exp\left[
-\left( \log L\right) ^{2}\right]\,,
\]
increasing further $J_0$ and $L_0$ if necessary.
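The last summation is crude but sufficient: there are at most $J_{1}\left(
L\right) \leq C\log L$ values of $j$, and, since $j^{1/7}\geq1$, each summand
is bounded by
\[
R\exp\left[ -\frac{1}{C}\left( \log L\right) ^{20}\right] \leq\frac{1}
{20\,C\log L}\exp\left[ -\left( \log L\right) ^{2}\right]
\]
once $L$ is large enough.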
Setting%
\[
\operatorname*{ManyBad}\nolimits_{L}\overset{\mathrm{def}}{=}\bigcup
\nolimits_{J_{0} \leq j\leq J_1\left( L\right) }\left\{
X_{j}\geq j^{-3/2}N_{j}\right\} \cup\operatorname*{TwoBad}\nolimits_{L},
\]
we get%
\begin{align}
\mathbb{P}\left( \operatorname*{ManyBad}\nolimits_{L}\right) & \leq
\frac{1}{20}\exp\left[ -\left( \log L\right) ^{2}\right] +\exp\left[
-1.2\left( \log L\right) ^{2}\right] \label{BadEventNonSmooth}\\
& \leq\frac{1}{10}\exp\left[ -\left( \log L\right) ^{2}\right] ,\nonumber
\end{align}
for all $L$ large enough (note that the choice
of $J_0$ and $L_0$ made above depended on the dimension only).
We now choose $\varepsilon_{0}
>0$ small enough such that for $\varepsilon\leq\varepsilon
_{0},$ one has $X_{j}=0,$ deterministically, for $j<J_{0}
.$
We will now show
that if $\omega\notin\operatorname*{ManyBad}\nolimits_{L}$, then
$\left\Vert \Pi_{L}-\pi_{L}\right\Vert _{1}\leq\delta$, which
together with
(\ref{BadEventNonSmooth}),
will
prove
Proposition \ref{Prop_Nonsmooth}.
Toward this end,
we distinguish two (disjoint) bad regions $B_{1},B_{2}\subset V_{L}.$
We set $\widetilde{B}_{L}\overset{\mathrm{def}}{=}B_{L}\backslash
\operatorname*{Sh}\nolimits_{L}$, where
$B_{L}$ is as in (\ref{Def_BL}). Set%
\begin{equation}
\label{eq-110206hh}
B_{2}^{\prime}\overset{\mathrm{def}}{=}\bigcup\left\{ D_{i}^{\left(
j\right) }:\omega\in\operatorname*{Bad}\left( D_{i}^{\left( j\right)
}\right) ,\ j=1,\ldots,J_1(L);\ i\leq N_{j}\right\} .
\end{equation}
On the complement of $\operatorname*{TwoBad}\nolimits_{L}$ there exists
$x_{0}$ with $d_L(x_{0}) >r\left( L\right) ,$ such that
$\widetilde{B}_{L}\subset V_{5\rho\left( x_{0}\right) }\left( x_{0}\right)
$. (See (\ref{BL_included_in_ball}). There is some ambiguity in choosing
$x_{0},$ but this is of no importance. In particular,
$x_0$ is arbitrary if $\widetilde{B}_{L}=\emptyset$.)
If $\left\vert x_{0}\right\vert \leq
L/2,$ we define $B_{1}\overset{\mathrm{def}}{=}V_{5\rho\left( x_{0}\right)
}\left( x_{0}\right) =V_{5\gamma s\left( L\right) }\left( x_{0}\right)
,$ and $B_{2}\overset{\mathrm{def}}{=}B_{2}^{\prime}.$ If $\left\vert
x_{0}\right\vert >L/2,$ we put $B_{1}\overset{\mathrm{def}}{=}\emptyset,$ and
$B_{2}\overset{\mathrm{def}}{=}B_{2}^{\prime}\cup V_{5\rho\left(
x_{0}\right) }\left( x_{0}\right) .$ Of course, if $\widetilde{B}%
_{L}=\emptyset,$ then $B_{1}\overset{\mathrm{def}}{=}\emptyset,$ and
$B_{2}\overset{\mathrm{def}}{=}B_{2}^{\prime}$. Remark that $B_{1}$ and
$B_{2}$ are disjoint. We put $B\overset{\mathrm{def}}{=}B_{1}\cup B_{2},$ and
$G\overset{\mathrm{def}}{=}V_{L}\backslash B.$
In case $B_{1}=V_{5\gamma s\left( L\right) }\left( x_{0}\right) ,$
$\left\vert x_{0}\right\vert \leq L/2,$ we use the same (slight) modification
of $\hat{\Pi}\left( y,\cdot\right) ,\ \hat{\pi}\left( y,\cdot\right) $ for
$y\in V_{5\gamma s\left( L\right) }\left( x_{0}\right) $ as used in
Section \ref{Subsect_BadPoints}, i.e. we replace $\hat{\pi},\hat{\Pi}$ by
$\widetilde{\pi},\widetilde{\Pi}$ as defined in (\ref{Def_Enlargement}), but
we retain the \symbol{94}-notation for convenience.
We use a slight modification of the perturbation expansion
(\ref{Pert1}). Again with $\Delta
\overset{\mathrm{def}}{=}\hat{\Pi}-\hat{\pi},$ we have%
\[
\Pi_{L}=\pi_{L}+\hat{g}1_{B}\Delta\Pi_{L}+\hat{g}1_{G}\Delta\Pi_{L}.
\]
Set $\gamma_{k}\overset{\mathrm{def}}{=}\hat{g}\left( 1_{G}\Delta\right)
^{k}.$ Then%
\begin{align*}
\gamma_{k}\Pi_{L} & =\hat{g}\left( 1_{G}\Delta\right) ^{k}\Pi_{L}\\
& =\hat{g}\left( 1_{G}\Delta\right) ^{k}\pi_{L}+\hat{g}\left( 1_{G}%
\Delta\right) ^{k}\hat{g}\Delta\Pi_{L}\\
& =\hat{g}\left( 1_{G}\Delta\right) ^{k}\pi_{L}+\hat{g}\left( 1_{G}%
\Delta\right) ^{k}1_{B}\Delta\Pi_{L}+\hat{g}\left( 1_{G}\Delta\right)
^{k}\hat{\pi}\hat{g}\Delta\Pi_{L}+\gamma_{k+1}\Pi_{L}\,.%
\end{align*}
Therefore, iterating, we get%
\begin{align*}
\Pi_{L} & =\pi_{L}+\hat{g}\sum_{k=0}^{\infty}\left( 1_{G}\Delta\right)
^{k}1_{B}\Delta\Pi_{L}+\hat{g}\sum_{k=1}^{\infty}\left( 1_{G}\Delta\right)
^{k}\hat{\pi}\hat{g}\Delta\Pi_{L}+\hat{g}\sum_{k=1}^{\infty}\left(
1_{G}\Delta\right) ^{k}\pi_{L}\\
& =\pi_{L}+\hat{g}\overline{\Gamma}1_{B}\Delta\Pi_{L}+\hat{g}\Gamma\hat{\pi
}\Pi_{L},
\end{align*}
where $\Gamma\overset{\mathrm{def}}{=}\sum_{k=1}^{\infty}\left( 1_{G}%
\Delta\right) ^{k},\ \overline{\Gamma}\overset{\mathrm{def}}{=}I+\Gamma.$
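Let us spell out the last equality, which is implicit in the notation: by the
first line of the expansion, $\hat{g}\Delta\Pi_{L}=\Pi_{L}-\pi_{L}$, and
$\hat{\pi}\pi_{L}=\pi_{L}$ by the strong Markov property, since $\hat{\pi
}\left( x,\cdot\right) $ is, for every $x$, (an average of) exit laws of the
simple random walk from subsets of $V_{L}$. Hence%
\[
\hat{g}\Gamma\hat{\pi}\hat{g}\Delta\Pi_{L}+\hat{g}\Gamma\pi_{L}=\hat{g}%
\Gamma\hat{\pi}\Pi_{L}-\hat{g}\Gamma\hat{\pi}\pi_{L}+\hat{g}\Gamma\pi_{L}%
=\hat{g}\Gamma\hat{\pi}\Pi_{L}\,,
\]
and the $\pi_{L}$-terms cancel.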
With the partition $B=B_{1}\cup B_{2},$ we get,
setting $\Xi
_{1}\overset{\mathrm{def}}{=}\hat{g}\overline{\Gamma}1_{B_{1}}\Delta,$
$\Xi_{2}\overset{\mathrm{def}}
{=}\hat{g}\overline{\Gamma}1_{B_{2}}\Delta$,%
\[
\Pi_{L}=\pi_{L}+\Xi_{1}\Pi_{L}+\Xi_{2}\Pi_{L}+\hat{g}\Gamma\hat{\pi}\Pi_{L},
\]
and by induction on $m\in\mathbb{N},$ replacing successively $\Pi_{L}$ in the
second summand%
\[
\Pi_{L}-\pi_{L}=\left( \sum_{r=1}^{m}\Xi_{1}^{r}\right) \pi_{L}+\left(
\sum_{r=0}^{m}\Xi_{1}^{r}\right) \Xi_{2}\Pi_{L}+\left( \sum_{r=0}^{m}\Xi
_{1}^{r}\right) \hat{g}\Gamma\hat{\pi}\Pi_{L}+\Xi_{1}^{m+1}\Pi_{L}\,,%
\]
i.e., letting $m\rightarrow\infty$,%
\begin{align}
\Pi_{L}-\pi_{L} & =\sum_{r=1}^{\infty}\Xi_{1}^{r}\pi_{L}+\left( \sum
_{r=0}^{\infty}\Xi_{1}^{r}\right) \Xi_{2}\Pi_{L}+\left( \sum_{r=0}^{\infty
}\Xi_{1}^{r}\right) \hat{g}\Gamma\hat{\pi}\Pi_{L}\label{PertExp_stopped}\\
& =:A_{1}+A_{2}+A_{3}\,.\nonumber
\end{align}
For $D\subset V_{L},$ we write%
\[
U_{k}\left( D\right) \overset{\mathrm{def}}{=}\left\{ y\in V_{L}:\exists
x\in D\ \mathrm{with\ }\Delta^{k}\left( y,x\right) >0\right\} .
\]
We now prove that each of the
three parts $A_{1},A_{2},A_{3}$ is bounded by
$\delta/3$ in $\left\Vert \cdot\right\Vert _{1}$-norm.
\noindent\textbf{First summand} $A_{1}:$ This does not involve the bad regions
near the boundary, and we can apply the estimates from Section
\ref{Subsect_BadPoints}. There is nothing to prove if $B_{1}=\emptyset,$ so we
assume $B_{1}=V_{5\gamma s\left( L\right) }\left( x_{0}\right) ,$
$\left\vert x_{0}\right\vert \leq L/2.$
We have%
\begin{equation}
\sup_{x\in V_{L}}\left\vert \hat{g}\left(
1_{G}\Delta\right) ^{k}\left(
x,B_{1}\right) \right\vert \leq
\sup_{x\in V_{L}}
\delta^{k}\hat{g}\left( x,U_{k}\left(
B_{1}\right) \right) \leq C\delta^{k}k^{d}, \label{NonSmooth1}%
\end{equation}
where the second inequality is due to part c) of
Lemma \ref{Cor_Green},
and therefore,%
\begin{equation}
\sum_{k=0}^{\infty}\left\Vert \hat{g}\left( 1_{G}\Delta\right) ^{k}1_{B_{1}%
}\right\Vert _{1}\leq C. \label{NonSmooth2}%
\end{equation}
In the same way, we obtain, with
$K$ from Section \ref{Subsect_BadPoints},
\begin{equation}
\sum_{k=0}^{\infty}\sup_{x\notin V_{5K\gamma s\left( L\right) }}\left\Vert
\hat{g}\left( 1_{G}\Delta\right) ^{k}1_{B_{1}}\left( x,\cdot\right)
\right\Vert _{1}\leq\frac{1}{2}, \label{NonSmooth3}%
\end{equation}
by using (\ref{NonSmooth1}) for $k\geq1,$ and (\ref{eq-260805a}) for $k=0.$
Furthermore,%
\begin{align}
\left\Vert \Xi_{1}\pi_{L}\right\Vert _{1} & \leq\sum_{k=0}^{\infty
}\left\Vert \hat{g}\left( 1_{G}\Delta\right) ^{k}1_{B_{1}}\Delta\pi
_{L}\right\Vert _{1}\leq C\sum_{k=0}^{\infty}\left\Vert \hat{g}\left(
\cdot,U_{k}\left( B_{1}\right) \right) \right\Vert _{\infty}2^{-k}%
\sup_{x\in B_{1}}\left\Vert \Delta\pi_{L}\left( x,\cdot\right) \right\Vert
_{1}\label{NonSmooth4}\\
& \leq C\sum_{k=0}^{\infty}k^{d}2^{-k}\sup_{x\in B_{1}}\left\Vert \Delta
\pi_{L}\left( x,\cdot\right) \right\Vert _{1}\leq C\left( \log L\right)
^{-3}.\nonumber
\end{align}
Using these inequalities, we get $\left\Vert A_{1}\right\Vert _{1}\leq
C\left( \log L\right) ^{-3}\leq C\left( \log L_{0}\right) ^{-3}\leq
\delta/3$ by choosing $L_{0}\left( \delta\right) $ large enough: When
estimating $\left\Vert \Xi_{1}^{r}\pi_{L}\right\Vert _{1}$ for $r\geq2,$ we
use (\ref{NonSmooth2}) for the first factor $\Xi_{1}$, (\ref{NonSmooth4}) for
the last $\Xi_{1}\pi_{L}$, and (\ref{NonSmooth3}) for the middle $\Xi
_{1}^{r-2}.$ The point is that $\left( 1_{B_{1}}\Delta\right) \left(
x,y\right) $ is nonzero only if $y\notin V_{5K\gamma s\left( L\right)
}\left( x_{0}\right) ,$ and so we can use (\ref{NonSmooth3}) for this part.
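Schematically, for $r\geq2$ this chain of estimates reads
\[
\left\Vert \Xi_{1}^{r}\pi_{L}\right\Vert _{1}\leq C\left( \frac{1}{2}\right)
^{r-2}C\left( \log L\right) ^{-3}\,,
\]
and summing the resulting geometric series over $r\geq1$ indeed gives
$\left\Vert A_{1}\right\Vert _{1}\leq C\left( \log L\right) ^{-3}$.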
\noindent\textbf{Second summand }$A_{2}$: We drop here the $\Pi_{L}$-factor,
using the trivial estimate $\left\Vert \Pi_{L}\left( x,\cdot\right)
\right\Vert _{1}\leq2$. If $r=0,$ one has to estimate $\left\Vert \Xi
_{2}\left( 0,\cdot\right) \right\Vert _{1}$ where $B_{2}$ consists of the
bad regions in the layers $\mathcal{L}_{j}$, and possibly the one bad ball
from $\widetilde{B}_{L}$, which lies outside $V_{L/3}.$ In case $r\geq1,$ when
$B_{1}\neq\emptyset,$ we have $B_{2}=B_{2}^{\prime}$, which is at distance
$\geq L/3$ from $B_{1}.$ Therefore, in case $r=0,$ we have to estimate%
\begin{equation}
\left\Vert \hat{g}\left( 1_{G}\Delta\right) ^{k}1_{B_{2}}\left(
0,\cdot\right) \right\Vert _{1} \label{NonSmooth5}%
\end{equation}
(the last $\Delta$ is of no help, and we drop it), and in case $r\geq1,$ using
(\ref{NonSmooth2}) and (\ref{NonSmooth3})%
\[
C2^{-r}\sup_{\left\vert x\right\vert \leq2L/3}\left\Vert \hat{g}\left(
1_{G}\Delta\right) ^{k}1_{B_{2}}\left( x,\cdot\right) \right\Vert _{1},
\]
but in this case, we have $B_{2}\subset\operatorname*{Shell}\nolimits_{L}%
\left( 2r\left( L\right) \right) .$ The estimate of the second case is
entirely similar to the estimate of (\ref{NonSmooth5}), and we therefore
provide the details only of the proof of the latter.
We split the parts coming from the different bad regions. For a bad
region $D_{i}^{\left( j\right) }$ in layer $\mathcal{L}_{j},$ we have%
\[
\left\Vert \hat{g}\left( 1_{G}\Delta\right) ^{k}1_{D_{i}^{\left( j\right)
}}\left( 0,\cdot\right) \right\Vert _{1}\leq C2^{-k}\hat{g}\left(
0,U_{k}\left( D_{i}^{\left( j\right) }\right) \right) .
\]
It suffices to estimate $\hat{g}\left( 0,U_{k}\left( D_{i}^{\left(
j\right) }\right) \right) $ very crudely. Points in $U_{k}\left(
D_{i}^{\left( j\right) }\right) $ are at distance
at most $r_{j,k}=2^{j}\left(
1-2\gamma\right) ^{-k}$ from $D_{i}^{\left( j\right) }.$ We first
consider only those $k$ for which
$V_L\setminus \operatorname*{Shell}\nolimits_{L}\left(
s\left( L\right) \right) $ is not touched,
which is the case if
$k\leq20\log\log L$ ($L$ large enough).
Then, for some $y$ with $0.5 r_{j,k}\leq d_L(y)\leq 2.5 r_{j,k}$,
$U_{k}\left(
D_{i}^{\left( j\right) }\right)\subset B(y,r_{j,k})=:B_{j,k}$.
Applying Lemma
\ref{Le_MainExit}, we see that
the probability that
simple random walk started at the origin hits $B_{j,k}$ before
$\tau_L$ is bounded above by
$$C2^{(d-1)j}(1-2\gamma)^{-(d-1)k} L^{-d+1}\leq
C2^{(d-1)j} \left(\frac{3}{2}\right)^k L^{-d+1}\,,$$
where in the last inequality, we have used the definition
of $\gamma$, c.f. (\ref{Def_Gamma}).
Combined with Lemma \ref{Cor_Green} b), we conclude that
for any $r$ such that $\Lambda_r\cap
U_{k}\left(
D_{i}^{\left( j\right) }\right)\neq \emptyset$,
it holds that
\[
\hat{g}\left( 0,U_{k}\left( D_{i}^{\left( j\right) }\right)
\cap \Lambda_r \right) \leq
C 2^{\left( d-1\right) j}\left( \frac{3}{2}\right)
^{k}L^{-d+1}.
\]
The number of layers $r$ touched is
bounded by $2(1+k)$,
and thus we conclude that
\[
\hat{g}\left( 0,U_{k}\left( D_{i}^{\left( j\right) }\right) \right) \leq
C\left( 1+k\right) 2^{\left( d-1\right) j}\left( \frac{3}{2}\right)
^{k}L^{-d+1}.
\]
Therefore, using $\omega\notin\bigcup\nolimits_{J_{0}
\leq j\leq J_1\left( L\right) }\left\{ X_{j}\geq j^{-3/2}N_{j}\right\} ,$ we
have the estimates%
\begin{align*}
\sum_{k\leq20\log\log L}\left\Vert \hat{g}\left( 1_{G}\Delta\right)
^{k}1_{B_{2}^{\prime}\cap\Lambda_{j}}\left( 0,\cdot\right) \right\Vert _{1}
& \leq Cj^{-3/2},\\
\sum_{k\leq20\log\log L}\left\Vert \hat{g}\left( 1_{G}\Delta\right)
^{k}1_{B_{2}^{\prime}}\left( 0,\cdot\right) \right\Vert _{1} & \leq
CJ_{0}^{-1/2}.
\end{align*}
For the sum over $k>20\log\log L,$ we simply estimate $\hat{g}\left(
0,U_{k}\left( B_{2}^{\prime}\right) \right) \leq\hat{g}\left(
0,V_{L}\right) \leq C\left( \log L\right) ^{6}$ and we therefore get%
\begin{align}
\sum_{k}\left\Vert \hat{g}\left( 1_{G}\Delta\right) ^{k}1_{B_{2}%
\cap\operatorname*{Shell}\nolimits_{L}\left( r\left( L\right) \right)
}\left( 0,\cdot\right) \right\Vert _{1} & \leq C\left( J_{0}%
^{-1/2}+\left( \log L\right) ^{6}2^{-20\log\log L}\right) \label{NonSmooth6}%
\\
& \leq C\left( J_{0}^{-1/2}+\left( \log L\right) ^{-7}\right) \leq
\delta/6\nonumber
\end{align}
for all $L\geq L_0$,
by choosing $J_{0}=J_0(\delta) $ and $L_{0} =L_0(\delta) $
large enough (again, depending only on $d$ and $\delta$).
It remains to add the part of $B_{2}$ outside $B_{2}^{\prime}.$ This is
(contained in) the ball $V_{5\rho\left( x_{0}\right) }\left(
x_{0}\right) $ with $\left\vert x_{0}\right\vert >L/2.$ We then have%
\[
\hat{g}\left( 0,U_{k}\left( V_{5\rho\left( x_{0}\right) }\left(
x_{0}\right) \right) \right) \leq\hat{g}\left( 0,U_{k}\left( V_{5\gamma
s\left( L\right) }\left( x_{0}\right) \right) \right) \leq\hat{g}\left(
0,V_{\left( 5+2k\right) \gamma s\left( L\right) }\left( x_{0}\right)
\right) .
\]
As $\left\vert x_{0}\right\vert \geq L/2,$ we have $V_{\left( 5+2k\right)
\gamma s\left( L\right) }\left( x_{0}\right) \cap V_{L/3}=\emptyset$
provided $k\leq\left( \log L\right) ^{3}/C,$ and $V_{\left( 5+2k\right)
\gamma s\left( L\right) }\left( x_{0}\right) $ can be covered by $\leq
Ck^{d}$ balls $V_{s\left( L\right) }\left( y\right) ,$ $\left\vert
y\right\vert \geq L/3.$ By Lemma \ref{Le_MainExit}, one has $\hat{g}\left(
0,V_{s\left( L\right) }\left( y\right) \right) \leq C\left( \log
L\right) ^{-3}.$ (This remains true also if $V_{s\left( L\right) }\left(
y\right) $ intersects $\operatorname*{Shell}\nolimits_{L}\left( s\left(
L\right) \right) ,$ as is easily checked). Therefore, for $k\leq\left( \log
L\right) ^{3}/C,$ we have%
\[
\hat{g}\left( 0,U_{k}\left( V_{5\rho\left( x_{0}\right) }\left(
x_{0}\right) \right) \right) \leq Ck^{d}\left( \log L\right) ^{-3},
\]
and therefore,%
\begin{align*}
& \sum_{k}\left\Vert \hat{g}\left( 1_{G}\Delta\right) ^{k}1_{V_{5\rho
\left( x_{0}\right) }\left( x_{0}\right) }\left( 0,\cdot\right)
\right\Vert _{1}\\
\leq & C\sum_{k\leq\left( \log L\right) ^{3}/C}2^{-k}k^{d}\left( \log
L\right) ^{-3}+C\sum_{k>\left( \log L\right) ^{3}/C}2^{-k}\left( \log
L\right) ^{6} \leq\delta/6,
\end{align*}
provided $L_{0}$ is large enough. Combining this with (\ref{NonSmooth6})
proves $\left\Vert A_{2}\right\Vert _{1}\leq\delta/3.$
\noindent\textbf{Third summand} $A_{3}.$ By the same argument as in the
discussion of $A_{2},$ it suffices to consider $r=0,$ and we
drop $\Pi_{L}.$ Then, %
\begin{align}
& \sum_{k\geq1}\left\Vert \sum_{x\notin\operatorname*{Shell}\nolimits_{L}%
\left( r\left( L\right) \right) }\hat{g}\left( 1_{G}\Delta\right)
^{k-1}\left( 0,x\right) \left( 1_{G}\Delta\hat{\pi}\right) \left(
x,\cdot\right) \right\Vert _{1}\nonumber\\
& \leq\sum_{k\geq1}2^{-k+1}\hat{g}\left( 0,V_{L}\right)
\sup_{x\notin\operatorname*{Shell}\nolimits_{L}\left( r\left( L\right)
\right) }\left\Vert 1_{G}\Delta\hat{\pi}\left( x,\cdot\right) \right\Vert
_{1}\label{NonSmooth7}\\
& \leq C\left( \log L\right) ^{-3}\leq\delta/9\nonumber
\end{align}
if $L_{0}$ is large enough.
For $J_{0} \leq j\leq
J_{1}\left( L\right) $,%
\begin{align*}
\left\Vert \sum_{x\in{\Lambda}_{j}}\hat{g}\left( 1_{G}\Delta\right)
^{k-1}\left( 0,x\right) \left( 1_{G}\Delta\hat{\pi}\right) \left(
x,\cdot\right) \right\Vert _{1} & \leq2^{-k+1}\hat{g}\left( 0,U_{k}\left(
\Lambda_{j}\right) \right) \sup_{x\in\Lambda_{j}}\left\Vert 1_{G}\Delta
\hat{\pi}\left( x,\cdot\right) \right\Vert _{1}\\
& \leq Cj^{-9}2^{-k+1}\hat{g}\left( 0,U_{k}\left( \Lambda_{j}\right)
\right) ,
\end{align*}
and it is evident from part b) of
Lemma \ref{Cor_Green}
that $\sum_{k\geq1}2^{-k+1}%
\hat{g}\left( 0,U_{k}\left( \Lambda_{j}\right) \right) \leq C.$ Therefore,%
\begin{equation}
\sum_{k\geq1}\left\Vert \sum_{J_{0} \leq j\leq
J_{1}\left( L\right) }\sum_{x\in{\Lambda}_{j}}\hat{g}\left( 1_{G}%
\Delta\right) ^{k-1}\left( 0,x\right)
\left( 1_{G}\Delta\hat{\pi}\right)
\left( x,\cdot\right) \right\Vert _{1}\leq C\left(
J_{0}
\right)^{-8}\leq\delta/9, \label{NonSmooth8}%
\end{equation}
if $J_{0}$ is chosen large enough
(again, independently of $\varepsilon_0$!).
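The factor $\left( J_{0}\right) ^{-8}$ in (\ref{NonSmooth8}) comes from the
elementary bound
\[
\sum_{j\geq J_{0}}j^{-9}\leq\int_{J_{0}-1}^{\infty}u^{-9}\,du=\frac{\left(
J_{0}-1\right) ^{-8}}{8}\leq CJ_{0}^{-8}\,,
\]
valid for $J_{0}\geq2$.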
On the other hand, putting
$\hat{\Lambda}\overset{\mathrm{def}}%
{=}\bigcup\nolimits_{j\leq J_{0} }\Lambda_{j}$,
\begin{align*}
&\sum_{k\geq1}\left\Vert \sum\nolimits_{x\in\hat{\Lambda}}\hat{g}\left(
1_{G}\Delta\right)^{k-1}\left( 0,x\right) \left( 1_{G}\Delta\hat{\pi
}\right) \left( x,\cdot\right) \right\Vert _{1}\\
& \leq C\sum_{k\geq
1}2^{-k+1}\hat{g}\left( 0,U_{k}\left( \hat{\Lambda}\right) \right)
\sup_{x\in\hat{\Lambda}}\left\Vert \Delta\left(x,\cdot\right)
\right\Vert_{1}
\leq C\left( J_{0}\right) \sup_{x\in\hat{\Lambda}}\left\Vert
\Delta\left( x,\cdot\right) \right\Vert _{1}\leq\delta/9
\end{align*}
if $\varepsilon\leq\varepsilon_{0}\left( \delta\right)$
and $\varepsilon_0(\delta)$ is taken small enough.
Combining this
with (\ref{NonSmooth7}) and (\ref{NonSmooth8}) proves $\left\Vert
A_{3}\right\Vert _{1}\leq\delta/3.$
Substituting the estimate on $\|A_i\|_1$, $i=1,2,3$,
into (\ref{PertExp_stopped})
and using (\ref{BadEventNonSmooth})
completes the proof of
Proposition \ref{Prop_Nonsmooth}.
\end{proof}
\section{Proof of Proposition \ref{Prop_Main}}
\label{sec-finalpush}
We just have to collect the estimates we have obtained so far. We take
$\delta_{0}$ small enough as in the
conclusion of
Propositions \ref{prop-erwin220705} and
\ref{Prop_Nonsmooth}, and for $\delta\leq\delta_{0},$ we choose
$L_{0}$ large enough, also according to these propositions.
For $L_{1}\geq L_{0}$ we assume $\operatorname*{Cond}\left( L_{1}
,\delta\right) ,$ and take $L\leq L_{1}\left( \log L_{1}\right) ^{2}.$
For $i=1,2,3,$ and $\Psi\in\mathcal{M}_{L},$ we have according to Lemma
\ref{Le_TwoBad} and Proposition \ref{Prop_Main_no_bad}%
\begin{align*}
b_{i}\left( L,{\Psi},\delta\right) & \leq\mathbb{P}\left( D_{L,\Psi
}\left( 0\right) >\left( \log L\right) ^{-11.25-2.25i}\right) \\
& \leq\mathbb{P}\left( D_{L,\Psi}\left( 0\right) >\left( \log L\right)
^{-11.25-2.25i},\left( \operatorname*{TwoBad}\nolimits_{L}\right) ^{c}%
\cap\left( \operatorname*{Good}\nolimits_{L}\right) ^{c}\right) \\
& +\mathbb{P}\left( D_{L,\Psi}\left( 0\right) >\left( \log L\right)
^{-9},\operatorname*{TwoBad}\nolimits_{L}\cap\operatorname*{Good}%
\nolimits_{L}\right) +\mathbb{P}\left( \operatorname*{TwoBad}\nolimits_{L}%
\right) \\
& \leq\mathbb{P}\left( D_{L,\Psi}\left( 0\right) >\left( \log L\right)
^{-11.25-2.25i},\left( \operatorname*{TwoBad}\nolimits_{L}\right) ^{c}%
\cap\left( \operatorname*{Good}\nolimits_{L}\right) ^{c}\right) \\
& +\exp\left[ -1.2\left( \log L\right) ^{2}\right] +\exp\left[ -\left(
\log L\right) ^{17/8}\right] .
\end{align*}
We therefore only have to estimate the first summand.
Using the notation of Sections \ref{Subsect_greengood}
and \ref{Subsect_BadPoints},%
\begin{align*}
& \mathbb{P}\left( D_{L,\Psi}\left( 0\right) >\left( \log L\right)
^{-11.25-2.25i},\left( \operatorname*{TwoBad}\nolimits_{L}\right) ^{c}%
\cap\left( \operatorname*{Good}\nolimits_{L}\right) ^{c}\right) \\
& \leq\sum_{D\in\mathcal{D}_{L}}\sum_{j}\mathbb{P}\left( \left\Vert \left(
\left[ \Pi_{V_{L}}-\pi_{V_{L}}\right] \hat{\pi}_{{\Psi}}\right) \left(
0,\cdot\right) \right\Vert _{1}\geq\left( \log L\right) ^{-11.25+2.25i}%
,\ \operatorname*{Bad}\nolimits_{L}^{\left( j\right) }\left( D\right)
\right) \\
& \leq\sum_{D\in\mathcal{D}_{L}}\sum_{j\leq i}\mathbb{P}\left( \left\Vert
\left( \left[ \Pi_{V_{L}}-\pi_{V_{L}}\right] \hat{\pi}_{{\Psi}}\right)
\left( 0,\cdot\right) \right\Vert _{1}\geq\left( \log L\right)
^{-11.25+2.25j},\ \operatorname*{Bad}\nolimits_{L}^{\left( j\right) }\left(
D\right) \right) \\
& \quad\quad +\sum_{D\in\mathcal{D}_{L}}\sum_{j>i}\mathbb{P}\left(
\operatorname*{Bad}\nolimits_{L}^{\left( j\right) }\left( D\right)
\right) \\
& \leq\frac{4\left\vert \mathcal{D}_{L}\right\vert }{100}\exp\left[ -\left(
\log L\right) ^{2}\right] +\left\vert \mathcal{D}_{L}\right\vert \exp\left[
-\left[ 1-\left( 4-i-1\right) /13\right] \left( \log\frac{L}{\left( \log
L\right) ^{10}}\right) ^{2}\right] \\
& \leq\frac{1}{8}\exp\left[ -\left[ 1-\left( 4-i\right) /13\right]
\left( \log L\right) ^{2}\right] .
\end{align*}
Combining these estimates, we get for $i=1,2,3$ and $L$ large enough,%
\[
b_{i}\left( L,{\Psi},\delta\right) \leq\frac{1}{4}\exp\left[ -\left[
1-\left( 4-i\right) /13\right] \left( \log L\right) ^{2}\right]\,.
\]
For $i=4,$ we have%
\[
b_{4}\left( L,{\Psi},\delta\right) \leq\mathbb{P}\left( D_{L,\Psi}\left(
0\right) >\left( \log L\right) ^{-2.25}\right) +\mathbb{P}\left(
\left\Vert \Pi_{L}\left( 0,\cdot\right) -\pi_{L}\left( 0,\cdot\right)
\right\Vert _{1}\geq\delta\right) .
\]
The second summand is estimated by Proposition \ref{Prop_Nonsmooth}, and the
first in the same way as the $b_{i},\ i\leq3.$
This completes the proof of Proposition \ref{Prop_Main}.
\section{Proof of Theorem \ref{Th_Main1}\label{proof-Main1}}
The proof is based on a modification of
the computations in
Section \ref{Sect_NonSmooth}.
We begin with an auxiliary definition.
In what follows, $c_d$ is a constant large enough so as to satisfy
\begin{equation}
\label{eq-110206a} \log c_d>4d\,.
\end{equation}
For any $x\in V_L$ and random walk $\{X_n\}$ with $X_0=x$, set
$\eta(x)=\min\{n>0: |X_n-x|>d_L(x)\}$.
We fix $\delta$ small enough such that $\delta<\delta_0$ and
\begin{equation}
\label{eq-110206z}
\bar c_d
\overset{\mathrm{def}}{=}\max_{y\in V_L}
p^{\mathrm{RW},y}(\tau_L>\eta(y))+\delta<1\end{equation}
for all $L$ large enough (this is possible by the
invariance principle for simple random walk).
We then choose $\varepsilon_0$ so that the conclusion of Theorem
\ref{Th_Main} is satisfied with this value of $\delta$.
For $x\in \Lambda_j$, set $U_1(x)=\partial V_{\gamma d_L(x)}(x)$, and
inductively, for $k\geq 2$, set $U_k(x)=\cup_{y\in U_{k-1}(x)} \partial
V_{c_d^{k} 2^j}(y)$.
\begin{definition}
\label{def-110206a}
A point $x\in \Lambda_j$
is $K$-good if $D_{\gamma d_L(x),0}(x)\leq \delta$ and,
for any $k=1,\ldots,K$,
for all $y\in U_k(x)$, $D_{c_d^{k+1} 2^j,0}(y)\leq \delta$.
\end{definition}
We note that there exists
some constant $C$
depending only on $d$ such that
for all $x\in \Lambda_j$,
\begin{equation}
\label{eq-110206gg}
\mathbb{P}(x\,
\mbox{\rm is not $K$-good})\leq
\exp\left(-\frac{10}{13} (\log (\gamma 2^{j-1}))^2\right)+
\sum_{k=1}^K C(c_d^k 2^j)^{d}
\exp\left(-\frac{10}{13} (\log ( c_d^k 2^j))^2\right)\,.
\end{equation}
For $J>J_0$, set
$$\tilde B_{L,J,K}^{(4)}
\overset{\mathrm{def}}{=}
\left(\cup_{j=J+1}^{J_1(L)} \{x\in \Lambda_j:
D_{\gamma d_L(x),0}(x)
\geq \delta\}\right)\bigcup\left(\{x\in \Lambda_J: \mbox{\rm $x$ is
not $K$-good}\}\right)\,,$$
and
$\widetilde{
\operatorname*{Good}}_{L,J,K}
\overset{\mathrm{def}}{=}
V_L\setminus \tilde B_{L,J,K}^{\left( 4\right) }$.
If $B\in\mathcal{L}_{j},$ $j\geq J$,
we write $\widetilde{\operatorname*{Bad}}_{L,J,K}\left( B\right) $ for
the event $\left\{ B\not \subset\widetilde
{\operatorname*{Good}}_{L,J,K}\right\}.$ Remark
that for $j>J$,
$\widetilde{\operatorname*{Bad}}_{L,J,K}\left( B\right) =
\operatorname*{Bad}\left( B\right) $.
By combining the computation in (\ref{eq-110206c})
with (\ref{eq-110206gg}),
and using our choice for the value of $c_d$,
we get that
there exists a $J_2\geq J_0$ such that for all $J>J_2$, all $K$,
and all $L$ large enough,
\begin{equation}
\label{eq-110206d}
\mathbb{P}\left( \widetilde{\operatorname*{Bad}}_{L,J,K}
\left( B\right) \right) \leq p_J\,.
\end{equation}
We set next
\[
\tilde
X_{j}\overset{\mathrm{def}}{=}\sum_{D\in\mathcal{L}_{j}}
1_{\widetilde{\operatorname*{Bad}}_{L,J,K}\left( D\right) }\]
and
\[
\widetilde{\operatorname*{ManyBad}}_{L,J,K}
\overset{\mathrm{def}}{=}\bigcup
\nolimits_{J_{0}\leq j\leq J_{1}\left( L\right) }\left\{
\tilde X_{j}\geq j^{-3/2}N_{j}\right\}
\cup\operatorname*{TwoBad}\nolimits_{L}\,.
\]
Arguing as in the computation leading to
(\ref{BadEventNonSmooth}) (except that ${\cal L}_J$ is divided
into more sets to achieve independence, and the number of such sets depends
on $K$), we conclude that for each $J,K$ there is an $L_2=L_2(J,K)$ such that
for all $L>L_2(J,K)$,
\begin{equation}
\mathbb{P}\left(
\widetilde{\operatorname*{ManyBad}}_{L,J,K}\right) \leq
\frac{1}{10}\exp\left[ -\left( \log L\right) ^{2}\right]\,.
\label{BadEventNonSmoothtilde}
\end{equation}
Next, replace $B_2'$ in
(\ref{eq-110206hh}) by considering
in the union there only $j\in [J,J_1(L)]$, and replacing
$\operatorname*{Bad}\left( D_{i}^{\left( j\right)
}\right)$ by
$\widetilde{\operatorname*{Bad}}_{L,J,K}\left( D_{i}^{\left( j\right)
}\right)$ (note that this influences only the layers $\Lambda_j$ with $j\leq J$).
We rewrite the expansion (\ref{PertExp_stopped}) as
\begin{align}
\Pi_{L}-\pi_{L} & =\sum_{r=1}^{\infty}\Xi_{1}^{r}\pi_{L}+\left( \sum
_{r=0}^{\infty}\Xi_{1}^{r}\right) \Xi_{2}\Pi_{L}+
\left( \sum_{r=0}^{\infty
}\Xi_{1}^{r}\right) \hat{g}\Gamma\hat{\pi}\Pi_{L}\label{PertExp_stopped1}\\
& =:A_{1}+A_{2}+A_{3}\,,\nonumber
\end{align}
except that now $\tilde B_2'$ is used instead of $B_2'$ in the definition
of $\Xi_{2}$. By repeating the computation leading to
(\ref{NonSmooth4}) and
(\ref{NonSmooth6}) (using
(\ref{BadEventNonSmoothtilde}) instead of
(\ref{BadEventNonSmooth}) in the latter), we conclude that
\begin{equation}
\label{eq-110206w}
\|A_1\|_1+\|A_2\|_1\leq CJ^{-1/2}\,,
\end{equation}
for $L$ large. To analyze $A_3$,
we write, with obvious notation, $A_3=\sum_{r=0}^\infty A_3^{(r)}$, and argue
as in Section \ref{Sect_NonSmooth} that it is enough to consider
$A_3^{(0)}$. We then write
\begin{eqnarray}
\label{eq-110206kk1}
A_3^{(0)}(\cdot)&=&
\sum_{k\geq1} \sum_{x\notin\operatorname*{Shell}\nolimits_{L}
\left( 2^J \right) }\hat{g}\left( 1_{G}\Delta\right)
^{k-1}\left( 0,x\right) \left( 1_{G}\Delta\hat{\pi}\right) \left(
x,z\right) \Pi_L (z,\cdot)\\
&&
\!\!\!\!\!\!\!\!
+
\sum_{k\geq1} \sum_{x\in\operatorname*{Shell}\nolimits_{L}
\left( 2^J \right) }\hat{g}\left( 1_{G}\Delta\right)
^{k-1}\left( 0,x\right) \left( 1_{G}\Delta\hat{\pi}\right) \left(
x,z\right) \Pi_L (z,\cdot)=A_3^{(0),1}+A_3^{(0),2}\,.
\nonumber
\end{eqnarray}
We already have from
Section \ref{Sect_NonSmooth} that on the event
$\left(\widetilde{\operatorname*{ManyBad}}_{L,J,K}\right)^c$,
it holds that
$\Vert A_3^{(0),1}\Vert_1\leq C J^{-1/2}$. Note that all the estimates
so far held for any large fixed $J,K$, as long as $L$ is large enough
(large enough depending on the choice of $J,K$).
It remains only
to analyze $\Vert A_3^{(0),2} \hat \pi_s\Vert_1$ for $s$ independent
of $L$, and when doing that, we can
choose $K$ in any way that does not depend on $L$. As a preliminary step
in the choice of $K$, we have the following lemma
(recall the constant $\bar c_d$, c.f. (\ref{eq-110206z})):
\begin{lemma}
\label{lem-021106a}
For all $J,K$, and
any $K$-good
$x\in \Lambda_J$, it holds that
for all $L$ large enough,
\begin{equation}
\label{eq-110206ll}
\sum_{y: |x-y|>c_d^{K+2} 2^J} | \Delta \hat \pi \Pi_L(x,y)| \leq
(\bar c_d)^{K}\,.
\end{equation}
\end{lemma}
\begin{proof}
[Proof of Lemma \ref{lem-021106a}]
The proof is similar to the argument in Lemma
\ref{lem-130705}. Consider a RWRE $X_n$ started at
$y\in U_1(x)$. Let $\eta_1(y)=\min\{n: X_n\in \partial V_{c_d 2^J}(y)\}$,
and, for
$k=2,\ldots,K$, define successively
$\eta_k(y)=\min\{n>\eta_{k-1}(y): X_n\in \partial
V_{c_d^k 2^J}(
X_{\eta_{k-1}})
\}$.
The sum in (\ref{eq-110206ll}) is then bounded above by
\begin{equation}
\label{eq-110206zz}
\max_{y\in U_1(x)}
p_\omega^y(\tau_L>\eta_K(y))\leq \prod_{k=1}^K \max_{y\in U_k(x)}
p_\omega^y(\tau_L>\eta(y))\,.
\end{equation}
Since $x$ is $K$-good, we have that for $y\in U_k(x)$,
$p_\omega^y(\tau_L>\eta(y))\leq \delta+p^{\mathrm{RW},y}(\tau_L>\eta(y))\leq
\bar c_d$. Substituting in (\ref{eq-110206zz}), the lemma follows.
\end{proof}
We next recall, c.f. Lemma \ref{Le_Lipshitz_Pi}, that
$$\sup_{x_1,x_2: |x_1-x_2|\leq D}\|\hat \pi_s(x_1,\cdot)-\hat \pi_s(x_2,\cdot)\|_1
\leq \frac{CD \log s}{s}\,.$$
In particular, for any fixed $K,J$ with $J>J_2$,
using (\ref{eq-110206ll}), and the fact that
$\sum_z A_3^{(0),2}(z)=0$, we get
$$ \Vert A_3^{(0),2} \hat\pi_s\Vert_1\leq
\delta (\bar c_d)^K+
C\frac{c_d^{K+1}2^J \log s}{s}\,,$$
and thus
\begin{equation}
\label{eq-110206y}
\Vert A_3 \hat \pi_s\Vert_1
\leq \delta (\bar c_d)^K+
C\frac{(c_d)^{K+1}2^J \log s}{s}+CJ^{-1/2}\,.
\end{equation}
Combining (\ref{eq-110206y}) and
(\ref{eq-110206w}),
we conclude that on the event
$\left(\widetilde{\operatorname*{ManyBad}}_{L,J,K}\right)^c$,
$$D_{L,\Psi_s}(0)
\leq \delta (\bar c_d)^K+
C\frac{2(c_d)^{K+1}2^J \log s}{s}+CJ^{-1/2}\,.$$
Choosing $J$ large such that $CJ^{-1/2}<\delta/3$
and $K$ large enough such that $(\bar c_d)^K<\delta/3$, and
$s$ large enough such that $2C (c_d)^{K+1}2^J\log s/s<\delta/3$,
and using (\ref{BadEventNonSmoothtilde}),
it follows that
$\limsup_{L\to\infty} L^{r} b(L,\Psi_s,\delta)=0$.
This completes the proof of the theorem.
A graph $G=(V,E)$ is a \textit{contact graph of paths on a grid} (or \textit{CPG} for short) if there exists a collection $\mathcal{P}$ of interiorly disjoint paths on a grid $\mathcal{G}$ in one-to-one correspondence with $V$ such that two vertices are adjacent in $G$ if and only if the corresponding paths touch; if furthermore every path has at most $k$ bends (i.e. 90-degree turn at a grid-point) for $k \geq 0$, then the graph is \textit{$B_k$-CPG}. The pair $\mathcal{R} = (\mathcal{G}, \mathcal{P})$ is a \textit{CPG representation of $G$}, and more specifically a \textit{$k$-bend CPG representation of $G$} if every path in $\mathcal{P}$ has at most $k$ bends. It was shown in \cite{cpg} that not all planar graphs are CPG and that there exists no value of $k\geq 0$ for which $B_k$-CPG is a subclass of the class of planar graphs. In this note, we show that there exists no value of $k$ such that $B_k$-CPG contains the class of planar CPG graphs. More specifically, we prove the following theorem.
\begin{theorem}
\label{thm:unbound}
For any $k \geq 0$, there exists a planar graph in $B_{k+1}$-CPG $\setminus B_k$-CPG.
\end{theorem}
It immediately follows from the definition that $B_k$-CPG $\subseteq$ $B_{k+1}$-CPG but it was not known whether this inclusion is strict; Theorem \ref{thm:unbound} settles this question.
\begin{corollary}
For any $k\geq 0$, $B_k$-CPG is strictly contained in $B_{k+1}$-CPG, even within the class of planar graphs.
\end{corollary}
Note finally that Theorem \ref{thm:unbound} implies that the class of planar CPG graphs has an unbounded bend number (the bend number of a graph class $\mathcal{G}$ is the smallest $k \geq 0$ such that $\mathcal{G} \subseteq B_k$-CPG).
\section{Preliminaries}
Let $G=(V(G),E(G))$ be a CPG graph and $\mathcal{R} = (\mathcal{G},\mathcal{P})$ be a CPG representation of $G$. The path in $\mathcal{P}$ representing some vertex $u \in V(G)$ is denoted by $P_u$. An \textit{interior point} of a path $P$ is a point belonging to $P$ and different from its endpoints; the \textit{interior} of $P$ is the set of all its interior points. A grid-point $p$ is of \textit{type II.a} if it is an endpoint of two paths and an interior point, different from a bendpoint, of a third path~(see Fig. \ref{fig:typeIIa}); a grid-point $p$ is of \textit{type II.b} if it is an endpoint of two paths and a bendpoint of a third path~(see Fig. \ref{fig:typeIIb}).
\begin{figure}
\centering
\begin{subfigure}[b]{.45\textwidth}
\centering
\begin{tikzpicture}[scale=.7]
\node at (0,0) (p) [label=below left:{\scriptsize $p$}] {};
\draw[thick,-,>=stealth] (0,1)--(0,-1);
\draw[thick,dotted] (0,1.2)--(0,-1.2);
\draw[thick,<-,>=stealth] (0,0)--(1,0);
\draw[thick,dotted] (1,0)--(1.2,0);
\draw[thick,<-,>=stealth] (0,0)--(-1,0);
\draw[thick,dotted] (-1,0)--(-1.2,0);
\end{tikzpicture}
\caption{A grid-point $p$ of type II.a.}
\label{fig:typeIIa}
\end{subfigure}
\begin{subfigure}[b]{.45\textwidth}
\centering
\begin{tikzpicture}[scale=.7]
\node at (3,0) (p) [label=below left:{\scriptsize $p$}] {};
\draw[thick,-,>=stealth] (3,1)--(3,0)--(4,0);
\draw[thick,dotted] (3,1)--(3,1.2);
\draw[thick,dotted] (4,0)--(4.2,0);
\draw[thick,<-,>=stealth] (3,0)--(3,-1);
\draw[thick,dotted] (3,-1)--(3,-1.2);
\draw[thick,<-,>=stealth] (3,0)--(2,0);
\draw[thick,dotted] (1.8,0)--(2,0);
\end{tikzpicture}
\caption{A grid-point $p$ of type II.b.}
\label{fig:typeIIb}
\end{subfigure}
\caption{Two types of grid-points (the endpoints are marked by an arrow).}
\end{figure}
\section{Proof of Theorem \ref{thm:unbound}}
We show that the planar graph $G_k$, with $k \geq 0$, depicted in Fig.~\ref{fig:Gk} is in $B_{k+1}$-CPG $\setminus B_k$-CPG. We refer to the vertices $\alpha_i$, for $1 \leq i \leq 20$, as the \textit{secondary vertices}, and to the vertices $u_j^i$, for $1 \leq j \leq k+2$ and a given $1 \leq i \leq 19$, as the \textit{$(i,i+1)$-sewing vertices}. A $(k+1)$-bend CPG representation of $G_k$ is given in Fig.~\ref{fig:representation} (the blue paths correspond to sewing vertices and the red paths to secondary vertices). We next prove that in any CPG representation of $G_k$, there exists a path with at least $k+1$ bends.
\begin{figure}[hb]
\centering
\begin{subfigure}{.65\linewidth}
\centering
\begin{tikzpicture}[scale=0.55]
\fill[green!20!white] (-4.75,0) ellipse (1.25cm and 1cm);
\fill[green!20!white] (-2.25,0) ellipse (1.25cm and 1cm);
\fill[green!20!white] (4.75,0) ellipse (1.25cm and 1cm);
\node at (-4.75,0) {$H_1$};
\node at (-2.25,0) {$H_2$};
\node at (4.75,0) {$H_{19}$};
\node[circ] (a) at (0,3) [label=above:{\tiny $a$}] {};
\node[circ] (b) at (0,-3) [label=below:{\tiny $b$}] {};
\node[circ] (a1) at (-6,0) [label=below left:{\tiny $\alpha_1$}] {};
\node[circ] (a2) at (-3.5,0) [label=below right:{\tiny $\alpha_2$}] {};
\node[circ] (a3) at (-1,0) [label=below right:{\tiny $\alpha_3$}] {};
\node[circ] (a19) at (3.5,0) [label=below left:{\tiny $\alpha_{19}$}] {};
\node[circ] (a20) at (6,0) [label=below right:{\tiny $\alpha_{20}$}] {};
\draw (a) .. controls (-6,2) .. (a1);
\draw (a) .. controls (-3.5,2) .. (a2);
\draw (a) .. controls (-1,2) .. (a3);
\draw (a) .. controls (3.5,2) .. (a19);
\draw (a) .. controls (6,2) .. (a20);
\draw (a) .. controls (-11,4) and (-11,-4) .. (b);
\draw (b) .. controls (-6,-2) .. (a1);
\draw (b) .. controls (-3.5,-2) .. (a2);
\draw (b) .. controls (-1,-2) .. (a3);
\draw (b) .. controls (3.5,-2) .. (a19);
\draw (b) .. controls (6,-2) .. (a20);
\node[scirc] at (1,0) {};
\node[scirc] at (1.25,0) {};
\node[scirc] at (1.5,0) {};
\node[invisible] at (2,-4.8) {};
\end{tikzpicture}
\caption{The planar graph $G_k$.}
\label{fig:overall}
\end{subfigure}
\begin{subfigure}{.3\linewidth}
\centering
\begin{tikzpicture}[scale=0.6]
\node[circ] (ai) at (0,0) [label=left:{\tiny $\alpha_i$}] {};
\node[circ] (ai1) at (4,0) [label=right:{\tiny $\alpha_{i+1}$}] {};
\node[circ] (u1) at (2,3) [label=above right:{\tiny $u_1^i$}] {};
\node[circ] (u2) at (2,2) [label=above right:{\tiny $u_2^i$}] {};
\node[circ] (u3) at (2,1) [label=above right:{\tiny $u_3^i$}] {};
\node[circ] (uk1) at (2,-2) [label=below right:{\tiny $u_{k+1}^i$}] {};
\node[circ] (uk2) at (2,-3) [label=below right:{\tiny $u_{k+2}^i$}] {};
\draw (ai) .. controls (0.5,3) .. (u1);
\draw (ai) .. controls (0.5,2) .. (u2);
\draw (ai) .. controls (0.5,1) .. (u3);
\draw (ai) .. controls (0.5,-2) .. (uk1);
\draw (ai) .. controls (0.5,-3) .. (uk2);
\draw (ai1) .. controls (3.5,3) .. (u1);
\draw (ai1) .. controls (3.5,2) .. (u2);
\draw (ai1) .. controls (3.5,1) .. (u3);
\draw (ai1) .. controls (3.5,-2) .. (uk1);
\draw (ai1) .. controls (3.5,-3) .. (uk2);
\draw[-]
(u1) -- (u2) -- (u3)
(uk1) -- (uk2);
\node[scirc] at (2,0) {};
\node[scirc] at (2,-0.5) {};
\node[scirc] at (2,-1) {};
\end{tikzpicture}
\caption{The gadget $H_i$.}
\label{fig:gadget}
\end{subfigure}
\caption{The construction of $G_k$.}
\label{fig:Gk}
\end{figure}
Let $\mathcal{R} = (\mathcal{G},\mathcal{P})$ be a CPG representation of $G_k$. A path in $\mathcal{P}$ corresponding to a secondary vertex (resp. an $(i,i+1)$-sewing vertex) is called a \textit{secondary path} (resp. an \textit{$(i,i+1)$-sewing path}). A secondary path $P_{\alpha_i}$ is said to be \textit{pure} if no endpoint of $P_a$ or $P_b$ belongs to $P_{\alpha_i}$. We then have the following easy observation.
\begin{figure}
\centering
\begin{subfigure}{.45\linewidth}
\centering
\begin{tikzpicture}[scale=0.5]
\draw[thick,<->,>=stealth] (12,-1) -- (1,-1) node[left] {\tiny $P_a$};
\draw[thick,<->,>=stealth] (12,-1) -- (12,10) node[above] {\tiny $P_b$};
\draw[red,thick,<-,>=stealth] (2,-1) -- (2,5) -- (3,5) -- (3,6) -- (4,6) -- (4,6.5);
\draw[red,thick,<-,>=stealth] (3,-1) -- (3,4) -- (4,4) -- (4,5) -- (5,5) -- (5,5.5);
\draw[red,thick,<-,>=stealth] (4,-1) -- (4,3) -- (5,3) -- (5,4) -- (6,4) -- (6,4.5);
\draw[red,thick,<-,>=stealth] (6,-1) -- (6,1) -- (7,1) -- (7,2) -- (8,2) -- (8,2.5);
\draw[red,thick,<-,>=stealth] (7,-1) -- (7,0) -- (8,0) -- (8,1) -- (9,1) -- (9,1.5);
\draw[red,thick,->,>=stealth] (5.5,8) -- (6,8) -- (6,9) -- (12,9);
\draw[red,thick,->,>=stealth] (6.5,7) -- (7,7) -- (7,8) -- (12,8);
\draw[red,thick,->,>=stealth] (7.5,6) -- (8,6) -- (8,7) -- (12,7);
\draw[red,thick,->,>=stealth] (9.5,4) -- (10,4) -- (10,5) -- (12,5);
\draw[red,thick,->,>=stealth] (10.5,3) -- (11,3) -- (11,4) -- (12,4);
\draw[cyan,thick,<->,>=stealth] (2,4) -- (3,4);
\draw[cyan,thick,<->,>=stealth] (3,5) -- (3,4);
\draw[cyan,thick,<->,>=stealth] (3,5) -- (4,5);
\draw[cyan,thick,<->,>=stealth] (4,6) -- (4,5);
\draw[cyan,thick,<-,>=stealth] (4,6) -- (4.5,6);
\draw[cyan,thick,<-,>=stealth] (6,8) -- (6,7.5);
\draw[cyan,thick,<->,>=stealth] (6,8) -- (7,8);
\draw[cyan,thick,<->,>=stealth] (7,9) -- (7,8);
\draw[cyan,thick,<->,>=stealth] (3,3) -- (4,3);
\draw[cyan,thick,<->,>=stealth] (4,4) -- (4,3);
\draw[cyan,thick,<->,>=stealth] (4,4) -- (5,4);
\draw[cyan,thick,<->,>=stealth] (5,5) -- (5,4);
\draw[cyan,thick,<-,>=stealth] (5,5) -- (5.5,5);
\draw[cyan,thick,<-,>=stealth] (7,7) -- (7,6.5);
\draw[cyan,thick,<->,>=stealth] (7,7) -- (8,7);
\draw[cyan,thick,<->,>=stealth] (8,8) -- (8,7);
\draw[cyan,thick,<-,>=stealth] (4,2) -- (4.5,2);
\draw[cyan,thick,<-,>=stealth] (5,3) -- (5,2.5);
\draw[cyan,thick,<-,>=stealth] (5,3) -- (5.5,3);
\draw[cyan,thick,<-,>=stealth] (6,4) -- (6,3.5);
\draw[cyan,thick,<-,>=stealth] (6,4) -- (6.5,4);
\draw[cyan,thick,<-,>=stealth] (8,6) -- (8,5.5);
\draw[cyan,thick,<-,>=stealth] (8,6) -- (8.5,6);
\draw[cyan,thick,<-,>=stealth] (9,7) -- (9,6.5);
\draw[cyan,thick,->,>=stealth] (5.5,1) -- (6,1);
\draw[cyan,thick,->,>=stealth] (6,1.5) -- (6,1);
\draw[cyan,thick,->,>=stealth] (6.5,2) -- (7,2);
\draw[cyan,thick,->,>=stealth] (7,2.5) -- (7,2);
\draw[cyan,thick,->,>=stealth] (9.5,5) -- (10,5);
\draw[cyan,thick,->,>=stealth] (10,5.5) -- (10,5);
\draw[cyan,thick,<->,>=stealth] (6,0) -- (7,0);
\draw[cyan,thick,<->,>=stealth] (7,1) -- (7,0);
\draw[cyan,thick,<->,>=stealth] (7,1) -- (8,1);
\draw[cyan,thick,<->,>=stealth] (8,2) -- (8,1);
\draw[cyan,thick,<-,>=stealth] (8,2) -- (8.5,2);
\draw[cyan,thick,<-,>=stealth] (10,4) -- (10,3.5);
\draw[cyan,thick,<->,>=stealth] (10,4) -- (11,4);
\draw[cyan,thick,<->,>=stealth] (11,5) -- (11,4);
\node[scirc] at (4.7,0) {};
\node[scirc] at (5,0) {};
\node[scirc] at (5.3,0) {};
\node[scirc] at (11,5.7) {};
\node[scirc] at (11,6) {};
\node[scirc] at (11,6.3) {};
\node[scirc] at (5.7,5.7) {};
\node[scirc] at (6,6) {};
\node[scirc] at (6.3,6.3) {};
\node[scirc] at (9,2.4) {};
\node[scirc] at (9.3,2.7) {};
\node[scirc] at (9.6,3) {};
\end{tikzpicture}
\caption{$k$ is even.}
\end{subfigure}
\begin{subfigure}{.45\linewidth}
\centering
\begin{tikzpicture}[scale=0.5]
\draw[thick,<->,>=stealth] (13,-1) -- (1,-1) node[left] {\tiny $P_a$};
\draw[thick,<->,>=stealth] (13,-1) -- (13,10) -- (1,10) node[left] {\tiny $P_b$};
\draw[red,thick,<-,>=stealth] (2,-1) -- (2,5) -- (3,5) -- (3,6) -- (4,6) -- (4,6.5);
\draw[red,thick,<-,>=stealth] (3,-1) -- (3,4) -- (4,4) -- (4,5) -- (5,5) -- (5,5.5);
\draw[red,thick,<-,>=stealth] (4,-1) -- (4,3) -- (5,3) -- (5,4) -- (6,4) -- (6,4.5);
\draw[red,thick,<-,>=stealth] (6,-1) -- (6,1) -- (7,1) -- (7,2) -- (8,2) -- (8,2.5);
\draw[red,thick,<-,>=stealth] (7,-1) -- (7,0) -- (8,0) -- (8,1) -- (9,1) -- (9,1.5);
\draw[red,thick,->,>=stealth] (5.5,8) -- (6,8) -- (6,9) -- (7,9) -- (7,10);
\draw[red,thick,->,>=stealth] (6.5,7) -- (7,7) -- (7,8) -- (8,8) -- (8,10);
\draw[red,thick,->,>=stealth] (7.5,6) -- (8,6) -- (8,7) -- (9,7) -- (9,10);
\draw[red,thick,->,>=stealth] (9.5,4) -- (10,4) -- (10,5) -- (11,5) -- (11,10);
\draw[red,thick,->,>=stealth] (10.5,3) -- (11,3) -- (11,4) -- (12,4) -- (12,10);
\draw[cyan,thick,<->,>=stealth] (2,4) -- (3,4);
\draw[cyan,thick,<->,>=stealth] (3,5) -- (3,4);
\draw[cyan,thick,<->,>=stealth] (3,5) -- (4,5);
\draw[cyan,thick,<->,>=stealth] (4,6) -- (4,5);
\draw[cyan,thick,<-,>=stealth] (4,6) -- (4.5,6);
\draw[cyan,thick,<-,>=stealth] (6,8) -- (6,7.5);
\draw[cyan,thick,<->,>=stealth] (6,8) -- (7,8);
\draw[cyan,thick,<->,>=stealth] (7,9) -- (7,8);
\draw[cyan,thick,<->,>=stealth] (7,9) -- (8,9);
\draw[cyan,thick,<->,>=stealth] (3,3) -- (4,3);
\draw[cyan,thick,<->,>=stealth] (4,4) -- (4,3);
\draw[cyan,thick,<->,>=stealth] (4,4) -- (5,4);
\draw[cyan,thick,<->,>=stealth] (5,5) -- (5,4);
\draw[cyan,thick,<-,>=stealth] (5,5) -- (5.5,5);
\draw[cyan,thick,<-,>=stealth] (7,7) -- (7,6.5);
\draw[cyan,thick,<->,>=stealth] (7,7) -- (8,7);
\draw[cyan,thick,<->,>=stealth] (8,8) -- (8,7);
\draw[cyan,thick,<->,>=stealth] (8,8) -- (9,8);
\draw[cyan,thick,<-,>=stealth] (4,2) -- (4.5,2);
\draw[cyan,thick,<-,>=stealth] (5,3) -- (5,2.5);
\draw[cyan,thick,<-,>=stealth] (5,3) -- (5.5,3);
\draw[cyan,thick,<-,>=stealth] (6,4) -- (6,3.5);
\draw[cyan,thick,<-,>=stealth] (6,4) -- (6.5,4);
\draw[cyan,thick,<-,>=stealth] (8,6) -- (8,5.5);
\draw[cyan,thick,<-,>=stealth] (8,6) -- (8.5,6);
\draw[cyan,thick,<-,>=stealth] (9,7) -- (9,6.5);
\draw[cyan,thick,<-,>=stealth] (9,7) -- (9.5,7);
\draw[cyan,thick,->,>=stealth] (5.5,1) -- (6,1);
\draw[cyan,thick,->,>=stealth] (6,1.5) -- (6,1);
\draw[cyan,thick,->,>=stealth] (6.5,2) -- (7,2);
\draw[cyan,thick,->,>=stealth] (7,2.5) -- (7,2);
\draw[cyan,thick,->,>=stealth] (9.5,5) -- (10,5);
\draw[cyan,thick,->,>=stealth] (10,5.5) -- (10,5);
\draw[cyan,thick,->,>=stealth] (10.5,6) -- (11,6);
\draw[cyan,thick,<->,>=stealth] (6,0) -- (7,0);
\draw[cyan,thick,<->,>=stealth] (7,1) -- (7,0);
\draw[cyan,thick,<->,>=stealth] (7,1) -- (8,1);
\draw[cyan,thick,<->,>=stealth] (8,2) -- (8,1);
\draw[cyan,thick,<-,>=stealth] (8,2) -- (8.5,2);
\draw[cyan,thick,<-,>=stealth] (10,4) -- (10,3.5);
\draw[cyan,thick,<->,>=stealth] (10,4) -- (11,4);
\draw[cyan,thick,<->,>=stealth] (11,5) -- (11,4);
\draw[cyan,thick,<->,>=stealth] (11,5) -- (12,5);
\node[scirc] at (4.7,0) {};
\node[scirc] at (5,0) {};
\node[scirc] at (5.3,0) {};
\node[scirc] at (9.7,8.5) {};
\node[scirc] at (10,8.5) {};
\node[scirc] at (10.3,8.5) {};
\node[scirc] at (5.7,5.7) {};
\node[scirc] at (6,6) {};
\node[scirc] at (6.3,6.3) {};
\node[scirc] at (9,2.4) {};
\node[scirc] at (9.3,2.7) {};
\node[scirc] at (9.6,3) {};
\end{tikzpicture}
\caption{$k$ is odd.}
\end{subfigure}
\caption{A $(k+1)$-bend CPG representation of $G_k$.}
\label{fig:representation}
\end{figure}
\begin{observation}
\label{pureendpoints}
If a path $P_{\alpha_i}$ is pure, then one endpoint of $P_{\alpha_i}$ belongs to $P_a$ and the other endpoint belongs to $P_b$.
\end{observation}
\begin{Claim}
\label{bendpoint}
Let $P_{\alpha_i}$ and $P_{\alpha_{i+1}}$ be two pure paths and let $u$ and $v$ be two $(i,i+1)$-sewing vertices such that $uv \in E(G_k)$. If a grid-point $x$ belongs to $P_u \cap P_v$, then $x$ corresponds to an endpoint of both $P_u$ and $P_v$, and a bendpoint of either $P_{\alpha_i}$ or $P_{\alpha_{i+1}}$.
\end{Claim}
It follows from Observation \ref{pureendpoints} and the fact that $u$ is non-adjacent to both $a$ and $b$ that no endpoint of $P_{\alpha_i}$ or $P_{\alpha_{i+1}}$ belongs to $P_u$. Consequently, one endpoint of $P_u$ belongs to $P_{\alpha_i}$ and the other endpoint belongs to $P_{\alpha_{i+1}}$. We conclude similarly for $P_v$. By definition, $x$ corresponds to an endpoint of at least one of $P_u$ and $P_v$, which implies that $x$ belongs to $\mathring{P_{\alpha_i}}$ or $\mathring{P_{\alpha_{i+1}}}$. But then, $x$ must be an endpoint of both $P_u$ and $P_v$; in particular, $x$ is a grid-point of type II.a or II.b. Without loss of generality, we may assume that $x \in \mathring{P_{\alpha_i}}$. We denote by $y_u$ (resp. $y_v$) the endpoint of $P_u$ (resp. $P_v$) belonging to $P_{\alpha_{i+1}}$. Now, suppose to the contrary that $x$ is of type II.a. The union of $P_u$, $P_v$ and the portion of $P_{\alpha_{i+1}}$ between $y_u$ and $y_v$ defines a closed curve $\mathcal{C}$, which divides the plane into two regions. Since $P_a$ and $P_b$ touch neither $P_u$, $P_v$ nor $\mathring{P_{\alpha_{i+1}}}$ (recall that $P_{\alpha_{i+1}}$ is pure), $P_a$ and $P_b$ lie entirely in one of those regions; and, as $a$ and $b$ are adjacent, $P_a$ and $P_b$ in fact belong to the same region. On the other hand, since one endpoint of $P_u$ (resp. $P_v$) belongs to $P_{\alpha_i}$ while the other endpoint belongs to $P_{\alpha_{i+1}}$, and both endpoints of $P_{\alpha_i}$ are in $P_a \cup P_b$, it follows that $x \in \mathcal{C}$ is the only contact point between $P_u$ (resp. $P_v$) and $P_{\alpha_i}$; but $\alpha_i$ and $\alpha_{i+1}$ being non-adjacent, this implies that $P_{\alpha_i}$ crosses $\mathcal{C}$ only once and therefore has one endpoint in each region. But both endpoints of $P_{\alpha_i}$ belong to $P_a \cup P_b$, which contradicts the fact that $P_a$ and $P_b$ lie in the same region. Hence, $x$ is of type II.b, which concludes the proof.~$\diamond$
\bigskip
\begin{Claim}
\label{bends}
If two paths $P_{\alpha_i}$ and $P_{\alpha_{i+1}}$ are pure, then one of them contains at least $\lfloor \frac{k+1}{2} \rfloor$ bends and the other one contains at least $\lceil \frac{k+1}{2} \rceil$ bends. Moreover, all of those bendpoints belong to $(i,i+1)$-sewing paths.
\end{Claim}
For all $1 \leq j \leq k+1$, consider a point $x_j \in P_{u_j^i} \cap P_{u_{j+1}^i}$. It follows from Claim \ref{bendpoint} that $x_j$ is a bendpoint of either $P_{\alpha_i}$ or $P_{\alpha_{i+1}}$. Since $x_j$ and $x_{j+1}$ are the endpoints of $P_{u_{j+1}^i}$, one belongs to $P_{\alpha_i}$ while the other belongs to $P_{\alpha_{i+1}}$. Therefore, $\{x_j, 1 \leq j \leq k+1 \text{ and } (j \text{ mod } 2) = 0\}$ is a subset of one of the considered secondary paths and $\{x_j, 1 \leq j \leq k+1 \text{ and } (j \text{ mod } 2) = 1\}$ is a subset of the other secondary path. $\diamond$
\bigskip
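As an illustration of this parity argument on the smallest instance (the example is not needed for the proof): for $k=1$ there are $k+2=3$ sewing vertices and $k+1=2$ contact points $x_1,x_2$, which alternate between the two secondary paths, so that
\[
\left|\{x_j \colon j \text{ odd}\}\right| = \left\lceil \tfrac{k+1}{2} \right\rceil = 1
\quad \text{and} \quad
\left|\{x_j \colon j \text{ even}\}\right| = \left\lfloor \tfrac{k+1}{2} \right\rfloor = 1,
\]
i.e. one bendpoint lands on each of $P_{\alpha_i}$ and $P_{\alpha_{i+1}}$.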
Finally, we claim that there exists an index $1 \leq j \leq 17$ such that $P_{\alpha_j}$, $P_{\alpha_{j+1}}$, $P_{\alpha_{j+2}}$ and $P_{\alpha_{j+3}}$ are all pure. Indeed, if this were not the case, each of the five disjoint blocks of four consecutive secondary paths $P_{\alpha_{4m+1}},\ldots,P_{\alpha_{4m+4}}$, for $0 \leq m \leq 4$, would contain a path that is not pure, so at least $\lfloor 20/4 \rfloor = 5$ secondary paths would not be pure; but at most $4$ secondary paths can contain endpoints of $P_a$ or $P_b$ (which have only four endpoints in total), a contradiction. It now follows from Claim \ref{bends} that $P_{\alpha_{j+1}}$ has at least $\lfloor \frac{k+1}{2} \rfloor$ bends (which belong to $(j,j+1)$-sewing paths), and that $P_{\alpha_{j+2}}$ has at least $\lfloor \frac{k+1}{2} \rfloor$ bends (which belong to $(j+2,j+3)$-sewing paths). Furthermore, either $P_{\alpha_{j+1}}$ or $P_{\alpha_{j+2}}$ has at least $\lceil \frac{k+1}{2} \rceil$ bends which are endpoints of $(j+1,j+2)$-sewing paths. Since bendpoints lying on $(j,j+1)$-, $(j+1,j+2)$- and $(j+2,j+3)$-sewing paths are pairwise distinct, either way there is a path with at least $\lfloor \frac{k+1}{2} \rfloor + \lceil \frac{k+1}{2} \rceil = k+1$ bends, which concludes the proof of Theorem \ref{thm:unbound}.
\section{Conclusion}
In this note, we prove that the class of planar CPG graphs is not included in any $B_k$-CPG, for $k \geq 0$. More specifically, we show that for any $k \geq 0$, there exists a planar graph which is $B_{k+1}$-CPG but not $B_k$-CPG. As a consequence, we also obtain that $B_k$-CPG $\subsetneq$ $B_{k+1}$-CPG for any $k \geq 0$.
\label{sec:introduction}
\subsection{Price impact}
Liquidity in financial markets is an elusive concept, with many definitions in existence. From a practical point of view, however, one of its most important metrics is the response of price to buying and selling.
This reaction is called \emph{price impact}, and it has been treated in a long series of empirical papers (see e.g. \citet{hasbrouck1991measuring,bouchaud2004fluctuations,almgren2005direct, moro2009market, bouchaud2008markets,toth2011anomalous} and refs. therein). One of the most important findings is that impact is not only mechanical but dynamic, meaning that it cannot be described exclusively by the revealed supply or demand at any given time --
say the content of the visible limit order book \citep{weber2005order}. It is rather related to underlying ``latent'' supply and demand \citep{toth2011anomalous,donier2015fully} which correspond to the intentions of market participants, and which manifest themselves over time. Most of the recent studies of impact have been carried out on transparent, listed markets. Nevertheless, many aspects of price impact appear to be universal, i.e. common to many asset classes, even to exotic ones \citep{donier2015million, toth2016square}.
In this paper, we will continue the exploration by presenting the price impact of individual transactions in credit indices, where trading does not take place in limit order books. The remainder of this section introduces these products, briefly surveys the relevant literature, and presents the data used for our study. Section \ref{sec:naive} then describes a naive approach to calculate impact based on a standard propagator technique.
Section \ref{sec:correction} looks at the effect of misclassification between buy and sell trades, and corrects the resulting biases in our results. Section \ref{sec:seasonal} discusses a temporary pattern of increased order splitting and higher impact observed in the data. Section \ref{sec:on_off_sef} finds indications that the impact of trades increases with the number of dealers involved. Finally Section \ref{sec:conclusion} concludes.
\subsection{The credit index market}
Today the credit index market is fairly mature. The most liquid derivative products are proposed by Markit; we will look at four of them: two US-based indices, CDX IG (for Investment Grade) and CDX HY (for High Yield), and their European counterparts iTraxx Europe and iTraxx Crossover, respectively. These correspond to baskets of CDSs \citep{oehmke2014anatomy}, each of which represents an insurance on bonds of a given corporate issuer within the respective grade and geographical zone. The instruments are standardized, their mechanics very much resemble those of futures contracts, and they roll once every 6 months. At the time of the roll a new maturity is issued; these are called ``series'', for example S23, S24 and S25 for iTraxx Europe. On any given day most of the liquidity is concentrated in the most recently issued five-year series of each index at the time; in the following we will only study these (see table\ \ref{tab:products}).
One particularity of this market -- as opposed to stocks or futures -- is that it is purely \emph{over-the-counter} (OTC), currently \emph{without} any liquid limit order book. Information is fragmented, there is no single, central source to verify when one is looking for tradable prices, client orders pass through a large number of dealers instead. The latter usually do provide indicative bid/ask prices, but the actual trades are mostly done via Request for Quotes (RFQ), see also \citet{hendershott2015click}. This means that the client (liquidity taker) auctions off its trades to the liquidity providers, by sending them information about the conditions of the deal (which product, buy or sell, size) either electronically or by voice, receiving competing quotes in return, and taking the best price.
To counteract the bilateral design of OTC markets which favors opacity, the Dodd-Frank Act has mandated several changes. Among them are obligatory post-trade reporting, and the creation of Swap Exchange Facilities (SEFs) which provide an organized framework to dealing in eligible OTC instruments. Today a large portion of trades is required to go through SEFs, whose volume is predominantly done via electronic RFQ. Even though they provide order books, those are -- for the moment -- empty.
\subsection{Literature review}
Credit trading has received considerably less attention than equities or futures, and most studies have been done by or in collaboration with regulators who have privileged access to non-anonymous data. \citet{gehde2015liquidity} study records from the German Bundesbank regarding single-name CDS issues. They find significant price impact using a model where the effect of each trade is permanent. \citet{shachar2012exposing} of the New York Federal Reserve defines buy/sell orders by assuming that the initiator of the trades is the end-user (as opposed to the dealer), and focuses to a large extent on the inventory management of dealers. The study finds evidence of ``hot-potato'' trading \citep{lyons1997hotpotato} whereby an initial client trade changes hands among dealers several times, while its effect is being gradually incorporated into the price. \citet{loon2016does} of the Securities and Exchange Commission focus on the same credit indices as our study. Their work takes a policy-maker's point of view, and argues that the wider transparency created by the Dodd-Frank Act has improved several metrics of liquidity. They focus particularly on the transitory period during the introduction of the reform, whereas we will consider more recent data where market structure is already relatively stable. Finally, \citet{hendershott2015click} analyze both electronic and voice trading in single-name CDS. Most notably they identify price impact related to information leakage, especially when the client requests prices from many dealers, and even if he finally decides not to trade.
\subsection{The dataset}
Our period of study is 17 June 2015 -- 31 August 2016. For the four products we have recorded a semi-realtime (several updates per minute) indicative data feed via a service called CBBT (Composite Bloomberg Bond Trader). This represents a continuous, electronic poll of recent executable prices from dealers. Nevertheless, it is not a bid or ask price, only an indicative level around which one expects to be able to transact. At some time $t$ the indicative price is quoted as a credit spread $s_t$, which is the annualized insurance premium in basis points.\footnote{Note that the quoting convention for CDX HY is different from the rest, but the credit spread can be recalculated based on the available prices.} In the following we will express all prices as basis points of the typical credit spread itself, meaning
\begin{equation*}
m_t = 10^4 \times \frac{s_t}{\left\langle s_t\right\rangle},
\end{equation*}
where $\left\langle \cdot\right\rangle$ denotes a time average.
Anonymous information about trades is also available from a different source: \emph{trade repositories} mandated by regulation. We have used the records of two such organizations.\footnote{Data from Bloomberg SDRV is available at \href{http://www.bloombergsdr.com/}{http://www.bloombergsdr.com/}, and from the DTCC at \href{http://www.dtcc.com/repository-otc-data}{http://www.dtcc.com/repository-otc-data}.} These include a substantial part of all trades with credit spread, timestamp, volume and other additional information.
While the data are rich and relatively clean, the two sources (prices and trades) are independent, and there is no \emph{a priori} reason for perfect synchronization between the two.
\begin{table}
\caption{Summary of the different credit index products studied. Notice that the value of $G$ is much more stable than that of ${\cal R}$ across different series of the same product.}
\begin{indented}
\item[]\begin{tabular}{@{}lcccc}
\br
\multicolumn{1}{c}{Product code} & Avg. intertrade & $N_\mathrm{eff}$ & ${\cal R}(\ell=30)$ & $G(\ell=30)$ \\
& time [sec] & & [bps] & [bps] \\
\mr
CDX HY S24 & 75 & 2.3 & 5.8 & 3.4 \\
CDX HY S25 & 66 & 14.2 & 51.2 & 3.9 \\
CDX HY S26 & 90 & 1.7 & 7.1 & 4.2 \\ \hline
CDX IG S24 & 113 & 1.6 & 11.8 & 7.5 \\
CDX IG S25 & 92 & 9.8 & 55.0 & 4.9 \\
CDX IG S26 & 119 & 2.8 & 10.0 & 5.3 \\ \hline
iTraxx Crossover S23 & 128 & 1.5 & 13.4 & 9.3 \\
iTraxx Crossover S24 & 106 & 5.7 & 65.4 & 6.6 \\
iTraxx Crossover S25 & 199 & 1.2 & 19.9 & 10.6 \\ \hline
iTraxx Europe S23 & 171 & 1.4 & 15.8 & 10.1 \\
iTraxx Europe S24 & 132 & 7.1 & 67.1 & 8.4 \\
iTraxx Europe S25 & 181 & 2.0 & 21.4 & 10.2 \\
\br
\end{tabular}
\label{tab:products}
\end{indented}
\end{table}
\begin{figure}[tbh]
\begin{center}
\includegraphics[width=0.9\columnwidth]{price_example.png}
\caption{An example of the time evolution of indicative credit spread and its value reported for trades. We show the product iTraxx Europe S25 on 31 August 2016.}
\label{fig:price_example}
\end{center}
\end{figure}
\section{Naive propagators}
\label{sec:naive}
Price impact is often analyzed in the context of linear models, where the market price $m_t$ just before trade $t$ is written as a linear combination of the time dependent impact of past trades \citep{bouchaud2004fluctuations}:
\begin{equation}
m_t = \sum_{t'<t}\left[{G}(t-t')\epsilon_{t'} + \eta_{t'} \right] + m_{-\infty}.
\label{eq:ptMO}
\end{equation}
$\epsilon_{t'}$ is the sign of the trade at time $t'$ ($+$ for buyer, $-$ for seller initiated trades), and $\eta_{t'}$ is an independent noise term. ${G}(\ell)$ is called the `propagator', and it describes how the price at time $t$ is modified \emph{due to} the trade at $t-\ell$. In equity and futures markets, this propagator is found to decay with time, i.e. a large part of price impact is transient rather than permanent (for a recent study of the long term behaviour of $G(\ell)$ in equities, see \citet{brokmann2015slow}).
In order to calibrate the model \eqref{eq:ptMO} one calculates the \emph{response function} ${\cal R}(\ell)$, which is defined as
\begin{equation}
{\cal R}(\ell) = \langle (m_{t+\ell}-m_t) \cdot \epsilon_t \rangle,
\end{equation}
and which quantifies the price move \emph{after} a trade, but not necessarily \emph{due to} the trade. One then measures the autocorrelation of order signs, which is customarily defined as
\begin{equation}
C(\ell) = \langle \epsilon_t \epsilon_{t+\ell} \rangle.
\end{equation}
Finally, one solves the linear equation \citep{bouchaud2006random}
\begin{equation}
\label{eq:RCG}
{\cal R}(\ell) = \sum_{0 < n\leq \ell} {G}(n) C(\ell-n) + \sum_{n>0} [{G}(n+\ell)-G(n)] C(n)
\end{equation}
to map out the numerical value of $G(\ell)$.\footnote{In fact, it is much better to solve numerically the corresponding equation for the discrete derivative of $G$ in terms of the discrete derivative of ${\cal R}$. This is what we have done in this paper.}
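As an illustration, the inversion can also be written directly (a minimal numerical sketch, assuming NumPy arrays \texttt{R} and \texttt{C} of the response function and sign autocorrelation at lags $0,\dots,L$, with \texttt{R[0]}${}=0$ and \texttt{C[0]}${}=1$; the second sum in \eqref{eq:RCG} is truncated at lag $L$ under a saturation assumption on $G$, which is harmless here since $C$ is short ranged):
\begin{verbatim}
import numpy as np

def propagator(R, C):
    # Solve R(l) = sum_{0<n<=l} G(n)C(l-n) + sum_{n>0} [G(n+l)-G(n)]C(n)
    # for G(1..L), assuming G(m) = G(L) for m > L (saturation).
    L = len(R) - 1
    M = np.zeros((L, L))
    for l in range(1, L + 1):
        for n in range(1, l + 1):              # first sum
            M[l-1, n-1] += C[l-n]
        for n in range(1, L + 1):              # second, truncated sum
            M[l-1, min(n + l, L) - 1] += C[n]  # the +G(n+l) term
            M[l-1, n-1] -= C[n]                # the -G(n) term
    return np.linalg.solve(M, R[1:])
\end{verbatim}
As noted above, the variant based on the discrete derivative of $G$ is numerically more stable; the direct inversion is shown here only for transparency.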
To guess the sign of trades one often relies on some heuristic. If we denote the price of trade $t$ by $p_t$, then simply
\begin{equation}
\label{eq:sign}
\epsilon_t = \mathrm{sign}(p_t-m_t).
\end{equation}
An example of transaction and reference prices is shown in figure\ \ref{fig:price_example}.
As one can see from figure\ \ref{fig:C}, the shape of $C(\ell)$ is well fitted by a stretched exponential. This is true for most individual products and on average across them, and it is in contrast with earlier studies in order book markets, where $C(\ell)$ rather decays as a slow, power-law function \citep{bouchaud2008markets}. In the latter, liquidity at good prices is often small \citep{bouchaud2006random}, so large orders tend to be sliced, producing a long-range autocorrelation of small trades. In OTC markets clients are encouraged to request deals that correspond to their full liquidity needs, so that after the trade is done, the dealer offloading the inventory just acquired will not have to compete for liquidity with the same client. Since trades are bilateral, the dealer knows the identity of the client, and can reward or penalize it by adjusting the bid-ask spread according to any adverse selection perceived on earlier deals \citep{osler2016bid}.
Since the order sign process is not long range correlated, its autocorrelation function is integrable and one can therefore define an \emph{effective number of correlated orders} via
\begin{equation*}
N_\mathrm{eff} = \sum_{\ell = 0}^\infty C(\ell).
\end{equation*}
This value varies between $1.2$ and $14.2$, as reported in table\ \ref{tab:products}. These differences are due to time periods; we will study this point further in Section~\ref{sec:seasonal}.
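For concreteness, the sign heuristic \eqref{eq:sign} and the estimation of $C(\ell)$ and $N_\mathrm{eff}$ amount to a few lines (a sketch; the arrays \texttt{p} and \texttt{m} of trade prices and synchronized indicative mids are hypothetical placeholders, and the infinite sum is truncated at a lag where $C$ has visibly decayed):
\begin{verbatim}
import numpy as np

def signs(p, m):
    return np.sign(p - m)                # the heuristic sign(p_t - m_t)

def sign_autocorr(eps, max_lag):
    # C(l) = <eps_t eps_{t+l}>; C(0) = 1 since eps = +-1
    return np.array([1.0] + [np.mean(eps[:-l] * eps[l:])
                             for l in range(1, max_lag)])

eps = signs(p, m)
N_eff = sign_autocorr(eps, 100).sum()    # truncated sum over lags
\end{verbatim}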
We can calculate the propagator via \eqref{eq:RCG}; its average across products is given in figure\ \ref{fig:Gtrue}. \citet{bouchaud2004fluctuations} have shown that if $C(\ell)$ has a power-law form, then the propagator should itself decay over time as a power-law in order to maintain the efficiency of prices. In other words, impact is mostly {\it transient} in that case. On the other hand, since our $C(\ell)$ is short ranged, the same argument predicts that $G(\ell)$ should tend to a constant for large $\ell$, corresponding to a non-zero {\it permanent impact} component. This is indeed what we observe, see figure\ \ref{fig:Gtrue}.
Calculating $G$ from ${\cal R}$ involves inverting a matrix whose elements are related to $C$. This operation amplifies the noise in the correlations, and so it is useful to also give approximate formulas that avoid this. If we know that $G$ is increasing before converging to a fixed value, and $C$ is positive, then from \eqref{eq:RCG} one can find two bounds on $G(\ell)$. In the limit when $C(\ell)$ reaches zero much more quickly than $G(\ell)$ goes to its asymptotic value, we can write for large $\ell$ that
\begin{equation}
(2 N_\mathrm{eff}-1)^{-1}{\cal R}(\ell) \leq G(\ell).
\label{eq:resplower}
\end{equation}
Conversely, if we assume that $G$ saturates immediately, then ${\cal R}$ will be maximal, and for large $\ell$ we get
\begin{equation}
G(\ell)\leq N_\mathrm{eff}^{-1} {\cal R}(\ell).
\label{eq:respupper}
\end{equation}
Figure\ \ref{fig:Gtrue} shows these bounds, which are not excessively wide, as well as the real $G$ calculated numerically.
In reality the propagator -- which was not observed in previous studies -- increases steeply but continuously in the initial period. This makes sense in the absence of a central order book: the information that someone bought or sold takes a finite time of $5$--$10$ trades to diffuse in the market and to get incorporated into the price.
Beyond the propagators, which give a microscopic description of price moves, one can also look at a more aggregate characterization by dividing the data into $15$-minute bins. Then one can calculate in each bin $b$ the net signed notional defined as
\begin{equation*}
I_b = \sum_{t\in b} \epsilon_tQ_t,
\end{equation*}
where the sum runs over trades in the bin, and $Q_t$ is the notional value of trade $t$. As a function of this quantity one can calculate the average price change $r_b = m_b - m_{b-1}$ over the bin. Figure\ \ref{fig:bins15} confirms a strong correlation; similar results can be obtained regardless of the precise time scale. Note the concavity of this plot, also observed in many other markets with order books (see e.g. \citet{bouchaud2008markets}, figure\ 2.5). Although we do not have enough statistics to test on our trades the $\sqrt{Q}$ impact law universally observed in all markets studied so far, we believe that the concave shape seen in figure\ \ref{fig:bins15} is compatible with the general ``latent liquidity'' idea of \citet{toth2011anomalous,donier2015fully}.
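A sketch of this aggregation (assuming a pandas \texttt{DataFrame} \texttt{trades} indexed by timestamp, with hypothetical columns \texttt{eps} and \texttt{notional}, and a \texttt{Series} \texttt{mid} of indicative prices on the same clock):
\begin{verbatim}
import pandas as pd

def binned_imbalance(trades, mid, freq='15min'):
    # net signed notional I_b and price change r_b = m_b - m_{b-1} per bin
    I = (trades['eps'] * trades['notional']).resample(freq).sum()
    r = mid.resample(freq).last().diff()
    return pd.concat({'I': I, 'r': r}, axis=1).dropna()
\end{verbatim}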
\begin{figure}[tbh]
\begin{center}
\includegraphics[width=1.0\columnwidth]{CGR.png}
\caption{{\it (left)} Average of the sign autocorrelation $C(\ell)$, and the stretched exponential fit $a\times \exp(-(b\ell)^{\nu})$, with $a=0.43$, $b=0.16$ and $\nu=2/3$. {\it (right)} Average of the response function ${\cal R}(\ell)$, and the stretched exponential fit $a\times [1-\exp(-(b\ell)^{\nu})]$, with $a=33.9$, $b=0.068$ and $\nu=0.88$.}
\label{fig:C}
\end{center}
\end{figure}
\begin{figure}[tbh]
\begin{center}
\includegraphics[width=0.9\columnwidth]{Gtrue.png}
\caption{Average value across products of various quantities: ({\it red points}) Propagator $G(\ell)$, ({\it red line}) fits of the former quantity with the same stretched exponential form as above. ({\it red shaded area}) Bounds on $G(\ell)$ based on \eqref{eq:resplower} and \eqref{eq:respupper}. Note that as expected these are only valid for large $\ell$. ({\it blue line}) The true, bias-adjusted propagator $G^\mathrm{true}(\ell)$ calculated with the fits of $C^\mathrm{true}(\ell)$ and ${\cal R}^\mathrm{true}(\ell)$, and divided by a factor $3$ for better readability.}
\label{fig:Gtrue}
\end{center}
\end{figure}
\begin{figure}[tbh]
\begin{center}
\includegraphics[width=0.9\columnwidth]{bins_clean_15.png}
\caption{The average return in 15-minute windows normalized by their absolute mean ($r_b/\left\langle|r_b|\right\rangle$), as a function of the imbalance normalized by its own standard deviation ($I_b/\mathrm{std}(I_b)$). Data points have been separated into 30 groups according to their rank on the horizontal axis; we show the average return in each group. Outliers outside the horizontal range $[-3, +3]$ have been discarded. The purple lines correspond to individual products, and the red points to all products together. The dashed line is a linear fit with slope $1/3$. Note the clear concavity of the average curve as the volume imbalance increases.}
\label{fig:bins15}
\end{center}
\end{figure}
\section{Correction for the noise in order signs}
\label{sec:correction}
In the previous section we have confirmed the existence of price impact in OTC credit indices. However, the magnitude of the effect remains to be validated, for the following reason. It is known that the heuristic \eqref{eq:sign} for identifying order signs does not always give correct results. If $\epsilon_t$ is incorrect, naturally all expectation values calculated from it will be incorrect as well. In order to quantify such biases in our results, we are going to use a proprietary dataset including 252 trades executed over the same period by our firm (CFM).
As opposed to the detected trade sign $\epsilon_t$, let us introduce the notation $\epsilon^\mathrm{true}_t$ for the true sign of the same order, which is not \emph{a priori} known, except for those of CFM. It is also convenient to introduce an auxiliary variable $q_t$ that is $1$ when we classified the trade correctly, and $0$ when we did not. This way
\begin{equation}
\epsilon_t = q_t\epsilon^\mathrm{true}_t + (1-q_t)\times(-\epsilon^\mathrm{true}_t) \equiv \epsilon^\mathrm{true}_t (2q_t-1).
\label{eq:q}
\end{equation}
If we look at the above mentioned CFM trades, the rate of correct classification, described by the average $\left\langle q_t\right\rangle_{t\in\mathrm{CFM}}$, is only 72\%. This value is constant within noise level across different months in the sample.
As a first step we would like to show that basic correlations of the detected order signs $\epsilon_t$ are the same in the CFM subset of trades and the rest. Let us compare $C(\ell) = \langle\epsilon_t\epsilon_{t+\ell}\rangle$ with the subsample average
\begin{equation*}
C_\mathrm{CFM}(\ell) = [\langle\epsilon_t\epsilon_{t+\ell}\rangle_{t\in\mathrm{CFM}}+\langle\epsilon_t\epsilon_{t+\ell}\rangle_{t+\ell\in\mathrm{CFM}}]/2,
\end{equation*}
where we conditioned on at least one of the trades belonging to CFM. Figure\ \ref{fig:Cint} shows that there is a fair match, especially for short lags. This gives an indication that other statistics calculated on CFM trades may be approximately similar to the whole market, and can be used in the following.
Let us now look at how misclassification biases our earlier calculations. Define
\begin{eqnarray}
C^\mathrm{true}(\ell) = \left \langle \epsilon^\mathrm{true}_t \epsilon^\mathrm{true}_{t+\ell}\right\rangle = \left \langle \epsilon^\mathrm{true}_t \epsilon_{t+\ell}\right\rangle + \left\langle \epsilon^\mathrm{true}_t (\epsilon^\mathrm{true}_{t+\ell}-\epsilon_{t+\ell})\right\rangle.
\end{eqnarray}
We can readily measure the first term when $t\in\mathrm{CFM}$, whereas for $\ell \geq 1$ the second term can be rewritten as
\begin{eqnarray}
\left\langle\epsilon^\mathrm{true}_t (\epsilon^\mathrm{true}_{t+\ell}-\epsilon_{t+\ell})\right\rangle = -\left\langle\epsilon^\mathrm{true}_t \epsilon^\mathrm{true}_{t+\ell}\cdot 2(q_{t+\ell}-1)\right\rangle \approx \nonumber \\
-\left\langle\epsilon^\mathrm{true}_t\epsilon^\mathrm{true}_{t+\ell} \right\rangle \cdot 2(\left\langle q_{t+\ell}\right\rangle-1) = -2 C^\mathrm{true}(\ell)\cdot (\left\langle q_t\right\rangle-1).
\label{eq:truedelta}
\end{eqnarray}
For the approximation step we assumed that whether or not we make a mistake in identification is independent of the two-point product of true signs. After rearranging, and assuming that we can use CFM trades in one leg of the correlation, we get
\begin{equation}
C^\mathrm{true}(\ell \geq 1) = \frac{\left \langle \epsilon^\mathrm{true}_t \epsilon_{t+\ell}\right\rangle_{t\in\mathrm{CFM}}}{2\left\langle q_t\right\rangle-1}.
\label{eq:Ctrue}
\end{equation}
This estimation of $C^\mathrm{true}(\ell)$ is shown in figure\ \ref{fig:Cint}.
As for the response function, one can define its ``true" variant as
\begin{equation}
{\cal R}^\mathrm{true}(\ell) = \langle (m_{t+\ell}-m_t) \cdot \epsilon^\mathrm{true}_t \rangle.
\label{eq:Rtrue}
\end{equation}
This is related to the response with detected order signs as
\begin{eqnarray}
{\cal R}(\ell) = \langle (m_{t+\ell}-m_t)\epsilon_t \rangle = \nonumber \\
\langle (m_{t+\ell}-m_t)\times(2q_t-1)\epsilon^\mathrm{true}_t\rangle \approx (2\langle q_t\rangle-1){\cal R}^\mathrm{true}(\ell).
\label{eq:Rtrue2}
\end{eqnarray}
In the approximation step we neglect the correlation between the identification error and the future price change. Hence:
\begin{eqnarray}
{\cal R}^\mathrm{true}(\ell) = \frac{{\cal R}(\ell)}{2\langle q_t\rangle-1}.
\label{eq:Rtrue3}
\end{eqnarray}
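Numerically, the corrections \eqref{eq:Ctrue} and \eqref{eq:Rtrue3} reduce to a single rescaling (a sketch; \texttt{C\_mixed} stands for the measured $\left\langle \epsilon^\mathrm{true}_t \epsilon_{t+\ell}\right\rangle_{t\in\mathrm{CFM}}$, and \texttt{q\_bar} for the correct-classification rate, $\approx 0.72$ here):
\begin{verbatim}
import numpy as np

def debias(C_mixed, R_detected, q_bar=0.72):
    w = 2.0 * q_bar - 1.0            # the factor 2<q> - 1
    return np.asarray(C_mixed) / w, np.asarray(R_detected) / w
\end{verbatim}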
One can then define a true propagator $G^\mathrm{true}(\ell)$ via \eqref{eq:RCG} by inserting ${\cal R}^\mathrm{true}(\ell)$ and $C^\mathrm{true}(\ell)$. Using \eqref{eq:Ctrue} and \eqref{eq:Rtrue3} to approximate the latter, one obtains the numerical value of the true propagator averaged over all products, see figure\ \ref{fig:Gtrue}. The net result is that $G^\mathrm{true}(\ell) \approx 3G(\ell)$.
\begin{figure}[tbh]
\begin{center}
\includegraphics[width=0.9\columnwidth]{Cint.png}
\caption{Cumulative sign autocorrelation functions. The estimated curve corresponds to \eqref{eq:Ctrue}.}
\label{fig:Cint}
\end{center}
\end{figure}
\section{Seasonal patterns in order splitting}
\label{sec:seasonal}
We now revisit our results to study their variation across time. Figure\ \ref{fig:Neff_all} shows the measured value of $N_\mathrm{eff}$ in monthly windows. One can see that order correlations intensify at the turn of the year, while at the same time credit spreads climb to a local maximum. Liquidity itself remains roughly constant. In figure\ \ref{fig:Neff_cumsum} we offer a more detailed view of the effect, by showing the cumulative autocorrelation $\sum_{\ell' = 0}^\ell C(\ell')$ for each month separately (averages over all products). We do not see much structure in periods of low correlation, while in periods of high correlation $C$ remains strongly positive up to lags of 10--20 trades. Note that the classification variable $q_t$ is reasonably stationary, so this seasonality is too strong to result from an incorrect classification of trades.
\begin{figure}[tbh]
\begin{center}
\includegraphics[width=0.9\columnwidth]{Neff_all.png}
\caption{The points in color show the effective number of correlated trades for each product, measured in 1-month periods. The dashed black line represents the credit spread, shifted and normalized such that the data spans the interval [0, 1]; this is an average over the four products.}
\label{fig:Neff_all}
\end{center}
\end{figure}
\begin{figure}[tbh]
\begin{center}
\includegraphics[width=0.9\columnwidth]{Neff_cumsum.png}
\caption{The cumulative trade sign autocorrelation $\sum_{\ell' = 0}^\ell C(\ell')$, averaged across products. Each curve corresponds to a 1-month period.}
\label{fig:Neff_cumsum}
\end{center}
\end{figure}
An explanation could be given in the context of the ``hot-potato'' theory of \citet{lyons1997hotpotato}, advocated for credit markets in \citet{shachar2012exposing}. Clients are expected to trade the full required size in a single deal, so further orders (and hence $N_\mathrm{eff} > 1$) could come from the subsequent inter-dealer exchange of risk. In periods of difficult markets liquidity becomes ``recycled'', as it takes a longer time for the position to find a final counterparty to warehouse the risk. This, however, does not explain the increase of ${\cal R}$ which is shown in figure\ \ref{fig:monthly_Reff}. Response is amplified in proportion to correlations, which means that these additional trades have full impact, and they likely cause a net variation of dealer inventory \citep{shachar2012exposing}. Hence this is more likely the signature of increased real order splitting or herding among clients, in the spirit of recent papers on transparent markets \citep{bouchaud2008markets}.
\begin{figure}[tbh]
\begin{center}
\includegraphics[width=0.9\columnwidth]{monthly_Reff.png}
\caption{The points show the response function ${\cal R}$ after 30 trades for each product, measured in 1-month periods.}
\label{fig:monthly_Reff}
\end{center}
\end{figure}
\section{Comparison of on-SEF and off-SEF trades}
\label{sec:on_off_sef}
\citet{loon2016does} compare various forms of trading, and argue that in the transitory period after the enactment of Dodd-Frank, increased market transparency has led to lower trading costs and price impact. Their data dates from 2013, when uncleared and off-SEF trading were still commonplace. In our more recent dataset only $11\%$ of all trades are off-SEF, and in terms of transparency we no longer expect much difference. More importantly though, on-SEF it is mandatory to have \emph{at least} three competing brokers when requesting a price, whereas on electronic platforms for off-SEF trading it is \emph{at most} three. It is common lore among traders that increased competition might reduce instantaneous costs, but as \citet{hendershott2015click} also show, it leads to higher information leakage, and thus more impact.
It is straightforward to extend \eqref{eq:ptMO} to a case where trades are classified into discrete categories $\pi$ \cite{eisler2011models}, each of which has its own propagator $G_\pi$. If each trade $t'$ falls into category $\pi_{t'}$, then
\begin{equation}
m_t = \sum_{t'<t}\left[G_\mathrm{\pi_{t'}}(t-t')\epsilon_{t'} + \eta_{t'} \right] + m_{-\infty}.
\label{eq:ptMO2}
\end{equation}
We use this technique to separate on-SEF and off-SEF execution; the naive impact kernels and the corresponding theoretical bounds are shown in figure\ \ref{fig:on_off_sef}. Indeed, despite the high noise level we find that off-SEF trades have significantly lower impact than on-SEF ones. This is not a result of the orders themselves being smaller: the mean size and the shape of the distribution are nearly identical, see figure\ \ref{fig:trade_distribution}. We see this rather as support for the theory of \citet{hendershott2015click}, that price impact grows with the number of dealers involved.
\begin{figure}[tbh]
\begin{center}
\includegraphics[width=0.9\columnwidth]{on_off_sef.png}
\caption{Naive propagators for on-SEF and off-SEF trades. The shaded areas correspond to the theoretical bounds derived from the multi-propagator equivalents of \eqref{eq:resplower} and \eqref{eq:respupper}, which are only expected to be valid for large $\ell$.}
\label{fig:on_off_sef}
\end{center}
\end{figure}
\begin{figure}[tbh]
\begin{center}
\includegraphics[width=0.9\columnwidth]{trade_distribution.png}
\caption{The distribution of order sizes on-SEF and off-SEF, close to an exponential. Both are normalized by their (nearly identical) respective means, which are shown in the legend.}
\label{fig:trade_distribution}
\end{center}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
At first the microstructure of the OTC credit index market seems different from that of equity and futures markets, as it is centered around dealers, without a central order book. However, we find that from the point of view of order flow and price impact, the differences are only quantitative. Client orders are much less split, and as a consequence, the impact of an isolated order, as expressed by $G$, reaches a permanent plateau. The numerical value of impact, after correcting for the imperfect identification of order signs, is of the same order of magnitude as the bid-ask spreads in our daily trading experience. This is in line with what is expected based on theoretical arguments about the break-even costs of market making \cite{wyart2008relation}. The propagator takes $5$--$10$ trades (more than $15$ minutes in real time) to reach its final level. Because the market is very fragmented, it takes this long for the effect of a trade to become fully incorporated in the price.
Qualitatively the behavior is in line with what was observed for other, more frequently studied products. This finding gives further support to the argument that price impact is a universal phenomenon, behaving similarly in classical markets and more ``exotic'' ones such as Bitcoin \cite{donier2015million}, options \cite{toth2016square} and now credit.
\section*{Acknowledgments}
The authors thank Panos Aliferis, Iacopo Mastromatteo and Bence T\'oth for their ideas and critical input.
\section{Introduction}
The Szeg\"{o} theorem (also known as the strong Szeg\"{o} theorem) is an
interesting asymptotic formula for the restrictions of functions of the
Toeplitz operators as the size of the domain of restriction tends to
infinity. It has a number of applications and extensions pertinent to
analysis, mathematical physics, operator theory, probability theory and
statistics and (recently) quantum information theory, see \cite%
{Bi:12,Bo-Si:90,De-Co:13,Si:12,So:13}. In this paper we consider an
extension of the theorem viewed as an asymptotic trace formula for a certain
class of selfadjoint operators. We will start with an outline of the
continuous version of the Szeg\"{o} theorem presenting it in the form which
explains our motivation.
Let $k:\mathbb{R}\rightarrow \mathbb{R}$ be an even and sufficiently smooth function
from $L^{1}(\mathbb{R})$,%
\begin{equation}
\Lambda =[-M,M],\;|\Lambda |=2M, \label{la}
\end{equation}%
$K$ and $K_{\Lambda }:=K|_{\Lambda }$ be the selfadjoint convolution operator in $%
L^{2}(\mathbb{R})$ and its restriction to $L^{2}(\Lambda )$, given by
\begin{eqnarray}\label{coco}
(Ku)(x) &=&\int_{-\infty }^{\infty }k(x-y)u(y)dy,\;x\in \mathbb{R}, \\
(K_{\Lambda }u)(x) &=&\int_{-M}^{M}k(x-y)u(y)dy,\;x\in \Lambda . \notag
\end{eqnarray}%
Set $A=\mathbf{1}_{L^{2}(\mathbb{R})}+K$ and $A_{\Lambda }= \mathbf{1}%
_{L^{2}(\Lambda)}+K_{\Lambda}$ and consider $\varphi :\mathbb{R}\rightarrow
\mathbb{R}$ such that $\varphi (A_{\Lambda })$ is of trace class in $%
L^{2}(\Lambda )$. Then we have according to Szeg\"o and subsequent works%
\begin{equation}
\tr_\Lambda \varphi (A_{\Lambda })=|\Lambda |\int_{-\infty }^{\infty
}\varphi (a(t))dt+\mathcal{T}+o(1),\;|\Lambda |\rightarrow \infty ,
\label{szcl}
\end{equation}%
where $\tr_{\Lambda }$ is the trace in $L^2(\Lambda)$, $a(t)=1+\widehat{k}(t),\;t\in
\mathbb{R}$, $\widehat{k}$ is the Fourier transform of $k$ and the subleading term $\mathcal{T}$ is a $\Lambda$-independent
functional of $\varphi $ and $a$. We will call $\varphi $ and $a$ the test
function and the symbol respectively.
Let $P=i\frac{d}{dx}$ be the selfadjoint operator in $L^{2}(\mathbb{R})$.
Then the l.h.s. of (\ref{szcl}) is $\tr_{\Lambda }\varphi (a_{\Lambda }(P))$, i.e., it is
determined by the triple $(\varphi ,a,P)$, and since $a$ is even and smooth enough, we have $a(x)=b(x^2)$, hence by the triple $(\varphi,b,P^2)$. It was proposed in \cite{Ki-Pa:15}
to consider instead of $P^2$ the Schrodinger operator $H=P^{2}+V$ where the
potential $V:\mathbb{R}\rightarrow \mathbb{R}$ is an ergodic process. It
seems that the replacement is of interest in itself, since the ergodicity of
the potential guarantees sufficiently regular large-$\Lambda $ behavior of
$\tr_{\Lambda }\varphi (a_{\Lambda }(H))$, hence well defined asymptotic
formulas. Besides, the quantity $\tr_{\Lambda }\varphi (a_{\Lambda }(H))$
for certain $\varphi ,a$ and $V$ arises in quantum information theory and
quantum statistical mechanics, see \cite{Ei-Co:11}, Remark \ref{r:renyi} and
references therein.
A similar setting is also possible in the discrete case. In fact, it
is this case which was initially studied by Szeg\"o for Toeplitz operators, while the continuous
case outlined above was considered later by Akhiezer, Kac and Widom, see e.g. \cite{Bo-Si:90} for a review.
We will also consider in this paper the discrete case.
In \cite{Ki-Pa:15} simple but rather non-trivial discrete cases were
studied. There $a(x)=x$ and $\varphi $ is $(x-x_0)^{-1}$ or $\log (x-x_0)$,
where $x_0$ is outside the spectrum of the discrete Schrodinger operator
with ergodic potential (random and almost periodic). In particular, it was
shown that if the potential in the discrete Schrodinger equation is a
collection of independent identically distributed (i.i.d.) random variables,
then the leading term on the right of the analog of (\ref{szcl}) is again of
the order \ $|\Lambda |$ and is not random, but the subleading term is of
the order $|\Lambda |^{1/2}$ and is a Gaussian random variable. In fact, a
certain Central Limit Theorem for an appropriately normalized quantity $\tr%
_\Lambda \varphi (a_{\Lambda }(H))$ was established. In this paper we extend
this result to those $\varphi $ and $a$ which, roughly speaking, have a
Lipschitz derivative (see condition (\ref{afcon}) below). Note that similar
conditions were used by Szeg\"o in his pioneering works, although the
conditions were considerably weakened in subsequent works, see \cite%
{Bo-Si:90,De-Co:13,Si:12,So:13}.
\section{Problem and Results}
Let $H$ be the one-dimensional Schrodinger operator in $l^{2}(\mathbb{Z}) $%
\begin{equation}
H=H_{0}+V, \label{h}
\end{equation}%
where%
\begin{equation}
(H_{0}u)_{j}=-u_{j+1}-u_{j-1},\;j\in \mathbb{Z} \label{h0}
\end{equation}%
and%
\begin{equation}
(Vu)_{ j}=V_{j}u_{j},\;j\in \mathbb{Z},\;|V_{j}|\leq \overline{V}<\infty
\label{qq}
\end{equation}%
is a potential which we assume to be a sequence of independent and
identically distributed (i.i.d.) random variables bounded for the sake of
technical simplicity.
The spectrum $\sigma (H)$ is a non-random closed set and
\begin{equation}
\sigma (H)\subset K:=[-2-\overline{V},2+\overline{V}], \label{spk}
\end{equation}%
see \cite{Pa-Fi:92}.
Let also $a :\sigma(H)\rightarrow \mathbb{R}$ (symbol) and $%
\varphi:a(\sigma(H))\rightarrow \mathbb{R}$ (test function) be bounded
functions. Introduce the integer valued interval (cf. (\ref{la}))
\begin{equation}
\Lambda =[-M,M]\subset \mathbb{Z},\;|\Lambda |=2M+1 \label{lm}
\end{equation}%
and the operator $\chi _{\Lambda }: l^2(\mathbb{Z}) \to l^2(\Lambda)$ of
restriction, i.e., if $x=\{x_{j}\}_{j\in \mathbb{Z}}\in l^{2}(\mathbb{Z})$,
then $\chi _{\Lambda }x=x_{\Lambda }:=\{x_{j}\}_{j\in \Lambda }\in
l^{2}(\Lambda )$. For any operator $A=\{A_{jk}\}_{j,k\in \mathbb{Z}}$ in $%
l^2(\mathbb{Z})$ we denote (cf. (\ref{coco}))%
\begin{equation}
A_{\Lambda }:=\chi _{\Lambda }A\chi _{\Lambda }=\{A_{jk}\}_{j,k\in \Lambda }
\label{ala}
\end{equation}%
its restriction to $l^{2}(\Lambda )$. Note that the spectra of $A$ and $%
A_{\Lambda }$ are related as follows:%
\begin{equation}
\sigma (A_{\Lambda })\subset \lbrack \inf \sigma (A),\sup \sigma (A)].
\label{ssl}
\end{equation}
Our goal is to study the asymptotic behavior of
\begin{equation}
\mathrm{Tr}_{\Lambda }\varphi (a_{\Lambda }(H)):=\sum_{j \in \Lambda}
(\varphi (a_{\Lambda }(H)))_{jj},\;|\Lambda |\rightarrow \infty,
\label{trfl}
\end{equation}%
where
\begin{equation} \label{trl}
\tr_{\Lambda }...=\tr \chi_{\Lambda}...\chi_{\Lambda}
\end{equation}
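In a finite numerical approximation (with the matrix of $A$ built on a box large enough for boundary effects to be negligible), the restriction (\ref{ala}) and the truncated trace (\ref{trl}) are just the extraction of a central block; a minimal sketch:
\begin{verbatim}
import numpy as np

def restrict(A, M):
    # A_Lambda = chi_Lambda A chi_Lambda: the central (2M+1) x (2M+1) block
    c = A.shape[0] // 2              # row/column index of the site j = 0
    return A[c - M:c + M + 1, c - M:c + M + 1]

def tr_Lambda(A, M):
    return np.trace(restrict(A, M))
\end{verbatim}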
As was mentioned above, this problem dates back to works of Szeg\"{o} \cite%
{Gr-Sz:58} and has been extensively studied afterwards for the Toeplitz and
convolution operators, see e.g., \cite{Bo-Si:90,De-Co:13,Si:12} and
references therein. Recall that any sequence
\begin{equation}
\{A_{j}\}_{j\in \mathbb{Z}},\;\overline{A_{j}}=A_{-j},\;\sum_{j\in \mathbb{Z}%
}|A_{j}|<\infty \label{aj}
\end{equation}%
determines a selfadjoint (discrete convolution) operator in $l^{2}(\mathbb{Z})$, cf. (\ref{coco})
\begin{equation}
A=\{A_{j-k}\}_{j,k\in \mathbb{Z}},\;(Au)_{j}=\sum_{k\in \mathbb{Z}%
}A_{j-k}u_{k}. \label{Ac}
\end{equation}%
Let
\begin{equation*}
a(p)=\sum_{j\in \mathbb{Z}}A_{j}e^{2\pi ipj},\;p\in \mathbb{T}=[0,1)
\end{equation*}%
be the Fourier transform of $\{A_{j}\}_{j\in \mathbb{Z}}$. Then, according
to Szeg\"{o} (see e.g. \cite{Gr-Sz:58}), if $\varphi $ and $a$ are
sufficiently regular, then we have the two-term asymptotic formula (cf. (\ref%
{szcl}))%
\begin{equation}
\tr_\Lambda \varphi (A_{\Lambda })=|\Lambda |\int_{\mathbb{T}}\varphi
(a(t))dt+\mathcal{T}+o(1),\;|\Lambda |\rightarrow \infty , \label{sf2}
\end{equation}%
where the subleading term $\mathcal{T}$ is again a $\Lambda $-independent functional of
$\varphi $ and $a$. Note that the traditional setting for the
Szeg\"{o} theorem uses the Toeplitz operators defined by the semi-infinite
matrix $\{A_{j-k}\}_{j,k\in \mathbb{Z}_{+}}$ and acting in $l^{2}(\mathbb{Z}%
_{+})$. The restrictions of Toeplitz operators are the upper left blocks $%
\{A_{j-k}\}_{j,k=0}^{L}$ of $\{A_{j-k}\}_{j,k=0}^{%
\infty }$. On the other hand, we will use in this paper the convolution
operators (\ref{Ac}) defined by the double infinite matrix $%
\{A_{j-k}\}_{j,k\in \mathbb{Z}}$, acting in $l^{2}(\mathbb{Z})$ and having
their central $L \times L, \; L=2M+1$ blocks as restrictions. The latter
setting is more natural for the goal of this paper, which deals with
ergodic operators. The same setting is
widely used in multidimensional analogs of the Szeg\"{o} theorem \cite{Bo-Si:90}.
Note now that the convolution operators in $l^{2}(\mathbb{Z}^{d})$ and $%
L^{2}(\mathbb{R}^{d}), \; d \ge 1$ admit a generalization, known as ergodic
(or metrically transitive) operators, see \cite{Pa-Fi:92}. We recall their
definition in the (discrete) case of $l^{2}(\mathbb{Z}).$
Let $(\Omega ,\mathcal{F},P)$ be a probability space and let $T$ be an ergodic
automorphism of the space. A measurable map $A=\{A_{jk}\}_{j,k\in \mathbb{Z}%
} $ from $\Omega $ to bounded operators in $l^{2}(\mathbb{Z})$ is called
an ergodic operator if we have with probability 1 for every $t\in \mathbb{Z}$
\begin{equation}
A_{j+t,k+t}(\omega )=A_{jk}(T^{t}\omega ),\;\forall j,k\in \mathbb{Z}.
\label{eom}
\end{equation}%
Choosing $\Omega =\{0\}$, we obtain from (\ref{eom}) that $A$ is a
convolution operator (\ref{Ac}). Thus, ergodic operators comprise a
generalization of convolution operators, while the latter can be viewed as
non-random ergodic operators.
It is easy to see that the discrete Schrodinger operator with ergodic
potential (\ref{h}) -- (\ref{qq}) is an ergodic operator. Moreover, if $%
\sigma (H)$ is the spectrum of $H$, then $\sigma (H)$ is non-random, for any
bounded and measurable $f:\sigma (H)\rightarrow \mathbb{R}$ the operator $%
f(H)$ is also ergodic and if $\{f_{jk}\}_{j,k\in \mathbb{Z}}$ is its matrix,
then $\{f_{jj}\}_{j\in \mathbb{Z}}$ is an ergodic sequence \cite{Pa-Fi:92}.
Besides, there exists a non-negative and non-random measure $N_{H}$ on $%
\sigma (H),\;N(\mathbb{R})=1$ such that
\begin{equation}
\mathbf{E}\{f_{jj}(H)\}=\mathbf{E}\{f_{00}(H)\}=\int_{\sigma (H)}f(\lambda
)N_{H}(d\lambda ). \label{IDS}
\end{equation}%
The measure $N_{H}$ is an important spectral characteristic of selfadjoint
ergodic operators known as the Integrated Density of States \cite{Pa-Fi:92}.
In particular, we have for any bounded $f:\sigma (H)\rightarrow \mathbb{R}$
with probability 1%
\begin{equation}
\lim_{|\Lambda |\rightarrow \infty }|\Lambda |^{-1}\tr_\Lambda
f(H_{\Lambda })=\int_{\sigma (H)}f(\lambda )N_{H}(d\lambda ). \label{lim}
\end{equation}%
This plays the role of the Law of Large Numbers for $\tr_\Lambda f(H_{\Lambda })$.
Accordingly, it is shown in \cite{Ki-Pa:15} (see also formula (\ref{SL})
below) that the leading term in an analog of (\ref{sf2}) for an ergodic
Schrodinger operator is always
\begin{equation}
|\Lambda |\int_{\sigma (H)}\varphi (a(\lambda ))N_{H}(d\lambda ).
\label{ltl}
\end{equation}%
On the other hand, the order of magnitude and the form of the subleading
term depend on the ``amount of randomness'' of an ergodic potential and on the
smoothness of $\varphi $ and, especially, $a$, see e.g. \cite%
{Bo-Si:90,De-Co:13,El-Co:17,Ki-Pa:15,Pa-Sl:18,So:13} for recent problems and
results.
In this paper we consider the discrete Schrodinger operator with random
i.i.d. potential, known also as the Anderson model. Thus, our quantity of
interest (\ref{trfl}) as well as the terms of its asymptotic form are random
variables in general (except the leading term (\ref{ltl}), which is not
random). Correspondingly, we will prove below two types of asymptotic trace
formulas, both having the subleading terms of the order $|\Lambda |^{1/2}$
(cf. (\ref{sf2})). The formulas of the first type are valid in the sense of
distributions, i.e., are analogs of the classical Central Limit Theorem (see
Theorems \ref{t:clt} and \ref{t:renyi}), while the formulas of the second
type are valid with probability 1, i.e., are analogs of the so called almost
sure Central Limit Theorem (see Theorem \ref{t:asclt}).
\begin{theorem}
\label{t:clt} Let $H$ be the ergodic Schrodinger operator (\ref{h}) -- (\ref%
{qq}) with a bounded i.i.d. potential and let $\sigma (H)$ be its spectrum.
Consider bounded functions $a:\sigma (H)\rightarrow \mathbb{R}$ and $%
\varphi:a(\sigma (H))\rightarrow \mathbb{R}$ and assume that $a$, $\varphi $
and $\gamma :=\varphi \circ a:\sigma (H)\rightarrow \mathbb{R}$ admit
extensions $\widetilde{a}$, $\widetilde{\varphi }$ and $\widetilde{\gamma }$
on the whole axis such that their Fourier transforms $\widehat{a}$, $%
\widehat{\varphi }$ and $\widehat{\gamma }$ satisfy the conditions
\begin{equation} \label{afcon}
\int_{-\infty }^{\infty }(1+|t|^{\theta})|\widehat{f }(t)|dt<\infty,\;\;%
\theta >1, \;\; f=a,\varphi, \gamma.
\end{equation}
Denote%
\begin{equation} \label{SL}
\Sigma _{\Lambda }=|\Lambda |^{-1/2}\Big(\tr_{\Lambda }\varphi (a_{\Lambda
}(H))-|\Lambda |\int_{\sigma (H)}\gamma (\lambda )N_H(d\lambda )\Big)
\end{equation}%
and
\begin{equation}
\sigma _{\Lambda }^{2}=\mathbf{E}\{\Sigma _{\Lambda }^{2}\}. \label{sil}
\end{equation}%
Then:
(i) there exists the limit
\begin{equation}
\lim_{|\Lambda |\rightarrow \infty }\sigma _{\Lambda }^{2}=\sigma ^{2}
\label{sili}
\end{equation}%
where
\begin{equation}
\sigma ^{2}=\sum_{l\in \mathbb{Z}}C_{l} \label{si1}
\end{equation}%
with
\begin{equation}
C_{j}=\mathbf{E}\{\overset{\circ }{\gamma }_{00}(H)\overset{\circ }{\gamma }%
_{jj}(H)\},\;\overset{\circ }{\gamma }_{jj}(H)=\gamma _{jj}(H)-\mathbf{E}%
\{\gamma _{jj}(H)\}, \label{gac}
\end{equation}%
and also%
\begin{eqnarray}
\sigma ^{2} &=&\mathbf{E}\left\{ \left( \mathbf{E}\left\{ A_{0}|\mathcal{F}%
_{0}^{\infty }\right\} -\mathbf{E}\left\{ A_{0}|\mathcal{F}_{1}^{\infty
}\right\} \right) ^{2}\right\} \label{si2} \\
&=&\mathbf{E}\left\{ \mathbf{Var}\left\{ \mathbf{E}\{A_{0}|\mathcal{F}%
_{0}^{\infty }\}|\mathcal{F}_{1}^{\infty }\right\} \right\} \notag
\end{eqnarray}%
where%
\begin{equation}
A_{0}=V_{0}\int_{0}^{1}\gamma _{00}^{\prime }(H|_{V_{0}\rightarrow uV_{0}})du
\label{aa0}
\end{equation}%
and $\mathcal{F}_{a}^{b},\;-\infty \leq a\leq b\leq \infty $ is the $\sigma $%
-algebra generated by $\{V_{j}\}_{j=a}^{b}$;
(ii) if $\gamma $ is a non-constant monotone function on the spectrum of $H$,
then
\begin{equation}
\sigma ^{2}>0 \label{sipos}
\end{equation}%
and we have
\begin{equation}
\mathbf{P\{}\sigma ^{-1}\Sigma _{\lbrack -M,M]}\in \Delta \}=\Phi (\Delta
)+o(1),\;M\rightarrow \infty , \label{dlim}
\end{equation}%
where $\Delta \subset \mathbb{R}$ is an interval and $\Phi $ is the standard
Gaussian law (of zero mean and unit variance).
\end{theorem}
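Theorem \ref{t:clt} is easy to probe numerically. The following sketch samples $\tr_{\Lambda }\varphi (a_{\Lambda }(H))$ in the case $a(\lambda )=\lambda $, $\varphi (\lambda )=(\lambda -x_{0})^{-1}$ of \cite{Ki-Pa:15}, with a uniform potential on $[-1,1]$ and $x_{0}=5\notin K=[-3,3]$; the centering uses the sample mean in place of $|\Lambda |\int \gamma \,dN_{H}$, and a histogram of the resulting values of $\Sigma _{\Lambda }$ should approach the Gaussian shape:
\begin{verbatim}
import numpy as np

def trace_phi(M, x0=5.0, rng=np.random):
    n = 2 * M + 1
    V = rng.uniform(-1.0, 1.0, size=n)        # bounded i.i.d. potential
    H = np.diag(V) - np.eye(n, k=1) - np.eye(n, k=-1)
    return np.sum(1.0 / (np.linalg.eigvalsh(H) - x0))

M, samples = 200, 500
vals = np.array([trace_phi(M) for _ in range(samples)])
Sigma = (vals - vals.mean()) / np.sqrt(2 * M + 1)  # cf. Sigma_Lambda above
\end{verbatim}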
\begin{remark}
The theorem is an extension of Theorem 2.1 of \cite{Ki-Pa:15}, where the
cases $a(\lambda )=\lambda $ and $\varphi (\lambda )=(\lambda -x_{0})^{-1}$
or $\varphi (\lambda )=\log (\lambda -x_{0}),\;x_{0}\notin \sigma (H)$ were
considered. In these cases $a,\varphi $ and $\gamma =\varphi \circ a$ are
real analytic on $\sigma (H)$ (see (\ref{spk})), hence admit real analytic
and fast decaying at infinity extensions to the whole line. Besides, $\gamma
$ is monotone on $\sigma (H)$, hence Theorem \ref{t:clt} applies.
It is worth also mentioning that conditions (\ref{afcon}) are not optimal in
general. Consider, for instance, the case where $\varphi (\lambda )= \chi
_{(-\infty,E]}(\lambda ),\;E\in \sigma (H)$, \; $a(\lambda )=\lambda $ with $%
\chi _{(-\infty,E]}$ being the indicator of $(-\infty,E] \subset \mathbb{R}$%
. Here $\gamma :=\varphi \circ a=$ $\chi _{(-\infty,E]}$ and%
\begin{equation}
\tr_{\Lambda }\varphi (a_{\Lambda }(H))=\tr_{\Lambda }\chi
_{(-\infty,E]}(H):=\mathcal{N}_{\Lambda }(E) \label{CM}
\end{equation}%
is the number of eigenvalues of $H_{\Lambda }$ not exceeding $E$. It is
known that if the potential in $H$ is ergodic, then with probability 1%
\begin{equation*}
\lim_{|\Lambda |\rightarrow \infty }|\Lambda |^{-1}\mathcal{N}_{\Lambda }(E)=N(E),
\end{equation*}%
where $N(E)$ is defined in (\ref{IDS}). This plays the role of the Law of
Large Numbers for $\mathcal{N}_{\Lambda }(E)$ \cite{Pa-Fi:92}. The Central Limit Theorem for $\mathcal{N}_{\Lambda }(E)$ is also known
\cite{Re:81}. Its proof is based on a careful analysis of a Markov chain
arising in the frameworks of the so called phase formalism, an efficient
tool of spectral analysis of the one dimensional Schrodinger operator \cite%
{Pa-Fi:92}. It can be shown that the theorem can also be proved following
the scheme of proof of Theorem \ref{t:clt}, despite the fact that $\gamma $ is
discontinuous in this case. However, one has to use more sophisticated facts
on the Schrodinger operator with i.i.d. random potential, in particular the
bound
\begin{equation}\label{frmo}
\sup_{\varepsilon >0}\mathbf{E}\{|(H-E-i\varepsilon )_{jk}^{-1}|^{s}\}\leq
Ce^{-c|j-k|},
\end{equation}%
valid for some $s\in (0,1),\;C<\infty $ and $c>0$ \cite{Ai-Wa:15} if the
probability law of potential possesses certain regularity, e.g. a bounded
density. The bound is one of the basic results of the spectral theory of the random Schrodinger operator,
implying the pure point character of the spectrum of $H$ and a number of its
other important properties. It is worth noting that the monotonicity of $%
\gamma $ on the spectrum remains true in this case. Thus, the monotonicity
of $\gamma$ seems a pertinent sufficient condition for the positivity of the
limiting variance.
\end{remark}
Here, however, is a version of the theorem, applicable to the case where $%
\gamma $ is a certain convex function on $\sigma (H)$.
\begin{theorem}
\label{t:renyi} Consider the functions $r_{\alpha }:[0,1]\rightarrow \lbrack
0,1]$ and $n_{F}:\mathbb{R}\rightarrow \lbrack 0,1]$ given by%
\begin{equation}
r_{\alpha }(\lambda )=(1-\alpha )^{-1}\log _{2}(\lambda ^{\alpha
}+(1-\lambda )^{\alpha }), \; \lambda \in [0,1], \;\alpha >0, \label{ren}
\end{equation}%
and%
\begin{equation}
n_{F}(\lambda )=(e^{\beta (\lambda -E_{F})}+1)^{-1}, \; \lambda \in \mathbb{R%
},\;\beta >0,\;E_{F}\in \sigma (H) \label{fer}
\end{equation}%
Assume that the random i.i.d. potential in (\ref{h}) -- (\ref{qq}) has
zero mean $\mathbf{E}\{V_{0}\}=0$ and that the support of its probability
law contains zero. Then the conclusions of Theorem \ref{t:clt} remain valid
for $\varphi =r_{\alpha }$ and $a=n_{F}$, i.e., the random variable $\Sigma
_{\Lambda }$ of (\ref{SL}) converges in distribution to the Gaussian random
variable of zero mean and a certain variance $\sigma ^{2}>0$.
\end{theorem}
\begin{remark}
\label{r:renyi} The quantity $\tr_{\Lambda }r_{\alpha }((n_{F}(H))_{\Lambda
})$ is known in quantum statistical mechanics and quantum information theory
as the R\'{e}nyi entanglement entropy of free fermions in the thermal state
of temperature $\beta ^{-1}>0$ and Fermi energy $E_{F}$, having $H$ as
the one body Hamiltonian, see, e.g., \cite{Ab-St:15,Ar-Co:14,Ei-Co:11}. An
important particular case where $\alpha =1$, hence $r_{1}(\lambda )=-\lambda
\log _{2}\lambda - (1-\lambda )\log _{2}(1-\lambda ), \;\lambda \in [0,1]$,
is known as the von Neumann entanglement entropy. One is interested in the
large-$|\Lambda |$ asymptotic form of the entanglement entropy. In the
translation invariant case, i.e., for the case of constant potential in (\ref%
{h}) -- (\ref{qq}) one can use the Szeg\"o theorem (see (\ref{sf2}) and (\ref%
{szst1})) to find a two-term asymptotic formula for the entanglement
entropy. In this case the term proportional to $|\Lambda |$ in (\ref{sf2})
and (\ref{szst1}), i.e., to the one dimensional analog of the volume of the
spatial domain occupied by the system, is known as the volume law, while the
second term in (\ref{sf2}), which is independent of $|\Lambda |$, i.e.,
proportional to the one dimensional analog $\{-M,M\}$ of the surface area of
the domain, is known as the area law \cite{Ei-Co:11}. In view of the above
theorem we conclude that in the disorder case (random potential in $H$) the
leading term of the entanglement entropy is non-random and is again the
volume law while the subleading term is random, proportional to $|\Lambda
|^{1/2}$ and describes random fluctuations of the volume law. The term of order $O(1)$ in $%
|\Lambda| $ can also be found for some $\varphi $ and $a$ \cite%
{Ki-Pa:15}. It is random and is now the ``subsubleading'' term of the
asymptotic formula. Of particular interest is the zero-temperature case $%
\beta=\infty$, where $n_F=\chi_{(-\infty,E_F)}$ and this term is leading. We
refer the reader to recent works \cite%
{Ab-St:15,El-Co:17,Le-Co:13,Pa-Sl:14,Pa-Sl:18,Pi-So:18,So:13} for related
results and references.
\end{remark}
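For completeness, here is a numerical sketch of $\tr_{\Lambda }r_{\alpha }((n_{F}(H))_{\Lambda })$ itself, with the infinite-volume operator $n_{F}(H)$ approximated on a box much larger than $\Lambda $ (note that one has to restrict $n_{F}(H)$, not apply $n_{F}$ to $H_{\Lambda }$; the case $\alpha =1$ needs the separate von Neumann formula):
\begin{verbatim}
import numpy as np

def renyi_entropy(n_box, M, alpha, beta, EF, rng=np.random):
    # Tr_Lambda r_alpha((n_F(H))_Lambda), alpha != 1; H lives on a box of
    # n_box sites approximating l^2(Z), Lambda = the central 2M+1 sites
    V = rng.uniform(-1.0, 1.0, size=n_box)
    H = np.diag(V) - np.eye(n_box, k=1) - np.eye(n_box, k=-1)
    w, U = np.linalg.eigh(H)
    nF = U @ np.diag(1.0 / (np.exp(beta * (w - EF)) + 1.0)) @ U.T
    c = n_box // 2
    lam = np.linalg.eigvalsh(nF[c - M:c + M + 1, c - M:c + M + 1])
    lam = np.clip(lam, 1e-12, 1.0 - 1e-12)    # guard the logarithms
    return np.sum(np.log2(lam**alpha + (1 - lam)**alpha)) / (1 - alpha)
\end{verbatim}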
The above results can be viewed as stochastic analogs of the Szeg\"{o}
theorem (see more on the analogy in \cite{Ki-Pa:15} and below). It is
essentially a Central Limit Theorem in its traditional form, i.e., an
assertion on the convergence in distribution of appropriately normalized
sums of random variables to the Gaussian random variable. In recent decades
there has been considerable interest in the almost sure versions of
classical (distributional) limit theorems. The prototype of such theorems
dates back to P.~L\'evy and P.~Erd\H{o}s and is as follows, see e.g. \cite{Be:98,De:13} for reviews.
Let $\{X_{l}\}_{l=1}^{\infty }$ be a sequence of i.i.d. random variables of
zero mean and unit variance. Denote $S_{m}=\sum_{l=1}^{m}X_{l}$, $%
Z_{m}=m^{-1/2}S_{m}$. Then we have with probability 1
\begin{equation}
\frac{1}{\log M}\sum_{m=1}^{M}\frac{1}{m}\mathbf{1}_{\Delta }(Z_{m})=\Phi
(\Delta )+o(1), \; M \to \infty, \label{asclt}
\end{equation}%
In other words, the random (``empirical'') distribution of $Z_{m}$ converges
with probability 1 to the (non-random) Gaussian distribution.
On the other hand, the classical Central Limit Theorem implies%
\begin{equation}
\frac{1}{\log M}\sum_{m=1}^{M}\frac{1}{m}\mathbf{E}\{\mathbf{1}_{\Delta
}(Z_{m})\}=\Phi (\Delta )+o(1), \; M \to \infty, \label{clt}
\end{equation}%
i.e., just the convergence of expectations of the random distributions on
the l.h.s. of (\ref{asclt}). Thus, replacing the expectation by the
logarithmic average, a sequence of random variables satisfying the CLT can
be observed along all its typical realizations.
The situation with the almost sure CLT (\ref{asclt}) for independent random
variables is rather well understood, see e.g. \cite{Be:98,De:13} and
references therein, while the case of dependent random variables is more
involved and diverse, see e.g. \cite{Ch-Go:07,Ib-Li:00,La-Ph:90,Pe-Sh:95}.
As in the case of the classical CLT (\ref{clt}), the existing results concern
mostly weakly dependent stationary sequences, e.g. strongly mixing
sequences. This and the approximation techniques developed in \cite{Ib-Li:71},
Section 18.3, allow us to prove an almost sure version of Theorem \ref{t:clt}.
\begin{theorem}
\label{t:asclt} We have with probability 1 under the conditions of Theorem %
\ref{t:clt}%
\begin{equation} \label{gasclt}
\frac{1}{\log M}\sum_{m=1}^{M}\frac{1}{m}\mathbf{1}_{\Delta }(\sigma
^{-1}\Sigma _{\lbrack -m,m]})=\Phi (\Delta )+o(1), \; M \to \infty,
\end{equation}%
where $\Sigma _{\lbrack -m,m]}$ is given by (\ref{SL}) with $\Lambda
=[-m,m] $, $\Delta \subset \mathbb{R}$ is an interval and $\Phi $ is the
standard Gaussian law.
\end{theorem}
\begin{remark}
Given a sequence $\{\xi _{m}\}_{m\geq 1}$ of random variables and a random
variable $\xi $, write%
\begin{equation}
\xi _{M}\overset{\mathcal{D}}{=}M^{1/2}\xi +o(M^{1/2}),\;M\rightarrow \infty
\label{cd1}
\end{equation}%
if we have%
\begin{equation}
\mathbf{P}\{\xi _{M}/M^{1/2}\in \Delta \}=G(\Delta ) + o(1),\;M\rightarrow
\infty, \label{cd2}
\end{equation}%
where $G$ is the probability law of $\xi $,
and write
\begin{equation}
\xi_{M}\overset{\mathcal{L}}{= }M^{1/2}\xi+o(M^{1/2}),\;M\rightarrow \infty
\label{cl1}
\end{equation}%
if we have with probability 1 (assuming that all $\xi_m, \;m\geq 1,$ are
defined on the same probability space)%
\begin{equation}
\frac{1}{\log M}\sum_{m=1}^{M}\mathbf{1}_{\Delta}(\xi_{m}/m^{1/2})=G(\Delta) +
o(1),\;M\rightarrow \infty. \label{cl2}
\end{equation}%
Then, we can formulate Theorems \ref{t:clt} and \ref{t:asclt} in a form
similar to that of the Szeg\"{o} theorem (cf. (\ref{sf2})), namely as
\begin{align}
\tr_{\Lambda }\varphi (a_{\Lambda }(H))& \overset{\mathcal{D}}{=}|\Lambda
|\int_{\sigma (H)}\gamma (\lambda )N_{H}(d\lambda ) \label{szst1} \\
& +|\Lambda |^{1/2}\sigma \xi +o(|\Lambda |^{1/2}),\;|\Lambda
|=(2M+1)\rightarrow \infty \notag
\end{align}%
for Theorem \ref{t:clt}, where $\xi $ is the standard Gaussian random variable, and with probability 1 as%
\begin{align}
\tr_{\Lambda }\varphi (a_{\Lambda }(H))& \overset{\mathcal{L}}{=}|\Lambda
|\int_{\sigma (H)}\gamma (\lambda )N_{H}(d\lambda ) \label{szst2} \\
& +|\Lambda |^{1/2} \Phi(\sigma ^{-1}\Delta )+o(|\Lambda |^{1/2}),\;|\Lambda
|=(2M+1)\rightarrow \infty \notag
\end{align}%
for Theorem \ref{t:asclt}, i.e., as two-term ``Szeg\"{o}-like'' asymptotic
formulas valid in the sense of the $\mathcal{D}$- and the $\mathcal{L}$%
-convergence, the latter valid with probability 1. An apparent difference
between the Szeg\"{o} formula (\ref{sf2}) and its stochastic counterparts (%
\ref{szst1}) and (\ref{szst2}) is that the subleading term of the Szeg\"o
theorem is independent of $|\Lambda | $ while the subleading term of its
stochastic counterparts grows as $|\Lambda |^{1/2}$ although with stochastic
oscillations (see below).
\end{remark}
We will comment now on the error bounds in the above asymptotic formulas.
We will mostly use known results on the rates of convergence for both the
CLT (\ref{cd2}) and (\ref{cl2}), with $\xi_m$ being the sum of i.i.d. random
variables (see (\ref{clt}) and (\ref{asclt})), despite the fact that in our (spectral)
context the terms of the sum in (\ref{trfl}) are always dependent, even if
the potential itself is a collection of i.i.d. random variables. It seems
plausible that the error bounds for the i.i.d. case provide the best possible,
and not too overestimated, versions of the error bounds for the case of
sufficiently weakly dependent terms. Known results on the sums of weakly
dependent random variables support this approach, see e.g. \cite%
{Be:98,De:13,Ch-Go:07,Ib-Li:00,La-Ph:90,Pe-Sh:95}.
Recall first that for the classical Szeg\"o (non-random) case (\ref{sf2}),
i.e., for the Toeplitz and convolution operators, the subleading term is $%
\Lambda $-independent and the error is just $o(1)$ in general. However, if $%
\varphi $ and $a$ are infinitely differentiable, one can construct the whole
asymptotic series in the powers of $|\Lambda |^{-1}$ \cite{Wi:85}.
On the other hand, it follows from the standard CLT for bounded i.i.d.
random variables (see (\ref{cd2})) and the Berry-Esseen bound that we have in
(\ref{cd2}) the error term $O(M^{-1/2})$ instead of $o(1)$,
and, hopefully, $O(|\Lambda |^{-1/2})$ in the $\mathcal{D}$-convergence
stochastic analog (\ref{dlim}) of the Szeg\"o theorem.
As for the "point-wise" case treated in Theorem \ref{t:asclt}, we note first
that this is a "frequency"-type result, analogous to the Law of Large
Numbers or, more generally, to the ergodic theorem. This is clear from the
following observation on the well known Gaussian random processes \cite%
{Be:98}. Namely, let $W:[0,\infty )\rightarrow \mathbb{R}$ be the Wiener
process and $U:\mathbb{R}\rightarrow \mathbb{R}$ be the Ornstein-Uhlenbeck
process. They are related as $U(s)=e^{-s/2}W(e^{s}),\;s\in \mathbb{R}$, thus
\begin{equation*}
\frac{1}{\log M}\int_{1}^{M}\mathbf{1}_{\Delta }(W(t)/t^{1/2})dt=\frac{1}{%
\log M}\int_{0}^{\log M}\mathbf{1}_{\Delta }(U(s))ds.
\end{equation*}%
Since $U$ is ergodic and its one-point (invariant) distribution is the
standard Gaussian, the r.h.s. converges with probability 1 to $\Phi (\Delta
) $ as $M\rightarrow \infty $ according to the ergodic theorem. We have obtained
the almost sure Central Limit Theorem for the Wiener process, the continuous
time analog of the sequence of i.i.d. Gaussian random variables, see (\ref%
{asclt}).
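This observation is also immediate to check numerically (a sketch with one discretized Wiener path and $\Delta =(-1,1)$, for which $\Phi (\Delta )\approx 0.6827$):
\begin{verbatim}
import numpy as np

T = 10**6
W = np.cumsum(np.random.standard_normal(T))   # W(1), ..., W(T)
t = np.arange(1, T + 1)
inside = np.abs(W / np.sqrt(t)) < 1.0         # indicator of Delta = (-1, 1)
log_avg = np.sum(inside / t) / np.log(T)      # logarithmic average
\end{verbatim}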
In view of this observation (explaining, in particular, the appearance of
the logarithmic average in the almost sure Central Limit Theorem) and the
Law of Iterated Logarithm we have to have with probability 1 in (\ref{cl2})
the oscillating error term $O((\log\log\log M/\log M)^{1/2})$ instead of $%
o(1)$, hence the error term $O((\log\log\log |\Lambda|/\log |\Lambda|)^{1/2})$
in the $\mathcal{L}$-convergence stochastic analog (\ref{gasclt}) of the
Szeg\"o theorem. More precisely, it follows from the invariance principle
that with probability 1 we have to have the additional terms $\widetilde{%
\sigma }W(\log M)+O(\log M^{1/2-\varepsilon }), \; M\rightarrow \infty$ in (%
\ref{cl2}) and, correspondingly, the terms
\begin{equation*}
\widetilde{\sigma }W(\log |\Lambda |)+O(|\log |\Lambda \}|^{1/2-\varepsilon
}),\;\log |\Lambda |\rightarrow \infty,
\end{equation*}
with $\widetilde{\sigma }>0$ and some $\varepsilon >0$ in (\ref{gasclt}).
\medskip
\medskip
We prove in this paper asymptotic formulas
for traces of certain random operators related to the restrictions
to the expanding intervals $\Lambda =[-M,M]\subset \mathbb{Z}%
,\;M\rightarrow \infty $ of the one dimensional discrete Schrodinger
operator $H$ assuming that its potential is a collection of random i.i.d.
variables. We do not use, however, a remarkable property of $H$, the pure
point character of its spectrum. This spectral type holds for any
bounded i.i.d. potential \cite{Ai-Wa:15} and can be contrasted
with the absolute continuous type of the spectrum of $H$ with constant or
periodic potential. Moreover, if the common probability law of the on-site
potential is Lipschitzian, we have the bound (\ref{frmo}).
It can be shown that the use of the bound
makes the
conditions of our results somewhat weaker (it suffices to have $\theta =1$
in (\ref{afcon})), certain bounds somewhat stronger ($O(1)$ instead of $%
o(|\Lambda |^{1/2})$ in (\ref{dce}), $Ce^{-cp}$ instead of $C/p^{\theta }$ in (%
\ref{fadec}), etc.) and proofs simpler (Lemmas \ref{l:ctu} and \ref{l:fdec}%
 are not necessary). On the other hand, the bound (\ref{frmo}) holds only
under the condition of some regularity of the common probability law of
the i.i.d. potential (e.g., the Lipschitz continuity of its probability law). This is why
we prefer to use rather standard spectral tools, somewhat less optimal
conditions (\ref{afcon}) on $a$ and $\varphi $ and somewhat more involved proofs, but to have the results of
Theorems \ref{t:clt} and
\ref{t:asclt} valid for a larger class of random i.i.d. potentials.
It is worth noting, however, that the bound (\ref{frmo}) is an important necessary tool in the analysis of the large-$\Lambda$ behavior of $\mathrm{Tr}_{\Lambda} \varphi(a_\Lambda (H))$ with not too smooth $a$ and $\varphi$, e.g. $a=n_F|_{\beta=\infty}=\chi_{(-\infty,E_F)}$ with $n_F$ of (\ref{fer}) and $\varphi=r_\alpha, \; \alpha \leq 1$ with $r_\alpha$ of (\ref{ren}) corresponding to the entanglement entropy of the ground state of free
disordered fermions at zero temperature, see \cite{El-Co:17,Pa-Sl:18} and references therein.
\section{Proof of Results}
\textbf{Proof of Theorem \ref{t:clt}}. It follows from (\ref{afcon}) and
Lemma \ref{l:fdif} that we have uniformly in potential
\begin{equation}
\tr_{\Lambda }\varphi (a_{\Lambda }(H))=\tr_{\Lambda }\varphi
(a(H))+o(|\Lambda |^{1/2}),\;|\Lambda |\rightarrow \infty . \label{dce}
\end{equation}%
Hence, we obtain in view of (\ref{IDS}) and the definition (\ref{trl}) of $%
\tr_{\Lambda }$
\begin{align}
\tr_{\Lambda }\varphi (a_{\Lambda }(H))=|\Lambda |\int_{-\infty }^{\infty
}\gamma (\lambda )N_{H}(d\lambda )+\overset{\circ }{\gamma }_{\Lambda
}+o(|\Lambda |^{1/2}), \notag
\end{align}%
where
\begin{align}
&\gamma _{\Lambda }=\sum_{j\in \Lambda }\gamma _{jj}(H),\;\;\overset{\circ }{\gamma }_{\Lambda } :=\gamma _{\Lambda }-\mathbf{E}%
\{\gamma _{\Lambda }\}=\sum_{j\in \Lambda }(\gamma
_{jj}(H)-\mathbf{E}\{\gamma _{jj}(H)\}), \label{GL} \\
&\mathbf{E}\{\gamma _{\Lambda }\} =|\Lambda |\int_{-\infty }^{\infty
}\gamma (\lambda )N_{H}(d\lambda ). \notag
\end{align}%
The above formulas reduce the proof of the theorem to that of the Central
Limit Theorem for $|\Lambda |^{-1/2}\overset{\circ }{\gamma }_{\Lambda }$, i.e., for the
sequence $\{\gamma _{jj}(H)\}_{j\in \mathbb{Z}}$. The sequence is ergodic
according to (\ref{eom}) for $j=k$. %
We use in this case a general Central Limit Theorem for stationary
weakly dependent sequences given by Proposition \ref{p:ibli} with $%
X_{j}=V_{j},\;j\in \mathbb{Z}$ and $Y_{0}=\gamma _{00}(H)$. To verify the
approximation condition (\ref{dc}) of the proposition it is convenient to
write $V=(V_{<},V_{>})$, where $V_{<}=\{V_{j}\}_{|j|\leq p}\;$and $%
V_{>}=\{V_{j}\}_{|j|>p}$ are independent collections of independent random
variables whose probability laws we denote $P_{<}$ and $P_{>}$ so that the
probability law $P$ of $V$ is symbolically $P=P_{<}\cdot P_{>}$. Denoting $%
\gamma _{00}(H)=g(V_{<},V_{>})$, we have
\begin{align*}
& \mathbf{E}\{|\gamma _{00}(H)-\mathbf{E}\{\gamma _{00}(H)|\mathcal{F}%
_{-p}^{p}\}|\} \\
& =\int \Big|g(V_{<},V_{>})-\int g(V_{<},V_{>}^{\prime })P(dV_{>}^{\prime })%
\Big|P(dV_{>})P(dV_{<}) \\
& \leq \int \left( \int \Big|g(V_{<},V_{>})-g(V_{<},V_{>}^{\prime })\Big|%
P(dV_{>}^{\prime })\right) P(dV_{>})P(dV_{<}).
\end{align*}%
Applying Lemma \ref{l:asin} with $f=\gamma $ to the difference in the third
line of the above formula, we find that the expression in
the first line is bounded by $C/p^{\theta }$. Thus, the series (\ref%
{dc}) is convergent in our case.
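Explicitly, since $\theta >1$ in (\ref{afcon}), the above bound yields
\begin{equation*}
\sum_{p=1}^{\infty }\mathbf{E}\{|\gamma _{00}(H)-\mathbf{E}\{\gamma _{00}(H)|\mathcal{F}_{-p}^{p}\}|\}\leq C\sum_{p=1}^{\infty }p^{-\theta }<\infty .
\end{equation*}%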
This and Proposition \ref{p:ibli} imply the validity of (\ref{sili}) -- (%
\ref{gac}). The formula for the limiting variance (\ref{si2}) -- (\ref{aa0}) is proved in Lemma \ref{l:clt1}.
Let us prove now the positivity (\ref{sipos}) of the limiting variance $\sigma ^{2}$.
According to (\ref{si2}) -- (\ref{aa0}), the hypothesis $\sigma
^{2}=0$ implies that for almost every event from $\mathcal{F}_{1}^{\infty
}$ the expression%
\begin{equation}
V_{0}\int_{0}^{1}\mathbf{E}\{\gamma _{00}^{\prime }(H|_{V_{0}\rightarrow
uV_{0}})|\mathcal{F}_{1}^{\infty }\}du \label{hyp}
\end{equation}%
is independent of $V_{0}\in \mathrm{supp}\,F$. Assume without loss of
generality that zero is in the support of $F$. Then the above expression is
zero. On the other hand, if our i.i.d. random potential is non-trivial, then
there exists a non-zero point $V_{0}\neq 0$ in the support. If,
in addition, $\gamma ^{\prime }$ does not change sign on the spectrum of $%
H$ and is not zero, then (\ref{hyp}) cannot be zero, and we have a contradiction.
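Indeed, by the spectral theorem
\begin{equation*}
(\gamma ^{\prime }(H|_{V_{0}\rightarrow uV_{0}}))_{00}=\int_{-\infty }^{\infty }\gamma ^{\prime }(\lambda )\mu _{u}(d\lambda ),\qquad \mu _{u}(d\lambda )=(\mathcal{E}_{H|_{V_{0}\rightarrow uV_{0}}}(d\lambda ))_{00},
\end{equation*}%
where $\mu _{u}$ is a probability measure; hence, if $\gamma ^{\prime }$ keeps a definite sign on the relevant spectral interval and is not zero, the same is true for the conditional expectation in (\ref{hyp}).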
Now it suffices to use a general argument (see e.g. Theorem 18.6.1 of \cite%
{Ib-Li:71} or Proposition 3.2.9 of \cite{Pa-Sh:11}) to finish the proof of
Theorem \ref{t:clt}.$\blacksquare $
\bigskip \textbf{Proof of Theorem \ref{t:renyi}}. We will first use Theorem %
\ref{t:clt}. Indeed, according to (\ref{spk}) and (\ref{fer}), $a=n_{F}$ is
real analytic on the finite interval $K$ of (\ref{spk}) and admits a real analytic
and fast decaying at infinity extension to the whole axis. Besides, $%
a(K)=[a_{-},a_{+}],\;0<a_{-}<a_{+}<1$ is also finite, hence, $\varphi =r_{\alpha }$ of (\ref%
{ren}) is real analytic on $a(K)$ and admits a real analytic and fast
decaying at infinity extension to the whole axis. Thus, assertion (i) of
Theorem \ref{t:clt} is valid in this case.
We cannot, however, use assertion (ii) of Theorem \ref{t:clt}, since $\gamma
=r_{\alpha }\circ n_{F} $ is not monotone but convex on $K$. Here is another
argument proving the positivity (\ref{sipos}) of the limiting variance (\ref%
{si2}) -- (\ref{aa0}).
Assuming that the variance is zero and using the fact that zero is in the
support of the probability law $F$ of the potential, we obtain from (\ref%
{si2}) -- (\ref{aa0}), as in the proof of Theorem \ref{t:clt}, that for
almost every event from $\mathcal{F}_{1}^{\infty }$ we have
\begin{equation*}
V_{0}\int_{0}^{1}\mathbf{E}\left\{\gamma _{00}^{\prime }(H_{0}(u))|\mathcal{F%
}_{1}^{\infty }\right\}du=0,\;V_{0}\in \mathrm{supp}F,
\end{equation*}%
where $H_{0}(u):=H|_{V_{0}\rightarrow uV_{0}}$. Integrating here by parts
with respect to $u$, we get%
\begin{align*}
&V_{0}\mathbf{E}\{\gamma _{00}^{\prime }(H_{0}(0)) |\mathcal{F}_{1}^{\infty
}\}+V_{0}\int_{0}^{1}
\mathbf{E}\Big\{\frac{\partial }{\partial u}\gamma _{00}^{\prime
}(H_{0}(u))|\mathcal{F}_{1}^{\infty }\Big\}(1-u)du=0,\;\;V_{0}\in \mathrm{supp}\,F,
\end{align*}%
and since $\mathbf{E}\{V_{0}\}=0$ and $\gamma _{00}^{\prime }(H_{0}(0))$ is
independent of $V_{0}$, the expectation with respect to $V_{0}$ yields for
almost every event from $\mathcal{F}_{1}^{\infty }$
\begin{equation}
\int_{0}^{1}(1-u)du\int V_{0}\mathbf{E}\Big\{\frac{\partial }{\partial u}\gamma
_{00}^{\prime }(H_{0}(u))|\mathcal{F}_{1}^{\infty }\Big\}F(dV_{0})=0. \label{intp}
\end{equation}%
We will now use the formula%
\begin{equation*}
\frac{\partial }{\partial u}\gamma _{00}^{\prime }(H_{0}(u))=V_{0}\int \int \frac{%
\gamma ^{\prime }(\lambda _{1})-\gamma ^{\prime }(\lambda _{2})}{\lambda
_{1}-\lambda _{2}}\mu _{H_{0}(u)}(d\lambda _{1})\mu _{H_{0}(u)}(d\lambda _{2}),
\end{equation*}%
where $\mu _{H_{0}(u)}(d\lambda )=(\mathcal{E}_{H_{0}(u)}(d\lambda ))_{00}$
and $\mathcal{E}_{H_{0}(u)}$ is the resolution of identity of $H_{0}(u)$.
Thus, $\mu _{H_{0}(u)}\geq 0$ and $\mu _{H_{0}(u)}(\mathbb{R})=1$. The formula can be
obtained by iterating twice the Duhamel formula (\ref{duh}).
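In outline: since $\frac{\partial }{\partial u}H_{0}(u)=V_{0}P_{0}$ with $(P_{0})_{jk}=\delta _{j0}\delta _{k0}$, the Duhamel formula implies
\begin{equation*}
\frac{\partial }{\partial u}\big(e^{itH_{0}(u)}\big)_{00}=iV_{0}\int_{0}^{t}\big(e^{i(t-s)H_{0}(u)}\big)_{00}\big(e^{isH_{0}(u)}\big)_{00}ds,
\end{equation*}%
and plugging this into $\gamma _{00}^{\prime }(H_{0}(u))=\int_{-\infty }^{\infty }\widehat{\gamma ^{\prime }}(t)(e^{itH_{0}(u)})_{00}dt$ and using the spectral theorem for each factor together with the elementary identity $i\int_{0}^{t}e^{i(t-s)\lambda _{1}}e^{is\lambda _{2}}ds=(e^{it\lambda _{2}}-e^{it\lambda _{1}})/(\lambda _{2}-\lambda _{1})$, one arrives at the above divided-difference formula.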
Plugging the r.h.s. of the formula in (\ref{intp}) and recalling that $%
\gamma $ is strictly convex on the spectrum, hence $(\gamma ^{\prime }(\lambda
_{1})-\gamma ^{\prime }(\lambda _{2}))(\lambda _{1}-\lambda _{2})^{-1}>0$ for $\lambda _{1}\neq \lambda _{2}$, we conclude
that the l.h.s. of (\ref{intp}) cannot be zero. This contradiction implies the positivity of
the variance.$\blacksquare $
\bigskip \textbf{Proof of Theorem \ref{t:asclt}}. As in the proof of Theorem %
\ref{t:clt} we will start with passing from $\tr_{\Lambda }\varphi
(a_{\Lambda }(H))$ to $\tr_{\Lambda } \varphi (a(H))=\tr_{\Lambda } \gamma
(H)$ with the error $o(|\Lambda |^{1/2})$ by using (\ref{afcon}) and Lemma %
\ref{l:fdif} (see (\ref{dce})), thereby reducing the proof of the theorem to
the proof of the almost sure CLT for $|\Lambda |^{-1/2}\overset{\circ }{\gamma }_{\Lambda }$
(see (\ref{GL})), i.e., for the same ergodic sequence $\{\gamma
_{jj}(H)\}_{j\in \mathbb{Z}}$ as in Theorem \ref{t:clt}.
Our further proof is essentially based on that in \cite{Pe-Sh:95} of the
almost sure CLT for ergodic strongly mixing sequences (see (\ref{alk})) and
on the procedure of approximation of general ergodic sequences by strongly
mixing sequences (see (\ref{dc})) given in \cite{Ib-Li:71}, Section 18.3.
In particular, according to Proposition \ref{p:pesh1} (see Theorem 1 in \cite%
{Pe-Sh:95}), it suffices to prove the bound
\begin{equation}
\mathbf{Var}\left\{ \frac{1}{\log M}\sum_{m=1}^{M}\frac{1}{m}f\left(
Z_{m}\right) \right\} =O(1/(\log M)^{\varepsilon }),\;M\rightarrow \infty \;
\label{varf}
\end{equation}%
for any bounded Lipschitzian $f$ (see (\ref{lip})),
\begin{equation}
Z_{m}=\mu _{m}^{-1/2}\Sigma _{[ -m,m]},\;\mu _{m}=2m+1 \label{zm}
\end{equation}%
and some $\varepsilon >0$.
To this end we denote
\begin{equation} \label{Y}
\overset{\circ }{\gamma }_{jj}(H):=\gamma _{jj}(H)-\mathbf{E}\{\gamma
_{jj}(H)\}=Y_{j},\;j\in \mathbb{Z}
\end{equation}
and introduce for every positive integer $s$ the ergodic sequences $\{\xi
_{j}^{(s)}\}_{j\in \mathbb{Z}}$ and $\{\eta _{j}^{(s)}\}_{j\in \mathbb{Z}}$
with%
\begin{equation}
\xi _{j}^{(s)}=\mathbf{E}\{Y_{j}|\mathcal{F}_{j-s}^{j+s}\},\;\;\eta
_{j}^{(s)}=Y_{j}-\xi _{j}^{(s)}. \label{xeg}
\end{equation}%
Denote also%
\begin{equation*}
F_{M}=\frac{1}{\log M}\sum_{m=1}^{M}\frac{1}{m}f\left( Z_{m}\right)
\end{equation*}%
and%
\begin{equation}
F_{M}^{(s)}=\frac{1}{\log M}\sum_{m=1}^{M}\frac{1}{m}f\left(
Z_{m}^{(s)}\right) ,\;Z_{m}^{(s)}=Z_{m}|_{Y_{j}\rightarrow \xi _{j}^{(s)}}.
\label{fms}
\end{equation}%
We have then from the elementary inequality $\mathbf{Var}\{X\}\leq 2\mathbf{%
Var}\{Y\}+2\mathbf{Var}\{X-Y\}$ and (\ref{lip}):%
\begin{eqnarray}
\mathbf{Var}\{F_{M}\} &\leq &2\mathbf{Var}\{F_{M}^{(s)}\}+2\mathbf{Var}%
\{F_{M}-F_{M}^{(s)}\} \label{var1} \\
&\leq &2\mathbf{Var}\{F_{M}^{(s)}\}+\frac{2C_1 ^{2}}{\log M}\sum_{m=1}^{M}%
\frac{1}{m}\mathbf{Var}\{R_{m}^{(s)}\}, \notag
\end{eqnarray}%
where $C_1$ is defined in (\ref{lip}) and%
\begin{equation}
R_{m}^{(s)}:=Z_{m}-Z_{m}^{(s)}=\mu_m^{-1/2}\sum_{|j|\leq m}\eta
_{j}^{(s)}. \label{zms}
\end{equation}%
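Here the second inequality in (\ref{var1}) follows from (\ref{lip}) and the Cauchy--Schwarz inequality: up to the value of the constant,
\begin{equation*}
\mathbf{Var}\{F_{M}-F_{M}^{(s)}\}\leq \frac{C_{1}^{2}}{(\log M)^{2}}\Big(\sum_{m=1}^{M}\frac{1}{m}\Big)\sum_{m=1}^{M}\frac{1}{m}\mathbf{E}\{(R_{m}^{(s)})^{2}\},
\end{equation*}%
with $\sum_{m=1}^{M}m^{-1}=O(\log M)$ and $\mathbf{E}\{(R_{m}^{(s)})^{2}\}=\mathbf{Var}\{R_{m}^{(s)}\}$, since $\mathbf{E}\{\eta _{j}^{(s)}\}=0$.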
Recall now that an ergodic sequence is said to be strongly mixing if%
\begin{equation}
\alpha _{k}:=\sup_{A\in \mathcal{F}_{-\infty }^{n},\;B\in \mathcal{F}%
_{k+n}^{\infty }}|P(AB)-P(A)P(B)|\rightarrow 0 \label{alk}
\end{equation}%
as $k\rightarrow \infty $ through positive values and $\alpha _{k}$ is
called the mixing coefficient.
Since the random potential is a sequence of i.i.d. random variables and $\xi
_{j}^{(s)}$ is a function of $\{V_{i}\}_{|i-j|\leq s}$ only, the
sequence $\{\xi _{j}^{(s)}\}_{j\in \mathbb{Z}}$ of (\ref{xeg}) is strongly mixing and its
mixing coefficient satisfies (see (\ref{alk}))
\begin{equation}
\alpha _{k}^{(s)}=\left\{
\begin{array}{cc}
\leq 1, & k\leq 2s \\
0, & k>2s.%
\end{array}%
\right. \label{axi}
\end{equation}%
We are going to bound the first term on the right of (\ref{var1}) by using
Lemma 1 of \cite{Pe-Sh:95} on the almost sure CLT for strongly mixing
sequences and we will deal with the second term on the right of (\ref{var1})
by using the sufficiently good approximation of $\{Y_{j}\}_{j\in \mathbb{Z}}$
of (\ref{Y}) by $\{\xi _{j}^{(s)}\}_{j\in \mathbb{Z}}$ of (\ref{xeg}) as $%
s\rightarrow \infty $ following from Lemma \ref{l:asin}. Note that a similar
argument has already been used in the proof of Theorem \ref{t:clt}, see (\ref%
{dc}) in Proposition \ref{p:ibli} and Theorem 18.6.3 in \cite{Ib-Li:71}.
The required bounds are obtained in Lemmas \ref{l:sims} and \ref{l:vxi} below for $%
M\rightarrow \infty $ and $s\rightarrow \infty $.
They allow us to continue (\ref{var1}) as%
\begin{equation*}
\mathbf{Var}\{F_{M}\}=O(\log s/\log M)+O(1/s^{\theta -1}),
\end{equation*}%
where $\theta >1$ (see (\ref{afcon})). Choosing here $s=(\log
M)^{1-\varepsilon },\;\varepsilon \in (0,1)$, so that $\log s/\log M=O(\log \log M/\log M)$
and $1/s^{\theta -1}=(\log M)^{-(1-\varepsilon )(\theta -1)}$, we obtain (\ref{varf}), hence,
the theorem.$\blacksquare $
\section{Auxiliary Results}
We start with a general Central Limit Theorem for ergodic
sequences of random variables, see \cite{Ib-Li:71}, Theorems 18.6.1 --
18.6.3; more precisely, we use its version involving i.i.d. random variables.
\begin{proposition}
\label{p:ibli} Let $\{X_{j}\}_{j\in \mathbb{Z}}$ be i.i.d. random variables,
$\mathcal{F}_{a}^{b}$ be the $\sigma $-algebra generated by $%
\{X_{j}\}_{j=a}^{b}$, $Y_{0}$ be a function measurable with respect to $%
\mathcal{F=F}_{-\infty }^{\infty }$. Denote by $T$ the standard shift
automorphism ($X_{j+1}(\omega)=X_j(T\omega)$) and set $Y_{j}(\omega
)=Y_{0}(T^{j}\omega )$. Assume that
(i) \ $Y_{0}$ is bounded;
(ii)%
\begin{equation}
\sum_{p=1}^{\infty }\mathbf{E}\{|Y_{0}-\mathbf{E}\{Y_{0}|\mathcal{F}%
_{-p}^{p}\}|\}<\infty . \label{dc}
\end{equation}%
Then
(a) \ $\sigma ^{2}:=\sum_{k\in \mathbb{Z}}\mathbf{Cov}\{Y_{0},Y_{k}\}<\infty ;$
(b) if $\sigma ^{2}>0$, then%
\begin{equation}
(2M+1)^{-1/2}\sum_{|j|\leq M}Y_{j} \label{ns}
\end{equation}%
converges in distribution to the Gaussian random variable of zero mean and
variance $\sigma ^{2}$.
\end{proposition}
The proof of the proposition is based on the proof of the CLT for strongly
mixing ergodic sequences (see (\ref{alk})) and on the approximation of more
general ergodic sequences by strongly mixing sequences provided by condition
(\ref{dc}).
We will also need several facts on the one-dimensional discrete Schrodinger
operator with bounded potential.
We recall first the Duhamel formula for the difference of two one-parametric
groups $U_{1}(t)=e^{itA_{1}}$ and $U_{2}(t)=e^{itA_{2}}$ corresponding to
two bounded operators $A_{1}$ and $A_{2}$:%
\begin{equation} \label{duh}
U_{2}(t)-U_{1}(t)=i\int_{0}^{|t|}U_{2}(t-s)(A_{2}-A_{1})U_{1}(s)ds,\;t\in
\mathbb{R}.
\end{equation}
\begin{lemma}
\label{l:ctu} Let $H=H_{0}+V$ be the one-dimensional discrete Schrodinger
operator with real-valued bounded potential, $U(t)=e^{itH}$ be the
corresponding unitary group and $\{U_{jk}(t)\}_{j,k\in \mathbb{Z}}$ be the
matrix of $U(t)$. Then we have for any $t\in \mathbb{R}$ and $\delta >0$%
\begin{equation}
|U_{jk}(t)|\leq e^{-\delta |j-k|+s(\delta )|t|},\;s(\delta )=2\sinh \delta .
\label{ctu}
\end{equation}
\end{lemma}
\begin{proof}
Introduce the diagonal operator $D=\{D_{jk}\}_{j,k\in \mathbb{Z}}$, with $%
D_{jk}=e^{\rho j}\delta _{jk}$, $\rho \in \mathbb{R}$ and consider%
\begin{equation*}
DU(t)D^{-1}=e^{itDHD^{-1}} =e^{itH+itQ},
\end{equation*}%
where%
\begin{equation*}
Q=DHD^{-1}-H=DH_{0}D^{-1}-H_{0}.
\end{equation*}%
Since $H_{0}$ is the operator of the second finite difference with the symbol $%
-2\cos p$, $p\in \mathbb{T}$, the symbol of $Q$ is%
\begin{equation*}
-2\cos (p+i\rho )+2\cos p=-2\cos p(\cosh \rho -1)+2i\sin p\sinh \rho .
\end{equation*}%
Hence $Q=Q_{1}+iQ_{2}$, where $Q_{1}$ and $Q_{2}$ are selfadjoint operators
and%
\begin{equation*}
||Q_{2}||\leq 2\sinh |\rho |.
\end{equation*}%
Now, denoting $A_{2}=H+Q_{1}+iQ_{2}$ and $A_{1}=H+Q_{1}$, iterating the
Duhamel formula (\ref{duh}) and using $||e^{itA_{1}}||=1$, we obtain%
\begin{equation*}
||e^{itA_{2}}||=||e^{itA_{1}-tQ_{2}}||\leq e^{|t|\,||Q_{2}||}=e^{2|t|\sinh |\rho |}.
\end{equation*}%
This and the relation%
\begin{equation*}
(DU(t)D^{-1})_{jk}=e^{\rho j}U_{jk}(t)e^{-\rho k}
\end{equation*}%
imply (\ref{ctu}).
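Indeed, the last two relations give $|U_{jk}(t)|\leq e^{\rho (k-j)}||DU(t)D^{-1}||\leq e^{\rho (k-j)+2|t|\sinh |\rho |}$, and the choice $\rho =-\delta \,\mathrm{sign}(k-j)$ yields
\begin{equation*}
|U_{jk}(t)|\leq e^{-\delta |j-k|+2|t|\sinh \delta }=e^{-\delta |j-k|+s(\delta )|t|}.
\end{equation*}%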
\end{proof}
\begin{remark}
\label{r:ctr} Bound (\ref{ctu}) is an analog of the Combes-Thomas bound for
the resolvent $\{((H-z)^{-1})_{jk}\}_{j,k \in \mathbb{Z}}$ of $H$ and the
above proof uses essentially the same argument as the proof of
that bound, see e.g. \cite{Ai-Wa:15}.
\end{remark}
\begin{lemma}
\label{l:fdec} Let $H=H_{0}+V$ be the one-dimensional discrete Schrodinger
operator with real-valued potential and let $a:\mathbb{R}\rightarrow \mathbb{R}$
admit the Fourier transform $\widehat{a}$ satisfying%
\begin{equation}
\int_{-\infty }^{\infty }(1+|t|^{\theta })|\widehat{a}(t)|dt<\infty
,\;\theta >0. \label{fta}
\end{equation}%
If $A=a(H)=\{A_{jk}\}_{j,k\in \mathbb{Z}}$, then we have%
\begin{equation}
|A_{jk}|\leq C/|j-k|^{\theta },\; C< \infty, \;j \neq k. \label{fdec}
\end{equation}
\end{lemma}
\begin{proof}
It follows from the spectral theorem that
\begin{equation} \label{frep}
A=a(H)=\int_{-\infty }^{\infty }\widehat{a}(t)U(t)dt,
\end{equation}%
hence, we have for any $T>0$
\begin{align*}
& A_{jk}=\int_{-\infty }^{\infty }\widehat{a}(t)U_{jk}(t)dt \\
& =\int_{|t|\leq T}\widehat{a}(t)U_{jk}(t)dt+\int_{|t|\geq T}\widehat{a}%
(t)U_{jk}(t)dt=:I_{1}+I_{2}.
\end{align*}%
We have further
\begin{equation*}
|I_{1}|\leq e^{-\delta |j-k|+s(\delta )T}\int_{-\infty }^{\infty } |\widehat{%
a }(t)|dt
\end{equation*}%
by using Lemma \ref{l:ctu} and
\begin{equation*}
|I_{2}|\leq \frac{1}{T^{\theta}}\int_{-\infty }^{\infty }(1+|t|^{\theta})|%
\widehat{a}(t)|dt
\end{equation*}%
by condition (\ref{fta}) of the lemma.
Now, choosing $T=\big(\delta |j-k|-\theta \log |j-k|\big)/s(\delta )$, we obtain (\ref%
{fdec}).
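Indeed, with this choice
\begin{equation*}
e^{-\delta |j-k|+s(\delta )T}=|j-k|^{-\theta },\qquad T^{-\theta }=O(|j-k|^{-\theta }),\;|j-k|\rightarrow \infty ,
\end{equation*}%
since $T\geq c|j-k|$ for some $c>0$ and all sufficiently large $|j-k|$.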
\end{proof}
\begin{lemma}
\label{l:fdif} Let $A=\{A_{jk}\}_{j,k\in \mathbb{Z}}$ be a bounded selfadjoint
operator in $l^{2}(\mathbb{Z})$ such that
\begin{equation}
|A_{jk}|\leq C/|j-k|^{\theta},\;C<\infty ,\;\theta>1 \label{adec}
\end{equation}%
and $A_{\Lambda }=\chi _{\Lambda }A\chi _{\Lambda }=\{A_{jk}\}_{j,k\in
\Lambda }$ be its restriction to $\Lambda $. Then for any $f:\mathbb{R}%
\rightarrow \mathbb{C}$ admitting the Fourier transform $\widehat{f}$ such
that%
\begin{equation}
\int_{-\infty }^{\infty }(1+|t|)|\widehat{f}(t)|dt<\infty \label{ft1}
\end{equation}%
we have uniformly in $V$ satisfying (\ref{qq})
\begin{equation}
\big|\tr\chi _{\Lambda }f(A_{\Lambda })\chi _{\Lambda }-\tr\chi _{\Lambda
}f(A)\chi _{\Lambda }\big|=o(|\Lambda |^{1/2}),\;|\Lambda |\rightarrow \infty . \label{tfdif}
\end{equation}
\end{lemma}
\begin{proof}
Consider $A_{\Lambda }\oplus A_{\overline{\Lambda }}, \; \overline{\Lambda }=%
\mathbb{Z}\setminus \Lambda $ and
\begin{equation*}
A-A_{\Lambda }\oplus A_{\overline{\Lambda }}=\left(
\begin{array}{cc}
0 & \chi _{\Lambda }A\chi _{\overline{\Lambda }} \\
\chi _{\overline{\Lambda }}A\chi _{\Lambda } & 0%
\end{array}%
\right) .
\end{equation*}%
Thus, writing an analog of (\ref{frep}) for $A$ instead of $H$
and using the Duhamel formula (\ref{duh}), we obtain%
\begin{align}
&f(A)-f(A_{\Lambda }\oplus A_{\overline{\Lambda }}) \label{fdif}
=\int_{-\infty }^{\infty }\widehat{f}(t)dt\int_{0}^{|t|}U(t-s)(\chi _{%
\overline{\Lambda }}A\chi _{\Lambda }+\chi _{\Lambda }A\chi _{\overline{%
\Lambda }})U_{\Lambda }(s)\oplus U_{\overline{\Lambda }}(s)ds,
\end{align}
and
\begin{align}\label{trdif}
&\mathrm{Tr}\ \chi _{\Lambda }f(A)\chi _{\Lambda }-\mathrm{Tr}\ \chi
_{\Lambda }f(A_{\Lambda })\chi _{\Lambda }
=\int_{-\infty }^{\infty }\widehat{f}(t)dt\int_{0}^{|t|}\mathrm{Tr}\chi
_{\Lambda }U_{\Lambda }(s)\chi _{\Lambda }U(t-s) \chi _{\overline{\Lambda}
}A\chi _{\Lambda }ds.
\end{align}%
Denoting
\begin{align}
B=\chi _{\Lambda }U_{\Lambda }(s)\chi _{\Lambda }U(t-s):l^{2}(\mathbb{Z}%
)\rightarrow l^{2}(\Lambda ) \label{B}
\end{align}%
we can write the integrand $J$ in (\ref{trdif}) as
\begin{equation}
J=\sum_{j\in \Lambda ,k\in \overline{\Lambda }}A_{kj}B_{jk}, \label{AB}
\end{equation}%
hence
\begin{equation*}
|J|\leq \sum_{j\in \Lambda }\Big(\sum_{k\in \overline{\Lambda }%
}|A_{kj}|^{2}\sum_{k\in \overline{\Lambda }}|B_{jk}|^{2}\Big)^{1/2}\leq
\sum_{j\in \Lambda }\Big(\sum_{k\in \overline{\Lambda}}|A_{kj}|^{2}\sum_{k%
\in \mathbb{Z}}|B_{jk}|^{2}\Big)^{1/2}.
\end{equation*}
We have in view of (\ref{B})
\begin{align*}
&\hspace{-1cm}\sum_{k%
\in \mathbb{Z}}|B_{jk}|^{2}=(BB^{\ast })_{jj}
\\&=(U_{\Lambda }(s)\chi _{\Lambda }U(t-s)U^{\ast }(t-s)\chi
_{\Lambda }U^{\ast }(s))_{jj}
=(\chi _{\Lambda }U_{\Lambda }(s)U_{\Lambda
}^{\ast }(s))_{jj}=1
\end{align*}%
since $U(t-s)$ is unitary in $l^{2}(\mathbb{Z})$ and $U_{\Lambda }(s)$ is
unitary in $l^{2}(\Lambda )$. Thus, we have in view of (\ref{adec})
\begin{equation*}
|J|\leq \sum_{j\in \Lambda }\Big(\sum_{k\in \overline{\Lambda }}|A_{kj}|^{2}%
\Big)^{1/2}\leq C\sum_{j\in \Lambda }\Big(\sum_{k\in \overline{\Lambda }%
}|k-j|^{-2\theta }\Big)^{1/2}=o(|\Lambda |^{1/2})
\end{equation*}%
and (\ref{tfdif}) follows. Note that for $\theta >3/2$ the r.h.s. of the
above bound is $O(1)$.
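In more detail, for $j\in \Lambda $ and $d_{j}:=\mathrm{dist}(j,\overline{\Lambda })$ we have $\sum_{k\in \overline{\Lambda }}|k-j|^{-2\theta }\leq Cd_{j}^{-(2\theta -1)}$, hence
\begin{equation*}
\sum_{j\in \Lambda }\Big(\sum_{k\in \overline{\Lambda }}|k-j|^{-2\theta }\Big)^{1/2}\leq C\sum_{d=1}^{O(|\Lambda |)}d^{-(\theta -1/2)}=\left\{
\begin{array}{cc}
O(1), & \theta >3/2, \\
O(\log |\Lambda |), & \theta =3/2, \\
O(|\Lambda |^{3/2-\theta }), & 1<\theta <3/2,%
\end{array}%
\right.
\end{equation*}%
and each case is $o(|\Lambda |^{1/2})$.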
\end{proof}
\medskip A similar result was obtained in \cite{La-Sa:96} by another method.
\begin{lemma}
\label{l:asin} Let $H_{1}$ and $H_{2}$ be the one-dimensional discrete
Schrodinger operators with bounded potentials $V_{1}$ and $V_{2}$
coinciding on the integer interval $[-p,p]$. Consider $f:\mathbb{R}%
\rightarrow \mathbb{C}$ whose Fourier transform $\widehat{f}$ is such that
\begin{equation}
\int_{-\infty }^{\infty }(1+|t|^{\theta })|\widehat{f}(t)|dt<\infty
,\;\theta >1. \label{fdec1}
\end{equation}%
Then we have
\begin{equation}
|f_{00}(H_{1})-f_{00}(H_{2})|\leq C/p^{\theta }, \label{fadec}
\end{equation}%
where $C$ is independent of $V_{1}$ and $V_{2}$.
\end{lemma}
\begin{proof}
We denote
\begin{equation*}
V_{1}=\{V_{j}^{\prime }\}_{|j|>p}\cup \{V_{j}\}_{|j|\leq
p},\;V_{2}=\{V_{j}^{\prime \prime }\}_{|j|>p}\cup \{V_{j}\}_{|j|\leq p},
\end{equation*}%
$U^{(1)}(t)=e^{itH_{1}}$ and $U^{(2)}(t)=e^{itH_{2}}$ and use (\ref{frep})
and the spectral theorem to write for any $T>0$%
\begin{eqnarray}
|f_{00}(H_{2})-f_{00}(H_{1})| &\leq &\int_{|t|\leq T}|\widehat{f}%
(t)||U_{00}^{(2)}(t)-U_{00}^{(1)}(t)|dt \label{df} \\
&&+\int_{|t|\geq T}|\widehat{f}%
(t)||U_{00}^{(2)}(t)-U_{00}^{(1)}(t)|dt=:I_{1}+I_{2}. \notag
\end{eqnarray}%
We have then by the Duhamel formula (\ref{duh}) and (\ref{qq})
\begin{align*}
I_{1}& \leq \int_{|t|\leq T}|\widehat{f}(t)|dt\int_{0}^{|t|}%
\sum_{|j|>p}|U_{0j}^{(2)}(t-t^{\prime })(V_{j}^{\prime \prime
}-V_{j}^{\prime })U_{j0}^{(1)}(t^{\prime })|dt^{\prime } \\
& \leq 2\overline{V}\int_{|t|\leq T}|\widehat{f}(t)|dt\int_{0}^{|t|}%
\sum_{|j|>p}|U_{0j}^{(2)}(t-t^{\prime })||U_{j0}^{(1)}(t^{\prime
})|dt^{\prime }.
\end{align*}%
We will use now Lemma \ref{l:ctu} implying%
\begin{equation}
I_{1}\leq 2\overline{V}e^{-2\delta p+s(\delta )T}\int_{-\infty }^{\infty
}(1+|t|)|\widehat{f}(t)|dt. \label{I1}
\end{equation}%
To estimate $I_{2}$ of (\ref{df}), we write
\begin{equation}
I_{2}\leq 2\int_{|t|\geq T}|\widehat{f}(t)|dt\leq \frac{2}{T^{\theta }}%
\int_{-\infty }^{\infty }(1+|t|^{\theta })|\widehat{f}(t)|dt. \label{I2}
\end{equation}%
Choosing now in (\ref{I1}) and (\ref{I2}) $T=\big(2\delta p-\theta
\log p\big)/s(\delta )$, we obtain (\ref{fadec}).
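Indeed, with this choice
\begin{equation*}
e^{-2\delta p+s(\delta )T}=p^{-\theta },\qquad T^{-\theta }=O(p^{-\theta }),\;p\rightarrow \infty ,
\end{equation*}%
so that both (\ref{I1}) and (\ref{I2}) are $O(p^{-\theta })$, uniformly in $V_{1}$ and $V_{2}$.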
\end{proof}
\begin{lemma}
\label{l:clt1} Consider a bounded $\gamma :\mathbb{R}\rightarrow \mathbb{R}$
admitting the Fourier transform $\widehat{\gamma }$ such that
\begin{equation}
\int_{-\infty }^{\infty }(1+|t|)|\widehat{\gamma }(t)|dt<\infty
\label{gcond}
\end{equation}%
and set $\gamma (H)=\{\gamma _{jk}(H)\}_{j,k\in \mathbb{Z}}$, where $H$ is
the one-dimensional Schrodinger operator (\ref{h}) -- (\ref{qq}) with random
i.i.d. potential.
Let $\gamma _{\Lambda }$ be defined in (\ref{GL}) and
\begin{equation} \label{sila}
\sigma _{\Lambda }^{2}=|\Lambda |^{-1}\mathbf{Var}\{\gamma _{\Lambda }\}.
\end{equation}%
Then there exists the limit%
\begin{equation} \label{siM}
\sigma ^{2}:=\lim_{\Lambda \rightarrow \infty }\sigma _{\Lambda }^{2} =%
\mathbf{E}\{(\mathcal{M}^{(0)})^{2}\}
\end{equation}%
where%
\begin{align}\label{M}
\mathcal{M}^{(0)}=&\mathbf{E}\{A_{0}(V_{0},\{V_{j}\}_{j\neq 0})|\mathcal{F}%
_{0}^{\infty }\}\\&-\int_{-\infty }^{\infty }\mathbf{E}\{A_{0}(V_{0}^{\prime },\{V_{j}\}_{j%
\neq 0})|\mathcal{F}_{0}^{\infty }\}F(dV_{0}^{\prime })
\notag
\end{align}%
and%
\begin{equation}
A_{0}(V_{0},\{V_{j}\}_{j\neq 0})=V_{0}\int_{0}^{1}(\gamma ^{\prime
}(H|_{V_{0}\rightarrow uV_{0}}))_{00}du. \label{A0}
\end{equation}
\end{lemma}
\begin{proof}
It is convenient to consider%
\begin{equation}
\tau _{\Lambda }:=\mathrm{Tr}_{\Lambda }\gamma (H_{\Lambda }) \label{tal}
\end{equation}%
instead of $\gamma _{\Lambda }$ of (\ref{GL}). It follows from Lemma \ref%
{l:fdif} that
\begin{equation}
\sigma^2_\Lambda =|\Lambda |^{-1}\mathbf{Var}\{\tau _{\Lambda }\} +
o(1),\;|\Lambda |\rightarrow \infty . \label{dtg}
\end{equation}%
To deal with $\mathbf{Var}\{\tau _{\Lambda }\}$ we will use a simple version
of the martingale techniques (see e.g. \cite{Pa-Sh:11}, Proposition 18.1.1),
according to which if $\{X_{j}\}_{j=-M}^{M}$ are i.i.d. random
variables, $\Phi :\mathbb{R}^{2M+1}\rightarrow \mathbb{R}$ is bounded and $%
\Phi =\Phi (X_{-M},X_{-M+1},...,X_{M})$, then%
\begin{align}
& \mathbf{Var}\{\Phi \}:=\mathbf{E}\{|\Phi -\mathbf{E}\{\Phi \}|^{2}\} \label{mart1}
=\sum_{|m|\leq M}\mathbf{E}\{|\Phi ^{(m)}-\Phi
^{(m+1)}|^{2}\},
\end{align}%
where
\begin{equation}
\Phi ^{(m)}=\mathbf{E}\{\Phi |\mathcal{F}_{m}^{M}\},\;\Phi ^{(-M)}=\Phi
,\;\Phi ^{(M+1)}=\mathbf{E}\{\Phi \}. \label{mart2}
\end{equation}
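Indeed, $\Phi -\mathbf{E}\{\Phi \}=\sum_{|m|\leq M}(\Phi ^{(m)}-\Phi ^{(m+1)})$ by telescoping, and the summands are orthogonal in $L^{2}$:
\begin{equation*}
\mathbf{E}\{(\Phi ^{(m_{1})}-\Phi ^{(m_{1}+1)})(\Phi ^{(m_{2})}-\Phi ^{(m_{2}+1)})\}=0,\;m_{1}<m_{2},
\end{equation*}%
since $\mathbf{E}\{\Phi ^{(m_{1})}-\Phi ^{(m_{1}+1)}|\mathcal{F}_{m_{1}+1}^{M}\}=0$, while $\Phi ^{(m_{2})}-\Phi ^{(m_{2}+1)}$ is $\mathcal{F}_{m_{1}+1}^{M}$-measurable.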
We choose in (\ref{mart1}) -- (\ref{mart2}) $X_{j}=V_{j},\;|j|\leq M$ and $%
\Phi =\tau _{\Lambda }$ \ (see (\ref{tal})) and we obtain%
\begin{align}
|\Lambda |^{-1}\mathbf{Var}\{\tau _{\Lambda }\} &=|\Lambda
|^{-1}\sum_{m=-M}^{M}\mathbf{E}\{|\mathcal{M}_{\Lambda }^{(m)}|^{2}\},
\label{varm} \\
\mathcal{M}_{\Lambda }^{(m)}& =\tau _{\Lambda }^{(m)}-\tau _{\Lambda
}^{(m+1)} \notag
\end{align}%
where (see (\ref{mart2}))
\begin{equation}
\tau _{\Lambda }^{(m)}=\mathbf{E}\{\tau _{\Lambda }|\mathcal{F}%
_{m}^{M}\},\;\tau _{\Lambda }^{(-M)}=\tau _{\Lambda },\;\tau _{\Lambda
}^{(M+1)}=\mathbf{E}\{\tau _{\Lambda }\}. \label{tlL}
\end{equation}%
By using the formula%
\begin{eqnarray}
\tau _{\Lambda }-\tau _{\Lambda }|_{V_{m}=0} &=&\int_{0}^{1}du\frac{\partial
}{\partial u}\mathrm{Tr_{\Lambda }}\ \gamma (H_{\Lambda }|_{V_{m}\rightarrow
uV_{m}}) \label{tvt0} \\
&=&V_{m}\int_{0}^{1}du(\gamma ^{\prime }(H_{\Lambda }|_{V_{m}\rightarrow
uV_{m}}))_{mm}=:A_{\Lambda }(V_{m},\{V_{j}\}_{j\neq m}), \notag
\end{eqnarray}%
we can write%
\begin{align} \label{Mmla}
\mathcal{M}_{\Lambda }^{(m)} &=\mathbf{E}\{A_{\Lambda }(V_{m},\{V_{j}\}_{j\neq
m})|\mathcal{F}_{m}^{M}\} \\
&-\int_{-\infty }^{\infty }\mathbf{E}\{A_{\Lambda }(V_{m}^{\prime },\{V_{j}\}_{j\neq m})|\mathcal{F}%
_{m}^{M}\}F(dV_{m}^{\prime }). \notag
\end{align}%
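Here we used that $\tau _{\Lambda }|_{V_{m}=0}$ does not depend on $V_{m}$, so that its conditional expectations with respect to $\mathcal{F}_{m}^{M}$ and $\mathcal{F}_{m+1}^{M}$ coincide and cancel in $\mathcal{M}_{\Lambda }^{(m)}=\tau _{\Lambda }^{(m)}-\tau _{\Lambda }^{(m+1)}$, while, by the independence of the potential values,
\begin{equation*}
\mathbf{E}\{A_{\Lambda }(V_{m},\{V_{j}\}_{j\neq m})|\mathcal{F}_{m+1}^{M}\}=\int_{-\infty }^{\infty }\mathbf{E}\{A_{\Lambda }(V_{m}^{\prime },\{V_{j}\}_{j\neq m})|\mathcal{F}_{m}^{M}\}F(dV_{m}^{\prime }).
\end{equation*}%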
Let us show now that
\begin{equation}
\lim_{\Lambda \rightarrow \infty }|\Lambda |^{-1}\sum_{m\in \Lambda }%
\mathbf{E}\{|\mathcal{M}_{\Lambda }^{(m)}|^2\}=\mathbf{E}\{|\mathcal{M}^{(0)}|^2\},
\label{amml}
\end{equation}%
where for any $m\in \mathbb{Z}$
\begin{align} \label{lim1}
\mathcal{M}^{(m)}&=\mathbf{E}\{A(V_{m},\{V_{j}\}_{j\neq m})|\mathcal{F}%
_{m}^{\infty }\} \\
&-\int_{-\infty }^{\infty }\mathbf{E}\{A(V_{m}^{\prime },\{V_{j}\}_{j\neq m})|\mathcal{F}%
_{m}^{\infty }\}F(dV_{m}^{\prime }). \notag
\end{align}%
Note first that since $A_{\Lambda }(V_{m},\{V_{j}\}_{j\neq m})$ does not depend
on $\{V_{j}\}_{|j|>M}$, we can replace $\mathcal{F}_{m}^{M}$
by $\mathcal{F}_{m}^{\infty }$.
Next, it is easy to see that $\mathcal{M}_{\Lambda }^{(m)}$
is bounded in $\Lambda $ and $V$, thus the proof of (\ref{amml}) reduces to
the proof of validity with probability 1 of the relation%
\begin{equation}
\lim_{\Lambda \rightarrow \infty ,\;\mathrm{dist}(m,\{M,-M\})\rightarrow
\infty}\mathcal{M}_{\Lambda }^{(m)}=\mathcal{M}^{(m)}. \label{lim2}
\end{equation}%
Note that $\mathcal{M}^{(m)}$ of (\ref{lim1}) differs from its prelimit form
$\mathcal{M}^{(m)}_\Lambda$ of (\ref{Mmla}) by the replacement of $%
H_{\Lambda}$ by $H$ in the r.h.s. of (\ref{tvt0}).
Indeed, if (\ref{lim2}) is valid, then we can replace $\mathcal{M}_{\Lambda
}^{(m)}$ by $\mathcal{M}^{(m)}$ in the l.h.s. of (\ref{amml}) and then take
into account that $V$ is a collection of i.i.d. random variables, hence \ $%
\mathbf{E}\{|\mathcal{M}^{(m)}|^2\}=$ $\mathbf{E}\{|\mathcal{M}^{(0)}|^2\}$ for any $%
m\in \mathbb{Z}$.
To prove (\ref{lim2}) we will use a version of formula (\ref{fdif}) with $%
f=\gamma ^{\prime }$ implying for $m\in \Lambda $%
\begin{equation}
(\gamma ^{\prime }(H)-\gamma ^{\prime }(H_{\Lambda }))_{mm}=i\int_{-\infty
}^{\infty }t\widehat{\gamma }(t)dt\int_{0}^{|t|}(U_{\Lambda }(t-s)\chi
_{\Lambda }H\chi _{\overline{\Lambda }}U(s))_{mm}ds. \label{gamd}
\end{equation}%
Taking into account that $(\chi _{\Lambda }H\chi _{\overline{\Lambda }})_{jk}=
-\delta _{j,M}\delta _{k,M+1}-\delta
_{j,-M}\delta _{k,-(M+1)}$, $|j|\leq M,\;|k|>M$, we obtain%
\begin{equation}
|(U_{\Lambda }(t-s)\chi _{\Lambda }H\chi _{\overline{\Lambda }%
}U(s))_{mm}|\leq |U_{M+1,m}(s)|+|U_{-(M+1),m}(s)|. \label{www}
\end{equation}%
We write now the integral over $t$ in (\ref{gamd}) as the sum of the
integral $I_{1}$ over $|t|\leq T$ and that $I_{2}$ over $|t|\geq T$ for some
$T$, cf. the proofs of Lemmas \ref{l:fdec} and \ref{l:asin}. We have by
Lemma \ref{l:ctu} and (\ref{www})%
\begin{align*}
|I_{1}|\leq & 2e^{-\delta d+s(\delta )T}\int_{|t|
\leq T}|t|^{2}|\widehat{\gamma }(t)|dt
\\ \leq & 2e^{-\delta d+s(\delta )T}T\int_{|t|
\leq T}|t||\widehat{\gamma }(t)|dt,\;
d=\mathrm{dist}(m,\{M,-M\})
\end{align*}%
and by (\ref{www}) and the unitarity of $U(s)$%
\begin{equation*}
|I_{2}|\leq 2\int_{|t|\geq T}|t||\widehat{\gamma }(t)|dt.
\end{equation*}%
Now, choosing $T$ with $s(\delta )T=\delta d/2$ and taking into account (\ref{gcond}), we
obtain (\ref{lim2}), hence, the assertion of the lemma.
\end{proof}
\begin{proposition}
\label{p:pesh1} Let $\{X_{j}\}_{j\in \mathbb{Z}}$ be a sequence of random
variables on the same probability space with $\mathbf{E}\{X_{j}\}=0,\;%
\mathbf{E}\{X_{j}^{2}\}<\infty $. Put (cf. (\ref{SL}) -- (\ref{sil}))%
\begin{equation}
S_{m}=\sum_{|j|\leq m}X_{j},\;Z_{m}=\mu _{m}^{-1/2}S_{m}, \;\mu
_{m}=2m+1,\;\sigma _{m}^{2}=\mathbf{E}\{Z_{m}^{2}\} \label{ssz}
\end{equation}%
and assume:
(i) $Z_{m}\overset{\mathcal{D}}{\rightarrow }\xi _{\sigma },\;m\rightarrow \infty $,
where $\xi _{\sigma }$ is the Gaussian random variable of zero mean and variance $%
\sigma ^{2}>0$;
(ii) for every bounded Lipschitz $f$:%
\begin{equation}
|f(x)|\leq C,\;|f(x)-f(y)|\leq C_1 |x-y| \label{lip}
\end{equation}%
there exists $\varepsilon >0$, such that%
\begin{equation*}
\mathbf{Var}\left\{ \frac{1}{\log M}\sum_{m=1}^{M}\frac{1}{m}f\left(
Z_{m}\right) \right\} =O(1/(\log M)^{\varepsilon }),\;M\rightarrow \infty .
\end{equation*}%
Then $\{X_{j}\}_{j\in \mathbb{Z}}$ satisfies the almost sure Central Limit
Theorem, i.e., we have with probability 1%
\begin{equation*}
\lim_{M\rightarrow \infty }\frac{1}{\log M}\sum_{m=1}^{M}\frac{1}{m} \mathbf{%
1}_{\Delta }(Z_{m})=\Phi(\sigma^{-1}\Delta ),
\end{equation*}%
where $\Delta $ is an interval and $\Phi$ is the standard Gaussian law.
\end{proposition}
The proposition is a version of Theorem 1 of \cite{Pe-Sh:95} where the case
of semi-infinite stationary sequences $\{X_{l}\}_{l=1}^{\infty }$ was
considered. For another criterion of the validity of the almost sure CLT see \cite{Ib-Li:00}.
\begin{lemma}
\label{l:sims} Let $\{\xi _{j}^{(s)}\}_{j\in \mathbb{Z}}$ be defined in (%
\ref{xeg}), $Z _{m}^{(s)}$ be defined in (\ref{zm}) and (\ref{fms}) and
\begin{equation*}
(\sigma _{m}^{(s)})^{2}=\mathbf{E}\{(Z_{m}^{(s)})^{2}\}.
\end{equation*}%
Then we have:
(i) for every $m=1,2,...$
\begin{equation*}
|\sigma _{m}^{(s)}-\sigma _{m}|\leq C/s^{(\theta -1)},
\end{equation*}%
where $\sigma_m>0$ is given in (\ref{ssz}), $C$ is independent of $m$ and $s$ and $\theta >1$ is given in
(\ref{afcon});
(ii) for any $\delta >0$ there exist $m_{0}>0$ and $s_{0}>0$
such that
\begin{equation*}
|\sigma _{m}^{(s)}-\sigma |\leq \sigma \delta
\end{equation*}%
if $m>m_{0}$ and $s>s_{0}$, where $\sigma >0$ is given
in Theorem \ref{t:clt};
(iii)
for every $m=1,2,...$
\begin{equation*}
\mathbf{E}\{(R_{m}^{(s)})^{2}\} \leq C/s^{\theta -1},
\end{equation*}%
where $C$ is independent of $m$ and $s$
and $\theta >1$ is given in (\ref{afcon}).
\end{lemma}
\begin{proof}
The lemma is a quantitative version of the obvious fact that $\lim_{s\rightarrow
\infty }\xi
_{m}^{(s)}=Y_{m}$ with probability 1 for every $m$, which follows from (%
\ref{xeg}).
(i). Since $\{Y_{j}\}_{j\in \mathbb{Z}}$ and $\{\xi _{j}^{(s)}\}_{j\in \mathbb{Z}%
} $ are ergodic sequences, we can write%
\begin{eqnarray}
\sigma _{m}^{2}-(\sigma _{m}^{(s)})^{2} &=&\sum_{|l|\leq 2s}(1-|l|/\mu
_{m})(C_{l}-C_{l}^{(s)}) \label{dsig}
+\sum_{2s<|l|\leq 2m}(1-|l|/\mu _{m})C_{l},
\end{eqnarray}%
where $C_{l}=\mathbf{E}\{Y_{0}Y_{l}\}$ and $C_{l}^{(s)}=\mathbf{E}\{\xi
_{0}^{(s)}\xi _{l}^{(s)}\}$ are the correlation functions of the
corresponding sequences (see (\ref{gac})), and we took into account (\ref{axi}%
) implying that $C_{l}^{(s)}=0,\;|l|>2s$ (and that the second term
on the right is present only if $m>2s$). Since $Y_{l}=\xi
_{l}^{(s)}+\eta _{l}^{(s)}
$, we have%
\begin{equation*}
C_{l}-C_{l}^{(s)}=\mathbf{E}\{\xi _{0}^{(s)}\eta _{l}^{(s)}\}+\mathbf{E}%
\{\xi _{l}^{(s)}\eta _{0}^{(s)}\}+\mathbf{E}\{\eta _{0}^{(s)}\eta
_{l}^{(s)}\}.
\end{equation*}%
Since $\gamma$ is bounded, it follows from (\ref{Y}) --
(\ref{xeg}) that
\begin{equation}\label{coes}
|\mathbf{E}\{\xi _{0}^{(s)}\eta _{l}^{(s)}\}| \leq C \psi _{s}, \;
|\mathbf{E}\{\xi _{l}^{(s)}\eta _{0}^{(s)}\}| \leq C \psi _{s}, \;
|\mathbf{E}\{\eta _{0}^{(s)}\eta _{l}^{(s)}\}| \leq C \psi _{s},
\end{equation}
where
\begin{equation}
\psi _{s}=\mathbf{E}\{|\eta _{0}^{(s)}|\} \label{psi}
\end{equation}%
and by (\ref{xeg}) and Lemma \ref{l:asin}%
\begin{equation}
\psi _{s}=O(1/s^{\theta }),\;\theta >1. \label{ces}
\end{equation}%
This and (\ref{dsig}) imply uniformly in $m$%
\begin{align}
& |\sigma _{m}^{2}-(\sigma _{m}^{(s)})^{2}|\leq \sum_{|l|\leq
2s}|C_{l}-C_{l}^{(s)}|+\sum_{2s<|l|\leq 2m}|C_{l}| \label{sim2} \\
& \hspace{1cm}=O(s\psi _{s})+O(1/s^{(\theta -1)})=O(1/s^{(\theta
-1)}),\;s\rightarrow \infty . \notag
\end{align}%
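Here the bound for the second sum follows from the decay of correlations: writing $Y_{0}=\xi _{0}^{([|l|/3])}+\eta _{0}^{([|l|/3])}$ and likewise $Y_{l}$, the terms $\xi _{0}^{([|l|/3])}$ and $\xi _{l}^{([|l|/3])}$ are independent and centered, hence, by (\ref{coes}) -- (\ref{ces}), $|C_{l}|\leq C\psi _{[|l|/3]}=O(|l|^{-\theta })$ and
\begin{equation*}
\sum_{2s<|l|\leq 2m}|C_{l}|=O\Big(\sum_{l>2s}l^{-\theta }\Big)=O(1/s^{\theta -1}).
\end{equation*}%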
(ii). $\sigma _{m}$ of (\ref{ssz}) is strictly positive for every
$m$ and
according to Theorem \ref{t:clt} and Lemma \ref{l:clt1}%
\begin{equation*}
\lim_{m\rightarrow \infty }\sigma _{m}^{2}=\sigma ^{2}>0.
\end{equation*}%
This and (\ref{sim2}) imply the assertion.
(iii). The ergodicity of $\{\eta^{(s)}_{j}\}_{j\in \mathbb{Z}}$ implies
(cf. (\ref{dsig}))
\begin{equation}\label{varms}
\mathbf{Var}\{R_m^{(s)}\} = \sum_{|l| \leq 2m} (1-|l|/\mu
_{m})\mathbf{E}\{\eta _{0}^{(s)}\eta _{l}^{(s)}\} =\sum_{|l| \leq
6s} + \sum_{6s<|l| \le 2m }.
\end{equation}
It follows then from the proof of Proposition \ref{p:ibli} (see \cite%
{Ib-Li:71}, Theorem 18.6.3) and (\ref{axi}
) that
\begin{equation}
|\mathbf{E}\{\eta _{0}^{(s)}\eta _{l}^{(s)}\}| \leq C\psi
_{[|l|/3]},\;|l|>6s. \label{ees}
\end{equation}%
We will use now (\ref{coes}) -- (\ref{ces}) in the first sum on
the r.h.s. of (\ref{varms}) and (\ref{ces}) and (\ref{ees}) in the
second sum (cf. (\ref{sim2})) to get the bound
\begin{equation*}
\mathbf{Var}\{R_{m}^{(s)}\} \leq C\Big(s\psi _{s}+\sum_{|l|>6s}
\psi _{[|l|/3]}\Big)\leq C/s^{\theta -1},
\end{equation*}
proving the assertion.
\end{proof}
\begin{lemma}
\label{l:vxi} Let $\{\xi _{j}^{(s)}\}_{j\in \mathbb{Z}}$ and $F_{M}^{(s)}$
be defined in \ (\ref{xeg}) and (\ref{fms}) respectively. Then we have:
\begin{equation*}
\mathbf{Var}\left\{ F_{M}^{(s)}\right\} =O(\log s/\log M),\;s\rightarrow
\infty ,\;M\rightarrow \infty .\;
\end{equation*}
\end{lemma}
\begin{proof}
Repeating almost literally the proof of Lemma 1 in \cite{Pe-Sh:95} (where
the case of semi-infinite strongly mixing sequences $\{X_{l}\}_{l=1}^{\infty
}$ was considered), we obtain
\begin{align}
& \hspace{-1.5cm}\mathbf{Var}\left\{ F_{M}^{(s)}\right\} \leq \frac{%
C^{\prime }}{(\log M)^{2}}+\frac{C^{\prime \prime }}{\log M}\sum_{m=1}^{M}%
\frac{\alpha _{m}^{(s)}}{m} \label{pesh} \\
& +\frac{C^{\prime \prime \prime }}{(\log M)^{2}}\sum_{m=1}^{M-1}
(2m^{-1/2}\sigma _{2m}^{(s)}+\mathbf{E}\{m^{-1}|\xi
_{0}^{(s)}|\})\sum_{l=m+1}^{M}\frac{1}{l^{3/2}} \notag
\end{align}%
where $C^{\prime },C^{\prime \prime },C^{\prime \prime \prime }$ depend only
on $C$ and $C_{1}$ in (\ref{lip}) and $\alpha _{m}^{(s)}$ is the mixing coefficient (\ref%
{alk}) of $\{\xi _{l}^{(s)}\}_{l\in \mathbb{Z}}$ given by (\ref{axi}). In
view of (\ref{axi}) the second term is bounded by
\begin{equation}
\frac{C^{\prime \prime }}{\log M}\sum_{m=1}^{2s}\frac{1}{m}\leq
C_{2}^{\prime }\frac{\log 2s}{\log M}=O(\log s/\log M) \label{ster}
\end{equation}%
as $M\rightarrow \infty $ and $s\rightarrow \infty $.
Consider now the third term of the r.h.s. of (\ref{pesh}). It follows from (%
\ref{xeg}) and our assumption on the boundedness of $Y_{0}$ that the
contribution of $\mathbf{E}\{|\xi _{0}^{(s)}|\}$ is $O(1/(\log M)^{2})$.
Next, given the $M$-independent $M_{0}$ (the $m_{0}$ of Lemma \ref{l:sims} (ii)), we write
\begin{equation*}
\sum_{m=1}^{M-1}2m^{-1/2}\sigma _{2m}^{(s)}\sum_{l=m+1}^{M}\frac{1}{l^{3/2}}%
=\sum_{m=1}^{M_{0}}\sum_{l=m+1}^{M}+\sum_{m=M_{0}+1}^{M-1}\sum_{l=m+1}^{M}.
\end{equation*}%
The first double sum on the right is bounded in $M$ in view of Lemma \ref%
{l:sims} (i) and the fact that $\sigma _{m},\;m=1,2,...,M_{0}$, are bounded
(e.g. $\sigma _{m}\leq \mu _{m}^{1/2}\mathbf{E}^{1/2}\{Y_{0}^{2}\}$). The second double
sum is in view of Lemma \ref{l:sims} (ii)
\begin{equation*}
O\left( \sum_{m=M_{0}+1}^{M-1}\frac{1}{m^{1/2}}\sum_{l=m+1}^{M}\frac{1}{%
l^{3/2}}\right) =O\left( \log M\right) .
\end{equation*}%
Hence, the third term on the right of (\ref{pesh}) is $O\left( 1/\log
M\right) $. This and (\ref{ster}) imply that the r.h.s. of (\ref{pesh}) is $%
O(\log s/(\log M))$.
\end{proof}
\label{App:AppA}
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m12_1}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $ 0 < m_{12} < 480$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m12_1}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m12_1}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $0 < m_{12} < 480$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m12_1}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m12_2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $480 < m_{12} < 620$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m12_2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m12_2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $480 < m_{12} < 620$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m12_2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m12_3}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $620 < m_{12} < 820$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m12_3}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m12_3}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $620 < m_{12} < 820$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m12_3}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m12_4}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $820 < m_{12}$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m12_4}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m12_4}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $820 < m_{12}$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m12_4}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m23_1}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $ 0 < m_{23} < 480$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m23_1}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m23_1}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $0 < m_{23} < 480$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m23_1}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m23_2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $480 < m_{23} < 620$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m23_2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m23_2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $480 < m_{23} < 620$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m23_2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m23_3}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $620 < m_{23} < 820$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m23_3}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m23_3}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $620 < m_{23} < 820$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m23_3}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m23_4}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $820 < m_{23}$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m23_4}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m23_4}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $820 < m_{23}$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m23_4}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m34_1}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $ 0 < m_{34} < 780$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m34_1}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m34_1}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $0 < m_{34} < 780$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m34_1}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m34_2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $780 < m_{34} < 920$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m34_2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m34_2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $780 < m_{34} < 920$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m34_2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m34_3}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $920 < m_{34} < 1100$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m34_3}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m34_3}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $920 < m_{34} < 1100$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m34_3}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m34_4}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $1100 < m_{34}$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m34_4}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m34_4}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $1100 < m_{34}$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m34_4}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m123_1}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $ 0 < m_{123} < 800$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m123_1}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m123_1}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $0 < m_{123} < 800$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m123_1}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m123_2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $800 < m_{123} < 950$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m123_2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m123_2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $800 < m_{123} < 950$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m123_2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m123_3}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $950 < m_{123} < 1150$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m123_3}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m123_3}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $950 < m_{123} < 1150$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m123_3}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m123_4}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $1150 < m_{123}$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m123_4}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m123_4}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $1150 < m_{123}$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m123_4}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m234_1}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $ 0 < m_{234} < 800$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m234_1}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m234_1}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $0 < m_{234} < 800$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m234_1}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m234_2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $800 < m_{234} < 950$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m234_2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m234_2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $800 < m_{234} < 950$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m234_2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m234_3}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $950 < m_{234} < 1150$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m234_3}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m234_3}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $950 < m_{234} < 1150$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m234_3}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m234_4}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $1150 < m_{234}$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m234_4}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m234_4}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $1150 < m_{234}$ MeV$/c^2$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m234_4}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m12_1_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $ 0 < m_{12} < 480$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m12_1_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m12_1_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $0 < m_{12} < 480$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m12_1_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m12_2_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $480 < m_{12} < 620$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m12_2_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m12_2_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $480 < m_{12} < 620$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m12_2_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m12_3_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $620 < m_{12} < 820$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m12_3_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m12_3_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $620 < m_{12} < 820$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m12_3_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m12_4_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $820 < m_{12}$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m12_4_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m12_4_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $820 < m_{12}$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m12_4_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m23_1_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $ 0 < m_{23} < 480$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m23_1_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m23_1_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $0 < m_{23} < 480$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m23_1_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m23_2_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $480 < m_{23} < 620$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m23_2_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m23_2_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $480 < m_{23} < 620$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m23_2_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m23_3_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $620 < m_{23} < 820$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m23_3_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m23_3_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $620 < m_{23} < 820$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m23_3_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m23_4_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $820 < m_{23}$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m23_4_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m23_4_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $820 < m_{23}$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m23_4_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m34_1_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $ 0 < m_{34} < 780$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m34_1_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m34_1_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $0 < m_{34} < 780$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m34_1_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m34_2_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $780 < m_{34} < 920$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m34_2_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m34_2_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $780 < m_{34} < 920$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m34_2_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m34_3_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $920 < m_{34} < 1100$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m34_3_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m34_3_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $920 < m_{34} < 1100$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m34_3_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m34_4_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $1100 < m_{34}$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m34_4_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m34_4_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $1100 < m_{34}$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m34_4_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m123_1_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $ 0 < m_{123} < 800$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m123_1_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m123_1_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $0 < m_{123} < 800$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m123_1_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m123_2_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $800 < m_{123} < 950$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m123_2_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m123_2_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $800 < m_{123} < 950$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m123_2_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m123_3_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $950 < m_{123} < 1150$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m123_3_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m123_3_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $950 < m_{123} < 1150$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m123_3_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m123_4_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $1150 < m_{123}$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m123_4_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m123_4_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $1150 < m_{123}$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m123_4_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m234_1_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $ 0 < m_{234} < 800$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m234_1_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m234_1_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $0 < m_{234} < 800$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m234_1_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m234_2_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $800 < m_{234} < 950$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m234_2_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m234_2_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $800 < m_{234} < 950$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m234_2_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m234_3_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $950 < m_{234} < 1150$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m234_3_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m234_3_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $950 < m_{234} < 1150$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m234_3_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_m234_4_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in Sect.~\ref{subsec:ResultsK3pi}. The data used here are restricted to the region $1150 < m_{234}$ MeV$/c^2$.
The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_m234_4_Sel2}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_m234_4_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
in the region $1150 < m_{234}$ MeV$/c^2$ and with a tighter selection. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:ResultsK3pi}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_m234_4_Sel2}
\end{figure}
\section{Additional figures related to the \decay{\Bz}{\Kstarz(\rightarrow\Kp\pim)\mup\mun} test case}
\label{App:AppB}
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_1}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $0.1 < q^2 < 0.98$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_1}
\end{figure}
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_2}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $0.98 < q^2 < 2.0$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_2}
\end{figure}
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_3}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $2.0 < q^2 < 3.0$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_3}
\end{figure}
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_4}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $3.0 < q^2 < 4.0$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_4}
\end{figure}
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_5}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $4.0 < q^2 < 5.0$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_5}
\end{figure}
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_6}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $5.0 < q^2 < 6.0$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_6}
\end{figure}
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_7}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $6.0 < q^2 < 7.0$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_7}
\end{figure}
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_8}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $7.0 < q^2 <8.0 $ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_8}
\end{figure}
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_9}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $8.0 < q^2 < 9.0$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_9}
\end{figure}
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_10}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $9.0 < q^2 < 10.0$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_10}
\end{figure}
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_11}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $10.0 < q^2 < 11.0$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_11}
\end{figure}
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_12}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $11.0 < q^2 < 12.0$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_12}
\end{figure}
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_13}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $12.0 < q^2 <13.0 $ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_13}
\end{figure}
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_14}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $13.0 < q^2 < 14.0$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_14}
\end{figure}
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_15}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $14.0 < q^2 < 15.0$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_15}
\end{figure}
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_16}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $15.0 < q^2 < 16.0$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_16}
\end{figure}
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_17}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $16.0 < q^2 < 17.0$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_17}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_18}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $17.0 < q^2 < 18.0$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_18}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_19}
\vspace*{-0.5cm}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $18.0 < q^2 < 19.0$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in Sect.~\ref{subsec:resultsB2KstMuMu}. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_q2_19}
\end{figure}
\section{\mbox{Application to the study of {\bf\boldmath \decay{\Dz}{\Km\pip\pip\pim}}}}
\label{sec:AppliedToDK3pi}
Multibody {\ensuremath{\D^0}}\xspace decays provide examples where multidimensional efficiencies must be determined.
These decays can be used, for instance, to improve our understanding
of {\ensuremath{\D^0}}\xspace mixing~\cite{PDG2014}. One of the most studied decays is \decay{\Dz}{K^0_S\pip\pim}. Its phase space can be described
in terms of two invariant masses squared: $m^{2}(K^{0}_{S}{\ensuremath{\pion^+}}\xspace)$ and $m^{2}(K^{0}_{S}{\ensuremath{\pion^-}}\xspace)$.
The distribution of the \decay{\Dz}{K^0_S\pip\pim} decays in the Dalitz plane~\cite{Dalitz:1953cp} defined by $m^{2}(K^{0}_{S}{\ensuremath{\pion^+}}\xspace)$ and $m^{2}(K^{0}_{S}{\ensuremath{\pion^-}}\xspace)$
varies in time under the action of the mixing. By measuring this variation, one can access directly the mixing
parameters $x=\frac{M_1-M_2}{\Gamma}$ and $y=\frac{\Gamma_1-\Gamma_2}{2\Gamma}$~\cite{Asner:2005sz,Abe:2007rd,delAmoSanchez:2010xz},
where $M_{1}$($M_{2}$),
$\Gamma_1$($\Gamma_2$) and $\Gamma$ are the masses of the two neutral $D$ physical states, the two
corresponding widths, and the average {\ensuremath{\D^0}}\xspace width, respectively. In simpler approaches, only a combination
of $x$ and $y$ with a phase due to the strong interaction can be measured. Such Dalitz analyses also open the
possibility of measuring CP violation in the mixing. They can be extended to four-body decays, where the
phenomenology is richer and can therefore provide more information. In this case, a five-dimensional Dalitz analysis
must be carried out. Our case study in this document is the \decay{\Dz}{\Km\pip\pip\pim} decay, whose dynamics can be described by the
following set of invariant masses:
\begin{equation*}
\begin{array}{c}
m_{12}=m({\ensuremath{\pion^+_1}}\xspace{\ensuremath{\pion^-}}\xspace) \\
m_{23}=m({\ensuremath{\pion^+_2}}\xspace{\ensuremath{\pion^-}}\xspace) \\
m_{34}=m({\ensuremath{\pion^+_2}}\xspace{\ensuremath{\kaon^-}}\xspace) \\
m_{123}=m({\ensuremath{\pion^+_1}}\xspace{\ensuremath{\pion^-}}\xspace{\ensuremath{\pion^+_2}}\xspace) \\
m_{234}=m({\ensuremath{\pion^-}}\xspace{\ensuremath{\pion^+_2}}\xspace{\ensuremath{\kaon^-}}\xspace). \\
\end{array}
\end{equation*}
\subsection{Generation of the distorted and original samples}
We generated an original sample and a distorted sample of \decay{\Dz}{\Km\pip\pip\pim} decays.
Both contain $\sim$230\,000 decays. Producing a sample of reconstructed and selected
events of this size represents a significant computing effort if a full simulation
(physics, detector response and reconstruction) has to be performed. However, this is still within the
scope of what a collaboration like LHCb, the present world-leading experiment in flavour physics~\cite{LHCb-DP-2014-002},
is ready to produce even for measurements of intermediate importance.
To produce samples that can be compared with the samples used by LHCb in~\cite{LHCb_D2K3pi_ANA},
an important element is to generate
decays that have the same kinematics as in LHCb's laboratory frame. This is not possible
with the \texttt{TGenPhaseSpace} class alone, which knows nothing of the physics of the
proton-proton collision where $D^0$ mesons are produced. Using this class one
can generate the four-momentum of each daughter particle given the four-momentum of the decaying particle,
assuming a flat phase space (i.e.\ assuming that all the combinations of daughter four-momenta that respect
energy and momentum conservation are equally probable). To reproduce the kinematics of {\ensuremath{\D^0}}\xspace mesons
produced in proton-proton collisions at a centre-of-mass energy of 7 or 8 TeV, we combined
several samples, each generated assuming a different value of the {\ensuremath{\D^0}}\xspace transverse momentum and rapidity.
The relative contribution of each sample to the final one was chosen according to the {\ensuremath{\D^0}}\xspace production cross-sections
measured in Ref.~\cite{Aaij:2013mga} as a function of these quantities. This is how the original
sample was produced.
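To make this generation step concrete, a minimal sketch of the procedure for a single decay is given below,
using ROOT's \texttt{TGenPhaseSpace}; the function name, the fixed azimuth and the handling of the phase-space
weight are illustrative simplifications rather than the exact code used for this study.
\begin{verbatim}
#include <cmath>
#include "TGenPhaseSpace.h"
#include "TLorentzVector.h"

// Illustrative generation of one D0 -> K- pi+ pi+ pi- decay
// for a given D0 transverse momentum pT and rapidity y.
void GenerateOneDecay(double pT, double y) {
   const double mD0 = 1.86484;                    // D0 mass [GeV/c^2]
   const double mT  = std::sqrt(mD0*mD0 + pT*pT); // transverse mass
   TLorentzVector D0(pT, 0., mT*std::sinh(y), mT*std::cosh(y));

   double masses[4] = {0.493677, 0.139570, 0.139570, 0.139570}; // K-, pi+, pi+, pi-
   TGenPhaseSpace gen;
   gen.SetDecay(D0, 4, masses);

   double w = gen.Generate();        // flat phase-space weight of this configuration
   // (an unweighted sample requires an accept/reject step on w, e.g. vs GetWtMax())
   TLorentzVector pPip1 = *gen.GetDecay(1);
   TLorentzVector pPim  = *gen.GetDecay(3);
   double m12 = (pPip1 + pPim).M();  // m(pi+_1 pi-); the other masses follow similarly
}
\end{verbatim}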
To produce the distorted sample, we first generated decays as described above,
and filtered them according to a selection which aims to be as close as possible to
the selection used in Ref.~\cite{LHCb_D2K3pi_ANA}.
At LHCb, the selection of a given $M$ meson decay typically involves requirements on:
\begin{itemize}
\item The momentum ($p$) and transverse momentum ($p_{\mathrm{T}}$) of the mother and daughter particles.
\item The minimum distance of a track to a proton-proton primary vertex (PV), called the impact parameter (IP).
\item The difference in $\chi^{2}$ of the closest PV reconstructed with and without the particle under
consideration ($\chi^{2}_{\mathrm{IP}}$). This particle can either be $M$ or one of its decay products.
\item The $\chi^{2}$ evaluating the quality of the $M$ decay vertex ($\chi^{2}_{\mathrm{Vtx}}$).
\item The invariant mass of the $M$ candidate $m(M)$.
\item The angle between the $M$ momentum and the line joining the PV and the decay vertex of $M$ (DIRA), which
has to be consistent with zero to ensure the $M$ candidate originates from the PV.
\item The significance of the distance between the $M$ decay vertex and the PV ($\chi^{2}_{\mathrm{FD}}$).
\item The reconstructed lifetime $\tau_M$.
\item Variables describing the isolation of the final state tracks.
\item Statistical estimators of the nature of the decay products, which combine the information from LHCb's RICH,
calorimeters and muon system (\texttt{PID($\pi$)}, \texttt{ProbNNmu}).
\end{itemize}
In the present study, only the generation phase is simulated. Therefore, only the ``true'', generator-level, $p$ and $p_{\mathrm{T}}$
of the {\ensuremath{\D^0}}\xspace and decay products are available in our samples, and the selection we apply to
produce a realistic distorted sample relies only on these variables.
We checked that the distortion we introduced in the phase space is consistent with that observed in the analysis presently
carried out by LHCb~\cite{LHCb_D2K3pi_ANA}, even though we can use only the true information,
rather than the reconstructed one, and no information on vertexing or decay topology.
To achieve this, some cuts on momenta and transverse momenta are tightened with respect to the selection
in~\cite{LHCb_D2K3pi_ANA}.
We require all the decay products to be in the acceptance of the LHCb detector (i.e. the angle between their
momentum and the nominal beam line should lie in the range 0.01--0.4\,rad), and the candidates must
satisfy the following criteria:
\begin{itemize}
\item $p_{\mathrm{T}}(D^0) > 3 $~GeV$/c$.
\item $p_{\mathrm{T}} > 0.5 $~GeV$/c$, $p > 3 $~GeV$/c$ for all the decay products.
\item $\max(p(K^-),p(\pi^+),p(\pi^+),p(\pi^-)) > 10$~GeV$/c$.
\item $\max(p_{\mathrm{T}}(K^-),p_{\mathrm{T}}(\pi^+),p_{\mathrm{T}}(\pi^+),p_{\mathrm{T}}(\pi^-)) > 1.7$~GeV$/c$.
\item $\max(p_{\mathrm{T}}(K^-),p_{\mathrm{T}}(\pi^+),p_{\mathrm{T}}(\pi^+),p_{\mathrm{T}}(\pi^-)) > 3.7$~GeV$/c$ with the hardware trigger reconstruction.
\end{itemize}
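As a hedged illustration, these generator-level requirements can be coded as follows (the container type and
function name are ours, not LHCb software):
\begin{verbatim}
#include <algorithm>
#include <vector>
#include "TLorentzVector.h"

// Illustrative generator-level selection with the thresholds listed above.
bool PassSelection(const TLorentzVector& D0,
                   const std::vector<TLorentzVector>& daughters) {
   if (D0.Pt() < 3.0) return false;                           // pT(D0) > 3 GeV/c
   double maxP = 0., maxPt = 0.;
   for (const TLorentzVector& d : daughters) {
      if (d.Theta() < 0.01 || d.Theta() > 0.4) return false;  // LHCb acceptance
      if (d.Pt() < 0.5 || d.P() < 3.0) return false;          // per-track cuts
      maxP  = std::max(maxP,  d.P());
      maxPt = std::max(maxPt, d.Pt());
   }
   if (maxP < 10.0 || maxPt < 1.7) return false;              // leading-track cuts
   return true;   // the 3.7 GeV/c hardware-trigger requirement is emulated separately
}
\end{verbatim}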
By construction, the generator-level $p$ and $p_{\mathrm{T}}$'s available in our samples do not account for the finite resolution
of the reconstruction.
Since in LHCb the momentum resolution of the offline reconstruction is very small (from 0.5\% at low momentum to 1\% at 200 GeV$/c$),
we consider that we can safely neglect it. One exception is made for the first stage of LHCb's trigger system.
This is a hardware trigger that searches for high-$p_{\mathrm{T}}$ objects, based on a partial event reconstruction carried out by the Front End
electronics~\cite{LHCb-DP-2012-004}.
Several kinds of objects are searched for, involving several trigger \textit{lines}:
muon candidates are looked for in the muon system, while clusters
due to photons or electrons are sought in the electromagnetic calorimeter. In the case of a decay like \decay{\Dz}{\Km\pip\pip\pim}, the hardware trigger
looks for high-$p_{\mathrm{T}}$ clusters ($>$ 3.7 GeV$/c$) in the
Hadron Calorimeter, whose resolution is $\sigma_{E}/E=80\%/\sqrt{E}\oplus 10\%$ with $E$ expressed in GeV. We apply the trigger cut
to particles whose $p_{\mathrm{T}}$ has been smeared in order to reproduce this resolution.
This trigger cut is applied to only a third of the events since LHCb events
in which a \decay{\Dz}{\Km\pip\pip\pim} decay is produced are often triggered on independently, due to the decay products of
the second charm hadron produced in the event (proton-proton collisions actually produce $c\bar{c}$ pairs).
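A hedged sketch of this trigger emulation follows; applying the calorimeter energy resolution as a relative
smearing of the generator-level $p_{\mathrm{T}}$ is a simplification of the actual Front End reconstruction.
\begin{verbatim}
#include <cmath>
#include <vector>
#include "TLorentzVector.h"
#include "TRandom3.h"

// Illustrative hardware-trigger emulation: smear pT according to the HCAL
// resolution sigma_E/E = 80%/sqrt(E) (+) 10%, require a cluster above 3.7 GeV/c.
bool PassHardwareTrigger(const std::vector<TLorentzVector>& daughters,
                         TRandom3& rng) {
   for (const TLorentzVector& d : daughters) {
      const double E      = d.E();                                  // [GeV]
      const double relRes = std::sqrt(0.80*0.80/E + 0.10*0.10);     // rel. resolution
      const double ptSm   = d.Pt() * (1.0 + rng.Gaus(0.0, relRes)); // smeared pT
      if (ptSm > 3.7) return true;    // one high-pT cluster suffices
   }
   return false;
}
\end{verbatim}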
In Fig.~\ref{fig:Distr_whole_NoCorr} we compare the distributions of $m_{12},~m_{23},~m_{34},~m_{123}$ and $m_{234}$
in the distorted and original samples. The selection efficiencies as a function of each of these variables
(i.e., the ratio between the distributions superimposed in Fig.~\ref{fig:Distr_whole_NoCorr}) are also shown in Fig.~\ref{fig:Ratio_whole_NoCorr}.
This illustrates the effect of the selection on the phase space.
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_whole_NoCorr}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram) and
in the distorted one (full circles). The normalisation is arbitrary.}
\label{fig:Distr_whole_NoCorr}
\end{figure}
\clearpage
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_whole_NoCorr}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ observed in the data generated for this study.
Shown are the ratios of the distributions found in the distorted and original samples. The normalisation of these distributions is arbitrary.}
\label{fig:Ratio_whole_NoCorr}
\end{figure}
\subsection{Neural network training}
We chose to use the \texttt{MLPBNN} NN provided by the \texttt{TMVA} package.
This Multilayer Perceptron (MLP) NN is trained using the BFGS method instead of a
simple back-propagation method. The definition of an MLP, and a description of both methods,
can be found in Ref.~\cite{TMVA}. Also, a Bayesian regulation technique is
employed. It is described in Ref.~\cite{Zhong:2011xm}.
We used the default configuration found in the example macro downloaded with the \texttt{TMVA} package.
For the expert reader, we specify that in this case the MLP involves only one hidden layer,
which comprises $N+10$ neurons, where $N$ is the number of input variables ($m_{12},~m_{23},~m_{34},~m_{123}$ and $m_{234}$ in our case).
The neuron activation function is $\tanh$. Input variables are linearly scaled to lie within $\left[-1,1\right]$,
and 600 training cycles are performed. An overtraining test is run every 5 cycles.
All the other settings can be found in Table~19 of Ref.~\cite{TMVA}.
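For concreteness, a minimal booking consistent with this configuration is sketched below, following the standard
\texttt{TMVAClassification} example macro; the job and file names are illustrative, and recent ROOT versions route
the data-related calls through a \texttt{TMVA::DataLoader} rather than the \texttt{Factory}.
\begin{verbatim}
#include "TFile.h"
#include "TTree.h"
#include "TMVA/Factory.h"
#include "TMVA/Types.h"

// Illustrative MLPBNN booking: distorted sample as "signal",
// original sample as "background".
void TrainMLPBNN(TTree* distorted, TTree* original) {
   TFile* out = TFile::Open("TMVA_K3pi.root", "RECREATE");
   TMVA::Factory factory("K3piEff", out, "!V:!Silent:AnalysisType=Classification");

   for (const char* v : {"m12", "m23", "m34", "m123", "m234"})
      factory.AddVariable(v, 'F');

   factory.AddSignalTree(distorted, 1.0);
   factory.AddBackgroundTree(original, 1.0);
   factory.PrepareTrainingAndTestTree("", "SplitMode=Random:NormMode=NumEvents");

   // tanh neurons, one hidden layer of N+10 nodes, inputs scaled to [-1,1]
   // (VarTransform=N), 600 BFGS cycles, overtraining test every 5 cycles,
   // Bayesian regulator enabled.
   factory.BookMethod(TMVA::Types::kMLP, "MLPBNN",
      "H:!V:NeuronType=tanh:VarTransform=N:NCycles=600:HiddenLayers=N+10:"
      "TestRate=5:TrainingMethod=BFGS:UseRegulator");

   factory.TrainAllMethods();
   factory.TestAllMethods();
   factory.EvaluateAllMethods();
   out->Close();
}
\end{verbatim}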
It took two hours to train this MLP to distinguish between decays from the distorted and original samples (both containing 230\,000 events),
using a machine equipped with an Intel i7-640M 2.8~GHz dual-core processor.
Two additional distorted and original samples, generated in the same way as those used for training, and of
identical size, were produced to test the resulting NN score. The distribution of this score in both
samples is shown in Fig.~\ref{fig:Score_K3pi_Sel1}, which also displays the ratio of these distributions,
to which we fitted a parameterised efficiency, $\epsilon\left(s\right)$, where $s$ is the NN score.
For this fit we used a fifth-order polynomial. The $\epsilon\left(s\right)$ function should match
$\epsilon\left(m_{12},m_{23},m_{34},m_{123},m_{234}\right)$, the multidimensional efficiency we aim to determine.
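Schematically, with the score distributions of the two test samples stored in histograms, the
$\epsilon\left(s\right)$ parameterisation is obtained as follows (histogram names are illustrative):
\begin{verbatim}
#include "TF1.h"
#include "TH1D.h"

// Illustrative epsilon(s) determination from the two test-sample
// score distributions.
TF1* FitEfficiency(TH1D* hScoreDistorted, TH1D* hScoreOriginal) {
   TH1D* hEff = (TH1D*)hScoreDistorted->Clone("hEff");
   hEff->Divide(hScoreOriginal);      // bin-by-bin ratio = efficiency vs score s
   hEff->Fit("pol5", "Q");            // fifth-order polynomial parameterisation
   return hEff->GetFunction("pol5");  // the fitted epsilon(s)
}
\end{verbatim}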
\begin{figure}[tb]
\begin{center}
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Score_K3pi_Sel1}
\vspace*{-0.5cm}
\end{center}
\caption{Distributions and distribution ratio showing (a) the NN score $s$ in the distorted (red) and original (black) \decay{\Dz}{\Km\pip\pip\pim} samples, and (b)
the selection efficiency as a function of $s$ (with an arbitrary normalisation). The fit leading to $\epsilon(s)$ is superimposed on
the measured efficiencies.}
\label{fig:Score_K3pi_Sel1}
\end{figure}
\subsection{Results}
\label{subsec:ResultsK3pi}
To test the efficiency obtained above, we weighted each \decay{\Dz}{\Km\pip\pip\pim} decay $i$ in the distorted test sample with
$\omega_i = 1/\epsilon\left(s_i\right)$. The result can be found in Fig.~\ref{fig:Distr_whole}.
It is the same as Fig.~\ref{fig:Distr_whole_NoCorr} with distributions from the re-weighted distorted sample superimposed.
One can see that these distributions match
closely those observed in the original sample, before the distortions due to the selection were imposed.
The ratios in Fig.~\ref{fig:Ratio_whole} are consistent with 1 over the entire $m_{12},~m_{23},~m_{34},~m_{123}$ and $m_{234}$ spectra,
unlike the un-weighted ratios in Fig.~\ref{fig:Ratio_whole_NoCorr}, which show clear distortions.
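In sketch form, this weighting step reads as follows; the \texttt{Event} container is hypothetical, and the
\texttt{TMVA::Reader} is assumed to have the five masses registered and the \texttt{MLPBNN} weight file booked
under that tag.
\begin{verbatim}
#include <vector>
#include "TF1.h"
#include "TH1D.h"
#include "TMVA/Reader.h"

struct Event { float m12, m23, m34, m123, m234; };  // hypothetical container

// Illustrative per-decay correction: omega_i = 1/epsilon(s_i).
// 'vars' points to the five floats registered with reader.AddVariable(...).
void FillCorrected(const std::vector<Event>& sample, TMVA::Reader& reader,
                   float* vars, TF1* eps, TH1D* hM12Corrected) {
   for (const Event& ev : sample) {
      vars[0] = ev.m12;  vars[1] = ev.m23;  vars[2] = ev.m34;
      vars[3] = ev.m123; vars[4] = ev.m234;
      const double s = reader.EvaluateMVA("MLPBNN");  // NN score of this decay
      const double w = 1.0 / eps->Eval(s);            // omega_i = 1/epsilon(s_i)
      hM12Corrected->Fill(ev.m12, w);  // corrected yield = sum of weights per bin
   }
}
\end{verbatim}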
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_whole}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one (full black circles) and in the distorted sample where the decays have been re-weighted using the $\omega_i$ weights (red),
as explained in the text. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_whole}
\end{figure}
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_whole}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study.
Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in the text. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Ratio_whole}
\end{figure}
Consequently, the MLP we trained produces a single variable $s$ that fully encompasses the 5-dimensional information
on the distortion of the ($m_{12},m_{23},m_{34},m_{123},m_{234}$) space.
The $\epsilon\left(s\right)$ efficiency accounts simultaneously for the 5 individual
efficiencies as a function of $m_{12},~m_{23},~m_{34},~m_{123}$ and $m_{234}$. The yield in each bin of the corrected distributions in
Fig.~\ref{fig:Distr_whole} is calculated as the sum of the $\omega_i$ weights over
all the events that lie in this bin. In principle, this corrected yield can be accurate even if $\epsilon\left(s\right)$
is not an accurate evaluation of $\epsilon\left(m_{12},m_{23},m_{34},m_{123},m_{234}\right)$ everywhere in the
$\left(m_{12},m_{23},m_{34},m_{123},m_{234}\right)$ space, since local deviations can compensate each other in the sum. Also, the evaluation
could be wrong in regions containing too few events for the effect to appear significantly
on Figs.~\ref{fig:Distr_whole} and~\ref{fig:Ratio_whole}. To constrain this possibility, we repeated the
test of Figs.~\ref{fig:Distr_whole} and~\ref{fig:Ratio_whole} in various restricted regions of the $\left(m_{12},m_{23},m_{34},m_{123},m_{234}\right)$
space. The results are on Figs.~\ref{fig:Distr_m12_1} to~\ref{fig:Ratio_m234_4} of Appendix~\ref{App:AppA}.
The corrected distributions once more match the original ones. Note that
we still use the efficiency from Fig.~\ref{fig:Score_K3pi_Sel1} for these corrections. The polynomial's parameters were not
re-evaluated by performing region-specific fits or new trainings.
We conclude that $\epsilon\left(s\right)$ = $\epsilon\left(m_{12},m_{23},m_{34},m_{123},m_{234}\right)$,
in the limit of the precision with which it can be verified with our data.
We reach the same conclusion when applying the same technique to correct an alternative distorted sample,
obtained with tighter cuts and therefore showing stronger distortions in the phase space.
We tightened one cut on the \decay{\Dz}{\Km\pip\pip\pim} decay products: $p_{\mathrm{T}} > 0.6 $~GeV$/c$.
Also, the hardware trigger selection is applied to all events instead of only a third.
The results we obtained can be judged with the help of Figs.~\ref{fig:Distr_whole_Sel2} and~\ref{fig:Ratio_whole_Sel2}.
What we obtained in particular regions of the phase space is shown on Figs.~\ref{fig:Distr_m12_1_Sel2} to~\ref{fig:Ratio_m234_4_Sel2} in
Appendix~\ref{App:AppA}.
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Distr_whole_Sel2}
\vspace*{-0.5cm}
\caption{Distributions of $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the original sample (histogram),
in the distorted one obtained with a tighter selection (full black circles) and in the same sample where the decays have
been re-weighted using the $\omega_i$ weights (red),
as explained in the text. The absolute normalisation is arbitrary when the correction is not applied and natural when it is applied (red).}
\label{fig:Distr_whole_Sel2}
\end{figure}
\begin{figure}[tb]
\hspace*{-3cm}
\includegraphics[width=1.4\linewidth]{Ratio_whole_Sel2}
\vspace*{-0.5cm}
\caption{Efficiency in $m_{12}$, $m_{23}$, $m_{34}$, $m_{123}$ and $m_{234}$ in the data generated for this study,
with a tighter selection for the distorted sample. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in the text. The absolute normalisation is arbitrary when the correction is not applied and natural
when it is applied (red).}
\label{fig:Ratio_whole_Sel2}
\end{figure}
\section{\mbox{Application to the study of {\bf\boldmath \decay{\Bz}{\Kstarz(\rightarrow\Kp\pim)\mup\mun}}}}
\label{sec:AppliedToKstMuMu}
We performed a second case study to explore the potential of MVAs to treat multidimensional efficiencies.
It is based on the decay \decay{\Bz}{\Kstarz(\rightarrow\Kp\pim)\mup\mun}. A similar procedure to that described in Sect.~\ref{sec:AppliedToDK3pi}
has been carried out for that purpose.
\subsection{Sample generation }
We generated a distorted and an original sample, both containing 600\,000 decays.
This is smaller than the sample used by the LHCb collaboration for the result in Ref.~\cite{LHCb_KstMuMu_conf}.
Because the study of this mode is one of LHCb's priorities, it was actually acceptable to
produce as many as 1.406M reconstructed and selected events.
The \texttt{ROOT} \texttt{TGenPhaseSpace} class is used once more to generate decays.
The kinematics of the $B^{0}$ meson is derived from the differential production
cross-section as a function of transverse momentum and rapidity, measured in proton-proton
collisions at a center-of-mass energy of 7 TeV~\cite{Aaij:2013noa}.
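A minimal sketch of how such a cascade can be generated with \texttt{TGenPhaseSpace} (masses in GeV$/c^2$; in this illustration the $K^{*0}$ is produced at its nominal mass, without a Breit--Wigner width, and the $B^{0}$ four-momentum is assumed to be drawn beforehand from the measured spectrum):
\begin{verbatim}
// B0 -> K*0 mu+ mu-, then K*0 -> K+ pi- (pure phase space).
TLorentzVector pB0;             // to be drawn from the measured pT/y spectrum
double mDau[3] = {0.892, 0.10566, 0.10566};   // K*0 (nominal), mu+, mu-
TGenPhaseSpace decayB;
decayB.SetDecay(pB0, 3, mDau);
decayB.Generate();              // returns a weight (accept/reject on it
                                // to obtain unweighted events)
TLorentzVector* pKst = decayB.GetDecay(0);

double mKpi[2] = {0.49368, 0.13957};          // K+, pi-
TGenPhaseSpace decayKst;
decayKst.SetDecay(*pKst, 2, mKpi);
decayKst.Generate();
TLorentzVector* pK  = decayKst.GetDecay(0);
TLorentzVector* pPi = decayKst.GetDecay(1);
\end{verbatim}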
To produce the distorted sample, we applied a selection as similar
as possible to that in~\cite{LHCb_KstMuMu_conf}. The following criteria are used (a sketch implementing the kinematic part follows the list):
\begin{itemize}
\item All the decay products must be in the acceptance of the LHCb detector.
\item One of the muons should satisfy $p_{\mathrm{T}}(\mu) > 1.8 $~GeV$/c$ in order to reproduce the cut used by the
muon-specific line which dominates the hardware trigger in the case of decays such as \decay{\Bz}{\Kstarz(\rightarrow\Kp\pim)\mup\mun}.
\item max($p(K^+),p(\pi^-),p(\mu^+),p(\mu^-)$)$ > 10$~GeV$/c$.
\item max($p_{\mathrm{T}}(K^+),p_{\mathrm{T}}(\pi^-),p_{\mathrm{T}}(\mu^+),p_{\mathrm{T}}(\mu^-)$)$ > 0.2$~GeV$/c$.
\item $p_{\mathrm{T}}(B^0) > 4 $~GeV$/c$.
\item $p(B^0) > 40 $~GeV$/c$.
\item max($IP(K^+),IP(\pi^-),IP(\mu^+),IP(\mu^-)$)$ > 1$~mm.
\item For all decay products, $\chi^{2}_{\mathrm{IP}}>9$.
\end{itemize}
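The kinematic part of this selection translates directly into code; a sketch (momenta in GeV$/c$; the acceptance, $IP$ and $\chi^{2}_{\mathrm{IP}}$ requirements are emulated as described below):
\begin{verbatim}
// Kinematic cuts of the selection listed above (momenta in GeV/c).
bool passKinematics(const TLorentzVector& B0,  const TLorentzVector& K,
                    const TLorentzVector& pi,  const TLorentzVector& mup,
                    const TLorentzVector& mum) {
   if (TMath::Max(mup.Pt(), mum.Pt()) < 1.8)   return false; // trigger muon
   double pMax  = TMath::Max(TMath::Max(K.P(),  pi.P()),
                             TMath::Max(mup.P(), mum.P()));
   double ptMax = TMath::Max(TMath::Max(K.Pt(), pi.Pt()),
                             TMath::Max(mup.Pt(), mum.Pt()));
   if (pMax < 10. || ptMax < 0.2)              return false;
   if (B0.Pt() < 4. || B0.P() < 40.)           return false;
   return true;               // acceptance and IP cuts applied separately
}
\end{verbatim}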
Only momenta and transverse momenta are directly accessible in the data generated with the
\texttt{TGenPhaseSpace} class. The cuts on these variables, even tighter than those
in~\cite{LHCb_KstMuMu_conf}, could not reproduce the distortion of the phase space variables
actually observed in this analysis (Fig.~\ref{fig:Fig-KstMuMu-1}). This was true in particular of the
acceptance as a function of $\phi$, which was flatter.
In the hope of reproducing a comparable distortion, we included cuts that affect angular variables
more directly. This is the reason for the presence of cuts on $IP$ and $\chi^{2}_{\mathrm{IP}}$ in
the above list. The $IP$ was determined based on the angle between the \decay{\Bz}{\Kstarz(\rightarrow\Kp\pim)\mup\mun} decay products and
the momentum of the $B^{0}$, assuming the latter travelled a typical distance of 8~mm before decaying.
We approximated $\chi^{2}_{\mathrm{IP}}$ by the squared ratio between $IP$ and its uncertainty.
The latter was computed based on the transverse momentum of the particle under consideration and on
the typical uncertainty observed in LHCb. We used the uncertainty
quoted by LHCb in recent publications (see for instance~\cite{LHCb_KstMuMu_conf}),
$\sigma_{IP}=(15+29/\mbox{$p_{\rm T}$}\xspace)\ensuremath{{\,\upmu\rm m}}\xspace$ with $\mbox{$p_{\rm T}$}\xspace$ in GeV$/c$. These cuts did not yet suffice to reproduce the amplitude of the distortion in the
$\phi$ distribution. We obtained this by discarding decays which do not satisfy $\left|m(K\pi)-892\right|>0.09\times\left(q^2/(19\times10^{6})\,\sin^2(\phi)\times 100\right)$, where $m(K\pi)$ and $q$ are expressed in MeV$/c^{2}$. With this set of criteria,
the distortion of the ($q^2$, cos$\theta_l$, cos$\theta_K$, $\phi$) space, although not identical, is similar to that
observed in Ref.~\cite{LHCb_KstMuMu_conf}. This can be seen on Figs.~\ref{fig:Ratio_q2_1_NoCorr} and~\ref{fig:Ratio_q2_19_NoCorr}, which show the
distortion in our generated data, in extreme regions of $q^{2}$: $0.1 < q^2 < 0.98$ GeV$^{2}/c^4$ and
$18.0 < q^2 < 19.0$ GeV$^{2}/c^4$. They can be compared with Fig.~\ref{fig:Fig-KstMuMu-1} to notice that the evolution of the distortion
between these two regions is reproduced. We checked that the same conclusion holds in all
$q^{2}$ regions by comparing Figs.~\ref{fig:Ratio_q2_1} to~\ref{fig:Ratio_q2_19} (see Appendix~\ref{App:AppB}) with the equivalent figures in~\cite{LHCb_KstMuMu_ANA}.
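A sketch of this emulation, following our reading of the recipe above (distances in mm):
\begin{verbatim}
// IP from the angle to the B0 flight direction (8 mm flight assumed);
// chi2_IP = (IP/sigma_IP)^2 with sigma_IP = (15 + 29/pT) um, pT in GeV/c.
double chi2IP(const TLorentzVector& dau, const TLorentzVector& B0) {
   const double flight = 8.0;                            // mm (typical)
   double ip      = flight * TMath::Sin(dau.Vect().Angle(B0.Vect()));
   double sigmaIP = (15. + 29./dau.Pt()) * 1.e-3;        // um -> mm
   return (ip/sigmaIP)*(ip/sigmaIP);
}
\end{verbatim}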
\begin{figure}[tb]
\begin{center}
\hspace*{-1.3cm}
\includegraphics[width=1.1\linewidth]{Fig-KstMuMu-1}
\vspace*{-0.3cm}
\end{center}
\caption{Angular efficiency in cos$\theta_l$, cos$\theta_K$ and $\phi$, as determined from a principal moment
analysis of simulated three-body \decay{\Bz}{\Kstarz(\rightarrow\Kp\pim)\mup\mun} phase-space decays (black solid and red long-dashed lines), and compared to simulated
data (histograms). The efficiency is shown for the regions: $0.1 < q^2 < 0.98$ GeV$^{2}/c^4$ (black) and
$18.0 < q^2 < 19.0$ GeV$^{2}/c^4$ (red). The absolute normalisation is arbitrary. This figure is reproduced from Ref.~\cite{LHCb_KstMuMu_conf}.}
\label{fig:Fig-KstMuMu-1}
\end{figure}
\begin{figure}[tb]
\begin{center}
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_1_NoCorr}
\vspace*{-0.5cm}
\end{center}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $0.1 < q^2 < 0.98$ GeV$^{2}/c^4$. The absolute normalisation is arbitrary.}
\label{fig:Ratio_q2_1_NoCorr}
\end{figure}
\begin{figure}[tb]
\begin{center}
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_19_NoCorr}
\vspace*{-0.5cm}
\end{center}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $18.0 < q^2 < 19.0$ GeV$^{2}/c^4$. The absolute normalisation is arbitrary.}
\label{fig:Ratio_q2_19_NoCorr}
\end{figure}
\subsection{Neural network training}
We trained the \texttt{MLP} NN provided by the \texttt{TMVA} package.
It is the same NN as that described in Sect.~\ref{sec:AppliedToDK3pi}, modulo
two differences: the classical back-propagation technique described in~\cite{TMVA}
is used instead of the BFGS method, and we use no Bayesian regulator.
The distortion of the ($q^2$, cos$\theta_l$, cos$\theta_K$, $\phi$) space is more challenging
to correct than that of the ($m_{12},~m_{23},~m_{34},~m_{123}$, $m_{234}$) space in the previous section.
Indeed, as can be seen on Figs.~\ref{fig:Fig-KstMuMu-1}, \ref{fig:Ratio_q2_1_NoCorr} and~\ref{fig:Ratio_q2_19_NoCorr},
the distortion of the cos$\theta_l$ and $\phi$ distributions is
symmetric in these variables. As a consequence, the discriminative power of these variables is low:
the distorted and original samples cannot be distinguished
via a strong ``preference'' of one of them for the higher or the lower end of the cos$\theta_l$ or $\phi$
distributions. In other words, the efficiencies with which distorted and original events would be selected
by a cut on cos$\theta_l$ or $\phi$ would neither differ nor vary much as a function of the value of this cut.
Another difficulty complicates the determination of the multidimensional efficiency
$\epsilon\left(q^2, \cos\theta_l, \cos\theta_K, \phi\right)$ with the approach presented in this
document. It stems from the fast variation as a function of $q^{2}$ of the distortion in the 3 other variables
and the particular pattern it follows:
at low $q^{2}$, the distortion is strong in cos$\theta_l$ and
very mild in $\phi$, while the opposite is observed at high $q^{2}$ (this pattern
can be seen again on Figs.~\ref{fig:Fig-KstMuMu-1}, \ref{fig:Ratio_q2_1_NoCorr} and~\ref{fig:Ratio_q2_19_NoCorr}).
There is therefore a more complicated
structure to be understood by the NN. Also, the fact that in some regions in $q^{2}$ only
two variables are available to discriminate the distorted sample against the original one is a difficulty in
itself. Moreover, it is not trivial for the NN to adapt to this varying behavior since it is trained using the whole
distorted and original samples. It is not instructed to treat differently the various $q^{2}$ regions.
This difficulty affects the determination of $\epsilon\left(q^2, \cos\theta_l, \cos\theta_K, \phi\right)$ in
the whole sample and becomes more acute when it comes to determining the multidimensional efficiency in restricted regions
of $q^2$.
To overcome these difficulties, the input variables mapped to the NN's first layer are
not directly $q^2$, cos$\theta_l$, cos$\theta_K$, and $\phi$. We replace cos$\theta_l$ and $\phi$ by two transformed,
non-symmetric variables: $e^{-\theta_l^2/4}$ and $e^{-\sin^2(\phi)/4}$. Also, unlike in Sect.~\ref{sec:AppliedToDK3pi},
we had to optimise the NN settings instead of blindly using the ones provided by default by \texttt{TMVA}.
We devoted a limited effort (a day of work) to this. In practice:
\begin{itemize}
\item We further transformed the input variables so as to make their distributions Gaussian, which helps
the decorrelation algorithms used by the NN~\cite{TMVA}.
\item We used two hidden layers instead of only one. The first and second layers comprise 14 and 6 neurons,
respectively, to be compared with 4, the number of input variables.
\item The number of training cycles was raised to 7000.
\item Overtraining tests were run every 5 cycles. Each time, the convergence
is also tested. If 10 consecutive tests fail to observe an improvement of the error function,
the training is considered optimal and stopped.
\end{itemize}
All the other settings can be found in Table~19 of Ref.~\cite{TMVA}.
With these settings, it took $\sim$12 hours to train the NN on the same machine as in Sect.~\ref{sec:AppliedToDK3pi}.
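For reference, the corresponding booking could look as follows (a sketch assuming a \texttt{Factory} set up as in the previous section; the option names follow the \texttt{TMVA} manual of that time, and the transformed variables are passed as expressions):
\begin{verbatim}
// Tuned MLP for B0 -> K*0 mu mu: Gaussianised inputs ("G"), two hidden
// layers (14 and 6 neurons), back-propagation, no Bayesian regulator,
// 7000 cycles, convergence/overtraining tests every 5 cycles.
factory.AddVariable("q2");
factory.AddVariable("ctk");                                 // cos(theta_K)
factory.AddVariable("tl_tr  := exp(-pow(thetal,2)/4.)");    // replaces cos(theta_l)
factory.AddVariable("phi_tr := exp(-pow(sin(phi),2)/4.)");  // replaces phi
factory.BookMethod(TMVA::Types::kMLP, "MLP",
   "HiddenLayers=14,6:NeuronType=tanh:VarTransform=G:"
   "NCycles=7000:TestRate=5:TrainingMethod=BP:"
   "UseRegulator=False:ConvergenceTests=10");
\end{verbatim}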
\subsection{Results}
\label{subsec:resultsB2KstMuMu}
We show on Fig.~\ref{fig:Score_KstMuMu} the distribution of the NN score $s$ obtained in original and distorted test samples,
generated independently from the training samples. This figure also shows $\epsilon\left(s\right)$, the parameterised
efficiency fitted to the ratio of the original and distorted distributions.
We tested this efficiency in the same way as in Sect.~\ref{subsec:ResultsK3pi}.
Fig.~\ref{fig:Ratio_whole_KstMuMu} shows the efficiency in $q^2$, cos$\theta_l$, cos$\theta_K$ and $\phi$. Also shown are the efficiencies
obtained with the corrected distorted sample, in which each decay $i$ is weighted by $\omega_i = 1/\epsilon\left(s_i\right)$.
The correction works precisely: in no bin does the corrected ratio differ from 1 by more than a few percent. This difference
is never statistically significant. The latter statement also holds in
specific regions of the $q^{2}$ distribution, as can be seen on Fig.~\ref{fig:Ratio_q2_1_dem}, which
shows this efficiency in the region $0.1 < q^2 < 0.98$ GeV$^{2}/c^4$ and on Fig.~\ref{fig:Ratio_q2_19_dem}
which focusses on the region $18.0 < q^2 < 19.0$ GeV$^{2}/c^4$. This can be compared to what was obtained by the
analysis reported in~\cite{LHCb_KstMuMu_conf} (Fig.~\ref{fig:Fig-KstMuMu-1}),
with the principal moment analysis briefly described in Sect.~\ref{sec:TechSummClass}. The correction we obtained is less
precise statistically --- which is natural with less than half the statistics --- but is of comparable accuracy. This is
a promising result since it was obtained with very limited effort and MVA expertise. The quality
of the correction was confirmed in 19 $q^{2}$ regions, as can be seen on Figs.~\ref{fig:Ratio_q2_1} to~\ref{fig:Ratio_q2_19} in
Appendix~\ref{App:AppB}. We recall that
in each $q^{2}$ region the weights are still calculated from the $\epsilon\left(s\right)$ function fitted to the efficiency
shown on Fig.~\ref{fig:Score_KstMuMu}. No specific training and no
re-evaluation of the polynomial is performed in individual regions.
\begin{figure}[tb]
\begin{center}
\hspace*{-1.5cm}
\includegraphics[width=1.2\linewidth]{Score_KstMuMu}
\vspace*{-0.5cm}
\end{center}
\caption{Distributions and distribution ratio showing (a) the NN score $s$ in the distorted (red) and original (black) \decay{\Bz}{\Kstarz(\rightarrow\Kp\pim)\mup\mun} samples, and (b)
the selection efficiency as a function of $s$ (with an arbitrary normalisation). The fit providing $\epsilon\left(s\right)$ is superimposed on
the measured efficiencies.}
\label{fig:Score_KstMuMu}
\end{figure}
\begin{figure}[tb]
\begin{center}
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_whole_KstMuMu}
\vspace*{-0.5cm}
\end{center}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study.
Shown are the ratios of the distributions found in the distorted and original samples, with no correction (black) and for
decays re-weighted using $\omega_i$ weights (red) as explained in the text.
The absolute normalisation is arbitrary when the correction is not applied, natural when it is (red).}
\label{fig:Ratio_whole_KstMuMu}
\end{figure}
\begin{figure}[tb]
\begin{center}
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_1}
\vspace*{-0.5cm}
\end{center}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $0.1 < q^2 < 0.98$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in the text. The absolute normalisation is arbitrary when the correction is not applied, natural
when it is (red).}
\label{fig:Ratio_q2_1_dem}
\end{figure}
\clearpage
\begin{figure}[tb]
\begin{center}
\hspace*{-2.3cm}
\includegraphics[width=1.3\linewidth]{Ratio_q2_19}
\vspace*{-0.5cm}
\end{center}
\caption{Efficiency in $q^{2}$, cos$\theta_l$, cos$\theta_K$ and $\phi$ in the data generated for the present study,
in the region $18.0 < q^2 < 19.0$ GeV$^{2}/c^4$. Shown are the ratios of the distributions found in the distorted and
original samples, with no correction (black) and for decays re-weighted using $\omega_i$ weights (red)
as explained in the text. The absolute normalisation is arbitrary when the correction is not applied, natural
when it is (red).}
\label{fig:Ratio_q2_19_dem}
\end{figure}
As in Sect.~\ref{sec:AppliedToDK3pi}, we conclude that the $\epsilon\left(s\right)$
obtained with the approach proposed in this document can be used to evaluate the efficiency
at a given point of the decay phase space.
\section{Conclusion}
\label{sec:Conclusion}
We proposed a novel approach to the determination of multidimensional efficiencies
and explored its potential with two realistic examples, typical of the needs
of modern Heavy Flavor physics measurements. We used Neural Networks to characterize
the differences introduced in a 4 or 5-dimensional phase space by typical
reconstruction and selection criteria. These tools were trained and tested using
simulated samples of similar size to the samples used by the LHCb collaboration
for published (or soon-to-be-published) measurements of the decay modes \decay{\Dz}{\Km\pip\pip\pim} and \decay{\Bz}{\Kstarz(\rightarrow\Kp\pim)\mup\mun}.
In both test cases studied in this document, the NN score makes it possible to correct
the selected samples in order to reproduce the phase space distributions
observed in samples that have not undergone any selection. This is evidence that
the approach developed here allows the efficiency to be evaluated at any point of
a multidimensional phase space. Compared to elaborate techniques like the
principal moment method used in~\cite{LHCb_KstMuMu_conf}, this new approach seems
less precise statistically although as accurate. It would probably suffice for many
measurements that do not require the same level of precision as the angular
analysis of \decay{\Bz}{\Kstarz(\rightarrow\Kp\pim)\mup\mun} reported in~\cite{LHCb_KstMuMu_conf}. In such cases, it would
represent a considerable gain in working time, since satisfactory results
can be achieved with minimal skills and knowledge of MVA techniques, using
packages already routinely used within the HEP community, with no need for any
elaborate optimisation, more than a few days of work, or more than
a few hours of CPU time. This is the conclusion of this study. It also
suggests that with more expertise and cutting-edge MVA techniques, a precise
treatment of multidimensional efficiencies is possible and could be applied
to measurements of primary importance.
Other applications of MVAs to HEP, besides signal vs. background discrimination, can
be considered in the future. Selection efficiencies often rely on simulations that
match real data only imperfectly and require systematic Data/MC comparisons to correct
the simulation. When the correction must be applied to many variables, using an MVA to
compare data with MC and derive ``automatically'' a unique number to reweight the
simulation would be a valuable tool.
In a given analysis, if one knows the efficiency at each point of the
phase space, the signal events found in data can be corrected to obtain the
distributions of interest before any selection bias, without having to
use an imperfect simulation. This is not possible in cases where a rare decay is searched
for. There, very few signal events are found, if any, and there is nothing to re-weight. The technique proposed
in this document could be used to guide the selection design in order to obtain
a flat $\epsilon(s)$. For that purpose, one could re-train the MVA developed
for the signal selection with a weight applied to signal training events, derived from the $\epsilon\left(s\right)$
observed when the original selection is applied.
If this goal is achieved, even a simulation that does not reproduce correctly
the real phase space of the decay can be used to determine the efficiency. Upper limits
are often normalised to the branching fraction of a well-known
non-suppressed decay whose decay products are identical to the signal's. It differs
from the signal only by its phase space. Ensuring for both modes a flat
efficiency across the phase space would make their efficiency ratio closer to one and
more robust against systematic uncertainties.
\section{Introduction}
\label{sec:Introduction}
With the advent of the LHC and, a few years before, of Flavor
Factories, High Energy Physics (HEP) entered an era of high statistical precision.
More and more differential measurements of particle collisions
or decays are now possible. For instance, stringent
constraints can be imposed on the models predicting the dynamics of the decay \decay{\Bz}{\Kstarz(\rightarrow\Kp\pim)\mup\mun} by
measuring the distribution of this decay in the four-dimensional
space defined by $q^2$, cos$\theta_l$, cos$\theta_K$ and $\phi$, a
set of independent variables that provide a full description of the
decay dynamics and that are defined in Sect.~\ref{sec:TechSummClass}. Deviations from the
Standard Model (SM) predictions in specific regions of the phase space can be
detected this way and signal the action of physics beyond the SM.
Loop-mediated rare decays like \decay{\Bz}{\Kstarz(\rightarrow\Kp\pim)\mup\mun} are particularly sensitive to this New Physics.
Many models in which particles do not couple to the weak interaction via only their
left chiral component predict angular distributions that differ from the SM ones.
More detail can be found, for instance, in Ref.~\cite{LHCb_KstMuMu_conf}.
In analyses of the kind introduced above, one has to account for the distortion of the phase space
caused by reconstruction and selection criteria. In the example above, this is the distortion of the
($q^2$, cos$\theta_l$, cos$\theta_K$, $\phi$) distribution. The most straightforward method would be to use a
sample of simulated \mbox{\decay{\Bz}{\Kstarz(\rightarrow\Kp\pim)\mup\mun}} phase-space decays.
A four-dimensional binning could be defined, and the efficiency in each bin determined
by the ratio between the yield of reconstructed and selected events and the yield of generated events.
If this determination is made in terms of all the kinematic variables describing the
decay and if the granularity of the binning is fine enough, the result does not
depend on the distributions assumed by the simulation for these variables. This assumption often
relies on decay models poorly predicted by theory. However, this method would require generating
a huge sample. Even with only 10 bins per dimension, 10\,000 four-dimensional bins have to be defined.
It typically takes millions of generated events to determine the efficiency in each bin with less than a 5\% uncertainty.
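To make this statement concrete with a rough binomial estimate of ours (not taken from any specific analysis): if $N_{\mathrm{gen}}$ events are generated in a bin of true efficiency $\epsilon$, the relative uncertainty on the measured efficiency is
\[
\frac{\sigma_{\epsilon}}{\epsilon} \simeq \sqrt{\frac{1-\epsilon}{\epsilon\,N_{\mathrm{gen}}}}\,,
\]
so that, for $\epsilon \simeq 0.5$, a 5\% precision requires $N_{\mathrm{gen}} \gtrsim 400$ per bin, i.e. some $4\times10^{6}$ generated events for $10^{4}$ bins.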
Instead, sophisticated methods are available to account for efficiencies that depend simultaneously on several
variables and that do not factorise in these variables (see Sect.~\ref{sec:TechSummClass}).
We propose in this document to explore the potential of another approach, suggested by
the rapid progress in machine learning and multivariate analysis observed in the last decade.
Techniques such as Neural Networks (NN)~\cite{Haykin:2009} or Boosted Decision Trees (BDT)~\cite{Roe:2004na},
among others,
can now detect with high sensitivity differences between two samples of events characterised by a
large number of variables. They are routinely used nowadays in particle physics measurements
to tell signal from background events. In other words, these techniques are effective at comparing
$n$-dimensional distributions to detect even subtle differences between signal and background samples
(that traditional ``by eye'' studies would miss) and discriminate between event types via the NN or BDT score,
a single variable incorporating all the information
found in $n$ dimensions. They should naturally be sensitive also to distortions introduced in the phase space of
particle collisions or decays by reconstruction and selection criteria.
We do not take for granted that multivariate techniques reach the same level of precision as
methods such as the principal moment analysis described in~\cite{LHCb_KstMuMu_conf}, nor that such a result can be
obtained without a certain expertise in MVA techniques or without a time-consuming optimisation of
the parameters that rule the technique's behavior and performance. However, the rate at which MVA
techniques have been progressing recently, their growing availability to basic users in the form of user-friendly packages, and
the increasing typical expertise of high energy physicists suggest that MVAs might soon become
very valuable and easy-to-use tools for multidimensional efficiency determination. It is therefore
interesting to start exploring the potential of this approach.
This document only starts this exploration, with the ambition to answer the following question:
what can be achieved if multivariate techniques are used in a very simple manner, i.e.
without devoting more than a few hours to their optimisation --- therefore using
generic settings, like the default ones found in packages like~\cite{TMVA}? In the case
of analyses which do not require the same precision as the analysis of \mbox{\decay{\Bz}{\Kstarz(\rightarrow\Kp\pim)\mup\mun}} introduced above,
is this approach sufficient? Also, our goal is not to provide quantitative results, but an illustration of
the potential of this approach.
In this document, we will first describe briefly a typical technique used to treat
multidimensional efficiencies, and describe the approach we propose (Sect.~\ref{sec:TechSumm}).
Then, we will apply it to a first test case, involving the decay \decay{\Dz}{\Km\pip\pip\pim} (Sect.~\ref{sec:AppliedToDK3pi}).
This decay can be used to improve our knowledge of $D$-meson mixing. For that purpose, one needs
to determine the selection efficiency across the space defined by five independent variables in terms
of which the decay dynamics can be expressed. Another test case will involve \mbox{\decay{\Bz}{\Kstarz(\rightarrow\Kp\pim)\mup\mun}} (Sect.~\ref{sec:AppliedToKstMuMu}).
The results observed in these two cases will be summarized and discussed in the conclusion (Sect.~\ref{sec:Conclusion}).
\texttt{\textbackslash mbarn} & \ensuremath{\rm \,mb}\xspace & \texttt{\textbackslash mub} & \ensuremath{{\rm \,\upmu b}}\xspace & \texttt{\textbackslash nb} & \ensuremath{\rm \,nb}\xspace \\
\texttt{\textbackslash invnb} & \ensuremath{\mbox{\,nb}^{-1}}\xspace & \texttt{\textbackslash pb} & \ensuremath{\rm \,pb}\xspace & \texttt{\textbackslash invpb} & \ensuremath{\mbox{\,pb}^{-1}}\xspace \\
\texttt{\textbackslash fb} & \ensuremath{\mbox{\,fb}}\xspace & \texttt{\textbackslash invfb} & \ensuremath{\mbox{\,fb}^{-1}}\xspace & \\
\end{tabular*}
\subsubsection{Time }
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash sec} & \ensuremath{\rm {\,s}}\xspace & \texttt{\textbackslash ms} & \ensuremath{{\rm \,ms}}\xspace & \texttt{\textbackslash mus} & \ensuremath{{\,\upmu{\rm s}}}\xspace \\
\texttt{\textbackslash ns} & \ensuremath{{\rm \,ns}}\xspace & \texttt{\textbackslash ps} & \ensuremath{{\rm \,ps}}\xspace & \texttt{\textbackslash fs} & \ensuremath{\rm \,fs}\xspace \\
\texttt{\textbackslash mhz} & \ensuremath{{\rm \,MHz}}\xspace & \texttt{\textbackslash khz} & \ensuremath{{\rm \,kHz}}\xspace & \texttt{\textbackslash hz} & \ensuremath{{\rm \,Hz}}\xspace \\
\texttt{\textbackslash invps} & \ensuremath{{\rm \,ps^{-1}}}\xspace & \texttt{\textbackslash invns} & \ensuremath{{\rm \,ns^{-1}}}\xspace & \texttt{\textbackslash yr} & \ensuremath{\rm \,yr}\xspace \\
\texttt{\textbackslash hr} & \ensuremath{\rm \,hr}\xspace & \\
\end{tabular*}
\subsubsection{Temperature}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash degc} & \ensuremath{^\circ}{C}\xspace & \texttt{\textbackslash degk} & \ensuremath {\rm K}\xspace & \\
\end{tabular*}
\subsubsection{Material lengths, radiation}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash Xrad} & \ensuremath{X_0}\xspace & \texttt{\textbackslash NIL} & \ensuremath{\lambda_{int}}\xspace & \texttt{\textbackslash mip} & MIP\xspace \\
\texttt{\textbackslash neutroneq} & \ensuremath{\rm \,n_{eq}}\xspace & \texttt{\textbackslash neqcmcm} & \ensuremath{\rm \,n_{eq} / cm^2}\xspace & \texttt{\textbackslash kRad} & \ensuremath{\rm \,kRad}\xspace \\
\texttt{\textbackslash MRad} & \ensuremath{\rm \,MRad}\xspace & \texttt{\textbackslash ci} & \ensuremath{\rm \,Ci}\xspace & \texttt{\textbackslash mci} & \ensuremath{\rm \,mCi}\xspace \\
\end{tabular*}
\subsubsection{Uncertainties}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash sx} & \sx & \texttt{\textbackslash sy} & \sy & \texttt{\textbackslash sz} & \sz \\
\texttt{\textbackslash stat} & \ensuremath{\mathrm{\,(stat)}}\xspace & \texttt{\textbackslash syst} & \ensuremath{\mathrm{\,(syst)}}\xspace & \\
\end{tabular*}
\subsubsection{Maths}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash order} & {\ensuremath{\cal O}}\xspace & \texttt{\textbackslash chisq} & \ensuremath{\chi^2}\xspace & \texttt{\textbackslash chisqndf} & \ensuremath{\chi^2/\mathrm{ndf}}\xspace \\
\texttt{\textbackslash chisqip} & \ensuremath{\chi^2_{\rm IP}}\xspace & \texttt{\textbackslash chisqvs} & \ensuremath{\chi^2_{\rm VS}}\xspace & \texttt{\textbackslash chisqvtx} & \ensuremath{\chi^2_{\rm vtx}}\xspace \\
\texttt{\textbackslash deriv} & \ensuremath{\mathrm{d}} & \texttt{\textbackslash gsim} & \gsim & \texttt{\textbackslash lsim} & \lsim \\
\texttt{\textbackslash mean[1] \textbackslash mean\{x\}} & \mean{x} & \texttt{\textbackslash abs[1] \textbackslash abs\{x\}} & \abs{x} & \texttt{\textbackslash Real} & \ensuremath{\mathcal{R}e}\xspace \\
\texttt{\textbackslash Imag} & \ensuremath{\mathcal{I}m}\xspace & \texttt{\textbackslash PDF} & PDF\xspace & \texttt{\textbackslash sPlot} & \mbox{\em sPlot}\xspace \\
\end{tabular*}
\subsection{Kinematics}
\subsubsection{Energy, Momenta}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash Ebeam} & \ensuremath{E_{\mbox{\tiny BEAM}}}\xspace & \texttt{\textbackslash sqs} & \ensuremath{\protect\sqrt{s}}\xspace & \texttt{\textbackslash ptot} & \mbox{$p$}\xspace \\
\texttt{\textbackslash pt} & \mbox{$p_{\rm T}$}\xspace & \texttt{\textbackslash et} & \mbox{$E_{\rm T}$}\xspace & \texttt{\textbackslash mt} & \mbox{$M_{\rm T}$}\xspace \\
\texttt{\textbackslash dpp} & \ensuremath{\Delta p/p}\xspace & \texttt{\textbackslash msq} & \ensuremath{m^2}\xspace & \texttt{\textbackslash dedx} & \ensuremath{\mathrm{d}\hspace{-0.1em}E/\mathrm{d}x}\xspace \\
\end{tabular*}
\subsubsection{PID}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash dllkpi} & \ensuremath{\mathrm{DLL}_{\kaon\pion}}\xspace & \texttt{\textbackslash dllppi} & \ensuremath{\mathrm{DLL}_{\proton\pion}}\xspace & \texttt{\textbackslash dllepi} & \ensuremath{\mathrm{DLL}_{\electron\pion}}\xspace \\
\texttt{\textbackslash dllmupi} & \ensuremath{\mathrm{DLL}_{\muon\pi}}\xspace & \\
\end{tabular*}
\subsubsection{Geometry}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash degrees} & \ensuremath{^{\circ}}\xspace & \texttt{\textbackslash krad} & \ensuremath{\rm \,krad}\xspace & \texttt{\textbackslash mrad} & \ensuremath{\rm \,mrad}\xspace \\
\texttt{\textbackslash rad} & \ensuremath{\rm \,rad}\xspace & \\
\end{tabular*}
\subsubsection{Accelerator}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash betastar} & \ensuremath{\beta^*} & \texttt{\textbackslash lum} & \lum & \texttt{\textbackslash intlum[1] \textbackslash intlum\{2 \,\ensuremath{\mbox{\,fb}^{-1}}\xspace\}} & \intlum{2 \,\ensuremath{\mbox{\,fb}^{-1}}\xspace} \\
\end{tabular*}
\subsection{Software}
\subsubsection{Programs}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash bcvegpy} & \mbox{\textsc{Bcvegpy}}\xspace & \texttt{\textbackslash boole} & \mbox{\textsc{Boole}}\xspace & \texttt{\textbackslash brunel} & \mbox{\textsc{Brunel}}\xspace \\
\texttt{\textbackslash davinci} & \mbox{\textsc{DaVinci}}\xspace & \texttt{\textbackslash dirac} & \mbox{\textsc{Dirac}}\xspace & \texttt{\textbackslash evtgen} & \mbox{\textsc{EvtGen}}\xspace \\
\texttt{\textbackslash fewz} & \mbox{\textsc{Fewz}}\xspace & \texttt{\textbackslash fluka} & \mbox{\textsc{Fluka}}\xspace & \texttt{\textbackslash ganga} & \mbox{\textsc{Ganga}}\xspace \\
\texttt{\textbackslash gaudi} & \mbox{\textsc{Gaudi}}\xspace & \texttt{\textbackslash gauss} & \mbox{\textsc{Gauss}}\xspace & \texttt{\textbackslash geant} & \mbox{\textsc{Geant4}}\xspace \\
\texttt{\textbackslash hepmc} & \mbox{\textsc{HepMC}}\xspace & \texttt{\textbackslash herwig} & \mbox{\textsc{Herwig}}\xspace & \texttt{\textbackslash moore} & \mbox{\textsc{Moore}}\xspace \\
\texttt{\textbackslash neurobayes} & \mbox{\textsc{NeuroBayes}}\xspace & \texttt{\textbackslash photos} & \mbox{\textsc{Photos}}\xspace & \texttt{\textbackslash powheg} & \mbox{\textsc{Powheg}}\xspace \\
\texttt{\textbackslash pythia} & \mbox{\textsc{Pythia}}\xspace & \texttt{\textbackslash resbos} & \mbox{\textsc{ResBos}}\xspace & \texttt{\textbackslash roofit} & \mbox{\textsc{RooFit}}\xspace \\
\texttt{\textbackslash root} & \mbox{\textsc{Root}}\xspace & \texttt{\textbackslash spice} & \mbox{\textsc{Spice}}\xspace & \texttt{\textbackslash urania} & \mbox{\textsc{Urania}}\xspace \\
\end{tabular*}
\subsubsection{Languages}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash cpp} & \mbox{\textsc{C\raisebox{0.1em}{{\footnotesize{++}}}}}\xspace & \texttt{\textbackslash ruby} & \mbox{\textsc{Ruby}}\xspace & \texttt{\textbackslash fortran} & \mbox{\textsc{Fortran}}\xspace \\
\texttt{\textbackslash svn} & \mbox{\textsc{SVN}}\xspace & \\
\end{tabular*}
\subsubsection{Data processing}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash kbytes} & \ensuremath{{\rm \,kbytes}}\xspace & \texttt{\textbackslash kbsps} & \ensuremath{{\rm \,kbits/s}}\xspace & \texttt{\textbackslash kbits} & \ensuremath{{\rm \,kbits}}\xspace \\
\texttt{\textbackslash mbsps} & \ensuremath{{\rm \,Mbytes/s}}\xspace & \texttt{\textbackslash mbytes} & \ensuremath{{\rm \,Mbytes}}\xspace & \texttt{\textbackslash mbps} & \ensuremath{{\rm \,Mbyte/s}}\xspace \\
\texttt{\textbackslash gbsps} & \ensuremath{{\rm \,Gbytes/s}}\xspace & \texttt{\textbackslash gbytes} & \ensuremath{{\rm \,Gbytes}}\xspace & \texttt{\textbackslash tbytes} & \ensuremath{{\rm \,Tbytes}}\xspace \\
\texttt{\textbackslash tbpy} & \ensuremath{{\rm \,Tbytes/yr}}\xspace & \texttt{\textbackslash dst} & DST\xspace & \\
\end{tabular*}
\subsection{Detector related}
\subsubsection{Detector technologies}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash nonn} & \ensuremath{\rm {\it{n^+}}\mbox{-}on\mbox{-}{\it{n}}}\xspace & \texttt{\textbackslash ponn} & \ensuremath{\rm {\it{p^+}}\mbox{-}on\mbox{-}{\it{n}}}\xspace & \texttt{\textbackslash nonp} & \ensuremath{\rm {\it{n^+}}\mbox{-}on\mbox{-}{\it{p}}}\xspace \\
\texttt{\textbackslash cvd} & CVD\xspace & \texttt{\textbackslash mwpc} & MWPC\xspace & \texttt{\textbackslash gem} & GEM\xspace \\
\end{tabular*}
\subsubsection{Detector components, electronics}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash tell1} & TELL1\xspace & \texttt{\textbackslash ukl1} & UKL1\xspace & \texttt{\textbackslash beetle} & Beetle\xspace \\
\texttt{\textbackslash otis} & OTIS\xspace & \texttt{\textbackslash croc} & CROC\xspace & \texttt{\textbackslash carioca} & CARIOCA\xspace \\
\texttt{\textbackslash dialog} & DIALOG\xspace & \texttt{\textbackslash sync} & SYNC\xspace & \texttt{\textbackslash cardiac} & CARDIAC\xspace \\
\texttt{\textbackslash gol} & GOL\xspace & \texttt{\textbackslash vcsel} & VCSEL\xspace & \texttt{\textbackslash ttc} & TTC\xspace \\
\texttt{\textbackslash ttcrx} & TTCrx\xspace & \texttt{\textbackslash hpd} & HPD\xspace & \texttt{\textbackslash pmt} & PMT\xspace \\
\texttt{\textbackslash specs} & SPECS\xspace & \texttt{\textbackslash elmb} & ELMB\xspace & \texttt{\textbackslash fpga} & FPGA\xspace \\
\texttt{\textbackslash plc} & PLC\xspace & \texttt{\textbackslash rasnik} & RASNIK\xspace & \texttt{\textbackslash can} & CAN\xspace \\
\texttt{\textbackslash lvds} & LVDS\xspace & \texttt{\textbackslash ntc} & NTC\xspace & \texttt{\textbackslash adc} & ADC\xspace \\
\texttt{\textbackslash led} & LED\xspace & \texttt{\textbackslash ccd} & CCD\xspace & \texttt{\textbackslash hv} & HV\xspace \\
\texttt{\textbackslash lv} & LV\xspace & \texttt{\textbackslash pvss} & PVSS\xspace & \texttt{\textbackslash cmos} & CMOS\xspace \\
\texttt{\textbackslash fifo} & FIFO\xspace & \texttt{\textbackslash ccpc} & CCPC\xspace & \\
\end{tabular*}
\subsubsection{Chemical symbols}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash cfourften} & \ensuremath{\rm C_4 F_{10}}\xspace & \texttt{\textbackslash cffour} & \ensuremath{\rm CF_4}\xspace & \texttt{\textbackslash cotwo} & \cotwo \\
\texttt{\textbackslash csixffouteen} & \csixffouteen & \texttt{\textbackslash mgftwo} & \mgftwo & \texttt{\textbackslash siotwo} & \siotwo \\
\end{tabular*}
\subsection{Special Text }
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash eg} & \mbox{\itshape e.g.}\xspace & \texttt{\textbackslash ie} & \mbox{\itshape i.e.}\xspace & \texttt{\textbackslash etal} & \mbox{\itshape et al.}\xspace \\
\texttt{\textbackslash etc} & \mbox{\itshape etc.}\xspace & \texttt{\textbackslash cf} & \mbox{\itshape cf.}\xspace & \texttt{\textbackslash ffp} & \mbox{\itshape ff.}\xspace \\
\texttt{\textbackslash vs} & \mbox{\itshape vs.}\xspace & \\
\end{tabular*}
\section{Techniques for multidimensional efficiencies}
\label{sec:TechSumm}
\subsection{A classical technique}
\label{sec:TechSummClass}
Methods of a certain complexity can be used to account for efficiencies that depend simultaneously on several
variables and {\it that do not factorise in these variables}. The study of the decay \decay{\Bz}{\Kstarz(\rightarrow\Kp\pim)\mup\mun} provides a typical example~\cite{LHCb_KstMuMu_conf}.
The dynamics of this decay is fully described by four independent variables: $q^2$, $\cos\theta_l$, $\cos\theta_K$ and $\phi$. They are
defined as the invariant mass squared of the dimuon system, the cosine of the angle between the {\ensuremath{\Pmu^+}}\xspace (\mun) and the direction
opposite to the {\ensuremath{\B^0}}\xspace ({\ensuremath{\Bbar{}^0}}\xspace) in the rest frame of the dimuon system, the cosine of the angle between the
direction of the $K^+$ ($K^{-}$) and the {\ensuremath{\B^0}}\xspace ({\ensuremath{\Bbar{}^0}}\xspace) in the rest frame of the {\ensuremath{\kaon^{*0}}}\xspace
({\ensuremath{\Kbar{}^{*0}}}\xspace) system, and the angle between the plane defined by the {\ensuremath{\Pmu^+}}\xspace and \mun and the plane
defined by the kaon and pion in the {\ensuremath{\B^0}}\xspace ({\ensuremath{\Bbar{}^0}}\xspace) rest frame, respectively. The definition of the three
angles is illustrated in Fig.~\ref{fig:bkstarmumu_angles}.
\begin{figure}[tb]
\begin{center}
\hspace*{-1.3cm}
\includegraphics[width=0.6\linewidth]{bkstarmumu_angles}
\vspace*{-0.3cm}
\end{center}
\caption{Graphical representation of $\cos\theta_l$, $\cos\theta_K$ and $\phi$. Their precise definitions are given in the text.}
\label{fig:bkstarmumu_angles}
\end{figure}
In~\cite{LHCb_KstMuMu_conf},
the efficiency is parameterised by a (long) sum of products of Legendre polynomials in the four variables:
\begin{equation}
\label{eq:principalmoment}
\epsilon\left(\cos\theta_l,\cos\theta_K,\phi,q^{2}\right) = \sum_{klmn}c_{klmn}P_{k}(\cos\theta_l)P_{l}(\cos\theta_K)P_{m}(\phi)P_{n}(q^{2}),
\end{equation}
where the terms $P_{i}(x)$ stand for Legendre polynomials of order $i$. The coefficients $c_{klmn}$ are
evaluated by performing a principal moment analysis of simulated \mbox{\decay{\Bz}{\Kstarz(\rightarrow\Kp\pim)\mup\mun}} phase-space decays:
\begin{dmath}
c_{klmn}=\frac{1}{N^{'}}\sum_{i}^{N}\omega_i\left[\left(\frac{2k+1}{2}\right)\left(\frac{2l+1}{2}\right)\left(\frac{2m+1}{2}\right)\left(\frac{2n+1}{2}\right)
\times P_{k}(\cos\theta_l)P_{l}(\cos\theta_K)P_{m}(\phi)P_{n}(q^{2})\right],
\end{dmath}
where the polynomials are evaluated at the values of the variables for event $i$, $N$ is the total number of events and the $\omega_{i}$ are weights applied, for instance, to correct
for known discrepancies between data and simulation; their sum provides the total normalisation $N^{'}$.
For $\cos\theta_l$, $\cos\theta_K$, $\phi$ and $q^{2}$, polynomials up
to order 5, 6, 6 and 7 are used, respectively (see Ref.~\cite{LHCb_KstMuMu_conf}).
Such an approach is demanding: one must design the method, understand the properties of the Legendre
polynomials and, more generally, of the sum in Eq.~\ref{eq:principalmoment}, implement the software that compares
this parametrisation with the efficiencies observed for simulated decays (so as to determine both the highest orders needed for
a proper description of the efficiency and the $c_{klmn}$ coefficients), and interpret the
results. In the end, hundreds of $c_{klmn}$ coefficients are necessary. The situation may be worse in cases where
more than four variables have to be dealt with, and it is not obvious that the method will always work accurately.
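For illustration, the moment computation itself is compact. The following \texttt{C++} sketch evaluates the Legendre polynomials with the Bonnet recurrence and accumulates the weighted moments; the event structure, the flat indexing of the coefficients, and the assumption that $\phi$ and $q^{2}$ have already been mapped onto $[-1,1]$ are choices made for this example, not taken from Ref.~\cite{LHCb_KstMuMu_conf}.
\begin{verbatim}
#include <vector>

// Legendre polynomial P_n(x) on [-1,1] via the Bonnet recurrence:
// k P_k(x) = (2k-1) x P_{k-1}(x) - (k-1) P_{k-2}(x).
double legendreP(int n, double x) {
  if (n == 0) return 1.0;
  if (n == 1) return x;
  double pm2 = 1.0, pm1 = x, p = x;
  for (int k = 2; k <= n; ++k) {
    p = ((2.0 * k - 1.0) * x * pm1 - (k - 1.0) * pm2) / k;
    pm2 = pm1;
    pm1 = p;
  }
  return p;
}

// One phase-space point; phi and q2 assumed already mapped onto [-1,1].
struct Event { double ctl, ctk, phi, q2, w; };

// Weighted moments c_{klmn}, stored flat; 'norm' is the product of the
// (2j+1)/2 factors appearing in the moment formula above.
std::vector<double> computeMoments(const std::vector<Event>& evts,
                                   int kM, int lM, int mM, int nM) {
  std::vector<double> c((kM+1)*(lM+1)*(mM+1)*(nM+1), 0.0);
  double sumW = 0.0;                       // total normalisation N'
  for (const Event& e : evts) sumW += e.w;
  for (const Event& e : evts)
    for (int k = 0; k <= kM; ++k)
      for (int l = 0; l <= lM; ++l)
        for (int m = 0; m <= mM; ++m)
          for (int n = 0; n <= nM; ++n) {
            double norm = (2*k+1) * (2*l+1) * (2*m+1) * (2*n+1) / 16.0;
            c[((k*(lM+1) + l)*(mM+1) + m)*(nM+1) + n] +=
                (e.w / sumW) * norm
                * legendreP(k, e.ctl) * legendreP(l, e.ctk)
                * legendreP(m, e.phi) * legendreP(n, e.q2);
          }
  return c;
}
\end{verbatim}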
\subsection{A new approach}
\label{sec:TechSummNew}
The approach we propose is based on the following idea: if a boosted decision tree (BDT) or a neural network (NN) is very powerful
at detecting differences between a signal and a background sample based on a given set
of $n$ variables, not by considering them individually but by also exploiting
their correlations, i.e.\ by comparing $n$-dimensional distributions, it should also be powerful at finding
the differences between two samples that differ only through
reconstruction and selection biases. In this case, instead of training the multivariate
discriminator by comparing a \emph{signal} and a \emph{background} sample, one compares samples of the
same decay: an \emph{original} sample made of generator-level decays and a \emph{distorted} sample made of decays that satisfied
the reconstruction and selection criteria. Instead of comparing discriminating variables, one focuses on
the phase space of the decay, or in other words
the $n$-dimensional distribution of events in the space defined by a set of independent variables that
can describe fully the decay dynamics. In the example of the decay \decay{\Bz}{\Kstarz(\rightarrow\Kp\pim)\mup\mun}, this is
($q^2$, cos$\theta_l$, cos$\theta_K$, $\phi$).
In a four-stage approach, we first generate an original MC sample and a distorted one. The latter is generated in
the same way as the former, save that reconstruction and selection cuts are applied.
This stage is mandatory in most if not all physics analyses in HEP, and approaches like the
one described in Sect.~\ref{sec:TechSummClass} require it as well. When high precision is necessary,
one generates samples containing up to a few million events; in most HEP collaborations,
generating larger samples is challenging due to limited CPU and data-storage capabilities.
In the test cases presented in Sects.~\ref{sec:AppliedToDK3pi} and~\ref{sec:AppliedToKstMuMu},
we use the \texttt{ROOT} package~\cite{Brun:1997pa}, and more specifically the \texttt{TGenPhaseSpace} class, to generate these samples.
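A minimal sketch of this generation stage is given below; the rounded masses, the tree layout and the accept/reject unweighting are hypothetical choices for the example, while \texttt{TGenPhaseSpace} itself is the class named above.
\begin{verbatim}
#include "TGenPhaseSpace.h"
#include "TLorentzVector.h"
#include "TFile.h"
#include "TTree.h"
#include "TRandom3.h"

void generatePhaseSpace(int nEvents = 1000000) {
  TLorentzVector B0(0., 0., 0., 5.2796);                  // B0 at rest (GeV)
  Double_t masses[4] = {0.4937, 0.1396, 0.1057, 0.1057};  // K+ pi- mu+ mu-
  TGenPhaseSpace gen;
  gen.SetDecay(B0, 4, masses);

  TFile file("original.root", "RECREATE");
  TTree tree("phsp", "B0 -> K pi mu mu, flat phase space");
  double ctl = 0., ctk = 0., phi = 0., q2 = 0.;
  tree.Branch("ctl", &ctl); tree.Branch("ctk", &ctk);
  tree.Branch("phi", &phi); tree.Branch("q2",  &q2);

  TRandom3 rnd(0);
  for (int i = 0; i < nEvents; ++i) {
    // Generate() returns the weight of the configuration; accept/reject
    // against the maximal weight to obtain an unweighted sample.
    double w = gen.Generate();
    if (rnd.Uniform(0., gen.GetWtMax()) > w) { --i; continue; }
    TLorentzVector pMup = *gen.GetDecay(2);
    TLorentzVector pMum = *gen.GetDecay(3);
    q2 = (pMup + pMum).M2();
    // ctl, ctk, phi: angular computations from the daughter four-vectors,
    // omitted here for brevity.
    tree.Fill();
  }
  tree.Write();
  file.Close();
}
\end{verbatim}
The distorted sample is then obtained by passing such events through the reconstruction and selection chain.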
The second step is to train a multivariate discriminator by comparing the two samples generated
above. In the case studies we carried out, we used the \texttt{TMVA} package~\cite{TMVA} to train Multilayer Perceptron (MLP) NNs.
The only variables provided to the NNs are the phase-space variables.
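A condensed sketch of this training stage, modelled on the standard \texttt{TMVA} classification workflow, could read as follows; the file and tree names and the MLP option string are assumptions made for the example, and recent \texttt{ROOT} versions book the variables and trees through \texttt{TMVA::DataLoader} as shown.
\begin{verbatim}
#include "TFile.h"
#include "TTree.h"
#include "TMVA/Factory.h"
#include "TMVA/DataLoader.h"
#include "TMVA/Types.h"

void trainEfficiencyNN() {
  TFile* orig = TFile::Open("original.root");   // generator-level sample
  TFile* dist = TFile::Open("distorted.root");  // after reco + selection
  TTree* tOrig = (TTree*)orig->Get("phsp");
  TTree* tDist = (TTree*)dist->Get("phsp");

  TFile* out = TFile::Open("tmva.root", "RECREATE");
  TMVA::Factory factory("EffNN", out, "AnalysisType=Classification");
  TMVA::DataLoader loader("dataset");

  // Only the phase-space variables are handed to the network.
  loader.AddVariable("q2");
  loader.AddVariable("ctl");
  loader.AddVariable("ctk");
  loader.AddVariable("phi");

  // "Signal" = original sample, "background" = distorted sample.
  loader.AddSignalTree(tOrig, 1.0);
  loader.AddBackgroundTree(tDist, 1.0);
  loader.PrepareTrainingAndTestTree("", "SplitMode=Random:NormMode=NumEvents");

  factory.BookMethod(&loader, TMVA::Types::kMLP, "MLP",
                     "HiddenLayers=N+5:NCycles=500:EstimatorType=CE");
  factory.TrainAllMethods();
  factory.TestAllMethods();
  factory.EvaluateAllMethods();
  out->Close();
}
\end{verbatim}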
The third stage is to generate additional original and distorted samples, independent of those
used above for training, and to compute the NN score for each event.
The score summarises into one single variable all the differences detected between the original and
distorted phase spaces. The reconstruction and selection efficiency can then be parameterised in terms of only this single
variable, which makes the task of accounting for this multidimensional
efficiency far more practical. This is done in the fourth stage, where one computes the ratio of the NN score distribution obtained
in the distorted sample to the distribution in the original sample. Fitting this ratio provides the parameterisation, and therefore
a per-candidate efficiency as a function of the score, that can be used, e.g., to weight the distorted sample so as to reproduce the phase
space (and the various characteristic distributions) of the original sample.
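A sketch of these last two stages follows, assuming the trained network has already been evaluated on the independent samples and its response stored in a (hypothetical) \texttt{nnScore} branch; the score range $[-1,1]$ and the polynomial fit model are likewise assumptions made for the example.
\begin{verbatim}
#include "TFile.h"
#include "TTree.h"
#include "TH1D.h"
#include "TF1.h"

void efficiencyFromScore() {
  TFile* orig = TFile::Open("original_test.root");
  TFile* dist = TFile::Open("distorted_test.root");
  TTree* tOrig = (TTree*)orig->Get("phsp");
  TTree* tDist = (TTree*)dist->Get("phsp");

  TH1D hOrig("hOrig", ";NN score;candidates", 50, -1., 1.);
  TH1D hDist("hDist", ";NN score;candidates", 50, -1., 1.);
  tOrig->Draw("nnScore >> hOrig");
  tDist->Draw("nnScore >> hDist");

  // Ratio of the distorted to the original score distribution: the
  // one-dimensional proxy for the full four-dimensional efficiency.
  TH1D hEff(hDist);
  hEff.SetName("hEff");
  hEff.Divide(&hOrig);

  // A smooth fit of the ratio gives the per-candidate efficiency
  // eff(score); weighting distorted candidates by 1/eff(score)
  // reproduces the phase space of the original sample.
  TF1 fEff("fEff", "pol4", -1., 1.);
  hEff.Fit(&fEff, "R");
}
\end{verbatim}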
\section{Introduction} \label{S:Intro}
\setcounter{equation}{0}
Recently there has been much interest and an evolving theory of
noncommutative function theory and associated multivariable
operator theory and multidimensional system theory with evolution
along a free semigroup; we mention \cite{A-KV, KV-V, BB-noncomint, BGM1,
BGM2, CJ, HMcCV, MS-Annalen, MS-Schur, PopescuNF1, PopescuNF2,
Popescu-Nehari, Popescu-Memoir}.
A central player in many of these developments is the
noncommutative Schur class consisting of formal power series in a
set of noncommuting indeterminates which define contractive
multipliers between (unsymmetrized) vector-valued Fock spaces; such
Schur-class functions play the role of the characteristic function
for the Popescu analogue for a row contraction of the
Sz.-Nagy-Foia\c s model theory for a single contraction operator
(see \cite{PopescuNF2, Cuntz-scat}). For the classical (univariate) case,
there is an approach to operator-model theory complementary to the
Sz.-Nagy-Foia\c s approach which emphasizes constructions with
reproducing kernel
Hilbert spaces over the unit disk rather than the geometry of the
unitary dilation space of a contraction operator. Our purpose here
is to flesh out the ingredients of this approach for the Fock space
setting. The appropriate noncommutative multivariable version of a
reproducing kernel Hilbert space has already been worked out in
\cite{NFRKHS} and certain other relevant background material
appears in \cite{BBF1}. Unlike the work in some of the
papers mentioned above, specifically
\cite{A-KV, ariaspopescu, BB-noncomint, BGM2, CJ,
davidsonpitts, HMcCV, KV-V, MS-Annalen, MS-Schur, Popescu-Nehari},
we shall deal with formal power series with operator coefficients
as parts of some formal structure
(e.g., as inducing multiplication operators between two Hilbert
spaces whose elements are formal power series with vector
coefficients) rather than as themselves functions on some
collection of noncommutative operator-tuples.
Before discussing the precise
noncommutative results which we present here, we review the
corresponding classical versions of the results.
For ${\mathcal U}$ and ${\mathcal Y}$ two Hilbert spaces, let ${\mathcal L}({\mathcal U}, {\mathcal Y})$ denote
the space of bounded linear operators between ${\mathcal U}$ and ${\mathcal Y}$.
We also let $H^2_{{\mathcal U}}({\mathbb D})$ be the standard Hardy
space of the ${\mathcal U}$-valued holomorphic functions on the unit disk ${\mathbb
D}$. By the classical Schur class ${\mathcal S}({\mathcal U}, {\mathcal Y})$ we mean the
set of ${\mathcal L}({\mathcal U}, {\mathcal Y})$-valued functions holomorphic on the unit
disk ${\mathbb D}$ with values $S(\lambda)$ having norm at most $1$ for
each $\lambda \in {\mathbb D}$. There are several equivalent
characterizations of the class ${\mathcal S}({\mathcal U}, {\mathcal Y})$; for
convenience, we list some in the following theorem.
\begin{theorem} \label{T:I1}
Let $S$ be an ${\mathcal L}({\mathcal U}, {\mathcal Y})$-valued function defined on the unit
disk ${\mathbb D}$. Then the following are equivalent:
\begin{enumerate}
\item $S \in {\mathcal S}({\mathcal U}, {\mathcal Y})$, i.e., $S$ is analytic on ${\mathbb D}$
with contractive values in ${\mathcal L}({\mathcal U}, {\mathcal Y})$.
\item The multiplication operator $M_{S} \colon
f(z) \mapsto S(z) \cdot f(z)$ is a contraction from
$H^{2}_{{\mathcal U}}({\mathbb D})$ into $H^{2}_{{\mathcal Y}}({\mathbb D})$.
\item The kernel
$$ K_{S}(\lambda, \zeta) := \frac{ I_{{\mathcal Y}} - S(\lambda) S(\zeta)^{*}}{1 -
\lambda \overline{\zeta}}
$$
is positive on ${\mathbb D} \times {\mathbb D}$, i.e.,
there exists an auxiliary Hilbert space ${\mathcal X}$ and a function $H
\colon {\mathbb D} \to {\mathcal L}({\mathcal X}, {\mathcal Y})$ such that
\begin{equation} \label{KSfact}
K_{S}(\lambda, \zeta) = H(\lambda) H(\zeta)^{*} \quad\text{for all}\quad
\lambda, \zeta \in {\mathbb D}.
\end{equation}
\item There exists a Hilbert space ${\mathcal X}$ and a unitary connection
operator (or colligation) ${\mathbf U}$ of the form
\begin{equation} \label{I:colligation}
{\mathbf U} = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \colon
\begin{bmatrix} {\mathcal X} \\ {\mathcal U} \end{bmatrix} \to \begin{bmatrix} {\mathcal X}
\\ {\mathcal Y} \end{bmatrix}
\end{equation}
so that $S(\lambda)$ can be realized in the form
\begin{equation} \label{I:realization}
S(\lambda) = D + \lambda C (I_{{\mathcal X}} - \lambda A)^{-1} B.
\end{equation}
\item There exists a Hilbert space ${\mathcal X}$ and a contractive
connecting operator ${\mathbf U}$ of the form \eqref{I:colligation} so that
\eqref{I:realization} holds.
\end{enumerate}
\end{theorem}
A pair $(C,A)$ is called {\em an output pair} if $C \in {\mathcal L}({\mathcal X}, {\mathcal Y})$
and $A \in {\mathcal L}({\mathcal X}, {\mathcal X})$. An output pair $(C,A)$ is called {\em
contractive} if $A^{*} A + C^{*} C \le I_{{\mathcal X}}$, {\em isometric} if
$A^{*} A + C^{*} C = I_{{\mathcal X}}$ and {\em observable} if ${\displaystyle
\bigcap_{n=0}^{\infty} \operatorname{Ker}\, C A^{n} =\{0\}}$.
We shall say that the realization \eqref{I:realization} of
$S(\lambda)$ is {\em observable} if the output pair $(C,A)$ occurring in
\eqref{I:realization} is observable.
Furthermore,
with an output contractive pair $(C,A)$, one can associate the positive
kernel
\begin{equation} \label{I:defKCA}
K_{C,A}(\lambda, \zeta) = C(I - \lambda A)^{-1} (I - \overline{\zeta}
A^{*})^{-1} C^{*}
\end{equation}
which (as is readily seen) is well defined on ${\mathbb D}\times{\mathbb D}$.
As also remarked in \cite{BBF2}, the coisometric version
of (4) $\Longrightarrow$ (3) is particularly transparent,
since in this case a simple computation shows that
\eqref{KSfact} holds with $H(\lambda) = C (I - \lambda A)^{-1}$, i.e.,
$K_{S}(\lambda, \zeta) =K_{C,A}(\lambda, \zeta)$.
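Indeed, writing out ${\mathbf U} {\mathbf U}^{*} = I$ entrywise gives the
three identities
$$
A A^{*} + B B^{*} = I_{{\mathcal X}}, \qquad A C^{*} + B D^{*} = 0, \qquad
C C^{*} + D D^{*} = I_{{\mathcal Y}},
$$
and substituting the realization \eqref{I:realization} into $I_{{\mathcal Y}} -
S(\lambda) S(\zeta)^{*}$ and eliminating $BB^{*}$, $BD^{*}$, $DB^{*}$ and
$DD^{*}$ by means of these identities collapses the result to
$$
I_{{\mathcal Y}} - S(\lambda) S(\zeta)^{*} = (1 - \lambda \overline{\zeta}) \,
C (I - \lambda A)^{-1} (I - \overline{\zeta} A^{*})^{-1} C^{*},
$$
which is \eqref{KSfact} with $H(\lambda) = C (I - \lambda A)^{-1}$.
We have the following
sort of converse of these observations.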
\begin{theorem} \label{T:I2}
\begin{enumerate}
\item
Suppose that $S\in {\mathcal S}({\mathcal U}, {\mathcal Y})$ and that $(C,
A)$ is an observable, contractive output-pair
of operators such that
\begin{equation} \label{I:KS=KCA}
K_{S}(\lambda, \zeta) = K_{C,A}(\lambda, \zeta).
\end{equation}
Then there is a unique choice of $B \colon {\mathcal U} \to {\mathcal X}$ so that
${\mathbf U} = \sbm{ A & B \\ C & S(0) }$ is coisometric and ${\mathbf U}$
provides a realization for $S$:
$S(\lambda) = S(0) + \lambda C (I - \lambda A)^{-1}B$.
\item Suppose that we are given only an observable,
contractive output-pair of operators $(C, A)$ as above.
Then there is a choice of an input space ${\mathcal U}$ and a Schur
multiplier $S \in {\mathcal S}({\mathcal U}, {\mathcal Y})$ so that
\eqref{I:KS=KCA} holds.
\end{enumerate}
\end{theorem}
As we see from Theorem \ref{T:I1}, for any Schur-class function $S
\in {\mathcal S}({\mathcal U}, {\mathcal Y})$, we can associate
the positive kernel
$K_{S}(\lambda, \zeta)$ and therefore also by Aronszajn's
construction the reproducing kernel
Hilbert space ${\mathcal H}(K_{S})$; this space is called the de
Branges-Rovnyak space associated with $S$. It turns out that
any observable
coisometric realization ${\mathbf U}$ for $S$ is unitarily equivalent to a
certain canonical functional-model realization.
\begin{theorem} \label{T:I3} Let $S\in {\mathcal S}({\mathcal U},{\mathcal Y})$. Then the
operator
$$
{\mathbf U}_{\text{dBR}} = \begin{bmatrix} A_{\text{dBR}} &
B_{\text{dBR}} \\ C_{\text{dBR}} & D_{\text{dBR}} \end{bmatrix}
\colon \begin{bmatrix} {\mathcal H}(K_{S}) \\ {\mathcal U} \end{bmatrix} \to
\begin{bmatrix}
{\mathcal H}(K_{S}) \\ {\mathcal Y} \end{bmatrix}
$$
with the entries given by
\begin{align*}
A_{\text{dBR}} \colon f(\lambda) \to \frac{f(\lambda) - f(0)}{\lambda},
& \qquad B_{\text{dBR}} \colon u \to
\frac{S(\lambda) - S(0)}{\lambda} u, \\
C_{\text{dBR}} \colon f \to f(0),
& \qquad D_{\text{dBR}} \colon u \to S(0) u
\end{align*}
provides an observable and coisometric realization
\begin{equation}
S(\lambda) = D_{\text{dBR}} + \lambda C_{\text{dBR}} (I_{{\mathcal H}(K_{S})} - \lambda
A_{\text{dBR}} )^{-1} B_{\text{dBR}}.
\label{debr}
\end{equation}
Moreover, any other observable coisometric realization of $S$ is
unitarily equivalent to \eqref{debr}.
\end{theorem}
Let us say that a Schur function $S \in {\mathcal S}({\mathcal U}, {\mathcal Y})$ is {\em inner}
if the associated multiplication operator $M_{S} \colon
H^{2}_{{\mathcal U}}({\mathbb D}) \to H^{2}_{{\mathcal Y}}({\mathbb D})$ is a
partial isometry. Equivalently, $S \in {\mathcal S}({\mathcal U}, {\mathcal Y})$ and the
boundary value function $S(\zeta) = \lim_{r \uparrow 1}S(r \zeta)$,
which exists almost everywhere, is a partial isometry for almost all $\zeta \in
{\mathbb T}$.
The following characterization of inner
functions in terms of realizations is well known (see
\cite{dbr1, dbr2}).
\begin{theorem}\label{T:I5} A Schur multiplier $S \in {\mathcal S}({\mathcal U},
{\mathcal Y})$ is inner if and only if its essentially unique observable,
coisometric realization of the form
\eqref{I:realization} is such that $A$ is strongly stable, i.e.,
\begin{equation} \label{I:stable}
\lim_{n \to \infty} \| A^{n} x \| = 0 \text{ for all } x \in{\mathcal X}.
\end{equation}
\end{theorem}
Inner functions come up in the representation of shift-invariant
subspaces of $H^{2}_{{\mathcal Y}}$ as in the Beurling-Lax theorem.
The following version of the Beurling-Lax theorem first identifies
any shift-invariant subspace as the set of solutions of a
collection of homogeneous interpolation conditions and then
obtains a realization for the Beurling-Lax representer in terms of
the data set for the homogeneous interpolation problem.
The finite-dimensional version of this result can be found in
\cite[Chapter 14]{BGR} while the
details of the general case appear in \cite{BallRaney}.
We let $M_{\lambda}$ denote the shift operator
$$
M_{\lambda} \colon f(\lambda) \to \lambda f(\lambda) \quad \text{for} \quad f \in
H^{2}_{{\mathcal Y}}({\mathbb D})
$$
and given a contractive pair $(C,A)$ we let
\begin{equation} \label{defMCA}
{\mathcal M}_{A^{*},C^{*}} = \{ f \in H^{2}_{{\mathcal Y}}({\mathbb D})
\colon \; (C^{*}f)^{\wedge L}(A^{*}) = 0 \}
\end{equation}
where we have set
$$
(C^{*}f)^{\wedge L}(A^{*}) := \sum_{n=0}^{\infty} A^{*n} C^{*}
f_{n} \quad\text{if} \quad f(\lambda) = \sum_{n=0}^{\infty} f_{n}
\lambda^{n} \in H^{2}_{{\mathcal Y}}({\mathbb D}).
$$
\begin{theorem} \label{T:I6} \begin{enumerate}
\item Suppose that ${\mathcal M}$ is a subspace of
$H^{2}_{{\mathcal Y}}({\mathbb D})$ which is $M_{\lambda}$-invariant.
Then there is an isometric pair $(C,A)$ such that $A$ is strongly
stable (i.e., \eqref{I:stable} holds) and such that
${\mathcal M}={\mathcal M}_{A^{*},C^{*}}$.
\item Suppose that the shift-invariant subspace ${\mathcal M}
\subset H^{2}_{{\mathcal Y}}({\mathbb D})$ has the representation ${\mathcal
M} = {\mathcal M}_{A^{*},C^{*}}$ as in \eqref{defMCA} where $(C,A)$
is an isometric pair with $A$ strongly stable. Choose an input
space ${\mathcal U}$ and operators
$B \colon {\mathcal U} \to {\mathcal X}$ and $D \colon {\mathcal U} \to {\mathcal Y}$ so that
$$
{\mathbf U} = \begin{bmatrix} A & B \\ C & D \end{bmatrix} \colon
\begin{bmatrix} {\mathcal X} \\ {\mathcal U} \end{bmatrix} \to \begin{bmatrix} {\mathcal X}
\\ {\mathcal Y} \end{bmatrix}
$$
is unitary. Then the function
$S(\lambda) = D + \lambda C (I_{{\mathcal X}} - \lambda A)^{-1} B$
is inner (i.e., $M_{S}$ is isometric) and is a
Beurling-Lax representer for ${\mathcal M}$:
$$ S \cdot H^{2}_{{\mathcal U}}({\mathbb D}) = {\mathcal M}_{A^{*},C^{*}}.
$$
\end{enumerate}
\end{theorem}
Our goal here is to obtain noncommutative analogues of these
results, where the classical Schur class is replaced by the
noncommutative Schur class of contractive multipliers between Fock
spaces of formal power series in noncommuting indeterminates and
where the classical reproducing kernel Hilbert spaces become the
noncommutative formal reproducing kernel Hilbert spaces introduced
in \cite{NFRKHS}.
Let $z = (z_{1}, \dots, z_{d})$ and $w = (w_{1}, \dots, w_{d})$ be
two sets of noncommuting indeterminates. We let ${\mathcal F}_{d}$ denote
the free semigroup generated by the $d$ letters $\{1, \dots, d\}$.
A generic element of ${\mathcal F}_{d}$ is a word $\alpha$ equal to a string of
letters
\begin{equation} \label{word}
\alpha = i_{N} \cdots i_{1}\quad \text{where}\quad i_{k} \in \{1,
\dots, d\} \; \text{ for } \; k=1, \dots, N.
\end{equation}
Given two words $\alpha$ and $\beta$ with $\alpha$ as in
\eqref{word} and $\beta$ of
the form $\beta = j_{N'} \cdots j_{1}$, say, the product $\alpha
\beta$ is defined by concatenation:
$$
\alpha \beta = i_{N} \cdots i_{1} j_{N'} \cdots j_{1} \in {\mathcal F}_{d}.
$$
The unit element of ${\mathcal F}_{d}$ is the {\em empty word} denoted by
$\emptyset$.
For $\alpha$ a word of the form \eqref{word}, we let $z^{\alpha}$
denote the
monomial in noncommuting indeterminates
$$
z^{\alpha} = z_{i_{N}} \cdots z_{i_{1}}
$$
and we let $z^{\emptyset} = 1$.
We extend this noncommutative functional calculus to a $d$-tuple of
operators ${\mathbf A} = (A_{1}, \dots, A_{d})$ on a Hilbert space
${\mathcal X}$:
\begin{equation} \label{bAv}
{\mathbf A}^{v} = A_{i_{N}} \cdots A_{i_{1}}\quad \text{if}\quad
v = i_{N} \cdots i_{1} \in {\mathcal F}_{d} \setminus \{ \emptyset\};\quad {\mathbf
A}^{\emptyset} = I_{{\mathcal X}}.
\end{equation}
We will also have need of the {\em
transpose operation} on ${\mathcal F}_{d}$:
\begin{equation} \label{transpose}
\alpha^{\top} = i_{1} \cdots i_{N} \quad\text{if}\quad \alpha =
i_{N}\cdots i_{1}.
\end{equation}
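To fix these conventions with a small example: for $d = 2$ and the word
$\alpha = 21$ one has
$$
z^{\alpha} = z_{2} z_{1}, \qquad \alpha^{\top} = 12, \qquad
{\mathbf A}^{\alpha} = A_{2} A_{1}, \qquad {\mathbf A}^{\alpha^{\top}} = A_{1} A_{2}.
$$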
A natural analogue of the Szeg\"o kernel is the noncommutative
Szeg\"o kernel
\begin{equation}
k_{\text{Sz}}(z,w) = \sum_{\alpha \in {\mathcal F}_{d}} z^{\alpha}
w^{\alpha^{\top}}.
\label{szego}
\end{equation}
The associated reproducing kernel Hilbert space
${\mathcal H}(k_{\text{Sz}})$ (in the sense of \cite{NFRKHS}) is a natural
analogue of the classical Hardy space $H^{2}({\mathbb D})$; we
recall all the relevant definitions and main properties more
precisely in Section \ref{S:NFRKHS}.
Our main purpose here is to obtain the
analogues of Theorems \ref{T:I1}--\ref{T:I6} above with the classical
Szeg\"o kernel replaced by its noncommutative analogue
\eqref{szego}.
In particular, the analogue of Theorem \ref{T:I6} involves
the study of shift-invariant subspaces of the Fock space
$H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$
generated by a collection of homogeneous interpolation conditions
defined via a functional calculus with noncommutative operator
argument. We mention that interpolation problems in the
noncommutative Schur-multiplier class defined by nonhomogeneous
interpolation conditions associated with such a functional calculus
have been studied recently by a number of authors, including the late
Tiberiu Constantinescu to whom this paper is dedicated (see
\cite{BB-noncomint, CJ, Popescu-Nehari, Popescu-Memoir}).
While the Fock-space version of the Beurling-Lax theorem already
appears in the work of Popescu \cite{PopescuNF1} (see also
\cite{BBF1}), the proof here through inner solution of a homogeneous
interpolation problem gives an alternative approach.
The present paper (with the exception of the final Section
\ref{S:BL}) parallels our companion paper \cite{BBF2} where
corresponding results are worked out with the noncommutative Szeg\"o
kernel \eqref{szego} replaced by the so-called Arveson kernel
$k_{d}({\boldsymbol \lambda}, {\boldsymbol \zeta}) = 1/(1 - \langle {\boldsymbol \lambda}, {\boldsymbol \zeta}
\rangle_{{\mathbb C}^{d}})$
which is positive on the unit ball ${\mathbb B}^{d} = \{{\boldsymbol \lambda} =
(\lambda_{1}, \dots, \lambda_{d}) \colon \sum_{k=1}^{d} |\lambda_{k}|^{2}
< 1\}$ of ${\mathbb C}^{d}$.
There the corresponding results are
more delicate; in particular, the observable, coisometric
realization for a contractive multiplier is unique only in very
special circumstances, but the
nonuniqueness can be explicitly characterized. In contrast, the
results obtained here for the setting of the noncommutative Szeg\"o
kernel $k_{\text{Sz}}(z,w)$ parallel more directly the situation
for the classical univariate case.
The paper is organized as follows. After the present Introduction,
Section \ref{S:NFRKHS} recalls the main facts from \cite{NFRKHS}
which are needed in the sequel. Section \ref{S:NC-Schur}
introduces the noncommutative Schur class of contractive Fock-space
multipliers $S$ and the associated
noncommutative positive kernel $K_{S}(z,w)$, and develops the
noncommutative analogues of Theorems \ref{T:I1} and \ref{T:I2}. In
fact, various pieces of the noncommutative version of Theorem
\ref{T:I1} (see Theorem \ref{T:NC1} below) are already worked out
in \cite{NFRKHS, PopescuNF2, Cuntz-scat}. In connection with the
noncommutative analogue of Theorem \ref{T:I2} (see Theorems
\ref{T:CAtoS} and \ref{T:CAStoB} below), we rely on our paper
\cite{BBF1} where the structure of noncommutative formal
reproducing kernel spaces of the type ${\mathcal H}(K_{C,A})$ was worked out.
Section \ref{S:dBR} introduces the noncommutative functional-model
coisometric colligation ${\mathbf U}_{dBR}$ and obtains the
analogue of Theorem \ref{T:I3} for the Fock space setting (see
Theorem \ref{T:NC3} below). This functional model is the
Brangesian model parallel to the noncommutative Sz.-Nagy-Foia\c{s}
model for a row contraction found in \cite{PopescuNF2, Cuntz-scat}.
The final Section \ref{S:BL} uses previous results concerning
${\mathcal H}(K_{S})$ and ${\mathcal H}(K_{C,A})$ to arrive at the Fock-space version
of Theorem \ref{T:I6} (see Theorems \ref{T:shift=int} and
\ref{T:BLhomint} below) in a simple way.
\section{Noncommutative formal reproducing kernel Hilbert spaces}
\label{S:NFRKHS}
We now recall some of the basic ideas from \cite{NFRKHS} concerning
noncommutative
formal reproducing kernel Hilbert spaces.
We let $z = (z_{1}, \dots, z_{d})$, $w =
(w_{1}, \dots, w_{d})$ be two sets of noncommuting indeterminates
and we let ${\mathcal F}_{d}$ be the free semigroup generated by the alphabet
$\{1, \dots, d\}$ with unit element equal to the empty word
$\emptyset$ as in the introduction. Given a coefficient Hilbert space
${\mathcal Y}$ we let ${\mathcal Y}\langle z \rangle$ denote the set of all polynomials in
$z = (z_{1}, \dots, z_{d})$ with coefficients in ${\mathcal Y}:$
$$
{\mathcal Y}\langle z \rangle = \left\{ p(z) = \sum_{\alpha \in {\mathcal F}_{d}}
p_{\alpha} z^{\alpha} \colon p_{\alpha} \in {\mathcal Y} \text{ and }
p_{\alpha} = 0 \text{ for all but finitely many } \alpha \right\},
$$
while ${\mathcal Y} \langle \langle z \rangle \rangle$ denotes the set of all
formal power series in the indeterminates $z$ with coefficients in
${\mathcal Y}$:
$$ {\mathcal Y}\langle \langle z \rangle \rangle = \left\{ f(z) = \sum_{\alpha \in
{\mathcal F}_{d}} f_{\alpha} z^{\alpha} \colon f_{\alpha} \in {\mathcal Y} \right\}.
$$
Note that vectors in ${\mathcal Y}$ can be considered as Hilbert space
operators between ${\mathbb C}$ and ${\mathcal Y}$. More generally, if ${\mathcal U}$
and ${\mathcal Y}$ are two Hilbert spaces, we let
${\mathcal L}({\mathcal U}, {\mathcal Y})\langle z \rangle$ and ${\mathcal L}({\mathcal U}, {\mathcal Y})\langle \langle z
\rangle \rangle$ denote the space of polynomials (respectively,
formal power series) in the noncommuting indeterminates $z = (z_{1},
\dots, z_{d})$ with coefficients in ${\mathcal L}({\mathcal U}, {\mathcal Y})$.
Given $S = \sum_{\alpha \in {\mathcal F}_{d}} s_{\alpha}
z^{\alpha} \in {\mathcal L}({\mathcal U}, {\mathcal Y})\langle \langle z \rangle
\rangle$ and $f = \sum_{\beta \in {\mathcal F}_{d}} f_{\beta} z^{\beta} \in
{\mathcal U}\langle \langle z \rangle \rangle$, the product $S(z) \cdot f(z)
\in {\mathcal Y} \langle \langle z \rangle \rangle$ is defined as an element
of ${\mathcal Y}\langle \langle z \rangle \rangle$ via the noncommutative
convolution:
\begin{equation} \label{multiplication}
S(z) \cdot f(z) = \sum_{\alpha, \beta \in {\mathcal F}_{d}} s_{\alpha}
f_{\beta} z^{\alpha \beta} =
\sum_{v \in {\mathcal F}_{d}} \left( \sum_{\alpha, \beta \in {\mathcal F}_{d} \colon
\alpha \cdot \beta = v} s_{\alpha} f_{\beta} \right) z^{v}.
\end{equation}
Note that the coefficient of $z^{v}$ in \eqref{multiplication} is
well defined since any given word $v \in {\mathcal F}_{d}$ can be decomposed
as a product $v = \alpha \cdot \beta$ in only finitely many distinct
ways.
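A small example illustrates the noncommutative bookkeeping in
\eqref{multiplication}: for $d = 2$, $S(z) = s_{\emptyset} + s_{2} z_{2}$
and $f(z) = f_{1} z_{1}$, one gets
$$
S(z) \cdot f(z) = s_{\emptyset} f_{1} z^{1} + s_{2} f_{1} z^{21}
= s_{\emptyset} f_{1} \, z_{1} + s_{2} f_{1} \, z_{2} z_{1},
$$
so the coefficients of $S$ act on the left of those of $f$, while the
indeterminates coming from $S$ appear to the left of those coming from $f$.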
In general, given a coefficient Hilbert space ${\mathcal C}$, we use the
${\mathcal C}$ inner product to generate a pairing
$$ \langle \cdot, \, \cdot \rangle_{{\mathcal C} \times {\mathcal C}\langle \langle w
\rangle \rangle} \colon {\mathcal C} \times {\mathcal C}\langle \langle w \rangle
\rangle \to {\mathcal C}\langle \langle w \rangle \rangle
$$
via
$$
\left\langle c, \sum_{\beta \in {\mathcal F}_{d}} f_{\beta} w^{\beta}
\right\rangle_{{\mathcal C} \times
{\mathcal C}\langle \langle w \rangle \rangle} = \sum_{\beta \in {\mathcal F}_{d}} \langle
c, f_{\beta} \rangle_{{\mathcal C}} w^{\beta^{\top}} \in {\mathcal C} \langle \langle
w \rangle \rangle.
$$
We also may use the pairing in the reverse order
$$
\left\langle \sum_{\alpha \in {\mathcal F}_{d} } f_{\alpha} w^{\alpha}, c
\right\rangle_{{\mathcal C}\langle \langle w \rangle \rangle \times {\mathcal C}} =
\sum_{\alpha \in {\mathcal F}_{d}} \langle f_{\alpha}, c \rangle_{{\mathcal C}}
w^{\alpha} \in {\mathcal C}\langle \langle w \rangle \rangle.
$$
These are both special cases of the more general pairing
$$
\left\langle \sum_{\alpha \in {\mathcal F}_{d}} f_{\alpha} w^{\prime \alpha},
\sum_{\beta \in {\mathcal F}_{d}} g_{\beta} w^{\beta}
\right\rangle_{{\mathcal C}\langle \langle w' \rangle \rangle \times {\mathcal C}
\langle \langle w \rangle \rangle} =
\sum_{\alpha, \beta \in {\mathcal F}_{d}} \langle f_{\alpha}, g_{\beta}
\rangle_{{\mathcal C}} w^{\beta^{\top}} w^{\prime \alpha}.
$$
Suppose that ${\mathcal H}$ is a Hilbert space whose elements are formal power series in
${\mathcal Y} \langle \langle z \rangle \rangle$ and that $K(z,w) =
\sum_{\alpha, \beta \in {\mathcal F}_{d}} K_{\alpha, \beta} z^{\alpha}
w^{\beta^{\top}}$ is a formal power series in the two sets of $d$
noncommuting indeterminates $z = (z_{1}, \dots, z_{d})$ and $w =
(w_{1}, \dots, w_{d})$. We say that {\em $K(z,w)$ is a reproducing
kernel for ${\mathcal H}$} if, for each $\beta \in {\mathcal F}_{d}$ the formal power series
$$ K_{\beta}(z) := \sum_{\alpha \in {\mathcal F}_{d}} K_{\alpha, \beta}
z^{\alpha}\quad\mbox{belongs to} \; \; {\mathcal H}
$$
and we have the reproducing property
$$
\langle f, K(\cdot, w) y \rangle_{{\mathcal H} \times {\mathcal H}\langle \langle w
\rangle \rangle} = \langle f(w), y \rangle_{{\mathcal Y}\langle \langle w
\rangle \rangle \times {\mathcal Y}}\quad\text{ for every } f\in{\mathcal H}.
$$
As a consequence we then also have
$$
\langle K(\cdot,w')y', K(\cdot, w) y \rangle_{{\mathcal H}\langle \langle
w'\rangle \rangle \times {\mathcal H}\langle \langle w \rangle \rangle} =
\langle K(w,w') y',y\rangle_{{\mathcal Y}\langle \langle w,w' \rangle \rangle
\times {\mathcal Y}}.
$$
It is not difficult to see that a reproducing kernel for a given
${\mathcal H}$ is necessarily unique.
Let us now suppose that ${\mathcal H}$ is a Hilbert space whose elements are
formal power series $f(z) = \sum_{v \in {\mathcal F}_{d}} f_{v} z^{v}
\in {\mathcal Y} \langle \langle z \rangle \rangle$ for a coefficient Hilbert
space ${\mathcal Y}$. We say that ${\mathcal H}$ is a NFRKHS ({\em noncommutative
formal reproducing kernel Hilbert space}) if, for each $\alpha \in
{\mathcal F}_{d}$, the linear operator $\Phi_{\alpha} \colon {\mathcal H} \to {\mathcal Y}$
defined by $f(z) = \sum_{v \in {\mathcal F}_{d}} f_{v} z^{v} \mapsto
f_{\alpha}$ is continuous. In this case, define $K(z,w) \in
{\mathcal L}({\mathcal Y})\langle \langle
z, w \rangle \rangle$ by
$$
K(z,w) = \sum_{\beta \in {\mathcal F}_{d}} \Phi_{\beta}^{*} w^{\beta^{\top}}
=:\sum_{\alpha, \beta \in {\mathcal F}_{d}} K_{\alpha, \beta} z^{\alpha}
w^{\beta^{\top}}.
$$
Then one can check that $K(z,w)$ is a reproducing kernel for ${\mathcal H}$ in
the sense defined above. Conversely (see \cite[Theorem 3.1]{NFRKHS}),
a given formal kernel $K(z,w) = \sum_{\alpha, \beta \in {\mathcal F}_{d}}
K_{\alpha, \beta} z^{\alpha} w^{\beta^{\top}}
\in {\mathcal L}({\mathcal Y})\langle \langle z, w \rangle \rangle$ is the reproducing
kernel for some NFRKHS ${\mathcal H}$ if and only if $K$ is positive definite
in either one of the equivalent senses:
\begin{enumerate}
\item $K(z,w)$ has a factorization
\begin{equation} \label{pos1}
K(z,w) = H(z) H(w)^{*}
\end{equation}
for some $H \in {\mathcal L}({\mathcal X}, {\mathcal Y})\langle \langle z \rangle \rangle$
for some auxiliary Hilbert space ${\mathcal X}$. Here
$$
H(w)^{*} = \sum_{\beta \in {\mathcal F}_{d}} H_{\beta}^{*} w^{\beta^{\top}} =
\sum_{\beta \in {\mathcal F}_{d}} H_{\beta^{\top}}^{*} w^{\beta}\quad \text{if}
\quad
H(z) = \sum_{\alpha \in {\mathcal F}_{d}} H_{\alpha} z^{\alpha}.
$$
\item For all finitely supported ${\mathcal Y}$-valued functions $\alpha
\mapsto y_{\alpha}$ it holds that
\begin{equation} \label{pos2}
\sum_{\alpha, \alpha' \in {\mathcal F}_{d}} \langle K_{\alpha, \alpha'}
y_{\alpha'}, y_{\alpha} \rangle \ge 0.
\end{equation}
\end{enumerate}
If $K$ is such a positive kernel, we denote by ${\mathcal H}(K)$ the
associated NFRKHS consisting of elements of ${\mathcal Y}\langle \langle z
\rangle \rangle$.
\section{The noncommutative Schur class: associated positive kernels
and transfer-function realization} \label{S:NC-Schur}
A natural analogue of the vector-valued Hardy space over the unit disk
(see e.g.~\cite{PopescuNF1})
is the Fock space with coefficients in ${\mathcal Y}$ which we denote here by
$H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$:
$$
H^{2}_{{\mathcal Y}}({\mathcal F}_{d}) = \left\{ f(z) = \sum_{\alpha \in {\mathcal F}_{d}}
f_{\alpha} z^{\alpha} \colon \sum_{\alpha \in {\mathcal F}_{d}} \|
f_{\alpha}\|^{2} < \infty \right\}.
$$
When ${\mathcal Y} = {\mathbb C}$ we write simply $H^{2}({\mathcal F}_{d})$.
As explained in \cite{NFRKHS}, $H^{2}({\mathcal F}_{d})$ is a NFRKHS with
reproducing kernel equal to the following noncommutative analogue of the
classical Szeg\"o kernel:
\begin{equation} \label{kSz}
k_{\text{Sz}}(z,w) = \sum_{\alpha \in {\mathcal F}_{d}} z^{\alpha}
w^{\alpha^{\top}}.
\end{equation}
Thus we have in general
$H^{2}_{{\mathcal Y}}({\mathcal F}_{d}) = {\mathcal H}(k_{\text{Sz}} \otimes I_{{\mathcal Y}})$.
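As a quick consistency check, the reproducing property is immediate for
this kernel: the coefficients of $k_{\text{Sz}} \otimes I_{{\mathcal Y}}$ are
$K_{\alpha, \beta} = \delta_{\alpha, \beta} I_{{\mathcal Y}}$, so $K_{\beta}(z) =
z^{\beta}$ and, for $f(z) = \sum_{v \in {\mathcal F}_{d}} f_{v} z^{v} \in
H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$ and $y \in {\mathcal Y}$,
$$
\langle f, k_{\text{Sz}}(\cdot, w) y \rangle_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d}) \times
H^{2}_{{\mathcal Y}}({\mathcal F}_{d}) \langle \langle w \rangle \rangle}
= \sum_{\beta \in {\mathcal F}_{d}} \langle f_{\beta}, y \rangle_{{\mathcal Y}} w^{\beta}
= \langle f(w), y \rangle_{{\mathcal Y} \langle \langle w \rangle \rangle \times {\mathcal Y}},
$$
where the transpose built into the pairing and the transpose in
$k_{\text{Sz}}(\cdot,w)y = \sum_{\beta} z^{\beta} y \, w^{\beta^{\top}}$
cancel each other.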
We let $S_{j}$ denote the shift operator
\begin{equation} \label{shift}
S_{j} \colon f(z) = \sum_{v \in {\mathcal F}_{d}} f_{v} z^{v} \mapsto
f(z) \cdot z_{j} = \sum_{v \in {\mathcal F}_{d}} f_{v} z^{v \cdot j} \text{
for } j = 1, \dots, d
\end{equation}
on $H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$; when we wish to specify the coefficient
space ${\mathcal Y}$ explicitly, we write $S_{j} \otimes I_{{\mathcal Y}}$ rather than
only $S_{j}$. The adjoint of $S_{j} \colon
H^{2}_{{\mathcal Y}}({\mathcal F}_{d}) \to H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$ is then given by
\begin{equation} \label{bs}
S_{j}^{*} \colon \sum_{v \in {\mathcal F}_{d}} f_{v}z^{v} \mapsto \sum_{v \in
{\mathcal F}_{d}} f_{v \cdot j} z^{v}\quad \text{for}\quad j = 1, \dots, d.
\end{equation}
We let ${\mathcal M}_{nc,d}({\mathcal U}, {\mathcal Y})$ denote the set of formal power series
$S(z) = \sum_{\alpha \in {\mathcal F}_{d}} s_{\alpha} z^{\alpha}$ with
coefficients $s_{\alpha} \in {\mathcal L}({\mathcal U}, {\mathcal Y})$ such that the associated
multiplication operator $M_{S} \colon f(z) \mapsto S(z) \cdot f(z)$
(see \eqref{multiplication}) defines a bounded operator from
$H^{2}_{{\mathcal U}}({\mathcal F}_{d})$ to $H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$. It is not difficult
to show that ${\mathcal M}_{nc,d}({\mathcal U}, {\mathcal Y})$ is the intertwining space for
the two tuples ${\mathbf S} \otimes I_{{\mathcal U}} = (S_{1} \otimes I_{{\mathcal U}},
\dots, S_{d} \otimes I_{{\mathcal U}})$ and ${\mathbf S}\otimes I_{{\mathcal Y}} =
(S_{1} \otimes I_{{\mathcal Y}}, \dots, S_{d} \otimes I_{{\mathcal Y}})$:
{\em an operator $X \colon H^{2}_{{\mathcal U}}({\mathcal F}_{d}) \to H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$ equals $X = M_{S}$ for some $S
\in {\mathcal M}_{nc,d}({\mathcal U}, {\mathcal Y})$ if and only if $(S_{j} \otimes I_{{\mathcal Y}}) X = X
(S_{j}\otimes I_{{\mathcal U}})$ for $j = 1, \dots, d$}
(see e.g.~\cite{PopescuNF2} where, however, the
conventions are somewhat different).
We define the
noncommutative Schur class ${\mathcal S}_{nc, d}({\mathcal U}, {\mathcal Y})$ to
consist of such multipliers $S$ for which $M_{S}$ has operator norm at
most 1:
\begin{equation} \label{ncSchur}
{\mathcal S}_{nc, d}({\mathcal U}, {\mathcal Y}) = \{ S \in {\mathcal M}_{nc,d}({\mathcal U}, {\mathcal Y}) \colon
M_{S} \colon H^{2}_{{\mathcal U}}({\mathcal F}_{d}) \to H^{2}_{{\mathcal Y}}({\mathcal F}_{d}) \text{
with } \|M_{S}\|_{op} \le 1 \}.
\end{equation}
The following is the noncommutative analogue of Theorem \ref{T:I1}
for this setting.
\begin{theorem} \label{T:NC1} Let $S(z) \in {\mathcal L}({\mathcal U}, {\mathcal Y}) \langle
\langle z \rangle \rangle$
be a formal power series in $z = (z_{1}, \dots, z_{d})$ with
coefficients in ${\mathcal L}({\mathcal U}, {\mathcal Y})$. Then the following are equivalent:
\begin{enumerate}
\item $S \in {\mathcal S}_{nc,d}({\mathcal U}, {\mathcal Y})$, i.e., $M_{S} \colon
{\mathcal U}\langle
z\rangle \to {\mathcal Y} \langle \langle z \rangle \rangle$ given by
$M_{S} \colon p(z) \to S(z) p(z)$ extends to define a
contraction operator from $H^{2}_{{\mathcal U}}({\mathcal F}_{d})$ into
$H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$.
\item The kernel
\begin{equation} \label{KS}
K_{S}(z, w) : = k_{\text{Sz}}(z,w) - S(z) k_{\text{Sz}}(z,w)
S(w)^{*}
\end{equation}
is a noncommutative positive kernel (see \eqref{pos1} and
\eqref{pos2}).
\item There exists a Hilbert space ${\mathcal X}$ and a unitary connection
operator ${\mathbf U}$ of the form
\begin{equation} \label{NCcolligation}
{\mathbf U} = \begin{bmatrix} A & B \\ C & D \end{bmatrix} =
\begin{bmatrix} A_{1} & B_{1} \\ \vdots & \vdots \\ A_{d} &
B_{d} \\ C & D \end{bmatrix} \colon \begin{bmatrix} {\mathcal X} \\ {\mathcal U}
\end{bmatrix} \to \begin{bmatrix} {\mathcal X} \\ \vdots \\ {\mathcal X} \\ {\mathcal Y}
\end{bmatrix}
\end{equation}
so that $S(z)$ can be realized as a formal power series in the
form
\begin{equation} \label{NCrealization}
S(z) = D + \sum_{j=1}^{d} \sum_{v \in {\mathcal F}_{d}} C {\mathbf A}^{v}B_{j}
z^{v} z_{j}=
D + C (I - Z(z) A)^{-1} Z(z) B
\end{equation}
where we have set
\begin{equation}
Z(z)=\begin{bmatrix}z_1 I_{{\mathcal X}} & \ldots &
z_dI_{{\mathcal X}}\end{bmatrix},\quad
A=\begin{bmatrix} A_1 \\ \vdots \\ A_d\end{bmatrix},\quad
B=\begin{bmatrix} B_1 \\ \vdots \\ B_d\end{bmatrix}.
\label{1.6a}
\end{equation}
\item There exists a Hilbert space ${\mathcal X}$ and a contractive block
operator matrix ${\mathbf U}$
as in \eqref{NCcolligation} such that $S(z)$ is given as in
\eqref{NCrealization}
\end{enumerate}
\end{theorem}
\begin{proof} (1) $\Longrightarrow$ (2) is Theorem 3.15 in
\cite{NFRKHS}. A proof of (2)
$\Longrightarrow$ (3) is done in \cite[Theorem 5.4.1]{Cuntz-scat}
as an application of the Sz.-Nagy-Foia\c s model theory for row
contractions worked out there following ideas of Popescu
\cite{PopescuNF1, PopescuNF2}; an
alternative proof via the ``lurking isometry argument'' can be
found in \cite[Theorem 3.16]{NFRKHS}. The implication (3)
$\Longrightarrow$ (4) is trivial.
The content of (4) $\Longrightarrow$ (1) amounts to Proposition
4.1.3 in \cite{Cuntz-scat}.
\end{proof}
We note that formula \eqref{NCrealization} has the interpretation
that $S(z)$ is the {\em transfer function} of the
multidimensional linear system with evolution along ${\mathcal F}_{d}$
given by the input-state-output equations
\begin{equation} \label{sys}
\Sigma \colon \left\{ \begin{array}{ccc}
x(1 \cdot \alpha) & = & A_{1} x(\alpha) + B_{1} u(\alpha) \\
\vdots & & \vdots \\
x(d \cdot \alpha) & = & A_{d} x(\alpha) + B_{d} u(\alpha) \\
y(\alpha) & = & C x(\alpha) + D u(\alpha)
\end{array} \right.
\end{equation}
initialized with $x(\emptyset) = 0$. Here $u(\alpha)$ takes
values in the input space ${\mathcal U}$, $x(\alpha)$ takes
values in the state space ${\mathcal X}$, and $y(\alpha)$ takes values in
the output space ${\mathcal Y}$ for each $\alpha \in {\mathcal F}_{d}$. If we
introduce the noncommutative $Z$-transform
$$ \{x(\alpha)\}_{\alpha \in {\mathcal F}_{d}} \mapsto \widehat{x}(z) :=
\sum_{\alpha \in
{\mathcal F}_{d}} x(\alpha) z^{\alpha}
$$
and apply this transform to each of the system
equations in \eqref{sys}, one can solve for $\widehat y(z)$ in
terms of $\widehat u(z)$ to arrive at
$$
\widehat y(z) = T_{\Sigma}(z) \cdot \widehat u(z)
$$
where the {\em transfer function} $T_{\Sigma}(z)$ of the system
\eqref{sys} is the formal power series with coefficients in
${\mathcal L}({\mathcal U}, {\mathcal Y})$ given by
\begin{equation} \label{transfunc}
T_{\Sigma}(z) = D + \sum_{j=1}^{d} \sum_{v \in {\mathcal F}_{d}} C
{\mathbf A}^{v}B_{j} z^{v} z_{j} = D + C (I - Z(z) A)^{-1} Z(z) B.
\end{equation}
For complete details, we refer to \cite{Cuntz-scat, BGM1, BGM2}.
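To see concretely how the Taylor coefficients of $T_{\Sigma}(z)$ arise
from the system \eqref{sys}, one can unroll the recursion for words of
length at most $2$: since $x(\emptyset) = 0$,
\begin{align*}
x(j) &= B_{j} u(\emptyset), \qquad
x(kj) = A_{k}B_{j} u(\emptyset) + B_{k} u(j), \\
y(\emptyset) &= D u(\emptyset), \qquad
y(j) = C B_{j} u(\emptyset) + D u(j), \\
y(kj) &= C A_{k} B_{j} u(\emptyset) + C B_{k} u(j) + D u(kj),
\end{align*}
in agreement with the coefficients $s_{\emptyset} = D$, $s_{j} = CB_{j}$
and $s_{kj} = C A_{k} B_{j}$ read off from \eqref{transfunc} (here we
read ${\mathbf A}^{v} = A_{i_{1}} \cdots A_{i_{k}}$ for $v = i_{1} \cdots
i_{k}$, so ${\mathbf A}^{k} = A_{k}$ for a single-letter word $k$).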
The implication (4) $\Longrightarrow$ (2) can be seen directly
via the explicit identity \eqref{NC-ADR} given in the next
proposition; for the commutative case we refer to \cite[Lemma
2.2]{ADR}.
\begin{proposition} \label{P:NC-ADR}
Suppose that ${\mathbf U} = \sbm{ A & B \\ C &
D } \colon {\mathcal X} \oplus {\mathcal U} \to {\mathcal X}^{d} \oplus {\mathcal Y}$ is contractive
with associated transfer function $S \in {\mathcal S}_{nc,d}({\mathcal U},
{\mathcal Y})$ given by \eqref{NCrealization}. Then the kernel
$K_{S}(z,w)$ given by \eqref{KS} can also be represented as
\begin{equation}
K_{S}(z,w) =C(I_{{\mathcal X}} - Z(z)A)^{-1} (I_{{\mathcal X}} - A^{*} Z(w)^{*})^{-1} C^{*}
+ D_{S}(z,w)
\label{NC-ADR}
\end{equation}
where
\begin{align}
D_{S}(z,w) =&\begin{bmatrix} C (I_{{\mathcal X}} - Z(z) A)^{-1} Z(z) & I_{{\mathcal Y}}
\end{bmatrix} k_{\text{Sz}}(z,w)\notag\\
&\cdot (I - {\mathbf U} {\mathbf U}^{*})
\begin{bmatrix} Z(w)^{*} (I - A^{*} Z(w)^{*})^{-1}
C^{*} \\ I_{{\mathcal Y}} \end{bmatrix}.
\label{DS}
\end{align}
\end{proposition}
\begin{proof}
For a fixed $\alpha \in {\mathcal F}_{d}$, let us set
\begin{align}
X_{\alpha} & = z^{\alpha} w^{\alpha^{\top}} I_{{\mathcal Y}} - S(z)
z^{\alpha}w^{\alpha^{\top}} S(w)^{*}, \label{xa}\\
Y_{\alpha} & = \begin{bmatrix} C (I - Z(z)A)^{-1} Z(z) & I_{{\mathcal Y}}
\end{bmatrix} z^{\alpha} w^{\alpha^{\top}} (I - {\mathbf U}
{\mathbf U}^{*}) \begin{bmatrix} Z(w)^{*}(I - A^{*}
Z(w)^{*})^{-1} C^{*} \\ I_{{\mathcal Y}} \end{bmatrix}.\notag
\end{align}
Note that by \eqref{KS} and \eqref{kSz},
$$
\sum_{\alpha \in {\mathcal F}_{d}} X_{\alpha} = K_{S}(z,w)\quad\mbox{and}\quad
\sum_{\alpha \in {\mathcal F}_{d}} Y_{\alpha}= D_{S}(z,w).
$$
Therefore \eqref{NC-ADR} is verified once we
show that
\begin{equation} \label{NC-ADR'}
\sum_{\alpha \in {\mathcal F}_{d}} X_{\alpha} - \sum_{\alpha \in
{\mathcal F}_{d}} Y_{\alpha} =
C(I - Z(z) A)^{-1} (I - A^{*} Z(w)^{*})^{-1} C^{*}.
\end{equation}
Substituting \eqref{NCrealization} into \eqref{xa} gives
\begin{align*}
X_{\alpha}
& = z^{\alpha} w^{\alpha^{\top}}I_{{\mathcal Y}} -
[D + C (I - Z(z)A)^{-1} Z(z)B] \cdot z^{\alpha}
w^{\alpha^{\top}}
\cdot \\
& \qquad \cdot
[D^{*} + B^{*} Z(w)^{*}(I - A^{*} Z(w)^{*})^{-1} C^{*}] \\
& = z^{\alpha} w^{\alpha^{\top}}( I_{{\mathcal Y}} - DD^{*})
-C(I - Z(z)A)^{-1}Z(z)B D^{*}z^{\alpha} w^{\alpha^{\top}} \\
& \qquad
- z^{\alpha} w^{\alpha^{\top}} D B^{*} Z(w)^{*} (I -
A^{*} Z(w)^{*})^{-1} C^{*} \\
& \qquad - C(I - Z(z) A)^{-1} Z(z) B \cdot z^{\alpha}
w^{\alpha^{\top}}
\cdot
B^{*} Z(w)^{*} (I - A^{*} Z(w)^{*})^{-1} C^{*}.
\end{align*}
On the other hand, careful bookkeeping and use of the identity
$$ I - {\mathbf U} {\mathbf U}^{*} =
\begin{bmatrix} I - AA^{*} - BB^{*} & -AC^{*} - BD^{*} \\ -CA^{*}
-DB^{*} & I - CC^{*} -DD^{*} \end{bmatrix}
$$
gives that
\begin{align*}
Y_{\alpha} & = C(I - Z(z)A)^{-1} Z(z) \cdot z^{\alpha}
w^{\alpha^{\top}} \cdot (I - A A^{*} - BB^{*}) Z(w)^{*} (I -
A^{*} Z(w)^{*})^{-1} C^{*} \\
& \qquad - C(I - Z(z)A)^{-1} Z(z) (AC^{*} + B D^{*})
z^{\alpha} w^{\alpha^{\top}} \\
& \qquad - z^{\alpha} w^{\alpha^{\top}} (C A^{*} + D B^{*})
Z(w)^{*} (I - A^{*} Z(w)^{*})^{-1} C^{*} \\
& \qquad
+ z^{\alpha} w^{\alpha^{\top}} (I - CC^{*} -DD^{*}).
\end{align*}
Further careful bookkeeping then shows that
\begin{align}
& X_{\alpha} - Y_{\alpha} = z^{\alpha} w^{\alpha^{\top}} C
C^{*} + C (I -
Z(z)A)^{-1}Z(z) A C^{*} z^{\alpha} w^{\alpha^{\top}} \notag \\
& \qquad + z^{\alpha} w^{\alpha^{\top}} C A^{*} Z(w)^{*} (I -
A^{*} Z(w)^{*})^{-1} C^{*} \notag \\
& \qquad
- C(I - Z(z) A)^{-1} Z(z) \cdot z^{\alpha} w^{\alpha^{\top}}
\cdot (I - AA^{*}) Z(w)^{*} (I - A^{*} Z(w)^{*})^{-1} C^{*}
\notag \\
& = C(I - Z(z)A)^{-1} (z^{\alpha} w^{\alpha^{\top}} I_{{\mathcal X}} -
Z(z) z^{\alpha} w^{\alpha^{\top}} Z(w)^{*})
(I - A^{*} Z(w)^{*})^{-1} C^{*}.
\label{Xalpha-Yalpha}
\end{align}
Note that
$$ Z(z) \cdot z^{\alpha} w^{\alpha^{\top}}\cdot Z(w)^{*}
= \sum_{k=1}^{d} z_{k} z^{\alpha} w^{\alpha^{\top}} w_{k}
$$
and hence
$$
\sum_{\alpha \in {\mathcal F}_{d}\colon |\alpha| = N} Z(z)
z^{\alpha} w^{\alpha^{\top}} Z(w)^{*}
= \sum_{\alpha \in {\mathcal F}_{d} \colon |\alpha| = N+1}
z^{\alpha} w^{\alpha^{\top}} I_{{\mathcal X}}.
$$
Therefore,
\begin{align}
& \sum_{\alpha \in {\mathcal F}_{d}} z^{\alpha} w^{\alpha^{\top}} I_{{\mathcal X}}
- \sum_{\alpha \in {\mathcal F}_{d}} Z(z)
z^{\alpha}w^{\alpha^{\top}} Z(w)^{*} \notag \\
& \qquad =\sum_{N=0}^{\infty} \sum_{\alpha \in {\mathcal F}_{d}\colon
|\alpha| = N}
z^{\alpha} w^{\alpha^{\top}} I_{{\mathcal X}}
- \sum_{N=1}^{\infty} \sum_{\alpha \in {\mathcal F}_{d}\colon |\alpha| = N}
z^{\alpha}w^{\alpha^{\top}}I_{{\mathcal X}}= I_{{\mathcal X}}.
\label{middle}
\end{align}
Summing \eqref{Xalpha-Yalpha} over $\alpha \in {\mathcal F}_{d}$ and
combining with \eqref{middle} gives the result \eqref{NC-ADR'} as wanted.
\end{proof}
Given a $d$-tuple of operators $A_{1}, \dots, A_{d}$ on the
Hilbert space ${\mathcal X}$, we let ${\mathbf A} = (A_{1}, \dots, A_{d})$ denote
the operator $d$-tuple while
$A$ denotes the associated column matrix as in \eqref{1.6a}
considered as an operator from ${\mathcal X} $ into ${\mathcal X}^{d}$. If $C$ is
an operator from ${\mathcal X}$ into an output space ${\mathcal Y}$, we say that
$(C, {\mathbf A})$ is an output pair. The paper \cite{BBF1} studied
output pairs and connections with the associated state-output
noncommutative linear system \eqref{sys}.
We are particularly interested in the case where in addition $(C,
{\mathbf A})$ is {\em contractive}, i.e.,
\begin{equation}
A_{1}^{*} A_{1} + \cdots + A_{d}^{*}A_{d} + C^{*} C \le I_{{\mathcal X}}.
\label{cont}
\end{equation}
In this case we have the following result.
\begin{proposition} \label{P:BBF1}
Suppose that $(C, {\mathbf A})$ is a contractive
output pair. Then:
\begin{enumerate}
\item The observability operator
\begin{equation} \label{ob-op}
{\mathcal O}_{C, {\mathbf A}} \colon x \mapsto \sum_{\alpha \in
{\mathcal F}_{d}}( C {\mathbf A}^{\alpha} x) z^{\alpha} = C(I - Z(z) A)^{-1} x
\end{equation}
maps ${\mathcal X}$ contractively into $H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$.
\item The space $\operatorname{Ran}\, {\mathcal O}_{C, {\mathbf A}}$
is a
NFRKHS with norm given by
$$
\| {\mathcal O}_{C, {\mathbf A}}x \|_{{\mathcal H}(K_{C,{\mathbf A}})}=\| Q x\|_{{\mathcal X}}
$$
where $Q$ is the orthogonal projection onto
$(\operatorname{Ker}\,{\mathcal O}_{C,{\mathbf A}})^{\perp}$ and with
formal reproducing kernel $K_{C,{\mathbf A}}$ given by
\begin{equation} \label{KCA}
K_{C,{\mathbf A}}(z,w) = C(I - Z(z)A)^{-1} (I - A^{*}Z(w)^{*})^{-1}
C^{*}.
\end{equation}
\item ${\mathcal H}(K_{C,{\mathbf A}})$ is invariant under the backward shift
operators $S_{j}^{*}$ given by \eqref{bs} for $j = 1, \dots, d$
and moreover the difference-quotient inequality
\begin{equation} \label{DQineq}
\sum_{j=1}^{d} \|S_{j}^{*}f\|^{2}_{{\mathcal H}(K_{C, {\mathbf A}})} \le
\|f\|^{2}_{{\mathcal H}(K_{C, {\mathbf A}})} - \|f_{\emptyset}\|^{2}_{{\mathcal Y}}\quad
\text{for all} \quad f \in {\mathcal H}(K_{C, {\mathbf A}})
\end{equation}
is satisfied.
\item ${\mathcal H}(K_{C,{\mathbf A}})$ is isometrically included in
$H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$ if and only if in addition ${\mathbf A}$ is
{\em strongly stable}, i.e.,
\begin{equation} \label{stronglystable}
\lim_{N \to \infty} \sum_{\alpha \in {\mathcal F}_{d} \colon
|\alpha| = N} \| {\mathbf A}^{\alpha} x \|^{2} = 0\quad \text{for all} \quad
x\in {\mathcal X}.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof} We refer the reader to \cite[Theorem
2.10]{BBF1} for complete
details of the proof. Here we only note that the
backward-shift-invariance property in part (3) is a
consequence of the intertwining relation
\begin{equation} \label{intertwine}
S_{j}^{*}{\mathcal O}_{C, {\mathbf A}} = {\mathcal O}_{C,
{\mathbf A}} A_{j} \quad\text{for} \quad j = 1, \dots, d
\end{equation}
and that, in the observable case, \eqref{DQineq} is equivalent to
the contractivity property \eqref{cont} of $(C, {\mathbf A})$. \end{proof}
The paper \cite{BBF1} studies the NFRKHSs ${\mathcal H}(K)$ where the
kernel $K$ has the special form $K_{C,{\mathbf A}}$ for a contractive
output pair as in \eqref{KCA}.
Here we wish to study the noncommutative analogues of de
Branges-Rovnyak spaces ${\mathcal H}(K_{S})$ with $K_{S}$ given by
\eqref{KS}.
The following corollary to Proposition
\ref{P:NC-ADR} gives a connection between kernels of the form
$K_{C,{\mathbf A}}$ for a contractive output pair $(C, {\mathbf A})$
and kernels of the form $K_{S}$ for a noncommutative Schur-class
multiplier $S \in {\mathcal S}_{nc, d}({\mathcal U}, {\mathcal Y})$.
\begin{corollary} \label{C:NC-ADR}
Suppose that the operator ${\mathbf U}$ of the form \eqref{NCcolligation}
is contractive with
associated noncommutative Schur multiplier $S(z)$ given by
\eqref{NCrealization}. Suppose that the associated
output-pair $(C,{\mathbf A})$ with ${\mathbf A} = (A_{1}, \dots, A_{d})$ is
observable (i.e., the observability operator ${\mathcal O}_{C,
{\mathbf A}}$ given by \eqref{ob-op} is injective).
Then the associated kernels
$K_{S}(z,w)$ and $K_{C,{\mathbf A}}(z,w)$
given by \eqref{KS} and \eqref{KCA} are the same
\begin{equation} \label{identical-kernels}
K_{S}(z,w) = K_{C,{\mathbf A}}(z,w)
\end{equation}
if and only if ${\mathbf U}$ is coisometric.
\end{corollary}
\begin{proof}
By Proposition \ref{P:NC-ADR} the identity of kernels
\eqref{identical-kernels} holds if and only if the defect
kernel $D_{S}(z,w)$ defined in \eqref{DS} is zero.
Let us partition $I - {\mathbf U} {\mathbf
U}^{*}$ as a $(d+1) \times (d+1)$ block matrix with respect to
the $(d+1)$-fold decomposition ${\mathcal X}^{d} \oplus {\mathcal Y}$ of its
domain and range spaces
$$
I - {\mathbf U} {\mathbf U}^{*} = [M_{i,j}]_{1 \le i, j \le d+1}
$$
and let us write $D_S(z,w)$ as a formal power series
$$
D_S(z,w) = \sum_{v,v' \in {\mathcal F}_{d}} D_{v,v'} z^{v} w^{v'}.
$$
It follows from \eqref{DS} that $D_{v,v'}$ is given by
\begin{align*}
 D_{v,v'}& = \sum_{\beta, \alpha, \gamma \in {\mathcal F}_{d}, i,j \in
 \{1, \dots, d\} \colon \beta i \alpha = v, \alpha^{\top} j
 \gamma^{\top} = v'} C {\mathbf A}^{\beta} M_{i,j} {\mathbf A}^{* \gamma^{\top}}
 C^{*} \\
 & \qquad + \sum_{\beta \in {\mathcal F}_{d}, i \in \{1, \dots, d\}
 \colon \beta i = v (v^{\prime \top})^{-1}} C {\mathbf A}^{\beta} M_{i,d+1} \\
 & \qquad + \sum_{j \in \{1, \dots, d \}, \gamma \in {\mathcal F}_{d}
 \colon j \gamma^{\top} = v'(v^{\top})^{-1}} M_{d+1,j}
 {\mathbf A}^{* \gamma^{\top}} C^{*}+
 \delta_{v', v^{\top}}\, M_{d+1,d+1},
\end{align*}
where in general we write
$$ v w^{-1} = \begin{cases} v' &\text{if } v=v'w \\
\text{undefined} &\text{otherwise.}
\end{cases}
$$
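(For instance, with $d \ge 2$ one has $(12)(2)^{-1} = 1$ since $12 = 1
\cdot 2$, while $(12)(1)^{-1}$ is undefined since the word $12$ does
not end with the letter $1$.)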
Considering the case $v=v' = \emptyset$ leads to $M_{d+1, d+1} = 0$.
Considering next the case $v=i_{0}$, $v' = \emptyset$ leads to
$C M_{i_{0}, d+1} = 0$ for $i_{0} = 1, \dots, d$. Similarly, the case
$v = \emptyset$, $v' = j_{0}$ leads to $M_{d+1,j_{0}} C^{*} = 0$ for
$j_{0} =1, \dots, d$. Considering next the case $v = i_{0}$, $v' =
j_{0}$ leads to $C M_{i_{0},j_{0}} C^{*} = 0$ for all $i_{0},j_{0}
= 1, \dots, d$. The general case together with an induction argument
on the length of words leads to the general collapsing
$$
C {\mathbf A}^{\beta} M_{i,j} {\mathbf A}^{*\gamma^{\top}} C^{*} = 0, \qquad
C {\mathbf A}^{\beta} M_{i,d+1} = 0, \qquad
M_{d+1,j} {\mathbf A}^{*\gamma^{\top}} C^{*} = 0
$$
for all $\beta, \gamma \in {\mathcal F}_{d}$ and $i,j = 1, \dots, d$.
Since observability of $(C, {\mathbf A})$ means that
$\overline{\operatorname{span}}_{\beta \in {\mathcal F}_{d}}
\operatorname{Ran}\, {\mathbf A}^{*\beta} C^{*} = {\mathcal X}$, these identities
force $I - {\mathbf U} {\mathbf U}^{*} = 0$, i.e., that ${\mathbf U}$ is
coisometric as wanted.
\end{proof}
Alternatively, we can suppose that we know only the contractive
output pair $(C, {\mathbf A})$ and we seek to find a noncommutative Schur
multiplier $S \in {\mathcal
S}_{nc,d}({\mathcal U}, {\mathcal Y})$ so that \eqref{identical-kernels} holds.
We start with a preliminary result.
\begin{theorem} \label{T:CAtoS}
Let $(C,{\mathbf A})$ with $C \in {\mathcal L}({\mathcal X}, {\mathcal Y})$ be a
contractive output-pair. Then there exists an input space
${\mathcal U}$ and an $S \in {\mathcal S}_{nc, d}({\mathcal U}, {\mathcal Y})$ so that
\begin{equation} \label{KS=KCA}
K_{S}(z,w) = K_{C, {\mathbf A}}(z,w).
\end{equation}
\end{theorem}
\begin{proof} By the result of Corollary \ref{C:NC-ADR}, it
suffices to find an input space ${\mathcal U}$ and an operator
$\sbm{B \\ D } \colon {\mathcal U} \to {\mathcal X}^{d} \oplus {\mathcal Y}$ so that
${\mathbf U} : = \sbm{ A & B \\ C & D } \colon {\mathcal X} \oplus
{\mathcal U} \to {\mathcal X}^{d} \oplus {\mathcal Y}$ is a coisometry. The details
for such a coisometry-completion problem are carried out in
the proof of Theorem 2.1 in \cite{BBF2}.
\end{proof}
We now consider the situation where we are given a contractive
output-pair $(C, {\mathbf A})$ and a noncommutative Schur multiplier $S \in
{\mathcal S}_{nc, d}({\mathcal U}, {\mathcal Y})$ so that \eqref{KS=KCA} holds.
\begin{lemma}\label{L:generalfact}
Let
$$
F(z) = \sum_{v \in {\mathcal F}_{d}} F_{v}
z^{v} \in {\mathcal L}({\mathcal U}, {\mathcal Y})\langle \langle z \rangle \rangle\quad \text{and}\quad
G(z) = \sum_{v \in {\mathcal F}_{d}}
G_{v} z^{v} \in {\mathcal L}({\mathcal U}', {\mathcal Y})\langle \langle z \rangle \rangle
$$
be two formal power series.
Then the formal power series identity
\begin{equation} \label{=kernels}
F(z) F(w)^{*} = G(z) G(w)^{*}
\end{equation}
is equivalent to the existence of a (necessarily unique)
isometry $V$ from
$$
{\mathcal D}_{V} : = \overline{\operatorname{span}}_{v \in
{\mathcal F}_{d}} \operatorname{Ran}\, F_{v}^{*} \subset
{\mathcal U}\quad\mbox{onto}\quad
{\mathcal R}_{V} : = \overline{\operatorname{span}}_{v \in
{\mathcal F}_{d}} \operatorname{Ran}\, G_{v}^{*} \subset {\mathcal U}'
$$
so that the identity of formal power series
\begin{equation} \label{series-id}
V F(w)^{*} = G(w)^{*}
\end{equation}
holds.
\end{lemma}
\begin{proof} If there is an isometry $V$ satisfying
\eqref{series-id}, equating coefficients of $w^{v^{\top}}$ gives
$$ V F_{v}^{*} = G_{v}^{*}.
$$
The isometric property of $V$ then leads to
\begin{equation} \label{coef-id}
F_{v'} F_{v}^{*} = G_{v'} G_{v}^{*}\quad \text{for all} \quad v,v' \in
{\mathcal F}_{d}
\end{equation}
from which we get
$$ \sum_{v',v \in {\mathcal F}_{d}} F_{v'} F_{v}^{*} z^{v'} w^{v^{\top}} =
\sum_{v',v \in {\mathcal F}_{d}} G_{v'} G_{v}^{*} z^{v'} w^{v^{\top}}
$$
which is the same as \eqref{=kernels} written out in coefficient form.
Conversely, the assumption \eqref{=kernels} leads to \eqref{coef-id}.
Then the formula
\begin{equation} \label{formula}
V \colon F_{v}^{*} y \mapsto G_{v}^{*} y \quad\text{for}\quad v \in
{\mathcal F}_{d} \; \text{ and } \; y \in {\mathcal Y}
\end{equation}
extends by linearity and continuity to a well-defined isometry (still
denoted by $V$) from ${\mathcal D}_{V}$ onto ${\mathcal R}_{V}$.
Since identification of coefficients of $w^{v^{\top}}$ on both sides of
\eqref{series-id} reduces to \eqref{formula}, we see that
\eqref{series-id} follows as wanted.
\end{proof}
\begin{lemma} \label{L:2.1}
Let $(C,{\mathbf A})$ be a contractive output pair and $S \in {\mathcal
L}({\mathcal U}, {\mathcal Y})\langle \langle z \rangle \rangle$ a formal power
series. Then the following are equivalent:
\begin{enumerate}
\item \eqref{KS=KCA} holds, i.e.,
\begin{equation} \label{KS=KCA'}
C(I - Z(z)A)^{-1} (I - A^{*} Z(w)^{*})^{-1} C^{*} =
k_{\text{Sz}}(z,w) I_{{\mathcal Y}} -
S(z) k_{\text{Sz}}(z,w) S(w)^{*}.
\end{equation}
\item The alternative version of \eqref{KS=KCA'} holds:
\begin{equation} \label{KS=KCAalt}
C(I - Z(z)A)^{-1} (I_{{\mathcal X}} - Z(z) Z(w)^{*}) (I - A^{*}
Z(w)^{*})^{-1} C^{*} = I - S(z) S(w)^{*}.
\end{equation}
\item There is an isometry
$$
V = \begin{bmatrix}A_{V} & B_{V} \\
C_{V} & D_{V}\end{bmatrix} \colon
[\overline{\operatorname{Ran}}
({\mathcal O}_{C,{\mathbf A}})^{*}]^{d} \oplus {\mathcal Y} \to {\mathcal X} \oplus{\mathcal U}
$$
so that we have the identity of formal power series:
\begin{equation} \label{isomV}
\begin{bmatrix} A_{V} & B_{V} \\ C_{V} & D_{V}
\end{bmatrix} \begin{bmatrix} Z(w)^{*} (I -
A^{*} Z(w)^{*})^{-1} C^{*} \\ I_{{\mathcal Y}} \end{bmatrix} =
\begin{bmatrix} (I - A^{*} Z(w)^{*})^{-1} C^{*} \\
S(w)^{*} \end{bmatrix}.
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
\textbf{(1) $\Longleftrightarrow$ (2):} Suppose that
\eqref{KS=KCA'} holds. Then
\begin{align*}
& C(I - Z(z) A)^{-1} Z(z) Z(w)^{*} (I - A^{*} Z(w)^{*})^{-1}
C^{*} \\
& \qquad =
\sum_{k=1}^{d} w_{k} C (I - Z(z)A)^{-1} (I -
A^{*}Z(w)^{*})^{-1} C^{*} z_{k} \\
& \qquad = \sum_{k=1}^{d} w_{k} k_{\text{Sz}}(z,w) z_{k} - S(z)
\left(
\sum_{k=1}^{d} w_{k} k_{\text{Sz}}(z,w) z_{k} \right) S(w)^{*} \\
& \qquad = (k_{\text{Sz}}(z,w) - 1)I_{{\mathcal Y}} - S(z) \left(
k_{\text{Sz}}(z,w)-1\right) S(w)^{*}
\end{align*}
and consequently,
\begin{align*}
& C(I - Z(z)A)^{-1} (I - Z(z) Z(w)^{*}) (I -
A^{*}Z(w)^{*})^{-1}C^{*} \\
& \qquad =
k_{\text{Sz}}(z,w) I_{{\mathcal Y}} - S(z)k_{\text{Sz}}(z,w) S(w)^{*} \\
& \qquad \qquad - \left[ (k_{\text{Sz}}(z,w) - 1) I_{{\mathcal Y}} - S(z)
\left(k_{\text{Sz}}(z,w)-1\right) S(w)^{*} \right] \\
&\qquad = I_{{\mathcal Y}} - S(z) S(w)^{*}
\end{align*}
and we recover \eqref{KS=KCAalt} as desired.
Conversely, assume that \eqref{KS=KCAalt} holds. Multiplication
of \eqref{KS=KCAalt} on the left by $w^{\gamma^{\top}}$ and on the
right by $z^{\gamma}$ gives
\begin{align}
& C(I - Z(z)A)^{-1} \left(z^{\gamma}w^{\gamma^{\top}}I_{{\mathcal X}} -
Z(z)z^{\gamma}w^{\gamma^{\top}}
Z(w)^{*}\right) (I - A^{*}Z(w)^{*})^{-1}C^{*} \notag \\
& \qquad = z^{\gamma} w^{\gamma^{\top}}I_{{\mathcal Y}} - S(z)
z^{\gamma}w^{\gamma^{\top}} S(w)^{*}.
\label{KS-KCApre}
\end{align}
Summing up \eqref{KS-KCApre} over all $\gamma \in {\mathcal F}_{d}$ leaves
us with \eqref{KS=KCA'}. This completes the proof of (1)
$\Longleftrightarrow$ (2).
\textbf{(2) $\Longleftrightarrow$ (3):}
Observe that \eqref{KS=KCAalt} can be written in equivalent block matrix
form as
\begin{align*}
& \begin{bmatrix} C(I - Z(z)A)^{-1} Z(z) & I_{{\mathcal Y}} \end{bmatrix}
\begin{bmatrix} Z(w)^{*}(I - A^{*}Z(w)^{*})^{-1}C^{*} \\
I_{{\mathcal Y}} \end{bmatrix} \\
& \qquad =
\begin{bmatrix} C (I - Z(z) A)^{-1} & S(z) \end{bmatrix}
\begin{bmatrix} (I - A^{*} Z(w)^{*})^{-1} C^{*} \\
S(w)^{*} \end{bmatrix}.
\end{align*}
Now we apply Lemma \ref{L:generalfact} to the particular case
$$
F(w)^{*} = \begin{bmatrix} Z(w)^{*}(I - A^{*}Z(w)^{*})^{-1}C^{*} \\
I_{{\mathcal Y}} \end{bmatrix}, \qquad
G(w)^{*} = \begin{bmatrix} (I - A^{*} Z(w)^{*})^{-1} C^{*} \\
S(w)^{*} \end{bmatrix}
$$
to see the equivalence of (2) and (3). It is easily
checked that ${\mathcal D}_{V}$ for our case here is the $d$-fold
inflation of the observability subspace inside ${\mathcal X}^{d}$:
$$
{\mathcal D}_{V} =
[\overline{\operatorname{span}}_{v \in {\mathcal F}_{d}} \operatorname{Ran}\,
{\mathbf A}^{* v} C^{*}]^{d} \oplus {\mathcal Y} = [ \overline{\operatorname{Ran}} \,
({\mathcal O}_{C, {\mathbf A}})^{*}]^{d} \oplus {\mathcal Y}.
$$
\end{proof}
\begin{theorem} \label{T:CAStoB}
Suppose that $S(z) \in {\mathcal S}_{nc, d}({\mathcal U}, {\mathcal Y})$ and that $(C,
{\mathbf A})$ is an observable,
contractive output-pair such that \eqref{KS=KCA} holds.
Then there exists a unique operator
$ B = \sbm{ B_{1} \\ \vdots \\ B_{d}} \colon {\mathcal U}
\to {\mathcal X}^{d}$
so that ${\mathbf U} = \sbm{ A & B \\ C & s_{\emptyset}}$ is a coisometry
and
${\mathbf U}$ provides a realization for $S$: $S(z) = s_{\emptyset} + C
(I - Z(z) A)^{-1} Z(z) B$.
\end{theorem}
\begin{proof} We are given the operators $A \colon {\mathcal X} \to
{\mathcal X}^{d}$, $C \colon {\mathcal X} \to {\mathcal Y}$ and $D = s_{\emptyset} \colon
{\mathcal U} \to {\mathcal Y}$ and seek an operator $B \colon {\mathcal U} \to {\mathcal X}^{d}$ so
that $S(z) = D + C(I - Z(z)A)^{-1} Z(z)B$, or, what is the same,
so that
$$
S(w)^{*} = D^{*} + B^{*}Z(w)^{*}(I - A^{*} Z(w)^{*})^{-1} C^{*}.
$$
This last identity can be rewritten as
\begin{equation} \label{want1}
\begin{bmatrix} A^{*} & C^{*} \\ B^{*} & D^{*} \end{bmatrix}
\begin{bmatrix} Z(w)^{*}(I - A^{*} Z(w)^{*})^{-1}C^{*} \\
I_{{\mathcal Y}} \end{bmatrix} = \begin{bmatrix} (I - A^{*}
Z(w)^{*})^{-1} C^{*} \\ S(w)^{*} \end{bmatrix}
\end{equation}
since the identity
$$
A^{*} Z(w)^{*}(I - A^{*} Z(w)^{*})^{-1}C^{*} + C^{*} = (I - A^{*}
Z(w)^{*})^{-1} C^{*}
$$
expressing equality of the top components holds true automatically.
Lemma \ref{L:2.1} tells us that there is an isometry $V =
\sbm{A_{V} & B_{V} \\ C_{V} & D_{V}} \colon
{\mathcal X}^{d} \oplus {\mathcal Y} \to
{\mathcal X} \oplus {\mathcal U}$ (here $[\overline{\operatorname{Ran}}\,
({\mathcal O}_{C,{\mathbf A}})^{*}]^{d} = {\mathcal X}^{d}$ by the observability
assumption) which has the same action as required of $\sbm{A^{*}
& C^{*} \\ B^{*} & D^{*} }$ in \eqref{want1}. It suffices to set
$B^{*} = C_{V}$.
\end{proof}
We say that two colligations ${\mathbf U} = \sbm{A & B \\ C & D } \colon
{\mathcal X} \oplus {\mathcal U} \to {\mathcal X}^{d} \oplus {\mathcal Y}$ and ${\mathbf U}' = \sbm{A'
& B' \\ C' & D' }$ are {\em unitarily equivalent} if there is a
unitary operator $U \colon {\mathcal X} \to {\mathcal X}'$ such that
$$
\begin{bmatrix} \oplus_{k=1}^{d} U & 0 \\ 0 & I_{{\mathcal Y}} \end{bmatrix}
\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \begin{bmatrix}
A' & B' \\ C' & D' \end{bmatrix} \begin{bmatrix} U & 0 \\ 0
& I_{{\mathcal U}} \end{bmatrix}.
$$
\begin{corollary} \label{C:unique-coisometric}
Any two observable, coisometric realizations ${\mathbf U}$ and
${\mathbf U}'$ for the same $S \in {\mathcal S}_{nc,d}({\mathcal U}, {\mathcal Y})$ are
unitarily equivalent.
\end{corollary}
\begin{proof}
Suppose that ${\mathbf U} = \sbm{A & B \\ C & D }$ and
${\mathbf U}' = \sbm{A' & B' \\ C' & D'}$ are two such
realizations. From Proposition \ref{P:NC-ADR} we see that
$$
K_{C, {\mathbf A}}(z,w) = K_{C',{\mathbf A}'}(z,w).
$$
Then Theorem 2.13 of \cite{BBF1} implies that $(C,{\mathbf A})$ is
unitarily equivalent to $(C', {\mathbf A}')$, so there is a unitary
operator $U \colon {\mathcal X} \to {\mathcal X}'$ such that
$$
C' = C U^{*}\quad\text{and}\quad A_{j}' = U A_{j} U^{*}\quad
\text{for}\quad j=1,
\dots, d.
$$
Then $\widetilde{\mathbf U} = \sbm{A' & (\oplus_{k=1}^{d} U) B
\\ C' & D }$ and ${\mathbf U}' = \sbm{A' & B' \\ C' & D' }$ (where
necessarily $D = D' = s_{\emptyset}$) both
give coisometric realizations of $S$ with the same observable
output pair $(C', {\mathbf A}')$. By the uniqueness assertion of Theorem
\ref{T:CAStoB}, it follows that $B' = (\oplus_{k=1}^{d}U) B$ as
well, and hence ${\mathbf U}$ and ${\mathbf U}'$ are unitarily
equivalent.
\end{proof}
\section{de Branges-Rovnyak model colligations} \label{S:dBR}
In this section we show that any $S \in {\mathcal S}_{nc,d}({\mathcal U}, {\mathcal Y})$ has a
canonical observable, coisometric realization which uses
${\mathcal H}(K_{S})$ as the state space. We first need some preliminaries
concerning the finer structure of the
noncommutative de Branges-Rovnyak functional-model spaces
${\mathcal H}(K_{S})$. Let us denote the Taylor coefficients of
$S(z)$ as $s_{v}$, so
$$
S(z) = \sum_{v \in {\mathcal F}_{d}} s_{v} z^{v},
$$
to avoid confusion with the (right) shift operators $S_{j}
\colon f(z) \mapsto f(z) \cdot z_{j}$.
Just as in the classical case, the de Branges-Rovnyak space
${\mathcal H}(K_{S})$ has several equivalent characterizations.
\begin{proposition} \label{P:HKS-char}
Let $S \in {\mathcal S}_{nc, d}({\mathcal U}, {\mathcal Y})$ and let ${\mathcal H}$ be a Hilbert
space of formal power series in ${\mathcal Y}\langle \langle z \rangle
\rangle$. Then the following are equivalent.
\begin{enumerate}
\item ${\mathcal H}$ is equal to the NFRKHS ${\mathcal H}(K_{S})$
isometrically, where $K_S(z,w)$ is the noncommutative
positive kernel given by \eqref{KS}.
\item ${\mathcal H} = \operatorname{Ran}\, (I - M_{S}
M_{S}^{*})^{1/2}$ with lifted norm
\begin{equation} \label{lifted-norm}
\| (I - M_{S}M_{S}^{*})^{1/2} g \|_{{\mathcal H}} = \| Q g
\|_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})}
\end{equation}
where $Q$ is the orthogonal projection of
$H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$ onto $(\operatorname{Ker}\, (I -
M_{S}M_{S}^{*})^{1/2})^{\perp}$.
\item ${\mathcal H}$ is the space of all formal power series $f(z)
\in {\mathcal Y}\langle \langle z \rangle \rangle$ with finite
${\mathcal H}$-norm, where the ${\mathcal H}$-norm is given by
\begin{equation} \label{cH-norm}
\|f\|^{2}_{{\mathcal H}} = \sup_{g \in H^{2}_{{\mathcal U}}({\mathcal F}_{d})}
\left\{
\| f + M_{S} g \|^{2}_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})} - \| g
\|^{2}_{H^{2}_{{\mathcal U}}({\mathcal F}_{d})} \right\}.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
\textbf{ (1) $\Longleftrightarrow$ (2):}
It is straightforward to verify the identity
$$
(I - M_{S} M_{S}^{*}) (k_{\text{Sz}}(\cdot, w) y)=
K_{S}(\cdot, w) y \text{ for each } y \in {\mathcal Y}.
$$
(The interpretation for this is that, for each word
$\gamma$, the coefficient of $w^{\gamma}$ of the left
hand side agrees with the coefficient of $w^{\gamma}$ on
the right hand side as elements of ${\mathcal H}(k_{\text{Sz}}
I_{{\mathcal Y}}) = H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$---see \cite{NFRKHS}).
We then see that
\begin{align*}
& \langle (I - M_{S}M_{S}^{*}) k_{\text{Sz}}( \cdot, w') y', (I -
M_{S} M_{S}^{*}) k_{\text{Sz}}(\cdot, w) y
\rangle_{{\mathcal H}(K_{S})\langle \langle w' \rangle \rangle
\times {\mathcal H}(K_{S})\langle \langle w \rangle \rangle} \\
& \qquad = \langle K_{S}(\cdot, w') y', K_{S}(\cdot, w)y
\rangle_{{\mathcal H}(K_{S})\langle \langle w' \rangle \rangle
\times {\mathcal H}(K_{S}) \langle \langle w \rangle \rangle} \\
& \qquad = \langle K_{S}(w,w')y', y \rangle_{{\mathcal Y}\langle \langle w',
w \rangle \rangle \times {\mathcal Y}} \\
& \qquad = \langle K_{S}(\cdot, w') y', k_{\text{Sz}}(\cdot, w)y
\rangle_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})\langle \langle w' \rangle \rangle
\times H^{2}_{{\mathcal Y}}({\mathcal F}_{d})\langle \langle w \rangle \rangle} \\
& \qquad =
\langle (I-M_{S}M_{S}^{*}) k_{\text{Sz}}(\cdot, w') y',
k_{\text{Sz}}(\cdot, w) y \rangle_{(H^{2}_{{\mathcal Y}}({\mathcal F}_{d})
\langle \langle w' \rangle \rangle \times H^{2}_{{\mathcal Y}}({\mathcal F}_{d})
\langle \langle w \rangle \rangle)}.
\end{align*}
It follows that $\operatorname{Ran}\, (I - M_{S} M_{S}^{*}) \subset
{\mathcal H}(K_{S})$ with
$$
\langle (I - M_{S}M_{S}^{*}) g, (I - M_{S} M_{S}^{*}) g' \rangle
_{{\mathcal H}(K_{S})} = \langle (I - M_{S}M_{S}^{*}) g, g'
\rangle_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})}
$$
for $g, g' \in H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$. The precise characterization
${\mathcal H}(K_{S}) = \operatorname{Ran} \, (I - M_{S} M_{S}^{*})^{1/2}$
with the lifted norm \eqref{lifted-norm} now follows via a
completion argument.
\textbf{(2) $\Longleftrightarrow$ (3):} This follows from the
argument in \cite[NI-6]{sarasonbook}.
\end{proof}
\begin{proposition} \label{P:H(KS)} Suppose that $S \in {\mathcal
S}_{nc, d}({\mathcal U}, {\mathcal Y})$
and let ${\mathcal H}(K_{S})$ be the associated NFRKHS where $K_{S}$ is
given by \eqref{KS}. Then the following conditions hold:
\begin{enumerate}
\item
The NFRKHS ${\mathcal H}(K_{S})$ is contained
contractively in $H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$:
$$ \| f \|^{2}_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})} \le
\|f\|^{2}_{{\mathcal H}(K_{S})}\quad \text{ for all } f \in {\mathcal H}(K_{S}).
$$
\item ${\mathcal H}(K_{S})$ is invariant under each of the
backward-shift operators $S_{j}^{*}$
given by \eqref{bs} for $j = 1, \dots, d$, and moreover,
the difference-quotient
inequality \eqref{DQineq} holds for ${\mathcal H}(K_{S})$:
\begin{equation} \label{DQineq-HKS}
\sum_{j=1}^{d} \| S_{j}^{*}f\|^{2}_{{\mathcal H}(K_{S})}
\le \|f\|^{2}_{{\mathcal H}(K_{S})} - \|f_{\emptyset}\|^{2}.
\end{equation}
\item For each $u \in {\mathcal U}$ and $j = 1, \dots, d$, the
vector $S_{j}^{*} (M_{S}u)$ belongs to ${\mathcal H}(K_{S})$ with the
estimate
\begin{equation} \label{estimate}
\sum_{j=1}^{d} \| S_{j}^{*} (M_{S}u) \|^{2}_{{\mathcal H}(K_{S})}
\le \| u \|^{2}_{{\mathcal U}} - \| s_{\emptyset} u \|^{2}_{{\mathcal Y}}.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof} We know from Theorem \ref{T:NC1} that $S(z)$
can be realized as in \eqref{NCcolligation} and
\eqref{NCrealization} with ${\mathbf U} = \sbm{A & B \\
C & D }$ a coisometry (or even unitary).
From Proposition \ref{P:NC-ADR} it follows that
$K_{S}(z,w) = K_{C, {\mathbf A}}(z,w)$ and hence ${\mathcal H}(K_{S}) =
{\mathcal H}(K_{C,{\mathbf A}})$ isometrically. Conditions (1) and (2)
now follow from the properties of ${\mathcal H}(K_{C, {\mathbf A}})$ listed
in Proposition \ref{P:BBF1} and the discussion
immediately following.
One can also prove points (1) and (2) directly
from the characterization of ${\mathcal H}(K_{S})$ in part (3) of
Proposition \ref{P:HKS-char} (and thereby bypass
realization theory) as follows; these proofs
follow the proofs for the classical case in \cite{dbr1,
dbr2}. For the contractive inclusion property (part (1)),
note that
\begin{align*}
\|f\|^{2}_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})}& =
\left. \left[ \|f + M_{S}g \|^{2}_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})} - \|
g \|^{2}_{H^{2}_{{\mathcal U}}({\mathcal F}_{d})} \right] \right|_{g = 0}
\\
& \qquad \le
\sup_{g \in H^{2}_{{\mathcal U}}({\mathcal F}_{d})} \left\{ \| f +
M_{S}g\|^{2}_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})} - \| g
\|^{2}_{H^{2}_{{\mathcal U}}({\mathcal F}_{d})}\right\}
= \| f\|^{2}_{{\mathcal H}(K_{S})}.
\end{align*}
To verify part (2), we compute
\begin{align*}
& \sum_{j=1}^{d} \| S_{j}^{*}f \|^{2}_{{\mathcal H}(K_{S})} =
\sup_{g_{j}} \left\{
\sum_{j=1}^{d} \left[\|S_{j}^{*}f + M_{S} g_{j}
\|^{2}_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})} - \| g_{j}
\|^{2}_{H^{2}_{{\mathcal U}}({\mathcal F}_{d})} \right]\right \} \\
& = \sup_{g_{j}}
\left\{ \sum_{j=1}^{d}\left[ \| S_{j}S_{j}^{*}f +
M_{S} (g_{j}z_{j}) \|^{2} - \| g_{j}z_{j}
\|^{2}_{H^{2}_{{\mathcal U}}({\mathcal F}_{d})} \right]\right\} \\
& = \sup_{g_{j}}
\left\{\sum_{j=1}^{d} \| S_{j}
S_{j}^{*}f + M_{S} (g_{j}z_{j})
\|^{2}_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})} +
\|f_{\emptyset}\|^{2}_{{\mathcal Y}} - \sum_{j=1}^{d} \|g_{j}
z_{j}\|^{2}_{H^{2}_{{\mathcal U}}({\mathcal F}_{d})} \right\} -
\| f_{\emptyset}\|^{2}_{{\mathcal Y}} \\
& = \sup_{g \in H^{2}_{{\mathcal U}}({\mathcal F}_{d}) \colon
g_{\emptyset} = 0} \left\{ \| f + M_{S} g
\|^{2}_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})} -
\| g \|^{2}_{H^{2}_{{\mathcal U}}({\mathcal F}_{d})}\right\} -
\|f_{\emptyset}\|^{2}_{{\mathcal Y}} \\
& \le \sup_{g \in H^{2}_{{\mathcal U}}({\mathcal F}_{d})}
\left\{ \| f + M_{S} g
\|^{2}_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})} -
\| g \|^{2}_{H^{2}_{{\mathcal U}}({\mathcal F}_{d})}\right\} -
\|f_{\emptyset}\|^{2}_{{\mathcal Y}}
= \| f \|^{2}_{{\mathcal H}(K_{S})} - \|
f_{\emptyset}\|^{2}_{{\mathcal Y}}
\end{align*}
and part (2) of Proposition \ref{P:H(KS)} follows.
To verify part (3), we again use the third characterization
of ${\mathcal H}(K_{S})$ in Proposition \ref{P:HKS-char}. Pick
$g_1,\ldots,g_d\in H^{2}_{{\mathcal U}}({\mathcal F}_{d})$ and let
$$
\widetilde{g} = \sum_{j=1}^{d} g_{j} z_{j}=\sum_{j=1}^{d}S_j g_{j}.
$$
Since $S_j^*S_i=\delta_{ij}I$ for $i,j=1,\ldots,d$, where $\delta_{ij}$
is the Kronecker symbol, we have
\begin{equation}
\|\widetilde{g}\|^2_{H^{2}_{{\mathcal U}}({\mathcal F}_{d})}=\sum_{j=1}^d
\|S_jg_j\|^2_{H^{2}_{{\mathcal U}}({\mathcal F}_{d})}=\sum_{j=1}^d
\|g_j\|^2_{H^{2}_{{\mathcal U}}({\mathcal F}_{d})}
\label{july1}
\end{equation}
and, since the multiplication operator $M_S$ commutes with $S_j$ for
$j=1,\ldots,d$, we have also
\begin{equation}
\|M_S\widetilde{g}\|^2_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})}=
\sum_{j=1}^d\|M_Sg_j\|^2_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})}.
\label{july2}
\end{equation}
Next we note that
\begin{align*}
& \| S_{j}^{*}(M_{S}u) + M_{S} g_{j}\|^{2}_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})} =
\|S_j^*(M_Su)\|^2+2\Re \langle S_j^*(M_Su), \, M_Sg_j\rangle+
\|M_Sg_j\|^2 \\
& \qquad \qquad =\langle S_jS_j^*(M_{S}u), \, M_{S}u\rangle+2\Re
\langle M_Su, \,
M_Sg_jz_j\rangle+\|M_Sg_j\|^2.
\end{align*}
Summing up the latter equalities for $j=1,\ldots,d$, making use of
\eqref{july2} and applying the identity
$$
f- f_{\emptyset}=\sum_{j=1}^d S_jS_j^*f \qquad (f\in
H^{2}_{{\mathcal Y}}({\mathcal F}_{d}))
$$
to $f=M_Su$, we get
\begin{align}
\sum_{j=1}^{d}\| S_{j}^{*}(M_{S}u) + M_{S}
g_{j}\|^{2}_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})}&=
\langle M_Su-s_{\emptyset}u, \, M_Su\rangle+2\Re \langle M_Su, \,
M_S\widetilde{g}\rangle+\|M_S\widetilde{g}\|^2\notag \\
&=\| M_Su\|^2-\|s_{\emptyset}u\|^2+2\Re \langle M_Su, \,
M_S\widetilde{g}\rangle+\|M_S\widetilde{g}\|^2\notag \\
&=\|M_Su+M_S\widetilde{g}\|^2_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})}-
\|s_{\emptyset}u\|^2_{{\mathcal Y}}.\label{july3}
\end{align}
Since $\| M_{S}\|_{op} \le 1$ and since $\widetilde{g}_{\emptyset}=0$,
we have
\begin{align}
\|M_Su+M_S\widetilde{g}\|^2_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})}&=
\|M_S(u+\widetilde{g})\|^2_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})}\notag\\
&\le
\|u+\widetilde{g}\|^2_{H^{2}_{{\mathcal U}}({\mathcal F}_{d})}=
\|u\|^2_{{\mathcal U}}+\|\widetilde{g}\|^2_{H^{2}_{{\mathcal U}}({\mathcal F}_{d})}.\label{july4}
\end {align}
Combining \eqref{july1}, \eqref{july3} and \eqref{july4} gives
$$
\sum_{j=1}^{d}\left[ \| S_{j}^{*}(M_{S}u) + M_{S}
g_{j}\|^{2}_{H^{2}_{{\mathcal Y}}({\mathcal F}_{d})} - \|
g_{j}\|^{2}_{H^{2}_{{\mathcal U}}({\mathcal F}_{d})} \right]
\le \|u\|^2_{{\mathcal U}}-\|s_{\emptyset}u\|^2_{{\mathcal Y}}.
$$
The latter estimate is uniform with respect to $g_j$'s and then taking
suprema we conclude (by the third characterization of ${\mathcal H}(K_{S})$ in
Proposition \ref{P:HKS-char}) that $S_{j}^{*}(M_{S}u) \in
{\mathcal H}(K_{S})$ for
each $j = 1, \dots, d$ with the estimate
$$
\sum_{j=1}^{d} \| S_{j}^{*} (M_{S}u) \|^{2}_{{\mathcal H}(K_{S})}
\le \| u \|^{2}_{{\mathcal U}}-\|s_{\emptyset}u\|^2_{{\mathcal Y}}.
$$
This concludes the proof of Proposition \ref{P:H(KS)}.
\end{proof}
Let us define an operator $E \colon H^{2}_{{\mathcal Y}}({\mathcal F}_{d}) \to {\mathcal Y}$ by
\begin{equation} \label{defE}
E \colon \sum_{v \in {\mathcal F}_{d}} f_{v} z^{v} \mapsto f_{\emptyset}.
\end{equation}
As is observed in \cite[Proposition 2.9]{BBF1} and can be
verified directly,
\begin{equation} \label{model-obs}
E {\mathbf S}^{*v} f = E \left( \sum_{\alpha \in {\mathcal F}_{d}} f_{\alpha
v^{\top}} z^{\alpha} \right) = f_{v^{\top}} \text{ for all }
f(z) = \sum_{\alpha \in {\mathcal F}_{d}} f_{\alpha} z^{\alpha} \in
H^{2}_{{\mathcal Y}}({\mathcal F}_{d}) \text{ and } v \in {\mathcal F}_{d}.
\end{equation}
Hence the observability operator ${\mathcal O}_{E, {\mathbf S}^{*}} \colon
H^{2}_{{\mathcal Y}}({\mathcal F}_{d}) \to H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$ defined as in
\eqref{ob-op} works out to be
$$ {\mathcal O}_{E, {\mathbf S}^{*}} = \tau
$$
where $\tau$ is the involution on $H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$ given by
\begin{equation} \label{tau}
\tau \colon \sum_{v \in {\mathcal F}_{d}} f_{v} z^{v} \mapsto
\sum_{v \in {\mathcal F}_{d}} f_{v^{\top}} z^{v}.
\end{equation}
For this reason we use the ``reflected'' de Branges-Rovnyak space
\begin{equation}
{\mathcal H}^\tau(K_S) = \tau \circ {\mathcal H}(K_{S}): = \{ \tau(f) \colon f \in {\mathcal H}(K_{S})\}
\label{july5}
\end{equation}
as the state space for our
de Branges-Rovnyak-model realization of $S$ rather than simply
${\mathcal H}(K_{S})$ as in the classical case. We define
$$
\| \tau(f) \|_{{\mathcal H}^\tau(K_S)} = \| f \|_{{\mathcal H}(K_{S})}.
$$
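For instance (for $d \ge 2$), if $f(z) = y\, z^{12}$ for some $y \in
{\mathcal Y}$, then $\tau(f)(z) = y\, z^{21}$; more generally $\tau$ permutes
the monomials $z^{v}$ and hence is a unitary involution on
$H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$, so that ${\mathcal H}^\tau(K_S)$ is again a Hilbert
space of formal power series contractively contained in
$H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$.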
Recall that the operator of multiplication on the right by
the variable
$z_{j}$ on $H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$ was denoted in
\eqref{shift} by $S_{j}$
rather than by $S^{R}_{j}$ for simplicity. We shall now need
its left counterpart, denoted by $S^{L}_{j}$ and given by
\begin{equation} \label{SL}
S^{L}_{j} \colon f(z) = \sum_{v \in {\mathcal F}_{d}} f_{v} z^{v} \mapsto
z_{j} \cdot f(z) = \sum_{v \in {\mathcal F}_{d}} f_{v} z^{j \cdot v}
\end{equation}
with adjoint (as an operator on $H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$) given by
\begin{equation} \label{SL-adj}
(S^{L}_{j})^{*} \colon \sum_{v \in {\mathcal F}_{d}} f_{v} z^{v} \mapsto
\sum_{v \in {\mathcal F}_{d}} f_{j \cdot v} z^{v}.
\end{equation}
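As a quick illustration of the two conventions: for the monomial
$f(z) = y\, z^{12}$ one has, by \eqref{SL-adj} and \eqref{bs},
$$
(S^{L}_{1})^{*} f = y\, z^{2}, \qquad (S^{L}_{2})^{*} f = 0, \qquad
(S^{R}_{2})^{*} f = y\, z^{1}, \qquad (S^{R}_{1})^{*} f = 0.
$$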
For emphasis we now write $S_{j}^{R}$ rather than simply $S_{j}$.
We then have the following result.
\begin{theorem} \label{T:NC3}
Let $S(z) \in {\mathcal S}_{nc,d}({\mathcal U}, {\mathcal Y})$ and let ${\mathcal H}^\tau(K_S)$
be the associated de Bran\-ges-Rovnyak space given by
\eqref{july5}. Define operators
\begin{align*}
& A_{\text{dBR},j} \colon \; {\mathcal H}^\tau(K_S) \to {\mathcal H}^\tau(K_S),\quad
& B_{\text{dBR},j}& \colon \; {\mathcal U} \to {\mathcal H}^\tau(K_S) \quad (j =
1, \dots, d), \\
& C_{\text{dBR}} \colon \; {\mathcal H}^\tau(K_S) \to {\mathcal Y}, \qquad
& D_{\text{dBR}}& \colon \; {\mathcal U} \to {\mathcal Y}&
\end{align*}
by
\begin{align}
& A_{\text{dBR},j} = (S^{L}_{j})^{*}|_{{\mathcal H}^\tau(K_S)} ,
\quad
& B_{\text{dBR},j} &= \tau (S^{R}_{j})^{*} M_{S}|_{{\mathcal U}}
= (S^{L}_{j})^{*} \tau M_{S}|_{{\mathcal U}},
\notag \\
& C_{\text{dBR}} = E|_{{\mathcal H}^\tau(K_S)}, \quad
& D_{\text{dBR}} &= s_{\emptyset}
\label{dBRops}
\end{align}
where $E$ is given by \eqref{defE},
and set
$$ A_{\text{dBR}} = \begin{bmatrix} A_{\text{dBR},1} \\ \vdots
\\ A_{\text{dBR},d} \end{bmatrix} \colon {\mathcal H}^\tau(K_S) \to
{\mathcal H}^\tau(K_S)^{d}, \quad
B_{\text{dBR}} = \begin{bmatrix} B_{\text{dBR},1} \\ \vdots \\
B_{\text{dBR},d} \end{bmatrix} \colon {\mathcal U} \to {\mathcal H}^\tau(K_S)^{d}.
$$
Then
$$ {\mathbf U}_{\text{dBR}} = \begin{bmatrix} A_{\text{dBR}} &
B_{\text{dBR}} \\
C_{\text{dBR}} & D_{\text{dBR}} \end{bmatrix} \colon
\begin{bmatrix} {\mathcal H}^\tau(K_S) \\ {\mathcal U} \end{bmatrix} \to
\begin{bmatrix}{\mathcal H}^\tau(K_S)^{d} \\ {\mathcal Y} \end{bmatrix}
$$
is an observable coisometric colligation with transfer function
equal to $S(z)$:
\begin{equation} \label{dBRreal}
S(z) = D_{\text{dBR}} + C_{\text{dBR}}(I_{{\mathcal H}^\tau(K_S)} - Z(z)
A_{\text{dBR}})^{-1} Z(z) B_{\text{dBR}}.
\end{equation}
Any other observable, coisometric realization of
$S$ is unitarily equivalent to this functional-model realization
of $S$.
\end{theorem}
\begin{proof}
As observed in Proposition \ref{P:H(KS)}, ${\mathcal H}(K_{S})$ is invariant under
$S_{j}^{*}$ for each $j = 1, \dots, d$.
From the easily checked intertwining relations
\begin{equation} \label{tau-intertwine}
(S^{L}_{j})^{*} \tau = \tau (S^{R}_{j})^{*} \text{ for } j = 1, \dots,
d,
\end{equation}
the fact that ${\mathcal H}(K_{S})$ is invariant under each $(S^{R}_{j})^{*}$
implies that $ {\mathcal H}^\tau(K_S)$ is invariant under each
$(S^{L}_{j})^{*}$ for $j = 1, \dots, d$. Hence the formula for
$A_{\text{dBR},j}$ in \eqref{dBRops} defines an operator on
$ {\mathcal H}^\tau(K_S)$.
The first formula for $B_{\text{dBR},j}$ in \eqref{dBRops}
defines an operator from ${\mathcal U}$
into ${\mathcal H}^\tau(K_S)$ by part (3) of Proposition \ref{P:H(KS)}; this
is consistent with the second formula as a consequence of
\eqref{tau-intertwine}.
From \eqref{model-obs} and the intertwining relations
\eqref{tau-intertwine} it follows that the pair $(E,
({\mathbf S}^{L})^{*})$ is observable and therefore, since
$C_{\text{dBR}}$ and ${\mathbf A}_{\text{dBR}}$ are restrictions of $E$ and
$({\mathbf S}^{L})^{*}$ respectively, the pair
$(C_{\text{dBR}}, {\mathbf A}_{\text{dBR}})$ is also observable.
Hence, for $u \in {\mathcal U}$, making use of \eqref{model-obs}
gives
$$ C_{\text{dBR}} {\mathbf A}_{\text{dBR}}^{v} B_{\text{dBR},j} u
= E ({\mathbf S}^{L})^{*v} \tau S_{j}^{*} (M_{S} \cdot u)
= s_{v \cdot j} u
$$
and it follows that
\begin{align*}
D_{\text{dBR}} + C_{\text{dBR}} (I - Z(z) {\mathbf A}_{\text{dBR}})^{-1} Z(z)
B_{\text{dBR}} & = s_{\emptyset} +
\sum_{j=1}^{d} \sum_{v \in {\mathcal F}_{d}}
C_{\text{dBR}} {\mathbf A}_{\text{dBR}}^{v} B_{\text{dBR},j} z^{v} z_{j}
\\ &
= s_{\emptyset} + \sum_{j=1}^{d} \sum_{v \in {\mathcal F}_{d}} s_{v \cdot
j} z^{v} z_{j} = S(z)
\end{align*}
and \eqref{dBRreal} follows.
By Proposition \ref{P:H(KS)} we know that
${\mathcal H}(K_{S})$ is contractively included in $H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$,
is invariant under the backward-shift operators $(S^{R}_{j})^{*}$ given by
\eqref{bs} for $j = 1, \dots, d$ with the difference-quotient
inequality \eqref{DQineq-HKS} satisfied.
Hence, by part (4) of Theorem 2.8 in \cite{BBF1}, it follows that
the kernels $K_{S}$ and $K_{C_{\text{dBR}}, {\mathbf A}_{\text{dBR}}}$ match:
\begin{equation} \label{=kernel-dBR}
K_{S}(z,w) = K_{C_{\text{dBR}}, {\mathbf A}_{\text{dBR}}}(z,w).
\end{equation}
The fact that ${\mathbf U}_{\text{dBR}}$ is coisometric now follows
from Corollary \ref{C:NC-ADR}.
Finally, the uniqueness statement in Theorem \ref{T:NC3} follows from
Corollary \ref{C:unique-coisometric}.
\end{proof}
\begin{remark} \label{R:dBRreal} {\em The proof of Theorem
\ref{T:NC3} assumed knowledge of the candidate operators
\eqref{dBRops} for a realization of $S$ and then amounted to a
check that these operators work. We remark here that, once
$C_{\text{dBR}}$ and $A_{\text{dBR}}$ are chosen so that
\eqref{=kernel-dBR} holds, one can then solve for
$B_{\text{dBR},1}, \dots, B_{\text{dBR},d}$ according to the
prescription \eqref{want1} in the proof of Theorem
\ref{T:CAStoB}:
$$ B_{\text{dBR}}^{*} Z(w)^{*}(I - A_{\text{dBR}}^{*}
Z(w)^{*})^{-1}C^{*} = S(w)^{*} - s_{\emptyset}^{*}
$$
to arrive at the formula for $B_{\text{dBR},j}$ ($j = 1,
\dots, d$) in formula \eqref{dBRops}.}
\end{remark}
\begin{remark} {\em It is possible to make all the ideas and results
of this paper symmetric with respect to ``left versus right''.
Then the multiplication operator
$M_{S}$ given by \eqref{multiplication} is really the {\em left}
multiplication operator
$$
M^{L}_{S} = \sum_{v \in {\mathcal F}_{d}} s_{v} ({\mathbf S}^{L})^{v}
\colon f(z) \mapsto S(z) \cdot f(z).
$$
It is natural to define the corresponding {\em right} multiplication
operator $M^{R}_{S}$ by
$$
M^{R}_{S} = \sum_{v \in {\mathcal F}_{d}} s_{v} ({\mathbf S}^{R})^{v}.
$$
In the scalar case ${\mathcal U} = {\mathcal Y} = {\mathbb C}$ where $f(z) \cdot
S(z)$ makes sense, we have
$$
M^{R}_{S} \colon f(z) \mapsto f(z) \cdot (\tau\circ S)(z)
$$
while in general we have
$$
M^{R}_{S} \colon \sum_{v \in {\mathcal F}_{d}} f_{v} z^{v}
\mapsto \sum_{v \in {\mathcal F}_{d}} \left[ \sum_{\alpha, \beta \in
{\mathcal F}_{d} \colon \alpha \beta = v} s_{\beta^{\top}} f_{\alpha}
\right] z^{v}.
$$
The Schur-class ${\mathcal S}_{nc, d}({\mathcal U}, {\mathcal Y})$ is really the
{\em left} Schur class ${\mathcal S}^{L}_{nc, d}({\mathcal U}, {\mathcal Y})$. The
{\em right} Schur class ${\mathcal S}^{R}_{nc,d}({\mathcal U}, {\mathcal Y})$
consists of all formal power series $S(z) = \sum_{v \in {\mathcal F}_{d}}
s_{v} z^{v}$ for which the associated {\em right} multiplication
operator $M^{R}_{S} = \sum_{v \in {\mathcal F}_{d}} s_{v} ({\mathbf
S}^{R})^{v}$ has operator norm at most 1.
The kernel $K_{S}(z,w)$ given by \eqref{KS} is really the {\em
left} kernel $K^{L}_{S}(z,w)$ given by
$$
K_{S}(z,w) = K^{L}_{S}(z,w) = \{[I_{{\mathcal Y}} - M^{L}_{S}
(M^{L}_{S})^{*}](k_{\text{Sz}}( \cdot, w) )\}(z).
$$
It is then natural to define the corresponding {\em right} kernel
$$
K^{R}_{S}(z,w) = \{[I_{{\mathcal Y}} - M^{R}_{S}
(M^{R}_{S})^{*}](k_{\text{Sz}}( \cdot, w) )\}(z).
$$
Given an output pair $(C, {\mathbf A})$, the observability operator
${\mathcal O}_{C, {\mathbf A}}$ given by \eqref{ob-op} is really the {\em left}
observability operator ${\mathcal O}^{L}_{C, {\mathbf A}}$ with range space
invariant under the {\em right} backward-shift operators
$(S^{R}_{j})^{*}$; the corresponding {\em right}
observability operator ${\mathcal O}^{R}_{C, {\mathbf A}}$ is given by
$$
{\mathcal O}^{R}_{C, {\mathbf A}} \colon x \mapsto \sum_{\alpha \in {\mathcal F}_{d}} (C
{\mathbf A}^{\alpha^{\top}}x)z^{\alpha} = C (I - Z({\mathbf S}^{R})A)^{-1} x
$$
and has range space invariant under the {\em left} backward shifts
$(S^{L}_{j})^{*}$. The system \eqref{sys} is really a {\em left}
noncommutative multidimensional linear system with
{\em left} transfer function \eqref{transfunc}
$$
T_{\Sigma^{L}}(z) = D + C (I - Z({\mathbf S}^{L}) A)^{-1}
Z({\mathbf S}^{L}) B.
$$
For a given colligation ${\mathbf U} = \sbm{A & B \\ C & D}$, there is an
associated {\rm right} transfer function
$$ T_{\Sigma^{R}}(z) = D + C (I - Z({\mathbf S}^{R}) A)^{-1}
Z({\mathbf S}^{R}) B
$$
associated with the {\em right} noncommutative multidimensional
linear system
\begin{equation} \label{sysR}
\Sigma^{R} \colon \left\{ \begin{array}{ccc}
x( \alpha \cdot 1) & = & A_{1} x(\alpha) + B_{1} u(\alpha) \\
\vdots & & \vdots \\
x( \alpha \cdot d) & = & A_{d} x(\alpha) + B_{d} u(\alpha) \\
y(\alpha) & = & C x(\alpha) + D u(\alpha)
\end{array} \right.
\end{equation}
initialized with $x(\emptyset) = 0$.
With these definitions in place, it is straightforward to formulate
and prove mirror-reflected versions of Theorem \ref{T:NC1},
Proposition \ref{P:BBF1}, Theorem \ref{T:CAtoS}, Theorem
\ref{T:CAStoB} (as well as Theorems \ref{T:shift=int} and
\ref{T:BLhomint} to come below); we leave the details to the reader.
With all this in hand, it is then possible to identify the state-space
${\mathcal H}^\tau(K_S) = \tau \circ {\mathcal H}(K^{L}_{S})$ appearing in Theorem
\ref{T:NC3} as nothing other than
${\mathcal H}(K^{R}_{S})$. Thus, the functional-model realization for a given
$S$ as an element of the {\em left} Schur class ${\mathcal
S}^{L}_{nc, d}({\mathcal U}, {\mathcal Y})$
uses as state space the functional-model space ${\mathcal H}(K^{R}_{S})$
based on the {\em right} kernel $K^{R}_{S}$ while the realization of
$S$ as a member of the {\em right} Schur-class ${\mathcal S}^{R}_{nc,
d}({\mathcal U}, {\mathcal Y})$ uses
as the state space the functional-model ${\mathcal H}(K^{L}_{S})$ based on
the {\em left} kernel $K^{L}_{S}$. Presumably it is possible to have
an $S$ in the left Schur-class ${\mathcal S}^{L}_{nc, d}({\mathcal U}, {\mathcal Y})$
but not in the right Schur-class ${\mathcal S}^{R}_{nc, d}({\mathcal U}, {\mathcal Y})$
and vice-versa, although we have not worked out an example.
With this interpretation, the functional-model realization in Theorem
\ref{T:NC3} becomes a more canonical extension of the classical
univariate case.}
\end{remark}
Let us say that $S \in {\mathcal S}_{nc, d}({\mathcal U}, {\mathcal Y})$ is {\em inner} if the
multiplication operator
$$
M_{S} \colon H^{2}_{{\mathcal U}}({\mathcal F}_{d}) \to
H^{2}_{{\mathcal Y}}({\mathcal F}_{d})
$$
is isometric; such multipliers are the representers for
shift-invariant subspaces in Popes\-cu's Fock-space analogue of the
Beurling-Lax theorem \cite{PopescuNF1} (see also \cite{BBF1}).
It is now an
easy matter to characterize which functional-model realizations as
in Theorem
\ref{T:NC3} go with inner multipliers.
\begin{theorem} \label{T:NC4}
The Schur-class multiplier $S \in {\mathcal S}_{nc,d}({\mathcal U}, {\mathcal Y})$
is inner if and only if $S$ has an observable, coisometric
realization \eqref{NCrealization} such that ${\mathbf A} = (A_{1}, \dots,
A_{d})$ is strongly stable (see
\eqref{stronglystable}).
\end{theorem}
\begin{proof} By Corollary \ref{C:unique-coisometric}, any
observable, coisometric realization is unitarily equivalent to
the functional-model realization given in Proposition
\ref{P:H(KS)}. Note that $S$ is inner if and only if $I -
M_{S}M_{S}^{*}$ is an orthogonal projection. From the
characterization of ${\mathcal H}(K_{S})$ in part (2) of Proposition
\ref{P:HKS-char}, we see that
this last condition occurs if and only if ${\mathcal H}(K_{S})$ is
contained isometrically in $H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$. By part
(3) of Proposition \ref{P:BBF1}, this in turn is equivalent to
strong stability of ${\mathbf A}$, and Theorem \ref{T:NC4} follows.
\end{proof}
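As a simple illustration in the single-variable case $d = 1$ with
${\mathcal U} = {\mathcal Y} = {\mathbb C}$: for $S(z) = z$ the colligation
${\mathbf U} = \sbm{0 & 1 \\ 1 & 0}$ on ${\mathbb C} \oplus {\mathbb C}$
is unitary (in particular coisometric), the output pair $(C, A) =
(1, 0)$ is observable with $A$ trivially strongly stable, and indeed
$M_{S}$ is the isometric shift on $H^{2}_{{\mathbb C}}({\mathcal F}_{1})$, so
$S$ is inner.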
\section{Shift-invariant subspaces and Beurling-Lax
representation theorems} \label{S:BL}
Suppose that $({\mathbf Z}, X)$ is an input pair, i.e., ${\mathbf Z} =
(Z_{1}, \dots, Z_{d})$ where each $Z_{j}
\colon {\mathcal X} \to {\mathcal X}$ and $X \colon {\mathcal Y} \to {\mathcal X}$. We say that the
input pair $({\mathbf Z},X)$ is {\em input-stable} if the associated
controllability operator
$$ {\mathcal C}_{{\mathbf Z},X} \colon \sum_{v \in {\mathcal F}_{d}} f_{v} z^{v}
\mapsto \sum_{v \in {\mathcal F}_{d}} {\mathbf Z}^{v^{\top}} X f_{v}
$$
maps $H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$ into ${\mathcal X}$. We say that the pair
$({\mathbf Z}, X)$ is {\em exactly controllable} if in addition
${\mathcal C}_{{\mathbf Z},X}$ maps $H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$ onto ${\mathcal X}$. In this
case the associated controllability gramian
$$
{\mathcal G}_{{\mathbf Z},X}: = {\mathcal C}_{{\mathbf Z},X}({\mathcal C}_{{\mathbf Z},X})^{*}
$$
is strictly positive-definite on ${\mathcal X}$ and provides a solution
$H = {\mathcal G}_{{\mathbf Z},X}$
of the Stein equation
\begin{equation} \label{control-Stein}
H - Z_{1} H Z_{1}^{*} - \cdots - Z_{d} H Z_{d}^{*} = X X^{*}.
\end{equation}
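For instance, in the single-variable case $d = 1$ an input-stable pair
$(Z, X)$ has controllability gramian
$$
{\mathcal G}_{Z,X} = \sum_{n \ge 0} Z^{n} X X^{*} Z^{* n},
$$
and the telescoping of this series exhibits it directly as a solution
$H$ of \eqref{control-Stein}: $H - ZHZ^{*} = XX^{*}$.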
By considering the similar pair
$$ ({\mathbf Z}', X') \text{ with } {\mathbf Z}' = (Z'_{1}, \dots, Z'_{d}) \text{
where } Z_{j}' = H^{-1/2} Z_{j} H^{1/2} \text{ and } X' =
H^{-1/2}X,
$$
without loss of generality we may assume that the input pair
$({\mathbf Z}, X)$ is {\em isometric}, i.e., \eqref{control-Stein} is
satisfied with $H = I_{{\mathcal X}}$. We are interested in the case when
in addition ${\mathbf Z}^{*}$ is {\em strongly stable} in the sense of
\eqref{stronglystable}; in this case ${\mathcal G}_{{\mathbf Z},X}$ is
the unique
solution of the Stein equation \eqref{control-Stein}. We remark
that all these statements are dual to the analogous statements
made for observability operators ${\mathcal O}_{C, {\mathbf A}}$ since the adjoint
$(C, {\mathbf A}): = (X^{*}, {\mathbf Z}^{*})$ of any input pair $({\mathbf Z}, X)$ is an
output pair.
Given any isometric input pair $({\mathbf Z},X)$ with ${\mathbf Z}^{*}$ strongly
stable, we define a {\em left functional calculus with operator
argument} as follows. Given $f \in H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$ of the
form $f(z) = \sum_{v \in {\mathcal F}_{d}} f_{v} z^{v}$, define
$$
(X f)^{\wedge L}({\mathbf Z}) = \sum_{v \in {\mathcal F}_{d}} {\mathbf Z}^{v^{\top}} X
f_{v} =: {\mathcal C}_{{\mathbf Z},X} f.
$$
We define a subspace ${\mathcal M}_{{\mathbf Z},X}$ to be the set of all solutions
of the associated homogeneous interpolation condition:
$$
{\mathcal M}_{{\mathbf Z},X}:= \{ f \in H^{2}_{{\mathcal Y}}({\mathcal F}_{d}) \colon (Xf)^{\wedge
L}({\mathbf Z}) = 0\}.
$$
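For example, in the classical case $d = 1$ with ${\mathcal X} = {\mathcal Y} =
{\mathbb C}$, taking $Z = \lambda$ with $|\lambda| < 1$ and $X = (1 -
|\lambda|^{2})^{1/2}$ gives an isometric input pair with $Z^{*}$
strongly stable, and $(Xf)^{\wedge L}(Z) = X f(\lambda)$, so that
${\mathcal M}_{Z,X} = \{ f \in H^{2}_{{\mathbb C}}({\mathcal F}_{1}) \colon f(\lambda)
= 0 \}$ is the classical shift-invariant subspace of functions
vanishing at $\lambda$.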
That ${\mathcal M}_{{\mathbf Z},X}$ is invariant under the (right) shift operator $S_{j}$
follows from the intertwining property ${\mathcal C}_{{\mathbf Z},X} S_{j} = Z_{j}
{\mathcal C}_{{\mathbf Z},X}$ verified by the following computation:
\begin{align*}
{\mathcal C}_{{\mathbf Z},X} S_{j} f& = (X S_{j}f)^{\wedge L}({\mathbf Z})
= \sum_{v \in {\mathcal F}_{d}} {\mathbf Z}^{(v j)^{\top}} X f_{v}
= Z_{j} \cdot \sum_{v \in {\mathcal F}_{d}}
{\mathbf Z}^{v^{\top}} X f_{v} \\
& = Z_{j} \cdot (X f)^{\wedge L}({\mathbf Z}) = Z_{j} {\mathcal C}_{{\mathbf Z},X} f.
\end{align*}
It is easily checked that ${\mathcal M}_{{\mathbf Z},X}$ is closed in the metric
of $H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$. Hence, by
Popescu's Beurling-Lax theorem for the Fock space (see
\cite{PopescuNF1}) it is guaranteed that
${\mathcal M}_{{\mathbf Z},X}$ has a representation of the form
$$
{\mathcal M}_{{\mathbf Z},X} =
\theta \cdot H^{2}_{{\mathcal U}}({\mathcal F}_{d}) = \operatorname{Ran}\,
M_{\theta}
$$
for an inner multiplier $\theta \in {\mathcal S}_{nc, d}({\mathcal U},
{\mathcal Y})$. Our goal is to understand how to compute a
transfer-function realization for $\theta$ directly from the
homogeneous interpolation data $({\mathbf Z}, X)$. First, however, we
show that shift-invariant subspaces ${\mathcal M} \subset
H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$ of the form ${\mathcal M} = {\mathcal M}_{{\mathbf Z},X}$ for an
admissible input pair $({\mathbf Z}, X)$ as above are not as special as
may at first appear.
\begin{theorem} \label{T:shift=int}
Suppose that ${\mathcal M}$ is a closed, shift-invariant
subspace of $H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$. Then there is an
isometric input-pair $({\mathbf Z}, X)$ with ${\mathbf Z}^{*}$ strongly
stable so that ${\mathcal M} = {\mathcal M}_{{\mathbf Z}, X}$.
\end{theorem}
\begin{proof} If ${\mathcal M}$ is invariant for the operators $S_{j}$,
then ${\mathcal M}^{\perp}$ is invariant for the operators $S_{j}^{*}$
for each $j = 1, \dots, d$. Hence by Theorem 2.8 from
\cite{BBF1} there is an observable, contractive output pair $(C,
{\mathbf A})$ so
that ${\mathcal M}^{\perp} = {\mathcal H}(K_{C,{\mathbf A}}) = \operatorname{Ran}\,
{\mathcal O}_{C, {\mathbf A}}$ isometrically. As ${\mathcal M}^{\perp} \subset
H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$ isometrically, Proposition \ref{P:BBF1}
tells us that we may take $(C, {\mathbf A})$ isometric and that ${\mathbf A}$
is strongly stable. Let $({\mathbf Z}, X)$ be the input pair
$({\mathbf Z},X) = ({\mathbf A}^{*}, C^{*})$. As ${\mathcal M}^{\perp} =
\operatorname{Ran}\, {\mathcal O}_{C, {\mathbf A}}$, we may compute ${\mathcal M}$ as
\begin{align*}
{\mathcal M} = \left( \operatorname{Ran}\, {\mathcal O}_{C, {\mathbf A}}
\right)^{\perp} =
\operatorname{Ker}\, ({\mathcal O}_{C, {\mathbf A}})^{*} =
\operatorname{Ker}\, {\mathcal C}_{{\mathbf A}^{*}, C^{*}} =
\operatorname{Ker}\, {\mathcal C}_{{\mathbf Z}, X}
\end{align*}
and Theorem \ref{T:shift=int} follows.
\end{proof}
We now suppose that a shift-invariant subspace is given in the form
${\mathcal M} = {\mathcal M}_{{\mathbf Z}, X}$ for an admissible homogeneous interpolation data
set (that is, a controllable isometric input pair $({\mathbf Z}, X)$ with
${\mathbf Z}^{*}$ strongly stable), and we construct a realization for the
associated Beurling-Lax representer.
\begin{theorem} \label{T:BLhomint} Suppose that $({\mathbf Z}, X)$ is an
admissible homogeneous interpolation data set and ${\mathcal M}_{{\mathbf Z}, X} =
\operatorname{Ker}\, {\mathcal C}_{{\mathbf Z},X}$ is the associated
shift-invariant subspace. Let $(C, {\mathbf A})$ be the output pair
defined by
$$
(C, {\mathbf A}) = (X^{*}, {\mathbf Z}^{*})
$$
and choose an input space ${\mathcal U}$ with
$\operatorname{dim}\, {\mathcal U} = \operatorname{rank}\, (I_{{\mathcal X}^{d}
\oplus {\mathcal Y}} - \sbm{ A \\ C } \sbm{A^{*} & C^{*}})$ and define an
operator $\sbm{B \\ D } \colon {\mathcal U} \to {\mathcal X}^{d} \oplus {\mathcal Y}$ as a
solution of the Cholesky factorization problem
$$
\begin{bmatrix} B \\ D \end{bmatrix}
\begin{bmatrix} B^{*} & D^{*} \end{bmatrix} =
I_{{\mathcal X}^{d} \oplus {\mathcal Y}} - \begin{bmatrix} A \\ C
\end{bmatrix} \begin{bmatrix} A^{*} & C^{*} \end{bmatrix}.
$$
Set ${\mathbf U} = \sbm{ A & B \\ C & D}$ and let $\theta \in
{\mathcal S}_{nc,d}({\mathcal U}, {\mathcal Y})$ be the transfer function of
${\mathbf U}$:
$$
\theta(z) = D + C (I - Z(z) A)^{-1} Z(z) B.
$$
Then $\theta$ is inner and ${\mathcal M}_{{\mathbf Z},X} = \theta \cdot
H^{2}_{{\mathcal U}}({\mathcal F}_{d})$.
\end{theorem}
\begin{proof} If $({\mathbf Z}, X)$ is an
admissible homogeneous interpolation data set, then $({\mathbf Z}, X)$
is controllable and ${\mathbf Z}^{*}$ is strongly stable. Since $(C,
{\mathbf A}) = (X^{*}, {\mathbf Z}^{*})$,
we have $(C, {\mathbf A})$ is observable and ${\mathbf A}$ is strongly stable.
From the construction
of ${\mathbf U}$, we know ${\mathbf U}$ is coisometric. Then by
Theorem \ref{T:NC4}, $\theta$ is inner and hence $I- M_{\theta}
M_{\theta}^{*}$ is the orthogonal projection of
$H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$ onto $(\operatorname{Ran} \, M_{\theta})^{\perp}$.
From part (2) of Proposition \ref{P:HKS-char} it then follows that
\begin{equation} \label{HKtheta1}
{\mathcal H}(K_{\theta}) = H^{2}_{{\mathcal Y}}({\mathcal F}_{d}) \ominus \theta \cdot
H^{2}_{{\mathcal U}}({\mathcal F}_{d}) \text{ isometrically.}
\end{equation}
On the other hand, again since ${\mathbf U}$ is coisometric, from
Corollary \ref{C:NC-ADR} we see that $K_{\theta} = K_{C, {\mathbf A}}$ and
hence ${\mathcal H}(K_{\theta}) = {\mathcal H}(K_{C, {\mathbf A}})$. Since ${\mathbf A}$ is strongly
stable, Proposition \ref{P:BBF1} tells us that ${\mathcal H}(K_{C, {\mathbf A}})$ is
isometrically included in $H^{2}_{{\mathcal Y}}({\mathcal F}_{d})$ and is characterized
by
\begin{equation} \label{HKtheta2}
{\mathcal H}(K_{\theta}) = {\mathcal H}(K_{C, {\mathbf A}}) = \operatorname{Ran}\, {\mathcal O}_{C,
{\mathbf A}} = \operatorname{Ran}\, ({\mathcal C}_{{\mathbf Z},X})^{*}.
\end{equation}
Comparing \eqref{HKtheta1} with \eqref{HKtheta2} and taking
orthogonal complements finally leaves us with
$$
\theta \cdot H^{2}_{{\mathcal U}}({\mathcal F}_{d}) = (\operatorname{Ran}\,
({\mathcal C}_{{\mathbf Z},X})^{*})^{\perp} = \operatorname{Ker}\, {\mathcal C}_{{\mathbf Z},X} =
{\mathcal M}_{{\mathbf Z},X}
$$
and Theorem \ref{T:BLhomint} follows.
\end{proof}
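As an illustration of the recipe in Theorem \ref{T:BLhomint}, consider
again the single-variable data set $Z = \lambda$ ($|\lambda| < 1$),
$X = s := (1 - |\lambda|^{2})^{1/2}$ on ${\mathcal X} = {\mathcal Y} = {\mathbb C}$.
Then $(C, A) = (s, \bar{\lambda})$,
$$
I_{{\mathbb C}^{2}} - \begin{bmatrix} \bar{\lambda} \\ s \end{bmatrix}
\begin{bmatrix} \lambda & s \end{bmatrix}
= \begin{bmatrix} s^{2} & -\bar{\lambda} s \\ -\lambda s & |\lambda|^{2}
\end{bmatrix}
= \begin{bmatrix} s \\ -\lambda \end{bmatrix}
\begin{bmatrix} s & -\bar{\lambda} \end{bmatrix}
$$
has rank one, so we may take ${\mathcal U} = {\mathbb C}$, $B = s$, $D =
-\lambda$, and then
$$
\theta(z) = -\lambda + s (1 - \bar{\lambda} z)^{-1} z\, s
= \frac{z - \lambda}{1 - \bar{\lambda} z},
$$
the classical Blaschke factor, whose multiples are exactly the
functions vanishing at $\lambda$.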
\section{Introduction} \label{sect:intro}
Graphs with or without special patterns for edge crossings are an
important topic in Topological Graph Theory, Graph Drawing, and
Computational Geometry. Particular patterns
are no crossings, single crossings, fans, independent edges, or no three
pairwise crossing edges.
A \emph{fan} is a set of edges with a single common endpoint.
By contrast, edges are \emph{independent} if they do not share a common endpoint.
Important graph classes have been defined in this way, including
the planar, 1-planar
\cite{klm-bib-17, ringel-65}, fan-planar \cite{bddmpst-fan-15,
bcghk-rfpg-17,ku-dfang-14}, fan-crossing free
\cite{cpkk-fan-15}, and quasi-planar graphs \cite{aapps-qpg-97}.
A first order logic definition of these and other graph classes is given in \cite{b-FOL-17}.
These definitions are motivated by the need for classes of
non-planar graphs from real world applications, and a negative
correlation between edge crossings and the readability of graph
drawings by human users. The aforementioned graph classes aim to
meet both requirements.
We consider undirected graphs $G = (V,E)$ with finite sets of
vertices $V$ and edges $E$ that are \emph{simple} both in a graph
theoretic and in a topological sense. Thus we do not admit multiple
edges and self-loops, and we exclude multiple crossings of two edges
and crossings among adjacent edges.
A \emph{drawing} of a graph $G$ is a mapping of $G$ into the plane
so that the vertices are mapped to distinct points and each edge is
mapped to a Jordan arc between the endpoints. Two edges \emph{cross}
if their Jordan arcs intersect in a point other than an endpoint.
Crossings subdivide an edge into uncrossed pieces, called \emph{edge
segments}, whose endpoints are vertices or crossing points. An edge
is \emph{uncrossed} if and only if it consists of a single edge
segment.
A drawn graph is called a \emph{topological graph}. In other works,
a topological graph is called an \emph{embedding}, which is the
class of topologically equivalent drawings.
An embedding defines a \emph{rotation system} which is the cyclic
sequence of edges incident to each vertex. A drawn graph partitions
the plane into topologically connected regions, called \emph{faces}.
The unbounded region is called the \emph{outer face}. The
\emph{boundary} of each face consists of a cyclic sequence of edge
segments. It is commonly specified by the sequence of vertices and
crossing points of the edge segments.
The subgraph
of a graph $G$ induced by a subset $U$ of vertices is denoted by
$G[U]$. It inherits its embedding from an embedding of $G$, from
which all vertices not in $U$ and all edges with at most one
endpoint in $U$ are removed.
\begin{figure}
\centering
\subfigure[ ]{
\parbox[b]{2.8cm}{%
\centering
\includegraphics[scale=0.3]{simplefanA.pdf}
}
\label{fig:simplefan}
}
\hfil
\subfigure[ ]{
\parbox[b]{2.8cm}{%
\centering
\includegraphics[scale=0.3]{fcf.pdf}
}
\label{fig:fcf}
}
\caption{(a) A fan-crossing and
(b) an independent crossing or fan-crossing free }
\label{fig:typeedgecrossings}
\end{figure}
An edge $e$ has a \emph{fan-crossing} if the crossing edges form a
fan, as in Fig.~\ref{fig:simplefan}, and an \emph{independent
crossing} if the crossing edges are independent, see
Fig.~\ref{fig:fcf}. Fan-crossings are also known as radial $(k,1)$
grid crossings and independent crossings as grid crossings
\cite{afps-grids-14}.
Excluding independent crossings is equivalent to allowing
only \emph{adjacency-crossings}, in which any two edges
crossing the same edge are adjacent \cite{b-FOL-17}.
\emph{Fan-planar} graphs were introduced by Kaufmann and
Ueckerdt \cite{ku-dfang-14}, who imposed a special restriction,
called \emph{configuration II}. It is shown in Fig.~\ref{fig:conf2}.
Let $e,f$ and $g$ be three edges in a drawing so that $e$ is crossed
by $f$ and $g$, and $f$ and $g$ share a common vertex $t$. Then they
form configuration II if one endpoint of $e$ is inside a cycle
through $t$ with segments of $e, f$ and $g$, and the other endpoint
of $e$ is outside this cycle. If $e = \{u,v\}$ is oriented from $u$
(left) to $v$ (right) and $f$ and $g$ are oriented away from $t$,
then $f$ and $g$ cross $e$ from different directions. Configuration
II admits \emph{triangle-crossings} in which an edge crosses the
edges of a triangle, see Fig.~\ref{fig:conf2triangle}. Observe that
a triangle-crossing is the only configuration in which an edge is
crossed by edges that do not form a fan and that are not
independent.
\begin{figure}
\centering
\subfigure[ ]{
\parbox[b]{4.5cm}{%
\centering
\includegraphics[scale=0.6]{conf2B.pdf}
}
\label{fig:conf2}
}
\hfil
\subfigure[ ]{
\parbox[b]{4.5cm}{%
\centering
\includegraphics[scale=0.6]{conf2triangleB.pdf}
}
\label{fig:conf2triangle}
}
\caption{(a) Configuration II in which edge $e=\{u,v\}$ is crossed by edges $\{t,x\}$
and $\{t,y\}$ and $x$ and $y$ are on opposite sides of $e$ and (b) edge $e= \{u,v\}$
crosses
a triangle. The shaded regions represent subgraphs which shall prohibit another
routing of $e$. Similar regions could be added to (a), as in Fig.~\ref{fig:graphM}.}
\label{fig:fanconfiguration}
\end{figure}
A graph is \emph{fan-crossing free} if it admits a drawing without
fan-crossings \cite{cpkk-fan-15}. Then there are only independent
crossings. A graph is \emph{fan-crossing} if it admits a drawing in
which each crossing is a
fan-crossing, and \emph{adjacency-crossing} if it can be
drawn so that each edge is crossed by edges that are adjacent. Then
independent crossings are excluded. As stated in \cite{b-FOL-17}, adjacency
crossing is complementary to independent crossing, but the graph
classes are not complementary and both properly include the 1-planar
graphs. A graph is \emph{fan-planar} if it avoids independent
crossings and configuration II \cite{ku-dfang-14}.
Observe the subtle differences between adjacency-crossing,
fan-crossing, and fan-planar graphs, which each exclude independent
crossings, and in addition exclude triangle-crossings and
configuration II, respectively.
Kaufmann and Ueckerdt \cite{ku-dfang-14} observed that configuration II
cannot occur in straight-line drawings, so that every straight-line
adjacency-crossing drawing is fan-planar.
They proved that fan-planar graphs of size $n$ have at most $5n-10$
edges and posed the density of adjacency-crossing graphs as an open
problem. The \emph{density} is an upper bound on the number
of edges in graphs of size $n$.
We show that triangle-crossings can be avoided by an edge
rerouting, and that configuration II can be restricted to a special
case.
Moreover, the allowance
or exclusion of configuration II has no impact on the density,
which answers the above question. In particular, we prove the
following:
\begin{enumerate}
\item Every adjacency-crossing graph is fan-crossing. Thus
triangle-crossings can be avoided.
\item There are fan-crossing graphs that are not fan-planar.
Thus configuration II is essential.
\item For every fan-crossing graph $G$
there is a fan-planar graph $G'$ on the same set of vertices and
with (at least) the same number of edges. Thus fan-crossing graphs
of size $n$ have at most $5n-10$ edges.
\end{enumerate}
We prove that triangle-crossings can be avoided by an edge
rerouting in Section \ref{sect:trianglecrossings} and study
configuration II in Section \ref{sect:fanplanar}. We conclude in
Section \ref{sect:conclusion} with some open problems on
fan-crossing graphs.
\section{Triangle-Crossings} \label{sect:trianglecrossings}
In this section, all embeddings $\mathcal{E}(G)$ are
adjacency-crossing or equivalently they exclude independent
crossings. We consider triangle-crossings and show that they can be
avoided by an edge rerouting. A \emph{rerouted edge} is denoted by
$\tilde{e}$ if $e$ is the original one. More formally, we transform
an adjacency-crossing embedding $\mathcal{E}(G)$ into an
adjacency-crossing embedding $\tilde{\mathcal{E}}(G)$ which differs
from $\mathcal{E}(G)$ in the embedding of the rerouted edges such
that $\tilde{e}$ does not cross a particular triangle if $e$
crosses that triangle.
For convenience, we assume that triangle-crossings are in a
\emph{standard configuration},
in which a triangle $\Delta = (a,b,c)$ is crossed by
edges $e_1,\ldots, e_k$ for some $k \geq 1$ that cross each edge of
$\Delta$. We call each $e_i$ a \emph{triangle-crossing edge} of
$\Delta$. These edges are incident to a common vertex $u$ if $k \geq
2$. We assume that a triangle-crossing edge $e=\{u,v\}$ crosses
$\{a,c\}, \{b,c\}$ and $ \{a,b\}$ in this order and that $u$ is
outside $\Delta$. Then $v$ must be inside $\Delta$. All other cases
are similar exchanging inside and outside and the order in which the
edges of $\Delta$ are crossed.
We need some further notation. Let $fan(v)$ denote a subset of edges
incident to vertex $v$ that cross a particular edge. This is a
generic definition. If the crossed edge is given, then $fan(v)$ can
be retrieved from the embedding $\mathcal{E}(G)$. In general,
$fan(v)$ does not contain all edges incident to $v$. A \emph{sector}
is a subsequence of edges of $fan(v)$ properly between two edges
$\{v, s\}$ and $\{v,t\}$ in clockwise order. An edge $e$ is
\emph{covered} by a vertex $v$ if $e$ is crossed by at least two
edges incident to $v$ so that $fan(v)$ has at least two elements.
Let $cover(v)$ denote the set of edges covered by $v$. Note that
uncrossed edges and edges that are crossed only once are not
covered. If an edge $e$ is crossed by an edge $g= \{u,v\}$, then $e$
is a candidate for $cover(u)$ or $cover(v)$ and $e \not\in cover(w)$
for any other vertex $w \neq u,v$ except if $e$ crosses a triangle.
In fact, an edge $e=\{u,v\}$ is triangle-crossing if and only if
$\{e\} = cover(x) \cap cover(y)$ for vertices $x \neq y$. To see
this, observe that $e \in cover(x)$ for $x = a,b,c$ if $e$ crosses a
triangle $\Delta = (a,b,c)$. Conversely, if $e$ is crossed by edges
$\{a, w_1\}, \{a, w_2\}$ and $\{b, w_3\}$ with $a \neq b$ and $w_1
\neq w_2$, then $w_1 = w_3$ and $w_2=b$ (up to renaming) if there
are no independent crossings.
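For small instances, the sets $cover(v)$ can be computed directly from the
list of crossing pairs. The following sketch is a hypothetical helper, not
part of the paper: edges are given as pairs of vertex names, and for each
crossed edge we count the crossing edges per incident vertex, keeping the
vertices that contribute at least two of them.
\begin{verbatim}
from collections import defaultdict

def cover_sets(crossings):
    """crossings: list of pairs (e, f) of crossing edges,
    each edge a tuple of two vertex names."""
    hits = defaultdict(lambda: defaultdict(int))
    for e, f in crossings:
        for crossed, crossing in ((e, f), (f, e)):
            for v in crossing:          # endpoints of the crossing edge
                hits[crossed][v] += 1
    cover = defaultdict(set)
    for e, per_vertex in hits.items():
        for v, k in per_vertex.items():
            if k >= 2:                  # e is crossed >= twice from v
                cover[v].add(e)
    return cover

# Example: {u,v} crossed by {t,x} and {t,y}  =>  cover(t) = {{u,v}}
print(cover_sets([(("u", "v"), ("t", "x")),
                  (("u", "v"), ("t", "y"))]))
\end{verbatim}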
Triangle crossings are special. If an edge $e$ crosses a triangle
$\Delta$, then $e$ cannot be crossed by any edge other than the
edges of $\Delta$. In particular, $e$ cannot cross another triangle
or another triangle-crossing edge. But an edge may be part of two
triangle-crossings, as a common edge of two crossed triangles, as
shown in Fig.~\ref{fig:twotriangles}, or as a triangle-crossing
edge of one triangle and an edge of another triangle, as shown in
Fig.~\ref{fig:tri2}, and both configurations can be combined.
\begin{figure}[t]
\centering
\subfigure[ ]{
\parbox[b]{4cm}{%
\centering
\includegraphics[scale=0.6]{twotriangles.pdf}
}
\label{fig:twotriangles}
}
\hfil
\subfigure[ ]{
\parbox[b]{4cm}{%
\centering
\includegraphics[scale=0.6]{tri2.pdf}
}
\label{fig:tri2}
}
\caption{Two crossed triangles sharing (a) an edge
or (b) an edge and a triangle-crossing edge.}
\label{fig:triangleconfigurations}
\end{figure}
A particular example is $K_5$, which has five embeddings
\cite{hm-dcgmnc-92}, see Fig.~\ref{fig:allK5}. The one of
Fig.~\ref{fig:allK5}(e) has a triangle-crossing. If it is a part
of an adjacency-crossing embedding, then we show that it can be
transformed into the embedding of Fig.~\ref{fig:allK5}(c) by
rerouting an edge of the crossed triangle.
\begin{figure}[h]
\centering
\subfigure[ ]{
\includegraphics[scale=0.35]{K5E1.pdf}
}
\quad
\subfigure[ ]{
\rotatebox{1}{%
\includegraphics[scale=0.35]{K5E2.pdf}
}
}
\quad
\subfigure[ ]{
\includegraphics[scale=0.35]{K5E3.pdf}
}
\quad
\subfigure[ ]{
\includegraphics[scale=0.35]{K5E4.pdf}
}
\quad
\subfigure[ ]{
\includegraphics[scale=0.35]{K5E5.pdf}
}
\caption{All non-isomorphic embeddings of $K_5$ \cite{hm-dcgmnc-92} with two drawings. Only (a) is
1-planar and fan-crossing free, (b), (c), and (d) are fan-planar
and
(e) is adjacency-crossing and has a triangle crossing with the
triangle-crossing edge drawn red.
Our rerouting transforms (e) into (c) and reroutes and straightens the curved edge.}
\label{fig:allK5}
\end{figure}
In return, the edges of $\Delta$ can only be crossed by edges of
$fan(u)$ or $fan(v)$ if $e= \{u,v\}$ is a triangle-crossing edge of
$\Delta$. They are covered by $u$ if there are at least two
triangle-crossing edges incident to $u$.
In addition, there may be edges that cross only one or
two edges of $\Delta$. These are incident to $u$ or $v$ and they
are incident to $u$ if there are at least two triangle-crossing
edges incident to $u$. We assume a standard configuration and
classify crossing edges by the sequence of crossed edges, as
stated in Table~\ref{tab:classify}.
\begin{table}[h]
\begin{tabular}{|l | c| l |}
\hline
name & set & sequence of crossed edges \\
\hline
needle & $N_1, N_2, N_3$ & $\{a,c\}$ \\
$a$-hook & $H_a$& $\{a,b\}$ \\
$c$-hook & $H_c$ & $\{b,c\}$ \\
$a$-arrow & $A_a$ & $\{a,c\}, \{a,b\}$ \\
$c$-arrow & $A_c$ & $\{a,c\}, \{b,c\}$ \\
$a$-sickle & $S_a$ & $\{a,b\}, \{a,c\}$ \\
$c$-sickle & $S_c$ & $\{b,c\}, \{a,c\}$ \\
clockwise triangle-crossing & $C$ & $\{a,c\}, \{b,c\}, \{a,b\}$ \\
counterclockwise triangle-crossing & $CC$ & $\{a,c\}, \{a,b\}, \{b,c\}$ \\
\hline
\end{tabular}
\caption{Classification of edges crossing the edges of a triangle
$\Delta = (a,b,c)$}
\label{tab:classify}
\end{table}
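The classification is purely combinatorial and can be read off from the
crossing sequence alone. The following small sketch (a hypothetical helper,
not part of the paper) maps the sequence of crossed triangle edges, as seen
from a vertex $u$ outside $\Delta$ in standard configuration, to the class
names of Table~\ref{tab:classify}.
\begin{verbatim}
# Triangle edges are abbreviated "ac", "bc", "ab".
CLASSES = {
    ("ac",):            "needle",
    ("ab",):            "a-hook",
    ("bc",):            "c-hook",
    ("ac", "ab"):       "a-arrow",
    ("ac", "bc"):       "c-arrow",
    ("ab", "ac"):       "a-sickle",
    ("bc", "ac"):       "c-sickle",
    ("ac", "bc", "ab"): "clockwise triangle-crossing",
    ("ac", "ab", "bc"): "counterclockwise triangle-crossing",
}

def classify(crossed):
    """crossed: the crossed triangle edges in crossing order."""
    return CLASSES.get(tuple(crossed), "no crossing of the triangle")
\end{verbatim}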
Suppose that $u$ is outside $\Delta$. Then the other endpoint of $g=
\{u,w\}$ is inside $\Delta$ if $g$ is a needle, a hook, or a
triangle-crossing edge, and $w$ is outside $\Delta$ if $g$ is an
arrow or a sickle, see Fig.~\ref{fig:triangle2dir}. An $a$-arrow and
an $a$-sickle are covered by $a$, since they are crossed by at
least two edges of $fan(a)$. Similarly, a $c$-arrow and a $c$-sickle
are covered by $c$. A needle $g$ may be covered by $a$ or by $c$ and
there is a preference for $a$ ($c$) if $g$ is before (after) any
triangle-crossing
edge according to the order of crossing points on $\{a,c\}$ from
$a$ to $c$. Otherwise, there is an instance of configuration II, as
shown in Fig.~\ref{fig:badneedle}. Accordingly, an $a$-hook
may be covered by $a$ or by $b$ and the
crossing edges are on or inside $\Delta$ if it is covered by $b$,
since the triangle-crossing edges prevent edges from $b$ outside
$\Delta$ that cross $a$-hooks.
By symmetry, we consider needles, hooks, arrows, and sickles from
the viewpoint of vertex $v$ inside $\Delta$. Then a needle first
crosses $\{a,b\}$ and an $a$-hook first crosses $\{a,c\}$ and the
other endpoint is outside $\Delta$.
\begin{figure}[H]
\centering
\subfigure[ ]{
\parbox[b]{4.5cm}{%
\centering
\includegraphics[scale=0.7]{triangle2dir.pdf}
}
\label{fig:triangle2dir}
}
\\
\subfigure[ ]{
\parbox[b]{4.5cm}{%
\centering
\includegraphics[scale=0.7]{triangle2dirR.pdf}
}
\label{fig:triangle2dirRR}
}
\caption{Triangle-crossings (a) with clockwise triangle-crossing
edges, $c$-hooks, $c$-sickles, and $c$-arrows crossing $\{b,c\}$ drawn red and
counterclockwise triangle crossing edges, $a$-arrows, $a$-hooks and $a$-sickles crossing $\{a,b\}$, drawn blue and
(b) rerouting the edges along $e_i$ and $e_j$}
\label{fig:notrianglemultiCC}
\end{figure}
A triangle $\Delta = (a,b,c)$ can be crossed by several
triangle-crossing edges, even in opposite directions, see
Fig.~\ref{fig:triangle2dir}. We say that a triangle-crossing edge
crosses \emph{clockwise} if it crosses $\{a,c\}, \{b,c\}, \{a,b\}$
cyclicly in this order, and \emph{counterclockwise} if it crosses
the edges in the cyclic order $\{a,c\}, \{a,b\}, \{b,c\}$.
\begin{lemma} \label{lem:triangle2dir}
Let $\mathcal{E}(G)$ be an adjacency-crossing embedding of a graph
$G$ such that a triangle $\Delta$ is crossed by triangle-crossing
edges
in clockwise and in counterclockwise order. Then there is an
adjacency-crossing embedding in which each triangle-crossing edge is
rerouted so that it crosses only one edge of $\Delta$, and no new
triangle-crossings are introduced.
\end{lemma}
\begin{proof}
Suppose that the edges of $\Delta = (a,b,c)$ are crossed by the
edges of a set $X$. If there are at least two triangle-crossing
edges, then there is a vertex $u$ so that $X=fan(u)$. By our
assumption, $u$ is outside $\Delta$ and $\{a,c\}$ is crossed first.
All other cases are similar. Classify the edges according to
Table~\ref{tab:classify}. Choose a clockwise triangle-crossing edge $e_i$ and a
counterclockwise triangle-crossing edge $e_j$, and assume that
$e_i$ precedes $e_j$ in clockwise order at $u$. The other case is
similar. Partition the set of needles so that
$N_1, N_2$ and $N_3$ are the sets of needles before
$e_i$, between $e_i$ and $e_j$, and after $e_j$ in
clockwise order at $u$. Then $N_3 < e_j < N_2 < e_i < N_1$ according
to the order (of the crossing points) on $\{a,c\}$. Accordingly,
partition the set of counterclockwise triangle-crossing edges into
$CC_l$ and $CC_r$, where $CC_l$ comprises the edges before $e_i$
and $CC_r = CC-CC_l$ is the set of edges after $e_i$, and partition
the set $C$ into the sets of edges to the left and right of $e_j$.
Then edges of $\Delta$ are crossed by the edges of $X=N_1 \cup N_2
\cup N_3 \cup H_a \cup H_c \cup A_a \cup A_c \cup S_a \cup S_c\cup C
\cup CC$. Some of these sets may be empty. The edges from these sets
are unordered at $u$. In particular, edges of $C$ and $CC$ may
alternate, needles may appear anywhere, whereas $c$-hooks and
$c$-sickles precede triangle-crossing edges which precede $a$-hooks
and $a$-sickles.
We sort the edges of $X$ in clockwise order at $u$ and reroute them
along $e_i$ and $e_j$ in the following order:
\noindent $S_c < N_1 < CC_l < H_c < A_c < CC_r < N_2 < C_r < A_a
< H_a < C_l < N_3 < S_a$.
Two edges in a set are ordered by the crossing points with edges of
$\Delta$ so that adjacent edges do not cross one another. The edges
of $S_c$ and $N_1$ are routed along $e_i$ from $u$ to the crossing
point of $e_i$ and $\{a,c\}$, where they make a left turn and follow
$\{a,c\}$. Then the rerouted edge $\tilde{g}$ follows the original
$g$ so that $\tilde{g}$ crosses $\{a,c\}$ if $g$ is a needle. An
edge $\tilde{g}$ first follows $e_i$ to the crossing point with
$\{b,c\}$ if $g \in H_c \cup CC_l \cup A_c \cup CC_r$, then it
follows $\{b,c\}$ and finally $g$. If $g \in H_c \cup CC_l$, then
$\tilde{g}$ makes a left turn, and it makes a right turn for edges in
$CC_r$. Accordingly, edges $\tilde{g}$ make a left or right turn and
cross $\{b,c\}$ if $g$ is an arrow. An edge $\tilde{g}$ may follow
$e_i$ or $e_j$ from $u$ to $\{a,c\}$ or adopts the route of $g$ if
$g \in N_2$ is a needle between the chosen triangle-crossing edges
$e_i$ and $e_j$. Similarly, edges of $C_r, A_a, C_l, N_3$ and $S_a$
are routed along $e_j$ from $u$ to the crossing point with $\{a,b\}$
and $\{a,c\}$, respectively, then along one of these edges, and
finally along the original edge. For an illustration see
Fig.~\ref{fig:notrianglemultiCC}.
The rerouting saves many crossings. Only arrows cross two edges of
$\Delta$, and needles, hooks and triangle-crossing edges cross
$\{a,c\}$. In fact, each rerouted edge is crossed by a subset of
edges crossing the original one, except if the edge is a hook. This
is due to the fact that triangle-crossing edges are only crossed by
the edges of the triangle. Hence, there are (uncrossed) segments
from $u$ to $\{a,c\}$ and from $\{a,c\}$ to $\{b,c\}$ and $\{a,b\}$,
respectively.
In the
final part, $\tilde{g}$ coincides with $g$ and adopts the edge
crossings from $g$. In consequence, $\tilde{g}$ crosses only
$\{a,c\}$ if $g$ is a triangle-crossing edge.
If $g$ is a $c$-hook, then
the crossing with edge $\{b,c\}$ is replaced by a crossing with
$\{a,c\}$ and crossings with edges of $fan(c)$ outside $\Delta$ are
avoided. The replacement is feasible. A $c$-hook cannot be covered
by $b$, since a further crossing edge $\{b,d\}$ must cross a
clockwise triangle-crossing edge, which is excluded. Hence,
$\tilde{g}$ is crossed by edges of $fan(c)$, and each edge $h$
crossing $\tilde{g}$ is in $fan(u)$. Similarly, edge $\{a,b\}$ can
be replaced by $\{a,c\}$ at $a$-hooks. The other rerouted edges
adopt the crossings from the final part, so that new
triangle-crossings cannot be introduced. Topological simplicity is
preserved, since the bundle of edges is well-ordered, and two edges
cross at most once, since there are segments from $u$ to $\{a,c\}$
and between $\{a,c\}$ and $\{b,c\}$ and $\{a,b\}$, respectively.
In consequence, triangle-crossings of $\Delta$ are avoided, there
are no new triangle-crossings, and the obtained embedding is
adjacency-crossing.
\qed
\end{proof}
The rerouting technique of Lemma \ref{lem:triangle2dir} widely
changes the order of the edges of $fan(u)$ and it avoids many
crossings. It is possible to restrict the rerouting to
triangle-crossing edges so that they cross only a single edge of the
triangle. Therefore consider two consecutive crossing points of
clockwise triangle crossing edges or $c$-arrows and $\{b,c\}$, and
reroute the counterclockwise crossing edges crossing $\{b,c\}$ in
the sector along one of the bounding edges. Accordingly, proceed
with clockwise triangle-crossing edges and sectors of $\{a,b\}$.
Thereby hooks, sickles and arrows remain unchanged.
\\
From now on, we assume that all triangle-crossing edges cross
clockwise. We wish to reroute them along
an $a$-arrow, $a$-hook or $a$-sickle if such an edge exists.
This is doable, but we must take a detour if
the edge is covered by $b$ or $c$.
\begin{lemma} \label{lem:trianglearrow}
Suppose there is an adjacency-crossing embedding $\mathcal{E}(G)$
and a triangle $\Delta$ is crossed by clockwise triangle-crossing
edges. If there are an
$a$-hook, an $a$-arrow or an $a$-sickle, then some edges are
rerouted so that
$\tilde{g}$ crosses only one edge of $\Delta$ if $g$ is a triangle-crossing edge of $\Delta$,
and there are no new triangle-crossings.
\end{lemma}
\begin{proof}
Our target is edge $\{a,b\}$ of $\Delta=(a,b,c)$, where the crossing
edges are ordered from $a$ to the left to $b$. Then $a$-hooks and
$a$-sickles are to the left of all triangle-crossing edges, whereas
$a$-arrows are interspersed. Edge $\{a,b\}$ is covered by $u$.
Let $f= \{u,w\}$ be the rightmost edge among all $a$-hooks,
$a$-arrows, and $a$-sickles. First, if $f$ is an $a$-hook, then
reroute all edges $g$ crossing $\{a,b\}$ to the right of $f$ in a
bundle from $u$ to $\{a,b\}$ along the outside of $f$, see
Fig.~\ref{fig:trianglehook}. Since $f$ is rightmost, edge $g$ is
triangle-crossing. Then $\tilde{g}$ makes a right turn and follows
$\{a,b\}$ and finally it follows $g$. Thereby, $\tilde{g}$ crosses
$\{a,b\}$. Let $F$ be the set of edges
in the sector between $\{a,b\}$ and
$\{a,c\}$ that cross $f$, i.e., outside $\Delta$. Then $\tilde{g}$
is crossed by the edges of $F$ and also by $\{a,b\}$. Each crossing
edge is in $fan(a)$ and is uncovered or covered by $u$. It cannot be
covered by the other endpoint $w$ of $f$, since $w$ is inside
$\Delta$ and any edge $\{w,w'\}$ crossing an edge $\{a,d\} \in F$
must cross $\{a,b\}, \{a,c\}$ or a triangle-crossing edge, which is
excluded, since it enforces an independent crossing. Thus
$\tilde{g}$ is only crossed by edges of $fan(a)$, and $\tilde{g}$
can be added to the fan of edges of $fan(u)$ that cross such edges.
Hence, all introduced crossings are fan-crossings, as
Fig.~\ref{fig:trianglehookR} shows.
We would like to proceed accordingly if $f$ is an $a$-sickle and
reroute triangle-crossing edges along the outside of $f$ from $u$ to
$\{a,b\}$. However, $f$ may be crossed by edges $\{a,d\}$ that are
covered by $w$, as shown in Fig.~\ref{fig:trianglesickle}. Then a
rerouted edge along $f$ introduces an independent crossing. We take
another path.
Let the $a$-sickle $f= \{u,w\}$ cross $\{a,b\}$ in $p_1$ and
$\{a,c\}$ in $p_2$, see Fig.~\ref{fig:trianglesickle}.
Let $H$ be the set of edges that cross $\{a,c\}$ between the first
triangle-crossing edge $e_1$ and $f$ including $f$. Now we reroute
all edges $h \in H$ and all triangle-crossing edges $g$ so that
they first follow
$e_1$ from $u$ to $\{a,c\}$, then
$\{a,c\}$, where the edges $\tilde{h}$ branch off and follow
$h$. If $g$ is a triangle-crossing edge, then $\tilde{g}$ crosses
$\{a,c\}$ at $p_2$, and then follows $f, \{a,b\}$, and finally $g$,
see Fig.~\ref{fig:trianglesickleR}.
The rerouted edges are uncrossed from $u$ to their crossing point
with $\{a,c\}$. Hence, each edge $\tilde{h}$ is crossed by a subset
of edges that cross $h$ for $h \in H$. Let $F$ be the set of edges
crossing $f$ in the sector between $p_1$ and $p_2$. Since $f$ is
covered by $a$, these edges are incident to $a$. Now $\tilde{g}$ is
crossed by $\{a,c\}$ and by the edges of $F$ if $g$ is
triangle-crossing, so that $\tilde{g}$ is crossed by edges of
$fan(a)$. Each edge $h \in F$ is in $fan(u)$, since it crosses $f=
\{u,w\}$ and it cannot be covered by $w$. Otherwise, it must be
crossed by another edge $\{w, w'\}$. However, $w$ is outside
$\Delta$ and $\{w, w'\}$ must cross $\{a,c\}$ or $\{a,b\}$ or a
triangle-crossing edge, which introduces an independent crossing.
Hence, $\tilde{g}$ can be added to the fan of edges at $u$ that
cross $h$ so that there is a fan-crossing.
We proceed similarly
if $f = \{u,w\}$ is an $a$-arrow, see Fig.~\ref{fig:tribadarrowX}.
Reroute all edges $g$ that cross $\{a,c\}$
to the right of the leftmost triangle-crossing edge $e_1$ including $e_1$. Then $g$ is triangle-crossing
or an $a$-arrow. Route $\tilde{g}$ from $u$ to $\{a,c\}$ along the
first edge that crosses $\{a,c\}$ and is covered by $c$, then along
$\{a,c\}$ to the crossing point with $f$, then along $f$ and
finally along $g$. Then there is a segment from $u$ to the crossing
with $\{a,c\}$. In the sector between $\{a,c\}$ and $\{a,b\}$, $\tilde{g}$ is
crossed by the edges of $fan(a)$ that cross $f$ in this sector. If
$g$ is a triangle-crossing edge, then $\tilde{g}$ is not crossed by
further edges, whereas $\tilde{g}$ adopts the crossings with further
edges incident to $a$ outside $\Delta$ if $g$ is an $a$-arrow.
Now, $\tilde{g}$ is crossed by a subset of edges that cross $g$ if $g$ is an
$a$-arrow, since $f$ is the rightmost $a$-arrow. If $g$ is a
triangle-crossing edge, then the edges crossing $\tilde{g}$ are
incident to $a$, and each crossing edge is incident to $u$. It
cannot be incident to or covered by the other endpoint $w$ of $f$,
since $w$ is outside $\Delta$ and the edges crossing $\tilde{g}$
are inside, and no further edge $\{w,w'\}$ with $w' \neq u$ can
cross $\{a,b\}, \{a,c\}$, or a triangle-crossing edge. Hence, there
is a fan-crossing, $\tilde{g}$ crosses only one edge of $\Delta$ if
$g$ is triangle-crossing, and there are no new triangle-crossings.
\qed
\end{proof}
\begin{figure}
\centering
\subfigure[ ]{
\parbox[b]{4.5cm}{%
\centering
\includegraphics[scale=0.6]{trihook.pdf}
}
\label{fig:trianglehook}
}
\hfil
\subfigure[ ]{
\parbox[b]{4.5cm}{%
\centering
\includegraphics[scale=0.6]{trihookR.pdf}
}
\label{fig:trianglehookR}
}
\caption{(a) An $a$-hook (drawn blue and dashed) and triangle-crossing edges which (b) are rerouted along the $a$-hook.}
\label{fig:trihook}
\end{figure}
\begin{figure}
\centering
\subfigure[ ]{
\parbox[b]{4.5cm}{%
\centering
\includegraphics[scale=0.6]{sickle.pdf}
}
\label{fig:trianglesickle}
}
\hfil
\subfigure[ ]{
\parbox[b]{4.5cm}{%
\centering
\includegraphics[scale=0.6]{sickleR.pdf}
}
\label{fig:trianglesickleR}
}
\caption{ An $a$-sickle and triangle-crossing edges (a) before and (b) after the edge rerouting.}
\label{fig:sickle}
\end{figure}
\begin{figure}
\centering
\subfigure[ ]{
\parbox[b]{4.5cm}{%
\centering
\includegraphics[scale=0.6]{triBadarrow.pdf}
}
\label{fig:triarrow}
}
\hfil
\subfigure[ ]{
\parbox[b]{4.5cm}{%
\centering
\includegraphics[scale=0.6]{triBadarrowR.pdf}
}
\label{fig:triarrowR}
}
\caption{An $a$-arrow and triangle-crossing edges (a) before and (b) after the edge
rerouting.}
\label{fig:tribadarrowX}
\end{figure}
The existence of an $a$-hook, $a$-sickle or $a$-arrow implies that
edge $\{a,b\}$ is covered by $u$. By symmetry, we can reroute all
triangle-crossing edges, if there are $a$-hooks, $a$-sickles or
$a$-arrows from the viewpoint of vertex $v$ inside $\Delta$. Then
$\{a,c\}$ is covered by $v$. For example, an arrow from $v$ first
crosses $\{a,b\}$ and then $\{b,c\}$ so that vertex $b$ is enclosed
and triangle-crossing edges are rerouted along the outer side of the
arrow. It remains to consider the case without such edges. Then
there are only triangle-crossing edges, needles (from $u$ and from
$v$), $c$-hooks, $c$-arrows, and $c$-sickles.
\begin{lemma} \label{lem:trianglecovered}
Suppose there is an adjacency-crossing embedding $\mathcal{E}(G)$
and a triangle $\Delta = (a,b,c)$ is crossed by clockwise
triangle-crossing edges. If there are no
$a$-hooks, $a$-arrows and $a$-sickles
and edges $\{a,c\}$ and $\{b,c\}$ are not covered by $v$, then edge
$\ell=\{a,b\}$ can be rerouted so that the triangle-crossing
edges do not cross $\tilde{\ell}$,
and there are no new triangle-crossings.
Similarly, reroute $\{a,c\}$ if $\{b,c\}$ is not covered by $u$ and
there are no $a$-hooks, $a$-arrows and $a$-sickles from the
viewpoint of $v$.
\end{lemma}
\begin{proof}
Besides one or more clockwise triangle-crossing edges there are only
needles, $c$-hooks, $c$-arrows and $c$-sickles. We cannot route the
triangle-crossing edges along the edges of $\Delta$, since vertices
$a$ and $b$ may be incident to ``fat edges'', that are explained in
Section \ref{sect:fanplanar}, and prevent a bypass. Therefore, we
reroute $\{a,b\}$. Similarly, we reroute $\{a,c\}$ if $\{a,b\}$ and
$\{b,c\}$ are not covered by $u$, and both ways may be possible.
If $\{u,b\}$ is an edge of $G$, then it crosses $\{a,c\}$ and we
take $f=\{u,b\}$;
otherwise let $f$ be the first edge crossing both
$\{a,c\}$ in $p_1$ and $\{b,c\}$ in $p_2$. Then $f$ is covered by
$c$ and is a triangle-crossing edge or a $c$-arrow. There is a
segment from $u$ to $p_1$, from $p_1$ to $p_2$, and from $p_2$ to
$b$. Other edges incident to $c$ cannot cross $f$, since $f$ is
triangle-crossing or is protected from $c$ by a triangle-crossing
edge, and the final part along $\{b,c\}$ is uncrossed, because $f$
is the first edge crossing $\{b,c\}$ from $b$.
Reroute $\ell = \{a,b\}$ so that $\tilde{\ell}$ first follows
$\{a,c\}$ from $a$ to $p_1$, then $f$ to $p_2$ and finally $\{b,c\}$
to $b$. If $f= \{u,b\}$, then $p_2$ and $b$ coincide.
Let $N$ be the set of edges crossing $\{a,c\}$ in the segment from
$a$ to $p_1$. Then $N$ consists of needles so that $N = N_c \cup
N_a$, where a needle $n \in N_c$ is covered by $c$ and a needle $n
\in N_a$ is uncovered or covered by $a$. The needles in $N_c$ cross
$\{a,c\}$ before the needles of $N_a$. In fact, if an edge $\{x,y\}$
other than $\{a,c\}$ crosses a needle $n \in N$, then $\{x,y\}$ is
outside $\Delta$ if $n \in N_c$. If $\{x,y\}$ crosses $n$ inside
$\Delta$, then $n \in N_a$, since further edges incident to $c$
cannot enter the interior of $\Delta$ below the triangle-crossing
edges.
Now $\tilde{\ell}$ is crossed by the edges of $N$. Note that there
are no crossings of $\tilde{\ell}$ in the second part along $f$ and
in the third part along $\{b,c\}$. Since the edges of $N$ are
incident to $a$, $\tilde{\ell}$ is crossed by edges $fan(a)$. In
return, consider an edge $h$ crossing some needle $n = \{u,w\} \in
N$. Then $n$ may be covered by $a$ or by $c$ so that $h=
\{a,d\}$ or $h= \{c,d\}$. If $n$ is not covered by $c$, we are
done, since we can add $\tilde{\ell} = \{a,b\}$ to the fan of edges
of $fan(a)$ crossing $n$.
However, there is a conflict if $n$ is covered by $c$, as shown in
Fig.~\ref{fig:badneedle}.
Then there are needles $\{u,w_1\}, \ldots, \{u,w_s\}$ and edges
$\{c, z_1\},\ldots, \{c,z_t\}$ for some $s,t \geq 1 $ so that each
$\{u,w_i\}$ is crossed by some $\{c, z_j\}$.
We resolve the conflict by rerouting the needles in advance, so
that needles of $N_c$ are no longer covered by $c$, see
Fig.~\ref{fig:badneedleR}. Reroute each needle $\tilde{n}$ from
$u$ to $p_1$ along $f$, then along $\{a,c\}$, and finally along $n$.
Then there is a segment from $u$ to the crossing point with
$\{a,c\}$ so that $\tilde{n}$ is only crossed by a subset of edges
that cross $n$. Thereafter, there are no needles covered by $c$, and
we are done.
\qed
\end{proof}
\begin{figure}
\centering
\subfigure[ ]{
\parbox[b]{4.5cm}{%
\centering
\includegraphics[scale=0.6]{trianglebase.pdf}
}
\label{fig:badneedle}
}
\hfil
\subfigure[ ]{
\parbox[b]{4.5cm}{%
\centering
\includegraphics[scale=0.6]{trianglebaseR.pdf}
}
\label{fig:badneedleR}
}
\caption{A triangle-crossing (a) with a needle covered by vertex
$c$ that introduces configuration II
and (b) an edge rerouting that avoids triangle-crossing edges.}
\label{fig:routebase}
\end{figure}
We can now show that triangle-crossings can be avoided.
\begin{theorem} \label{thm:trianglecrossing}
Every adjacency-crossing graph is fan-crossing.
\end{theorem}
\begin{proof}
Let $\mathcal{E}(G)$ be an adjacency-crossing embedding of a graph
$G$ and suppose that there are triangle crossings. We remove them
one after another and first consider all triangles
with triangle-crossing edges in both directions
(Lemma \ref{lem:triangle2dir}), then the triangles with $a$-hooks,
$a$-arrows or $a$-sickles (Lemma \ref{lem:trianglearrow}), and
finally those without such edges (Lemma \ref{lem:trianglecovered}).
Each step
removes a crossed triangle and does not introduce new ones. Hence,
the resulting embedding is fan-crossing.
\qed
\end{proof}
\section{Fan-Crossing and Fan-Planar Graphs} \label{sect:fanplanar}
In this Section we assume that embeddings are fan-crossing so that
independent crossings and triangle-crossings are excluded.
Fan-planar embeddings also exclude configuration II
\cite{ku-dfang-14}. An instance of configuration II consists of
the fan-crossing embedding of a subgraph $C$ induced by the vertices
of an edge $e=\{u,v\}$ and of all edges $\{t,w\}$ crossing $e$,
where $e$ is crossed from both sides, as shown in
Fig.~\ref{fig:conf2}. We call $e$ the \emph{base} and its crossing
edges the \emph{fan} of $C$, denoted $fan(C)$. Since $e$ is crossed
from both sides, it is crossed at least twice, and therefore it is
covered by $t$.
It may be crossed by more than two edges. Hence, an edge is the base of
at most one configuration, but a base
may be in the fan of another configuration.
Each edge $g$ of $fan(C)$ is uncovered or is covered by exactly
one of $u$ and $v$. It may cross several base edges so that it is
part of several configurations.
An edge of $fan(C)$ is said to be \emph{straight} if it crosses $e$
from the left and \emph{curved} if it crosses $e$ from the right.
Then an instance of configuration II has at least a straight and a
curved edge. Moreover, exactly one of $u$ and $v$ is inside a cycle
with edge segments of a curved edge, the base, and a straight edge.
For convenience, we assume that $u$ is inside the
cycle and curved edges are \emph{left curves}. Right curves enclose $v$ and both left and right curves
are possible. However, if there are left and right curves, then curves in one direction can
be rerouted.
For convenience, we augment the embedding and assume that for every
instance $C$ of configuration II there are edges $\{t,u\}$ and
$\{t,v\}$. If these edges do not exist, they can be added.
Therefore, route $\{u,t\}$
along the first left curve $f$ from $u$ to the
first crossing point with an edge $g$ of $fan(u)$ and then
along $g$. Then $f$ is uncovered or covered by $u$ and $\{t,u\}$ is
uncrossed, or $f$ is covered by $v$ and $\{t,u\}$ is covered by $v$
or is uncovered. Accordingly, $\{t,v\}$ follows the rightmost edge
crossing $e$ and the first crossed edge of $fan(v)$. The case with
right curves is similar. Hence, we can assume that there is a
triangle $\Delta = (t,u,v)$ associated with $C$.
There are some cases in which configuration II can be avoided by an
edge rerouting. A special one has been used in Lemma
\ref{lem:trianglecovered} in which the straight edge is crossed by a
triangle-crossing edge. However, there is a case in which
configuration II is unavoidable.
\begin{lemma} \label{lem:conf2covercurve}
If a straight edge $s$ of an instance $C$ of configuration II is
uncovered or is covered by $u$, then the left curves $g$ to the left
of $s$ can be rerouted so that $\tilde{g}$ does not cross the base.
The edge rerouting does not introduce new instances of
configuration II.
\end{lemma}
\begin{proof}
We reroute each edge $g$ to the left of $s$ so that $\tilde{g}$
first follows $s$ from $t$ to the crossing point with the first edge
$f$ of $fan(u)$ that crosses both $g$ and $s$. Then $\tilde{g}$
follows $f$ and finally $g$. If $g$ is a straight edge, then $f =
\{u,v\}$, which is crossed.
See
Fig.~\ref{fig:conf2leftsR} for an illustration. If $g$ is a left
curve, then $\tilde{g}$ is only crossed by the edges of $fan(u)$
that cross $s$ in the sector between $\{u,t\}$ and $f$, and by the
edges that cross $g$ in the sector from $f$ to the endpoint. All
edges are in $fan(u)$ and $\{u,v\}$ is not crossed by $\tilde{g}$.
Each edge $h$ that is crossed by $\tilde{g}$ is crossed only once,
since $f$ is the first edge crossing $g$ and $s$. If $h \in fan(u)$
is crossed by $\tilde{g}$ and $g$ and $h$ do not cross, then $h$
crosses $s$ and $h$ is a straight edge for $\tilde{g}$. If there is
a curved edge $\{u,w\}$ crossing $\tilde{g}$, then $\{u,w\}$ is also
a curved edge for $s$. Hence, $\tilde{g}$ can be added to that
instance of configuration II.
If $g$ is a straight edge,
then $\tilde{g}$ is crossed by a subset of edges that cross $g$,
since each edge of $fan(u)$ crossing $s$ in the sector between
$\{u,t\}$ and $\{u,v\}$ must cross $g$. Hence there are no more edge
crossings and instances of configuration II.
\qed
\end{proof}
\begin{figure}[h]
\centering
\subfigure[ ]{
\parbox[b]{6.0cm}{%
\centering
\includegraphics[scale=0.7]{conf2left.pdf}
}
\label{fig:conf2left}
}
\hfil
\subfigure[ ]{
\parbox[b]{5.5cm}{%
\centering
\includegraphics[scale=0.7]{conf2leftR.pdf}
}
\label{fig:conf2leftR}
}
\caption{ An instance of configuration II with (a) a straight edge $s$ covered by $u$
and left curves to its left and (b) rerouting the edges crossing
$\{u,v\}$ to the left of $s$. }
\label{fig:conf2leftsR}
\end{figure}
In consequence, we can remove instances of configuration II in which
there are left curves, right curves and straight edges, since Lemma
\ref{lem:conf2covercurve} either applies to the left or to the right
curves. Lemma \ref{lem:conf2covercurve} cannot be used if left
curves are to the right of straight edges, since the left curves may
be covered by $v$ and the straight edges by $u$. Then configuration
II may be unavoidable using a construction similar to the one of
Theorem \ref{thm:notfanplanar}.
A left curve $g= \{t,x\}$ is \emph{semi-covered} by $u$ if it is
only crossed by an
edge $\{u,w\}$ in the sector between $\{u,t\}$ and $\{u,v\}$.
Thus the crossing edge is inside the triangle $\Delta = (t,u,v)$.
Accordingly, a straight edge $h= \{t,y\}$ is \emph{semi-covered} by $v$
if each edge $\{v,w\}$ with $w\neq u$ crosses $h$ in the sector
between $\{v,t\}$ and $\{v,u\}$, i.e., outside $\Delta$.
A semi-covered edge is covered, but not conversely. A covered left
curve that is not semi-covered is crossed by edges of $fan(u)$ in
the sector between $\{t,v\}$ and $\{t,u\}$ in clockwise order, i.e.,
outside the triangle $(t,u,v)$. Similarly, a semi-covered straight
edge may be crossed by edges of $fan(v)$ inside the triangle. Thus a
semi-covered left curve consists of a segment from $u$ to the
crossing with $\{u,v\}$ and a semi-covered straight edge is
uncrossed inside $\Delta$. These segments are good for routing other
edges.
\begin{lemma} \label{lem:conf2semicovered}
If there is a semi-covered straight (curved) edge, then all curved
(straight) edges
can be rerouted such that they do not cross the base, so that
configuration II is avoided.
\end{lemma}
\begin{proof}
We proceed as in Lemmas \ref{lem:triangle2dir} and
\ref{lem:trianglearrow} and reroute all straight and curved edges in
a bundle along the semi-covered edge $f$ from $t$ to the base
$\{u,v\}$, where they make a left or right turn, follow the base and
finally their original. If $f$ is straight (curved), then the curved
(straight) edges do not cross the base. Each rerouted edge
$\tilde{g}$ is only crossed by a subset of edges that cross $g$,
since the initial part of $\tilde{g}$ is uncrossed until it meets $g$.
\qed
\end{proof}
Next, we construct graph $M$ in which configuration II is
unavoidable. Graph $M$ has fat and ordinary edges. A \emph{fat
edge} consists of a copy of $K_7$. In fan-crossing graphs, a fat edge plays
the role of an edge in planar graphs. It is impermeable to any other
fat or ordinary edge. This observation is due to Binucci et al.
\cite{bddmpst-fan-15} who proved the following:
\begin{lemma} \label{lem:fatedge}
For every fan-crossing embedding of $K_7$ and every pair of vertices
$u$ and $v$ there is a path of segments in which at least one
endpoint is a crossing point. Thus, each pair of vertices is
connected if the uncrossed edges are removed.
\end{lemma}
There are (at least) three fan-crossing embeddings of $K_7$ with
$K_5$ as in Figs.~\ref{fig:allK5}(a-c) and two vertices in the outer
face, see Fig.~\ref{fig:allK7}. The embeddings in
Figs.~\ref{fig:allK5}(d) and \ref{fig:allK5}(e) cannot be extended
to a fan-crossing embedding of $K_7$ by adding two vertices in the
outer face.
\begin{figure}
\centering
\subfigure[ ]{
\parbox[b]{3.3cm}{%
\centering
\includegraphics[scale=0.5]{K7D1.pdf}
}
\label{fig:K7D1}
}
\hfil
\subfigure[ ]{
\parbox[b]{3.3cm}{%
\centering
\includegraphics[scale=0.5]{K7D2.pdf}
}
\label{fig:K7D2}
}
\hfil
\subfigure[ ]{
\parbox[b]{3.3cm}{%
\centering
\includegraphics[scale=0.5]{K7D3.pdf}
}
\label{fig:K7D3}
}
\caption{Different fan-crossing embeddings of $K_7$ that are obtained from different embeddings
of $K_5$ by adding two vertices in the outer face}
\label{fig:allK7}
\end{figure}
\begin{theorem} \label{thm:notfanplanar}
There are fan-crossing graphs that are not fan-planar. In other
words, configuration II is unavoidable.
\end{theorem}
\begin{proof}
Consider graph $M$ from Fig.~\ref{fig:graphM} with fat edges
representing $K_7$ and ordinary ones. Up to the embedding of the
fat edges, graph $M$ has a unique fan-crossing embedding. This is
due to the following fact.
There is a fixed outer frame consisting of two 5-cycles with
vertices $U = \{t', v', y',a',b', t,v,y,a,b\}$ and fat edges. If fat
edges are contracted to edges or regarded as such, this subgraph is
planar and 3-connected and as such has a unique planar embedding. By
a similar reasoning, $M[U]$ has a fixed fan-crossing embedding up
to the embeddings of $K_7$. There are two disjoint 5-cycles, since
fat edges do not admit a penetration by any other edge. Hence, the
edges $\{t,y\}$ and $\{b,v\}$ must be routed inside a face of the
embedding of $M[U]$, and they cross. Consider the subgraph
$M[t,s,u,w,x,z]$ restricted to fat edges. Since vertex $t$ is in the
outer frame, it admits four fan-crossing embeddings with outer face
$(t,u,x,w,z),(t,u,x,z), (t,u,s)$, and $(t,s,z)$, respectively. But
the edges $\{u,a\}, \{u,b\}, \{v,w\}$ and $\{v,z\}$ exclude the
latter three embeddings, since the edges on the outer cycle are fat
edges and do not admit any penetration by another edge.
Edge $\{u,a\}$ cannot cross $\{t,y\}$, since the latter is crossed
by $\{v,z\}$. Hence, $\{t,y\}$ is crossed by $\{w,v\}$ and
$\{z,v\}$. Finally, edge $\{t,x\}$ must cross $\{u,w\}$. It cannot
cross $\{v,z\}$ without introducing an independent crossing. Hence,
it must cross $\{u,a\}, \{u,b\}, \{u,v\}$ and $\{u,w\}$.
Modulo the
embeddings of $K_7$, every fan-crossing embedding is as shown in
Fig.~\ref{fig:graphM} in which $\{u,v\}$ is crossed by $\{t,x\}$
from the right and by $\{t,y\}$ from the left and thus is
configuration II. Hence, graph $M$ is fan-crossing and not
fan-planar.
\qed
\end{proof}
\begin{figure}
\centering
\includegraphics[scale=0.7]{graphM.pdf}
\caption{Graph $M$ with fat edges representing $K_7$ and an unavoidable configuration II}
\label{fig:graphM}
\end{figure}
Theorems \ref{thm:trianglecrossing} and \ref{thm:notfanplanar} solve
a problem of my recent paper on beyond-planar graphs
\cite{b-FOL-17}. Let FAN-PLANAR, FAN-CROSSING, and ADJ-CROSSING
denote the classes of fan-planar, fan-crossing, and
adjacency-crossing graphs. Then Theorems \ref{thm:trianglecrossing}
and \ref{thm:notfanplanar} show:
\begin{corollary}
FAN-PLANAR $\subset$ FAN-CROSSING $=$ ADJ-CROSSING.
\end{corollary}
Kaufmann and Ueckerdt \cite{ku-dfang-14} have shown that fan-planar
graphs of size $n$ have at most $5n-10$ edges, and they posed the
density of fan-crossing and adjacency-crossing graphs as an open
problem.
\begin{theorem}
For every adjacency-crossing graph $G$ there is a fan-planar graph
$G'$ on the same set of vertices and with the same number of edges.
\end{theorem}
\begin{proof}
By Theorem \ref{thm:trianglecrossing} we can restrict ourselves to
fan-crossing graphs. Let $\mathcal{E}(G)$ be a fan-crossing
embedding of $G$
and suppose there is an instance of configuration II in which the
base $\{u,v\}$ is crossed by $\{t, x\}$ from the right and by
$\{t,y\}$ from the left, or vice-versa.
Augment $\mathcal{E}(G)$ and add edges $\{u,w\}$ if they are
fan-crossing and do not cross both $\{t, x\}$ and $\{t, y\}$, and
similarly, add $\{v,w\}$.
Consider the cyclic order of edges or neighbors of $u$ and $v$
starting at $\{u,v\}$ in clockwise order. Let $a$ and $b$ be the
vertices encountered first. Vertices $a$ and $b$ exist, since $a$
precedes $x$ and $b$ precedes $y$, where $x=a$ or $b=y$ are
possible. Then $a$ and $b$ are both incident to both $u$ and $v$ and
there are two faces $f_1$ and $f_2$ containing a common segment of
$\{u,v\}$ and $a$ and $b$, respectively, on either side of
$\{u,v\}$. Otherwise, further edges can be added that are routed
close to $\{u,v\}$ and are crossed either by edges of $fan(t)$ that
are covered by $u$ or by $v$.
We claim that there is no edge $\{a,b\}$ in $\mathcal{E}(G)$.
Therefore, observe that the base is covered by $t$, so that
$\{a,b\}$ cannot cross $\{u,v\}$. Note that if $x=a$ and $b=y$ and
$\{u,v\}$ crossed $\{a,b\}$, then $\{u,v\}$ would be a
triangle-crossing edge of the triangle $(t,a,b)$, which is
excluded. Edge $\{a,b\}$ crosses neither
$\{t,x\}$ nor $\{t,y\}$. If $a,b$ are distinct from $x,y$, then
there is an independent crossing of $\{t,x\}$ and $\{t,y\}$,
respectively, by $\{a,b\}$ and $\{u,v\}$. If $a=x$, then $\{t,x\}$
and $\{x,b\}$ are adjacent and do not cross and $\{x,b\}$ and
$\{u,v\}$ independently cross $\{t,y\}$ if $b \neq y$, and for
$b=y$, $\{x,y\}$ and $\{t,y\}$ cannot cross as adjacent edges.
However, after a removal of the base $\{u,v\}$, vertices $a$ and $b$
are in a common face and can be connected by an uncrossed edge $\{a,
b\}$, which clearly cannot be part of another instance of
configuration II.
Hence, we can successively remove all instances of configuration II
and every time replace the base edge by a new uncrossed edge.
\qed
\end{proof}
In consequence, we solve an open problem of Kaufmann and Ueckerdt
\cite{ku-dfang-14} on the density of fan-planar graphs and show that
configuration II has no impact on the density.
\begin{corollary}
Adjacency-crossing and fan-crossing graphs have at most $5n-10$
edges.
\end{corollary}
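For instance, for $n=7$ the bound evaluates to $5\cdot 7-10=25$, which is
consistent with the fan-crossing embeddings of $K_7$ in
Fig.~\ref{fig:allK7}, as $K_7$ has only $\binom{7}{2}=21$ edges.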
\section{Conclusion} \label{sect:conclusion}
We extended the study of fan-planar graphs initiated by Kaufmann and
Ueckerdt \cite{ku-dfang-14} and continued in \cite{bcghk-rfpg-17,
bddmpst-fan-15} and clarified the situation around fan-crossings.
We proved that triangle-crossings can be avoided whereas
configuration II is essential for graphs but not for their density.
Thereby, we solved a problem by Kaufmann and Ueckerdt
\cite{ku-dfang-14} on the density of adjacency-crossing graphs.
Recently, progress has been made on problems for 1-planar graphs
\cite{klm-bib-17} that are still open for fan-crossing graphs, such
as (1) sparsest fan-crossing graphs, i.e., maximal graphs with as
few edges as possible \cite{begghr-odm1p-13} or (2) recognizing
specialized fan-crossing graphs, such as optimal fan-crossing graphs
with $5n-10$ edges \cite{b-ro1plt-16}.
In addition,
non-simple topological graphs with multiple
edge crossings and crossings among adjacent edges have been studied
\cite{at-mneqpg-07}, and they may differ from the simple ones, as it
is known for quasi-planar graphs \cite{aapps-qpg-97}. Non-simple
fan-crossing graphs have not yet been studied.
\section{Acknowledgements}
I wish to thank Christian Bachmaier for the discussions on
fan-crossing graphs and his valuable suggestions.
\bibliographystyle{abbrv}
\section{Introduction}
The crystallization process we consider here deals with germs $g=(x_g,t_g)$ that appear at random times $t_g$ at random locations $x_g$. The birth process $\mathcal{N}$ is a Poisson point process on $\mathbb{R}^d\times\mathbb{R}^+$ with intensity measure denoted by $\Lambda$. Once the germs or crystallization centers are born, crystals are allowed to grow if their location is not already occupied by another crystal, and when two crystals meet the growth stops at the meeting points. There are then many ways to describe crystal expansion. The first approach is to consider the random sets (called crystallization states) that correspond to the fraction of space occupied by crystals at a given time. In this case, crystallization is studied through the theory of set-valued processes. Another way to describe crystal growth is to deduce the expression of the growth speed from characteristic local properties of the medium. One can also consider, for a germ $g\in\mathbb{R}^d\times\mathbb{R}^+$ and a point $x\in\mathbb{R}^d$, the time $A_g(x)$ at which $x$ is reached by the free crystal associated with the germ $g$. The crystallization process is then characterized by the following random field $\xi$ giving for a location $x\in\mathbb{R}^d$ its crystallization time $\xi(x)$:
\begin{equation}\label{process}\xi(x)=\inf_{g\in\mathcal{N}}A_g(x).
\end{equation}
We adopt in this paper the last definition and study the crystallization process through the random field $\xi$.
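As a first illustration (a toy sketch, not taken from the references), one
can simulate a realization of $\xi$ in dimension $d=1$ for a constant growth
speed $v$, that is $A_g(x)=t_g+|x-x_g|/v$, with germs drawn from a
homogeneous Poisson process on a window $[0,L]\times[0,T]$ whose intensity
is the Lebesgue measure; the restriction to a bounded window introduces
boundary effects which are ignored here.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
L, T, v = 10.0, 5.0, 1.0
n = rng.poisson(L * T)              # number of germs in [0,L] x [0,T]
x_g = rng.uniform(0.0, L, size=n)   # birth locations
t_g = rng.uniform(0.0, T, size=n)   # birth times

def xi(x):
    """Crystallization time xi(x) = inf_g A_g(x)."""
    return np.min(t_g + np.abs(x - x_g) / v)

grid = np.linspace(0.0, L, 201)
times = np.array([xi(x) for x in grid])   # one realization of the field
\end{verbatim}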
This model was introduced by Kolmogorov \cite{Kol37} and independently
by Johnson \& Mehl \cite{JM39}, and intensively studied by many authors. We mention here only a small number of papers which represent the main approaches
and where one can find an exhaustive list of references:
M\o ller \cite{Mol89}, \cite{Mol92}, and also Micheletti \& Capasso \cite{MC97}.
A very large part of these investigations deals with the geometrical structure of the mosaic once all the germs have finished their growth. Here, we are rather interested in estimation problems (such as the estimation of the parameters of the intensity measure $\Lambda$ or of other functionals like the number of crystals in the limit mosaic) in the case when only one realization can be observed on a domain that is sufficiently large compared to the mean size of crystals. Naturally, we suppose that the crystallization process is space homogeneous. More precisely, we assume that the intensity measure is defined as follows,
\begin{equation}\label{product}
\Lambda=\lambda^d\times m,
\end{equation}
where $\lambda^d$ is the Lebesgue measure on $\mathbb{R}^d$ and $m$ is a measure on $\mathbb{R}^+$ finite on bounded Borelians.
This article is mainly devoted to ergodic properties of the random field $(\xi(x))_{x\in \mathbb{R}^d}$ defined by (\ref{process}), which deliver a solid basis for efficient estimation of the parameters of the model and for the subsequent study of its asymptotic normality. Under the above hypothesis and rather general conditions on the growth speed and the geometrical shape of crystals, we demonstrate that the random field $\xi$ is mixing in the sense of ergodic theory. Moreover, under some additional assumptions, we obtain estimates of the absolute regularity coefficient of $\xi$.
The statistical application represents the second part of our work and will be published elsewhere.
\section{Assumptions on the birth-and-growth process}
\subsection{The birth process}
Germs are born according to a Poisson point process on $E=\mathbb{R}^d\times \mathbb{R}^+$ denoted by $\mathcal{N}$. Thus, a germ is a random point $g=(x_g,t_g)$ in $\mathbb{R}^d\times \mathbb{R}^+$, where $x_g$ is the location in the growing space $\mathbb{R}^d$ and $t_g$ is the time of birth on the time axis $\mathbb{R}^+$. We suppose that the intensity measure $\Lambda$ of $\mathcal{N}$ is the product (\ref{product}) of the Lebesgue measure $\lambda^d$ on $\mathbb{R}^d$ and a measure $m$ on $\mathbb{R}^+$ such that $m([0,a])<\infty$ for all $a>0$. The most interesting cases to be considered (see \cite{Mol86}) are for a discrete measure $m$ or when $m(dt)=\alpha t^{\beta-1}\lambda(dt)$ with $\alpha>0$ and $\beta>0$.
Since the Lebesgue measure is invariant under translations on $\mathbb{R}^d$,
we derive that $\mathcal{N}$ is space homogeneous. So we are led to consider only sets around the origin. In particular, we introduce for a time $t$ the causal cone:
\begin{equation}\label{cone}
K_t=\{g\in E\,|\,A_g(0)\leq t\}
\end{equation}
which consists of all the possible germs that are able to reach the origin before time~$t$. The measure $\Lambda(K_t)$ of the causal cone $K_t$ is denoted by $F(t)$.
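For instance, for a constant growth speed $v$, that is
$A_g(x)=t_g+|x-x_g|/v$, and $m=\lambda$, the causal cone is
$K_t=\{(x,s)\in E\,|\,s+|x|/v\leq t\}$, and
$$
F(t)=\int_0^t \lambda^d\big(B(0,v(t-s))\big)\,ds
=\kappa_d\, v^d\,\frac{t^{d+1}}{d+1},
$$
where $\kappa_d$ denotes the volume of the unit ball of $\mathbb{R}^d$.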
\subsection{Expansion of crystals}
We call a crystal a ``free crystal'' if it is born in a region of space not occupied by other crystals at the time of its birth. We associate with each germ $g$ in $E$ a function $A_g$:
\begin{equation}
\label{fonction}
\begin{array}{llcl}
A_g: & \mathbb{R}^d & \rightarrow & \mathbb{R}^+\\
& x & \mapsto & A_g(x)
\end{array}
\end{equation}
where $A_g(x)$ is the time when $x$ is reached by the crystal associated with the germ $g$, assumed to be free. Consequently, at time $t$ a free crystal is defined by the set
\begin{equation}\label{crystal}
C_g(t)=\{x\,|\,A_g(x)\leq t\}.
\end{equation}
In the following, we make several assumptions on the family of free crystals $\{C_g,\;g\in\mathcal{N}\}$ and the family of functions $\{A_g,\;g\in\mathcal{N}\}$. We also specify, when necessary, the link between the assumptions and crystal growth.
\begin{enumerate}[1)]
\item $\forall\, g=(x_g,t_g)\in E,\hspace{0.2cm} A_g(x)=A_{(0,t_g)}(x-x_g)\hspace{0.2cm}\forall\,x\in\mathbb{R}^d.$
Crystal growth is space homogeneous. This assumption implies that for every germ $g=(x_g,t_g)$,
$$C_g(t)=C_{(0,t_g)}(t)+x_g\hspace{0.5cm}\forall \,t\in\mathbb{R}^+.$$
\item $\forall\, g=(x_g,t_g)\in E,\hspace{0.2cm} A_g(x_g)=t_g\textrm{ and }A_g(x)\geq t_g\hspace{0.2cm}\forall \,x\in\mathbb{R}^d.$
A crystal can only reach a point $x$ after its birth.
\item The free crystals $C_g(t)$ are bounded, convex sets and the family $(C_g(t))_{t\in \mathbb{R}^+}$ is increasing that means
$$C_g(s)\subset C_g(t)\hspace{0.5cm}\forall \,0\leq s<t.$$
\item The functions $x\mapsto A_g(x)$ are continuous.
Thus, crystals grow in each space direction and without any jump so that
$$\partial C_g(t)=\{x\,|\,A_g(x)=t\}.$$
\item There exists $M>0$ such that $\forall\,t_g\in\mathbb{R}^+ $ we have
$$A_{(0,t_g)}(x)\geq t_g+\frac{1}{M}|x|\hspace{0.5cm}\forall \,x\in\mathbb{R}^d.$$
The growth speed is then bounded by the constant $M$.
\item $\forall \,g=(0,t_g), \hspace{0.2cm}\forall \,r>0, \hspace{0.2cm} \exists \,t>0$ such that
$$C_g(t)\supset B(0,r)=\{x\in\mathbb{R}^d\,|\,|x|\leq r\}.$$
A free crystal grows in each direction and never definitively stops growing.
\item If $L_g=\{(x,t)\,|\,A_g(x)\leq t\}$ denotes the epigraph of $A_g$, then $\forall \,g_1\in L_g$,
$$A_g(x)\leq A_{g_1}(x)\hspace{0.5cm}\forall \,x\in\mathbb{R}^d.$$
This means that a crystal born inside $L_g$, i.e., inside the crystal generated by $g$, never grows out of it.
\end{enumerate}
When $d=1$, we introduce for each germ $g$ the restrictions $A^+_g$, $A^-_g$ of $A_g$ respectively to $[x_g,+\infty)$, $(-\infty,x_g]$ and consider when necessary a stronger version of Assumption $7)$:
\begin{enumerate}[7a)]
\item $\forall g_1 \in E,\,\forall g_2\in E$, if for some $x_0\geq x_{g_1}$, $A^+_{g_1}(x_0)\geq A_{g_2}(x_0)$ then
$$A_{g_1}(x)\geq A_{g_2}(x) \hspace{0.5cm}\forall x\geq x_0$$
and if for some $x_1\leq x_{g_1}$, $A_{g_1}^-(x_1)\geq A_{g_2}(x_1)$ then
$$A_{g_1}(x)\geq A_{g_2}(x) \hspace{0.5cm}\forall x\leq x_1.$$
\end{enumerate}
Some remarks can be made about these assumptions.
\begin{rem} Assumption $3)$ implies that for all $0<s<t$,
$$C_g(s)\subset[C_g(t)]^{\circ}.$$
Indeed, if there existed $x\in \partial C_g(t)\cap C_g(s)$, then we would have $A_g(x)=t$ and $A_g(x)\leq s$, which cannot occur.
\end{rem}
\begin{rem} Let $g=(0,t_g)$ be a germ. Observe that
$$\sup_{|x|\leq r}A_g(x)=\sup_{|x|=r}A_g(x).$$
Indeed, let $x_r$ satisfy $|x_r|=r$ and $t_r=A_g(x_r)=\sup_{|x|= r}A_g(x)$. Then $C_g(t_r)$ contains all the points $x$ such that $|x|=r$ and, by convexity (Assumption $3)$), we obtain that $C_g(t_r)\supset B(0,r)$.
\end{rem}
\begin{rem} \label{Mg}The function $r\mapsto M_g(r)=\sup_{|x|=r}A_g(x)$ is increasing on $\mathbb{R}^+$ and
$$\lim_{r\rightarrow\infty}M_g(r)=+\infty.$$
\noindent To prove the first assertion, let us consider $0\leq r_0<r_1$ and $(x_0,t_0)\in\mathbb{R}^d\times\mathbb{R}^+$ such that $|x_0|=r_0$ and $t_0=A_g(x_0)=M_g(r_0)$. Then, $x_0\in\partial C_g(t_0)$ and by convexity, $C_g(t_0)\supset B(0,r_0)$. If $M_g(r_1)=M_g(r_0)$, $B(0,r_1)$ would also be included in $C_g(t_0)$ and $x_0$ would be inside $[C_g(t_0)]^{\circ}$. But, this is impossible.\\
For the second point, Assumption $5)$ implies that
$$A_{(0,t_g)}(x)\geq t_g+\frac{1}{M}|x|$$
and
$$M_g(r)\geq t_g+\frac{r}{M}.$$
Thus, $\lim_{r\rightarrow\infty}M_g(r)=+\infty$.
\end{rem}
To obtain the absolute regularity property of $\xi$ when $d$ is greater than or equal to $2$, we add two other assumptions. Let us introduce some definitions before stating them. For a germ $g=(x_g,t_g)$ and the associated free crystal $C_g(t)$ at time $t$, we call ``interior diameter'', written $d_g(t)$, the diameter of the greatest ball centered at $x_g$ and included in $C_g(t)$. In the same way, $D_g(t)$, called the ``exterior diameter'', denotes the diameter of the smallest ball centered at $x_g$ containing $C_g(t)$. From the preceding assumptions, we deduce that for any germ $g\in E$, the functions $t\mapsto d_g(t)$ and $t\mapsto D_g(t)$ are continuous and for all $t\in\mathbb{R}^+$, we have $d_g(t)\leq D_g(t)$. The additional assumptions are the following ones:
\begin{enumerate}[8)]
\item $\exists A>0$ such that $\forall\, g\in E,\hspace{0.2cm}\forall\, t\in\mathbb{R}^+$,
$$\frac{1}{A}D_g(t)\leq d_g(t).$$
This assumption ensures that free crystals have non-degenerate shapes.
\end{enumerate}
\begin{enumerate}[9)]
\item $\forall g=(x_g,t_g)\in E$, the function $t\mapsto D_g(t)$ is ``subadditive'':
$$D_g(t+h)\leq D_g(t)+D_{(0,t)}(h)\hspace{0.5cm}\forall\, t\geq t_g,\hspace{0.2cm}\forall\, h\geq 0.$$
\end{enumerate}
We now give an example that satisfies all the Assumptions $1)$ to $9)$.
\begin{anex} \label{modelsimple}For any germ $g=(x_g,t_g)$, we suppose that the crystal at time $t\geq t_g$ is as follows:
$$C_g(t)=x_g+[V(t)-V(t_g)]K,$$
where $K$ is a convex compact set such that $0\in K^{\circ}$ and the function $t\mapsto V(t)$ represents the distance covered by a growth with speed function $t\mapsto v(t)$. We assume that $v$ is positive almost everywhere. Moreover, we suppose that $V$ is absolutely continuous:
$$V(t)=\int_{0}^t v(s)ds\hspace{0.5cm}\forall t\in\mathbb{R}^+$$
and such that for all $t\geq 0$, $h>0$,
$$V(t+h)-V(t)>0.$$
Observe that
$$C_g(t+h)=C_g(t)\oplus [V(t+h)-V(t)]K$$
where $\oplus$ represents here the Minkowski sum of two sets $A$ and $B$:
$$A\oplus B=\{x+y\,|\,x\in A,\;y\in B\}.$$
Now, we denote by $p_{x,K}$ the norm of the intersection point of $\partial K$ with the half-line from the origin through $x$. Then, a point $x$ is reached at time $t$ by the crystal born in $x_g$ at time $t_g$ if
\begin{equation}\label{ass5}
(V(t)-V(t_g))p_{x-x_g,K}=|x-x_g|.
\end{equation}
As $V$ is invertible,
$$t=A_g(x)=V^{-1}\left(\frac{|x-x_g|}{p_{x-x_g,K}}+V(t_g)\right).$$
Thus, all the Assumptions $1)$ to $9)$, except Assumption $5)$, are readily satisfied in this example.
For the last assumption, we can suppose for example that $v$ is bounded, $v(s) \leq L$.
We can take $x_g = 0$. As $K$ is compact, there exists a constant $C$ such that
$p_{x,K} \leq C$ for all $x$. From (\ref{ass5}) we get
$$
|x| = (V(t)-V(t_g))p_{x,K}\leq LC\,(A_{g}(x)-t_{g}),
$$
which gives Assumption $5)$ with the constant $M = LC$.
Note that if $K=B(0,1)$ and $v(t)=c$, this example corresponds to the linear homogeneous expansion in all directions.
\end{anex}
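As an illustration (this short computation is ours and is not needed in the sequel), the causal cone measure $F$ can be made explicit in the isotropic linear case $K=B(0,1)$, $v(t)=c$, when $m(dt)=\alpha t^{\beta-1}\lambda(dt)$. Then $A_g(0)=t_g+|x_g|/c$, so that $K_t=\{(x,s)\in E\,|\,s+|x|/c\leq t\}$ and
$$F(t)=\Lambda(K_t)=\int_0^t\lambda^d\big(B(0,c(t-s))\big)\,m(ds)=\alpha\,\omega_d\,c^d\int_0^t (t-s)^d s^{\beta-1}\,ds=\alpha\,\omega_d\,c^d\,\frac{\Gamma(\beta)\,\Gamma(d+1)}{\Gamma(d+\beta+1)}\,t^{d+\beta},$$
where $\omega_d$ denotes the volume of the unit ball of $\mathbb{R}^d$. In particular, $F$ grows polynomially in $t$; we shall come back to this behaviour in Examples \ref{ex2} and \ref{ex3} below.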
\section{Mixing property}
We assume without loss of generality that the random field $\xi=(\xi(x))_{x\in \mathbb{R}^d}$ defined by (\ref{process}) is a canonical random field on $(\Omega,\mathcal{F},\mathbb{P})$. Namely, we suppose that $\Omega=\mathbb{R}^{T}$ with $T=\mathbb{R}^d$, $\mathcal{F}$ is the $\sigma$-algebra generated by the cylinders and $\mathbb{P}$ is the distribution of $\xi$, so that for all $\omega\in\Omega$, $\xi(x,\omega)=\omega(x)$. As the Lebesgue measure $\lambda^d$ on $\mathbb{R}^d$ is invariant under translations of $\mathbb{R}^d$, we deduce that $\xi$ is homogeneous. This means that $\mathbb{P}$ is invariant under the translations
$$S_h(\omega)(x)=\omega(x+h),\;h\in\mathbb{R}^d.$$
We now make precise what we mean by a mixing random field.
\begin{defi}\label{mixdef} A random field $\xi=(\xi(x))_{x\in \mathbb{R}^d}$ is mixing if for all $A$, $B\in\mathcal{F}$,
\begin{equation}\label{cyl}
\mathbb{P}(A\cap S^{-1}_h(B))\xrightarrow[|h|\rightarrow\infty]{}\mathbb{P}(A)\mathbb{P}(B).
\end{equation}
\end{defi}
\begin{rem} We note here that if a random field is mixing in the sense of Definition \ref{mixdef}, then the random field is also ergodic.
\end{rem}
To prove that a random field is mixing, it is sufficient to verify Condition (\ref{cyl}) for cylinder sets, that is, to establish the following condition:
\begin{equation}\label{cond}
\begin{split}
& \forall x_1,\dots,x_k\in\mathbb{R}^d,\;\forall y_1,\dots,y_m\in\mathbb{R}^d,\;\forall E_1\in\mathcal{B}^k,\;\forall E_2\in\mathcal{B}^m,\\
& \mathbb{P}\left\{(\xi(x_1),\dots,\xi(x_k))\in E_1,\;(\xi(y_1),\dots,\xi(y_m))\in E_2\right\} \\
& \xrightarrow[|h|\rightarrow\infty]{} \mathbb{P}\left\{(\xi(x_1),\dots,\xi(x_k))\in E_1\right\}\mathbb{P}\left\{(\xi(y_1),\dots,\xi(y_m))\in E_2\right\}\\
\end{split}
\end{equation}
where $\mathcal{B}^k$ (respectively $\mathcal{B}^m$) is the $k$-dimensional (respectively $m$-dimensional) Borel $\sigma$-field.
\begin{thm} \label{mixing} $(d\geq 1)$ Under Assumptions $1)$ to $7)$, the random field $\xi=(\xi(x))_{x\in \mathbb{R}^d}$ defined by (\ref{process}) is mixing.
\end{thm}
To demonstrate Theorem \ref{mixing}, we need three auxiliary lemmas.
\begin{lem}\label{inegalite} If $A_1$, $A_2$, $B_1$ and $B_2$ are four events, then
\begin{itemize}
\item[(i)] $|\mathbb{P}(A_1)-\mathbb{P}(A_2)|\leq\mathbb{P}(A_1\triangle A_2)$,
\item[(ii)]$|\mathbb{P}(A_1\cap B_1)-\mathbb{P}(A_2\cap B_2)|\leq\mathbb{P}(A_1\triangle A_2)+\mathbb{P}(B_1\triangle B_2)$,
\end{itemize}
where for two events $A$ and $B$, $A\triangle B=(A\cap B^c)\cup(A^c\cap B)$.
\end{lem}
\begin{pro} Elementary.
\end{pro}
Now, for all $h\in\mathbb{R}^d$ and $r\geq 0$, we define new random fields to approximate $\xi(\cdot)$ and its translations $\xi(\cdot+h)$:
$$\xi_r^h(x)=\inf_{
\scriptsize
\begin{array}{c}
g\in\mathcal{N}\\
|x_g-h|\leq r
\end{array}}A_g(x+h)\hspace{0.5cm}\forall\, x\in\mathbb{R}^d;$$
note that $\xi_r^h(x)$ only involves the germs located in the ball $B(h,r)$ and that, in particular, $\xi_r^0(x)=\inf_{g\in\mathcal{N},\,|x_g|\leq r}A_g(x)$.
\begin{lem} \label{lemme}Let $H(R)=M_{(0,R)}(R)$ with $M_g(r)$ defined in Remark \ref{Mg}.
Under Assumptions $1)$ to $7)$, we have for all $h\in\mathbb{R}^d$,
$$\mathbb{P}\left(\xi(x+h)=\xi_{(M+1)H(R)}^h(x),\;|x|\leq R\right)\geq 1-\textrm{e}^{-F(R)}$$
where $M$ is the constant of Assumption $5)$ and $F(R)$ is the measure of the causal cone $K_R$ defined by relation (\ref{cone}).
\end{lem}
\begin{pro} As $\mathcal{N}$ is space homogeneous,
$$\mathbb{P}\left(\xi(x+h)=\xi_{(M+1)H(R)}^h(x),\;|x|\leq R\right)=\mathbb{P}\left(\xi(x)=\xi_{(M+1)H(R)}^0(x),\;|x|\leq R\right)$$
and it is then sufficient to demonstrate Lemma \ref{lemme} for $h=0$. First, observe that Assumption $7)$ implies that
$$\{\xi(0)\leq R\}\subset\left\{\sup_{|x|\leq R}\xi(x)\leq H(R)\right\}.$$
Now, let us prove that
\begin{equation}\label{rel}
\left\{\sup_{|x|\leq R}\xi(x)\leq H(R)\right\}\subset\{\xi(x)=\xi_{R+MH(R)}^0(x),\;|x|\leq R\}.
\end{equation}
To prove (\ref{rel}), note that Assumptions $1)$ and $5)$ imply that for all germ $g$,
$$A_g(x)\geq t_g+\frac{|x-x_g|}{M}\hspace{0.5cm}\forall\, x\in\mathbb{R}^d.$$
In particular, for germs $g$ such that $|x_g|>R+MH(R)$, we deduce that
$$A_g(x)>H(R)\hspace{0.5cm}\forall\, x\in\mathbb{R}^d,\hspace{0.2cm}|x|\leq R.$$
Hence, for all $x$ such that $|x|\leq R$,
$$\inf_{\scriptsize
\begin{array}{c}
g\in\mathcal{N}\\
|x_g|> R+MH(R)
\end{array}}A_g(x)>H(R)\geq \xi(x)$$
and (\ref{rel}) follows. On the other hand, for $0\leq r_1\leq r_2$,
$$\xi(x)\leq \xi_{r_2}^0(x)\leq\xi_{r_1}^0(x)\hspace{0.5cm}\forall\, x\in\mathbb{R}^d.$$
From Assumption $2)$, remark that $R\leq H(R)$ and deduce that
$$\{\xi(x)=\xi_{R+MH(R)}^0(x),\;|x|\leq R\}\subset\{\xi(x)=\xi_{(M+1)H(R)}^0(x),\;|x|\leq R\}.$$
Finally, we obtain that
$$\mathbb{P}\left\{\xi(x)=\xi_{(M+1)H(R)}^0(x),\;|x|\leq R\right\}\geq\mathbb{P}\left\{\xi(0)\leq R\right\}$$
and
$$\mathbb{P}\left\{\xi(0)\leq R\right\} = \mathbb{P}\left\{\mathcal{N}\cap K_{R}\ne\emptyset\right\}.$$
But,
\begin{equation*}
\mathbb{P}\left\{\mathcal{N}\cap K_{R}\ne\emptyset\right\}=1-\textrm{e}^{-\Lambda(K_{R})}.
\end{equation*}
\end{pro}
\begin{lem}\label{infinity}Assumptions $1)$ and $6)$ imply that
$$F(R)=\Lambda(K_{R})\xrightarrow[R\rightarrow\infty]{}\infty.$$
\end{lem}
\begin{pro} Assumptions $1)$ and $6)$ imply that for every germ $g\in E$, there exists $R>0$ such that $g\in K_{R}$, or equivalently such that $0$ belongs to the crystal $C_g(R)$. But,
$$\bigcup_{R\geq 0} K_{R}=E$$
and since $\Lambda(E)=+\infty$, the result follows.
\end{pro}
We now return to the proof of Theorem \ref{mixing}.
\begin{pro} For $(x_1,\dots,x_k)$ in $(\mathbb{R}^d)^k$, $(y_1,\dots,y_m)$ in $(\mathbb{R}^d)^m$, $E_1$ in $\mathcal{B}^k$ and $E_2$ in $\mathcal{B}^m$, we define the sets:
$$\begin{array}{lll}
A & = &\{(\xi(x_1),\dots,\xi(x_k))\in E_1\},\\
& & \\
B & = &\{(\xi(y_1),\dots,\xi(y_m))\in E_2\},\\
& & \\
B_h & = & \{(\xi(y_1+h),\dots,\xi(y_m+h))\in E_2\}.\\
\end{array}$$
Let us define $r=\max\{|x_i|,\;i=1\dots k;\;|y_j|,\;j=1\dots m\}$ and consider a positive real number $\epsilon$. From Lemma \ref{lemme} and Lemma \ref{infinity}, we find $R>r$ such that
$$\mathbb{P}\left\{\xi(x)=\xi^0_{(M+1)H(R)}(x),\;|x|\leq R\right\}\geq 1-\epsilon.$$
Let us now introduce $h\in\mathbb{R}^d$ such that $|h|>2R_1$, where $R_1=(M+1)H(R)$. We also define some other sets:
$$\begin{array}{lll}
\tilde A & = &\{(\xi_{R_1}^0(x_1),\dots,\xi_{R_1}^0(x_k))\in E_1\},\\
& & \\
\tilde B & = &\{(\xi_{R_1}^0(y_1),\dots,\xi_{R_1}^0(y_m))\in E_2\},\\
& & \\
\tilde B_h & = & \{(\xi_{R_1}^h(y_1),\dots,\xi_{R_1}^h(y_m))\in E_2\}.\\
\end{array}$$
Lemma \ref{inegalite} $(ii)$ leads to the following inequality:
$$|\mathbb{P}(A\cap B_h)-\mathbb{P}(\tilde A\cap\tilde B_h)|\leq\mathbb{P}(A\triangle\tilde A)+\mathbb{P}(B_h\triangle\tilde B_h).$$
We introduce the set $D=\{\xi(x)=\xi_{R_1}^0(x),\;|x|\leq R\}$ and obtain by Lemma \ref{lemme} that
$$\begin{array}{lll}
\mathbb{P}(A\triangle\tilde A) & = & \mathbb{P}((A\triangle\tilde A)\cap D)+\mathbb{P}((A\triangle\tilde A)\cap D^c)\\
& = & \mathbb{P}((A\triangle\tilde A)\cap D^c)\\
& \leq & \mathbb{P}(D^c)\\
& \leq & \epsilon.
\end{array}$$
If we introduce the set $D_h=\{\xi(x+h)=\xi_{R_1}^h(x),\;|x|\leq R\}$ in place of $D$, we obtain by the same arguments that
$$\mathbb{P}(B_h\triangle\tilde B_h)\leq \epsilon.$$
These two inequalities imply that
\begin{equation}\label{in1}
|\mathbb{P}(A\cap B_h)-\mathbb{P}(\tilde A\cap\tilde B_h)|\leq 2\epsilon.
\end{equation}
On the other hand, the events $\tilde A$ and $\tilde B_h$ are independent because $|h|>2R_1$: they only depend on the restrictions of the Poisson process $\mathcal{N}$ to the disjoint regions $B(0,R_1)\times\mathbb{R}^+$ and $B(h,R_1)\times\mathbb{R}^+$. Thus,
$$\mathbb{P}(\tilde A\cap \tilde B_h)=\mathbb{P}(\tilde A)\mathbb{P}(\tilde B_h)$$
and by space homogeneity of $\mathcal{N}$, $\mathbb{P}(\tilde B_h)=\mathbb{P}(\tilde B)$ so that
\begin{equation}\label{in2}
\mathbb{P}(\tilde A\cap \tilde B_h)=\mathbb{P}(\tilde A)\mathbb{P}(\tilde B).
\end{equation}
Moreover, by Lemma \ref{inegalite} $(i)$
$$\begin{array}{lll}
|\mathbb{P}(A)\mathbb{P}(B)-\mathbb{P}(\tilde A)\mathbb{P}(\tilde B)| & \leq & |\mathbb{P}(A)-\mathbb{P}(\tilde A)|+|\mathbb{P}(B)-\mathbb{P}(\tilde B)|\\
& \leq & \mathbb{P}(A\triangle\tilde A)+\mathbb{P}(B\triangle\tilde B).
\end{array}$$
and Lemma \ref{lemme} implies that
\begin{equation}\label{in3}
|\mathbb{P}(A)\mathbb{P}(B)-\mathbb{P}(\tilde A)\mathbb{P}(\tilde B)|\leq 2\epsilon.
\end{equation}
Inequalities (\ref{in1}), (\ref{in3}) and relation (\ref{in2}) imply that for all $h\in\mathbb{R}^d$ such that $|h|>2R_1$,
$$
|\mathbb{P}(A\cap B_h)-\mathbb{P}(A)\mathbb{P}(B)|\leq 4\epsilon
$$
and Theorem \ref{mixing} is then proved.
\end{pro}
\section{Absolute regularity}
\subsection{General definitions}
For a subset $T$ of $\mathbb{R}^d$, we denote by $\mathcal{F}_T$ the $\sigma$-field generated by the random variables $\xi(x)$ for all $x$ in $T$. Now, consider two disjoint sets $T_1$ and $T_2$ in $\mathbb{R}^d$ and define the absolute regularity coefficient for the $\sigma$-fields $\mathcal{F}_{T_1}$ and $\mathcal{F}_{T_2}$ as follows:
$$\beta(T_1,T_2)=\|\mathcal{P}_{T_1\cup T_2}-\mathcal{P}_{T_1}\times\mathcal{P}_{T_2}\|_{var},$$
where $\|\mu\|_{var}$ is the total variation norm of a signed measure $\mu$ and $\mathcal{P}_T$ is the distribution of the restriction $\xi_{|T}$ in the set $\mathcal{C}(T)$ of continuous real-valued functions defined on $T$. If $T_1\cap T_2=\emptyset$, note that $\mathcal{C}(T_1\cup T_2)$ is canonically identified with $\mathcal{C}(T_1)\times\mathcal{C}(T_2)$.
The strong mixing coefficient is defined as follows:
$$\alpha(T_1,T_2)=\sup_{
\begin{array}{c}
A\in\mathcal{F}_{T_{1}}\\
B\in\mathcal{F}_{T_{2}}
\end{array}}|\mathbb{P}(A\cap B)-\mathbb{P}(A)\mathbb{P}(B)|$$
The process $\xi$ is said to be absolutely regular (respectively $\alpha$-mixing) if the absolute regularity coefficient (respectively the strong mixing coefficient) converges to zero when the distance between $T_1$ and $T_2$ tends to infinity, with $T_1$ and $T_2$ belonging to a certain class of sets.
\begin{rem}It is well known that
$$\alpha(T_1,T_2)\leq \frac{1}{2}\beta(T_1,T_2)$$
so that absolute regularity of the process $\xi$ implies $\alpha$-mixing.
\end{rem}
When $d=1$, one usually chooses $T_1=(-\infty,0]$ and $T_2=[r,+\infty)$, whereas in the case $d\geq 2$ there are several sorts of sets to be considered. The results we obtain in this paper when $d\geq 2$ deal with quadrant domains, as represented in Figure $1$, and enclosed cube domains, as represented in Figure $2$.
\subsection{Upper bounds}
\subsubsection{Approach}
In order to obtain upper bounds for the absolute regularity coefficient $\beta(T_1,T_2)$, we approximate the restrictions of $\xi$ on $T_1$ and $T_2$ by two independent random fields and apply the following lemma.
\begin{lem} \label{outil} Let us consider the random field $(\xi(x))_{x\in\mathbb{R}^d}$ and two disjoint subsets $T_1$ and $T_2$ of $\mathbb{R}^d$. If there exist two random fields $(\eta_1(x))_{x\in\mathbb{R}^d}$ and $(\eta_2(x))_{x\in\mathbb{R}^d}$ and two positive constants $\delta_1$ and $\delta_2$ such that:
\begin{itemize}
\item $\eta_1$ and $\eta_2$ are independent
\item $\mathbb{P}\left\{\xi(x)=\eta_i(x),\hspace{0.2cm}\forall\,x\in T_i\right\}\geq 1-\delta_i$ for $i=1,2$.
\end{itemize}
then
$$\beta(T_1,T_2)\leq 8\,(\delta_1+\delta_2).$$
\end{lem}
\begin{pro}Let us denote by $\mathcal{P}_1$ the distribution of the restriction $\xi_{|T_1}$ of $\xi$ to $T_1$, by $\mathcal{P}_2$ the distribution of the restriction $\xi_{|T_2}$ of $\xi$ to $T_2$, by $\mathcal{Q}_1$ the distribution of the restriction ${\eta_1}_{|T_1}$ of $\eta_1$ to $T_1$, and by $\mathcal{Q}_2$ the distribution of the restriction ${\eta_2}_{|T_2}$ of $\eta_2$ to $T_2$. We have, for $i=1,2$, that
$$\|\mathcal{P}_i-\mathcal{Q}_i\|_{var}\leq 4\delta_i.$$
Indeed, it is clear that
$$\|\mathcal{P}_i-\mathcal{Q}_i\|_{var}=2\,\sup_{A}|\mathcal{P}_i(A)-\mathcal{Q}_i(A)|.$$
If we denote by $D_i$ the set $\{\xi(x)=\eta_i(x),\;\forall\,x\in T_i\}$, we obtain that
$$\|\mathcal{P}_i-\mathcal{Q}_i\|_{var}=2\,\sup_A|\mathbb{P}(\{\xi_{|T_i}\in A\}\cap D_i^c)-\mathbb{P}(\{{\eta_i}_{|T_i}\in A\}\cap D_i^c)|$$
and deduce that
$$\|\mathcal{P}_i-\mathcal{Q}_i\|_{var}\leq 4\,\mathbb{P}(D_i^c).$$
Since $\mathbb{P}(D_i)\geq 1-\delta_i$, we conclude that
$$\|\mathcal{P}_i-\mathcal{Q}_i\|_{var}\leq 4 \,\delta_i.$$
Now, we denote by $\mathcal{P}$ the distribution of $\xi$ on $T_1\cup T_2$ and by $\mathcal{Q}$ the distribution of $\eta$ on $T_1\cup T_2$, with $\eta$ defined as follows:
$$\eta(x)=\left\{\begin{array}{ll}
\eta_1(x) & x\in T_1,\\
\eta_2(x) & x \in T_2.
\end{array}\right.$$
We have
$$\mathbb{P}\left\{\xi(x)=\eta(x),\hspace{0.2cm} \forall\, x\in T_1\cup T_2\right\}=\mathbb{P}(D_1\cap D_2).$$
But,
$$\mathbb{P}(D_1\cap D_2)=1-\mathbb{P}(D_1^c\cup D_2^c)\geq 1-\mathbb{P}(D_1^c)-\mathbb{P}(D_2^c)$$
and since $\mathbb{P}(D_i)\geq 1-\delta_i$ for $i=1,2$,
$$\mathbb{P}\left\{\xi(x)=\eta(x),\hspace{0.2cm} \forall\, x\in T_1\cup T_2\right\}\geq 1-(\delta_1+\delta_2).$$
We deduce, by the same arguments as above, that
$$\|\mathcal{P}-\mathcal{Q}\|_{var}\leq 4\,(\delta_1+\delta_2).$$
Finally, we have that
$$\|\mathcal{P}-\mathcal{P}_1\times\mathcal{P}_2\|_{var}\leq \|\mathcal{P}-\mathcal{Q}\|_{var}+\|\mathcal{Q}-\mathcal{Q}_1\times\mathcal{Q}_2\|_{var}+\|\mathcal{Q}_1\times\mathcal{Q}_2-\mathcal{P}_1\times\mathcal{P}_2\|_{var}.$$
As $\eta_1$ and $\eta_2$ are independent,
$$\|\mathcal{Q}-\mathcal{Q}_1\times\mathcal{Q}_2\|_{var}=0.$$
Moreover,
$$\|\mathcal{P}_1\times\mathcal{P}_2-\mathcal{Q}_1\times\mathcal{Q}_2\|_{var}\leq\|\mathcal{P}_1-\mathcal{Q}_1\|_{var}+\|\mathcal{P}_2-\mathcal{Q}_2\|_{var}\leq 4\,(\delta_1+\delta_2)$$
and
$$\|\mathcal{P}-\mathcal{Q}\|_{var}\leq 4\,(\delta_1+\delta_2).$$
Thus, we derive that
\begin{equation*}
\|\mathcal{P}-\mathcal{P}_1\times\mathcal{P}_2\|_{var}\leq 8\,(\delta_1+\delta_2).
\end{equation*}
\end{pro}
\subsubsection{Dimension $d=1$}
Recall that in this case $T_1=(-\infty,0]$ and $T_2=[r,+\infty)$, and denote by $\beta(r)$ the coefficient $\beta(T_1,T_2)$.
\begin{thm}\label{reg1} (d=1) If Assumptions $1)$--$6)$ and $7a)$ are satisfied, the process $\xi$ has the absolute regularity property and for all $r>0$,
$$\beta(r)\leq C_1\textrm{e}^{-F(C_2 r)},$$
where the constants $C_1$ and $C_2$ can be chosen such that $C_1=16$ and $C_2=\frac{1}{2M}$.
\end{thm}
We introduce for any subset $T$ of $\mathbb{R}$, the process $\xi_T$ defined as follows
\begin{equation}\label{xiT}
\xi_T(x)=\inf_{\scriptsize
\begin{array}{c}
g\in\mathcal{N}\\
x_g\in T
\end{array}}A_g(x) \hspace{0.5cm}\forall\, x\in\mathbb{R}.
\end{equation}
The proof of Theorem \ref{reg1} is based on the two following lemmas.
\begin{lem}\label{lemme1d1}
Under the same assumptions as in Theorem \ref{reg1}, for all $R>0$, we have that
$$\mathbb{P}\left\{\xi(x)=\xi_{(-\infty,MR]}(x),\hspace{0.2cm}\forall\, x\leq 0\right\}\geq 1-\textrm{e}^{-F(R)}$$
with $\xi_{(-\infty,MR]}$ defined by relation (\ref{xiT}) with $T=(-\infty,MR]$.
\end{lem}
\begin{pro}Let us show first that
\begin{equation}\label{eq1}
\{\xi(0)\leq R\}\subset\{\xi(x)=\xi_{(-\infty,MR]}(x),\;\forall x\leq 0\}.
\end{equation}
Suppose that $\xi(0)\leq R$ and prove that
\begin{equation}\label{eq2}
\inf_{\scriptsize
\begin{array}{c}
g\in\mathcal{N}\\
x_g\leq MR
\end{array}}A_g(x)\leq \inf_{\scriptsize
\begin{array}{c}
g\in\mathcal{N}\\
x_g>MR
\end{array}}A_g(x)\hspace{0.5cm}\forall\, x\leq 0.
\end{equation}
For all $g=(x_g,t_g)\in E$ such that $x_g>MR$, Assumptions $1)$ and $5)$ lead to
$$A_g(0)\geq t_g+\frac{|x_g|}{M}>R.$$
Since $\xi(0)\leq R$, we then deduce that
$$\xi(0)=\inf_{\scriptsize
\begin{array}{c}
g\in\mathcal{N}\\
x_g\leq MR
\end{array}}A_g(0).$$
Consequently, there exists $g_0\in\mathcal{N}$ such that $x_{g_0}\leq MR$ and $A_{g_0}(0)=\xi(0)$. Hence $A^-_g(0)>A_{g_0}(0)$ for all $g=(x_g,t_g)\in \mathcal{N}$ such that $x_g>MR$, and we deduce from Assumption $7a)$ that
$$A_g(x)\geq A_{g_0}(x)\hspace{0.5cm}\forall\, x\leq 0$$
and (\ref{eq2}) follows. Since
$$\xi(x)=\min\{\inf_{\scriptsize
\begin{array}{c}
g\in\mathcal{N}\\
x_g\leq MR
\end{array}}A_g(x),\inf_{\scriptsize
\begin{array}{c}
g\in\mathcal{N}\\
x_g>MR
\end{array}}A_g(x)\},$$
we derive that
$$\xi(x)=\xi_{(-\infty,MR]}(x)\hspace{0.5cm}\forall\, x\leq 0$$
and (\ref{eq1}) is then proved. Finally,
$$\mathbb{P}\left\{\xi(x)=\xi_{(-\infty,MR]}(x),\;\forall x\leq 0\right\}\geq \mathbb{P}\left\{\xi(0)\leq R\right\}$$
and \begin{equation*}
\mathbb{P}\left\{\xi(0)\leq R\right\}= 1-\textrm{e}^{-\Lambda(K_{R})}=1-\textrm{e}^{-F(R)}.
\end{equation*}
\end{pro}
Thanks to symmetry arguments, we derive the following lemma.
\begin{lem}\label{lemme2d1}
Under the same assumptions as in Theorem \ref{reg1}, for all $R>0$, we have that
$$\mathbb{P}\left\{\xi(x)=\xi_{[MR,+\infty)}(x),\hspace{0.2cm}\forall\, x\geq 2MR\right\}\geq 1-\textrm{e}^{-F(R)}$$
where $\xi_{[MR,+\infty)}$ is defined by relation (\ref{xiT}) with $T=[MR,+\infty)$.
\end{lem}
We now return to the proof of Theorem \ref{reg1}.
\begin{pro} Let $r$ be a positive real number and consider $R$ such that $2MR=r$, with $M$ the constant of Assumption $5)$. Lemma \ref{lemme1d1} and Lemma \ref{lemme2d1} allow us to apply Lemma \ref{outil} with $\eta_1=\xi_{(-\infty,MR]}$, $\eta_2=\xi_{[MR,+\infty)}$, $T_1=(-\infty,0]$, $T_2=[2MR,+\infty)$ and $\delta_1=\delta_2=e^{-F(R)}$. We then obtain that
\begin{equation*}
\beta(r)\leq 8\,(\delta_1+\delta_2)=16\,e^{-F(\frac{r}{2M})}.
\end{equation*}
\end{pro}
\subsubsection{Dimension $d\geq 2$}
We first obtain an upper bound for the absolute regularity coefficient in the case of two quadrants $T_1$ and $T_2$ separated by a $2r$-width band. As the random field $\xi$ is homogeneous, we can choose $T_1=\prod_{i=1}^d (-\infty,0]$ and $T_2=\prod_{i=1}^d[a_i,+\infty)$. We denote by $L_1$ (respectively $L_2$) the hyperplane orthogonal to $e=\frac{1}{\sqrt{d}}(1,\dots,1)$ and containing the point $(0,\dots,0)$ (respectively $(a_1,\dots,a_d)$), as represented in Figure $1$ for $d=2$. The distance between the hyperplanes $L_1$ and $L_2$ equals $2r=\langle e,a\rangle$. Since $\langle e,a\rangle$ is positive, we can introduce the hyperplane $L_0$ equidistant from $L_1$ and $L_2$. Finally, we denote by $E_1$ (respectively $E_2$) the open half-space delimited by $L_0$ and containing $L_1$ (respectively $L_2$).
\begin{figure}[h]
\begin{center}
\label{figquad}
\includegraphics[width=6cm]{quad.eps}
\caption{Quadrant domains for $d=2$}
\end{center}
\end{figure}
\begin{thm}\label{reg2} $(d\geq 2)$ If Assumptions $1)$--$9)$ are satisfied and $T_1$ and $T_2$ are the quadrant domains previously described, then
\begin{equation}\label{sigma1}
\beta(T_1,T_2)\leq 16\sum_{k=1}^{\infty}k^{d-1}\textrm{e}^{-F(C\,k)},
\end{equation}
where $F(t)$ is the measure of $K_t$, $C=\frac{2R}{dH}$, $R=\frac{r}{H}$ and $H=2(A+M)$ with $M$ and $A$ the constants of Assumptions $5)$ and $8)$.
\end{thm}
Before proving the theorem, we give an estimate of the majorant series in (\ref{sigma1}) for two typical cases.
\begin{anex}\label{ex2}If $F(t)\geq (d+\delta)\ln t-\ln \gamma$ with $\delta,\,\gamma>0$, we have $\textrm{e}^{-F(t)}\leq \gamma \,t^{-(d+\delta)}$ and obtain a polynomial bound for the sum:
$$\sum_{k=1}^{\infty}k^{d-1}\textrm{e}^{-F(Ck)}\leq \gamma'\,C^{-(d+\delta)}$$
with
$$\gamma'=\gamma\,\sum_{k=1}^{\infty}k^{-(1+\delta)}.$$
\end{anex}
\begin{anex}\label{ex3}
If we rather suppose that $F(t)\geq \gamma \,t^{\delta}-c$ with $\delta,\,\gamma,\,c>0$, then $\textrm{e}^{-F(t)}\leq c_1\,\textrm{e}^{-\gamma \,t^{\delta}}$ with $c_1=\textrm{e}^c$. We derive a super-exponential bound for the sum:
$$\sum_{k=1}^{\infty}k^{d-1}\textrm{e}^{-F(Ck)}\leq c_2\,\textrm{e}^{-\gamma \,C^{\delta}},$$
with $$c_2=c_1\sum_{k=1}^{\infty}k^{d-1}\textrm{e}^{-\gamma \,C^{\delta}\,(k^{\delta}-1)}.$$
\end{anex}
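For instance, in the isotropic linear model of Example \ref{modelsimple} with $m(dt)=\alpha t^{\beta-1}\lambda(dt)$, the computation given after that example (ours) yields
$$F(t)=\kappa\, t^{d+\beta},\hspace{0.5cm}\kappa=\alpha\,\omega_d\,c^d\,\frac{\Gamma(\beta)\,\Gamma(d+1)}{\Gamma(d+\beta+1)}>0,$$
so that the hypothesis of Example \ref{ex3} holds with $\gamma=\kappa$ and $\delta=d+\beta$; since $C$ grows linearly in $r$, the resulting bound (\ref{sigma1}) then decays super-exponentially in $r$.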
\begin{pro}
Let us now introduce for all $r>0$ the following random fields:
\begin{equation}\label{eta1}
\eta_r^1(x)=\inf_{\begin{array}{c}
g\in\mathcal{N}\\
x_g\in E_1
\end{array}}A_g(x)\hspace{0.5cm}\forall\,x\in\mathbb{R}^d,
\end{equation}
\begin{equation}\label{eta2}
\eta_r^2(x)=\inf_{\begin{array}{c}
g\in\mathcal{N}\\
x_g\in E_2
\end{array}}A_g(x)\hspace{0.5cm}\forall\,x\in\mathbb{R}^d.
\end{equation}
For all $R>0$, we denote by $\xi_R$ the random field defined as follows:
\begin{equation}\label{xir}
\xi_R(x)=\inf_{\begin{array}{c}
g\in\mathcal{N}\\
|x_g|\leq R
\end{array}}A_g(x)\hspace{0.5cm}\forall\,x\in\mathbb{R}^d.
\end{equation}
The proof is based on three lemmas.
\begin{lem}\label{lemme1d2} Under the assumptions of Theorem \ref{reg2}, for all $R>0$,
\begin{equation*}
\mathbb{P}(\xi(x)=\xi_{HR}(x),\hspace{0.2cm}|x|\leq R)\geq 1-\textrm{e}^{-F(R)}
\end{equation*}
with $\xi_{HR}$ defined by equation (\ref{xir}).
\end{lem}
\begin{pro} We demonstrate that
$$\{\xi(0)\leq R\}\subset\{\xi(x)=\xi_{2(A+M)R}(x),\;|x|\leq R\}.$$
Suppose that $\xi(0)\leq R$ and consider $g_0\in\mathcal{N}$ such that $\xi(0)=A_{g_0}(0)$. Thanks to Assumption $5)$, we obtain that $|x_{g_0}|\leq MR$. On the other hand, we introduce $g=(0,R)$ and deduce from the definition of $\xi$ that
$$\xi(x)\leq A_{g_0}(x)\hspace{0.5cm}\forall\,x\in\mathbb{R}^d.$$
Since $g\in L_{g_0}$, Assumption $7)$ leads to
$$ A_{g_0}(x)\leq A_{g}(x)\hspace{0.5cm}\forall\,x\in\mathbb{R}^d.$$
Thus,
$$\sup_{|x|\leq R}\xi(x)\leq \sup_{|x|\leq R}A_g(x).$$
Now for time
$$t=\inf\{s>R\,|\,d_g(s)=2R\},$$
it is clear that
$$\sup_{|x|\leq R}A_g(x)\leq t$$
and from Assumption $8)$ we deduce that $D_g(t)\leq 2AR.$
Let us consider now a germ $g_1$ such that $|x_{g_1}|>2(A+M)R$. If $t_{g_1}\geq t$, Assumption $2)$ implies that
$$A_{g_1}(x)\geq t\hspace{0.5cm}\forall\,x\in\mathbb{R}^d,\hspace{0.2cm}|x|\leq R.$$
If $t_{g_1}<t$, then Assumptions $3)$, $8)$ and $9)$ lead to
$$D_{g_1}(t)\leq D_{g_1}(R)+D_g(t).$$
But, assumption 5) implies that $D_{g_1}(R)\leq 2MR$ and the time $t$ is such that $D_g(t)\leq 2AR.$ Consequently,
$$D_{g_1}(t)\leq 2(A+M)R$$
and the crystals $C_g(t)$ and $C_{g_1}(t)$ are disjoint. Thus,
$$A_{g_1}(x)\geq t\hspace{0.5cm}\forall\,x\in\mathbb{R}^d,\hspace{0.2cm}|x|\leq R.$$
So, we conclude that, on the event $\{\xi(0)\leq R\}$,
\begin{equation*}
\xi(x)=\xi_{2(A+M)R}(x)\hspace{0.5cm}\forall\,x\in\mathbb{R}^d,\hspace{0.2cm}|x|\leq R,
\end{equation*}
and the lemma follows since $\mathbb{P}\{\xi(0)\leq R\}=1-\textrm{e}^{-F(R)}$.
\end{pro}
\begin{lem} \label{lemme2d2}Under the assumptions of Theorem \ref{reg2}, for all $r>0$,
$$\mathbb{P}\{\xi(x)=\eta_r^1(x),\;x\in T_1\}\geq 1-\sum_{m=1}^{\infty}m^{d-1}\textrm{e}^{-F(Cm)}$$
with $\eta_r^1$ defined by (\ref{eta1}), $C=\frac{2R}{dH}$, $R=\frac{r}{H}$, $H=2(A+M)$.
\end{lem}
\begin{pro} We split the set $T_1$ into $d$-dimensional cubes denoted by $D_{\overline k}$, where for all $\overline k=(k_1,\dots,k_d)\in(-\mathbb{N})^d$,
$$D_{\overline k}=\prod_{i=1}^d\left[\frac{2R}{\sqrt d}(k_i-1),\frac{2R}{\sqrt d}\,k_i\right].$$
Each cube $D_{\overline k}$ is centered at $x_{\overline k}=\left(\frac{R}{\sqrt d}(2k_i-1)\right)_{i=1\dots d}$ and has diameter equal to $2R$. Remark also that the distance between $x_{\overline k}$ and $L_1$ equals $l_{\overline k}$ with
$$l_{\overline k}=R+\left|\left\langle\frac{2R}{\sqrt d}\overline{k},e\right\rangle\right|=R(1+\frac{2}{d}|\sum_{i=1}^dk_i|).$$
Denote by $p$ the probability $\mathbb{P}(\xi(x)=\eta_r^1(x),\;x\in T_1)$ and note that
\begin{equation}\label{interbk}
p=\mathbb{P}(\bigcap_{\overline{k}\in(-\mathbb{N})^d} B_{\overline{k}}),
\end{equation}
with
$$B_{\overline{k}}=\{\xi(x)=\eta_r^1(x),\;x\in D_{\overline k}\}.$$
From Lemma \ref{lemme1d2}, we obtain for all $a>0$ that
$$\mathbb{P}(\xi(x)=\xi_{B(x_{\overline k},Ha)}(x),\;|x-x_{\overline k}|\leq a)\geq 1-\textrm{e}^{-F(a)},$$
where
$$\xi_{B(x_{\overline k},Ha)}(x)=\inf_{\scriptsize\begin{array}{c}
g\in\mathcal{N}\\
|x_g-x_{\overline k}|\leq aH
\end{array}}A_g(x).$$
We choose $a=R+\frac{l_{\overline k}}{H}$. Hence, $D_{\overline k}\subset B(x_{\overline k},a)$ and
$$\{\xi(x)=\xi_{B(x_{\overline k},Ha)}(x),\;|x-x_{\overline k}|\leq a\}\subset\{\xi(x)=\xi_{B(x_{\overline k},Ha)}(x),\;x\in D_{\overline k}\}.$$
Moreover $B(x_{\overline k},Ha)$ is included in the half-space $E_1$. Consequently,
$$\{\xi(x)=\xi_{B(x_{\overline k},Ha)}(x),\;x\in D_{\overline k}\}\subset\{\xi(x)=\eta_r^1(x),\;x\in D_{\overline k}\}.$$
Denoting by $p_{\overline k}$ the probability $\mathbb{P}(B_{\overline k})$, we finally obtain that
\begin{equation}\label{pk}
p_{\overline k}\geq 1-\textrm{e}^{-F\left(R+\frac{l_{\overline k}}{H}\right)}.
\end{equation}
On the other hand, equation (\ref{interbk}) implies that
\begin{equation*}
p=1-\mathbb{P}(\bigcup_{\overline k\in(-\mathbb{N})^d}B_{\overline{k}}^c).
\end{equation*}
From (\ref{pk}), we deduce that
\begin{equation}\label{somme}
p\geq 1-\sum_{\overline k\in(-\mathbb{N})^d}\textrm{e}^{-F\left(R+\frac{l_{\overline k}}{H}\right)}.
\end{equation}
Now, we obtain an upper bound for the sum in (\ref{somme}) as follows:
$$\begin{array}{ll}
& \sum_{\overline k\in(-\mathbb{N})^d}\textrm{e}^{-F\left(R+\frac{l_{\overline k}}{H}\right)}\\
& \\
= & \sum_{m=0}^{\infty}\#\{\overline k,\;|\sum_{i=1}^d k_i|=m\}\,\textrm{e}^{-F\left(R+\frac{R}{H}(1+\frac{2}{d}m)\right)}\\
& \\
\leq & \sum_{m=0}^{\infty}(m+1)^{d-1}\textrm{e}^{-F\left(R\left(1+\frac{1}{H}(1+\frac{2}{d}m)\right)\right)}.\\
\end{array}$$
Since $R\left(1+\frac{1}{H}(1+\frac{2}{d}m)\right)\geq C(m+1)$ with $C=\frac{2R}{dH}$ when $d\geq 2$, we finally derive that
\begin{equation*}
p\geq 1-\sum_{m=1}^{\infty}m^{d-1}\textrm{e}^{-F(Cm)}.
\end{equation*}
\end{pro}
Symmetry arguments lead to the following lemma.
\begin{lem}\label{lemme3d2}Under the assumptions of Theorem \ref{reg2}, for all $r>0$,
$$\mathbb{P}(\xi(x)=\eta_r^2(x),\;x\in T_2)\geq 1-\sum_{m=1}^{\infty}m^{d-1}\textrm{e}^{-F(Cm)}$$
with $\eta_r^2$ defined by (\ref{eta2}), $C=\frac{2R}{dH}$, $R=\frac{r}{H}$, $H=2(A+M)$.
\end{lem}
We make use of these three lemmas to finish the proof of Theorem \ref{reg2}.
We apply Lemma \ref{outil}, thanks to Lemma \ref{lemme2d2} and Lemma \ref{lemme3d2}, with $\eta_1=\eta_r^1$, $\eta_2=\eta_r^2$, $\delta_1=\delta_2=\sum_{m=1}^{\infty}m^{d-1}e^{-F(C\,m)}$ and $T_1$, $T_2$ the quadrant domains. We then have that
\begin{equation*}
\beta(T_1,T_2)\leq 8\,(\delta_1+\delta_2)=16 \sum_{m=1}^{\infty}m^{d-1}e^{-F(C\,m)}.
\end{equation*}
\end{pro}
We now give an upper bound for the absolute regularity coefficient $\beta(T_1,T_2)$ in the case of enclosed cube domains separated by a $2r$-width polygonal band. As the random field $\xi$ is homogeneous, we consider centered domains $T_1=[-a,a]^d$ and $T_2=([-b,b]^d)^c$, as represented in Figure $2$ for $d=2$.
\begin{figure}[h]
\begin{center}
\label{figsqua}
\includegraphics[width=6cm]{squa.eps}
\caption{Sketch for $d=2$}
\end{center}
\end{figure}
\begin{thm}\label{reg3} $(d\geq 2)$ If Assumptions $1)$-$9)$ are satisfied and $T_1$, $T_2$ are the enclosed domains previously described with $b\geq (4H-2)\,a$, then
$$\beta(T_1,T_2)\leq 8\,(1+d\,2^d)\sum_{k=1}^{\infty}k^{d-1}\textrm{e}^{-F(C\,k)},$$
where $F(t)$ is the measure of $K_t$, $C=\frac{2R}{dH}$, $R=\frac{r}{H}$ and $H=2(A+M)$ with $M$ and $A$ the constants of Assumptions $5)$ and $8)$.
\end{thm}
The proof of Theorem \ref{reg3} makes use of the same kind of arguments as in the proof of Theorem \ref{reg2}. Therefore, we first introduce sets in order to define the random fields $\eta_1^r$ and $\eta_2^r$ approximating $\xi$ respectively on $T_1$ and $T_2$. Thus, we denote by $e_1,\dots,e_d$ the $d$ vectors of the canonical basis of $\mathbb{R}^d$ and consider the set $A=\{\alpha=(\alpha_1,\dots,\alpha_d)\,|\, \alpha_i=\pm 1\}$, whose cardinality equals $\# A=2^d$. For all $i$, the hyperplane $e_i^{\perp}$ separates the set $\mathbb{R}^d$ into two open half-spaces $E_i^{\epsilon}$ with $\epsilon=\pm 1$ and $\epsilon e_i$ contained in $E_i^{\epsilon}$. For all $\alpha\in A$, we introduce the quadrant:
\begin{equation*}
\mathcal{Z}_{\alpha}=\displaystyle\bigcap_{i=1}^d E_i^{\alpha_i}
\end{equation*}
and for all $i=1\dots d$ the translated quadrant:
\begin{equation}\label{Zalpha}
\mathcal{Z}_{\alpha,i}=\mathcal{Z}_{\alpha}\oplus \alpha_i\,b\,e_i.
\end{equation}
Observe that
\begin{equation*}
T_2=\displaystyle\bigcup_{\alpha\in A}\bigcup_{i=1}^d\mathcal{Z}_{\alpha,i}.
\end{equation*}
On the other hand, let us define for all $\alpha\in A$ the unit vector of $\mathcal{Z}_{\alpha}$:
$$d_{\alpha}=\frac{1}{\sqrt{d}}\sum_{i=1}^d\alpha_i\,e_i.$$
To separate the sets $T_1$ and $T_2$ by a $2r$-width polygonal band, the quantity $r=\frac{(b-2a)\sqrt{d}}{4}$ must be positive. Thus, we assume that $b>2a$. In this case, we consider the hyperplanes
$$\begin{array}{lcl}
L_{\alpha}^0 & = & d_{\alpha}^{\perp}+\frac{(b+2a)\sqrt{d}}{4}\,d_{\alpha}\\
& & \\
L_{\alpha}^2 & = & L_{\alpha}^0 +r\,d_{\alpha}= d_{\alpha}^{\perp}+ \frac{b}{2}\sqrt{d}\,d_{\alpha}\\
& & \\
L_{\alpha}^1 & = & L_{\alpha}^0 -r\,d_{\alpha}= d_{\alpha}^{\perp}+ a\,\sqrt{d}\,d_{\alpha}\\
\end{array}$$
as represented on Figure $3$ for $d=2$ and $\alpha=(1,1)$.
\begin{figure}[h]
\begin{center}
\label{figlim}
\includegraphics[width=6cm]{limite.eps}
\caption{The hyperplanes $L_{\alpha}^0$, $L_{\alpha}^1$ and $L_{\alpha}^2$ for $d=2$ and $\alpha=(1,1)$}
\end{center}
\end{figure}
\noindent We now introduce, for all $\alpha$ in $A$, the open half-space $S_{\alpha}^2$ delimited by the hyperplane $L_{\alpha}^0$ and containing the quadrants $\mathcal{Z}_{\alpha,i}$ for $i=1\dots d$. Finally, we consider the set $S_2$ containing $T_2$:
\begin{equation*}
S_2=\displaystyle\bigcup_{\alpha\in A}S_{\alpha}^2.
\end{equation*}
Then, we introduce for all $\alpha\in A$ the random field:
$$\eta_{\alpha}(x)=\inf_{\scriptsize\begin{array}{c}g\in\mathcal{N}\\x_g\in S_{\alpha}^2\end{array}}A_g(x)\hspace{0.5cm}\forall \, x\in\mathbb{R}^d$$
and approximate $\xi$ on $T_2$ by the following random field:
\begin{equation}\label{eta2bis}
\eta_2^r(x)=\inf_{\scriptsize\begin{array}{c}g\in\mathcal{N}\\x_g\in S_2\end{array}}A_g(x)\hspace{0.5cm}\forall \, x\in\mathbb{R}^d.
\end{equation}
\begin{lem}\label{lemme4d2}If Assumptions $1)$-$9)$ are satisfied, then
$$\mathbb{P}\left\{\xi(x)=\eta_2^r(x),\hspace{0.2cm} x\in T_2\right\}\geq 1-d\,2^d\sum_{m=1}^{\infty}m^{d-1}e^{-F(C\,m)}$$
where $C$ is the constant of Theorem \ref{reg3} and $\eta_2^r$ is defined by (\ref{eta2bis}).
\end{lem}
\begin{pro} As for all $\alpha\in A$ and all $i=1\dots d$ the sets $\mathcal{Z}_{\alpha,i}$ defined by relation (\ref{Zalpha}) are quadrants included in $S_{\alpha}^2$, $\xi$ can be approximated by $\eta_{\alpha}$ on each $\mathcal{Z}_{\alpha,i}$ by Lemma \ref{lemme2d2}, so that:
$$\mathbb{P}\left\{\xi(x)=\eta_{\alpha}(x),\hspace{0.2cm}\forall\,x\in \mathcal{Z}_{\alpha,i}\right\}\geq 1-\sum_{m=1}^{\infty}m^{d-1}e^{-F(C\,m)}.$$
Since
$$\xi(x)\leq\eta_2^r(x)\leq \eta_{\alpha}(x),\hspace{0.5cm}\forall\,x\in\mathbb{R}^d$$
we deduce for all $\alpha\in A$ and all $i=1\dots d$ that
$$\mathbb{P}\left\{\xi(x)=\eta_2^r(x),\hspace{0.2cm}\forall\,x\in \mathcal{Z}_{\alpha,i}\right\}\geq 1-\sum_{m=1}^{\infty}m^{d-1}e^{-F(C\,m)}.$$
Finally, we derive that
\begin{equation*}
\mathbb{P}\left\{\xi(x)=\eta_2^r(x),\hspace{0.2cm} x\in T_2\right\}\geq 1-d\,2^d\sum_{m=1}^{\infty}m^{d-1}e^{-F(C\,m)}.
\end{equation*}
\end{pro}
We consider now for all $\alpha$ in $A$, the open half-space $S_{\alpha}^1=(S_{\alpha}^2)^c\backslash L_{\alpha}^0$. We also introduce the intersection
$$S_1=\displaystyle\bigcap_{\alpha\in A}S_{\alpha}^1$$
on which $\xi$ can be approximated by the following random field:
\begin{equation}\label{eta1bis}
\eta_1^r(x)=\inf_{\scriptsize\begin{array}{c}g\in\mathcal{N}\\x_g\in S_1\end{array}}A_g(x)\hspace{0.5cm}\forall\,x\in\mathbb{R}^d.
\end{equation}
\begin{lem}\label{lemme5d2}If Assumptions $1)$-$9)$ are satisfied with sets $T_1$ and $T_2$ such that $b\geq (4H-2)\,a$, then
$$\mathbb{P}\left\{\xi(x)=\eta_1^r(x),\hspace{0.2cm} x\in T_1\right\}\geq 1-\sum_{m=1}^{\infty}m^{d-1}e^{-F(C\,m)}$$
where $C$ is the constant of Theorem \ref{reg3} and $\eta_1^r$ is defined by (\ref{eta1bis}).
\end{lem}
\begin{pro} We consider the centered ball $B_1=B(0, a\,\sqrt{d})$ containing $T_1$ and the ball $B_2=B(0,a\,\sqrt{d}+r')$ with $r'\leq r$, so that $B_2$ is contained in $S_1$. If we denote by $R$ the radius of $B_1$ and assume that $R\,H=a\,\sqrt{d}+r'$ with $H$ the constant of Theorem \ref{reg3}, we derive that
$$r'=(H-1)\,a\,\sqrt{d}\leq r=\frac{(b-2\,a)\sqrt{d}}{4}$$
and finally that $b$ must be such that $b\geq (4H-2)\,a$. Since $A\geq 1$, it follows that $H\geq 2$ and $b> 2\,a$. We introduce the random field $\eta_{B_2}$:
\begin{equation*}
\eta_{B_2}(x)=\inf_{g\in B_2}A_g(x)\hspace{0.5cm}\forall\,x\in\mathbb{R}^d.
\end{equation*}
We remark that
\begin{equation}\label{majo}
\xi(x)\leq \eta_1^r(x)\leq \eta_{B_2}(x)\hspace{0.5cm}\forall\,x\in\mathbb{R}^d.
\end{equation}
But by Lemma \ref{lemme1d2},
$$\mathbb{P}\left\{\xi(x)=\eta_{B_2}(x),\hspace{0.2cm}\forall x\in B_1\right\}\geq 1-e^{-F(R)}$$
and from inequality (\ref{majo})
$$\mathbb{P}\left\{\xi(x)=\eta_1^r(x),\hspace{0.2cm}\forall x\in B_1\right\}\geq 1-e^{-F(R)}.$$
As $T_1\subset B_1$, we also have that
$$\left\{\xi(x)=\eta_1^r(x),\hspace{0.2cm} x\in B_1\right\}\subset\left\{\xi(x)=\eta_1^r(x),\hspace{0.2cm} x\in T_1\right\}$$
and then
$$\mathbb{P}\left\{\xi(x)=\eta_1^r(x),\hspace{0.2cm}\forall x\in T_1\right\}\geq 1-e^{-F(R)}.$$
Finally, as $H\geq 2$, $R\geq C$ with $C=\frac{2R}{dH}$ and $e^{-F(R)}\leq e^{-F(C)}$. We note that $e^{-F(C)}\leq \sum_{m=1}^{\infty}m^{d-1}e^{-F(Cm)}$, hence
\begin{equation*}
\mathbb{P}\left\{\xi(x)=\eta_1^r(x),\hspace{0.2cm}\forall x\in T_1\right\}\geq 1-\sum_{m=1}^{\infty}m^{d-1}e^{-F(Cm)}.
\end{equation*}
\end{pro}
\begin{pro}[Theorem \ref{reg3}] We apply again Lemma \ref{outil} thanks to Lemma \ref{lemme4d2} and Lemma \ref{lemme5d2} with $\eta_1=\eta_1^r$, $\eta_2=\eta_2^r$, $\delta_1=\sum_{m=1}^{\infty}m^{d-1}e^{-F(C\,m)}$, $\delta_2=d\,2^d\,\delta_1$ and $T_1$, $T_2$ the enclosed domains. We then have that
\begin{equation*}
\beta(T_1,T_2)\leq 8\,(\delta_1+\delta_2)=8\,(1+d\,2^d) \sum_{m=1}^{\infty}m^{d-1}e^{-F(C\,m)}.
\end{equation*}
\end{pro}
\subsection{Lower bounds}
In conclusion we give a lower bound of $\beta$-coefficient in the context of
Examples \ref{ex2} and \ref{ex3} which are of the same type as the upper ones.
It shows that
the upper bounds in Theorem \ref{reg1}, Theorem \ref{reg2} and Theorem \ref{reg3} are sufficiently precise.
Let the dimension $d =1$. We choose $A=\{\xi(0)>a\}$ and $B=\{\xi(x)>a\}$ with $|x|=r$. It is clear that
\begin{equation}\label{minor}
\beta(r)\geq 2|\mathbb{P}(A\cap B)-\mathbb{P}(A)\mathbb{P}(B)|.
\end{equation}
Since $\xi$ is space homogeneous, we obtain that
$$\begin{array}{lcl}
\mathbb{P}(A)=\mathbb{P}(B) & = & \mathbb{P}\left\{\mathcal{N}\cap K_a=\emptyset\right\}\\
& = & e^{-F(a)}.
\end{array}$$
To compute $\mathbb{P}(A\cap B)$, we assume that there exists $\tau>0$ such that for all $t$ large enough,
$$K_t\cup(K_t+h)\subset K_{(1+\tau)\,t}\hspace{0.5cm}\forall\,h\in\mathbb{R}^d,\hspace{0.2cm}|h|\leq \tau t.$$
Under this assumption,
$$\begin{array}{lcl}
\mathbb{P}(A\cap B) & = & \mathbb{P}\left\{\mathcal{N}\cap K_a=\emptyset,\hspace{0.2cm} \mathcal{N}\cap(K_a+x)=\emptyset\right\}\\
& \geq & \mathbb{P}\left\{\mathcal{N}\cap K_{(1+\tau)\,a}=\emptyset\right\}\\
& = & e^{-F\left((1+\tau)\,a\right)}.
\end{array}$$
We choose $r=\tau\,a$ so that
\begin{equation}\label{betaminor}
\beta(r)\geq\left|e^{-2\,F\left(\frac{r}{\tau}\right)}-e^{-F\left(\frac{(1+\tau)}{\tau}r\right)}\right|.
\end{equation}
We compute the lower-bound term in inequality (\ref{betaminor}) for the two examples. In the case of Example \ref{ex2}, where $F(t)=(d+\delta)\,\ln(t)-\ln(\gamma)$ with $\delta,\,\gamma>0$, we obtain that
$$e^{-2\,F\left(\frac{r}{\tau}\right)}=\gamma^2\tau^{2\,(d+\delta)}r^{-2\,(d+\delta)}$$
and
$$e^{-F\left(\frac{(1+\tau)}{\tau}r\right)}=\gamma\left(\frac{\tau}{\tau+1}\right)^{d+\delta}r^{-(d+\delta)}.$$
Thus, for $r$ sufficiently large,
$$\beta(r)\geq \kappa_1r^{-(d+\delta)}$$
with $\kappa_1>0$. For Example \ref{ex3}, where $F(t)=\gamma\,t^\delta-c$ with $\gamma,\,\delta,\,c>0$, we derive that
$$e^{-2\,F\left(\frac{r}{\tau}\right)}=e^{2\,c}e^{-\frac{2\,\gamma}{\tau^\delta}r^\delta}$$
and
$$e^{-F\left(\frac{(1+\tau)}{\tau}r\right)}=e^ce^{-\frac{\gamma\,(1+\tau)^\delta}{\tau^\delta}r^\delta}.$$
Finally, if $\tau<2^{\frac{1}{\delta}}-1$, then for $r$ sufficiently large,
$$\beta(r)\geq \kappa_2 e^{-\gamma\left(\frac{1+\tau}{\tau}\right)^\delta r^\delta}$$
with $\kappa_2>0.$
\section{Introduction}
Systematic trading strategies are algorithmic procedures that allocate assets aiming to optimize a certain performance criterion. To obtain an edge in a highly competitive environment, the analyst needs to properly fine-tune its strategy, or discover how to combine weak signals in novel, alpha-creating manners. Both aspects, fine-tuning and combination, have been extensively researched in different domains, with distinct emphasis and assumptions:
\begin{itemize}
\item Forecasting and Financial Econometrics: proper model fine-tuning is also known as preventing {\em Backtesting Overfitting}: partly due to the endemic abuse of backtested results, there is an increasing interest in devising procedures for the assessment and comparison of strategies \cite{harvey2015backtesting,romano2016efficient,bailey2015probability}. {\em Model/Forecasting combination} is an established area of research \cite{timmermann2006forecast}, starting with the seminal work of \cite{bates1969combination} in the 60s, and still active \cite{hsiao2014there}.
\item Computational Statistics and Machine Learning: model tuning falls under the guise of {\em Hyperparameter Optimization} \cite{eggensperger2013towards,loshchilov2016cma} and {\em Model Validation} schemes \cite{arlot2010survey,lahiri2013resampling}; research on their interaction is scarce, and dealing with dependent data scenarios is still an open area of research \cite{jiang2017markov,bergmeir2018note}. Forming ensembles is a modelling strategy widely adopted by this community, with Random Forests and Gradient Boosting Trees being the two main workhorses of Bagging and Boosting strategies \cite{friedman2001elements,efron2016computer}.
\end{itemize}
In summary, proper model tuning and combination are still an active area of research, in particular for dependent data scenarios (e.g., time series). Emerging techniques such as Conditional Generative Adversarial Networks \cite{mirza2014conditional} can have an impact on several aspects of trading strategies, specifically fine-tuning and forming ensembles. We can also list a few advantages of such a method: (i) generating more diverse training and testing sets, compared to traditional resampling techniques; (ii) being able to draw samples specifically from stressful events, which is ideal for model checking and stress testing; and (iii) providing a level of anonymization to the dataset, differently from other techniques that (re)shuffle/resample data.
The price paid is having to fit this generative model for a given time series. In this work we show how the training and selection of the generator are performed; overall, this part tends to be less costly than the whole backtesting or ensemble modelling process. Therefore, our work proposes the use of Conditional Generative Adversarial Networks (cGANs) for trading strategy calibration and aggregation. We provide evidence that cGANs can be used as tools for model fine-tuning, as well as for setting up ensembles of strategies. Hence, we can summarize the main highlights of this work:
\begin{itemize}
\item We have considered 579 assets, mainly equities, but we have also included swaps, equity indices, and currency data.
\item Our findings suggest that cGAN can be an alternative to Bagging via Stationary Bootstrap; that is, when bootstrap approaches have failed to outperform, cGAN sampling can be employed instead for Stochastic Gradient Boosting or Random Forests.
\item For model fine-tuning, we have evidence that cGAN is a viable procedure, comparable to many other well-established techniques. Therefore, it should be considered part of the quantitative strategist's toolkit of validation schemes for time series modelling.
\item A side outcome of our work is the wealth of results and comparisons: to the best of our knowledge most of the applied model validation strategies have not yet been cross compared using real datasets and different models.
\end{itemize}
Therefore, in addition to this introductory section, we have structured this paper into four further sections. The next section provides background information about GANs and cGANs, how the training and selection of cGANs for time series was performed, as well as their application to fine-tuning and ensemble modelling of trading strategies. The third section outlines the experimental setting (scenario, parameters, etc.) used for two case studies: fine-tuning and combination of trading strategies. After this, Section IV presents the results and discussion of both case studies, and Section V exhibits our concluding remarks and potential future works.
\section{Generative Adversarial Networks}
\subsection{Background}
Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} are a modelling strategy that employs two Neural Networks: a Generator ($G$) and a Discriminator ($D$). The Generator is responsible for producing a rich, high dimensional vector attempting to replicate a given data generation process; the Discriminator acts to separate the inputs created by the Generator from those of the real/observed data generation process. They are trained jointly, with $G$ benefiting from $D$'s incapability to recognise true from generated data, whilst $D$'s loss is minimized when it is able to correctly classify inputs coming from $G$ as fake and those from the dataset as true. Competition drives both Networks to improve their performance until the genuine data is indistinguishable from the generated one.
From a mathematical perspective, we start by defining a prior $p_{\mathbf{z}}(\mathbf{z})$ on input noise variables $\mathbf{z}$, which will be used by the Generator, denoted as a neural network $G(\mathbf{z}, \Theta_G)$ with parameters $\Theta_G$, that maps noise to the data/input space $G: \mathbf{z} \rightarrow \mathbf{x}$. We also need to set the Discriminator, represented as a neural network $D(\mathbf{x}^\ast, \Theta_D)$, that scores how likely is that $\mathbf{x}^\ast$ was sampled from the dataset ($p_{data}(\mathbf{x})$ -- $D: \mathbf{x}^\ast \rightarrow [0, 1]$). As outlined before, $D$ is trained to maximize correct labelling, whilst $G$, in the original formulation, is trained to minimize $\log(1- D(G(\mathbf{z})))$. It follows from \cite{goodfellow2014generative} that $D$ and $G$ play the following two-player minimax game with value function $V(G,D)$:
\begin{eqnarray}
\min_G \max_D V(D,G) = \mathbb{E}_{\mathbf{x} \sim p_{data}(\mathbf{x})} [\log D(\mathbf{x})] + \nonumber \\ \mathbb{E}_{\mathbf{z} \sim p_{\mathbf{z}}(\mathbf{z})} [\log (1 - D(G(\mathbf{z})))]
\end{eqnarray}
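To make both objectives concrete, the following sketch (ours, written in PyTorch purely for illustration; the architectures we actually use are described in the Experimental Setting section) spells out the per-minibatch losses implied by the value function, using the non-saturating generator objective that we also adopt later in Algorithm \ref{training}:
\begin{verbatim}
import torch
import torch.nn.functional as F

def d_loss(d_real, d_fake):
    # D ascends log D(x) + log(1 - D(G(z))): equivalent to
    # descending binary cross-entropy with labels 1 (real), 0 (fake)
    ones, zeros = torch.ones_like(d_real), torch.zeros_like(d_fake)
    return (F.binary_cross_entropy(d_real, ones) +
            F.binary_cross_entropy(d_fake, zeros))

def g_loss(d_fake):
    # non-saturating objective: G ascends log D(G(z)) instead of
    # descending log(1 - D(G(z))), giving stronger early gradients
    return F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
\end{verbatim}
Here \texttt{d\_real} and \texttt{d\_fake} are assumed to be the sigmoid outputs of $D$ on real and generated inputs, respectively.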
Overall, GANs have been successfully applied to image and text generation \cite{creswell2018generative:paper2}. However, some issues linked to their training and to applications in special cases \cite{salimans2016improved,gulrajani2017improved} have fostered a substantial amount of research into newer architectures, loss functions, training procedures, etc. We can classify and summarise these new methods as belonging to:
\begin{itemize}
\item Ensemble Strategies: train multiple GANs, with different initial conditions, slices of data and tasks to attain; orchestrate the generator output by employing an aggregation operator (summing, weighted averaging, etc.), from multiple checkpoints or at the end of the training. Notable instantiations of these steps are the Stacked GANs \cite{huang2017stacked}, Ensembles of GANs \cite{wang2016ensembles} and AdaGANs \cite{tolstikhin2017adagan}.
\item Loss Function Reshaping: reshape the original loss function (a lower-bound on the Jensen-Shannon divergence) so that issues linked to training instability can be circumvented. Typical examples are: employing Wasserstein-1 Distance with a Gradient Penalty \cite{arjovsky2017wasserstein,gulrajani2017improved}; using a quantile regression loss function to implicitly push $G$ to learn the inverse of a cumulative density function \cite{ostrovski2018autoregressive}; rewriting the objective function with a mean squared error form -- now minimizing the $\chi^2$-distance \cite{mao2017least}; or even view the discriminator as an energy function that assigned low energy values to the regions of high data density, guiding the generator to sample from those regions \cite{zhao2016energy}.
\item Adjusting Architecture and Training Process: we can mention Deep Convolutional GAN \cite{radford2015unsupervised}, in which a set of constraints on the architectural topology of Convolutional GANs are put in place to make the training more stable. Also, Evolutionary GANs \cite{wang2018evolutionary} that adds to the training loop of a GAN different metrics to jointly optimize the generator, as well as employing a population of Generators, created by introducing novel mutation operators.
\end{itemize}
Another issue, more closely associated with our work, is the handling of time series: learning an unconditional model, as in the original formulation, works for image and text creation/discovery. However, when the goal is to use it for time series modelling, a sampling process that can take into account the previous state space is required to preserve the statistical properties of the time series (autocorrelation structure, trends, seasonality, etc.). In this sense, the next subsection deals with Conditional GANs \cite{mirza2014conditional}, a more appropriate modelling strategy to handle dependent data generation.
\subsection{Conditional GANs}
As the name implies, Conditional GANs (cGANs) are an extension of a traditional GAN, in which both $G$'s and $D$'s decisions are based not only on noise or generated inputs, but also on an additional information set $\mathbf{v}$. For example, $\mathbf{v}$ can represent a class label, a certain categorical feature, or even a current/expected market condition; hence, a cGAN attempts to learn an implicit conditional generative model. Such an application is more appropriate in cases where the data follows a sequence (time series, text, etc.) or when the user wants to build ``what-if'' scenarios (given that the S\&P 500 has fallen 1\%, how many basis points of change should I expect in the US 10-year Treasury?).
Most of the applications of cGANs related to our work have centred on synthesizing data to improve supervised learning models. The only exception is \cite{zhou2018stock}, where the authors use a cGAN to perform direction prediction in stock markets. Works \cite{fiore2017using,douzas2018effective} deal with the problem of imbalanced classification, in particular fraud detection; they are able to show that cGANs compare favourably to other traditional techniques for oversampling. In \cite{esteban2017real}, the one that is closest to our work, the authors propose to use cGANs for medical time series generation and anonymization. They used cGANs to generate realistic synthetic medical data, so that this data could be shared and published without privacy concerns, or even used to augment or enrich similar datasets collected in different or smaller cohorts of patients.
\begin{algorithm*}[h!]
\caption{cGAN Training and Selection}\label{training}
\begin{algorithmic}[1]
\Procedure{cGAN}{$[y_1,...,y_T], params$}
\For{number of epochs}
\State Sample minibatch of $L$ noise samples $\{\mathbf{z}^{(1)}, ..., \mathbf{z}^{(L)}\}$ from noise prior $p_{\mathbf{z}} (\mathbf{z})$
\State Sample minibatch of $L$ examples $\{(y_t; y_{t-1}, ..., y_{t-p})^{(1)}, ..., (y_t; y_{t-1}, ..., y_{t-p})^{(L)}\}$ from $p_{data}(y_t | y_{t-1}, ..., y_{t-p})$
\State Update the discriminator by ascending its stochastic gradient:
\begin{equation}
\nabla_{\Theta_D} \frac{1}{L} \sum_{l=1}^{L} \Big[\log D(y_t^{(l)} | y_{t-1}^{(l)}, ..., y_{t-p}^{(l)}) + \log (1 - D(G(\mathbf{z}^{(l)} | y_{t-1}^{(l)}, ..., y_{t-p}^{(l)})))\Big] \nonumber
\end{equation}
\State Sample minibatch of $L$ noise samples $\{\mathbf{z}^{(1)}, ..., \mathbf{z}^{(L)}\}$ from noise prior $p_{\mathbf{z}} (\mathbf{z})$
\State Update the generator by ascending its stochastic gradient:
\begin{equation}
\nabla_{\Theta_G} \frac{1}{L} \sum_{l=1}^{L} \Big[\log (D(G(\mathbf{z}^{(l)} | y_{t-1}^{(l)}, ..., y_{t-p}^{(l)})))\Big] \nonumber
\end{equation}
\If{$rem(epoch, snap) == 0$}
\State $G_k \gets G$, $D_k \gets D$ \Comment{store current $G$, $D$ as $G_k$, $D_k$}
\For{$c \gets 1, C$} \Comment{draw $C$ samples from $G_k$}
\For{$t\gets p+1, T$} \Comment{generate time series}
\State sample noise vector $\mathbf{z} \sim p_{\mathbf{z}} (\mathbf{z})$
\State draw $y_t^\ast = G_k(\mathbf{z} | y_{t-1}...., y_{t-p})$
\EndFor
\State Measure cGAN sample goodness-of-fit (akin to chi-square distance):
\begin{equation}
RMSE_c = \sqrt{\frac{1}{T-p-1} \sum_{p+1}^{T} (y_t - y_t^\ast)^2} \nonumber
\end{equation}
\EndFor
\State Average of all samples: $RMSE(G_k) = \frac{1}{C} \sum_{c=1}^{C} RMSE_c$
\EndIf
\EndFor
\State \textbf{return} $G := \arg\min_{G_k} RMSE(G_k)$ and the corresponding discriminator $D := D_k$
\EndProcedure
\end{algorithmic}
\end{algorithm*}
Formally, we can define a cGAN by including the conditional variable $\mathbf{v}$ in the original formulation. Therefore, now $G: \mathbf{z} \times \mathbf{v} \rightarrow \mathbf{x}$ and $D: \mathbf{x}^\ast \times \mathbf{v} \rightarrow [0, 1]$, as before $D$ is trained to maximize correct labelling, whilst $G$, in the original formulation, is trained to minimize $\log(1- D(G(\mathbf{z} | \mathbf{v})))$. Similarly, it follows from \cite{mirza2014conditional} that $D$ and $G$ play the following two-player minimax game with value function $V(G,D)$:
\begin{eqnarray}
\min_G \max_D V(D,G) = \mathbb{E}_{\mathbf{x} \sim p_{data}(\mathbf{x})} [\log D(\mathbf{x} | \mathbf{v})] + \nonumber \\ \mathbb{E}_{\mathbf{z} \sim p_{\mathbf{z}}(\mathbf{z})} [\log (1 - D(G(\mathbf{z} | \mathbf{v})))]
\end{eqnarray}
\noindent in our case, given a time series $y_1, y_2, ..., y_t, ..., y_T$, our conditional set is $\mathbf{v} = (y_{t-1}, y_{t-2}, ..., y_{t-p})$ and what we are aiming to sample/discriminate is $\mathbf{x} = y_{t}$ (with $p_{data}$ $(y_t | y_{t-1}, ..., y_{t-p})$). In this sense, $p$ sets the amount of past information that is considered in the implicit conditional generative model. If $p = 0$, then a traditional GAN will be trained; if $p$ is large, then the Neural Network has a longer memory, but it will need a bigger capacity to select the relevant past values and to deal with the noise vector $\mathbf{z}$; the Experimental Setting section outlines the values we have used during our experiments.
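As a small illustration (ours; the function name and the ordering of the lags are arbitrary choices), the pairs $(\mathbf{v}, \mathbf{x})$ can be assembled from a series as follows:
\begin{verbatim}
import numpy as np

def make_pairs(y, p):
    # v = (y_{t-1}, ..., y_{t-p}), most recent lag first,
    # x = y_t, for t = p, ..., T-1 (0-based indexing)
    V = np.stack([y[t - p:t][::-1] for t in range(p, len(y))])
    x = y[p:]
    return V, x
\end{verbatim}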
\subsection{Training and Selecting Generators for Time Series}
With the addition of the conditional vector $\mathbf{v}$, training cGANs is akin to training GANs; what substantially changes is how the right architecture is chosen across the training. Algorithm \ref{training} details a minibatch stochastic gradient descent procedure for the Training and Selection of cGANs.
\vspace{0.25cm}
\noindent $params$ represents a set of hyperparameters that the user has to define before running \textit{cGAN Training}. It mainly encompasses: $G$ and $D$ architectures, number of lags $p$, noise vector size and prior distribution, minibatch size $L$, number of epochs, snapshot frequency ($snap$), number of samples $C$, and parameters associated to the stochastic gradient optimizer; all of them are specified in the Experimental Setting section (see Table \ref{cgan-configs}).
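For concreteness, the sketch below (ours, in PyTorch; the optimizer choice and the $params$ keys are illustrative, not a specification of our implementation) mirrors the training part of Algorithm \ref{training}, including the snapshots of $G$ taken every $snap$ epochs; $G$ and $D$ are assumed to accept batched $(\mathbf{z},\mathbf{v})$ and $(y,\mathbf{v})$ pairs, with $D$ outputting probabilities:
\begin{verbatim}
import copy
import torch

def train_cgan(G, D, V, x, params):
    # V: conditioning windows, x: targets (torch tensors)
    opt_d = torch.optim.Adam(D.parameters(), lr=params["lr"])
    opt_g = torch.optim.Adam(G.parameters(), lr=params["lr"])
    bce = torch.nn.BCELoss()
    snapshots, n = [], len(x)
    for epoch in range(1, params["epochs"] + 1):
        idx = torch.randint(0, n, (params["batch"],))
        v, y = V[idx], x[idx].unsqueeze(1)
        # discriminator step: real pairs labelled 1, fake pairs 0
        z = torch.randn(params["batch"], params["noise_dim"])
        d_real, d_fake = D(y, v), D(G(z, v).detach(), v)
        loss_d = (bce(d_real, torch.ones_like(d_real)) +
                  bce(d_fake, torch.zeros_like(d_fake)))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # generator step: ascend log D(G(z | v)) (non-saturating)
        z = torch.randn(params["batch"], params["noise_dim"])
        d_gen = D(G(z, v), v)
        loss_g = bce(d_gen, torch.ones_like(d_gen))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        if epoch % params["snap"] == 0:
            snapshots.append(copy.deepcopy(G))  # store G_k
    return snapshots
\end{verbatim}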
\begin{figure*}[h!]
\centering
\subfloat[]{\includegraphics[width=.33\linewidth]{rmse_samples4_20_100_snap_100.png}} \subfloat[]{\includegraphics[width=.33\linewidth]{rmse_samples4_20_100_snap_500.png}}
\subfloat[]{\includegraphics[width=.33\linewidth]{rmse_samples4_20_100_snap_2500.png}}
\caption{RMSE curves, considering a range of snapshot frequencies and number of samples.}
\label{spx-cgan-rmse}
\end{figure*}
\begin{figure*}[h!]
\centering
\subfloat[]{\includegraphics[width=.33\linewidth]{spx_cgan_returns_100samples_epoch200.png}} \subfloat[]{\includegraphics[width=.33\linewidth]{spx_cgan_returns_100samples_epoch1000.png}}
\subfloat[]{\includegraphics[width=.33\linewidth]{spx_cgan_returns_100samples_epoch5000.png}} \\
\subfloat[]{\includegraphics[width=.33\linewidth]{spx_cgan_cumreturns_100samples_epoch200.png}} \subfloat[]{\includegraphics[width=.33\linewidth]{spx_cgan_cumreturns_100samples_epoch1000.png}}
\subfloat[]{\includegraphics[width=.33\linewidth]{spx_cgan_cumreturns_100samples_epoch5000.png}}
\caption{SPX Index log-returns (a-c) from cGAN at different epochs, with their respective cumulative returns (d-f).}
\label{spx-cgan-cumreturns}
\end{figure*}
Selecting the right cGAN during the training is a difficult task, since it is computationally expensive to draw and evaluate multiple samples at every iteration. An approximation that we considered was to add a snapshot frequency, such that every $snap$ iterations the $G$ and $D$ weights are stored. This parameter plays a relevant role in regulating the number of cGANs available to draw samples from, evaluate and select. To illustrate the selection part of Algorithm \ref{training}, Figure \ref{spx-cgan-rmse} presents a sensitivity analysis of it for the SPX Index.
Overall, after a sharp decrease in the first 2000 epochs, we observe a stabilization of RMSE at the 0.018 level. Drawing more samples improves estimation, but the gain is almost imperceptible from 20 to 100 samples. Snapshot frequency is an important parameter, with a noticeable difference between 100 and 2500, but without much change moving from 100 to 500. The number of samples drawn from $G$ and the snapshot frequency are also reported in the Experimental Setting section. Figure \ref{spx-cgan-cumreturns} presents SPX Index samples (a-c) from cGAN and their respective cumulative returns (d-f)\footnote{We are highlighting this period in particular because our analyses and results concentrated on taking samples from 2001-2013.} in different stages of training: 200, 1000 and 5000 epochs.
Clearly, with just 200 epochs the generated samples do not resemble the index well, whilst with 1000 the results are closer. For the SPX Index, not much improvement was observed after 1000-2000 epochs. Although the samples still appear similar to the index, some issues related to scaling and the presence of overshoots in the prediction damaged the RMSE values. Figure \ref{spx-cgan-corr} looks into the estimated autocorrelation (ACF) and partial autocorrelation (PACF) functions using samples of a cGAN trained for 1000 epochs. With few exceptions, most of the observed ACF and PACF values were in the neighbourhood of the average of several cGAN samples, with the confidence interval (CI) covering most of the 63 lags; this anecdotal evidence suggests that our cGAN Training and Selection Algorithm is able to replicate, to a certain extent, some statistical properties of a time series, in particular its ACF and PACF. The proper evidence toward this last assertion is provided in our case studies.
\begin{figure}[h!]
\centering
\subfloat[]{\includegraphics[width=\columnwidth]{cganlarge_spx_autocorr_63.png}} \\ \subfloat[]{\includegraphics[width=\columnwidth]{cganlarge_spx_partialautocorr_63.png}}
\caption{Autocorrelations (a) and partial autocorrelations (b) for SPX Index using a cGAN with 1000 epochs.}
\label{spx-cgan-corr}
\end{figure}
A final note: we have adopted the Root Mean Square Error as the loss function between the generator samples and the actual data; however, nothing prevents the user from adopting another type of loss function. In the next two subsections we outline new applications that can be made using the cGAN generator: fine-tuning and combination of trading strategies.
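As an illustration of this flexibility, the selection score can be written with a pluggable loss (a sketch; \texttt{sample\_path} is an assumed helper that draws one simulated path $y_{p+1}^\ast, ..., y_T^\ast$ from a stored generator):
\begin{verbatim}
import numpy as np

def score_generator(sample_path, y, p, C, loss=None):
    # Default loss is the RMSE used in our selection procedure; any
    # other comparison (MAE, a stylized-facts distance, ...) can be
    # passed instead.
    if loss is None:
        loss = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
    y_true = np.asarray(y, dtype=float)[p:]
    scores = [loss(y_true, sample_path()) for _ in range(C)]
    return np.mean(scores)  # average over the C samples
\end{verbatim}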
\subsection{cGAN: Fine-Tuning of Trading Strategies}
Fine-tuning of trading strategies consists of identifying a suitable set of hyperparameters such that a desired goal is attained. This goal depends on the utility function $P$ that the quantitative analyst is targeting: outperformance during active trading, hedging a specific risk, reaching a certain level of risk-adjusted returns, and so on. This problem can be decomposed into two tasks -- model validation and hyperparameter optimization -- that are strongly connected. Taking \cite{bergstra2012random} as the starting point, given:
\begin{itemize}
\item a finite set of examples $\mathbf{X}^{(train)}$ drawn from a probability distribution $p_\mathbf{x} (\mathbf{x})$
\item set of hyperparameters $\lambda \in \Lambda$, such as number of neurons, activation function of layer $j$, etc.
\item a utility function $P$ to measure a trading strategy $M_\lambda$'s performance in face of new samples from $p_\mathbf{x} (\mathbf{x})$
\item a trading strategy $M_\lambda$ with parameters $\theta$ identifiable by the optimization of a training criterion, but only determined after a certain $\lambda$ is fixed
\end{itemize}
mathematically, a trading strategy is fine-tuned properly if we are able to identify:
\begin{equation}
\lambda_\ast = \arg\max_{\lambda \in \Lambda} \mathbb{E}_{\mathbf{x} \sim p_\mathbf{x}} [P(\mathbf{x}; M_\lambda(\mathbf{X}^{(train)}))]
\end{equation}
that is, the optimal configuration for $ M_{\lambda_\ast}$ that maximizes the generalization of utility $P$. In reality, since drawing new examples from $p_\mathbf{x}$ is hard, and $\Lambda$ could be extremely large, most of the work in hyperparameter optimization and model validation is done by a double approximation:
\begin{eqnarray}
\lambda_\ast = \arg\max_{\lambda \in \Lambda} \mathbb{E}_{\mathbf{x} \sim p_\mathbf{x}} [P(\mathbf{x}; M_\lambda(\mathbf{X}^{(train)}))] \approx \nonumber \\
\arg\max_{\lambda \in \{\lambda_1, \lambda_2, ..., \lambda_m\}} \mathbb{E}_{\mathbf{x} \sim p_\mathbf{x}} [P(\mathbf{x}; M_\lambda(\mathbf{X}^{(train)}))] \label{approx1} \approx \\
\arg\max_{\lambda \in \{\lambda_1, \lambda_2, ..., \lambda_m\}} \mathrm{mean}_{\mathbf{x} \in \mathbf{X}^{(val)}} [P(\mathbf{x}; M_\lambda(\mathbf{X}^{(train)}))] \label{approx2}
\end{eqnarray}
\noindent The first approximation (eq. \ref{approx1}) discretizes the search space of $\lambda$ (hopefully including $\lambda_\ast$) due to the finite amount of computation. There are better ways to do this search, such as using Evolution Strategies \cite{loshchilov2016cma} or Bayesian Optimization \cite{eggensperger2013towards}, but this is not the focus of our work. The second approximation (eq. \ref{approx2}) replaces the expectation over sampling from $p_\mathbf{x}$ by an average over validation sets $\mathbf{X}^{(val)}$. Creating proper validation sets has been the focus of a substantial amount of research:
\begin{itemize}
\item when $\mathbf{x}_1, ...,\mathbf{x}_n$ are sampled independently and identically distributed (iid), techniques such as k-fold-cross-validation \cite{bergmeir2018note} and iid bootstrap \cite{arlot2010survey} can be employed to create both $\mathbf{X}^{(train)}$ and $\mathbf{X}^{(val)}$.
\item when $\mathbf{x}_1, ...,\mathbf{x}_n$ are not iid, modifications have to be employed in order to create $\mathbf{X}^{(train)}$ and $\mathbf{X}^{(val)}$ adequately. In itself, this is an ongoing research topic, but we can mention block-cross-validation and $hv$-block-cross-validation \cite{racine2000consistent}, sliding window \cite{arlot2010survey}, one-split (single holdout) \cite{arlot2010survey}, and stationary bootstrap \cite{lahiri2013resampling} as potential candidates.
\end{itemize}
\noindent In this work we follow a different thread: we attempt to approximate drawing new examples from $p_\mathbf{x}$ using a cGAN. Algorithm \ref{calibration} outlines the steps followed to fine-tune a trading strategy using a cGAN generator.
\begin{algorithm}
\caption{Fine-tuning trading strategies using cGAN}\label{calibration}
\begin{algorithmic}[1]
\Procedure{cGAN}{$[y_1,...,y_T], params$} \\
\Comment{train and select a cGAN for a time series $y_1,...,y_T$}
\State \textbf{return} $G, D$
\EndProcedure
\Procedure{cGAN-Fine-tuning}{$G$, $[y_1,...,y_T]$, $B$}
\For{$\lambda \gets \lambda_1, ..., \lambda_m$}
\For{$b\gets 1, B$}
\For{$t\gets p+1, T$}
\State sample noise vector $\mathbf{z} \sim p_{\mathbf{z}} (\mathbf{z})$
\State draw $y_t^\ast = G(\mathbf{z} | y_{t-1}, ..., y_{t-p})$
\EndFor
\State train data: $\mathbf{X}^{(train)} := (y_{p+1}^\ast, ..., y_{T-h}^\ast)$
\State fit trading strategy: $ M_\lambda^{(b)} (\mathbf{X}^{(train)})$
\State val data: $\mathbf{X}^{(val)} := (y_{T-h+1}^\ast, ..., y_{T}^\ast)$
\State perf: $s_\lambda^{(b)} = P(\mathbf{X}^{(val)}; M_\lambda^{(b)} (\mathbf{X}^{(train)}))$
\EndFor
\State average: $perf(\lambda) = (1/B) \sum_{b=1}^B s_\lambda^{(b)}$
\EndFor
\State \textbf{return} $arg \max_{\lambda \in \{\lambda_1, \lambda_2, ..., \lambda_m\}} perf(\lambda)$
\EndProcedure
\end{algorithmic}
\end{algorithm}
Hence, we train a cGAN and use the generator $G$ to draw $B$ samples of the time series. For every sample, we perform a one-split to create $\mathbf{X}^{(train)}$ and $\mathbf{X}^{(val)}$, so that we are able to identify the $M_\lambda$ parameters $\theta$ and assess a set of hyperparameters $\lambda$. Following eq. (\ref{approx2}), we return the hyperparameter $\lambda_\ast$ that maximizes the average performance across the cGAN samples. The one-split method has one parameter $h$ which sets the holdout set size; its value is specified in the Experimental Setting section. We compared our methodology's results with other schemes that produce $\mathbf{X}^{(train)}$ and $\mathbf{X}^{(val)}$ from a given dataset. The next subsection outlines another use of the cGAN generator: ensemble modelling.
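In compact form, the procedure can be sketched as follows (illustrative names: \texttt{sample\_path} draws one simulated series from $G$, \texttt{fit} returns a fitted strategy $M_\lambda$, and \texttt{perf} is the utility $P$):
\begin{verbatim}
def cgan_fine_tune(sample_path, hyperparams, fit, perf, B, h):
    # Average the utility of each lambda over B cGAN samples, each
    # sample split one-shot into train / holdout-of-size-h sets.
    avg_perf = {}
    for lam in hyperparams:
        scores = []
        for _ in range(B):
            y_star = sample_path()
            train, val = y_star[:-h], y_star[-h:]
            scores.append(perf(fit(lam, train), val))
        avg_perf[lam] = sum(scores) / B
    return max(avg_perf, key=avg_perf.get)  # best hyperparameter
\end{verbatim}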
\subsection{cGAN: Sampling and Aggregation}
Another potential use of a cGAN is to build an ensemble of trading strategies, that is, using base learners that are individually "weak" (e.g., Classification and Regression Trees), but when aggregated can outcompete other "strong" learners (e.g., Support Vector Machines). Notorious instantiations of this principle are Random Forests, Gradient Boosting Trees, etc., techniques that make use of Bagging, Boosting or Stacking \cite{friedman2001elements,efron2016computer}. In our case, the closest parallel we can draw to cGAN Sampling and Aggregation is Bagging. Algorithm \ref{cganning} shows this method. After having trained and selected a cGAN, we repeatedly draw a cGAN sample and train a base learner; having proceeded this way for $b=1,...,B$ steps, we return the whole set of base models as an ensemble.
\begin{algorithm}
\caption{cGAN Sampling and Aggregation}\label{cganning}
\begin{algorithmic}[1]
\Procedure{cGAN}{$[y_1,...,y_T], params$} \\
\Comment{train and select a cGAN for a time series $y_1,...,y_T$}
\State \textbf{return} $G, D$
\EndProcedure
\Procedure{cGAN-Sample-Agg}{$G$, $[y_1,...,y_T]$, $B$}
\For{$b\gets 1, B$}
\For{$t\gets p+1, T$}
\State sample noise vector $\mathbf{z} \sim p_{\mathbf{z}} (\mathbf{z})$
\State draw $y_t^\ast = G(\mathbf{z} | y_{t-1}, ..., y_{t-p})$
\EndFor
\State train base learner: $ M_\lambda^{(b)} (y_{p+1}^\ast, ..., y_T^\ast)$
\EndFor
\State \textbf{return ensemble} $M_\lambda^{(1)}, ..., M_\lambda^{(B)}$
\EndProcedure
\end{algorithmic}
\end{algorithm}
An argument that is often used to show why this scheme works is the variance reduction lemma \cite{friedman2001elements}: let $\hat{Y}_1, ..., \hat{Y}_B$ be a set of base learners, each one trained using distinct samples drawn repeatedly from the cGAN generator. Then, if we average their predictions and analyse the variance of this average, we have:
\begin{equation}
\mathbb{V}\Big[\frac{1}{B} \sum_{b=1}^{B} \hat{Y}_b\Big] = \frac{1}{B^2} \Big(\sum_{b=1}^B \mathbb{V}[\hat{Y}_b] + 2 \sum_{1\leq b < j \leq B}\mathbb{C}[\hat{Y}_b, \hat{Y}_j]\Big)
\end{equation}
\noindent If we assume, for analytical purposes, that $\mathbb{V}[\hat{Y}_b] = \sigma^2$ for all $b$ and $\mathbb{C}[\hat{Y}_b, \hat{Y}_j] = \rho \sigma^2$ for all $b \neq j$, that is, equal variance $\sigma^2$ and average correlation $\rho$, this expression simplifies to:
\begin{equation}
\mathbb{V}\Big[\frac{1}{B} \sum_{b=1}^{B} \hat{Y}_b\Big] = \sigma^2 \Big(\frac{1}{B} + \frac{B-1}{B} \rho \Big) \leq \sigma^2
\end{equation}
\noindent Hence, we are able to reduce a base learner's variance by averaging many slightly correlated predictors. By the Bias-Variance trade-off \cite{friedman2001elements,efron2016computer}, the ensemble Mean Squared Error tends to be minimized, particularly when low bias and high variance base learners are used, such as deep Decision Trees. Diversification in the pool of predictors is the key factor; commonly it is implemented by taking $B$ iid bootstrap samples from a dataset. When dealing with time series, iid bootstrap can corrupt the autocorrelation structure, and taking $B$ stationary bootstrap samples \cite{lahiri2013resampling} is preferred. Bagging predictors using stationary bootstrap is, therefore, the appropriate benchmark against which to compare cGAN Sampling and Aggregation. The method that is able to produce $\hat{Y}_b$ and $\hat{Y}_j$ with low $\sigma^2$ and as slightly correlated as possible will tend to outperform out-of-sample. A final note: one potential risk is that the cGAN is unable to replicate $p_{data}$ well. Therefore, though the samples are more diverse, they are also more "biased". This can make the base learners miss patterns displayed in the real dataset, or even spot ones that did not exist in the first place.
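The identity above can be checked numerically. The sketch below (with illustrative values of $\sigma^2$, $\rho$ and $B$) simulates equicorrelated predictors and compares the empirical variance of their average with the closed form:
\begin{verbatim}
import numpy as np

sigma2, rho, B = 1.0, 0.2, 50
# Covariance matrix with equal variance and equal pairwise correlation.
cov = sigma2 * (rho * np.ones((B, B)) + (1 - rho) * np.eye(B))
preds = np.random.default_rng(0).multivariate_normal(
    np.zeros(B), cov, size=100_000)
empirical = preds.mean(axis=1).var()
theoretical = sigma2 * (1 / B + (B - 1) / B * rho)
print(empirical, theoretical)  # both close to 0.216
\end{verbatim}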
\section{Experimental Setting}
\subsection{Datasets and Holdout Set}
Table \ref{datasets} presents aggregated statistics associated with the datasets used, whilst Figure \ref{cum_returns_assets_class} illustrates the cumulative returns per asset pool. We have considered three main asset classes during our evaluation: equities, currencies, and fixed income. The data was obtained from Bloomberg, with the full list of 579 asset tickers and periods available at \texttt{\url{https://www.dropbox.com/s/08mjq7z49ybftqg/cgan_data_list.csv?dl=0}}. The typical time series started on 03/01/2000 and finished on 02/03/2018, with an average length of 4565 data points. We converted the raw prices into excess log returns, using a 3-month Libor rate as the benchmark rate.
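For concreteness, the conversion can be sketched as below (our sketch, assuming daily data and a simple de-annualization of the benchmark rate; market conventions may differ):
\begin{verbatim}
import numpy as np

def excess_log_returns(prices, libor_3m_annual, periods_per_year=252):
    # Log return of the asset minus the de-annualized benchmark rate.
    log_ret = np.diff(np.log(np.asarray(prices, dtype=float)))
    bench = np.asarray(libor_3m_annual, dtype=float)[1:] / periods_per_year
    return log_ret - bench
\end{verbatim}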
\begin{table*}[h!]
\centering
\small
\caption{Aggregated statistics of the assets used during our empirical evaluation.} \label{datasets}
\begin{tabular}{p{3cm}|ccccccc}
\hline
\hline
Asset Pool* & N & Avg Return & Volatility & Sharpe ratio & Calmar ratio & Monthly skewness & VaR 95\% \\
\hline
All Assets & 579 & 0.0220 & 0.0453 & 0.4854 & 0.1051 & -1.2169 & -0.9443 \\
World Equity Indices & 18 & 0.0152 & 0.0717 & 0.2114 & 0.0504 & -0.8170 & -1.0048 \\
S\&P 500, FTSE 100 and DJIA Equities & 491 & 0.0251 & 0.0525 & 0.4785 & 0.1127 & -1.1133 & -0.9517 \\
World Swaps Indices & 48 & -0.0191 & 0.0446 & -0.4295 & -0.0458 & 0.0275 & -0.9753 \\
Rates Aggregate Indices & 16 & 0.1220 & 0.0637 & 1.9135 & 0.5504 & -0.8227 & -0.9440 \\
World Currencies & 24 & -0.0025 & 0.0315 & -0.0798 & -0.0157 & -1.0052 & -0.8856 \\
\hline
\hline
\end{tabular}
\\ * Before being averaged, each individual asset was volatility scaled to 10\%
\end{table*}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{cum_returns_assets_class.png}
\caption{Cumulative returns aggregated across asset pool. Before being averaged, each individual asset was volatility scaled to 10\%} \label{cum_returns_assets_class}
\end{figure}
We have established a testing procedure to assess all the different approaches spanned in this research. Figure \ref{onesplit} summarizes the whole procedure. The process starts by splitting a sequence of returns $r_1, ..., r_T$ into a single in-sample/training ($IS$) and out-of-sample/testing/holdout ($OS$) set, with both set sizes determined by the trading horizon $h$. During our experiments we have fixed $h=1260$ days $\approx 5$ years. Every method used or cGAN trained taps only into the $IS$ data. Some methods, such as the other Model Validation schemes, will create training and validation sequences, but all of them based only on the $IS$ set. However, the data used to measure their success is the same: a set of metrics computed on the fixed $OS$ set.
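In code, this split is a one-liner, shown here only to fix the notation:
\begin{verbatim}
def one_split(returns, h=1260):
    # IS = r_1, ..., r_{T-h};  OS = last h = 1260 days (about 5 years).
    return returns[:-h], returns[-h:]
\end{verbatim}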
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{onesplit.png}
\caption{One-split/single holdout approach to assess all approaches in this work. During our experiments we have fixed $h=1260$ days $\approx 5$ years.} \label{onesplit}
\end{figure}
\subsection{Performance Metrics}
This subsection outlines the utility functions employed. We opted for financial performance metrics instead of a generic metric based on prediction error. Low prediction error is a necessary, but not a sufficient, condition to construct alpha-generating trading strategies \cite{acar2002advanced}. In this sense, we mainly reported Sharpe and Calmar ratios: metrics that combine risk and reward measures, and make different investment strategies comparable \cite{young1991calmar,sharpe1994sharpe,eling2007does}. These metrics can be defined as:
\begin{equation}
SR = \frac{\bar{R}^{(M)}}{\sigma_R^{(M)}} \ \ \mathrm{and} \ \ CR = \frac{\bar{R}^{(M)}}{-MDD(R^{(M)})}
\end{equation}
\noindent where $\bar{R}^{(M)}$ is the strategy's average annualized excess return, $\sigma_R^{(M)}$ is its volatility, and $MDD(R^{(M)})$ is the strategy's maximum drawdown. All of them are calculated using the strategy's instantaneous returns $r^{(M)}_t = r_t \cdot f(\hat{r}_t)$ as the building block. In this case, $f(\hat{r}_t)$ is our trading signal, a transformation $f$ of the estimated returns $\hat{r}_t$ outputted by a predictive model. We opted to use the identity function $f(\hat{r}_t) = \hat{r}_t$ so we can avoid having another layer of hyperparameters; in practice a user can select another transformation.
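A sketch of the computation is given below (assuming daily data, the identity signal, and an additive cumulative-sum equity curve for the drawdown; it also assumes at least one drawdown occurred):
\begin{verbatim}
import numpy as np

def sharpe_calmar(r, r_hat, periods_per_year=252):
    strat = np.asarray(r) * np.asarray(r_hat)  # r_t * f(r_hat_t)
    ann_ret = strat.mean() * periods_per_year
    ann_vol = strat.std() * np.sqrt(periods_per_year)
    equity = np.cumsum(strat)
    mdd = np.min(equity - np.maximum.accumulate(equity))  # <= 0
    return ann_ret / ann_vol, ann_ret / -mdd
\end{verbatim}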
Finally, it should be mentioned that most of our reported results are aggregated across all 579 assets. Although the ratios provide a common basis to compare strategies with different risk profiles, the asset pool is still quite diverse and is affected by outliers. Hence, we opted for robust statistics (median, mean absolute deviation, quantiles, ranks, etc.) to compare and statistically test the different hypotheses\footnote{Readers interested in understanding more about the nonparametric statistical tests used in this work -- Friedman, Holm Correction and Wilcoxon rank-sum test -- should consult \cite{derrac2011practical}.}.
\subsection{cGAN Configuration}
Table \ref{cgan-configs} outlines the three different architectures used for $G$ and $D$ of a cGAN. Since the main variation is the number of neurons used, we abbreviated their names across the cases as cGAN-Small, cGAN-Medium and cGAN-Large. This variation will allow us to check how different architecture sizes perform across the benchmarks.
\begin{table}[h!]
\centering
\footnotesize
\caption{Configurations used to train and select the cGANs.} \label{cgan-configs}
\begin{tabular}{p{4cm}p{4cm}}
\hline
\hline
\multicolumn{1}{c}{\textbf{Abbreviation}} & \multicolumn{1}{c}{\textbf{Individual Configuration}} \\
\hline
cGAN-Small ($G$, $D$) & number of neurons = (5, 5) \\
cGAN-Medium ($G$, $D$) & number of neurons = (100, 100) \\
cGAN-Large ($G$, $D$) & number of neurons = (500, 500)\\
\hline
\hline
\multicolumn{1}{c}{\textbf{Other Configuration}} & \multicolumn{1}{c}{\textbf{Values}} \\ \hline
Architecture & Multilayer Perceptron \\
Number of hidden layers & 1 \\
Hidden layer activation function & rectified linear \\
$G$ Output layer activation function & linear \\
$D$ Output layer activation function & sigmoid \\
Epochs & 20000 \\
Batch Size & 252 \\
Solver & Stochastic Gradient Descent \\
Solver Parameters & learning rate = 0.01 \\
Conditional info & $r_{t-1}, r_{t-2}, ..., r_{t-252}$ ($p=252$) \\
Noise prior $p_{\mathbf{z}}(\mathbf{z})$ & $N(0, 1)$ \\
Noise dimension $dim(\mathbf{z})$ & 252 \\
Snapshot frequency ($snap$) & 200 \\
Number of samples for evaluation & $C =$ 50 \\
Input features scaling function & $Z$-score (standardization) \\
Target scaling function & $Z$-score (standardization) \\
\hline
\hline
\end{tabular}
\end{table}
After a few initial runs, we opted for Stochastic Gradient Descent with a learning rate of 0.01 and a batch size of 252 as the optimization algorithm. Input features and target were scaled using a z-score function to ease the training process. We selected the right cGAN to use by taking snapshots every 200 iterations ($snap=200$), drawing and evaluating 50 samples per generator along 20000 epochs. Finally, we used 252 consecutive lags as conditional information (around one year), with the noise prior (Standard Normal - $N(0,1)$) having the same dimension as the conditional input; we did this to increase the chance of creating a more diverse pool of examples, as well as to make it harder for the generator to omit/nullify the weights associated with this part of the input layer.
\subsection{Case I: Combination of Trading Strategies}
\subsubsection{Overview}
This case evaluates the success of different combinations of trading strategies. In this sense, Algorithm \ref{combination-algo} presents the main loop used for cGANs and Stationary Bootstrap. The first step is to resample the actual returns $RS(r_1, ...,r_{T-h})$ using Stationary Bootstrap or cGAN, creating a new sequence of returns $\{r_1^\ast, ...,r_{T-h}^\ast\} = \mathbf{X}^{(train)}$. We then proceed as usual: use $\mathbf{X}^{(train)}$ to train a base learner $M_\lambda^{(b)}$, and add it to the ensemble set $ES$. All of these steps are repeated $B$ times. Finally, we can propagate the $OS$ feature set through the ensemble $ES$, get the aggregated prediction, and compute its performance within this holdout set.
\begin{algorithm}
\caption{Generic loop for combination of strategies}\label{combination-algo}
\begin{algorithmic}[1]
\For{$b\gets 1, B$}
\State $\mathbf{X}^{(train)} := \{r_1^\ast, ...,r_{T-h}^\ast\} = RS(r_1, ...,r_{T-h})$
\State fit trading strategy: $ M_\lambda^{(b)} (\mathbf{X}^{(train)})$
\State add to ensemble: $ES \gets M_\lambda^{(b)} (\mathbf{X}^{(train)})$
\EndFor
\State test ensemble: $P(r_{T-h}, ...,r_{T}; Agg(ES))$
\end{algorithmic}
\end{algorithm}
\subsubsection{Methods and Parameters}
Table \ref{config-combination} presents the instantiations of $RS$, $M_\lambda$, and $Agg$ of Algorithm \ref{combination-algo}. The main competing method is the Stationary Bootstrap; for all $RS$ schemes, we have taken different numbers of resamples $B$, so that we could compare the efficiency for different sizes of the ensemble. We used two main base learners: a deep Regression Tree and a large Multilayer Perceptron. The main idea was to follow the usual principle of using low bias and high variance base learners. We employed a fixed feature set of 252 consecutive lags, and averaged the predictions of all members. Therefore, we can describe the main hypothesis as: {\em which resampling scheme $RS$ is able to create a set of trading strategies $ES =\{M_\lambda^{(1)}, ..., M_\lambda^{(B)}\}$ that in aggregate manages to outcompete during the $OS$ period?}
\begin{table}[h!]
\centering
\footnotesize
\caption{Main configuration used for Case I: Combination of Trading Strategies.} \label{config-combination}
\begin{tabular}{p{4cm}|p{4cm}}
\hline
\hline
\multicolumn{1}{c}{\textbf{Resampling Scheme ($RS$)}} & \multicolumn{1}{c}{\textbf{Parameters}} \\
\hline
Stationary Bootstrap \cite{lahiri2013resampling} & $B = \{20, 100, 500\}$ samples and \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ block size = 20 \\
cGAN-Small & $B = \{20, 100, 500\}$ samples \\
cGAN-Medium & $B = \{20, 100, 500\}$ samples \\
cGAN-Large & $B = \{20, 100, 500\}$ samples \\
\hline
\hline
\multicolumn{1}{c}{\textbf{Trading Strategy ($M_\lambda$)}} & \multicolumn{1}{c}{\textbf{Hyperparameters ($\lambda$)}} \\
\hline
Regression Tree (Reg Tree) \cite{efron2016computer} & unlimited depth, with minimum number of samples required to split an internal node of 2 \\
\hline
Multilayer Perceptron (MLP) \cite{efron2016computer} & number of neurons = $\{200\}$, weight decay = $\{0.00001\}$ and \ \ \ \ \ activation function = $\{tanh\}$ \\
\hline
\hline
\multicolumn{1}{c}{\textbf{Other details}} & \multicolumn{1}{c}{\textbf{Values}} \\
\hline
Number of lags used as features & $r_{t-1}, r_{t-2}, ..., r_{t-252}$ \\
Aggregation function ($Agg$) & Mean \\
\hline
\hline
\end{tabular}
\end{table}
\subsection{Case II: Fine-tuning of Trading Strategies}
\subsubsection{Overview}
This case evaluates the success of different fine-tuning strategies, in particular those that create $\mathbf{X}^{(train)}$ and $\mathbf{X}^{(val)}$ sets for time series. In this sense, Algorithm \ref{finetuning-algo} presents a unified loop used regardless of the methodology employed: from data splitting and hyperparameter selection to performance calculation.
\begin{algorithm}
\caption{Generic loop for fine-tuning of trading strategies}\label{finetuning-algo}
\begin{algorithmic}[1]
\For{$b\gets 1, B$} \Comment{All training and validation folds}
\State $\mathbf{X}^{(train)}, \mathbf{X}^{(val)} := MV(r_1, ...,r_{T-h})$
\For{$\lambda \gets \lambda_1, ..., \lambda_m$}
\State fit trading strategy: $ M_\lambda^{(b)} (\mathbf{X}^{(train)})$
\State check strategy: $s_\lambda^{(b)} = P(\mathbf{X}^{(val)}; M_\lambda^{(b)} (\mathbf{X}^{(train)}))$
\EndFor
\EndFor
\For{$\lambda \gets \lambda_1, ..., \lambda_m$}
\State average across sets: $perf(\lambda) = (1/B) \sum_{b=1}^B s_\lambda^{(b)}$
\EndFor
\State \textbf{opt hyperparam}: $\lambda^\ast := arg \max_{\lambda \in \{\lambda_1, \lambda_2, ..., \lambda_m\}} perf(\lambda)$
\State \textbf{fit trading strategy}: $M_{\lambda^\ast} (\mathbf{X}^{(train)}:= r_1, ...,r_{T-h})$
\State \textbf{test trading strategy}: $P(r_{T-h}, ...,r_{T}; M_{\lambda^\ast} (\mathbf{X}^{(train)}))$
\end{algorithmic}
\end{algorithm}
It starts by splitting the $IS = \{r_1, ...,r_{T-h}\}$ set into $\mathbf{X}^{(train)}$ and $\mathbf{X}^{(val)}$ using a Model Validation methodology $MV$ -- one-split, stationary bootstrap, cGAN, etc. Then, for every hyperparameter $\lambda_1, ..., \lambda_m$, we fit a trading strategy (e.g., a Multilayer Perceptron $M_\lambda^{(b)}$) that aims to predict $r_t$ using lagged information $r_{t-1}, ..., r_{t-p}$ as the feature set. We check the strategy performance $s_\lambda^{(b)}$ using a validation set $\mathbf{X}^{(val)}$ and a utility function $P$ (e.g., Sharpe ratio). This process is repeated for all $B$ training and validation sets. Then, we measure the worthiness of a hyperparameter $\lambda$ (e.g., (number of neurons, weight decay) = (20, 0.05)) by averaging its performance across the validation folds, $perf(\lambda)$; the optimal configuration is the one that maximizes the expected utility. Using this hyperparameter, a final model is fitted and tested on the $OS$ set.
\subsubsection{Methods and Parameters}
Table \ref{config-finetuning} presents the instantiations of $MV$, $M_\lambda$, $\lambda$ and $P$ of Algorithm \ref{finetuning-algo}.
\begin{table}[h!]
\centering
\footnotesize
\caption{Main configuration used for Case II: Fine-tuning of trading strategies.} \label{config-finetuning}
\begin{tabular}{p{4cm}|p{4cm}}
\hline
\hline
\multicolumn{1}{c}{\textbf{Model Validation ($MV$)}} & \multicolumn{1}{c}{\textbf{ Parameters}} \\
\hline
Naive ($\mathbf{X}^{(val)} = \mathbf{X}^{(train)}$) & - \\
Sliding window \cite{arlot2010survey} & stride and window sizes $= 252$ days \\
Block cross-validation \cite{racine2000consistent} & block size $= 252$ days \\
hv-Block cross-validation \cite{racine2000consistent} & block size = 252 days and \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ gap size = 10 days \\
One-split/Holdout/Single split \cite{arlot2010survey} & $\mathbf{X}^{(val)}$ = last 1260 days \\
k-fold cross-validation \cite{bergmeir2018note} & $k$ = 10 folds \\
Stationary Bootstrap \cite{lahiri2013resampling} & $B$ = 100 samples and \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ block size = 20 \\
cGAN-Small & $B=$ 100 samples \\
cGAN-Medium & $B=$ 100 samples \\
cGAN-Large & $B=$ 100 samples \\
\hline
\hline
\multicolumn{1}{c}{\textbf{Trading Strategy ($M_\lambda$)}} & \multicolumn{1}{c}{\textbf{Hyperparameters ($\lambda$)}} \\
\hline
Gradient Boosting Trees (GBT) \cite{efron2016computer} & number of trees = $\{50, 100, 200\}$, learning rate = $\{0.0001, 0.001$, $0.01, 0.1, 1.0\}$ and maximum depth = $\{1, 3, 5\}$ \\
\hline
Multilayer Perceptron (MLP) \cite{efron2016computer} & neurons = $\{20, 50, 100, 200\}$, weight decay = $\{0.001, 0.01$, $0.1, 1.0\}$ and activation function = $\{tanh\}$ \\
\hline
Ridge Regression (Ridge) \cite{efron2016computer} & shrinkage = $\{0.00001, 0.00005$, $0.0001, 0.0005$, $0.001, 0.005$, $0.01, 0.05$, $0.1, 0.5, 1.0\}$ \\
\hline
\hline
\multicolumn{1}{c}{\textbf{Other details}} & \multicolumn{1}{c}{\textbf{Values}} \\
\hline
Number of lags used as features & $r_{t-1}, r_{t-2}, ..., r_{t-252}$ \\
Hyperparameter search & Grid-search or Exhaustive search \\
Utility function $P$ & Sharpe ratio \\
\hline
\hline
\end{tabular}
\end{table}
Apart from the three different architectures of cGANs, the competing methods to cGAN for fine-tuning trading strategies are: naive (training and validation sets are equal), one-split and sliding window; block, hv-block and k-fold cross-validation; and stationary bootstrap. Hence, the main hypothesis is: {\em given a trading strategy $M_\lambda$, which $MV$ mechanism is able to uncover the best configuration $\lambda$ to apply during the $OS$ period?} We search for an answer to this hypothesis using linear and nonlinear trading strategies (Ridge Regression, Gradient Boosting Trees and Multilayer Perceptron). We used the Sharpe ratio as the utility function, grid-search as the hyperparameter search method, and a fixed feature set consisting of 252 consecutive lags.
\section{Case Studies}
\subsection{Case I: Combination of Trading Strategies}
Table \ref{main-results-case2-median} presents the median and mean absolute deviation (MAD - in brackets) results of ensemble strategies on the $OS$ set. Starting with Regression Tree (Reg Tree), we observe that the median Sharpe and Calmar ratios of cGAN-Large were higher across distinct numbers of base learners ($B = 20, 100, 500$). In fact, they were already twice those of Stationary Bootstrap (Stat Boot), even when the number of samples was smaller ($B=20$); after this point some gain can still be obtained, but it seems that most of the diversification effect had already been realised. A different picture can be drawn for Multilayer Perceptron (MLP): in this case Stat Boot produced better median Sharpe and Calmar ratios across the assets, with some exceptions when $B=20$.
Looking into the cGAN results, the configuration cGAN-Large often performed better, whilst at the other end of the spectrum cGAN-Small underperformed. Overall, our results suggest that using a high capacity MLP as the Generator/Discriminator helps to produce a resampling strategy that favours the training of base learners. We also reported the Root Mean Square Error (RMSE), since it is usual to report it for ensemble strategies. Numerically, the values were very similar; nevertheless, cGAN-Medium obtained the best values across $B$ and trading strategies.
\begin{table*}[h!]
\centering
\scriptsize
\caption{Median and Mean Absolute Deviation (MAD) results of Trading and Ensemble Strategies on the $OS$ set.} \label{main-results-case2-median}
\begin{tabular}{c|c|c|cccc}
\hline
\hline
\multirow{2}{*}{Trad Strat} & \multirow{2}{*}{Metric} & \multirow{2}{*}{$B$} & \multicolumn{4}{c}{Ensemble Strategy} \\
& & & Stat Boot & cGAN-Small & cGAN-Medium & cGAN-Large \\
\hline
\multirow{9}{*}{Reg Tree} & \multirow{3}{*}{Sharpe} & 20 & 0.042560 (0.380039) & 0.053867 (0.378896) & 0.044741 (0.380228) & \textbf{0.080540} (0.360695) \\
& & 100 & 0.062837 (0.378920) & 0.058820 (0.387749) & 0.030588 (0.390575) & \textbf{0.086423} (0.406171) \\
& & 500 & 0.074116 (0.397212) & 0.067905 (0.392788) & 0.072071 (0.392382) & \textbf{0.098094} (0.424621) \\ \cline{2-7}
& \multirow{3}{*}{Calmar} & 20 & 0.019442 (0.230641) & 0.022619 (0.201044) & 0.018987 (0.200625) & \textbf{0.035473} (0.191353) \\
& & 100 & 0.027235 (0.241023) & 0.024254 (0.209783) & 0.011890 (0.201523) & \textbf{0.036523} (0.239046) \\
& & 500 & 0.034422 (0.266419) & 0.031174 (0.212710) & 0.032761 (0.221514) & \textbf{0.042232} (0.251194) \\ \cline{2-7}
& \multirow{3}{*}{RMSE} & 20 & 0.014397 (0.005570) & 0.014561 (0.005604) & \textbf{0.014289} (0.005414) & 0.014411 (0.005432) \\
& & 100 & 0.014096 (0.005486) & 0.014281 (0.005545) & \textbf{0.013988} (0.005357) & 0.014099 (0.005373) \\
& & 500 & 0.014035 (0.005470) & 0.014203 (0.005531) & \textbf{0.013912} (0.005346) & 0.014033 (0.005361) \\ \cline{2-7}
\hline
\multirow{9}{*}{MLP} & \multirow{3}{*}{Sharpe} & 20 & 0.080722 (0.390515) & 0.079428 (0.416847) & \textbf{0.087960} (0.393913) & 0.069866 (0.398197) \\
& & 100 & \textbf{0.097576} (0.382028) & 0.063012 (0.415537) & 0.091344 (0.397506) & 0.087216 (0.414697) \\
& & 500 & \textbf{0.092262} (0.390161) & 0.059344 (0.415700) & 0.073652 (0.389588) & 0.085333 (0.414096) \\ \cline{2-7}
& \multirow{3}{*}{Calmar} & 20 & 0.035525 (0.223141) & 0.030805 (0.217727) & \textbf{0.037877} (0.219139) & 0.031145 (0.214533) \\
& & 100 & \textbf{0.045916} (0.227827) & 0.023479 (0.223602) & 0.040718 (0.217648) & 0.040572 (0.223359) \\
& & 500 & \textbf{0.038678} (0.237459) & 0.024014 (0.225413) & 0.035688 (0.215691) & 0.035885 (0.222552) \\ \cline{2-7}
& \multirow{3}{*}{RMSE} & 20 & 0.014030 (0.005416) & 0.013999 (0.005408) & \textbf{0.013910} (0.005345) & 0.014055 (0.005369) \\
& & 100 & 0.013924 (0.005399) & 0.013973 (0.005403) & \textbf{0.013878} (0.005339) & 0.014028 (0.005363) \\
& & 500 & 0.013924 (0.005400) & 0.013974 (0.005402) & \textbf{0.013887} (0.005337) & 0.014033 (0.005362) \\
\hline
\hline
\end{tabular}
\end{table*}
\begin{figure}[h!]
\centering
\subfloat[]{\includegraphics[width=\columnwidth]{cganlarge_statboot_scatter_sharpe_tree.png}} \\
\subfloat[]{\includegraphics[width=\columnwidth]{cganlarge_statboot_scatter_rmse_tree.png}}
\caption{Scatterplot of Sharpe ratio (a) and RMSE (b) values obtained using cGAN-Large and Stat Boot across 579 assets.} \label{cgan-statboot-metrics}
\end{figure}
Except for RMSE, MAD values were high for Sharpe and Calmar ratios across the different combinations. Hence, in aggregate, any numerical difference can become imperceptible through a statistical lens. Table \ref{main-results-case2-pvalues} shows whether some of the differences raised about the values in Table \ref{main-results-case2-median}, only between cGAN-Large and Stat Boot, are statistically significant or not. Overall, apart from RMSE, the p-values of the Wilcoxon rank-sum test were in general above 0.05 (the significance level adopted), meaning that the differences observed were not substantial across models, number of samples, and Sharpe or Calmar ratios.
\begin{table}[h!]
\centering
\footnotesize
\caption{p-values of Wilcoxon rank-sum test comparing cGAN-Large with Stationary Bootstrap results.} \label{main-results-case2-pvalues}
\begin{tabular}{c|c|ccc}
\hline
\hline
\multirow{2}{*}{Trad Strat} & \multirow{2}{*}{Metric} & \multicolumn{3}{c}{$B$} \\
& & 20 & 100 & 500 \\
\hline
\multirow{3}{*}{Reg Tree} & Sharpe & 0.2440 & 0.3432 & 0.7832 \\
& Calmar & 0.8495 & 0.2958 & 0.9106 \\
& RMSE & \textbf{0.0456} & \textbf{$<$ 0.0001} & \textbf{$<$ 0.0001} \\
\hline
\multirow{3}{*}{MLP} & Sharpe & 0.7941 & 0.3994 & 0.4295 \\
& Calmar & 0.8119 & 0.4805 & 0.4053 \\
& RMSE & 0.7973 & \textbf{0.0043} & \textbf{0.0007} \\
\hline
\hline
\end{tabular}
\end{table}
In principle, so far it seems that there is little difference between cGAN-Large and Stat Boot across models, metrics and number of samples. However, this equivalence in aggregate often does not manifest itself at the micro level. Figure \ref{cgan-statboot-metrics} presents this analysis: plotting the Sharpe ratio and RMSE obtained for every asset using cGAN-Large and Stat Boot ($B=500$). For RMSE there is an almost perfect correlation -- when cGAN-Large thrives, Stationary Bootstrap also does, with the converse also holding. However, a different phenomenon occurs for the Sharpe ratio: apart from a few outliers that skewed the correlation (0.407733), when Stat Boot fails to deliver reasonable results, cGAN-Large can provide a feasible alternative for combining weak signals. This complementarity, not perceived when looked at in aggregate, can be an asset for the quantitative analyst in the pursuit of alpha generating strategies.
To give a more concrete example of this complementarity, Figure \ref{spx-combination} presents the main findings obtained for the SPX Index. Figures \ref{spx-combination}a and \ref{spx-combination}b show cGAN-Large as the ensemble strategy using Regression Tree and Multilayer Perceptron as the base learners, respectively. Regression Trees seemed more successful, obtaining Sharpe and Calmar ratios of approximately 1.00 and 0.75; but for both methods, cGAN-Large managed to produce positive Sharpe and Calmar ratios. Conversely, Stat Boot failed in both cases, scoring a Sharpe ratio near 0.0 (Figures \ref{spx-combination}c and \ref{spx-combination}d). This outperformance manifested in a substantial gap between the cumulative returns of the different approaches (Figures \ref{spx-combination}g and \ref{spx-combination}h). Finally, although both methods similarly minimized RMSE (Figures \ref{spx-combination}e and \ref{spx-combination}f), this minimization manifested itself very differently from a Sharpe/Calmar ratio point of view. As a side note, this suggests that minimizing RMSE (a predictive metric) is not an ideal criterion when the Sharpe ratio (a financial metric) is the metric that will decide which strategy is to be implemented.
\begin{figure*}[h!]
\centering
\textbf{Regression Tree \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ Multilayer Perceptron}\par
\subfloat[]{\includegraphics[width=.85\columnwidth]{cganlarge_spxindex_sharpecalmar_tree.png}} \subfloat[]{\includegraphics[width=.85\columnwidth]{cganlarge_spxindex_sharpecalmar_mlp.png}} \\
\subfloat[]{\includegraphics[width=.85\columnwidth]{statboot_spxindex_sharpecalmar_tree.png}} \subfloat[]{\includegraphics[width=.85\columnwidth]{statboot_spxindex_sharpecalmar_mlp.png}} \\
\subfloat[]{\includegraphics[width=.85\columnwidth]{cganlarge_statboot_spxindex_rmse_tree.png}} \subfloat[]{\includegraphics[width=.85\columnwidth]{cganlarge_statboot_spxindex_rmse_mlp.png}} \\
\subfloat[]{\includegraphics[width=.85\columnwidth]{spx_trees_comb500_cumreturns.png}} \subfloat[]{\includegraphics[width=.85\columnwidth]{spx_mlp_comb500_cumreturns.png}}
\caption{Main findings for SPX Index; (a-d) Sharpe and Calmar ratios per additional unit in the ensemble; (a,c) Regression Trees built on cGAN-Large and Stat Boot samples, respectively; (b,d) Multilayer Perceptron built on cGAN-Large and Stat Boot samples, respectively. Figures (e,f) outline the RMSE of both approaches per additional unit in the ensemble; (e) Regression Tree, (f) Multilayer Perceptron. Figures (g,h) present the cumulative returns for $B=500$ using (g) Regression Tree and (h) Multilayer Perceptron (targeting 10 \% of volatility per year).}
\label{spx-combination}
\end{figure*}
\subsection{Case II: Fine-tuning of Trading Strategies}
Table \ref{main-results-case1-median} presents the quantiles of Sharpe and Calmar ratios on the $OS$ set across the 579 assets for different trading strategies and model validation schemes. Starting with Ridge, we can spot that there are not many differences between the model validation schemes, with Naive yielding the worst median (50\%) value (0.121), and hv-Block, Block and cGAN-Medium the best medians (0.138); the same can be said with respect to Calmar ratios.
\begin{table*}[h!]
\tiny
\centering
\caption{Quantiles of Sharpe and Calmar ratios in the $OS$ set across the 579 assets for different trading strategies and model validation schemes.} \label{main-results-case1-median}
\begin{tabular}{c|c|c|cccccccccc}
\hline
\hline
\multirow{2}{*}{Trad Strat} & \multirow{2}{*}{Metric} & \multirow{2}{*}{Quant} & \multicolumn{10}{c}{Model Validation Scheme} \\
& & & Naive & One-Split & Sliding & hv-Block & Block & k-Fold & Stat Boot & cGAN-Small & cGAN-Medium & cGAN-Large \\
\hline
\multirow{10}{*}{Ridge} & \multirow{5}{*}{Sharpe} & 0\% & -1.594 & -1.594 & -1.594 & -1.594 & -1.594 & -1.493 & -1.594 & -1.493 & -1.493 & -1.493 \\
& & 25\% & -0.215 & -0.212 & -0.213 & -0.201 & -0.201 & -0.214 & -0.213 & -0.197 & -0.197 & -0.197 \\
& & 50\% & 0.121 & 0.134 & 0.122 & 0.138 & 0.138 & 0.135 & 0.135 & 0.135 & 0.138 & 0.136 \\
& & 75\% & 0.403 & 0.418 & 0.409 & 0.409 & 0.409 & 0.424 & 0.410 & 0.424 & 0.419 & 0.419 \\
& & 100\% & 3.156 & 3.177 & 3.238 & 3.226 & 3.226 & 3.177 & 3.203 & 3.226 & 3.226 & 3.226 \\ \cline{2-13}
& \multirow{5}{*}{Calmar} & 0\% & -0.290 & -0.290 & -0.218 & -0.290 & -0.290 & -0.290 & -0.209 & -0.218 & -0.203 & -0.203 \\
& & 25\% & -0.075 & -0.071 & -0.071 & -0.071 & -0.071 & -0.071 & -0.071 & -0.071 & -0.071 & -0.071 \\
& & 50\% & 0.055 & 0.063 & 0.060 & 0.064 & 0.064 & 0.063 & 0.064 & 0.063 & 0.064 & 0.064 \\
& & 75\% & 0.236 & 0.251 & 0.232 & 0.239 & 0.241 & 0.241 & 0.241 & 0.241 & 0.244 & 0.244 \\
& & 100\% & 5.074 & 4.561 & 4.561 & 4.561 & 4.561 & 4.561 & 4.561 & 4.561 & 4.561 & 4.561 \\
\hline
\multirow{10}{*}{MLP} & \multirow{5}{*}{Sharpe} & 0\% & -1.362 & -1.583 & -1.554 & -1.291 & -1.297 & -1.389 & -1.062 & -1.150 & -1.212 & -1.176 \\
& & 25\% & -0.310 & -0.280 & -0.263 & -0.246 & -0.254 & -0.207 & -0.226 & -0.290 & -0.241 & -0.247 \\
& & 50\% & 0.020 & 0.061 & 0.073 & 0.086 & 0.097 & 0.112 & 0.115 & 0.059 & 0.061 & 0.067 \\
& & 75\% & 0.352 & 0.390 & 0.400 & 0.396 & 0.416 & 0.406 & 0.429 & 0.380 & 0.396 & 0.416 \\
& & 100\% & 1.249 & 1.579 & 1.390 & 1.564 & 1.903 & 1.663 & 1.896 & 1.733 & 1.757 & 1.464 \\ \cline{2-13}
& \multirow{5}{*}{Calmar} & 0\% & -0.330 & -0.324 & -0.254 & -0.264 & -0.266 & -0.328 & -0.213 & -0.286 & -0.238 & -0.276 \\
& & 25\% & -0.099 & -0.095 & -0.089 & -0.081 & -0.090 & -0.073 & -0.073 & -0.093 & -0.087 & -0.093 \\
& & 50\% & 0.008 & 0.026 & 0.031 & 0.039 & 0.046 & 0.050 & 0.056 & 0.022 & 0.028 & 0.032 \\
& & 75\% & 0.194 & 0.213 & 0.212 & 0.214 & 0.242 & 0.229 & 0.235 & 0.211 & 0.229 & 0.224 \\
& & 100\% & 1.565 & 1.981 & 1.738 & 2.399 & 2.381 & 1.724 & 1.586 & 2.209 & 1.554 & 1.579 \\
\hline
\multirow{10}{*}{GBT} & \multirow{5}{*}{Sharpe} & 0\% & -1.197 & -1.171 & -1.155 & -1.289 & -1.143 & -1.038 & -1.073 & -1.157 & -1.275 & -1.157 \\
& & 25\% & -0.233 & -0.192 & -0.208 & -0.167 & -0.143 & -0.212 & -0.214 & -0.239 & -0.209 & -0.224 \\
& & 50\% & 0.088 & 0.159 & 0.142 & 0.175 & 0.211 & 0.174 & 0.162 & 0.123 & 0.150 & 0.133 \\
& & 75\% & 0.391 & 0.503 & 0.488 & 0.537 & 0.546 & 0.534 & 0.527 & 0.446 & 0.531 & 0.473 \\
& & 100\% & 5.174 & 5.174 & 4.411 & 5.174 & 5.174 & 5.174 & 5.174 & 3.443 & 1.929 & 5.174 \\ \cline{2-13}
& \multirow{5}{*}{Calmar} & 0\% & -0.665 & -0.218 & -0.251 & -0.300 & -0.346 & -0.393 & -0.198 & -0.246 & -0.471 & -0.222 \\
& & 25\% & -0.080 & -0.070 & -0.078 & -0.058 & -0.060 & -0.076 & -0.077 & -0.084 & -0.076 & -0.084 \\
& & 50\% & 0.038 & 0.071 & 0.066 & 0.081 & 0.105 & 0.077 & 0.072 & 0.053 & 0.067 & 0.064 \\
& & 75\% & 0.232 & 0.325 & 0.299 & 0.348 & 0.385 & 0.331 & 0.333 & 0.295 & 0.325 & 0.306 \\
& & 100\% & 7.492 & 7.492 & 6.454 & 7.492 & 7.492 & 7.492 & 7.492 & 3.782 & 2.845 & 7.492 \\
\hline
\hline
\end{tabular}
\end{table*}
Regarding the Multilayer Perceptron (MLP) Sharpe ratio results, we can spot a bigger contrast in median terms: Naive fared worst as expected (0.020), with Stationary Bootstrap (Stat Boot) obtaining a median value sixfold bigger than Naive's. In this case, cGAN-Large (0.067) fared best across the cGANs, but still far from the top median values. For Gradient Boosting Trees (GBT), cGAN-Medium was the best of all cGAN approaches, obtaining better results than the Sliding window scheme. However, these figures fell short of those of the Block and hv-Block schemes, faring 0.211 and 0.175 median Sharpe ratios, respectively.
So far we have focused mainly on the median values, and though we can spot some discrepancies across the methods, these become small when we take into account the average interquartile range\footnote{A measure of dispersion calculated by taking the difference between the 3rd quartile (75\%) and the 1st quartile (25\%).} of 0.4 units of Sharpe ratio, around 3-5 times the size of the median values. In this sense, to statistically assess whether some of the observed differences are substantial, Table \ref{RankFriedmanHolmIR} presents a statistical analysis using the average ranks\footnote{When we rank the model validation schemes for a given asset, we sort all of them in such a way that the best performer is in the first place (receiving a value equal to 1), the second best is positioned in the second rank (receiving a value equal to 2), and so on. We can repeat this process for all assets and compute metrics, such as the average rank (e.g., 1.35 means that a particular scheme was placed mostly near the first place).}, the Friedman $\chi^2$ test and the Holm correction for multiple hypothesis testing of the different model validation schemes for Ridge, MLP and GBT based on the Sharpe ratio results.
\begin{table*}[h!]
\centering
\tiny
\caption{Average ranks, Friedman and Holm post-hoc statistical tests of the Sharpe ratio for Ridge, MLP and GBT.} \label{RankFriedmanHolmIR}
\begin{tabular}{ccc|ccc|ccc|c}
\hline
\hline
\multicolumn{3}{c}{Ridge-Sharpe} & \multicolumn{3}{c}{MLP-Sharpe} & \multicolumn{3}{c}{GBT-Sharpe} & \\
\hline
Method & Avg Rank & p-value & Method & Avg Rank & p-value & Method & Avg Rank & p-value & Holm Correction \\
\hline
\textbf{Naive} & 5.700 & 0.0022 & \textbf{Naive} & 5.900 & $<$ 0.0001 & \textbf{Naive} & 6.074 & $<$ 0.0001 & 0.0055 \\
Sliding & 5.630 & 0.0081 & \textbf{One-Split} & 5.718 & 0.0004 & \textbf{cGAN-Large} & 5.642 & $<$ 0.0001 & 0.0062 \\
Stat Boot & 5.605 & 0.0121 & cGAN-Small & 5.549 & 0.0114 & \textbf{Sliding} & 5.628 & $<$ 0.0001 & 0.0071 \\
k-Fold & 5.592 & 0.0148 & cGAN-Medium & 5.497 & 0.0254 & \textbf{cGAN-Small} & 5.597 & $<$ 0.0001 & 0.0083 \\
One-Split & 5.510 & 0.0481 & Sliding & 5.489 & 0.0287 & \textbf{cGAN-Medium} & 5.431 & 0.0032 & 0.0100 \\
cGAN-Small & 5.503 & 0.0531 & hv-Block & 5.449 & 0.0492 & \textbf{k-Fold} & 5.427 & 0.0034 & 0.0125 \\
cGAN-Large & 5.487 & 0.0644 & Block & 5.444 & 0.0525 & \textbf{Stat Boot} & 5.427 & 0.0034 & 0.0167 \\
cGAN-Medium & 5.477 & 0.0729 & cGAN-Large & 5.411 & 0.0783 & \textbf{One-Split} & 5.415 & 0.0043 & 0.0250 \\
hv-Block & 5.253 & 0.4743 & k-Fold & 5.359 & 0.1368 & \textbf{Block} & 5.359 & 0.0112 & 0.0500 \\
Block & 5.243 & - & Stat Boot & 5.183 & - & hv-Block & 4.992 & - & - \\
\hline
Friedman $\chi^2$ & 5715.01 & \textbf{$<$0.0001} & Friedman $\chi^2$ & 2040.8 & \textbf{$<$0.0001} & Friedman $\chi^2$ & 2865.34 & \textbf{$<$0.0001} & \\
\hline
\hline
\end{tabular}
\end{table*}
For Ridge Regression, the lowest (best) rank was obtained by Block cross-validation (Block), whilst the worst by Naive (5.700). The cGAN methods were consecutively in the third, fourth and fifth places, beating other methods such as Stat Boot, k-fold cross-validation, etc. The Friedman $\chi^2$ statistic of 5715.01 signals that the hypothesis of equal average rank across the approaches is not statistically credible (p-value $<$ 0.0001). By running a pairwise comparison between Block and the remaining approaches, we can spot that only Naive stood out as a substantially worse approach, even when we control for multiple hypothesis testing (check the Holm Correction column for the adjusted level of significance).
With respect to the MLP ranking results, Naive performed worst as well (5.900), with Stat Boot being the top scheme in this case (5.183); cGAN-Large was in the third position, comparing favourably to the other cGAN configurations, as well as hv-Block, Sliding window, etc. Apart from the Naive and One-Split/Single holdout schemes, all the remaining approaches were not statistically different from Stat Boot. In the GBT case, we can spot that hv-Block outperformed all approaches, with the cGANs not delivering reasonable results in this case.
Overall, apart from a few analyses and cases (e.g., GBT and the Naive method), in aggregate the model validation schemes do not appear to be significantly distinct from each other. This can be interpreted as evidence that cGAN is a viable procedure to be part of the fine-tuning pipeline, since its results are statistically indistinguishable from well established methodologies. When we drill down into the results, in particular the Sharpe ratios of the different approaches, we can spot a low correlation among the validation schemes; Figure \ref{cgan-corrmatrices}\footnote{We decided to omit Ridge since all of the correlations were above 0.8.} presents correlation matrices based on Sharpe ratios of model validation schemes for MLP (a) and GBT (b).
Though in median and rank terms the strategies look similar, at the micro-level they appear quite the opposite, particularly in the MLP case. Even the cGANs provide distinct Sharpe ratios, showing the importance of the underlying configuration of the Generator/Discriminator. In general, this outlines that distinct model validation schemes arrive at different hyperparameter combinations, incurring distinct values for Sharpe and Calmar ratios on the $OS$ set. Hence, it may be that in some assets cGAN outcompeted the remaining model validation schemes. To exemplify that, Table \ref{special-cases-mlp} presents a sample of Sharpe ratio results on the $OS$ set for cases where cGAN-Large outcompeted the other methods.
\begin{table}[h!]
\centering
\tiny
\caption{A sample of Sharpe ratio results in the $OS$ set for cases where cGAN-Large outcompeted the other methods.} \label{special-cases-mlp}
\begin{tabular}{c|ccccc}
\hline
\hline
\multirow{3}{*}{MV Scheme} & ADSWAP2Q & CADJPY & ED UN & NKY & NZDUSD \\
& CMPN & BGN & Equity & Index & BGN \\
& Curncy & Curncy & & & Curncy \\
\hline
Naive & -0.3477 & 0.2009 & -0.0343 & -0.5785 & 0.4503 \\
One-Split & -0.0184 & -0.4218 & 0.0108 & 0.0520 & -0.0809 \\
Sliding & 0.5600 & -0.8328 & -0.2270 & 0.1083 & 0.1034 \\
k-Fold & 0.0505 & -0.2861 & 0.4068 & -0.3347 & -0.3104 \\
Block & 0.4344 & 0.2219 & -0.0971 & -0.8870 & 0.1215 \\
hv-Block & 0.1120 & -0.3932 & 0.5364 & -0.0244 & -0.2720 \\
Stat Boot & 0.4296 & -0.1950 & 0.3107 & -0.3068 & -0.2616 \\
cGAN-Small & 0.5146 & -0.6222 & -0.0980 & 0.1095 & 0.3059 \\
cGAN-Medium & 0.8400 & -0.0901 & 0.3443 & 0.0884 & 0.0582 \\
\textbf{cGAN-Large} & 1.4207 & 0.5885 & 1.1224 & 0.2263 & 0.6703 \\
\hline
\hline
\end{tabular}
\end{table}
\begin{figure}[H]
\centering
\subfloat[MLP]{\includegraphics[width=\columnwidth]{corr_mlp_sharpe.png}} \\
\subfloat[GBT]{\includegraphics[width=\columnwidth]{corr_gbt_sharpe.png}}
\caption{Correlation matrices based on Sharpe ratios of model validation schemes for MLP (a) and GBT (b).} \label{cgan-corrmatrices}
\end{figure}
We can spot a few instances in which cGAN-Large fared substantially better, such as a 2y Australian Dollar Swap, the New Zealand Dollar vs US Dollar currency pair, and Consolidated Edison Inc. equity. This set of results suggests that cGAN-Large is a viable alternative for fine-tuning machine learning models when other methodologies provide poor results, and it should be considered in the portfolio of different validation schemes alongside the distinct trading strategy models.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{spx_mlp_mvschemes_cumreturns.png}
\caption{SPX Index cumulative returns in the $OS$ set for different model validation schemes using MLP as the trading strategy. cGAN-Large and hv-Block found out the same hyperparameters, therefore obtaining similar profiles.} \label{spx-mlp-mvschemes}
\end{figure}
Finally, Figure \ref{spx-mlp-mvschemes} outlines the cumulative returns of the SPX Index for a few of the different model validation schemes using MLP as the trading strategy. In this case, Stat Boot and One-Split were unable to produce a profit after five years of trading, whilst cGAN-Large and hv-Block produced around 10\% of return on a given initial amount of investment (they both found the same hyperparameters, therefore obtaining similar profiles). This is another example that demonstrates the relevance of having a set of model assessment schemes, similar to the more common defensive posture of having a portfolio of trading strategies/models and hyperparameter optimization schemes.
\section{Conclusion}
This work has proposed the use of Conditional Generative Adversarial Networks (cGANs) for trading strategy calibration and aggregation. This emerging technique can have an impact on several aspects of trading strategies, specifically fine-tuning and forming ensembles. We can also list a few advantages of such a method: (i) generating more diverse training and testing sets, compared to traditional resampling techniques; (ii) the ability to draw samples specifically about stressful events, ideal for model checking and stress testing; and (iii) providing a level of anonymization to the dataset, differently from other techniques that (re)shuffle/resample data.
The price paid is having to fit this generative model for a given time series. To this purpose, we provided a full methodology on: (i) the training and selection of a cGAN for time series generation; (ii) how each sample is used for strategy calibration; and (iii) how all generated samples can be used for ensemble modelling. To provide evidence that our approach is well grounded, we designed an experiment encompassing 579 assets, tested multiple trading strategies, and analysed different capacities for the Generator/Discriminator. In summary, our main contributions were to show that our procedure to train and select cGANs is sound, as well as able to obtain competitive results against traditional methods for fine-tuning and ensemble modelling.
Being more specific, in Case Study I: Combination of Trading Strategies, we compared cGAN Sampling and Aggregation with Stationary Bootstrap. Our results suggest that both approaches are equivalent in aggregate, with a non-statistically significant advantage for cGAN when using Regression Trees, and for Stationary Bootstrap when using a shallow Multilayer Perceptron. But when Bagging via Stationary Bootstrap fails to perform properly, it is possible to use cGAN Sampling and Aggregation as a tool to combine weak signals into alpha generating strategies; the SPX Index was an example where cGAN outcompeted Stationary Bootstrap by a wide margin.
In relation to Case Study II: Fine-tuning of Trading Strategies, we compared cGAN with a wide range of model validation strategies. All of these were techniques designed to handle time series scenarios: they ranged from window-based methods to shuffling and reshuffling of a time series. We have evidence that cGANs can be used for model tuning, bearing better results in cases where traditional schemes fail. A side outcome of our work is the wealth of results and comparisons: to the best of our knowledge, most of the applied model validation strategies had not yet been cross compared using real datasets and different models.
Finally, our work also opens new avenues for future investigation. We list a few potential extensions and directions for further research:
\begin{itemize}
\item cGANs for stress testing: a stress test examines the potential impact of a hypothetical adverse scenario on the health of a financial institution, or even a trading strategy. In doing so, stress tests allow the quantitative strategist to assess a strategy's resilience to a range of adverse shocks and ensure it is sufficiently hedged to withstand those shocks. A proper benchmark can be model-based bootstrap, since it allows conditional variables, which facilitates the process of generating resamples of crisis events.
\item Selection metrics for cGANs: we adopted the Root Mean Square Error as the loss function between the generator samples and the actual data; however, nothing prevents the user from using another type of loss function. It could be a metric that takes into account several moments and cross-moments of a time series. With financial time series, taking into account stylized facts \cite{cont2001empirical} can be a feasible alternative to produce samples that are more meaningful and resemble a financial asset's returns more closely.
\item Combining cGAN with Stationary Bootstrap: in our results, in particular Figure \ref{cgan-statboot-metrics}, we observed a low correlation between the Sharpe ratios obtained by the two approaches. This implies that a mixed approach, that is, combining resamples from cGAN and Stationary Bootstrap, can yield better results than opting for a single approach.
\item Extensions and other applications: a natural extension is to consider predicting multiple steps ahead directly, or modelling multiple financial time series. Both can improve our results, as well as reduce the time to train and select a cGAN. Another extension is to consider other architectures, such as AdaGANs \cite{tolstikhin2017adagan} or Ensembles of GANs \cite{wang2016ensembles}. Other applications include the fine-tuning and combination of time series forecasting methods; good benchmarks are the M3 and M4 competitions \cite{makridakis2000m3,makridakis2018m4}, which involve a large number of time series as well as results from a wide array of forecasting methods.
\end{itemize}
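To make the mixed-resampling idea above concrete, the following is a minimal sketch; the stationary bootstrap follows Politis and Romano, while \texttt{cgan\_sample} is a hypothetical stand-in for a trained cGAN generator (the function name and the mean block length $1/p$ are illustrative assumptions, not part of our implementation):
\begin{verbatim}
import numpy as np

def stationary_bootstrap(x, p, rng):
    # One stationary-bootstrap resample: blocks of geometric
    # mean length 1/p, wrapping around the end of the series.
    n = len(x)
    out = np.empty(n)
    idx = rng.integers(n)
    for t in range(n):
        out[t] = x[idx]
        # with probability p start a new block, else extend it
        idx = rng.integers(n) if rng.random() < p else (idx + 1) % n
    return out

def mixed_resamples(x, n_boot, n_cgan, cgan_sample, p=0.1, seed=0):
    # Pool resamples from both generators for bagging.
    rng = np.random.default_rng(seed)
    boot = [stationary_bootstrap(x, p, rng) for _ in range(n_boot)]
    cgan = [cgan_sample(x) for _ in range(n_cgan)]  # hypothetical
    return boot + cgan
\end{verbatim}
A base learner can then be fitted on each element of the pooled list and the resulting signals aggregated, exactly as in the single-generator pipelines compared above.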
\section*{Acknowledgment}
The authors would like to thank EO for his insightful suggestions and critical comments on our work. Adriano Koshiyama would like to acknowledge the National Research Council of Brazil for his PhD scholarship, and The Alan Turing Institute for providing the infrastructure and environment to conclude this work.
\newpage
\balance
\bibliographystyle{plain}
|
2,869,038,155,522 | arxiv | \section{Introduction}
Ultra-reliable and low-latency communication (URLLC) is one of the generic service categories required to be supported in fifth-generation (5G) networks \cite{durisi2016toward,bennis2018ultrareliable,zhang2021ris}.
As a result, it has attracted significant interest, since it enables several innovative use cases, especially in industrial production, such as remote operation of heavy industrial machines and factory automation \cite{simsek20165g,liu2018tractable}.
However, compared with conventional communication systems, the achievable rate under URLLC is quite different, since a short blocklength is adopted to reduce the latency, such that the classical Shannon-sense capacity no longer holds.
Specifically, the URLLC rate is a complicated function of the transmission power, the precoding vector, the bandwidth, the transmission time, and the decoding error probability \cite{polyanskiy2010channel}.
Indeed, guaranteeing URLLC represents unique challenges to resource allocation design due to the non-convexity introduced by the finite blocklength.
In the literature, much attention has been devoted to designing effective resource allocation algorithms that support URLLC \cite{she2018joint,nasir2020resource,he2021beamforming}.
However, the systems considered in these works are all cellular networks and their performance is known to be limited by severe inter-cell interference.
Cell-free massive multiple-input multiple-output (MIMO) architecture is a new promising solution to overcome the issue discussed above \cite{zhang2021local,zhang2021improving,zheng2022cell,9737367}.
It reaps the advantages of massive MIMO and network MIMO, since massive distributed access points (APs) facilitate coherent signal transmission to serve all the users without any cell boundaries \cite{bjornson2019making,ngo2017cell,zhang2020prospective}.
However, the current literature on resource allocation in cell-free massive MIMO systems for URLLC is still limited.
For example, in \cite{nasir2021cell}, the authors applied the path-following algorithm (PFA) for optimizing the power allocation with a special class of conjugate beamforming to maximize the users' minimum URLLC rate and the energy efficiency.
However, an adaptive and optimized precoding design at the APs is generally more effective than a fixed one.
Besides, in \cite{lancho2022cell}, the upper bounds of the uplink and downlink decoding error probabilities (DEPs) were derived by using the saddlepoint method to support URLLC.
While a closed-form expression of the DEP can characterize the performance, it is generally intractable for the design of cooperatively efficient resource allocation.
As such, there is an emerging need for designing the precoding with the performance metric of the URLLC rate.
Motivated by the above discussion, the PFA-based precoding design for maximizing the users' minimum URLLC rate is studied in this correspondence.
First, a PFA-based centralized precoding design is proposed which generates a sequence of feasible points and converges to a locally optimal solution of the design optimization problem.
Second, we propose a decentralized PFA-based precoding design by dividing the APs into several non-overlapping cooperative clusters, in which the APs only share the data and instantaneous channel state information (CSI) within each cluster to design the precoding vectors, thereby reducing the computational complexity. Simulation results show that, compared with the centralized precoding, the decentralized PFA precoding can achieve 80\% of the 95\%-likely URLLC rate and 89\% of the average URLLC rate with only 12\% of the computational complexity of its centralized counterpart. We also investigate the impact of the precoding schemes, the length of the transmission duration, and the size of the AP cluster on the URLLC rate via extensive simulations.
\section{System Model}
We consider a cell-free massive MIMO system, which consists of $L$ APs and $K$ single-antenna users that are distributed arbitrarily over a large area. We assume that each AP is equipped with $N$ antennas. Moreover, all the APs are connected with each other and a central processing unit (CPU) via dedicated fronthaul links with sufficient capacity. All APs serve all users on the same time-frequency resource through time division duplex (TDD) operation \cite{9743355}.
The channel coefficient between AP $l$ and user $k$, ${{\bf{h}}_{kl}} \in {{\mathbb{C}}^{N \times 1}}$, is assumed to follow a correlated Rayleigh fading distribution.
We adopt a classic block fading model for the channels, such that ${{\bf{h}}_{kl}}$ remains constant over $t$ channel uses of the time-frequency block and experiences an independent realization in every block. Note that the channel coefficients can be acquired at the APs by existing channel estimation algorithms \cite{bjornson2017massive}; this is beyond the scope of this work, as we aim to optimize the precoding for URLLC. Therefore, we assume that perfect CSI is available at the APs.
In the downlink payload data transmission phase, the received signal at user $k$ can be expressed as
${y_k} =\sum\limits_{l = 1}^L {\bf{h}}_{kl}^H{{\bf{w}}_{kl}}{s_k} + \sum\limits_{l = 1}^L {{\bf{h}}_{kl}^H} \sum\limits_{i \ne k}^K {{\bf{w}}_{il}}{s_i} + {n_k}$,
where ${s_i} \sim {{\cal N}_{\mathbb{C}}}\left( {0,1} \right)$ is the data symbol intended for user $i$, ${{\bf{w}}_{il}} \in {{\mathbb{C}}^{N \times 1}}$ is the precoding vector for user $i$ at AP $l$, and ${n_k} \sim {{\cal N}_{\mathbb{C}}}\left( {0,{\sigma ^2}} \right)$ represents the thermal noise at user $k$.
Then, the corresponding effective signal-to-interference-plus-noise ratio (SINR) is given as
\begin{equation}
{\varphi _k} = \frac{{{{\left| {{\bf{ h}}_k^H{{\bf{w}}_k}} \right|}^2}}}{{\sum\limits_{i \ne k}^K {{{\left| {{\bf{ h}}_k^H{{\bf{w}}_i}} \right|}^2}} + {\sigma ^2}}},
\end{equation}
where ${{\bf{h}}_k} = {\left[ {{\bf{h}}_{k1}^H, \cdots ,{\bf{h}}_{kL}^H} \right]^H} \in {{\mathbb{C}}^{LN \times 1}}$ and ${{\bf{w}}_i} = {\left[ {{\bf{w}}_{i1}^H, \cdots ,{\bf{w}}_{iL}^H} \right]^H} \in {{\mathbb{C}}^{LN \times 1}}$. By treating the inter-user interference ${\bf{h}}_{kl}^H\sum\limits_{i \ne k}^K {{{\bf{w}}_{il}}{s_i}}$ as Gaussian noise, and denoting by $p_{il}^{{\rm{dl}}} \buildrel \Delta \over = {\left\| {{{\bf{w}}_{il}}} \right\|^2}$ the power allocated to user $i$ at AP $l$, the achievable rate in nats/sec/Hz for user $k$ in the case of a sufficiently long blocklength is given by the Shannon rate function
${{\tilde R}_k}= \ln \left( {1 + {\varphi _k}} \right)$,
and the achievable URLLC rate in nats/sec/Hz for user $k$ can be approximated as \cite[eq. (30)]{nasir2020resource}
\begin{equation}\label{URLLC Rate}
{R_k} = \ln \left( {1 + {\varphi _k}} \right) - \sqrt {\frac{1}{{tB}} \times {V_k}} \times {Q^{ - 1}}\left( {\epsilon} \right),
\end{equation}
where $t$ is the transmission duration, $B$ is the communication bandwidth, ${V_k}$ is the channel dispersion \cite{nasir2020resource} which can be expressed as
${V_k} = 1 - \frac{1}{{{{\left( {1 + {\varphi _k}} \right)}^2}}}$,
${Q^{ - 1}}\left( \cdot \right)$ is the inverse of the Gaussian Q-function, i.e., $Q\left( x \right) = \int_x^\infty {\frac{1}{{\sqrt {2\pi } }}\exp \left( { - {t^2}/2} \right)} dt$, and ${\epsilon}$ is the decoding error probability. Note that (\ref{URLLC Rate}) is the normal approximation when the channel ${{\bf{h}}_k}$ is assumed to be quasi-static and deterministic over the transmission duration $t$. The subtrahend in (\ref{URLLC Rate}) captures the rate penalty due to the finite block length, $tB$.
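For concreteness, the normal approximation in (\ref{URLLC Rate}) can be evaluated in a few lines; the following sketch (an illustration, not part of the optimization routine) uses the identity $Q^{-1}(\epsilon)=\Phi^{-1}(1-\epsilon)$, available as \texttt{norm.isf}:
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def urllc_rate(sinr, t, B, eps):
    # Shannon term minus the finite-blocklength penalty
    # sqrt(V/(t*B)) * Q^{-1}(eps); rate in nats/s/Hz.
    V = 1.0 - 1.0 / (1.0 + sinr) ** 2   # channel dispersion V_k
    Qinv = norm.isf(eps)                # Q^{-1}(eps), Q = 1 - Phi
    return np.log(1.0 + sinr) - np.sqrt(V / (t * B)) * Qinv

# e.g. 20 dB SINR, t = 0.05 ms, B = 1 MHz, eps = 1e-5:
# the blocklength is t*B = 50 channel uses
print(urllc_rate(100.0, 0.05e-3, 1e6, 1e-5))
\end{verbatim}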
\section{Max-min Rate Based Precoding Design}\label{Design}
\subsection{Centralized Precoding Design}
In the centralized precoding design, the optimization of the precoding vectors takes place at the CPU, where the global instantaneous CSI ${{{\bf{h}}}_{kl}},\forall k \in \left\{ {1, \cdots ,K} \right\},\forall l \in \left\{ {1, \cdots ,L} \right\}$, is available.
The centralized max-min URLLC rate optimization problem can be expressed as
\begin{align}
&\mathop {\max }\limits_{\bf{w}} \mathop {\min }\limits_{k = 1, \cdots ,K} \left\{ {{R_k}\left( {\bf{w}} \right)} \right\}\label{P1}\tag{3a}\\
&{\rm{s.}}{\rm{t.}}\;\;\;\;\;\;\sum\limits_{k = 1}^K {{{\left\| {{{\bf{w}}_{kl}}} \right\|}^2}} \le {p_{\max }},\forall l,\label{3b}\tag{3b}
\end{align}
where ${\bf{w}} = \left\{ {{{\bf{w}}_{kl}}:k = 1, \cdots ,K,l = 1, \cdots ,L} \right\}$ and ${p_{\max }}$ is the maximum power at each AP. The problem (\ref{P1}) is non-convex due to the URLLC rate function ${R_k}\left( {\bf{w}} \right)$. With the help of \cite{nasir2020resource}, we apply the PFA to develop a concave lower bound for ${R_k}\left( {\bf{w}} \right)$.
Without loss of generality, the URLLC rate expression for user $k$ can be rewritten as
${R_k}\left( {\bf{w}} \right) = {f_k}\left( {\bf{w}} \right) - a{g_k}\left( {\bf{w}} \right)$,
where $a = {Q^{ - 1}}\left( {\epsilon } \right)/\sqrt {t{ B}}$, ${f_k}\left( {\bf{w}} \right) = \ln \left( {1 + {\varphi _k}\left( {\bf{w}} \right)} \right)$, and ${g_k}\left( {\bf{w}} \right) = \sqrt {1 - 1/{{\left( {1 + {\varphi _k}\left( {\bf{w}} \right)} \right)}^2}}$. Now, we aim to establish a convex lower bound for ${f_k}\left( {\bf{w}} \right)$ and a concave upper bound for ${g_k}\left( {\bf{w}} \right)$.
Let ${{\bf{w}}^{\left( n \right)}}$ be a feasible point for (\ref{P1}) that is computed from the $\left( {n - 1} \right)$th iteration of the iterative PFA.
\subsubsection{Lower bounding for ${f_k}\left( {\bf{w}} \right)$}
According to \cite{nasir2020resource}, the following inequality holds for all ${\bf{x}},{\bf{\bar x}} \in {{\mathbb{C}}^{{M_1}}}$ and ${\bf{y}},{\bf{\bar y}} \in {{\mathbb{C}}^{{M_2}}}$, where $a$, $b > 0$, and $c > 0$ are constants depending on ${\bf{\bar x}}$ and ${\bf{\bar y}}$, instantiated below as $\bar a_k^{\left( n \right)}$, $\bar b_k^{\left( n \right)}$, and $\bar c_k^{\left( n \right)}$:
\begin{align}\label{I1}\tag{4}
\ln\! \left( \!\!{1\!\! +\! \frac{{{{\left\| {\bf{x}} \right\|}^2}}}{{{{\left\| {\bf{y}} \right\|}^2} \!\!+\! {\sigma ^2}}}}\!\! \right) \!\!\ge\!\! a\!\! -\! \frac{{{{\left\| {{\bf{\bar x}}} \right\|}^2}}}{{2{\cal R}\!\!\left\{ {{{{\bf{\bar x}}}^H}{\bf{x}}} \right\} \!\!- \! {{\left\| {{\bf{\bar x}}} \right\|}^2}}}\!-\! b{\left\| {\bf{x}} \right\|^2} \!\!-\! c{\left\| {\bf{y}} \right\|^2}.
\end{align}
Applying the inequality in (\ref{I1}) for $x = {\bf{h}}_k^H{{\bf{w}}_k}$, $y = {{\cal L}_k}\left( {\bf{w}} \right)$, $\bar x = {\bf{h}}_k^H{\bf{w}}_k^{\left( n \right)}$, $\bar y = {{\cal L}_k}\left( {{{\bf{w}}^{\left( n \right)}}} \right)$, where ${{\cal L}_k}\left( {\bf{w}} \right)$ arranges ${\bf{h}}_k^H{{\bf{w}}_i},i \ne k$ into a vector of dimension $K-1$, we obtain
\begin{align}\label{f_n}
{f_k}\left( {\bf{w}} \right) &\ge \bar a_k^{\left( n \right)} - \frac{{{{\left| {{\bf{h}}_k^H{\bf{w}}_k^{\left( n \right)}} \right|}^2}}}{{2{\cal R}\left\{ {{{\left( {{\bf{w}}_k^{\left( n \right)}} \right)}^H}{{{\bf{h}}}_k}{\bf{h}}_k^H{{\bf{w}}_k}} \right\} - {{\left| {{\bf{h}}_k^H{\bf{w}}_k^{\left( n \right)}} \right|}^2}}}\notag\\
&- \!\bar b_k^{\left( n \right)}{\left| {{\bf{h}}_k^H{{\bf{w}}_k}} \right|^2} \!-\! \bar c_k^{\left( n \right)}\sum\limits_{i \ne k} {{{\left| {{\bf{h}}_k^H{{\bf{w}}_i}} \right|}^2}} \!\buildrel \Delta \over = \!f_k^{\left( n \right)}\left( {\bf{w}} \right)\tag{5},
\end{align}
with the constraint of
\begin{equation}\label{trust region}\tag{6}
2{\cal R}\left\{ {{{\left( {{\bf{w}}_k^{\left( n \right)}} \right)}^H}{{{\bf{h}}}_k}{\bf{h}}_k^H{{\bf{w}}_k}} \right\} - {\left| {{\bf{h}}_k^H{\bf{w}}_k^{\left( n \right)}} \right|^2} > 0,
\end{equation}
where
$\bar a_k^{\left( n \right)} \!=\! {f_k}\left( {{{\bf{w}}^{\left( n \right)}}} \right) \!+\! 2 \!-\! \frac{{{{\left| {{\bf{h}}_k^H{\bf{w}}_k^{\left( n \right)}} \right|}^2}}}{{\beta _k^{\left( n \right)}}}\frac{{\sigma ^2}}{{\alpha _k^{\left( n \right)}}}$,
$0 \!< \!\bar b_k^{\left( n \right)} \!=\! \frac{{\bar a_k^{\left( n \right)}}}{{\beta _k^{\left( n \right)}{{\left| {{\bf{h}}_k^H{\bf{w}}_k^{\left( n \right)}} \right|}^2}}}$,
$0 \!<\! \bar c_k^{\left( n \right)} \!= \!\frac{{{{\left| {{\bf{h}}_k^H{\bf{w}}_k^{\left( n \right)}} \right|}^2}}}{{\beta _k^{\left( n \right)}\alpha _k^{\left( n \right)}}}$,
$\alpha _k^{\left( n \right)} \!\buildrel \Delta \over =\! \sum\limits_{i \ne k} \!{{{\left| {{\bf{h}}_k^H{\bf{w}}_i^{\left( n \right)}} \right|}^2}} \!+\! {{\sigma ^2}}$,
and $\beta _k^{\left( n \right)} \!\buildrel \Delta \over = \! \sum\limits_{i = 1}^K {{{\left| {{\bf{h}}_k^H{\bf{w}}_i^{\left( n \right)}} \right|}^2}} \!+\! {{\sigma ^2}}$.
According to \cite{nasir2020resource}, the function $f_k^{\left( n \right)}\left( {\bf{w}} \right)$ is concave over the trust region (\ref{trust region}) and achieves the same value as ${f_k}\left( {\bf{w}} \right)$ at ${{{\bf{w}}^{\left( n \right)}}}$,
$f_k^{\left( n \right)}\left( {{{\bf{w}}^{\left( n \right)}}} \right) = {f_k}\left( {{{\bf{w}}^{\left( n \right)}}} \right)$.
\subsubsection{Upper bounding for ${g_k}\left( {\bf{w}} \right)$}
Since the function $f\left( x \right) = \sqrt x$ is concave on $x > 0$, the following inequality for all $x > 0$ and $\bar x > 0$ holds true
\begin{align}\label{I2}\tag{7}
\sqrt x \!=\! f\left( x \right)\le f\left( {\bar x} \right) \!+\! {\left. {\frac{{\partial \!f\!\left( x \right)}}{{\partial x}}} \right|_{x = \bar x}}\left( {x \!-\! \bar x} \right)\!=\! \frac{{\sqrt {\bar x} }}{2} \!+ \!\frac{x}{{2\sqrt {\bar x} }},
\end{align}
where $\frac{{\partial f\left( x \right)}}{{\partial x}}$ denotes the derivative of the function $f\left( x \right)$ with respect to $x$.
Applying the inequality in (\ref{I2}) for $x = 1 - 1/{\left( {1 + {\varphi _k}\left( {\bf{w}} \right)} \right)^2}$ and $\bar x = 1 - 1/{\left( {1 + {\varphi _k}\left( {{{\bf{w}}^{\left( n \right)}}} \right)} \right)^2}$ and
using
\begin{align}\label{I3}
&{{{{\left( {\sum\limits_{i \ne k} {{{\left| {{\bf{h}}_k^H{{\bf{w}}_i}} \right|}^2}} + {{\sigma ^2}}} \right)}^2}}}/{{{{\left( {\sum\limits_{i = 1}^K {{{\left| {{\bf{h}}_k^H{{\bf{w}}_i}} \right|}^2}} + {{\sigma ^2}}} \right)}^2}}}\notag\\
&\ge\!\!\! \frac{{4\alpha _k^{\left( n \right)}}}{{{{\left( {\beta _k^{\left( n \right)}} \right)}^2}}}\!\!\left( \!{\sum\limits_{i \ne k}\! {\left(\! {2{\cal R}\!\left\{ \!{{{\left( \! {{\bf{h}}_k^H{\bf{w}}_i^{\left( n \right)}} \!\right)}^*}{\bf{h}}_k^H{{\bf{w}}_i}} \right\} \!\!-\! {{\left| {{\bf{h}}_k^H{\bf{w}}_i^{\left( n \right)}} \!\right|}^2}} \!\right)} } { + {{\sigma ^2}}} \!\!\right)\notag\\
&-\!\! \frac{{2{{\left( \!{\alpha _k^{\left( n \right)}} \!\right)}^2}}}{{{{\left(\! {\beta _k^{\left( n \right)}} \!\right)}^3}}}\!\!\left( \!{\sum\limits_{i = 1}^K \!{{{\left| {{\bf{h}}_k^H{{\bf{w}}_i}} \right|}^2}} \!\! +\! {{\sigma ^2}}} \!\!\right)\!-\! \frac{{{{\left( \!{\sum\limits_{i \ne k}\! {{{\left| {{\bf{h}}_k^H{{\bf{w}}_i}} \right|}^2}} \!\!+\! {{\sigma ^2}}} \!\!\right)}^2}}}{{{{\left( {\beta _k^{\left( n \right)}} \right)}^2}}}\tag{8},
\end{align}
with the constraints of
\begin{align}
&\sum\limits_{i = 1}^K {{{\left| {{\bf{h}}_k^H{{\bf{w}}_i}} \right|}^2}} + {{\sigma ^2}} \le 2\beta _k^{\left( n \right)},\label{cons_g1}\tag{9}\\
&\frac{1}{{{{\left( {\beta _k^{\left( n \right)}} \right)}^2}}}\left( {\sum\limits_{i = 1}^K {{{\left| {{\bf{h}}_k^H{{\bf{w}}_i}} \right|}^2}} + {{\sigma ^2}}} \right)\le \!\! \frac{2}{{\alpha _k^{\left( n \right)}}}\notag\\
&\;{\times}\!\!\left( \! {\sum\limits_{i \ne k}\! {\left( {2{\cal R}\!\left\{ {{{\left( {{\bf{h}}_k^H{\bf{w}}_i^{\left( n \right)}} \! \right)}^*}{\bf{h}}_k^H{{\bf{w}}_i}} \!\right\}} \right.} } {\left. { \!\!-\! {{\left| {{\bf{h}}_k^H{\bf{w}}_i^{\left( n \right)}} \right|}^2}} \right) \!\!+\! \!{{\sigma ^2}}} \right),\label{cons_g2}\tag{10}
\end{align}
we have
\begin{align}\label{g_n}
{g_k}\left( {\bf{w}} \right) &\le d_k^{\left( n \right)} - \frac{{4\alpha _k^{\left( n \right)}e_k^{\left( n \right)}}}{{{{\left( {\beta _k^{\left( n \right)}} \right)}^2}}}\left( {\sum\limits_{i \ne k} {\left( {2{\cal R}\left\{ {{{\left( {{\bf{h}}_k^H{\bf{w}}_i^{\left( n \right)}} \right)}^*}{\bf{h}}_k^H{{\bf{w}}_i}} \right\}} \right.} } \right. \notag\\
&\left.{\left. { - \!{{\left| {{\bf{h}}_k^H{\bf{w}}_i^{\left( n \right)}} \right|}^2}} \right) \!\!+\! {{\sigma ^2}}} \!\right)\!\! + \!\! \frac{{2{{\left( \!{\alpha _k^{\left( n \right)}} \!\right)}^2}e_k^{\left( n \right)}}}{{{{\left( {\beta _k^{\left( n \right)}} \right)}^3}}}\!\!\left( \!{\sum\limits_{i = 1}^K \!{{{\left| {{\bf{h}}_k^H{{\bf{w}}_i}} \right|}^2}} \!\! +\!\! {{\sigma ^2}}} \!\! \right)\notag\\
&+\frac{{{{\left( {\sum\limits_{i \ne k} {{{\left| {{\bf{h}}_k^H{{\bf{w}}_i}} \right|}^2}} + {{\sigma ^2}}} \right)}^2}e_k^{\left( n \right)}}}{{{{\left( {\beta _k^{\left( n \right)}} \right)}^2}}}\buildrel \Delta \over = g_k^{\left( n \right)}\left( {\bf{w}} \right)\tag{11},
\end{align}
where
$0 \!< \!d_k^{\left( n \right)} \!= \!\frac{{\sqrt {1 \!-\! 1/{{\left( {1 \!+\! {\varphi _k}\left( {{{\bf{w}}^{\left( n \right)}}} \right)} \right)}^2}} }}{2} \!+\! \frac{1}{{2\sqrt {1 \!-\! 1/{{\left( {1 \!+\! {\varphi _k}\left( {{{\bf{w}}^{\left( n \right)}}} \right)} \right)}^2}} }}$, and
$0 < e_k^{\left( n \right)} = \frac{1}{{2\sqrt {1 - 1/{{\left( {1 + {\varphi _k}\left( {{{\bf{w}}^{\left( n \right)}}} \right)} \right)}^2}} }}$.
The function $g_k^{\left( n \right)}\left( {\bf{w}} \right)$ is convex and achieves the same value as ${g_k}\left( {\bf{w}} \right)$ at ${{\bf{w}}^{\left( n \right)}}$,
$g_k^{\left( n \right)}\left( {{{\bf{w}}^{\left( n \right)}}} \right) = {g_k}\left( {{{\bf{w}}^{\left( n \right)}}} \right)$.
\subsubsection{Concave Lower bound for ${R_k}\left( {\bf{w}} \right)$}
By applying (\ref{f_n}) and (\ref{g_n}), we have
${R_k}\left( {\bf{w}} \right) \ge f_k^{\left( n \right)}\left( {\bf{w}} \right) - ag_k^{\left( n \right)}\left( {\bf{w}} \right) \buildrel \Delta \over = R_k^{\left( n \right)}\left( {\bf{w}} \right)$,
under the trust region constrained by (\ref{trust region}), (\ref{cons_g1}), and (\ref{cons_g2}). The function $R_k^{\left( n \right)}\left( {\bf{w}} \right)$ is concave and matches with the function ${R_k}\left( {\bf{w}} \right)$ at ${{\bf{w}}^{\left( n \right)}}$:
\begin{equation}\label{R_n}\tag{12}
{R_k}\left( {{{\bf{w}}^{\left( n \right)}}} \right) = R_k^{\left( n \right)}\left( {{{\bf{w}}^{\left( n \right)}}} \right).
\end{equation}
At the $n$th iteration, we solve the following convex problem with the computational complexity ${\cal O}\left( {{{\left( {LNK} \right)}^3}\left( {2K + 1} \right)} \right)$ to generate the next feasible point ${\bf{w}}^{\left( {n+1} \right)}$:
\begin{equation}\label{P2}\tag{13}
\mathop {\max }\limits_{\bf{w}} \mathop {\min }\limits_{k = 1, \cdots ,K} \left\{ {R_k^{\left( n \right)}\left( {\bf{w}} \right)} \right\}
\;\;\;{\rm{s.}}{\rm{t.}}\;\;\;\text{(\ref{3b}),\;(\ref{trust region}),\;(\ref{cons_g1}),\;(\ref{cons_g2})}.
\end{equation}
According to (\ref{f_n}) and (\ref{g_n}), we can conclude that
$\mathop {\min }\limits_{k = 1, \cdots ,K} {R_k}\left( {{{\bf{w}}^{\left( {n + 1} \right)}}} \right) \ge \mathop {\min }\limits_{k = 1, \cdots ,K} {R_k}\left( {{{\bf{w}}^{\left( n \right)}}} \right),\;\forall n$,
which guarantees the monotonicity in convergence.
According to \cite{nasir2020resource,nasir2021cell,xing2020matrix1,xing2020matrix2}, it is important to have a proper initial point ${{\bf{w}}^{\left( 0 \right)}}$ with a positive URLLC rate. Thus, we start from a random point ${{\bf{w}}^{\left( 0 \right)}}$ satisfying the convex power constraint $\sum\limits_{k = 1}^K {{{\left\| {{{\bf{w}}_{kl}}} \right\|}^2}} \le K,\forall l$ and (\ref{trust region}), and then iterate
\begin{equation}\label{Shannon}\tag{14}
\mathop {\max }\limits_{\bf{w}} \mathop {\min }\limits_{k = 1, \cdots ,K} f_k^{\left( n \right)}\left( {\bf{w}} \right)
\;\;\;{\rm{s.}}{\rm{t.}}\;\;\;\text{(\ref{3b})}.
\end{equation}
The solution obtained by these iterations can be adopted as the feasible initial point ${{\bf{w}}^{\left( 0 \right)}}$. Finally, Algorithm 1 provides the pseudo-code for the applied path-following procedure.
\begin{algorithm}[t]
\caption{Path-Following Algorithm for Solving Problem (\ref{P1})}
\begin{algorithmic}[1]
\State \textbf{Initialization}: Iterate the convex problem (\ref{Shannon}) until the convergence to obtain an initial point ${{\bf{w}}^{\left( 0 \right)}}$. Set $n=0$.
\State Using (\ref{f_n}) to obtain a concave lower bound for ${f_k}\left( {\bf{w}} \right)$ with constraint (\ref{trust region}).
\State Using (\ref{g_n}) to obtain a convex upper bound for ${g_k}\left( {\bf{w}} \right)$ with constraints (\ref{cons_g1}) and (\ref{cons_g2}).
\State Using (\ref{R_n}) to obtain a concave lower bound for ${R_k}\left( {\bf{w}} \right)$ under the trust region constrained by (\ref{trust region}), (\ref{cons_g1}), and (\ref{cons_g2}).
\State \textbf{Repeat until (\ref{P1}) converges} : Solve the convex problem (\ref{P2}) to generate ${\bf{w}}^{\left( {n+1} \right)}$.
\end{algorithmic}
\end{algorithm}
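The structure of Algorithm 1 can be summarized by the following sketch; here \texttt{solve\_inner} is a placeholder for a generic convex solver applied to problem (\ref{P2}) with the surrogate bounds built around the current point, and \texttt{R} evaluates the true URLLC rates (\ref{URLLC Rate}) of all users:
\begin{verbatim}
def path_following(w0, R, solve_inner, tol=1e-4, max_iter=100):
    # Each iteration maximizes the concave lower bound
    # min_k R_k^(n)(w) constructed at the current point, so the
    # true min-rate objective increases monotonically.
    w, obj = w0, min(R(w0))
    for _ in range(max_iter):
        w_next = solve_inner(w)       # convex problem (13)
        obj_next = min(R(w_next))
        if obj_next - obj < tol:      # monotone sequence converged
            return w_next
        w, obj = w_next, obj_next
    return w
\end{verbatim}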
\subsection{Decentralized Precoding Design}
The centralized precoding design proposed above requires all the APs to upload their instantaneous CSI to the CPU, which puts a significant burden on the fronthaul signaling. Besides, the computational complexity of the centralized precoding design can be exceedingly high for a large number of antennas. As such, it is desirable to design the precoding in a decentralized manner, which only requires local instantaneous CSI at the APs. In practice, the APs can be divided into several non-overlapping cooperation clusters in which the APs in the same cluster share both the data and the instantaneous CSI to design the precoding vectors. The APs in different clusters only have knowledge of the statistical CSI, such as the mean and the variance.
Note that although the APs are divided into clusters, each user is served by all the APs, not only by the APs in the cluster in which the user resides.
Assume each cluster contains $M$ APs, so that there are $L/M$ clusters in the network. As stated before, each AP can obtain the instantaneous CSI of the APs in the same cluster and the statistical CSI of the APs in different clusters. Therefore, the virtual SINR of user $k$ in cluster ${\cal{L}}$ used for designing the precoding vector can be expressed as
\begin{equation}\label{VSINR-1}\tag{15}
\varphi _{k{\cal L}}^{\rm{V}}\left( {{{\bf{w}}_{k{\cal L}}}} \right)\! \! = \!\!\frac{{{{\left| {\sum\limits_{l \in {\cal L}} {{\bf{h}}_{kl}^H{{\bf{w}}_{kl}}} + \sum\limits_{\bar l \notin {\cal L}} {{\mathbb{E}}\left\{ {{\bf{h}}_{k\bar l}^H} \right\}{{\bf{w}}_{k\bar l}}} } \right|}^2}}}{{\sum\limits_{i \ne k}^K {{{\left| {\sum\limits_{l \in {\cal L}} {{\bf{h}}_{kl}^H{{\bf{w}}_{il}}} + \sum\limits_{\bar l \notin {\cal L}} {{\mathbb{E}}\left\{ {{\bf{h}}_{k\bar l}^H} \right\}{{\bf{w}}_{i\bar l}}} } \right|}^2}} \!+ \!{\sigma ^2}}}.
\end{equation}
Since we consider Rayleigh fading channels, we have ${\mathbb{E}}\left\{ {{\bf{h}}_{k\bar l}^H}\right\} = {\bf{0}}$. Therefore, (\ref{VSINR-1}) can be written as
\begin{equation}\label{VSINR-2}\tag{16}
\varphi _{k{\cal L}}^{\rm{V}}\left( {{{\bf{w}}_{k{\cal L}}}} \right) = \frac{{{{\left| {\sum\limits_{l \in {\cal L}} {{\bf{h}}_{kl}^H{{\bf{w}}_{kl}}} } \right|}^2}}}{{\sum\limits_{i \ne k}^K {{{\left| {\sum\limits_{l \in {\cal L}} {{\bf{h}}_{kl}^H{{\bf{w}}_{il}}} } \right|}^2}} + {{\sigma ^2}}}}.
\end{equation}
The decentralized max-min URLLC rate optimization problem can be expressed as
\begin{align}\label{P_distributed}
&\mathop {\max }\limits_{{\bf{w}}_{\cal L}^{\rm{V}}} \mathop {\min }\limits_{k = 1, \cdots ,K} R_{k{\cal L}}^{\rm{V}}\left( {{\bf{w}}_{\cal L}^{\rm{V}}} \right)\notag\\
&\;{\rm{s.}}{\rm{t.}}\;\;\;\;\;\sum\limits_{k = 1}^K {{{\left\| {{{\bf{w}}_{kl}}} \right\|}^2}} \le {p_{\max}},\forall l \in {\cal L},\tag{17}
\end{align}
where ${\bf{w}}_{\cal L}^{\rm{V}}$ represents the precoding vectors designed for all the users by APs in cluster ${\cal{L}}$ according to (\ref{VSINR-2}), and
$R_{k{\cal L}}^{\rm{V}}\left( {{\bf{w}}_{k{\cal L}}^{\rm{V}}} \right) = \ln \!\left( {1 + \varphi _{k{\cal L}}^{\rm{V}}\left( {{\bf{w}}_{k{\cal L}}^{\rm{V}}} \right)} \right) - \sqrt {\frac{1}{{tB}} \times V_{k{\cal L}}^{\rm{V}}} \times{Q^{ - 1}}\left( \epsilon \right)$,
$V_{k{\cal L}}^{\rm{V}} = 1 - \frac{1}{{{{\left( {1 + \varphi _{k{\cal L}}^{\rm{V}}\left( {{\bf{w}}_{k{\cal L}}^{\rm{V}}} \right)} \right)}^2}}}$.
The problem (\ref{P_distributed}) can be solved in a similar manner to (\ref{P1}). Once the problem (\ref{P_distributed}) has been solved for all the clusters, we obtain the precoding vector for user $k$ as
\begin{equation}\label{w}\tag{18}
{{\bf{w}}_k} = {\left[ {{{\left( {{\bf{w}}_{k1}^{\rm{V}}} \right)}^H}, \cdots ,{{\left( {{\bf{w}}_{k\left( {L/M} \right)}^{\rm{V}}} \right)}^H}} \right]^H}.
\end{equation}
Then, the URLLC rate of user $k$ can be obtained by computing (\ref{URLLC Rate}) with the precoding vector obtained from (\ref{w}). The computational complexity of each iteration in the decentralized precoding design is ${\cal O}\left( {{{\left( {\left( {\frac{L}{M}} \right)NK} \right)}^3}\left( {2K + 1} \right)} \right)$ per cluster. Compared with the centralized precoding, the per-cluster computational complexity is thus reduced by a factor of $M^3$.
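As a quick numerical check of this $M^3$ factor in the per-iteration, per-cluster cost (the overall 12\% figure quoted in the simulations also reflects the number of clusters and iterations), consider the simulation setting $L=16$, $N=4$, $K=6$ with $M=8$ APs per cluster:
\begin{verbatim}
L, N, K, M = 16, 4, 6, 8    # 2 clusters of M = 8 APs each
centralized = (L * N * K) ** 3 * (2 * K + 1)
per_cluster = ((L // M) * N * K) ** 3 * (2 * K + 1)
print(centralized // per_cluster)   # M**3 = 512
\end{verbatim}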
\section{Numerical Results}
In this section, we evaluate the performance of the proposed PFA precoding design for the centralized and the decentralized fashion and investigate the impact of the precoding schemes, the length of transmission duration $t$, the number of antennas equipped at each AP $N$, and the size of the AP cluster $M$ on the URLLC rate. We first describe our adopted simulation parameters.
We adopt a parameter setting similar to that in \cite{ngo2017cell} as the basis of our simulation model.
$L$ APs and $K$ users are deployed in a rectangular area of $96\times48$ $\text{m}^{2}$. In particular, the APs are deployed on a rectangular grid. The area is wrapped around at the edges to avoid boundary effects \cite{ngo2017cell}. The horizontal spacing between APs is $24$ m, and the vertical spacing is $12$ m. The $K$ users are deployed randomly. We adopt a propagation model similar to that in \cite{bjornson2019making}. Besides, we set $L = 16$, $\tau_p = 3$, and $\epsilon = 10^{-5}$. Note that in all the figures, the achievable rates are calculated in bits/s/Hz.
\begin{figure}[t!]
\centering
\includegraphics[width=3in]{centralized_MMSE_vs_PFA.eps}
\caption{CDF of the achievable rate achieved by the centralized PFA precoding and the duality-based MMSE precoding with $t = 0.05$ ms, $B = 1$ MHz, $K = 6$, and $N = 4$.}
\label{fig_centralized_MMSE_vs_PFA}
\end{figure}
Fig. \ref{fig_centralized_MMSE_vs_PFA} shows the cumulative distribution functions (CDFs) of the achievable rate per user achieved by the proposed centralized PFA precoding and by the duality-based MMSE precoding with $t = 0.05$ ms, $B = 1$ MHz, $K = 6$, and $N = 4$; the latter is given by
\begin{equation}\label{MMSE}\tag{19}
{{\bf{w}}_k} = \frac{{{{\bf{v}}_k}}}{{\left\| {{{\bf{v}}_k}} \right\|}},\;\;\;
{{\bf{v}}_k} = p{\left( {\sum\limits_{i = 1}^K p {{{\bf{h}}}_i}{\bf{h}}_i^H + {\sigma ^2}{{\bf{I}}_{LN}}} \right)^{ - 1}}{{{\bf{h}}}_k},
\end{equation}
where $p$ is the transmit power intended for each user at each AP. It can be observed that the proposed centralized PFA precoding scheme performs very well. The achievable rate per user distribution with the proposed centralized PFA precoding almost uniformly outperforms that of the duality-based MMSE precoding, and the former is also steeper.
Specifically, applying the PFA centralized precoding leads to 32\% improvement in terms of average URLLC rate and 65\% improvement in terms of 95\%-likely URLLC rate. Note that the duality-based MMSE precoding in (\ref{MMSE}) is only a heuristic solution utilizing the uplink-downlink duality and cannot effectively minimize the MSE ${\mathbb{E}}\left\{ {\left. {{{\left| {{y_k} - {s_k}} \right|}^2}} \right|{{{\bf{h}}}_{kl}}} \right\}$.
Moreover, compared with the centralized PFA precoding, the duality-based MMSE precoding has a lower computational complexity since it only requires $\frac{{{N^2}{L^2}K + NLK}}{2} + \frac{{{N^3}{L^3} - NL}}{3} + {N^2}{L^2}$ complex-valued multiplications. Besides, as expected, the Shannon rate serves as a performance upper bound on the URLLC rate, at the expense of an infinitely long code length.
\begin{figure}[t!]
\centering
\includegraphics[width=3in]{T_revise.eps}
\caption{Optimized 95\%-likely achievable rate versus the transmission time $t$ with $N = 4$ and $B = 1$ MHz.}
\label{fig_T}
\end{figure}
Fig. \ref{fig_T} plots the optimized 95\%-likely achievable rate obtained by Algorithm 1 versus the transmission time $t$ with $N = 4$ and $B = 1$ MHz.
As expected, the URLLC rate increases along with the transmission time $t$ according to the expression of the URLLC rate.
Note that the Shannon rate is fixed since it is computed assuming a sufficiently long blocklength, i.e., $t \to \infty$.
Besides, when the number of users increases from 6 to 15, we can observe that the achievable rate decreases, since more users compete for the limited resources, which reduces the flexibility of the resource allocation for effective beamforming.
The performance gap between the Shannon rate and the URLLC rate is also reduced with the increasing number of users, as the performance of both schemes is limited by the user with the weakest channel gain.
\begin{figure}[t!]
\centering
\includegraphics[width=3in]{URLLC_revise.eps}
\caption{CDF of the URLLC rate achieved by the PFA precoding in the centralized and decentralized way with $t = 0.05$ ms and $N = 4$.}
\label{fig_URLLC}
\end{figure}
Fig. \ref{fig_URLLC} shows the performance of the PFA precoding in the centralized and decentralized fashion in terms of the URLLC rate.
The curve ``C-PFA'' represents the URLLC rate computed using the centralized PFA precoding design. Also, the curve ``D-4-cluster'', ``D-2-cluster'', and ``D-16-cluster'' stand for the performance of the decentralized PFA precoding design with 4 APs, 8 APs, and 1 AP in each cluster, respectively.
The first observation from Fig. \ref{fig_URLLC} is that compared with the centralized PFA precoding, the 95\%-likely URLLC rate with the decentralized PFA precoding is generally lower.
This is because when the decentralized PFA precoding is adopted, only the instantaneous CSI within the cluster and the statistical CSI outside the cluster are used for optimization in each cluster.
As there is a mismatch between the statistical CSI and the instantaneous CSI, the optimization for the decentralized setting is less effective for the utilization of the system resources.
Besides, the performance of the 2-cluster decentralized PFA precoding outperforms the centralized PFA precoding for the strong users. The reason is that the performance of the centralized PFA precoding is always limited by the worst-case users, since substantial resources are allocated to equalize all the SINRs, while the decentralized PFA precoding benefits from being more scalable.
Compared with the 2-cluster decentralized PFA precoding, when the 4-cluster or 16-cluster decentralized PFA precoding is adopted, the mismatch between the statistical CSI and the instantaneous CSI is more pronounced, so the performance is worse.
Specifically, compared with the centralized precoding, the 95\%-likely URLLC rate is reduced from 16.73 bits/s/Hz to 13.25 bits/s/Hz with the 2-cluster decentralized PFA precoding and to 8.95 bits/s/Hz with the 4-cluster decentralized PFA precoding.
Moreover, when the fully distributed 16-cluster decentralized PFA precoding is adopted, the 95\%-likely URLLC rate is only 0.17 bits/s/Hz.
However, since the computational complexity is also reduced, the performance loss of adopting the 2-cluster decentralized PFA precoding instead of the centralized precoding is tolerable.
In particular, the 2-cluster decentralized PFA precoding achieves 80\% of the 95\%-likely URLLC rate, 89\% of the average URLLC rate, and 12\% of the computational complexity of the centralized precoding.
The second observation is that the CDF of the users' URLLC rate is not as steep as its centralized counterpart when the decentralized PFA precoding design is adopted. The reason is that the optimization target of each cluster involves the virtual SINR rather than the actual SINR, leading to underutilization of the system resources.
\section{Conclusion}
In this correspondence, we considered the precoding design in cell-free massive MIMO systems for URLLC in both a centralized and a decentralized fashion. A PFA was designed for maximizing the users' minimum URLLC rate, and its performance was evaluated under different settings of the transmission time, the number of antennas per AP, and the size of the AP cluster. Simulation results showed that the centralized PFA precoding design can effectively improve the 95\%-likely achievable rate, and that the decentralized PFA precoding with a reasonable cluster size can approach the performance of the former with low computational complexity. In the future, we will jointly optimize the precoding vector, the cluster formation, and the number of APs in each cluster in a distributed fashion for URLLC.
\bibliographystyle{IEEEtran}
|
2,869,038,155,523 | arxiv | \section{Tight-binding Hamiltonians}
For the calculation of the relaxation time and resistivity, we rely on the use of a tight-binding model. We adopt a general formulation of the tight-binding approach with Hamiltonian
\begin{align}
H = -\sum_{\langle i,j\rangle} t_{\parallel} (\r_i-\r_j) \; (a_{1,i}^{\dagger}a_{1,j}+h.c.) - \sum_{\langle i,j\rangle} t_{\parallel} (\r_i-\r_j) \; (a_{2,i}^{\dagger}a_{2,j}+h.c.) - \sum_{(i,j)} t_{\perp}(\r_i-\r_j) \; (a_{1,i}^{\dagger}a_{2,j}+h.c.)\;.
\label{tbh}
\end{align}
The sum over the brackets $\langle...\rangle$ runs over pairs of atoms in the same layer (1 or 2), whereas the sum over the curved brackets $(...)$ runs over pairs with atoms belonging to different layers. $t_{\parallel} (\r)$ and $t_{\perp} (\r)$ are hopping matrix elements which have an exponential decay with the distance $|\r|$ between carbon atoms. A common parametrization is based on the Slater-Koster formula for the transfer integral\cite{Moon13}
\begin{align}
-t(\d)=V_{pp\pi}(d)\left[1-\left(\frac{\d\cdot{\bm e}_z}{d}\right)^2\right]+V_{pp\sigma}(d)\left(\frac{\d\cdot{\bm e}_z}{d}\right)^2
\end{align}
with
\begin{align}
V_{pp\pi}(d)=V_{pp\pi}^0\exp\left(-\frac{d-a_0}{r_0}\right)\;,\qquad
V_{pp\sigma}(d)=V_{pp\sigma}^0\exp\left(-\frac{d-d_0}{r_0}\right)\;,
\end{align}
where $\d $ is the vector connecting the two sites, ${\bm e}_z$ is the unit vector in the $z$-direction, $a_0 $ is the C-C distance and $d_0$ is the distance between layers. A typical choice of parameters is given by $V_{pp\pi}^0=-2.7$ eV, $V_{pp\sigma}^0=0.48$ eV and $r_0=0.319 a_0$ \cite{Moon13}. In particular, we have taken these values to carry out the analysis reported in the main text. For an alternative comparison between the continuum and the tight-binding model, see Ref. \cite{Stauber18b}.
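As an illustration, the parametrization above can be coded directly; the numerical values $a_0\approx 0.142$ nm and $d_0\approx 0.335$ nm used below are the standard graphene values and are assumptions of this sketch rather than inputs quoted from the references:
\begin{verbatim}
import numpy as np

A0 = 0.142          # C-C distance a_0 in nm (assumed)
D0 = 0.335          # interlayer distance d_0 in nm (assumed)
R0 = 0.319 * A0     # decay length r_0

def hopping(d):
    # Slater-Koster transfer integral t(d) for a displacement
    # d = (dx, dy, dz); energies in eV.
    dist = np.linalg.norm(d)
    cos2 = (d[2] / dist) ** 2                   # (d.e_z/|d|)^2
    Vpi = -2.7 * np.exp(-(dist - A0) / R0)      # V_pppi term
    Vsig = 0.48 * np.exp(-(dist - D0) / R0)     # V_ppsigma term
    return -(Vpi * (1.0 - cos2) + Vsig * cos2)

print(hopping(np.array([A0, 0.0, 0.0])))  # in-plane NN:  2.7 eV
print(hopping(np.array([0.0, 0.0, D0])))  # vertical pair: -0.48 eV
\end{verbatim}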
\begin{figure}[h]
\includegraphics[width=0.20\columnwidth]{fig5a}
\includegraphics[width=0.20\columnwidth]{fig5b}
\caption{Energy contour maps of the second highest valence band in the Moir\'e Brillouin zone of a twisted graphene bilayer with twist angle $\theta_{28}\approx 1.16^\circ$, showing the Fermi lines for filling levels shifted $-0.2$ meV (left) and $-1.5$ meV (right) below the level of the saddle points placed along the $\Gamma K$ lines.
\label{fline}}
\end{figure}
\section{Transport decay rate of quasi-particles at the Fermi line}
At low temperature, the transport decay rate is dominated by electron quasiparticles close to the Fermi line. In Fig. \ref{fline}, we show the Fermi line for two different chemical potentials $\Delta \mu$ taken with respect to the level of the van Hove singularity (vHS) arising from the saddle points at the $\Gamma K$ line. Also indicated are the discrete points on the Fermi line for which explicit calculations are here illustrated.
In Fig. \ref{dr02}, we show the transport decay rate as function of temperature, computed according to the expression (2) in the main text, for the different points on the Fermi line indicated in Fig. \ref{fline}. As can be appreciated, the behavior depends crucially on the value of $\Delta \mu $, turning from linear to quadratic at low temperatures as the Fermi level deviates significantly from the vHS.
\begin{figure}[h]
\includegraphics[width=0.40\columnwidth]{fig6a}
\hspace{1cm}
\includegraphics[width=0.40\columnwidth]{fig6b}
\\
\hspace*{1.0cm} (a) \hspace{7.7cm} (b)
\caption{Plot of the temperature dependence of $1/\tau_{\rm tr}$ (weighted with the inverse of the square of the Fermi velocity to get dimensions of energy) when the Fermi level is $0.2$ meV (left) and $1.5$ meV (right) below the vHS, for six points along the Fermi line following the sequence shown in Fig. \ref{fline}, from the farthest position (1) to the closest location to the $K$ point (6). The on-site Hubbard interaction is taken as $U/(2\pi ) = 3$ meV $a_M^2$, $a_M$ being the lattice constant of the superlattice.
\label{dr02}}
\end{figure}
\section{Quasi-particle properties at the Fermi line}
Also for the self-energy, we can analyse the low-energy behaviour of different quasi-particles on the Fermi line. This is seen in Fig. \ref{imself}(a), which represents the imaginary part of the self-energy $\Sigma (\k, \omega )$ as a function of $\omega $, computed according to the expression (5) in the main text, for the points on the Fermi line indicated in Fig. \ref{fline}. The linear behavior at low frequencies is consistent with the low-temperature dependence of the transport decay rate shown in Fig. \ref{dr02} for $\Delta \mu = -0.2$ meV.
\begin{figure}[h]
\includegraphics[width=0.43\columnwidth]{fig7a}
\hspace{1cm}
\includegraphics[width=0.40\columnwidth]{fig7b}
\\
\hspace*{1.0cm} (a) \hspace{7.7cm} (b)
\caption{Plot of the frequency dependence of the imaginary (left hand side) and real (right hand side) part of $\Sigma (\k, \omega )$ for six points along the Fermi line following the sequence shown in Fig. \ref{fline}, from the farthest position (1) to the closest location to the $K$ point (6). The Fermi level is $0.2$ meV below the vHS and the on-site Hubbard interaction is taken as $U/(2\pi ) = 3$ meV $a_M^2$, $a_M$ being the lattice constant of the superlattice.
\label{imself}}
\end{figure}
From the imaginary part of $\Sigma (\k, \omega )$, we can compute the real part of the self-energy by applying the Kramers-Kronig relation in Eq. (6) of the main text. The results corresponding to the different curves in Fig. \ref{imself}(a) are represented in Fig. \ref{imself}(b), which shows a clear logarithmic correction consistent with the linear dependence at low frequencies of the respective imaginary counterparts.
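A minimal discretized version of this Kramers-Kronig step reads as follows (the sign convention of Eq. (6) of the main text is assumed; the principal value is implemented by simply skipping the singular grid point):
\begin{verbatim}
import numpy as np

def kramers_kronig(omega, im_sigma):
    # Re S(w) = (1/pi) P.V. int dw' Im S(w') / (w' - w)
    # on a uniform frequency grid.
    dw = omega[1] - omega[0]
    re = np.empty_like(im_sigma)
    for i, w in enumerate(omega):
        mask = np.arange(len(omega)) != i   # drop the w' = w point
        re[i] = np.sum(im_sigma[mask] / (omega[mask] - w)) * dw / np.pi
    return re

# sanity check: Im S = -c|w| (marginal FL) should give
# Re S ~ -(2c/pi) w log(Lambda/|w|) inside the cutoff
w = np.linspace(-1.0, 1.0, 2001)
print(kramers_kronig(w, -0.1 * np.abs(w))[1300])  # value at w = 0.3
\end{verbatim}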
\section{Umklapp processes}
Umklapp processes are scattering events for which the final momentum lies outside the first Brillouin zone. The final momentum and its corresponding energy can be mapped back onto the first Brillouin zone, but the same mapping is not valid for the wave function. Nevertheless, also mapping the eigenvectors onto the first Brillouin zone facilitates the numerical calculations, and this approximation leads to a susceptibility that is periodic on the first Brillouin zone.
The above approximation has been employed in the calculations of the main text and shall here be assessed for the continuum model, i.e., we compare it to the exact result. An alternative approximation would simply neglect all scattering processes that lie outside the first Brillouin zone. In Fig. \ref{Susceptibility}, we see that both approximations coincide with the exact susceptibility for small wave numbers. All schemes are thus consistent with our main assumption, i.e., that the marginal Fermi liquid behavior is caused by small momentum transfer along the quasi-one-dimensional segments of the Fermi line. The same holds for the corresponding relaxation times.
\begin{figure}[t]
\includegraphics[width=0.99\columnwidth]{fig8}
\caption{Full susceptibility of the second highest VB (black line) for two different chemical potentials relative to the vHS, $\Delta\mu=0.2$meV (left) and $\Delta\mu=1.5$meV (right). Also shown are the susceptibilities that include approximate treatments of umklapp processes.
\label{Susceptibility}}
\end{figure}
\section{Relaxation time for effective dielectric media}
In the main text, we have assumed a strongly screened Hubbard interaction $U$, valid for gates close to the twisted bilayer sample. Here, we outline the formalism that includes screening effects within the $G_0W$-approximation. We first discuss the case of a dielectric function due to the long-ranged Coulomb interaction and then estimate the effect of the localised plasmonic modes predicted in TBG.\cite{Stauber16}
\subsection{Relaxation time for long-ranged interaction.}
For gates further away, we expect effects from the long-ranged Coulomb potential to become important as well. In this case, we calculate the scattering rate by incorporating the intrinsic screening effects within the $G_0W$-approximation of the self-energy, starting from the Coulomb potential screened by a bottom and a top gate at distance $D$:\cite{Cea19}
\begin{align}
v_q=\frac{e^2}{2\epsilon_0\epsilon q}\frac{1-e^{-qD}}{1+e^{-qD}}\;.
\end{align}
We will set $\epsilon=5$, the approximate value for hBN.
The relaxation time within the $G_0W$-approximation at finite temperature for a quasiparticle (hole) state with $\Delta=E_{\bf p}-\mu$ is given by\cite{Giuliani82}
\begin{align}
\frac{1}{\tau(\Delta)}&=\int_{-\infty}^{\infty}\frac{d\omega}{2\pi}f(\omega)\frac{1}{A}\sum_{\bm{q}} v_q|\langle {\bf p}|{\bf p}+{\bm{q}}\rangle|^2\Im\left(\frac{1}{\epsilon({\bm{q}},\omega)}\right)\delta\left(\hbar\omega-(E_{\bf p}-E_{{\bf p}+{\bm{q}}})\right)\notag
\end{align}
which involves the dielectric function within the RPA
\begin{align}
\epsilon({\bm{q}},\omega)=1-v_q\chi({\bm{q}},\omega)
\end{align}
with the polarisability $(g_s=g_v=2)$
\begin{align}
\chi({\bm{q}},\omega)=\frac{g_sg_v}{A}\sum_\k|\langle \k|\k+{\bm{q}}\rangle|^2\frac{n_F(E_{\k})-n_F(E_{\k+{\bm{q}}})}{\hbar\omega-(E_{\k+{\bm{q}}}-E_{\k})+i0}\;,
\end{align}
and the temperature-dependent weight factor
\begin{align}
\label{f}
f(\omega)=\frac{\coth(\beta\hbar\omega/2)-\tanh(\beta(\hbar\omega-\Delta)/2)}{1+e^{-\beta\Delta}}\;.
\end{align}
We further introduced the eigenstates $|{\bf p}\rangle$, the Fermi function $n_F(E)=(e^{\beta E}+1)^{-1}$, and the inverse temperature $\beta=1/(k_BT)$.
The $G_0W$-approximation requires knowledge of the real and imaginary parts of the susceptibility and, in order to speed up the calculations, we have worked with the less time-consuming continuum model.\cite{Lopes07} In Fig. \ref{ScatteringRateLongRange}, we show the resulting scattering rate for different chemical potentials relative to the vHS as a function of the temperature. Interestingly, for larger gate distances $D\sim15$nm, the results are close to the Planckian scattering rate $\hbar/\tau=0.086meV\cdot T[K]$, indicated as a dashed line, which is in good agreement with the experimental findings of Ref. \onlinecite{Cao19}.
\begin{figure}[t]
\includegraphics[width=0.99\columnwidth]{fig9}
\caption{The scattering rate $\hbar/\tau$ of TBG with $i=29$ as function of the temperature for two chemical potentials around the vHS and screened long-ranged interaction with surrounding dielectric material $\epsilon=5$. $D$ denotes the distance of TBG to the top and bottom gate. The dashed line indicates the Planckian scattering rate $\hbar/\tau=0.086meV\cdot T[K]$.
\label{ScatteringRateLongRange}}
\end{figure}
Independent of the gate distance $D$, for a chemical potential close to the van Hove singularity there is a linear low-temperature behaviour ($\Delta\mu\lesssim0.2$meV), in contrast to the quadratic low-temperature behaviour for $\Delta\mu\gtrsim1.5$meV. Also seen for all curves is the crossover to a different quasi-linear temperature regime at $T_{cr}\sim5$K.
\subsection{Relaxation time from collective modes}
For temperatures larger than the band gap, the system is expected to reach the classical regime, characterised by a linear behaviour of the resistivity and dominated by thermal charge fluctuations. In Fig. \ref{LossFunction}, we show the loss function $S(\omega)=$-Im$\epsilon^{-1}({\bm{q}},\omega)$ for $|{\bm{q}}|a=0.02$ in the $KK'$-direction for $\epsilon_0=4.8$ and various twist angles. The peak resembles a true plasmonic resonance, as discussed in Ref. \cite{Stauber16}, and shifts to smaller energies with decreasing twist angle.
For $i=29$, also a Lorentzian fit is shown with
\begin{align}
\tilde S(\omega)=\frac{2C\gamma}{(\omega-\omega_0)^2+\gamma^2}\to2\pi C\delta(\omega-\omega_0)\;,
\end{align}
where $\hbar C=3$meV, $\hbar\gamma=4$meV, and $\hbar\omega_0\sim8$meV. This yields the following scattering rate for $U=5meVa_M^2$ and $\mu$ close to the van Hove singularity, i.e., $U\rho(\mu)=4$:
\begin{align}
\frac{1}{\tau}=0.16\frac{k_BT}{\hbar}
\end{align}
With the plasmon energy $\hbar\omega_0/t\approx0.003$, the crossover temperature corresponds to $100$K, which is clearly too high to explain the experiments of \cite{Cao19}. But this limit is imposed by the accuracy of our numerical solution, and we expect a linear behaviour of the resonance as indicated by the red line in the inset. In fact, the plasmonic resonance is related to the bandwidth of the lowest valence/conduction bands, which is around 1meV, i.e., approximately 10K.
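The Lorentzian parameters quoted above can be extracted by a standard least-squares fit; the snippet below demonstrates the procedure on synthetic data (a stand-in for the computed loss function, with the quoted values $\hbar C=3$meV, $\hbar\gamma=4$meV, $\hbar\omega_0=8$meV as ground truth):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(w, C, gamma, w0):
    # S(w) = 2*C*gamma / ((w - w0)^2 + gamma^2)
    return 2 * C * gamma / ((w - w0) ** 2 + gamma ** 2)

rng = np.random.default_rng(1)
w = np.linspace(0.0, 20.0, 200)                 # energies in meV
S = lorentzian(w, 3.0, 4.0, 8.0) \
    + 0.02 * rng.standard_normal(w.size)        # synthetic "data"
popt, _ = curve_fit(lorentzian, w, S, p0=[1.0, 1.0, 5.0])
print(popt)   # recovers (C, gamma, w0) close to (3, 4, 8)
\end{verbatim}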
The quasi-particle relaxation time will mainly be determined by the specific form of the loss function, which was discussed in Ref. \cite{Stauber16} for small-angle twisted bilayer graphene. There, a quasi-flat mode, related to the localised states around the $AA$-region, was found for small twist angles, independent of moderate doping levels. As a first approach, we can thus approximate the loss function by the following analytical expression:
\begin{align}
\Im\left(\frac{1}{\epsilon({\bm{q}},\omega)}\right)=2\pi C\delta(\omega-\omega_0)
\end{align}
with some suitable constant $C$, which permits an analytical solution for the relaxation time when the wave-function overlap $|\langle {\bf p}|{\bf p}+{\bm{q}}\rangle|^2$ is neglected. We get
\begin{align}
\frac{1}{\tau(\Delta)}=UCf(\omega_0)\rho(E_{\bf p}-\hbar\omega_0)
\end{align}
where $\rho(\omega)$ denotes the density of states.
We expect that the decay rate is dominated by the electron-plasmon coupling, which is active for quasi-particle energies $\Delta\approx\hbar\omega_0$. Then, for sufficiently large temperatures $k_BT\gg\hbar\omega_0$, we obtain
\begin{align}
\frac{1}{\tau(\omega_0)}&\approx UC\rho(\mu)\frac{k_BT}{\hbar\omega_0}
\end{align}
Our approach thus yields the observed linear-in-$T$ resistivity above the energy scale $\hbar\omega_0$. Furthermore, the prefactor is governed by the density of states, which decreases as one approaches the regime of half-filling from below.
\begin{figure}[t]
\includegraphics[width=0.99\columnwidth]{fig10}
\caption{Loss function $S(\omega)=$-Im$\epsilon^{-1}({\bm{q}},\omega)$ for $|{\bm{q}}|a=0.02$ in the $KK'$-direction for $\epsilon_0=4.8$ for twist angles with $i=20-29$ (left). For $i=29$, also a Lorentzian fit is shown with $\tilde S(\omega)=2C\gamma[(\omega-\omega_0)^2+\gamma^2]^{-1}$ where $\hbar C=3$meV, $\hbar\gamma=4$meV, and $\hbar\omega_0\sim8$meV.
\label{LossFunction}}
\end{figure}
\section{Entropy of the electron liquid}
The electronic contribution to the entropy $S$ can be obtained by applying the formula\cite{Abrikosov63}
\begin{align}
\frac{S}{A} = \frac{i}{\pi}\frac{1}{T} \int \frac{d^2 k}{(2\pi)^2} \int_{-\infty }^{\infty } d\omega \: \omega \frac{\partial n_F(\omega)}{\partial \omega } \log \left( \frac{G_R (\k, \omega )}{G_A (\k, \omega )} \right)\;,
\end{align}
where $A$ is the area of the system and $G_R , G_A$ are the retarded and advanced electron Green's functions, respectively. When looking for the low-temperature dependence of the entropy, one can perform the momentum integral along the Fermi line by decomposing $\k $ into longitudinal $k_{\parallel }$ and transverse $k_{\perp }$ components. This leads to
\begin{align}
\frac{S}{A} \approx \frac{1}{\pi^2 T} \oint \frac{dk_{\parallel }}{2\pi } \int \frac{d\varepsilon_{\k}}{v_{\k}} \int_{-\infty }^{\infty } d\omega \: \omega \frac{\partial n_F(\omega)}{\partial \omega } \arctan \left( \frac{{\rm Im} \: \Sigma (\k, \omega )}{\omega - {\rm Re} \: \Sigma (\k, \omega ) - \varepsilon_{\k}} \right)
\end{align}
where the integral in $k_{\parallel }$ is carried out along the Fermi line.
The integral over the energy variable $\varepsilon_{\k}$ can be computed by adopting a principal value prescription. Then we get
\begin{align}
\frac{S}{A} \approx \frac{1}{2 \pi^2} \frac{1}{T} \oint \frac{dk_{\parallel }}{v_{\k}} \int_{-\infty }^{\infty } d\omega \: \omega \frac{\partial n_F(\omega)}{\partial \omega } \left( \omega - {\rm Re} \: \Sigma (\k, \omega ) \right)
\label{entr}
\end{align}
The temperature dependence of the entropy can be estimated by absorbing $T$ into the dimensionless variable $\omega/T$ in the integrand of Eq. (\ref{entr}). In particular, when the real part of the electron self-energy has an anomalous logarithmic correction, this correction is translated to the entropy, which acquires the dominant scaling behavior $S \sim T \: |\log(T)|$.
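As a quick numerical check of this scaling (with an assumed marginal-Fermi-liquid form ${\rm Re}\,\Sigma(\omega)=-\lambda\,\omega\ln(\Lambda/|\omega|)$ and $k_B=\hbar=1$; the values of $\lambda$ and $\Lambda$ below are illustrative), the frequency integral in Eq. (\ref{entr}) indeed grows like $T^2\ln(1/T)$:
\begin{verbatim}
import numpy as np

def freq_integral(T, lam=0.2, Lam=1.0):
    # -int dw w nF'(w) (w - ReSigma(w)) on a midpoint grid
    # (the grid offset avoids the log singularity at w = 0)
    dw = 40.0 * T / 4000
    w = (np.arange(-2000, 2000) + 0.5) * dw
    nFp = -np.exp(w / T) / (T * (np.exp(w / T) + 1.0) ** 2)
    re_sigma = -lam * w * np.log(Lam / np.abs(w))
    return -np.sum(w * nFp * (w - re_sigma)) * dw

for T in [1e-2, 1e-3, 1e-4]:
    print(T, freq_integral(T) / T**2)  # ~ pi^2/3 + c*lam*log(1/T)
\end{verbatim}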
\end{widetext}
|
2,869,038,155,524 | arxiv | \section{Introduction}
\label{sec-intro}
Given any finite square matrix $B$ with nonnegative integer entries and no
zero rows or columns, Cuntz and Krieger defined a $C^*$-algebra
$\mathcal{O}_B$, which is generated by partial isometries satisfying
relations associated to $B$ \cite{CK1}. Also, given any square matrix $B$
with nonnegative integer entries, we can build a directed graph by putting
$B_{ij}$ edges from vertex $i$ to vertex $j$. In \cite{kpr} Kumjian, Pask,
and Raeburn defined the graph $C^*$-algebra $C^*(E)$ of any countable
row-finite directed graph $E$ as the universal $C^*$-algebra generated by
a family of projections and partial isometries which satisfy relations
coming from $E$. Given a graph $E$ with finitely many vertices and no
sources or sinks, the vertex matrix $B_E$ associated to the graph is
finite and has no zero rows or columns. In this case it turns out that
$C^*(E)$ coincides with $\mathcal{O}_{B_E}$. The graph thus becomes a
useful tool for visualizing and generalizing the Cuntz-Krieger algebras.
In \cite{kprr} Kumjian, Pask, Raeburn, and Renault defined the graph
groupoid $\mathcal{G}_E$ of any countable row-finite directed graph $E$
with no sinks. In this case, the $C^*$-algebra of the groupoid coincides
with the $C^*$-algebra of the graph, so the graph groupoid is another tool
for understanding the Cuntz-Krieger algebras and a large class of the
graph algebras. In this paper, we examine several equivalence relations
on graphs which preserve the graph groupoid or, in some cases, the
groupoid of a pointed version of the graph.
In \cite{wat-prim} Enomoto, Fujii, and Watatani defined primitive
equivalence of finite directed graphs with no sources, no sinks, and no
multiple edges, and showed that it is a sufficient condition for
isomorphism of the resulting graph algebras. In Section~\ref{sec-primeq}
we generalize their result to countable row-finite graphs. Further,
Enomoto, Fujii, and Watatani showed that primitive equivalence is also a
necessary condition for isomorphism of the graph algebras of strongly
connected graphs with three vertices. In
Section~\ref{sec-classification}, we show by counterexample that this does
not hold in the four-vertex case.
Primitive equivalence involves changing the rows of a matrix. In
graph-theoretical terms, this corresponds to changing the outgoing edges
at a vertex. In Section~\ref{sec-revprimeq}, we define an equivalence
relation which involves changing the columns of the matrix (alternatively,
the incoming edges at a vertex). We call this reverse primitive
equivalence, and show that it preserves the Morita equivalence class,
though not the isomorphism class, of the graph algebras.
Primitive equivalence and reverse primitive equivalence only make sense
for graphs with the same number of vertices. In
Section~\ref{sec-explosions}, we define explosion and reverse explosion,
operations which can change the size of the graph. The graph operation we
call reverse explosion was defined and called explosion in \cite{wat-prim}.
We show that exploding a graph does not change the graph groupoid, hence
does not change the isomorphism class of its graph $C^*$-algebra. Reverse
exploding a graph does not preserve the graph groupoid, but the resulting
graphs can be pointed in such a way that their groupoids are isomorphic.
Thus, reverse exploding a graph preserves the Morita equivalence class of
its $C^*$-algebra.
In Section~\ref{sec-SSE}, we recall from \cite{ashton} the notion of
elementary strong shift equivalence of graphs, and show that elementary
strong shift equivalent graphs can be pointed in such a way that their
groupoids are isomorphic. This is an alternate proof of a result of
Cuntz and Krieger in \cite{CK1}, which states that elementary strong
shift
equivalent matrices correspond to Morita equivalent Cuntz-Krieger
algebras. We then examine
the relationship between elementary strong shift equivalence and explosion
equivalence.
The authors would like to thank Alex Kumjian, David Pask, John Quigg, and
Jack Spielberg for many helpful discussions.
\section{Preliminaries}
\label{sec-prelim}
In \cite{wat-prim}, a $C^*$-algebra was associated to every connected
finite directed
graph with no multiple edges, no sources, and no sinks. We review this
construction as we set up the notation. A {\it directed graph} $E$
consists of a set $E^0$ of vertices, a set $E^1$ of edges and maps
$s,r:E^1 \rightarrow E^0$ describing the source and range of each
edge. Denote by $E^j$ the set of paths in $E$ of length $j$. Here,
zero-length paths (i.e., vertices) are allowed. Denote by $E^*$
the set of all finite paths in $E$ and by
$E^\infty$ the infinite one-sided path space of
$E$. We extend $s$ and $r$ to $E^*$ and $s$ also to $E^\infty$.
Associated to every directed graph $E$ is an $E^0 \times
E^0$ {\it vertex matrix} $B_E$, defined by $B_E(v,w) = \#\{e \in
E^1\,|\,s(e)
= v, r(e) = w\}$. That is, the $(v,w)$ entry of $B_E$ is the number of
edges
in $E$ from $v$ to $w$. $E$ has no multiple edges if and
only
if $B_E$ is a 0-1 matrix. $E$ has no sources (resp. sinks) if and only if
$B_E$ has no zero columns (resp. rows). A directed graph is said to be
{\it strongly connected}
if for every pair of vertices $v$ and $w$, there is a path
from $v$ to $w$ and a path from $w$ to $v$. A directed graph is said to
be
{\it connected} if between each pair $v, w$
there is an undirected path from $v$ to $w$.
In this paper, all graphs
are assumed to be countable, directed, connected, and to have no multiple
edges.
Note that in \cite{wat-prim}, Enomoto, Fujii and Watatani worked with
the adjacency matrix instead of the vertex matrix, whose transpose is the
adjacency matrix. We choose to work with the vertex matrix in order
to
be consistent with the more recent graph algebra literature \cite{kprr}.
For any row-finite graph $E$, a Cuntz-Krieger $E$-family is a set
$\{P_v\,|\,v \in E^0\}$ of mutually orthogonal projections together with
a set $\{S_e\,|\,e \in E^1\}$ of partial isometries satisfying:
\begin{enumerate}
\item[(a)] $S_e^*S_e = P_{r(e)}$;
\item[(b)] $P_v = \sum_{s(e)=v}S_eS_e^*$, for $v \in s(E^1)$.
\end{enumerate}
Kumjian, Pask, and Raeburn \cite{kpr} defined the $C^*$-algebra of the
graph, denoted by $C^*(E)$, to be the universal $C^*$-algebra generated by
a Cuntz-Krieger $E$-family.
We now recall the construction of a groupoid from a
row-finite graph \cite{kprr}. For $x,y \in E^{\infty},\,k \in
{\bf Z},$ say
$x \sim_k y$ if and only if $x_i = y_{i-k}$ for large $i$, where $x_i$
denotes the $i$-th edge of $x$. We remark that this definition differs
slightly from the one given in \cite{kprr}, but it coincides with the
currently accepted convention.
Define $\mathcal{G}_E,$ the path groupoid
of $E,$ by $\mathcal{G}_E = \{(x,k,y)\,|\,x \sim_k y\}.$ The groupoid
operations in $\mathcal{G}_E$ are
$$
(x,k,y)^{-1} = (y,-k,x) \qquad \hbox{ and } \qquad (x,k,y) \cdot (y,l,z)
= (x,k+l,z).
$$
Alternatively, one can define $$\mathcal{G}_E = \{(\alpha, x, \beta) \in
E^* \times E^\infty \times E^*\,|\,r(\alpha) = r(\beta) = s(x)\}/\sim,$$
\noindent where $\sim$ denotes the equivalence relation $(\alpha, \gamma
x, \beta) \sim (\alpha \gamma , x, \beta \gamma)$. To see that the two
definitions coincide, the reader may check that the map $[\alpha, x,
\beta] \mapsto (\alpha x, |\alpha| - |\beta|, \beta x)$ is a groupoid
isomorphism. With
this definition, we find that $[\alpha, x, \beta]^{-1} = [\beta, x,
\alpha]$, that $[\alpha, x, \beta]$ and $[\gamma, y,
\delta]$ are composable if and only if $\beta x = \gamma y$, and that
\[ [\alpha, x, \beta] [\gamma, y, \delta] = \left\{
\begin{array}{ll}
[\alpha, x, \delta \epsilon] & \hbox{if $\gamma \epsilon = \beta$ and
$y = \epsilon x$} \\
[\alpha \epsilon , y, \delta] & \hbox{if $\beta
\epsilon =
\gamma$ and $x = \epsilon y$}
\end{array}
\right. \hbox{.} \]
With the topology generated by the sets $Z(\alpha,
\beta) := \{[\alpha, x, \beta]\,|\,s(x) = r(\alpha)\}$, $\mathcal{G}_E$
is a locally compact Hausdorff $r$-discrete groupoid with Haar system.
If $E$ has
no sinks, then $C^*(E)$, the $C^*$-algebra of $E$ constructed in
\cite{kpr}, coincides with $C^*(\mathcal{G}_E)$.
We now seek to remove the restriction on sinks. First recall from \cite{kprr}
that a pair $(E,S)$, where $E$ is a row-finite graph with no sinks and $S$ is
a set of vertices of $E$, is called a {\it pointed graph}. If $(E,S)$ is a
pointed graph, then $S$ determines a clopen subset $\{x \in E^\infty\,|\,s(x)
\in S\}$ of the unit space of $\mathcal{G}_E$, which we also denote by $S$. If
$S$ is {\it cofinal}, meaning that given any $x \in E^\infty$ there exists $v
\in S$ and a finite path from $v$ to $s(x_i)$ for some $i$, then
$C^*(\mathcal{G}_{(E,S)})$ is Morita equivalent to
$C^*(\mathcal{G}_E)$, where $\mathcal{G}_{(E,S)}$ denotes
$\mathcal{G}_E$ restricted to $S$ \cite{kprr}.
Given a row-finite graph $E$ with sinks at
$\{v_i\}_{i \in I}$,
define a pointed graph $\tilde{E}$ as follows: the vertices of
$\tilde{E}$ are
the
vertices of $E$, along with the additional vertices $\{w^j_i\}$ for $i \in
I$, $j=1,2,\ldots$. The edges in $\tilde{E}$ are the edges in $E$ along
with
an
edge from $v_i$ to $w_i^1$ for every $i \in I$, and an edge from $w_i^j$
to $w_i^{j+1}$ for every $i \in I$, $j = 1,2,\ldots$. We have simply
added a distinct infinite tail to each sink in $E$. Define the pointing
set of $\tilde{E}$ to be the original set of vertices of $E$. The reader
may verify that $C^*(\mathcal{G}_{\tilde{E}}) \cong C^*(E)$ by checking
that both are generated by the same family of projections and partial
isometries.
\section{Primitive Equivalence}
\label{sec-primeq}
The following definition is due to Enomoto, Fujii, and Watatani. Let $B$
be an $n \times n$, 0-1 matrix. For $1 \leq p \leq n$, denote
the $p$-th row of $B$ by $B_p$ and denote the row
$(0,\dots,0,1,0,\dots,0)$, where the $1$ is in the $i^{th}$ position, by
$E_i$.
We apologize for any confusion caused by this
multiple use of the letter $E$, but we are using standard conventions of
\cite{wat-prim} for primitive transfer and standard conventions of \cite{kpr}
for graphs.
Now suppose that there is a $p$ such that $B_p$ is not a zero row
and
$$B_p = E_{k_1} + \cdots + E_{k_r} + B_{m_1} + \cdots + B_{m_s}$$
\noindent for some distinct $k_1, \dots ,k_r$, $m_1, \dots ,
m_s$ such that $p \not \in \{m_1, \dots, m_s\}$ and $B_{m_i}$ is not a zero
row for any $i$.
Define a new matrix $C$
by
$$
C_{ij} = \left\{ \begin{array}{ll}
B_{ij} & \hbox{if $i \neq p$} \\
1 & \hbox{if $i = p$ and $j \in \{k_1,
\dots, k_r, m_1, \dots, m_s\}$} \\
0 & \hbox{otherwise.}
\end{array}
\right. $$
\noindent That is, start with B, zero out the $p$-th row, and then put
back $1$'s
in positions $k_1, \dots, k_r, m_1, \dots, m_s$. $C$ is called a {\it
primitive transfer} of $B$ at $p$. Note that this notion does not depend
on finiteness of the matrix $B$.
\begin{defn}
If $B$ and $C$ are 0-1, square matrices of the same (possibly infinite) size,
we say $B$ is {\it
primitively equivalent} to $C$ if and only if there exist matrices $D_1,
\dots, D_q$ such that $D_1 = B$, $D_q = C$, and for every $1 \leq i \leq
q-1$, one of the following holds:
\begin{enumerate}
\item[(i)] $D_i$ is a primitive transfer of $D_{i+1};$
\item[(ii)] $D_{i+1}$ is a primitive transfer of $D_i;$
\item[(iii)] $D_i = PD_{i+1}P^{-1}$ for some permutation matrix $P.$
\end{enumerate}
\end{defn}
We say that two matrices which satisfy the third condition above are
{\it permutations} of each other. Matrices which are permutations of
each other should be primitively equivalent because we would like the
vertex matrices of isomorphic graphs to be primitively equivalent. This
is implicit but never stated in \cite{wat-prim}. There are matrices
which cannot be primitively transferred in any number of steps into a
permutation of the same matrix. For example,
$$A = \left( \begin{array}{ccc}
1 & 1 & 1\\
1 & 0 & 1\\
1 & 0 & 0
\end{array}
\right) \hbox{ and }
B = \left( \begin{array}{ccc}
0 & 1 & 1\\
1 & 1 & 1\\
0 & 1 & 0
\end{array}
\right) \hbox{,}
$$
\noindent are permutations of each other, but the reader may check that
they would not be primitively equivalent if condition (iii) were removed
from the definition.
Franks \cite[Corollary 2.2]{franks} defined a similar matrix operation.
His operation applies to multigraphs and it involves only two rows or
columns. He used this move in finding a canonical form for the flow
equivalence class of a matrix.
The primitive transfer has the following graph-theoretical interpretation:
suppose vertex $p$ points to precisely the vertices pointed to by the
vertices $m_1, \dots, m_s$ (with exactly one of $m_1, \dots , m_s$
pointing to each of those vertices) and, in addition, vertex $p$ points
to the vertices $k_1, \dots, k_r$. Then a {\it primitively transferred
graph} can be obtained
by
erasing all the edges emanating from vertex $p$, except those pointing to
vertices $k_1, \dots, k_r$, and adding an edge from vertex $p$ to each of the
vertices $m_1, \dots, m_s$. Note that this procedure is only allowed if we do
not create any multiple edges. The following example shows that we may first
erase and then redraw the same edge.
\begin{ex} \label{ex-prim} Consider the following graph $E$ and its primitive
transfer $F$:
$$
\xymatrix{
v & w & m_3 \ar[l] \\
& p \ar[d] \ar@{.>}[u] \ar@{.>}[lu] \ar@{.>}[ru] \ar@{.>}@(dr,r)[] \\
m_1 \ar[uu] \ar[ur] & k_1 & m_2\ar[uu]
}
\qquad\qquad
\xymatrix{
v &w & m_3 \ar[l] \\
& p \ar[d] \ar@{.>}@/^/[ld] \ar@{.>}[rd] \ar@{.>}[ur] \\
m_1 \ar[uu]\ar[ur] & k_1 & m_2 \ar[uu]
}
$$
\end{ex}
\noindent Vertex $p$ points to vertices $p$, $v$, $w$, $m_3$ and $k_1$.
Together, vertices $m_1$, $m_2$ and $m_3$ point to vertices $p$, $v$, $w$ and
$m_3$. This means that, if $B$ is the vertex matrix of $E$, then $B_p =
E_{k_1} + B_{m_1} + B_{m_2} + B_{m_3}$. So row $p$ of the vertex matrix
of $F$ has 1's in positions $k_1$, $m_1$, $m_2$ and $m_3$. Thus $F$ has
edges from vertex $p$ to vertices $k_1$, $m_1$, $m_2$ and $m_3$.
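The transfer is easy to carry out mechanically. The following Python
sketch (our illustration; the function, and the vertex ordering
$(v, w, m_3, p, m_1, k_1, m_2)$, are our own choices) performs a single
primitive transfer after checking the defining identity, and reproduces
the computation of Example~\ref{ex-prim}.
\begin{verbatim}
def primitive_transfer(B, p, K, M):
    """Primitive transfer of the 0-1 matrix B at row p for the data K, M."""
    n = len(B)
    assert any(B[p]) and p not in M and not set(K) & set(M)
    assert all(any(B[m]) for m in M)           # no B_m may be a zero row
    target = [(1 if j in K else 0) + sum(B[m][j] for m in M)
              for j in range(n)]
    assert list(B[p]) == target                # B_p = sum E_k + sum B_m
    C = [list(row) for row in B]
    C[p] = [1 if j in set(K) | set(M) else 0 for j in range(n)]
    return C

# Vertices ordered (v, w, m3, p, m1, k1, m2):
E = [[0,0,0,0,0,0,0],   # v
     [0,0,0,0,0,0,0],   # w
     [0,1,0,0,0,0,0],   # m3 -> w
     [1,1,1,1,0,1,0],   # p  -> v, w, m3, p, k1
     [1,0,0,1,0,0,0],   # m1 -> v, p
     [0,0,0,0,0,0,0],   # k1
     [0,0,1,0,0,0,0]]   # m2 -> m3
F = primitive_transfer(E, p=3, K=[5], M=[2, 4, 6])
print(F[3])   # [0, 0, 1, 0, 1, 1, 1]: p now points to m3, m1, k1, m2
\end{verbatim}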
\begin{defn}
Two graphs with no multiple edges are said to be {\it primitively
equivalent} if and only if their vertex matrices are primitively
equivalent.
\end{defn}
We will show that if $F$ is a primitive transfer of $E$,
then $\mathcal{G}_E$
and $\mathcal{G}_F$ are isomorphic. But first we need the following
definitions, notation and lemmas.
Let $E$ and $F$ be row-finite graphs with no sinks. To simplify notation, we
denote
their vertex matrices by $B$ and $C$, respectively. Suppose that $F$ is a
primitive transfer of $E$. Without loss of generality, assume that $1 \in E^0$
and $B_1 = E_{k_1}
+ \cdots + E_{k_r} + B_{m_1} + \cdots + B_{m_s}$. We identify $E^0$ and
$F^0$. Define $K := \{k_1,\dots, k_r\}$ and $M := \{m_1, \dots, m_s\}$.
Note that $K \cap M = \emptyset$. Since $E$ and
$F$ have no multiple edges we can use the notation $e^{ij}$ to denote the
unique edge in $E$ with source $v_i$ and range $v_j$, if there is one,
that is, if $B_{ij} = 1$. Likewise, we will denote edges in $F$ by
$f^{ij}$.
The following lemma is an immediate consequence of the definition of the
primitive transfer.
\begin{lem}
\label{lem-KUM}
If $f^{1j}$ exists, then $j \in K \cup M.$
\end{lem}
If $f \in F^1$ is of the form $f^{1m}$ for some $m \in M$, then we will
call $f$ a {\it new} (or {\it newly introduced}) edge.
\begin{lem}
\label{lem-fijnotneweijexists}
If $f^{ij}$ is not new, then $e^{ij}$ exists.
\end{lem}
\begin{proof}
$C_{ij} = 1$ because $f^{ij}$ exists. Now, if $f^{ij}$ is not
new, then either $i \neq 1$ or $j \not\in M$. If $i \neq 1$, then the
primitive transfer does not change row $i$, so $B_{ij} = C_{ij} = 1$. If,
on the other hand, $i = 1$ and $j \not\in M$, then $j \in K$ by
Lemma~\ref{lem-KUM}. So $B_i = \cdots + E_j + \cdots$ and thus $B_{ij} =
1$. In either case, $B_{ij} = 1$, so $e^{ij}$ exists.
\end{proof}
\begin{lem}
\label{lem-no2consecnew}
If $f$ and $f'$ are consecutive edges in $F$ {\rm (}that is, $r(f) =
s(f')${\rm )}, then
$f$ and $f'$ cannot both be new.
\end{lem}
\begin{proof}
This follows from the definition of new and the fact that $1 \not\in M$.
\end{proof}
\begin{lem}
\label{lem-newoldhasinverse}
If $f^{1m}f^{mj} \in F^2$ for some $m \in M$, then $e^{1j}$ exists, and $j
\not \in K$.
\end{lem}
\begin{proof}
Since $f^{mj}$ is not new, $e^{mj}$
exists by Lemma~\ref{lem-fijnotneweijexists}, and so $B_{mj} = 1$. Hence
we have $B_{1j} = 1$, since $B_1 = \cdots + B_m + \cdots$. Thus $e^{1j}$
exists.
Now suppose $j \in K$. Then we have $B_1 = \cdots + E_j + \cdots + B_m +
\cdots$. But we know that $B_{mj} = 1$, and thus $B_{1j} > 1$, a
contradiction.
\end{proof}
\begin{prop}
\label{prop-main}
If $E$ is a row-finite graph, and $F$ is a primitive
transfer of $E$, then $\mathcal{G}_E \cong \mathcal{G}_F$.
\end{prop}
\begin{proof} The strategy of the proof is as follows: We use the
properties
of the primitive transfer to construct an $s,r$-preserving injective map $\phi$
from the edges in $E$ to paths of length one or two in $F$. This will
induce
injective maps from finite paths in $E$ to finite paths in $F$, from infinite
paths in $E$ to infinite paths in $F$, and from $\mathcal{G}_E$ to
$\mathcal{G}_F$. The injective map between the groupoids will turn out to be a
surjective homomorphism.
We use the same notation as above and assume, without loss of generality,
that $B_1 = E_{k_1} + \cdots + E_{k_r} + B_{m_1} + \cdots + B_{m_s}$.
Note that if $E$ has an edge from vertex 1 to vertex $j$ for some $j
\not\in K$,
then there is a unique $m \in M$ such that
$B_{mj} = 1$. Since $C_{1m} = 1$, we can define $\phi: E^1 \rightarrow
F^1
\cup F^2$ by
$$
\phi(e^{ij}) = \left\{ \begin{array}{ll}
f^{1m}f^{mj} & \hbox{if $i = 1$ and $j \not\in K$}
\\
f^{ij} & \hbox{else.}
\end{array}
\right.
$$
For instance, in Example \ref{ex-prim}, $\phi(e^{pp}) = f^{pm_1}f^{m_1p}$,
$\phi(e^{pv}) = f^{pm_1}f^{m_1v}$, $\phi(e^{pw}) = f^{pm_3}f^{m_3w}$ and
$\phi(e^{pm_3})=f^{pm_2}f^{m_2m_3}$. All the other edges would be mapped
to their corresponding edges.
$\phi$ induces a map, which we also denote by $\phi$, from $E^* \rightarrow
F^*$ by
$$
\phi(\alpha_1\alpha_2\cdots\alpha_{|\alpha|}) =
\phi(\alpha_1) \phi(\alpha_2) \cdots \phi(\alpha_{|\alpha|}).
$$
$\phi:E^\infty \rightarrow F^\infty$ is defined similarly, and
$\phi:\mathcal{G}_E \rightarrow \mathcal{G}_F$ is defined by
$$
\phi [\alpha, x,\beta] = [\phi(\alpha), \phi(x), \phi(\beta)].
$$
It is easily seen that $\phi:\mathcal{G}_E \rightarrow \mathcal{G}_F$ is
a well-defined homomorphism.
In order to show that $\phi:\mathcal{G}_E \rightarrow \mathcal{G}_F$ is
injective, we need to know that $\phi:E^* \rightarrow F^*$ and $\phi:
E^\infty \rightarrow F^ \infty$ are injective.
First note that $K$ and $M$ are disjoint. Hence if
$\phi(e)_1=\phi(e')_1$ for some $e,e'\in E^1$ then $|\phi(e)|=|\phi(e')|$
(if the lengths differed, the common first edge would be of the form
$f^{1k}$ with $k \in K$ and also of the form $f^{1m}$ with $m \in M$).
Now suppose that $\phi(\alpha) = \phi(\beta)$ for some finite or infinite paths
$\alpha$ and $\beta$. It follows by induction that
$\phi(\alpha_i)=\phi(\beta_i)$ for all $i$. Since $\phi: E^1 \rightarrow F^1
\cup F^2$ is clearly injective, we have that $\phi: E^* \rightarrow F^*$ and
$\phi: E^\infty \rightarrow F^\infty$ are injective.
We are now in position to show that
$\phi:\mathcal{G}_E \rightarrow \mathcal{G}_F$ is injective. Suppose
$\phi[\alpha, x, \beta] = \phi[\gamma, y, \delta]$. Then we can assume,
without loss
of generality, that $\phi(\alpha) = \phi(\gamma) \eta$, $\phi(\beta) =
\phi(\delta) \eta$, and $\phi(y) = \eta \phi(x)$. We claim that $\eta \in
\phi(E^*)$. If $|\phi(\alpha)| \leq 1$ then either $\eta = \phi(\alpha)$ or $\eta =
r(\phi(\alpha)) = \phi(r(\alpha))$, so we can suppose that
$|\phi(\alpha)| > 1$. Since $\phi(\gamma) \eta =
\phi(\alpha_1) \cdots \phi(\alpha_{|\alpha|})$ and $\phi(\alpha_i)$ has
length one or two for every $i$, it follows that for some $k$, either
$\phi(\gamma) =
\phi(\alpha_1) \dots \phi(\alpha_k)$ or $\phi(\gamma) =
\phi(\alpha_1) \dots \phi(\alpha_k) f$, where $\phi(\alpha_{k+1}) = fg$
($f,g \in F^1$). However, the latter case is not possible since, by
definition of $\phi$, the last edge of $\phi(\gamma)$ cannot be a new
edge, but $f$ must be a new edge. In the former case, $\eta =
\phi(\alpha_{k+1} \dots \alpha_{|\alpha|})$. Thus there exists $\mu \in
E^*$ such that $\eta = \phi(\mu)$. So $\phi(\alpha) = \phi(\gamma) \phi(\mu) = \phi(\gamma \mu)$. Similarly,
$\phi(\beta) = \phi(\delta \mu)$, and $\phi(y) = \phi(\mu x)$.
By injectivity of $\phi:E^* \rightarrow F^*$ and $\phi: E^\infty
\rightarrow F^\infty$, we have $\alpha = \gamma \mu$, $\beta = \delta
\mu$, and $y = \mu x$, and hence $[\alpha, x, \beta] = [\gamma, y,
\delta]$. Thus $\phi: \mathcal{G}_E \rightarrow \mathcal{G}_F$ is
injective.
We now show that $\phi:\mathcal{G}_E \rightarrow \mathcal{G}_F$
is onto. Since elements of the form $[f, y, r(f)]$, where $f \in F^1$ and $y
\in F^\infty$ with $r(f) = s(y)$, generate $\mathcal{G}_F$, it suffices to show
that each of them is in the range of $\phi$.
Use the following procedure to find an inverse image for $y$. Note that
every new edge is followed by an edge which is not new
(Lemma~\ref{lem-no2consecnew}). Lemma~\ref{lem-newoldhasinverse} says
that we can find an inverse image for these new-not new pairs. The
remaining edges are
not new, and by Lemma~\ref{lem-fijnotneweijexists} these edges can be
pulled
back individually.
Now we fix an edge $f = f^{ij} \in F^1$ and a path $y \in
F^\infty$ with $s(y) = r(f)$. If $f$ is not new, then clearly
$\phi[e^{ij},
\phi^{-1}(y), r(e^{ij})] = [f, y, r(f)]$.
If, on the other hand, $f = f^{1m}$ is a new edge, then there
must be an edge $e' \in E^1$ such that $\phi(e') = fy_1$.
Moreover, since $f$ is new, $y_1$ cannot be new, so there must be
an edge $e'' \in E^1$ with $\phi(e'') = y_1$. In this case,
$$\phi[e', \phi^{-1}(y_2y_3\cdots), e''] = [fy_1, y_2y_3\cdots, y_1] =
[f, y, r(f)].$$
Thus $\phi$ is onto.
All that remains to show is that $\phi$ is continuous and open. To see
that $\phi$ is open, the reader may check that $\phi(Z(\alpha, \beta)) =
Z(\phi(\alpha), \phi(\beta))$ for any finite paths $\alpha$ and $\beta$.
Likewise, $\phi^{-1}(Z(\gamma, \delta)) = Z(\phi^{-1}(\gamma),
\phi^{-1}(\delta))$ for any $\gamma, \delta$, so $\phi$ is continuous.
\end{proof}
The proposition immediately yields the following corollary, which was
proved
in \cite{wat-prim} for the case where $E$ and $F$ are finite graphs which
satisfy (L) and have no sources or sinks:
\begin{cor}
\label{cor-maincor}
If $E$ is a row-finite graph with no sinks and $F$ is primitively
equivalent to $E$, then $\mathcal{G}_E \cong \mathcal{G}_F$
and hence $C^*(E) \cong C^*(F)$.
\end{cor}
Recall that if $E$ has sinks, we can build a graph $\tilde{E}$ with no
sinks by
affixing a distinct infinite tail to each sink. By pointing $\tilde{E}$
at all the original vertices of $E$, we obtain a pointed graph whose
groupoid $C^*$-algebra coincides with $C^*(E)$.
\begin{lem}
\label{lem-sinks}
Let $E$ be a row-finite graph, possibly with sinks, and let $F$ be a
primitive
transfer of $E$. Then $\tilde{F}$ is a primitive transfer of $\tilde{E}$.
\end{lem}
\begin{proof}
Let $B$, $C$, $\tilde{B}$, and $\tilde{C}$ denote the vertex matrices of
$E$, $F$, $\tilde{E}$, and $\tilde{F}$, respectively. Assume that $B_p =
E_{k_1} + \cdots + E_{k_r} + B_{m_1} + \cdots + B_{m_s}$, so that $C_p =
E_{k_1} + \cdots + E_{k_r} + E_{m_1} + \cdots + E_{m_s}$. Since for each
$i$, vertex $m_i$ is not a sink in $E$, $B_{m_i j} = 1$ if and only if
$\tilde{B}_{m_i j} = 1$. Similarly, since vertex $p$ is not a sink in
$E$, $B_{pj} = 1$ if and only if $\tilde{B}_{pj} = 1$. Thus $\tilde{B}_{p}
= E_{k_1} + \cdots + E_{k_r} + \tilde{B}_{m_1} + \cdots + \tilde{B}_{m_s}$,
and $\tilde{C}_p = E_{k_1} + \cdots + E_{k_r} + E_{m_1} + \cdots + E_{m_s}$.
Thus $\tilde{F}$ is a primitive transfer of $\tilde{E}$.
\end{proof}
\begin{thm}
If $F$ is primitively equivalent to the row-finite graph $E$,
then $C^*(E) \cong C^*(F)$.
\end{thm}
\begin{proof}
By Lemma~\ref{lem-sinks} and
Corollary~\ref{cor-maincor} we have $C^*(\mathcal{G}_{\tilde{E}}) \cong
C^*(\mathcal{G}_{\tilde{F}})$, and hence $C^*(E) \cong C^*(F)$.
\end{proof}
\section{Classification}
\label{sec-classification}
A matrix $B$ is said to be {\it irreducible} if for every $i, j$, there
exists an $N \in {\bf N}$ such that $B^N(i,j) > 0$. In \cite{wat-prim} a
computer, along with some $K$-theory, was used to show that for all $3
\times 3$ irreducible matrices which are not permutation matrices,
primitive equivalence is a necessary as well as sufficient condition for
isomorphism of the Cuntz-Krieger algebras.
We used a similar method to see whether this result is true for
irreducible $4 \times 4$ matrices. It is not. The following
counterexample is one of many:
$$A = \left( \begin{array}{cccc}
1 & 1 & 1 & 1 \\
1 & 0 & 1 & 1 \\
1 & 1 & 0 & 1 \\
1 & 1 & 1 & 0
\end{array}
\right) \hbox{ and }
B = \left( \begin{array}{cccc}
0 & 1 & 1 & 1 \\
1 & 0 & 1 & 1 \\
1 & 1 & 0 & 1 \\
1 & 1 & 0 & 0
\end{array}
\right)
$$
By \cite{pask/rae-K}, $K_0(\mathcal{O}_A)$ is the abelian group generated
by $\{[P^A_i]\,|\,i = 1, \dots , 4\}$ subject to the relations $[P^A_i] =
\sum_j
A(i,j)[P^A_j]$, and similarly for $\mathcal{O}_B$. One may check that
$$[P^A_1] \mapsto (0,2), [P^A_2] \mapsto (0,1), [P^A_3] \mapsto (1,4),
[P^A_4] \mapsto (1,1)$$
\noindent is a faithful representation of
$K_0(\mathcal{O}_A)$ as ${\bf Z}_2 \oplus {\bf Z}_6$, and that
$$
[P^B_1] \mapsto (0,1), [P^B_2] \mapsto (1,1), [P^B_3] \mapsto
(0,4), [P^B_4] \mapsto (1,2)$$
\noindent is a faithful representation of
$K_0(\mathcal{O}_B)$ as ${\bf Z}_2 \oplus {\bf Z}_6$. Further, notice
that $[1_{\mathcal{O}_A}] = \sum_1^4 [P^A_i] \mapsto (0,2)$, and
$[1_{\mathcal{O}_B}] \mapsto (0,2)$, as well. Thus, since $A$ and
$B$ are irreducible and there is an
isomorphism between the $K_0$ groups which preserves the class of the
identity, $\mathcal{O}_A \cong \mathcal{O}_B$ by \cite{rordam-K}.
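The $K$-theoretic part of this computation can be checked mechanically.
The following sketch (our illustration; it assumes a version of SymPy
providing \texttt{smith\_normal\_form}) computes the Smith normal form of
$I - A$ and $I - B$, whose invariant factors $(1,1,2,6)$ give ${\bf Z}_2
\oplus {\bf Z}_6$ in both cases, and verifies that the stated images
satisfy the defining relations.
\begin{verbatim}
from sympy import Matrix, ZZ, eye
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[1,1,1,1],[1,0,1,1],[1,1,0,1],[1,1,1,0]])
B = Matrix([[0,1,1,1],[1,0,1,1],[1,1,0,1],[1,1,0,0]])

for X in (A, B):
    # K_0 is the cokernel of I - X^T; transposing does not change the
    # invariant factors, so the Smith normal form of I - X suffices.
    print(smith_normal_form(eye(4) - X, domain=ZZ))  # diag(1, 1, 2, 6)

def check(X, img):
    # img[i] is the stated image of [P_i] in Z_2 + Z_6; verify that it
    # satisfies [P_i] = sum_j X(i,j) [P_j].
    for i in range(4):
        s0 = sum(X[i, j] * img[j][0] for j in range(4))
        s1 = sum(X[i, j] * img[j][1] for j in range(4))
        assert (s0 % 2, s1 % 6) == img[i]

check(A, [(0, 2), (0, 1), (1, 4), (1, 1)])
check(B, [(0, 1), (1, 1), (0, 4), (1, 2)])
\end{verbatim}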
However, it is easy to check by hand using the definition of the primitive
transfer that each of these matrices is primitively equivalent only to its
permutations, and that $A$ and $B$ are not permutations of each other.
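The hand check can also be automated. The sketch below (our illustration)
closes a 0-1 matrix under primitive transfers in both directions and
under permutations; applied to $A$, it should by the above return exactly
the permutations of $A$, with $B$ not among them.
\begin{verbatim}
from itertools import combinations, permutations

def transfers(B):
    """All primitive transfers of the 0-1 matrix B (tuples of tuples)."""
    n, out = len(B), set()
    for p in range(n):
        if not any(B[p]):
            continue
        rows = [m for m in range(n) if m != p and any(B[m])]
        for s in range(len(rows) + 1):
            for M in combinations(rows, s):
                w = [B[p][j] - sum(B[m][j] for m in M) for j in range(n)]
                if any(x not in (0, 1) for x in w):
                    continue              # need B_p = sum E_k + sum B_m
                K = {j for j in range(n) if w[j] == 1}
                if K & set(M):
                    continue              # the k's and m's are distinct
                new = tuple(1 if j in K | set(M) else 0 for j in range(n))
                out.add(tuple(new if i == p else B[i] for i in range(n)))
    return out

def reverse_transfers(C):
    """All matrices B such that C is a primitive transfer of B."""
    n, out = len(C), set()
    for p in range(n):
        S = [j for j in range(n) if C[p][j] == 1]
        for s in range(len(S) + 1):
            for M in combinations(S, s):
                if not S or p in M or any(not any(C[m]) for m in M):
                    continue
                row = [1 if j in set(S) - set(M) else 0 for j in range(n)]
                for m in M:
                    row = [row[j] + C[m][j] for j in range(n)]
                if all(x <= 1 for x in row):
                    out.add(tuple(tuple(row) if i == p else C[i]
                                  for i in range(n)))
    return out

def perms(B):
    n = len(B)
    return {tuple(tuple(B[s[i]][s[j]] for j in range(n)) for i in range(n))
            for s in permutations(range(n))}

def primitive_class(B):
    """BFS closure under transfer, reverse transfer and permutation."""
    B = tuple(tuple(r) for r in B)
    seen, todo = {B}, [B]
    while todo:
        D = todo.pop()
        for N in transfers(D) | reverse_transfers(D) | perms(D):
            if N not in seen:
                seen.add(N)
                todo.append(N)
    return seen

A = ((1,1,1,1),(1,0,1,1),(1,1,0,1),(1,1,1,0))
B = ((0,1,1,1),(1,0,1,1),(1,1,0,1),(1,1,0,0))
cls = primitive_class(A)
print(len(cls), cls == perms(A), B in cls)
\end{verbatim}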
\section{Reverse Primitive Equivalence}
\label{sec-revprimeq}
In this section, we define a modified version of primitive
equivalence using column operations instead of row operations. Recall
that a cofinal vertex is one from which any infinite path can be
intercepted.
\begin{defn}
Suppose that $B$ and $C$ are 0-1 (possibly infinite) square matrices. We
say $C$ is a {\it reverse primitive transfer} of $B$ if $C^T$ is a
primitive transfer of $B^T$ at a cofinal vertex. We say that $B$ and $C$
are {\it reverse primitively equivalent} if there is a sequence $B = D_1,
D_2, \dots, D_q = C$ such that for each $i < q$, $D_{i+1}$ is a
permutation of $D_i$, $D_{i+1}$ is a reverse primitive transfer of $D_i$,
or $D_i$ is a reverse primitive transfer of $D_{i+1}$.
\end{defn}
Two graphs $E$ and $F$ are \emph{reverse} graphs if the vertex matrices
$B_E$ and $B_F$ are transposes of each other, that is, $E$ and $F$ have
the same vertices and their edges have opposite directions. Disregarding
cofinality, two graphs are reverse primitively equivalent if their reverse
graphs are primitively equivalent. The following example shows that
reverse primitive equivalence of graphs $E$ and $F$ does not imply that
$C^*(E) \cong C^*(F)$.
\begin{ex}
\label{ex-rpe}
If
$$
B = \left( \begin{array}{cccc}
0 & 1 & 1 & 1 \\
1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0
\end{array}
\right) \hbox{ and }
C = \left( \begin{array}{cccc}
0 & 1 & 0 & 1 \\
1 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0
\end{array}
\right) \hbox{,}
$$
\noindent then $C^T$ is a primitive transfer of $B^T$ (via $B^T_3 =
B^T_2$). However, $\mathcal{O}_C \cong \mathcal{O}_3$ and $\mathcal{O}_B
\cong \mathcal{O}_3 \otimes M_2$, so they are not isomorphic
\cite{paschke-salinas}.
\end{ex}
Suppose that $E$ and $F$ are row-finite graphs and that $F$ is a
reverse primitive transfer of $E$ at vertex 1. That is, $B_1^T = E_{k_1}
+ \cdots + E_{k_r} + B_{m_1}^T + \cdots + B_{m_s}^T$. Where appropriate,
we use the notation established prior to Lemma~\ref{lem-KUM}.
We have analogues of some of the preliminary lemmas for
Proposition~\ref{prop-main}, and their proofs are similar:
\begin{lem}
\label{lem-KUM2}
If $f^{i1}$ exists, then $i \in K \cup M.$
\end{lem}
The definition of {\it new} must be altered slightly. $f \in F^1$ is
said to be a {\it new} edge if it is of the form $f^{m1}$ for some $m \in
M$.
\begin{lem}
\label{lem-oldnewhasinverse}
If $f^{im}f^{m1} \in F^2$ for some $m \in M$, then $e^{i1}$ exists.
\end{lem}
Note that, with the modified definition of new,
Lemma~\ref{lem-fijnotneweijexists} and Lemma~\ref{lem-no2consecnew} are
true exactly as stated.
\begin{prop}
\label{prop-reverse}
If the row-finite graph $F$ is a reverse primitive
transfer of $E$ at $v$, then
$\mathcal{G}_{(E,\{v\})}
\cong
\mathcal{G}_{(F,\{v\})}$.
\end{prop}
\begin{proof}
Let $B$ be the vertex matrix of $E$ and $C$ the vertex matrix of $F$.
Without loss of generality, assume $B_1^T = E_{k_1} + \cdots + E_{k_r} +
B_{m_1}^T + \cdots + B_{m_s}^T$.
We again define a map $\phi: E^1 \rightarrow F^1 \cup F^2$ by
$$
\phi(e^{ij}) = \left\{ \begin{array}{ll}
f^{im}f^{m1} & \hbox{if $j = 1$ and $i \not\in K$}
\\
f^{ij} & \hbox{else}
\end{array}
\right.
$$
\noindent and extend it to a map $\phi: \mathcal{G}_E \rightarrow
\mathcal{G}_F$. By arguments similar to those in the proof of
Proposition~\ref{prop-main}, $\phi$ is an injective groupoid homomorphism.
In this case, however, it fails to be onto, and this is because of the
difference between Lemma~\ref{lem-newoldhasinverse} and
Lemma~\ref{lem-oldnewhasinverse}. Thus it is necessary to pass to pointed
graphs.
It is not hard to see that $\phi$ restricts to an injective groupoid
homomorphism from $\mathcal{G}_{(E,\{v\})}$ to
$\mathcal{G}_{(F,\{v\})}$ which, of course, we also denote by
$\phi$.
We claim that any finite or infinite path whose first edge
is not new can be
pulled back through $\phi$. First, find all the new edges in the path.
These are preceded by edges
which are not new, and these (not new)-new pairs can be pulled back. The
remaining edges are all not new and can be pulled back individually.
Now, fix any $[\alpha, x, \beta]$, with $s(\alpha) = s(\beta) = 1$.
Since $\alpha$ and $\beta$ start at $1$, their first edge cannot be new
(because $1 \not\in M$). Hence they can be pulled back through $\phi$.
Now, if $x_1$ is not new, $x$ can be pulled back as well, so $[\alpha, x,
\beta]$ has an inverse image. Since $\phi$ preserves source and range,
this inverse image will be in $\mathcal{G}_{(E,\{v\})}$. If, on
the
other hand, $x_1$ is new, then we know $x_2$ is not new, so we pull back
the triple $[\alpha x_1, x_2 x_3 \dots , \beta x_1]$. This shows that
$\phi: \mathcal{G}_{(E,\{v\})} \rightarrow
\mathcal{G}_{(F,\{v\})}$ is onto.
It is not hard to check that $\phi$ is continuous. We show that
it is open. Clearly, $\phi(Z(\alpha, \beta)) \subseteq
Z(\phi(\alpha), \phi(\beta))$. Here, however, the inclusion may be
proper: the image of a path under $\phi$ never begins with a new edge, so
an element $[\phi(\alpha), y, \phi(\beta)]$ for which $y_1$ is new does
not lie in $\phi(Z(\alpha, \beta))$. On the other hand, if $y_1$ is not
new, then $y$ has an inverse image $x$ with $s(x) = r(\alpha)$, and
$[\phi(\alpha), y, \phi(\beta)] = \phi[\alpha, x, \beta] \in
\phi(Z(\alpha, \beta))$. Since
$$
Z(\phi(\alpha), \phi(\beta)) = \bigcup_{s(f) = r (\alpha)}
Z(\phi(\alpha)f, \phi(\beta)f),
$$
it follows that
$$
\phi(Z(\alpha, \beta)) = \bigcup \{Z(\phi(\alpha)f, \phi(\beta)f)\,|\,
s(f) = r(\alpha) \hbox{ and } f \hbox{ is not new}\} \hbox{,}
$$
a union of open sets. Thus $\phi(Z(\alpha, \beta))$ is open, which shows
that $\phi$ is open.
\end{proof}
\begin{cor}
If $E$ is a graph with no sinks and $F$ is reverse primitively
equivalent to $E$, then $C^*(E)$ is Morita equivalent to $C^*(F)$.
\end{cor}
\begin{proof}
If $F$ is a reverse primitive transfer of $E$, then the
result follows easily from the previous proposition and the fact that
pointing a
graph at a cofinal vertex does not change the Morita equivalence class of
its $C^*$-algebra \cite{kprr}. Since reverse primitive equivalence is
generated by the reverse primitive transfer and permutation, the result
follows.
\end{proof}
\section{Explosions}
\label{sec-explosions}
Given a graph $E$, its {\it edge matrix} $A_E$ is an $E^1
\times
E^1$ matrix defined by
$$
A_E(e,f) = \left\{\begin{array}{ll}
1 & \hbox{if $r(e) = s(f)$}\\
0 & \hbox{otherwise.}
\end{array}
\right.
$$
The {\it adjoint graph} of $E$ is the graph whose vertex matrix is the
edge matrix of $E$. In \cite{wat-prim} the explosion of a graph was
defined as a
generalization of the adjoint graph, and it was shown that exploding a
graph does not change its $C^*$-algebra. Since we work with the vertex
matrix instead of the adjacency matrix, our edges are backwards. To be
consistent with our earlier terminology, we shall call {\it reverse
explosion} what Enomoto, Fujii, and Watatani called explosion, and we
develop a very similar notion, which we shall call {\it explosion}.
Let $E$ be a graph. Let $v \in E^0$ satisfy $|s^{-1}(v)| > 1$, and fix an
edge $e$ whose source is $v$. First assume that $e$
is not a loop, that is,
$v\not=r(e)$. Denote the set of non-loop edges pointing to $v$ by
$K=\{k_1,k_2,\dots\}$ and the set of non-loop edges different from $e$
starting
at $v$ by $M=\{m_1,m_2,\dots\}$. The \emph{edge explosion} $F$ of $E$ at
the edge
$e$ is defined as follows. Split the vertex $v$ into two vertices $v'$ and
$v''$. The source of $e$ is replaced by $v'$. The source of each edge in
$M$ is replaced by $v''$. Every edge in $K$ is replaced by a pair of edges
$k'$, $k''$ having the same source as $k$ and pointing to $v'$ and $v''$
respectively. If there is a loop edge $f$ at $v$ then it is replaced by a loop
$f''$ at $v''$ and an edge $f'$ pointing from $v''$ to $v'$.
The
following picture shows an example of $E$ and its explosion at $e$.
$$
\xymatrix{
\bullet \ar[r]^{k_1} & v \ar@{.>}[r]^e \ar@<2pt>[dl]^{m_2} \ar[dr]^{m_1} \ar@(lu,u)[]^f & \bullet\\
\bullet \ar@<2pt>[ur]^{k_2} & & \bullet
}
\qquad\qquad
\xymatrix{
\bullet \ar[r]^{k_1'}_<>(.4){k_1''} \ar[rd] & {v'} \ar@{.>}[r]^e & \bullet\\
\bullet \ar[ur]^<>(.1){k_2'} \ar@<2pt>[r]^{k_2''} & {v''}
\ar@(d,rd)[]_{f''}\ar[u]_{f'} \ar[r]^{m_1}\ar@<2pt>[l]^{m_2} & \bullet
}
$$
Next assume that $e$ is a loop. To get the explosion at $e$, split the
vertex
$v$ into $v'$ and $v''$ and change the edges in $M$ and $K$ as before.
Also, replace $e$ by a loop $e'$ at $v'$ and an edge $e''$ pointing from
$v'$ to $v''$. The following picture shows an example of $E$ and its
explosion at a loop $e$.
$$
\xymatrix{
\bullet \ar[r]^{k_1} & v \ar@<2pt>[dl]^{m_2} \ar[r]^{m_1} \ar@{.>}@(lu,u)[]^e & \bullet\\
\bullet \ar@<2pt>[ur] ^{k_2}& &
}
\qquad\qquad
\xymatrix{
\bullet \ar[r]^{k_1'}_<>(.4){k_1''} \ar[rd] & {v'} \ar@{.>}[d]^{e''}
\ar@{.>}@(lu,u)[]^{e'} & \bullet\\
\bullet \ar[ur]^<>(.1){k_2'} \ar@<2pt>[r]^{k_2''} & {v''} \ar@<2pt>[l]^{m_2} \ar[ru]_{m_1}
}
$$
\begin{defn}
Two graphs $G$ and $E$ are said to
be {\it explosion equivalent} if there is a finite sequence
$E=F_0,F_1,\dots,F_n=G$ of graphs such that for every $i$, either
$F_i$ is an edge explosion of $F_{i+1}$ or $F_{i+1}$ is an edge
explosion of $F_i$.
\end{defn}
Consider the following more general notion of explosion, which we call
{\it vertex explosion}. Fix $v \in E^0$ with $|s^{-1}(v)| > 1$. Instead
of exploding at an edge whose source is $v$, we explode at a subset of
the edges whose source is $v$. Write $s^{-1}(v) = M_1 \cup M_2$, where
$M_1$ and $M_2$ are disjoint and nonempty. Again we split the vertex $v$
into two vertices $v'$ and $v''$ and put an edge from every vertex in
$s(r^{-1}(v)) \setminus \{v\}$ to both $v'$ and $v''$. Also, put an edge
from $v'$ to every vertex in $r(M_1) \setminus \{v\}$ and from $v''$ to
every vertex in $r(M_2) \setminus \{v\}$. If there is a loop edge in
$M_1$, we add a loop at $v'$ and an edge from $v'$ to $v''$; if there is
a loop edge in $M_2$, we add a loop at $v''$ and an edge from $v''$ to
$v'$.
If $F$ is an explosion of $E$ at $v$, we always identify $E^0 \setminus \{v\}$
with $F^0 \setminus \{v',v''\}$.
The following lemma gives a characterization of vertex explosion in terms
of the vertex matrix. The proof follows immediately from the definition.
\begin{lem}
\label{lem-explosionmatrix}
Let $F$
be an explosion of $E$ at vertex $v$ with respect to the decomposition
$s^{-1}(v) = M_1
\cup M_2$. Let $B$ and $C$ respectively
denote
the vertex matrices of $E$ and $F$. Then we have the following:
\begin{enumerate}
\item[(i)] $B_{uw} = C_{uw}$ for every $u \in E^0 \setminus
\{v\}$
and
every $w
\in
F^0 \setminus \{v',v''\}${\rm ;}
\item[(ii)] $C_{wv'} = C_{wv''}$ for every $w \in F^0${\rm ;}
\item[(iii)] $C_{v''w} = 1 \Leftrightarrow e^{vw} \in M_2$ and $C_{v'w} =
1 \Leftrightarrow e^{vw} \in M_1$ for every $w \in F^0
\setminus \{v', v''\}${\rm .}
\end{enumerate}
Further, if $B$ and $C$ are 0-1 matrices satisfying {\rm (i)-(iii)},
then the graph of $C$ {\rm(}i.e. the graph whose vertex
matrix is $C${\rm)} is an explosion at vertex $v$ of the graph of $B$.
\end{lem}
\begin{defn}
Let $E$ be a graph and suppose that $v \in E^0$ satisfies $|s^{-1}(v)| = k
> 1$. Order the vertices of $E$ so that $v$ is vertex 1, and let
$B^{(1)}$ denote the vertex matrix of $E$. Now, for $m=1,2,\dots,k-1$,
perform the following procedure. First find the largest $j$ such that
$B^{(m)}_{1j} = 1$. Next, insert the row $E_j$ between rows $1$ and $2$
of $B^{(m)}$. Then duplicate the first column of the resulting matrix.
Finally, change the 1 in the $(1,j+1)$ position (the entry that occupied
position $(1,j)$ before the column was duplicated) to a 0. Name this new
matrix $B^{(m+1)}$. The {\it complete explosion of $E$ at $v$} is defined
to be the graph of the matrix $B^{(k)}$.
\end{defn}
Note that, by the previous lemma, each of the $k-1$ steps in the above
procedure corresponds to an explosion at an edge. We now
offer the following example to guide the reader through the definition
of complete explosion.
\begin{ex} The graph on the left can be completely exploded in two steps
by exploding at the dotted edge each time.
The dotted edge becomes the dashed edge in the exploded graph at each stage.
In the matrices, $*$ denotes the unaffected parts of the matrix.
$$
\xymatrix{
\\
3 \ar@<2pt>[r] & 1 \ar@(u,ru)[] \ar[r] \ar@{.>}@<2pt>[l] & 2
}
\qquad\qquad\qquad
\xymatrix{
\\
4 \ar@<2pt>[r] \ar@<2pt>[rd] & 1 \ar@(u,ru)[] \ar@{.>}[r] \ar[d] & 3 \\
& 2 \ar@{-->}@<2pt>[ul]
}
\qquad\qquad\qquad
\xymatrix{
& 2 \ar@{-->}@<2pt>[dr] \\
5 \ar@<2pt>[r] \ar@<2pt>[rd] \ar[ur] & 1 \ar@(u,ru)[] \ar@<2pt>[u] \ar[d] & 4 \\
& 3 \ar@<2pt>[ul]
}
$$
$$
B^{(1)}=\left(\begin{matrix}
1 & 1 & 1 \\
0 & * & * \\
1 & * & *
\end{matrix}\right)
\qquad
B^{(2)}=\left(\begin{matrix}
1 & 1 & 1 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & * & * \\
1 & 1 & * & *
\end{matrix}\right)
\qquad
B^{(3)}=\left(\begin{matrix}
1 & 1 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & * & * \\
1 & 1 & 1 & * & *
\end{matrix}\right)
$$
\end{ex}
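The procedure is easily mechanized. The following Python sketch (our
illustration; indices are 0-based, so the entry zeroed at each step is
the one that occupied position $(1,j)$ before the column duplication)
completely explodes the graph above at vertex 1 and reproduces $B^{(3)}$
with the starred entries filled in.
\begin{verbatim}
def complete_explosion(B):
    """Complete explosion of the 0-1 vertex matrix B at vertex 1 (index 0)."""
    B = [list(row) for row in B]
    k = sum(B[0])                        # k = |s^{-1}(v)|
    for _ in range(k - 1):
        n = len(B)
        j = max(c for c in range(n) if B[0][c] == 1)
        B.insert(1, [1 if c == j else 0 for c in range(n)])  # the row E_j
        for row in B:
            row.insert(1, row[0])        # duplicate the first column
        B[0][j + 1] = 0                  # zero the shifted (1, j) entry
    return B

B1 = [[1, 1, 1],                         # the graph on the left above
      [0, 0, 0],
      [1, 0, 0]]
for row in complete_explosion(B1):
    print(row)
# [1,1,1,0,0] / [0,0,0,1,0] / [0,0,0,0,1] / [0,0,0,0,0] / [1,1,1,0,0]
\end{verbatim}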
The reader may check that if $E$ is a graph with no sinks, the complete
explosion of $E$ at every vertex with more than one edge emanating from it
yields the adjoint graph of $E$.
\begin{lem}
\label{lem-samerel}
Edge explosion and vertex explosion generate the same equivalence
relation.
\end{lem}
\begin{proof}
Since edge explosion is a special case of vertex explosion, it suffices to
show that an arbitrary graph $E$ and any vertex explosion $F$ of $E$ can
be edge exploded into a common graph.
Now if $E$ is any graph and $F$ is a vertex explosion of $E$ at $v$, the
reader may verify by examining the vertex matrices that the complete
explosion of $E$ at vertex $v$ coincides with the graph obtained by
performing a complete explosion of $F$ at $v'$ and $v''$.
\end{proof}
The following is closely related to the notion of explosion defined in
\cite{wat-prim}.
\begin{defn}
A graph $F$ is a \emph{reverse explosion} of the graph $E$ at a vertex
$v$ if the reverse graph of $F$ is the explosion of the reverse graph of
$E$ at $v$ and $v$ is cofinal. Two graphs are said to be {\it
reverse explosion equivalent} if there is a finite sequence of reverse
explosions connecting them.
\end{defn}
\begin{prop}
\label{prop-expleq}
If $E$ is a row-finite graph and $F$ is an explosion of
$E$, then the groupoids of $E$ and $F$ are
isomorphic.
\end{prop}
\begin{proof} By Lemma~\ref{lem-samerel}, it suffices to prove the case
when $F$ is the explosion of
$E$ at
an edge $e$. First we assume that $e$ is not a loop edge, and we use the
notation $f$, $M$ and $K$ as in the definition of explosion. We define a
map
$\phi:E^\infty\to F^\infty$ as follows. If $x\in E^\infty$ then $\phi$ makes
the following replacements on path segments of $x$:
\begin{align*}
\phi(\cdots ke\cdots)&=\cdots k'e\cdots \\
\phi(\cdots k\overbrace{f\cdots ff}^{n+1} e\cdots)&=
\cdots k''\overbrace{f''\cdots f''}^n f'e\cdots\\
\phi(\cdots k\overbrace{f\cdots f}^{n} m\cdots)&=
\cdots k''\overbrace{f''\cdots f''}^n m\cdots \\
\phi(\overbrace{f\cdots ff}^{n+1} e\cdots)&=
\overbrace{f''\cdots f''}^n f'e\cdots\\
\phi(\overbrace{f\cdots f}^{n} m\cdots)&=
\overbrace{f''\cdots f''}^n m\cdots\\
\phi(\cdots k\overbrace{fff\cdots }^{\hbox{\tiny{all} $f$}} ) &=
\cdots k''\overbrace{f''f''f''\cdots}^{\hbox{\tiny{all} $f''$}} \\
\phi(\overbrace{fff \cdots}^{\hbox{\tiny{all} $f$}} )&=
\overbrace{f''f''f''\cdots}^{\hbox{\tiny{all} $f''$}}
\end{align*}
where $k\in K$, $m\in M$ and $n$ is a non-negative integer. It is easy but tedious to check
that $\phi:\mathcal G_E\to \mathcal G_F$ defined by
$\phi(x,k,y)=(\phi(x),k,\phi(y))$ is a bijective homomorphism.
It remains to check that it is open
and continuous. First we extend $\phi$ to a subset of $E^*$. If
$\alpha\in E^*$ and $r(\alpha) \neq v$,
then we define $\phi(\alpha)$ similarly to the above. To show that $\phi$
is
open it
suffices to check that $\phi(Z(\alpha,\beta))$ is open. First suppose
that $r(\alpha)=r(\beta) \neq v$. In this case,
\begin{align*}
\phi(Z(\alpha,\beta)) &=
\{\phi(\alpha x,|\alpha|-|\beta|,\beta x) : x\in E^\infty, s(x)=r(\alpha) \}\\
&=
\{(\phi(\alpha)\phi(x),|\phi(\alpha)|-|\phi(\beta)|,\phi(\beta)\phi(x)) : x\in E^\infty, s(x)=r(\alpha) \}\\
&=
\{(\phi(\alpha)y,|\phi(\alpha)|-|\phi(\beta)|,\phi(\beta)y) : y\in F^\infty,
s(y)=r(\phi(\alpha)) \}\\
&=
Z(\phi(\alpha), \phi(\beta)).
\end{align*}
Now, if $r(\alpha) = r(\beta) = v$, we have three cases. If $\alpha =
\alpha'k$ and $\beta = \beta'l$ for some $k,l \in K$, then
$\phi(Z(\alpha, \beta)) = Z(\phi(\alpha')k', \phi(\beta')l') \cup Z(\phi(\alpha')k'',
\phi(\beta')l'')$. If $\alpha = \alpha'k$ and $\beta = \beta'lff\cdots f$, then
$\phi(Z(\alpha, \beta)) = Z(\phi(\alpha')k'', \phi(\beta')l''f''f''\cdots f'')$. Finally,
in the case where $\alpha = \alpha'kff\cdots f$ and $\beta =
\beta'lff\cdots f$, we have
$\phi(Z(\alpha, \beta)) = Z(\phi(\alpha')k''f''f''\cdots f'',
\phi(\beta')l''f''f''\cdots f'')$.
Continuity of $\phi$ follows from a similar argument.
The case when $e$ is a loop edge is handled similarly, using a slightly
different definition for $\phi$, which now makes the following
replacements on $x\in E^\infty$:
\begin{align*}
\phi(\cdots km\cdots)&=\cdots k''m\cdots \\
\phi(\cdots k\overbrace{e\cdots ee}^{n+1} m\cdots)&=
\cdots k'\overbrace{e'\cdots e'}^n e''m\cdots \\
\phi(\overbrace{e\cdots ee}^{n+1} m\cdots)&=
\overbrace{e'\cdots e'}^n e''m\cdots \\
\phi(\cdots k \overbrace{ee\cdots}^{\hbox{\tiny{all} $e$}}) &=
\cdots k \overbrace{e'e'\cdots}^{\hbox{\tiny{all} $e'$}} \\
\phi(\overbrace{ee\cdots}^{\hbox{\tiny{all} $e$}}) &=
\overbrace{e'e'\cdots}^{\hbox{\tiny{all} $e'$}}
\end{align*}
where $k\in K$, $m\in M$ and $n$ is a non-negative integer.
\end{proof}
\begin{cor}
\label{cor-expleq}
If $E$ is a graph and $F$ is explosion equivalent to $E$, then
$C^*(E) \cong C^*(F)$.
\end{cor}
\begin{proof}
If $E$ has no sinks, this follows immediately from the proposition. If
$E$ has sinks, then one can readily verify that $\tilde{F}$ is an
explosion of $\tilde{E}$ and so $C^*(E) \cong C^*(\mathcal{G}_{\tilde{E}})
\cong C^*(\mathcal{G}_{\tilde{F}}) \cong C^*(F)$.
\end{proof}
The following two results are proven similarly to
Proposition~\ref{prop-expleq} and Corollary~\ref{cor-expleq}.
\begin{prop} If $E$ is a row-finite graph and $F$ is the reverse
explosion of $E$ at an edge whose source is $v$, then the groupoid
of $E$
pointed at $v$ and the groupoid of $F$ pointed at $v''$ are isomorphic.
\end{prop}
\begin{cor} If $E$ is a row-finite graph with no sinks and $F$ is a
reverse explosion of $E$ then $C^*(E)$ and $C^*(F)$ are Morita equivalent.
\end{cor}
\section{Elementary strong shift equivalence}
\label{sec-SSE}
A matrix $A$ is \emph{elementary strong shift equivalent} to a matrix $B$
if there
are matrices $R$ and $S$ such that $A=RS$ and $B=SR$ \cite{williams}.
Note that, for any
permutation matrix $P$, $PBP^{-1}$ and $B$ are elementary strong shift
equivalent via
$R = PB$, $S=P^{-1}$. Thus, any two vertex matrices of the
same graph are elementary strong shift equivalent. Two graphs $E$ and $F$
are said to be elementary strong shift equivalent if
their vertex matrices $B_E$ and $B_F$ are elementary strong shift
equivalent.
Note that elementary strong shift equivalence is not an equivalence relation.
The equivalence relation generated by elementary strong shift equivalence is
called {\it strong shift equivalence}.
Note that if $B_E=RS$ and $B_F=SR$ then the rows and columns of $R$
can be indexed by $E^0$ and $F^0$ respectively. Also the rows and columns of
$S$ can be indexed by $F^0$ and $E^0$ respectively. Using this property we
define a bipartite \emph{imprimitivity graph} $X$ as follows: the
set of vertices of $X$ is
the disjoint union of $E^0$ and $F^0$ and the vertex matrix of $X$ is
$$
B_X=\left(\begin{matrix}
0 & R \\
S & 0
\end{matrix}\right).
$$
The construction of $X$ is due to Ashton \cite{ashton}.
\begin{ex}
Using
$
R=\left(\begin{smallmatrix}
1 & 1 & 0 \\
0 & 0 & 1
\end{smallmatrix}\right)
$ and
$
S=\left(\begin{smallmatrix}
1 & 0 \\
0 & 1 \\
0 & 1
\end{smallmatrix}\right)
$
we have $E$, $X$ and $F$
$$
\xymatrix{
\\
u \ar@(dl,ul)[] \ar[r] & v \ar@(rd,ru)[]
}
\qquad\qquad\qquad
\xymatrix{
u \ar@<2pt>[d]\ar[dr] & v \ar@<2pt>[dr] \\
p \ar@<2pt>[u] & q \ar[u] & r \ar@<2pt>[ul]
}
\qquad\qquad\qquad
\xymatrix{
\\
p \ar@(dl,ul)[] \ar[r] & q \ar[r] & r \ar@(rd,ru)[]
}
$$
with vertex matrices
$$
B_E=\left(\begin{matrix}
1 & 1 \\
0 & 1
\end{matrix}\right)
\qquad\qquad
B_X=\left(\begin{matrix}
0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0
\end{matrix}\right)
\qquad\qquad
B_F=\left(\begin{matrix}
1 & 1 & 0 \\
0 & 0 & 1 \\
0 & 0 & 1
\end{matrix}\right)
$$
\end{ex}
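The example is verified by direct matrix multiplication; a short NumPy
check (our illustration):
\begin{verbatim}
import numpy as np

R = np.array([[1, 1, 0],
              [0, 0, 1]])
S = np.array([[1, 0],
              [0, 1],
              [0, 1]])

print(R @ S)   # B_E = [[1, 1], [0, 1]]
print(S @ R)   # B_F = [[1, 1, 0], [0, 0, 1], [0, 0, 1]]
B_X = np.block([[np.zeros((2, 2), dtype=int), R],
                [S, np.zeros((3, 3), dtype=int)]])
print(B_X)     # the vertex matrix of the imprimitivity graph X
\end{verbatim}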
\begin{prop}
If $E$ and $F$ are elementary strong shift equivalent, row-finite graphs
and $X$ is the imprimitivity graph, then the
groupoid of
$X$ pointed at $E^0$ is isomorphic to $\mathcal G_E$ and the groupoid of $X$
pointed at $F^0$ is isomorphic to $\mathcal G_F$.
\end{prop}
\begin{proof}First note that $E^0$ and $F^0$ are automatically cofinal pointing sets.
By symmetry it suffices to show that $\mathcal{G}_{(X,E^0)} \cong \mathcal G_E$. By the
construction of $X$ we have a unique bijection $\phi$ from $E^1$ onto the
set of length-two paths in $X$ with source and range in $E^0$, such that
$s=s\circ \phi$ and $r=r\circ \phi$. We extend $\phi$ to $E^*\cup
E^\infty$. It is easy to check that $\phi:\mathcal G_E\to \mathcal
G_{(X,E^0)}$ defined by
$$
\phi[\alpha,x,\beta]=[\phi(\alpha),\phi(x),\phi(\beta)]
$$
is an isomorphism.
\end{proof}
The $C^*$-algebras of strong shift equivalent graphs are not necessarily
isomorphic (see Example~\ref{ex-counter1}), but we have:
\begin{cor} If $E$ and $F$ are strong shift equivalent, row-finite
graphs with no sinks then $C^*(E)$ and $C^*(F)$ are Morita equivalent.
\end{cor}
Following Ashton \cite{ashton}, we say a 0-1 matrix is {\it column
subdivision} if each
of its columns
contains at most one 1. Two matrices $A$ and $B$ are said to be
elementary strong
shift equivalent with column subdivision if $A = RS$, $B=SR$, and either
$R$ or $S$ is column subdivision. Likewise, two graphs are said to be
elementary strong shift equivalent with {\it column subdivision} if their
vertex matrices are. It was shown in both \cite{ashton} and
\cite{wat-prim} that if two finite 0-1 matrices $A$ and
$B$ are elementary strong shift equivalent with column subdivision, then
$\mathcal{O}_A \cong \mathcal{O}_B$. The following
result, combined with Corollary~\ref{cor-expleq}, provides an alternate
proof of a special case of this fact:
\begin{prop}
Let $E$ and $F$ be graphs with no sinks. Suppose that $E$ and
$F$ have $n$ and $n+1$ vertices
respectively. Then $E$ and $F$ are elementary strong shift equivalent
with column
subdivision if
and only if $F$ is an explosion of $E$.
\end{prop}
\begin{proof}
Denote the vertex matrix of $E$ by $B$ and the vertex matrix of $F$ by
$C$. Now suppose that $F$ is an explosion of $E$, with vertex
$v$ splitting into $v'$ and $v''$. Without loss of generality, we may
assume that $v$ is vertex 1 in $E$ and $v'$ and $v''$ are the first two
vertices of $F$. That is, the first row and column of $B$ corresponds to
$v$ and the first two rows and columns of $C$ correspond to $v'$ and
$v''$.
Define $S$ to be $C$ with the first column deleted. That is, $S_{ij} =
C_{ij}$ for $i=1,\dots,n+1$, $j=1,\dots,n$. If $R$ is the following $n
\times (n+1)$ column subdivision matrix
$$
R=\left(\begin{array}{ccccc}
1 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
& & & \ddots & \\
0 & 0 & 0 & 0 & 1
\end{array}\right)
$$
\noindent then $B=RS$ and $C=SR$.
Now suppose that $E$ and $F$ are elementary strong shift equivalent with
column
subdivision. That is, for any choices $B$ and $C$ of vertex matrices of
$E$ and $F$, there exist $R,S$ such that $B = RS$, $C = SR$, and either
$S$ or $R$ is column
subdivision. Now, $S$ is an $(n+1) \times n$ matrix, so in order for it
to be column subdivision, it must have a zero row. But a zero row in $S$
would yield a zero row in $C$, and hence a sink in $F$. Thus it must be
$R$ which is column subdivision.
Since $R$ is an $n \times (n+1)$ matrix which is column subdivision and
has no zero rows, there exist an $n \times n$ permutation matrix $P$ and
an $(n+1) \times (n+1)$ permutation matrix $Q$ such that
$$
PRQ=\left(\begin{array}{ccccc}
1 & 1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
& & & \ddots & \\
0 & 0 & 0 & 0 & 1
\end{array}\right)\hbox{.}
$$
\noindent So by replacing $B$ with $PBP^{-1}$ and $C$ with $QCQ^{-1}$, we
may assume that $R$ has this form.
Now, if we index the rows of $R$ and the columns of $S$ by
$\{1,2,\dots,n\}$ and the columns of $R$ and the rows of $S$ by
$\{0,1,\dots,n\}$, then we have the following:
\begin{enumerate}
\item[(i)] $B_{ij} = C_{ij}$ for $i=2,\dots,n$, $j=1,\dots,n$;
\item[(ii)] $C_{i0} = C_{i1}$ for $i=0,\dots,n$;
\item[(iii)] $C_{0j} + C_{1j} \leq 1$ for $j=0,\dots,n$.
\end{enumerate}
\noindent (i) and (ii) are easy to check. To see (iii), suppose that for
some $j$, $C_{0j}$ and $C_{1j}$ are both 1. If $j > 1$, then
$$2=C_{0j}+C_{1j}=\sum_k S_{0k}R_{kj} + \sum_l S_{1l}R_{lj} =
S_{0j}+S_{1j} = \sum_m R_{1m}S_{mj} = B_{1j},$$
which is a contradiction. If, on the other hand, $j \leq 1$, then similar
calculations show that $B_{11} = 2$.
These three facts imply that $B$ and $C$ satisfy the three conditions of
Lemma~\ref{lem-explosionmatrix} for some suitable choice of $v$, $M_1$ and
$M_2$. Thus $F$ is an explosion of $E$.
\end{proof}
\begin{rem}
Note that the restriction on sinks is necessary in the preceding
proposition, since if
$$
B=\left(\begin{array}{cc}
1 & 1\\
0 & 0\\
\end{array}\right)\hbox{ and }
C=\left(\begin{array}{ccc}
1 & 1 & 1\\
0 & 0 & 0\\
0 & 0 & 0\\
\end{array}\right)\hbox{,}
$$
\noindent then $B$ and $C$ are elementary strong shift equivalent with
column
subdivision via
$$
R=\left(\begin{array}{ccc}
1 & 1 & 1\\
0 & 0 & 0\\
\end{array}\right)\hbox{ and }
S=\left(\begin{array}{cc}
1 & 0\\
0 & 1\\
0 & 0\\
\end{array}\right)\hbox{,}
$$
\noindent but the graph of $C$ is not an explosion of the graph of $B$.
\end{rem}
\section{Counterexamples}
\label{sec-counterexamples}
In this section we collect several examples which show that neither
primitive equivalence nor reverse primitive equivalence is implied by
any of the other equivalence relations discussed in this paper.
\begin{ex}
\label{ex-counter1}
Elementary strong shift equivalence does not imply primitive
equivalence. This is trivially true because two matrices which are
primitively equivalent must be the same size, while elementary strong
shift
equivalence may change the size. But this example, taken from
\cite{wat-prim}, shows that elementary strong shift equivalent matrices
need not be
primitively equivalent, even if they have the same size: if
$$
R=\left(\begin{array}{ccc}
1 & 0 & 0\\
1 & 0 & 0\\
0 & 1 & 1\\
\end{array}\right)\hbox{ and }
S=\left(\begin{array}{ccc}
0 & 0 & 1\\
1 & 0 & 0\\
0 & 1 & 1\\
\end{array}\right)\hbox{,}
$$
\noindent then $\mathcal{O}_{RS} \cong \mathcal{O}_3$ and
$\mathcal{O}_{SR} \cong \mathcal{O}_3 \otimes M_2$ are not
isomorphic \cite{paschke-salinas}.
Hence the graph corresponding to $RS$ cannot be primitively equivalent to
the graph corresponding to $SR$. This example also answers negatively a
question
posed in \cite{ashton}, namely, do elementary strong shift equivalent
graphs always yield isomorphic algebras?
\end{ex}
\begin{ex} Elementary strong shift equivalence does not imply reverse
primitive
equivalence. Using $R$ and $S$ from the previous example, $S^TR^T =
(RS)^T$ and $R^TS^T = (SR)^T$ are not primitively
equivalent. This can be verified from the table on page 450 of
\cite{wat-prim}. We remark in passing that the second graph in the final
row of that table is misprinted. There should not be a loop on the top
vertex, and there should be a loop added to the lower left vertex.
\end{ex}
\begin{ex} Reverse primitive equivalence does not imply primitive
equivalence. Example~\ref{ex-rpe} shows two matrices which are reverse
primitively equivalent, but whose Cuntz-Krieger algebras are
not isomorphic. Hence they are not primitively equivalent.
\end{ex}
\begin{ex} Explosion equivalence does not imply primitive
equivalence and reverse explosion equivalence does not imply reverse
primitive equivalence.
Consider the matrices
$$
B=\left(\begin{array}{cccc}
1 & 1 & 0 & 1\\
0 & 0 & 1 & 0\\
1 & 1 & 1 & 0\\
1 & 1 & 0 & 1\\
\end{array}\right)\hbox{ and }
C=\left(\begin{array}{cccc}
1 & 1 & 0 & 0\\
0 & 0 & 1 & 1\\
1 & 1 & 1 & 0\\
1 & 1 & 0 & 1\\
\end{array}\right)\hbox{.}
$$
\noindent Both are explosions of
$A=
\left(\begin{smallmatrix}
1 & 1 & 1 \\
1 & 1 & 0 \\
1 & 0 & 1
\end{smallmatrix}\right)
$
so they are explosion equivalent. But they are not primitively
equivalent. The reader with a spare afternoon may verify this by checking
that there are 60 elements in the primitive equivalence class of $C$, and
$B$ is not one of them. Also note that, since $A$ is irreducible, every
vertex is cofinal. Thus $B^T$ and $C^T$ are reverse explosion
equivalent, but not reverse primitively equivalent.
\end{ex}
\begin{ex} Reverse explosion equivalence does not imply primitive
equivalence and explosion equivalence does not imply reverse primitive
equivalence. If
$$
B=\left(\begin{array}{ccccc}
0 & 0 & 0 & 1 & 0\\
0 & 1 & 1 & 0 & 0\\
1 & 0 & 0 & 0 & 1\\
0 & 1 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0\\
\end{array}\right)\hbox{ and }
C=\left(\begin{array}{ccccc}
0 & 0 & 0 & 0 & 1\\
0 & 1 & 0 & 1 & 0\\
0 & 1 & 0 & 1 & 0\\
1 & 0 & 0 & 0 & 1\\
0 & 0 & 1 & 0 & 0\\
\end{array}\right)\hbox{,}
$$
\noindent $B$ and $C$ are reverse explosion equivalent because their
transposes are explosions of
$A=
\left(\begin{smallmatrix}
0 & 0 & 0 & 1\\
0 & 1 & 0 & 1\\
0 & 1 & 0 & 0\\
1 & 0 & 1 & 0
\end{smallmatrix}\right)
$
but they are not primitively equivalent. Unfortunately, it
requires a computer program to verify this (the primitive equivalence
class of $C$ has 183204 elements), and we could not find a more
manageable example.
\end{ex}
\begin{ex} Primitive equivalence does not imply reverse primitive
equivalence. It can be verified using Section 4 of \cite{wat-prim} that
$$
B=\left(\begin{array}{ccc}
1 & 1 & 1\\
1 & 0 & 0\\
1 & 0 & 0\\
\end{array}\right)\hbox{ and }
C=\left(\begin{array}{ccc}
1 & 1 & 0\\
1 & 0 & 1\\
0 & 1 & 0\\
\end{array}\right)\hbox{}
$$
\noindent are primitively equivalent, but not reverse primitively
equivalent.
\end{ex}
It is an open question whether or not primitive equivalence and explosion
equivalence together are enough to characterize graph groupoid
isomorphism. That is, given two graphs with isomorphic groupoids, are
they explosion-primitive equivalent? There
are difficulties on both ends. In particular, given two graphs, determining
whether or not their groupoids are isomorphic is highly non-trivial. Also,
we currently do not have an efficient algorithm for determining whether
or not two graphs are explosion-primitive equivalent. There are two
difficulties here. First, even in the $5 \times 5$ case, some matrices
have primitive equivalence classes with an unmanageable number of elements, so
the computer time required to check whether two matrices are primitively
equivalent becomes an issue. Second, we cannot say with
certainty that two matrices are not explosion equivalent. For example,
consider the matrices $A$ and $B$ from Section~\ref{sec-classification}. We
have checked that no explosion of $A$ to a $5 \times 5$ matrix is primitively
equivalent to any explosion of $B$ to a $5 \times 5$ matrix. However,
it may be possible, for example, that $A$ and $B$ can be exploded into $6
\times 6$ (or larger) matrices which are primitively equivalent.
It would be desirable to have more graph transformations that preserve the
isomorphism class of the groupoid or the $C^*$-algebra, or the Morita
equivalence class of the $C^*$-algebra, of the graph. Having more
operations would increase our chances of finding a canonical form.
\bibliographystyle{amsplain}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
The resonance model \citep[][]{KluzniakAbramowicz2000} explains the twin peak QPOs as being caused by a 3:2 non-linear resonance between two global modes of oscillations in accretion flow in strong gravity. The modes in resonance are often assumed to be the epicyclic modes. The \emph{orbital resonance model} \citep[see][]{ KluAbr:2003:aph} demonstrates that {\it fluid accretion flows} admit two linear quasi-incompressible modes of oscillations, radial and vertical, with corresponding eigenfrequencies equal to the radial and vertical epicyclic frequencies for free particles \citep{Ali-Gal:1981:GENRG2, Now-Leh:1998:TheoryBlackHoleAccretionDisks:}. According to the resonance hypothesis, the two modes in resonance have eigenfrequencies $\nu_{\rm r}$ (radial epicyclic frequency) and $\nu_{\rm v}$ (vertical epicyclic frequency $\nu_{\theta}$ or Keplerian frequency $\nu_{\rm K}$). Several resonances of this kind are possible and have been discussed (see, e.g., \citealt[]{AbrKlu:2004}).
Formulae for the Keplerian $\nu_{\mathrm{K}}$ and the epicyclic frequencies $\nu_{\rm r}$ and $\nu_{\theta}$ in the field of a Kerr black hole with mass $M$ and spin $a$ are well known, and have the general form
\begin{equation}
\label{eq:oneoverM:theory}
\nu=\frac{1}{2\pi}\left ({{GM_0}\over {r_G^{\,3}}}\right )^{1/2}f_\mathrm{i}(x,\,a)~~\doteq 32.3\left(\frac{M_0}{M_\odot}\right)^{-1}f_\mathrm{i}(x,\,a)\,\mathrm{kHz},\quad\mathrm{i}\in\{\mathrm{K},~\mathrm{r},~\theta\}
\end{equation}
where $f_\mathrm{i}(x,a)$ are functions of the dimensionless black hole spin $a$ and a dimensionless radial coordinate \mbox{$x\!=r/M$}.
For an $\mathrm{n}\!:\!\mathrm{m}$ orbital resonance, the dimensionless resonance radius $x_{\mathrm{n}:\mathrm{m}}$ is determined as a function of the spin $a$ by the equation
$
\mathrm{n}\nu_{\rm r}\!= \mathrm{m}\nu_{\rm v}~(\nu_\mathrm{v}\!=\nu_\theta\,~\mathrm{or}~\,\nu_\mathrm{K})
$
\footnote{Because of the properties of Kerr black hole spacetimes, \emph{any} relativistic model of black hole QPOs should be rather sensitive to the spin $a$; however, this sensitivity can be negligible on large scales of mass (\citealt[]{Abr-etal:2004:apj}).}.
Thus, from the observed frequencies and from the estimated mass, one can determine the relevant spin (\citealt[]{AbramowiczKluzniak2001,TAKS}). We summarize the spin estimates for the three microquasars in Table \ref{table:1}.
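As an illustration of how the entries of Table \ref{table:1} arise, the following Python sketch (our own; it assumes the standard Kerr forms of the Keplerian and epicyclic frequencies from the references above, the factor $c^3/(2\pi G M_\odot) \doteq 32.3\,$kHz of equation (\ref{eq:oneoverM:theory}), and the availability of SciPy) locates the 3:2 epicyclic resonance radius and the corresponding upper frequency for a given spin and mass.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def f2(x, a):
    """Squared dimensionless frequencies (f_K^2, f_theta^2, f_r^2);
    the physical frequency is nu = c^3/(2 pi G M) * f."""
    fK2 = 1.0 / (x**1.5 + a)**2
    fth2 = fK2 * (1 - 4*a/x**1.5 + 3*a**2/x**2)
    fr2 = fK2 * (1 - 6/x + 8*a/x**1.5 - 3*a**2/x**2)
    return fK2, fth2, fr2

def x32(a):
    """Radius of the 3:2 epicyclic resonance, 3 nu_r = 2 nu_theta."""
    return brentq(lambda x: 9*f2(x, a)[2] - 4*f2(x, a)[1], 2.0, 100.0)

def nu_upper(a, M):
    """Upper resonant frequency nu_theta(x_32) in Hz; M in solar masses."""
    return 32.3e3 / M * np.sqrt(f2(x32(a), a)[1])

print(x32(0.0))            # 10.8, the Schwarzschild 3:2 radius
print(nu_upper(0.9, 6.3))  # upper frequency for a = 0.9, M = 6.3 M_sun
\end{verbatim}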
\newlength{\sirkaA}\newlength{\sirkaB}\settowidth{\sirkaA}{+}\settowidth{\sirkaB}{-}\advance\sirkaA by -\sirkaB
\newcommand{$\hspace{\sirkaA}$}{$\hspace{\sirkaA}$}
\begin{table*}[t!]
\label{table:1}
\begin{center}
\caption[]{\label{Table3} Spin estimates from the resonance models for microquasars \citep[for details and other considered resonances see, e.g.,][]{TAKS, Tor:2005:ASN}. Spin intervals correspond to the 1$\sigma$ uncertainty in mass; the small additional error resulting from the uncertainty of the frequency measurement is up to $0.03$ for XTE 1550-564 ($0.01$ for GRO 1655-40, $0.05$ for GRS 1915+105).}
\begin{tabular}{ l l l l l }
\hline
~& \multicolumn{4}{c}{{{Interval of possible spin $a$ relevant for}}}\\
Model for &~~~~~1550--564 &~~~~~1655--40 &~~~~~1655--40$^*$ &~~~~~1915+105\\
\hline
\hline\\
3:2 [$\nu_{\theta},~\nu_r$] ~~&
+0.89~---~+0.99 &+0.96~---~+0.99 &+0.88~---~+0.93 &+0.69~---~+0.99\\
\hline
2:1 [$\nu_{\theta},~\nu_r$] ~~&+0.12~---~+0.42 &+0.31~---~+0.42 &+0.10~---~+0.25 & $\hspace{\sirkaA}$ -0.41~---~+0.44\\
\hline
3:1 [$\nu_{\theta},~\nu_r$] ~~& +0.32~---~+0.59 & +0.50~---~+0.59 &+0.31~---~+0.44 & $\hspace{\sirkaA}$-0.15~---~+0.61\\
\noalign{\smallskip}
\hline
\end{tabular}
\end{center}
$^{*}$ The two columns for GRO 1655--40 give the numbers following from two different mass analyses, \cite{Beer2002} vs. \cite{Greene2001}. Note that while the spin estimates from the 3:2 parametric resonance are similar in both cases ($a\approx 0.9$), for the other models the given mass range implies a large range of spins.
\end{table*}
\vspace{-.5cm}
\section{Comparison with the fits of spectral continua}
Except for one case, all the resonances considered in \cite{TAKS} are consistent with reasonable values of the black hole spin covering the range $a\in(0,~1)$. In particular, the 3:2 epicyclic parametric (internal) resonance model, supposed to be the most natural one in Einstein gravity \citep{Hor:2005:ASN:}, implies the spin $a\sim 0.9$.
The most recent results of the spectral fits correspond for GRO~1655--40 to the spin $a\in(0.65,~0.75)$, and for GRS~1915+105 to $a\!>0.98$ \citep[][]{McC-etal:2006:APJ:}. Obviously, the value for GRS~1915+105 is in agreement with the prediction of the 3:2 parametric epicyclic resonance model, but the same prediction for GRO~1655--40 does not match the spectral fitting. No particular resonance model considered so far can cover the spectral limits to the spin for both microquasars. It is perhaps interesting that the recently proposed 3:2 periastron precession resonance \citep{Bur:2005:RAG:} implies the spin of GRO~1655--40 to be $a\sim 0.7$. Nevertheless, the periastron precession resonance would require the spin of GRS~1915+105 to be $a<0.8$, which is in strong disagreement with the spectral fitting limit, $a\!>0.98$.
\vspace{-.5cm}
\section{Troubles with the spin: $1/M$ scaling}
In principle one cannot exclude the possibility of different mechanisms exciting the high frequency QPOs in different sources, but there are many indications that the mechanism is the same or similar \citep[e.g.,][]{Kli:2005:ASN:, Tor-etal:06}.
\citet{McClintockRemillard2003} found that the upper QPO frequency in microquasars scales well as
$\nu_\U\! =\!2.793 ( {M_0/M_{\odot}})^{-1}\, \mathrm{kHz}$ which is in good agreement with the 1$/M$ scaling of the first term in equation (\ref{eq:oneoverM:theory}).
On the other hand, the exact 1$/M$ scaling holds only for a fixed value of the spin $a$, as the functions $f_\mathrm{i}$ in equation (\ref{eq:oneoverM:theory}) are sensitive to the spin. The spectral limits to the spin for the two microquasars are \emph{very different}, $a\!\sim0.7$ vs. $a\!>0.98$, and, in addition, the functions $f_\mathrm{i}(a,x)$ are more sensitive to the value of the spin when it is close to $a\!=1$ \citep[e.g.,][]{tor-stu:05:AA}. Hence, if the spin values obtained from the spectral fits are correct, the observed high frequency QPOs do not show sensitivity to the spin $a$ under the assumption of a unified QPO model. This is a serious problem for any relativistic QPO model based on the orbital and epicyclic frequencies (\ref{eq:oneoverM:theory}).
\vspace{-.5cm}
\section{Requirement of a more realistic description}
It was found recently that the pressure effects may have a strong influence on the oscillation frequencies. \cite{Sra:2005:ASN:} and \cite{Bla-etal:06} studied properties of the radial and vertical epicyclic modes of slightly non-slender tori within Newtonian theory using the Paczy\'nski-Wiita potential, and found the epicyclic frequencies to decrease with increasing thickness of the torus. The same behaviour was found for the resonant radius where the frequencies are in a 3:2 ratio, which on the contrary implies an {\it increase} of the resonant frequencies. Considering the appropriate corrections to the frequencies in the Kerr metric, one can reestimate the values of the spin using the resonance model. If the results in the Kerr metric follow the same trend as those in the Paczy\'nski-Wiita case, the spin for some configurations could be {\it lower} than previously estimated.
\footnote{In the case of the 3:2 parametric resonance, the maximal realistic increase of the resonant frequency due to the pressure effects is about 15 percent \citep{Bla-etal:06}, which for GRO 1655-40 and the mass estimate by Beer \& Podsiadlowski would lower the spin down to $a\sim0.8$.}
\begin{acknowledgments}
This research is supported by the Czech grant MSM~4781305903.
\end{acknowledgments}
\vspace{-.5cm}
\section{introduction}
The spin-transport properties of Pt have been studied intensively. Pt exhibits efficient, reciprocal conversion of charge to spin currents through the spin Hall effect (SHE)\cite{Saitoh_2006,Mosendz_2010,Liu_2011,Wang_2014}. It is typically used as a detection layer for spin currents evaluated in novel configurations\cite{Uchida_2008,Ellsworth_2016,Zhou_2016}. Even so, consensus has not yet been reached on the experimental parameters which characterize its spin transport. The spin Hall angle, the spin diffusion length, and the spin mixing conductance of Pt at different interfaces differ by as much as an order of magnitude when evaluated by different techniques\cite{Kurt_2002,Vila_2007,Ando_2008,Liu_2011,Mosendz_2010,Azevedo_2011,Althammer_2013}.
Recently, Chen and Zhang \cite{Chen_2015,Kai_Chen_2015} (hereafter CZ) have proposed that interfacial spin-orbit coupling (SOC) is a missing ingredient which can bring the measurements into greater agreement with each other. In particular, measurements of spin-pumping-related damping report spin diffusion lengths which are much shorter than those estimated through other techniques\cite{Feng_2012,Rojas_S_nchez_2014}. The introduction of Rashba SOC at the FM/Pt interface leads to interfacial spin-memory loss, with discontinuous loss of spin current incident on the FM/Pt interface. The model suggests that the small saturation length of the damping enhancement reflects an interfacial discontinuity, while the inverse spin Hall effect (ISHE) measurements reflect the bulk absorption in the Pt layer\cite{Feng_2012,Rojas_S_nchez_2014}.
The CZ model predicts a strong anisotropy of the enhanced damping due to spin pumping, as measured in ferromagnetic resonance (FMR). The damping enhancement for time-averaged magnetization lying in the film plane ({\it pc}-FMR, or parallel condition) is predicted to be significantly larger than that for magnetization oriented normal to the film plane ({\it nc}-FMR, or normal condition). The predicted anisotropy can be as large as 30\%, with {\it pc}-FMR damping exceeding {\it nc}-FMR damping, as will be shown shortly.
In this paper, we have measured the anisotropy of the enhanced damping due to the addition of Pt in symmetric Pt/Ni$_{81}$Fe$_{19}$ (Py)/Pt structures. We find that the anisotropy is very weak, less than 5\%, and with the opposite sign from that predicted in \cite{Chen_2015}.
\section{theory}
We first quantify the CZ-model prediction for anisotropic damping due to the Rashba effect at the FM/Pt interface. In the theory, the spin-memory loss for spin current polarized perpendicular to the interfacial plane is always larger than that for spin current polarized in the interfacial plane. The pumped spin polarization $\bm{\sigma}=\bm{m} \times \dot{\bm{m}}$ is always perpendicular to the time-averaged or static magnetization $\langle\bm{m}\rangle_t \simeq \bm{m}$. For {\it nc}-FMR, the polarization $\bm{\sigma}$ of the pumped spin current is always in the interfacial plane, but for {\it pc}-FMR it has nearly equal in-plane and out-of-plane components. A greater damping enhancement is predicted in the {\it pc} condition than in the {\it nc} condition, $\Delta\alpha_{pc}>\Delta\alpha_{nc}$:
\begin{equation}
\Delta\alpha_{nc}=K\Big[\frac{1+4\eta\xi(t_{Pt})}{1+\xi(t_{Pt})}\Big]
\label{eqn1}
\end{equation}
\begin{equation}
\Delta\alpha_{pc}=K\Big[\frac{1+6\eta\xi(t_{Pt})}{1+\xi(t_{Pt})}+\frac{\eta}{2[1+\xi(t_{Pt})]^{2}}\Big]
\label{eqn2}
\end{equation}
\begin{equation}
\xi(t_{Pt})=\xi(\infty)\times\coth(t_{Pt}/\lambda_{sd})
\label{eqn3}
\end{equation}
where the constant of proportionality $K$ is the same for both conditions, and the dimensionless parameters $\eta$ and $\xi$ are always real and positive. The Rashba parameter
\begin{equation}
\eta=(\alpha_{R}k_{F}/E_{F})^{2}
\label{eqn4}
\end{equation}
is proportional to the square of the Rashba coefficient $\alpha_{R}$, defined as the strength of the Rashba potential, $V(\bm{r})=\alpha_{R}\delta(z)(\boldsymbol{\hat{k}}\times\boldsymbol{\hat{z}})\cdot\bm{\sigma}$, where $\delta(z)$ is a delta function localizing the effect to the interface at $z=0$ (film plane is {\it xy}), $k_{F}$ is the Fermi wavenumber, and $E_{F}$ is the Fermi energy. The backflow factor $\xi$ is a function of Pt layer thickness, where the backflow fraction at infinitely large Pt thickness is defined as $\epsilon=\xi(\infty)/[1+\xi(\infty)]$; $\epsilon=\textrm{0 (1)}$ refers to zero (complete) backflow of spin current across the interface. $\lambda_{sd}$ is the spin diffusion length in the Pt layer.
\begin{figure}
\includegraphics[width=\columnwidth]{fig1.png}
\caption{Frequency-dependent half-power FMR linewidth $\Delta H_{1/2}(\omega)$ of the reference sample Py(5 nm) (black) and symmetric trilayer samples Pt(t)/Py(5 nm)/Pt(t) (colored). (a) {\it pc}-FMR measurements. (b) {\it nc}-FMR measurements. Solid lines are linear fits to extract Gilbert damping $\alpha$. (Inset): inhomogeneous broadening $\Delta H_0$ in {\it pc}-FMR (blue) and {\it nc}-FMR (red).}
\label{fig1}
\end{figure}
To quantify the anisotropy of the damping, we define Q:
\begin{equation}
Q\equiv(\Delta\alpha_{pc}-\Delta\alpha_{nc})/\Delta\alpha_{nc}
\label{eqn5}
\end{equation}
as an {\it anisotropy factor}, the fractional difference between the enhanced damping in pc and nc conditions. Positive Q (Q\textgreater0) is predicted by the CZ model. A spin-memory loss factor $\delta$ of 0.9 $\pm$ 0.1, corresponding to nearly complete relaxation of spin current at the interface with Pt, was measured through current-perpendicular-to-plane magnetoresistance (CPP-GMR)\cite{Kurt_2002}. According to the theory\cite{Chen_2015,Kai_Chen_2015}, the spin-memory loss can be related to the Rashba parameter by $\delta=2\eta$, so we take $\eta\sim0.45$. The effect of variable $\eta<0.45$ will be shown in Figure \ref{fig3}. To evaluate the thickness-dependent backflow $\xi(t_{Pt})$, we assume $\lambda_{sd}^{Pt}=14$ nm, which is associated with the absorption of the spin current in the bulk of the Pt layer, as found from CPP-GMR measurements\cite{Kurt_2002} and cited in \cite{Chen_2015}. Note that this $\lambda_{sd}^{Pt}$ is longer than that sometimes used to fit FMR data\cite{Feng_2012,Rojas_S_nchez_2014}; Rashba interfacial coupling in the CZ model brings the onset thickness down. The calculated anisotropy factor Q should then be as large as 0.3, indicating that $\Delta\alpha_{pc}$ is 30\% greater than $\Delta\alpha_{nc}$ (see Results for details).
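For reference, Q can be evaluated directly from Equations \ref{eqn1}--\ref{eqn3} and \ref{eqn5}; the minimal Python sketch below does so for the parameter values above. The proportionality constant $K$ cancels in the ratio.
\begin{verbatim}
import numpy as np

def backflow(t_pt, eps=0.10, lam_sd=14.0):
    # xi(t_Pt) of Eq. (3); xi(infinity) follows from the
    # backflow fraction: eps = xi_inf / (1 + xi_inf)
    xi_inf = eps / (1.0 - eps)
    return xi_inf / np.tanh(t_pt / lam_sd)   # coth = 1/tanh

def anisotropy_Q(t_pt, eta=0.45, eps=0.10, lam_sd=14.0):
    # Q of Eq. (5) from the CZ expressions, Eqs. (1)-(2)
    xi = backflow(t_pt, eps, lam_sd)
    d_nc = (1 + 4*eta*xi) / (1 + xi)
    d_pc = (1 + 6*eta*xi) / (1 + xi) + eta / (2*(1 + xi)**2)
    return (d_pc - d_nc) / d_nc

for t in (1, 2, 3, 5, 10):                   # Pt thickness (nm)
    print(t, round(anisotropy_Q(t), 3))      # Q ~ 0.27-0.39
\end{verbatim}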
\begin{figure}
\includegraphics[width=\columnwidth]{fig2.png}
\caption{Pt thickness dependence of Gilbert damping $\alpha=\alpha(t_{Pt})$ in {\it pc}-FMR (blue) and {\it nc}-FMR (red). $\alpha_{0}$ refers to the reference sample ($t_{Pt}=0$). (Inset): Damping enhancement $\Delta \alpha(t_{Pt})=\alpha(t_{Pt})-\alpha_{0}$ due to the addition of Pt layers in {\it pc}-FMR (blue) and {\it nc}-FMR (red). Dashed lines refer to $\Delta \alpha_{nc}$ calculated using Equation \ref{eqn1}, assuming $\lambda_{sd}^{Pt}=14$ nm and $\epsilon=10\%$. The red dashed line ($\eta=0.15$) shows a curvature similar to the experimental data; the black dashed line ($\eta \geq 0.25$) shows a curvature with the opposite sign.}
\label{fig2}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{fig3.png}
\caption{Anisotropy factor Q for spin-pumping enhanced damping, defined in Equation \ref{eqn5}. Solid lines are calculations using the CZ theory\cite{Chen_2015}, Equations \ref{eqn1}--\ref{eqn3}, for variable Rashba parameter $0.01\leq\eta\leq0.45$. $\lambda_{sd}^{Pt}$ is set to 14 nm. The backflow fraction $\epsilon$ is set to 10\% in (a) and 40\% in (b). Black triangles, duplicated in (a) and (b), show the experimental values from Figure \ref{fig2}.}
\label{fig3}
\end{figure*}
\section{experiment}
In this paper, we present measurements of the anisotropy of damping in the symmetric Pt($t_{Pt}$)/Py(5 nm)/Pt($t_{Pt}$) system, where ``Py''=Ni$_{81}$Fe$_{19}$. Because the Py layer is much thicker than its spin coherence length\cite{GhostPRL}, we expect that the spin-pumping-related damping at the two Py/Pt interfaces will sum. The full deposited stack is Ta(5 nm)/Cu(5 nm)/Pt($t_{Pt}$)/Py(5 nm)/Pt($t_{Pt}$)/Al$_2$O$_3$(3 nm), $t_{Pt}=\textrm{1--10 nm}$, deposited via DC magnetron sputtering under computer control on ion-cleaned Si/SiO$_{2}$ substrates at ambient temperature. The deposition rates were 0.14 nm/s for Py and 0.07 nm/s for Pt. Heterostructures deposited identically, in the same deposition chamber, have been shown to exhibit both robust spin pumping effects, as measured through FMR linewidth\cite{Ghosh2011,Caminale_2016}, and robust Rashba effects (in Co/Pt), as measured through Kerr microscopy\cite{MironNM2010,MironNM2011}. The stack without Pt layers was also deposited as a reference sample. The films were characterized using variable-frequency FMR on a coplanar waveguide (CPW) with a center conductor width of 300 $\mu$m. The bias magnetic field was applied both in the film plane ({\it pc}) and perpendicular to the plane ({\it nc}), as previously shown in \cite{Yang_2016}. The {\it nc}-FMR measurements require precise alignment of the field with respect to the film normal; here, samples were aligned by rotation on two axes to maximize the resonance field at 3 GHz.
\section{results and analysis}
Figure \ref{fig1} shows the frequency-dependent half-power linewidth $\Delta H_{1/2}(\omega)$ in {\it pc}- and {\it nc}-FMR. The measurements were taken at frequencies from 3 GHz to a cut-off frequency above which the signal-to-noise ratio becomes too small for reliable measurement of linewidth. The cutoff ranged from 12--14 GHz for the samples with Pt (linewidth $\sim$ 200--300 G) to above 20 GHz for $t_{Pt}=0$. Solid lines show linear fits of the variable-frequency FMR linewidth $\Delta H_{1/2}=\Delta H_{0}+2\alpha\omega/\gamma$, where $\Delta H_{1/2}$ is the full-width at half-maximum, $\Delta H_{0}$ is the inhomogeneous broadening, $\alpha$ is the Gilbert damping, $\omega$ is the resonance frequency, and $\gamma$ is the gyromagnetic ratio. The fits show good linearity with frequency $\omega/2\pi$ for all experimental linewidths $\Delta H_{1/2}(\omega)$. The inset summarizes the inhomogeneous broadening $\Delta H_{0}$ in {\it pc}- and {\it nc}-FMR; its error bar is $\sim 2$ Oe.
In Figure \ref{fig2}, we plot the Pt thickness dependence of the damping parameters $\alpha(t_{Pt})$ extracted from the linear fits in Figure \ref{fig1}, for both {\it pc}-FMR and {\it nc}-FMR measurements. Standard deviation errors in the fits for $\alpha$ are $\sim 3\times10^{-4}$. The Gilbert damping $\alpha$ saturates quickly as a function of $t_{Pt}$ in both pc and nc conditions, with 90\% of the effect realized with Pt(3 nm). The inset shows the damping enhancement due to the addition of Pt layers, $\Delta\alpha=\alpha-\alpha_{0}$, normalized to the Gilbert damping $\alpha_{0}$ of the reference sample without Pt layers. The Pt thickness dependence of $\Delta\alpha$ matches our previous study on Py/Pt heterostructures\cite{Caminale_2016} reasonably well; the saturation value of $\Delta\alpha_{Pt/Py/Pt}$ is 1.7x larger than that measured for the single interface $\Delta\alpha_{Py/Pt}$\cite{Caminale_2016} (2x expected). The dashed lines in the inset refer to $\Delta \alpha_{nc}$ calculated using Equation \ref{eqn1} (assuming $\lambda_{sd}^{Pt}=14$ nm and $\epsilon=10\%$). $\eta=0.25$ marks a threshold in the Pt thickness dependence: when $\eta>0.25$, the curvature of $\Delta \alpha(t_{Pt})$ has the opposite sign to that observed in experiments, so $\eta=0.25$ is the maximum that can qualitatively reproduce the Pt thickness dependence of the damping.
As shown in Figure \ref{fig2} inset, the damping enhancement due to the addition of Pt layers is slightly larger in the {\it nc} geometry than in the {\it pc} geometry: $\Delta\alpha_{nc}>\Delta\alpha_{pc}$. This is opposite to the prediction of the model in \cite{Chen_2015}. The anisotropy factor $Q\equiv(\Delta\alpha_{pc}-\Delta\alpha_{nc})/\Delta\alpha_{nc}$ for the model (Q\textgreater0) and the experiment (Q\textless0) are shown together in Figure \ref{fig3} (a) and (b). The magnitude of Q for the experiment is also quite small, with -0.05\textless Q\textless0. This very weak anisotropy, or near isotropy, of the spin-pumping damping is contrary to the prediction in \cite{Chen_2015}, and is the central result of our paper.
The two panels (a) and (b), which present the same experimental data (triangles), consider different model parameters, corresponding to negligible backflow ($\epsilon=0.1$, panel {\it a}) and moderate backflow ($\epsilon=0.4$, panel {\it b}) for a range of Rashba couplings $0.01\le \eta \le0.45$. A spin diffusion length $\lambda_{sd}=14$ nm for Pt\cite{Kurt_2002} was assumed in all cases.
The choices of the backflow fraction, $\epsilon=0.1$ or $0.4$, and of the Pt spin diffusion length, $\lambda_{sd}=14$ nm, follow the CZ paper\cite{Chen_2015} to allow a direct evaluation of their theory. For good spin sinks like Pt, the backflow fraction is usually quite small. If $\epsilon=0$, there is no spin backflow; in this limit, $\Delta \alpha_{pc}$, $\Delta \alpha_{nc}$, and the Q factor are independent of Pt thickness.
In the case of a short spin diffusion length of Pt, e.g., $\lambda_{sd}=3$ nm, the anisotropy Q as a function of Pt thickness decreases more quickly for ultrathin Pt, closer to our experimental observations. However, the CZ theory requires a long spin diffusion length in order to reconcile different experiments, particularly CPP-GMR with spin pumping, so it is not relevant to evaluate the theory in this limit.
Leaving aside the question of the sign of Q, we can see that the observed absolute magnitude is lower than that predicted for $\eta=0.05$ with small backflow, and for $\eta=0.01$ with moderate backflow. According to ref.~\cite{Chen_2015}, a minimum level for the theory to describe a system with strong interfacial SOC is $\eta=0.3$.
\section{discussion}
Here, we discuss extrinsic effects which may result in a discrepancy between the CZ model (Q$\sim$+0.3) and our experimental result (-0.05\textless Q\textless0). A possible role of two-magnon scattering\cite{Arias1999,McMichael2004}, known to be an anisotropic contribution to the linewidth $\Delta H_{1/2}$, must be considered. Two-magnon scattering is present for {\it pc}-FMR and nearly absent for {\it nc}-FMR. This mechanism does not seem to play an important role in the results presented. It is difficult to locate a two-magnon scattering contribution to the linewidth in the pure Py film: Figure \ref{fig1} shows highly linear $\Delta H_{1/2}(\omega)$, without offset, over the full range to $\omega /2\pi=20$ GHz, thereby reflecting Gilbert-type damping. The damping for this film is much smaller than that added by the Pt layers. If the introduction of Pt adds some two-magnon linewidth, mistaken in the analysis for intrinsic Gilbert damping $\alpha$, this could only produce a measurement of Q\textgreater0, which was not observed.
One may also ask whether the samples are appropriate to test the theory. The first question regards sample quality. The Rashba Hamiltonian models a very abrupt interface. Samples deposited identically, in the same deposition chamber, have exhibited strong Rashba effects, so we expect the samples to be generally appropriate in terms of quality. Intermixing of Pt in Ni$_{81}$Fe$_{19}$ (Py)/Pt\cite{Golod2011} may play a greater role than it does in Co/Pt\cite{BERTERO1994173}, although defocused TEM images have shown fairly well-defined interfaces for our samples\cite{Bailey2012}.
A second question might be about the magnitude of the Rashba parameter $\eta$ in the materials systems of interest. Our observation of nearly isotropic damping is consistent with the theory, within experimental error and apart from the opposite sign, if the Rashba parameter $\eta$ and the backflow fraction $\epsilon$ are both very low. Ab-initio calculations for (epitaxial) Co/Pt in ref.~\cite{Grytsyuk2016} have indicated $\eta=\textrm{0.02--0.03}$, lower than the values of $\eta \sim$ 0.45 assumed in \cite{Chen_2015,Kai_Chen_2015} to treat interfacial spin-memory loss.
The origin of the small, negative Q observed here is unclear. A recent paper has reported that $\Delta \alpha_{pc}$ is smaller than $\Delta \alpha_{nc}$ in the YIG/Pt system via single-frequency, variable-angle measurements\cite{Zhou_2016}, which is contrary to the CZ model prediction as well. It is also possible that a few monolayers of Pt next to the Py/Pt interfaces are magnetized in the samples\cite{Caminale_2016}, and this may have an unknown effect on the sign, not taken into account in the theory.
\section{conclusions}
In summary, we have experimentally demonstrated that in Pt/Py/Pt trilayers the interfacial damping attributed to spin pumping is nearly isotropic, with an anisotropy between film-parallel and film-normal measurements of \textless5\%. The nearly isotropic character of the effect is more compatible with conventional descriptions of spin pumping than with the Rashba spin-memory loss model proposed in \cite{Chen_2015}.
\section{acknowledgements}
We acknowledge support from the US NSF-DMR-1411160 and the Nanosciences Foundation, Grenoble.
\section{Introduction}
In recent years, Implicit Neural Representations (INRs) have been proposed as continuous data representations for various tasks in computer vision. With INR, data is represented as a neural function that maps continuous coordinates to signals. For example, an image can be represented as a neural function that maps 2D coordinates to RGB values, a 3D scene can be represented as a neural radiance field (NeRF~\cite{mildenhall2020nerf}) that maps 3D locations with view directions to densities and RGB values.
Compared to discrete data representations such as pixels, voxels, and meshes, INRs do not require resolution-dependent quadratic or cubic storage. Their representation capacity does not depend on grid resolution but instead on the capacity of a neural network, which may capture the underlying data structure and reduce the redundancy in representation, therefore providing a compact yet powerful continuous data representation.
However, learning the neural functions of resolution-free INRs from given observations usually requires optimization with gradient descent steps, which has several challenges:
(i) Optimization can be slow if every INR is learned independently from a random initialization;
(ii) The learned INR does not generalize well to unseen coordinates if the given observations are sparse and no strong prior is shared.
From the perspective of efficiently building INRs, previous works~\cite{sitzmann2019siren} proposed to learn a latent space where each INR can be decoded from a single vector with a hypernetwork~\cite{Ha2017HyperNetworks}. However, a single vector may not have enough capacity to capture the fine details of a complex real-world image or 3D object: while these works show promising results in generative tasks~\cite{skorokhodov2021adversarial,anokhin2021image,chan2021pi}, they do not achieve high precision in reconstruction tasks~\cite{chan2021pi}. Single-vector modulated INRs are therefore mostly used for representing local tiles~\cite{mehta2021modulated} in reconstruction. Recent works~\cite{chen2021learning,saito2019pifu,yu2021pixelnerf} revisit grid-based discrete representations and define INRs over deep feature maps, where the capacity and storage are resolution-dependent and the decoding is bound to the feature maps, as the INRs rely on local features. Going beyond the limitation of resolution, our work is inspired by recent works~\cite{sitzmann2019metasdf,tancik2020meta} which explore a promising direction at the intersection of gradient-based meta-learning and INRs. Without grid-based representation, these works can efficiently and precisely infer the whole set of INR weights without the single-vector bottleneck. However, the computation of higher-order derivatives and a learned fixed initialization make these methods less flexible, and gradient descent, which involves sequential forward and backward passes, is still necessary in these works for obtaining INRs from given observations.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/teaser.pdf}
\caption{Implicit Neural Representation (INR) is a function that maps coordinates to signals. We propose to use Transformers as meta-learners for directly building the whole weights in INRs from given observations. Our method supports various types of INRs, such as continuous images and neural radiance fields.}
\label{fig:teaser}
\end{figure}
Motivated by a generalized formulation of the gradient-based meta-learning methods, we propose a formulation that uses Transformers~\cite{vaswani2017attention} as effective hypernetworks for INRs (Figure~\ref{fig:teaser}). Our general idea is to use Transformers to transfer the knowledge from image observations to INR weights. Specifically, we first convert the input observations to data tokens; then we view the weights of the INR as the set of column vectors in the weight matrices of its layers, and create one initialization token for each column vector. These initialization tokens are passed together with the data tokens into a Transformer, and the output tokens are mapped to their corresponding locations (according to the locations of the initialization tokens) as the weights of the INR.
We verify the effectiveness of our method for building INRs in both 2D and 3D domains, including image regression and view synthesis. We show that our approach can efficiently build INRs and outperform previous gradient-based meta-learning algorithms on reconstruction and synthesis tasks. Our further analysis shows qualitatively that the INRs built by the Transformer meta-learner may potentially exploit the data structures without explicit supervision.
To summarize, our contributions include:
\begin{itemize}
\item We propose a Transformer hypernetwork to infer the whole weights in an INR, which removes the single-vector bottleneck and does not rely on grid-based representation or gradient computation.
\item We draw connections between the Transformer hypernetwork and the gradient-based meta-learning for INRs.
\item Our analysis sheds light on the structures of the generated INRs.
\end{itemize}
\section{Related Work}
\textbf{Implicit neural representation.} Implicit neural representations (INRs) have been demonstrated as flexible and compact continuous data representations in recent works. A main branch of these works uses INRs for representing 3D objects or scenes, with wide applications including 3D reconstruction~\cite{deng2020nasa,genova2019learning,genova2020local,michalkiewicz2019implicit} and generation~\cite{schwarz2020graf,chan2021pi,devries2021unconstrained}. Typical examples of resolution-free INRs include DeepSDF~\cite{Park_2019_CVPR}, which represents 3D shapes as a field of signed distances, and Occupancy Networks~\cite{Mescheder_2019_CVPR} and IM-NET~\cite{Chen_2019_CVPR}, which represent 3D shapes with a binary classification neural network that classifies each 3D coordinate as being inside or outside the shape. NeRF and its follow-up works~\cite{mildenhall2020nerf,martin2021nerf,park2020deformable,liu2020neural} are proposed to represent a scene as a neural radiance field that maps each position to a density and a view-dependent RGB value, with differentiable volumetric rendering that allows optimizing the representation from 2D views. The idea of INR has also been adapted for representing 2D images in recent works~\cite{chen2021learning,skorokhodov2021adversarial,anokhin2021image,karras2021alias}, which allows decoding at arbitrary output resolutions. Several recent works observe that coordinate-based MLPs with ReLU activation may lack the capacity for representing fine details; solutions proposed to address this issue include replacing ReLU with the sine activation function~\cite{sitzmann2019siren} and using Fourier features of the input coordinates~\cite{tancik2020fourfeat}.
\textbf{Hypernetworks for INRs.} A hypernetwork~\cite{Ha2017HyperNetworks} $g$ generates the weights $\theta$ for another network $f_\theta$ from some input $z$, i.e. $\theta = g(z)$. Directly building an INR from given observations usually requires performing gradient descent steps, which is inefficient and does not generalize well with sparse observations. A common way to tackle this shortcoming is learning a latent space for INRs~\cite{Park_2019_CVPR,Mescheder_2019_CVPR,sitzmann2019srns,sitzmann2019siren}, where each INR corresponds to a latent vector that can be decoded by a hypernetwork. Since a single vector may have limited capacity for representing fine details (e.g. the lack of details in reconstructing a face image~\cite{sitzmann2019siren,chan2021pi}), many recent works~\cite{chen2021learning,genova2020local,chibane2020implicit,jiang2020local,peng2020convolutional,chabra2020deep,mehta2021modulated} address this issue by revisiting discrete representations and defining INRs with feature maps in a hybrid way, where the data still corresponds to a grid-based representation. Different from these hybrid methods, our goal is to obtain a hypernetwork that allows for building a resolution-free neural function (i.e. a global function instead of a grid-based representation).
\textbf{Meta-learning.} Learning to build a neural function from given observations is related to the topic of meta-learning, where a differentiable meta-learner is trained for inferring the weights of a neural network. Most previous works on meta-learning have focused on few-shot learning~\cite{vinyals2016matching,NIPS2017_cb8da676,Sachin2017,sung2018learning,mishra2018a} and reinforcement learning~\cite{finn2017model,fernando2018meta,jaderberg2019human}, where a meta-learner allows fast adaptation to new observations and better generalization with few samples. Gradient-based methods are a popular branch of meta-learning algorithms; typical examples include MAML~\cite{finn2017model}, Reptile~\cite{nichol2018first}, and their extensions~\cite{antoniou2018how,fallah2020convergence,Rajeswaran2019MetaLearningWI}. A recent paper provides a comprehensive survey on meta-learning algorithms~\cite{hospedales2020meta}.
While most previous works in meta-learning aim at building a neural function for processing the data, the recent rising topic of implicit neural representation connects neural functions and data representations, which extends the idea of meta-learning with new possibilities for building neural functions that represent the data. MetaSDF~\cite{sitzmann2019metasdf} adopts a gradient-based meta-learning algorithm for learning signed distance functions, which leads to much faster convergence than standard learning. Learned Init~\cite{tancik2020meta} generalizes this idea to wider classes of INRs and shows the effectiveness of using the meta-learned initialization as encoded prior. While these works have shown promising results, their methods only learn a fixed initialization and require test-time optimization. We show that it is possible to directly build the whole INR with a Transformer meta-learner and it is more flexible than a fixed initialization.
\textbf{Transformers.} Transformers~\cite{vaswani2017attention} were initially proposed for machine translation and have since become a state-of-the-art architecture used in various methods~\cite{devlin-etal-2019-bert,Radford2018ImprovingLU,Radford2019LanguageMA,brown2020language} in natural language processing. Recent works~\cite{dosovitskiy2021an,touvron2021training,liu2021Swin} also demonstrate the potential of Transformers for encoding visual data. In this work, we show promising results of using Transformers in meta-learning for directly inferring the whole weights of the neural function of an INR.
\section{Method}
\subsection{Problem Formulation}
\label{subsec:pf}
We are interested in the problem of recovering a signal $I$ from observations $O$. The signal is a function $I: X \rightarrow Y$ defined on a bounded domain, with $X\subseteq \mathbb{R}^c$ and $Y\subseteq \mathbb{R}^d$. For instance, an image can be represented as a function that maps 2D coordinates to 3D tuples of RGB values. A 3D object or scene can be represented as a neural radiance field (NeRF)~\cite{mildenhall2020nerf}, which maps 3D locations with view directions $v$ (normalized 3D vectors) to 4D tuples that describe the densities and RGB values.
In implicit neural representation, the signal $I$ is estimated and parameterized by a neural function $f_\theta$ with $\theta$ as its weights (learnable parameters). A typical example of $f_\theta$ is a multilayer perceptron (MLP). We consider a more general class of $f_\theta$ where its weights consist of a set of matrices
\begin{equation*}
\theta = \{W_i \mid W_i \in \mathbb{R}^{\textrm{in}_i\times \textrm{out}_i}\}_{i=0}^{m-1},
\end{equation*}
where the biases (if any) are merged into these matrices. Given the observations $O$, our goal is to obtain $\theta$ such that the neural function $f_\theta$ fits the signal $I$.
The observations $O$ form a set $O=\{T_i(I)\}_{i=0}^{|O|-1}$ with transform functions $T_i$. For example, to estimate a continuous image, each pixel $i$ in the given image can be approximately viewed as $T_i(I) = I(x_i)$, where $x_i$ is the center coordinate of pixel $i$ and $I(x_i)$ are the RGB values. To estimate a 3D object with NeRF, an input view provides each pixel $i$ with its corresponding rendering ray $r_i$, which can be represented as $T_i(I) = R(I, r_i)$, where $R$ is the function that renders the RGB values from ray $r_i$ in the radiance field $I$.
Given the observation set $O$, estimating $I$ with the INR $f_\theta$ can be addressed by minimizing the L2 loss
\begin{equation}
\label{eq:l_theta_o}
L(\theta; O) = \frac{1}{|O|} \sum_{T_i\in O} \|T_i(f_\theta) - T_i(I)\|_2^2.
\end{equation}
If we assume $T_i(f_\theta)$ is differentiable with respect to $\theta$, minimizing this loss with gradient descent steps is referred to as fitting an INR to given observations or learning the INR. The goal of a meta-learner is to efficiently find $\theta$ for given $O$ and improve the generalization of the neural function $f_\theta$.
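As a point of reference, the direct fitting baseline that a meta-learner replaces is plain gradient descent on the loss above. A minimal PyTorch sketch for the image-regression instance of Equation \ref{eq:l_theta_o} is given below; the architecture and hyperparameters are illustrative assumptions, not the configuration used in our experiments.
\begin{verbatim}
import torch
import torch.nn as nn

def fit_inr(coords, rgb, steps=2000, lr=1e-4):
    # coords: (N, 2) pixel-center coordinates; rgb: (N, 3) targets
    f = nn.Sequential(nn.Linear(2, 256), nn.ReLU(),
                      nn.Linear(256, 256), nn.ReLU(),
                      nn.Linear(256, 3))
    opt = torch.optim.Adam(f.parameters(), lr=lr)
    for _ in range(steps):
        loss = ((f(coords) - rgb) ** 2).mean()  # L(theta; O)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return f  # theta = f.state_dict()
\end{verbatim}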
\subsection{Motivating from gradient-based meta-learning}
\label{sec:conn_gradient}
In meta-learning, the goal is to train a meta-learner that infers the weights $\theta$ of a target network $f_\theta$ from given observations. In MAML~\cite{finn2017model}, the learnable component is an initialization $\theta_0$, and $\theta=\theta_n$ is inferred by updating $\theta_0$ for $n$ steps
\begin{equation}
\theta_{i+1} = \theta_i + (-\nabla_\theta \mathcal{L}(\theta; O)|_{\theta=\theta_i}),
\end{equation}
where $\mathcal{L}$ is the differentiable loss function computed with observations $O$. The update formula above defines a computation graph from $\theta_0$ to $\theta_n$, if the computation graph (with higher-order derivatives) is differentiable, the gradient for optimizing $\theta_n$ can be back-propagated to $\theta_0$ for training this meta-learner.
We consider a more general class of meta-learners, where its learnable components contain: (i) A learnable initialization $\theta_0$; (ii) A total number of update steps $n$; (iii) A step-specific learnable update rule $U_{\psi_i}$ (with $\psi_i$ as its parameters) that conditions on some provided data $D_i$:
\begin{equation}
\label{eq:residual_theta}
\theta_{i+1} = \theta_i + U_{\psi_i}(\theta_i; D_i).
\end{equation}
The meta-learning objective is applied to the final vector $\theta_n$, which is typically fitting the seen observations or generalizing to unseen observations.
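In code, this generalized meta-learner is simply an unrolled sequence of learned residual updates; gradient descent is recovered as the special case $U_{\psi_i}(\theta; D) = -\nabla_\theta \mathcal{L}(\theta; D)$. A minimal sketch (update rules and data left abstract):
\begin{verbatim}
def meta_infer(theta0, update_rules, data):
    # theta_{i+1} = theta_i + U_i(theta_i; D_i)
    # MAML corresponds to U_i(theta, D) = -grad L(theta; D)
    theta = theta0
    for U, D in zip(update_rules, data):
        theta = theta + U(theta, D)
    return theta
\end{verbatim}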
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/residual.pdf}
\caption{\textbf{Motivating from gradient-based meta-learners.} The residual link in the Transformer meta-learner shares a similar formulation as subtracting the gradients in gradient descent for updating the weights.}
\label{fig:residual}
\end{figure}
We observe that this formulation can be naturally instantiated with a Transformer architecture. In general, we propose to represent observations as a set of data tokens, which are passed into a Transformer encoder with a set of initialization tokens that are learnable parameters defined in addition, as shown in Figure~\ref{fig:residual} (b). The computation graph with the residual link can be written as
\begin{equation}
\label{eq:residual_varphi}
\varphi_{i+1} = \varphi_i + U_{\psi_i}(\varphi_i; d_i),
\end{equation}
where $d_i$ are the data tokens at layer $i$, $U_{\psi_i}$ is the function that describes how the output residual is conditioned on the $i$-th layer's input, i.e. the function composed of the attention layer and the feed-forward layer, $\varphi_0$ are the learnable initialization tokens, and the tokens $\varphi_i$ correspond to the target weights $\theta_i$.
\begin{figure}[t]
\centering
\begin{minipage}{\linewidth}
\includegraphics[width=\linewidth]{figures/method.pdf}
\end{minipage}
\caption{\textbf{Transformers as meta-learners.} We propose to use a Transformer encoder as the meta-learner that directly builds the whole weights of an INR from given observations. The observations are split into patches and mapped to data tokens by a fully connected (FC) layer. The INR weights are viewed as the set of column vectors in weight matrices. For each column vector, we create a corresponding initialization token at the input. The data tokens and the initialization tokens are passed together into the Transformer encoder. The weight tokens are generated at the output and are mapped to the column vectors in INR weights with layerwise FCs (denoted by FC$^*$).}
\label{fig:method}
\end{figure}
\subsection{Transformers as Meta-Learners}
\label{subsec:tm}
In this section, we introduce the details of our Transformer hypernetwork. We use Transformers to directly build the whole weights $\theta$ by transferring the knowledge from the encoded information of observations $O$. Our method is illustrated in Figure~\ref{fig:method}: in general, it represents the observations as data tokens and decodes them into weight tokens, where each weight token corresponds to some locations in the INR weights.
In practice, the observation set usually consists of images (possibly with given camera poses). We follow a similar strategy as in Vision Transformer~\cite{dosovitskiy2021an}, where the images are split into patches. The patches are flattened and then mapped by a fully connected (FC) layer to vectors with the input dimension of the Transformer. We denote these vectors as data tokens, i.e. the tokens that represent the observation data, which are the blue input tokens in Figure~\ref{fig:method}.
To decode the whole INR weights $\theta = \{W_i\}_{i=0}^{m-1}$, we view each weight matrix $W_i$ as a set of column vectors, so that $\theta$ can be represented as the union of the column vector sets. For each of these column vectors, we create an initialization token (a learnable vector parameter) correspondingly at the input of the Transformer. In Figure~\ref{fig:method}, they are illustrated as green tokens.
These initialization tokens and data tokens are passed together into the Transformer encoder, which jointly models: (i) building features of the observations through interactions in data tokens; (ii) transferring the knowledge of observations to the weights through interactions between data tokens and initialization tokens; (iii) the relation of different weights in INR through interactions in initialization tokens.
Finally, the output vectors at the positions of the input initialization tokens are denoted as weight tokens, which are shown in Figure~\ref{fig:method} as the tokens in orange color. To map them to the INR weights, since the dimensions of the column vectors in $W_i$ can differ for different $i$, we have $m$ independent FC layers, one for each $i\in \{0,\dots, m-1\}$, that map the weight tokens to their corresponding column vectors in $W_i$, which yields the whole INR weights $\theta = \{W_i\}_{i=0}^{m-1}$.
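A minimal PyTorch sketch of this architecture is given below. It illustrates the token layout rather than the exact implementation: module sizes, initialization, positional embeddings, and the INR forward pass are simplified assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class TransformerMetaLearner(nn.Module):
    # inr_shapes: [(in_i, out_i)] for the m weight matrices W_i;
    # each W_i contributes out_i column vectors of dimension in_i
    def __init__(self, patch_dim, dim, inr_shapes, depth=6, heads=12):
        super().__init__()
        self.to_data_token = nn.Linear(patch_dim, dim)
        self.n_cols = [out for (_, out) in inr_shapes]
        # one learnable initialization token per column vector
        self.init_tokens = nn.Parameter(torch.randn(sum(self.n_cols), dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        # FC*: one head per INR layer, mapping a token to a column of W_i
        self.fc_heads = nn.ModuleList(nn.Linear(dim, fan_in)
                                      for (fan_in, _) in inr_shapes)

    def forward(self, patches):                  # (B, N, patch_dim)
        data = self.to_data_token(patches)
        init = self.init_tokens.expand(patches.shape[0], -1, -1)
        out = self.encoder(torch.cat([data, init], dim=1))
        w_tokens = out[:, data.shape[1]:]        # outputs at init positions
        weights, k = [], 0
        for head, n in zip(self.fc_heads, self.n_cols):
            cols = head(w_tokens[:, k:k + n])    # (B, out_i, in_i)
            weights.append(cols.transpose(1, 2)) # W_i: (B, in_i, out_i)
            k += n
        return weights
\end{verbatim}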
To train this Transformer meta-learner, a loss is computed with regard to the INR weights $\theta$. Let $O$ denote the observations from which we generate $\theta$; for the optimization goal of the meta-learner, the loss can be defined as $L(\theta; O)$ in Equation~\ref{eq:l_theta_o}. In tasks that require improving the generalization of the INR $f_\theta$ (e.g. view synthesis from a single input image), we sample $O'\neq O$ from the training set and compute the loss $L(\theta; O')$ instead. $L(\theta; O')$ requires the estimated $f_\theta$ to generalize to unseen observations, which explicitly adds generalization of the INR as an objective.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/wgroup.pdf}
\caption{\textbf{Weight Grouping.} Columns in weight matrix $W$ are divided into groups, each group can be generated by a single vector. $\bar{w}_i$ are learnable vectors assigned for every column in $W$, which are independent of the input observations.}
\label{fig:wgroup}
\end{figure}
\subsection{Weight Grouping}
\label{subsec:wg}
Assigning a token to each column vector in the weight matrices might be inefficient when the size of $\theta$ is large. To improve the efficiency and scalability of our Transformer meta-learner, we present a weight grouping strategy that offers control over the trade-off between precision and cost.
The general idea is to divide the columns in a weight matrix into groups and assign a single token for each group, as illustrated in Figure~\ref{fig:wgroup}. Specifically, let $W\in \theta$ denotes a weight matrix that can be viewed as column vectors $W=[w_0 \dots w_{r-1}]$. For weight grouping with a group size of $k$, $W$ will be defined by a new set of column vectors $U=[u_0 \dots u_{r/k -1}]$ (assume $r$ is divisible by $k$). Specifically, $w_i$ is defined by $u_{\lfloor i/k\rfloor}$ with the formula
\begin{equation*}
w_i = \textrm{normalize}(u_{\lfloor i/k\rfloor} \cdot \bar{w_i}),
\end{equation*}
where normalize refers to L2 normalization, and $\bar{w}_i$ are learnable parameters assigned to every column $w_i$; note that they are independent of the given observations. With this formulation, $U$ will replace $W$ as the new weights for the Transformer meta-learner to build.
The weight grouping strategy roughly reduces the number of weight tokens by a factor of $k$, which makes it more efficient for the Transformer meta-learner to build the weights while maintaining the representation capacity of the inferred INR $f_\theta$. The $\bar{w}_i$ for every column vector $w_i\in W$ are learnable vectors that do not need to be generated by the Transformer, and they keep the column vectors within the same group different from each other.
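A minimal PyTorch sketch of the grouping step (shapes illustrative) expands the generated group vectors and the observation-independent vectors $\bar{w}_i$ into the full weight matrix:
\begin{verbatim}
import torch
import torch.nn.functional as F

def expand_groups(U, w_bar):
    # U: (r/k, d) group vectors generated by the meta-learner
    # w_bar: (r, d) learnable vectors, one per column of W
    k = w_bar.shape[0] // U.shape[0]
    u = U.repeat_interleave(k, dim=0)        # group vector per column
    cols = F.normalize(u * w_bar, dim=-1)    # w_i = normalize(u * w_bar_i)
    return cols.T                            # W: (d, r), columns are w_i
\end{verbatim}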
\section{Experiments}
\subsection{Image Regression}
\begin{figure}[t]
\centering
\begin{tabular}{cc|cc}
$f_\theta(x)$ & GT & $f_\theta(x)$ & GT \\
~~\includegraphics[width=.17\linewidth]{data/imgrec_1b_pred.png}~~ & ~~\includegraphics[width=.17\linewidth]{data/imgrec_1b_input.png}~~ &
~~\frame{\includegraphics[width=.17\linewidth]{data/imgrec_2b_pred.png}}~~ & ~~\frame{\includegraphics[width=.17\linewidth]{data/imgrec_2b_input.png}}~~ \\
~~\includegraphics[width=.17\linewidth]{data/imgrec_1_pred.png}~~ & ~~\includegraphics[width=.17\linewidth]{data/imgrec_1_input.png}~~ &
~~\frame{\includegraphics[width=.17\linewidth]{data/imgrec_2_pred.png}}~~ & ~~\frame{\includegraphics[width=.17\linewidth]{data/imgrec_2_input.png}}~~ \\
\end{tabular}
\caption{\textbf{Qualitative results of image regression.} Our method builds the weights of $f_\theta$ that fit the observations of the target image and recovers the details of real-world images. Examples are shown on CelebA (left) and Imagenette (right), which are face images and natural images of general objects.}
\label{fig:imgrec}
\vspace{-1em}
\end{figure}
Image regression is a basic task commonly used for evaluating the representation capacity of INRs in recent works~\cite{sitzmann2019siren,tancik2020meta}. In image regression, a target image $J$ is sampled from an image distribution $J\sim \mathcal{J}$. An INR $f_\theta$ is a neural network that takes as input a 2D coordinate in the image and outputs the RGB value. The goal is to infer the weights $\theta$ in INR $f_\theta$ for a given target image $J$ so that $f_\theta$ can reconstruct $J$ by outputting the RGB values at the center coordinates of pixels in $J$. Unlike previous works that perform gradient descent steps to optimize the INR weights for given observations, our goal is to use a Transformer to directly generate the INR that can fit the pixel values in the target image without test-time optimization.
\textbf{Setup.} We follow the datasets of real-world images used in recent work~\cite{tancik2020meta}. \textit{CelebA}~\cite{liu2015deep} is a large-scale dataset of face images. It contains about 202K images of celebrities, which are split into 162K, 20K, and 20K images as training, validation, and test sets. \textit{Imagenette}~\cite{howard2020imagenette} is a dataset of common objects. It is a subset of 10 classes chosen from the 1K classes in ImageNet~\cite{deng2009imagenet}, which contains about 9K images for training and 4K images for testing.
\textbf{Input encoding.} To apply the Transformer meta-learner to the task of image regression, we need to encode the given target image into a set of tokens as the Transformer's data tokens. To achieve this, we follow the practice in Vision Transformer (ViT)~\cite{dosovitskiy2021an} of splitting the input image into patches. Specifically, the input image is represented by a set of patches of shape $P\times P$, which are converted to flattened vectors $\{p_i\}_{i=0}^{n_p-1}$ with dimension $P\times P\times 3$ for RGB images. Each patch $p_i$ is assigned a learnable positional embedding $e_i$. The $i$-th data token is obtained as $\textrm{FC}(p_i + e_i)$ with a FC layer.
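The patch-to-token mapping can be sketched as follows (PyTorch; resolution and patch size follow the implementation details below). Note that, unlike ViT, the positional embedding here is added to the flattened patch before the shared FC layer:
\begin{verbatim}
import torch
import torch.nn as nn

P = 9                                 # patch size; 180/9 = 20, so 400 patches
unfold = nn.Unfold(kernel_size=P, stride=P)

def to_data_tokens(img, pos_emb, fc):
    # img: (B, 3, 180, 180); pos_emb: (400, 3*P*P); fc: nn.Linear(3*P*P, dim)
    patches = unfold(img).transpose(1, 2)   # (B, 400, 3*P*P) flat patches
    return fc(patches + pos_emb)            # FC(p_i + e_i)
\end{verbatim}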
\textbf{Implementation details.} On the Imagenette dataset, we apply RandomCrop data augmentation for training our Transformer meta-learner. For both the CelebA and Imagenette datasets, the resolution of target images is $178\times 178$, which follows prior practice. We apply a zero-padding of 1 to get the input resolution $180\times 180$, and split the image with patch size $P=9$. For the INR, we follow the same 5-layer MLP structure as in prior work~\cite{tancik2020meta}, which has a hidden dimension of 256. The number of groups in weight grouping is 64 by default, for a good balance between performance and efficiency. The Transformer follows a similar structure as ViT-Base~\cite{dosovitskiy2021an}, but we reduce the number of layers by half to 6 layers for efficiency. The networks are trained end-to-end with Adam~\cite{kingma2014adam} with a learning rate of $1\cdot 10^{-4}$, which decays once by a factor of 10 when the loss plateaus.
\textbf{Qualitative results.} We first show qualitative results in Figure~\ref{fig:imgrec}. We observe that the Transformer meta-learners are surprisingly effective for building INRs of images with high precision, and can even recover the fine details in complex real-world images. For example, the left example from CelebA shows that our inferred INR $f_\theta$ can successfully reconstruct various details in a face image, including the teeth, the lighting effect, and even the background patterns, which are not a part of the face. From the right example, from the Imagenette dataset, we observe that our inferred INR can recover the digital text on the object with high fidelity. While it is observed in prior work~\cite{sitzmann2019siren} that learning a latent space of vectors and decoding INRs by a hypernetwork cannot recover the details in a face image, we show that an INR with precise information can be directly built by a Transformer without any gradient computation.
\begin{table}[t]
\centering
\begin{tabular}{ccc}
\toprule
& ~~~~~~~~~~CelebA~~~~~~~~~~ & ~~~~Imagenette~~~~ \\
\midrule
Learned Init~\cite{tancik2020meta} & 30.37 & 27.07 \\
Ours & \textbf{31.96} & \textbf{29.01} \\
\bottomrule
\end{tabular}
\caption{\textbf{Quantitative results of image regression (PSNR).} Learned Init is a gradient-based meta-learning algorithm that adapts to an image with a few gradient steps.}
\label{tab:imgrec}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{cccc}
$G=1$ & $G=4$ & $G=16$ & $G=64$ \\
~{\includegraphics[width=.17\linewidth]{data/abl_group_g1.png}}~ & ~{\includegraphics[width=.17\linewidth]{data/abl_group_g4.png}}~ &
~{\includegraphics[width=.17\linewidth]{data/abl_group_g16.png}}~ & ~{\includegraphics[width=.17\linewidth]{data/abl_group_g64.png}}~
\end{tabular}
\begin{tabular}{c|cccc}
\toprule
Num of groups ($G$) & 1 & 4 & 16 & 64 \\
\midrule
PSNR & ~~~~25.63~~~~ & ~~~~27.89~~~~ & ~~~~29.93~~~~ & ~~~~\textbf{31.96}~~~~ \\
\bottomrule
\end{tabular}
\caption{\textbf{Ablations on the number of weight groups.} The PSNR is evaluated on the CelebA dataset. Having more groups ($G$) in weight grouping makes the output INR more flexible and helps represent the details (the yellow box in the shown example).}
\label{tab:abl_groups}
\vspace{-1em}
\end{table}
\textbf{Quantitative results.} In Table~\ref{tab:imgrec}, we show quantitative results of our Transformer meta-learner and compare its performance with the gradient-based meta-learning algorithm Learned Init proposed in prior work. Learned Init meta-learns an initialization that can be quickly adapted to target images within a few gradient steps. On both real-world image datasets, we observe that our method achieves a PSNR of around 30 for image regression, and our method, without any gradient computation, outperforms the prior gradient-based meta-learning. The gradient steps involve repeated forward and backward passes through the whole INR sequentially, while ours directly builds the INR by forwarding the information through a shallow Transformer. In summary, our method provides a precise yet efficient hypernetwork-based way of converting image pixels to a global neural function as their underlying INR.
\textbf{Ablations on the number of weight groups.} To verify that the Transformer meta-learner learns to build a complex INR, we show by experiments that the number of groups in weight grouping is not redundant. The qualitative and quantitative results are both shown in Table~\ref{tab:abl_groups}. We observe that by increasing the number of groups $G$ from 1 to 64, the recovered details of the target image are visibly improved, and the PSNR consistently improves by large margins. The results demonstrate the effectiveness of the weight grouping strategy, and they indicate that the Transformer meta-learner can learn the complex relations between different weights in the INR, so that it can effectively build a large set of weights in a structured way to achieve high precision.
\subsection{View Synthesis}
View synthesis aims at generating a novel view of a 3D object from several given input views. The neural radiance field (NeRF)~\cite{mildenhall2020nerf} has been recently proposed to tackle this task by representing the object as an INR that maps a 3D coordinate and a viewing direction to a density and an RGB value. With volumetric rendering, the generated views of NeRF are differentiable with respect to the INR weights. View synthesis can then be achieved by first fitting an INR to the given input views, then rendering the INR from novel views. The goal of a meta-learner is to infer the INR from the given input views efficiently, and to improve its generalization so that view synthesis can be achieved with fewer input views.
\textbf{Setup.} We perform view synthesis on objects from ShapeNet~\cite{chang2015shapenet} dataset. We follow prior work~\cite{tancik2020meta} which considers 3 categories: chairs, cars, and lamps. For each category, the objects are split into two sets for training and test, where for each object 25 views (with known camera pose) are provided for training. During testing, a random input view is sampled for evaluating the performance of novel view synthesis.
\textbf{Input encoding.} For each input view image, given the known camera pose, we first compute the ray emitted from every pixel for rendering. The emitted ray at each pixel can be represented as a 3D starting point and a 3D direction vector (normalized as a unit vector). We concatenate this information with the RGB channels of the original image at every pixel, which yields an extended image with 9 channels. The extended image contains all the information about an input view; therefore, it can then be split into patches and mapped to data tokens in the Transformer meta-learner. This representation naturally generalizes to multiple input views. Since the information of a single view is represented by a set of patches, when multiple input views are available, their data tokens can simply be merged as one set representing all the observation information for passing into the Transformer.
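A minimal PyTorch sketch of this 9-channel encoding is given below. The camera conventions (pixel-center offsets, axis signs) are illustrative assumptions and must match the dataset's pose format in practice:
\begin{verbatim}
import torch

def extended_image(img, K, c2w):
    # img: (3, H, W) RGB; K: (3, 3) intrinsics; c2w: (3, 4) pose
    H, W = img.shape[1:]
    j, i = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    dirs = torch.stack([(i + 0.5 - K[0, 2]) / K[0, 0],
                        (j + 0.5 - K[1, 2]) / K[1, 1],
                        torch.ones(H, W)], dim=0)         # camera frame
    dirs = torch.einsum("ab,bhw->ahw", c2w[:, :3], dirs)  # world frame
    dirs = dirs / dirs.norm(dim=0, keepdim=True)          # unit directions
    origins = c2w[:, 3].view(3, 1, 1).expand(3, H, W)     # ray start points
    return torch.cat([img, origins, dirs], dim=0)         # (9, H, W)
\end{verbatim}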
\textbf{Adaptive sampling.} To improve the training stability, we propose an adaptive sampling strategy for the first training epoch. Specifically, when we sample the pixel locations for computing the reconstruction loss, we ensure that half of them are sampled from the foreground of the image. This is implemented by selecting non-white pixels, since the background in ShapeNet images is white. We found the training process to be stable with this simple sampling strategy.
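A minimal sketch of this sampling step (PyTorch; the white-background threshold is an illustrative assumption):
\begin{verbatim}
import torch

def sample_pixels(img, n):
    # img: (3, H, W) in [0, 1]; returns n flat pixel indices,
    # half drawn from the foreground (non-white pixels), half uniform
    flat = img.reshape(3, -1)
    fg = (flat < 0.999).any(dim=0).nonzero(as_tuple=True)[0]
    pick_fg = fg[torch.randint(len(fg), (n // 2,))]
    pick_any = torch.randint(flat.shape[1], (n - n // 2,))
    return torch.cat([pick_fg, pick_any])
\end{verbatim}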
\textbf{Implementation details.} In ShapeNet, input views are provided at a resolution of $128\times 128$. We split input views with patch size $8$ for the Transformer input. We use NeRF as the INR representation; it follows the architecture in \cite{tancik2020meta}, which consists of 6 layers with a hidden dimension of 256, and for simplicity does not use separate ``coarse'' and ``fine'' models. The architecture of the Transformer and the optimizer are the same as in the image regression experiments.
\begin{figure}[t]
\begin{subtable}{.3\linewidth}
\centering
\begin{tabular}{@{}c|c@{}c@{}c@{}}
\multicolumn{1}{c}{Input} & GT & w/o T. & w/ T. \\
\includegraphics[width=.21\linewidth]{data/1view/chairs_input_1.png} & \includegraphics[width=.21\linewidth]{data/1view/chairs_target_1.png} & \includegraphics[width=.21\linewidth]{data/1view/chairs_wopred_1.png} & \includegraphics[width=.21\linewidth]{data/1view/chairs_wpred_1.png} \\
\includegraphics[width=.21\linewidth]{data/1view/cars_input_1.png} & \includegraphics[width=.21\linewidth]{data/1view/cars_target_1.png} & \includegraphics[width=.21\linewidth]{data/1view/cars_wopred_1.png} & \includegraphics[width=.21\linewidth]{data/1view/cars_wpred_1.png} \\
\includegraphics[width=.21\linewidth]{data/1view/lamps_input_2.png} & \includegraphics[width=.21\linewidth]{data/1view/lamps_target_2.png} & \includegraphics[width=.21\linewidth]{data/1view/lamps_wopred_2.png} & \includegraphics[width=.21\linewidth]{data/1view/lamps_wpred_2.png}
\end{tabular}
\end{subtable}
\begin{subtable}{.3\linewidth}
\centering
\begin{tabular}{@{}c|c@{}c@{}c@{}}
\multicolumn{1}{c}{Input} & GT & w/o T. & w/ T. \\
\includegraphics[width=.21\linewidth]{data/1view/chairs_input_3.png} & \includegraphics[width=.21\linewidth]{data/1view/chairs_target_3.png} & \includegraphics[width=.21\linewidth]{data/1view/chairs_wopred_3.png} & \includegraphics[width=.21\linewidth]{data/1view/chairs_wpred_3.png} \\
\includegraphics[width=.21\linewidth]{data/1view/cars_input_4.png} & \includegraphics[width=.21\linewidth]{data/1view/cars_target_4.png} & \includegraphics[width=.21\linewidth]{data/1view/cars_wopred_4.png} & \includegraphics[width=.21\linewidth]{data/1view/cars_wpred_4.png} \\
\includegraphics[width=.21\linewidth]{data/1view/lamps_input_4.png} & \includegraphics[width=.21\linewidth]{data/1view/lamps_target_4.png} & \includegraphics[width=.21\linewidth]{data/1view/lamps_wopred_4.png} & \includegraphics[width=.21\linewidth]{data/1view/lamps_wpred_4.png}
\end{tabular}
\end{subtable}
\begin{subtable}{.375\linewidth}
\centering
\begin{tabular}{@{}c@{}c|c@{}c@{}c@{}}
\multicolumn{2}{c}{Input} & GT & w/o T. & w/ T. \\
\includegraphics[width=.168\linewidth]{data/2view/chairs_input1_0.png} & \includegraphics[width=.168\linewidth]{data/2view/chairs_input2_0.png} & \includegraphics[width=.168\linewidth]{data/2view/chairs_target_0.png} & \includegraphics[width=.168\linewidth]{data/2view/chairs_wopred_0.png} & \includegraphics[width=.168\linewidth]{data/2view/chairs_wpred_0.png} \\
\includegraphics[width=.168\linewidth]{data/2view/cars_input1_0.png} & \includegraphics[width=.168\linewidth]{data/2view/cars_input2_0.png} & \includegraphics[width=.168\linewidth]{data/2view/cars_target_0.png} & \includegraphics[width=.168\linewidth]{data/2view/cars_wopred_0.png} & \includegraphics[width=.168\linewidth]{data/2view/cars_wpred_0.png} \\ \includegraphics[width=.168\linewidth]{data/2view/lamps_input1_0.png} & \includegraphics[width=.168\linewidth]{data/2view/lamps_input2_0.png} & \includegraphics[width=.168\linewidth]{data/2view/lamps_target_0.png} & \includegraphics[width=.168\linewidth]{data/2view/lamps_wopred_0.png} & \includegraphics[width=.168\linewidth]{data/2view/lamps_wpred_0.png}
\end{tabular}
\end{subtable}
\caption{\textbf{View synthesis with the Transformer meta-learner on ShapeNet.} The rows show the chairs, cars, and lamps categories. ``w/o T.'' denotes the results of using the Transformer to infer the INR weights without test-time optimization. ``w/ T.'' performs a few test-time optimization steps on the generated INR with the sparse input views, which further helps to reconstruct the fine details in the input views. The corresponding quantitative results are shown in Table~\ref{tab:viewsyn_tto}.}
\label{fig:viewsyn_tto_visual}
\end{figure}
\begin{table}[t]
\centering
\begin{tabular}{cccc}
\toprule
& ~~~~Chairs~~~~ & ~~~~Cars~~~~ & ~~~~Lamps~~~~ \\
\midrule
NeRF~\cite{mildenhall2020nerf} (Standard~\cite{tancik2020meta}) & 12.49 & 11.45 & 15.47 \\
Matched~\cite{tancik2020meta} & 16.40 & 22.39 & 20.79 \\
Shuffled~\cite{tancik2020meta} & 10.76 & 11.30 & 13.88 \\
\midrule
Learned Init~\cite{tancik2020meta} & 18.85 & 22.80 & 22.35 \\
Ours & \textbf{19.66} & \textbf{23.78} & \textbf{22.76} \\
\bottomrule
\end{tabular}
\caption{\textbf{Comparison of building INR for single image view synthesis (PSNR).} The compared methods are baselines and the gradient-based meta-learning algorithm in prior work. Ours does not perform test-time optimization.}
\vspace{-1em}
\label{tab:cmp_viewsyn}
\end{table}
\begin{table}[t]
\centering
\begin{tabular}{ccccc}
\toprule
& \multicolumn{2}{c}{1-view} & \multicolumn{2}{c}{2-view} \\
& ~~~~~~~w/o T.~~~~~~~ & ~~~~~~~w/ T.~~~~~~~ & ~~~~w/o T.~~~~ & ~~~~w/ T.~~~~ \\
\midrule
~~Chairs~~ & 19.66 & \textbf{20.56} & 21.10 & \textbf{23.59} \\
Cars & 23.78 & \textbf{24.73} & 25.45 & \textbf{27.13} \\
Lamps & 22.76 & \textbf{24.71} & 23.11 & \textbf{27.01} \\
\bottomrule
\end{tabular}
\caption{\textbf{Effect of test-time optimization and more input views for view synthesis (PSNR).} We observe that our method for view synthesis can benefit from test-time optimization and more views.}
\vspace{-1em}
\label{tab:viewsyn_tto}
\end{table}
\textbf{Results.} We first compare our method to the prior gradient-based meta-learning algorithm for building INRs for single-image view synthesis; the results are shown in Table~\ref{tab:cmp_viewsyn}. Standard, Matched, and Shuffled are the baselines trained from different initializations in the prior work~\cite{tancik2020meta}. Specifically, Standard denotes a random initialization, Matched is an initialization learned from scratch that matches the output of the meta-learned initialization, and Shuffled permutes the weights of the meta-learned initialization. We observe that our method outperforms both the baselines and the gradient-based meta-learning algorithm for inferring the weights of an INR.
Our method can also naturally benefit from test-time optimization and additional input views. The qualitative and quantitative results are shown in Figure~\ref{fig:viewsyn_tto_visual} and Table~\ref{tab:viewsyn_tto}. We observe that the Transformer meta-learner can effectively build the INR of a 3D object from sparse input views. Since our method builds the whole INR, we can further perform test-time optimization on the INR with the given input views, just as in the original NeRF training; a minimal sketch of this loop is given below. For efficiency, our test-time optimization uses only 100 gradient steps, which further helps to reconstruct fine details in the input views. Since the Transformer takes a set as input, it can gather information from multiple input views, and we observe that performance improves in the setting with more input views.
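For concreteness, the following PyTorch-style sketch illustrates one way such a test-time optimization loop could look. It is a minimal sketch under our assumptions, not the exact implementation: \texttt{build\_inr\_from\_weights} and \texttt{render\_view} are hypothetical placeholders for constructing an INR from the predicted weight tokens and for volume-rendering a view from it, and each element of \texttt{input\_views} is assumed to carry an image and a camera pose.
\begin{verbatim}
import torch

def test_time_optimize(meta_learner, input_views, num_steps=100, lr=1e-4):
    # One forward pass of the Transformer meta-learner yields all INR
    # weights; this already gives the "w/o T." result.
    inr = build_inr_from_weights(meta_learner(input_views))
    optimizer = torch.optim.Adam(inr.parameters(), lr=lr)
    for _ in range(num_steps):
        optimizer.zero_grad()
        loss = 0.0
        # Photometric loss on the given sparse input views only,
        # just as in standard NeRF training.
        for view in input_views:
            pred = render_view(inr, view.camera_pose)
            loss = loss + torch.mean((pred - view.image) ** 2)
        loss.backward()
        optimizer.step()
    return inr  # the "w/ T." result
\end{verbatim}
Because the sketch optimizes the full set of INR parameters rather than a modulation vector, it is directly compatible with any standard NeRF training loop.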
\section{Does the INR Exploit Data Structures?}
\label{sec:does_inr}
A key potential advantage of INRs is that their representation capacity does not depend on grid resolution, but instead on the capacity of the neural network, which allows them to exploit the underlying structure in data and reduce representation redundancy. To explore whether such structure is modeled in INRs, we visualize the attention weights at the last layer from the weight tokens to the data tokens. Intuitively, since each data token corresponds to a patch of the original image, these attention weights indicate, for weight columns in different layers, which parts of the original image they mostly depend on.
We reshape the attention weights into the 2D grid of patches and bilinearly up-sample the grid to a mask with the same resolution as the input image. We then mask the input image so that the parts receiving higher attention remain visible; visualization results on the CelebA dataset are shown in Figure~\ref{fig:vis_attn}, and a short sketch of the procedure follows below. We observe that weight columns in different layers attend to structured parts: for example, some weights roughly attend to the nose and mouth in layer 1, the forehead in layer 2, and the whole face in layer 3. These observations suggest that the generated INRs may capture the structure of the data, unlike grid-based discrete representations, where every entry independently represents a pixel and the data structure is not exploited.
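As a minimal sketch (assuming \texttt{attn} holds the last-layer attention from one weight token over the data tokens, one scalar per image patch), the masking procedure can be written as follows; the function name and argument layout are illustrative, not our exact code.
\begin{verbatim}
import torch
import torch.nn.functional as F

def attention_mask(attn, grid_hw, image):
    # attn: (num_patches,) attention of one weight token over data tokens
    # grid_hw: (gh, gw) patch-grid layout; image: (3, H, W) input image
    gh, gw = grid_hw
    mask = attn.reshape(1, 1, gh, gw)          # back to the 2D patch grid
    mask = F.interpolate(mask, size=image.shape[-2:],
                         mode='bilinear', align_corners=False)
    mask = mask / mask.max()                   # normalize for display
    return image * mask.squeeze(0)             # attended parts stay visible
\end{verbatim}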
\begin{figure}[t]
\centering
\begin{tabular}{c|cccc}
Image & Layer 1 & Layer 1 & Layer 2 & Layer 3 \\
\includegraphics[width=.16\linewidth]{data/vis_attn/149_input.png} & \includegraphics[width=.16\linewidth]{data/vis_attn/149_0a.png} & \includegraphics[width=.16\linewidth]{data/vis_attn/149_0b.png} & \includegraphics[width=.16\linewidth]{data/vis_attn/149_1.png} & \includegraphics[width=.16\linewidth]{data/vis_attn/149_2.png} \\
\includegraphics[width=.16\linewidth]{data/vis_attn/150_input.png} & \includegraphics[width=.16\linewidth]{data/vis_attn/150_0a.png} & \includegraphics[width=.16\linewidth]{data/vis_attn/150_0b.png} & \includegraphics[width=.16\linewidth]{data/vis_attn/150_1.png} & \includegraphics[width=.16\linewidth]{data/vis_attn/150_2.png}
\end{tabular}
\caption{\textbf{Attention masks from weight tokens to data tokens.} Representative examples are selected from tokens for different INR layers. The attention map shows which image regions the corresponding INR weights attend to.}
\vspace{-1em}
\label{fig:vis_attn}
\end{figure}
\section{Conclusion}
In this work, we proposed a simple and general formulation that uses Transformers as meta-learners for building the neural functions of INRs, which opens up a promising direction. While most prior hypernetworks for INRs are based on single-vector modulation, and high-precision reconstruction of a global INR function was mostly achieved by gradient-based meta-learning, our Transformer hypernetwork can efficiently build an INR in one forward pass without any gradient steps, and we observed that it outperforms previous gradient-based meta-learning algorithms for building INRs on the tasks of image regression and view synthesis. Although gradient information is not necessary for our model, our method builds the weights of a standard INR, so it can also be flexibly combined with any INR pipeline that involves test-time optimization.
Our method draws connections between Transformer hypernetworks and gradient-based meta-learning algorithms, and our further analysis sheds light on the generated INRs: we observed that an INR, which represents data as a global function, may capture underlying structure without any explicit supervision. Understanding and utilizing these encoded structures is a promising direction for future work.
~
\textbf{Acknowledgement.} This work was supported, in part, by grants from DARPA LwLL, NSF CCF-2112665 (TILOS), NSF 1730158 CI-New: Cognitive Hardware and Software Ecosystem Community Infrastructure (CHASE-CI), NSF ACI-1541349 CC*DNI Pacific Research Platform, and gifts from Meta, Google, Qualcomm and Picsart.
\bibliographystyle{splncs04}